Bernard Chazelle is a French-American computer scientist. He is the Eugene Higgins Professor of Computer Science at Princeton University. Much of his work is in computational geometry, where he is known for his study of algorithms, such as linear-time triangulation of a simple polygon, as well as for major complexity results, such as lower bound techniques based on discrepancy theory. He is also known for his invention of the soft heap data structure and the most asymptotically efficient known algorithm for finding minimum spanning trees. Chazelle was born in Clamart, the son of Marie-Claire and Jean Chazelle. He grew up in Paris, where he received his bachelor's and master's degrees in applied mathematics at the École des mines de Paris in 1977. At the age of 21, he attended Yale University in the United States, where he received his PhD in computer science in 1980 under the supervision of David P. Dobkin. He went on to hold research positions at institutions such as Carnegie Mellon, Brown, NEC, Xerox PARC, the Institute for Advanced Study, and the Paris institutions École normale supérieure, École polytechnique, and the Collège de France.
He is a fellow of the ACM, the American Academy of Arts and Sciences, the John Simon Guggenheim Memorial Foundation, and NEC, as well as a member of the European Academy of Sciences. He has written essays about music and politics. Chazelle is married to Celia Chazelle and is the father of the director Damien Chazelle, the youngest person in history to win an Academy Award for Best Director, and of Anna Chazelle, an entertainer. The Discrepancy Method: Randomness and Complexity. Cambridge University Press. 2000. ISBN 978-0-521-00357-5. Bernard Chazelle at Princeton University
Gustave Choquet was a French mathematician. Choquet was born in Nord. His contributions include work in functional analysis, potential theory, and measure theory. He is known for creating the Choquet integral and the theory of capacities. He did postgraduate work at the École Normale Supérieure in Paris. He was a professor at the University of Paris from 1940 to 1984 and a professor at the École Polytechnique from 1960 to 1969. His honours and awards included membership of the Académie des Sciences and the rank of Officier of the Légion d'Honneur. His students include Haïm Brezis, Gilles Godefroy, Nassif Ghoussoub, Michel L. Lapidus, and Michel Talagrand. He was married to the mathematician and mathematical physicist Yvonne Choquet-Bruhat. He died in Lyon in 2006. See also: Capacity of a set, Choquet game, Choquet integral, Choquet theory. Choquet, Gustave, "La naissance de la théorie des capacités: réflexion sur une expérience personnelle", Comptes rendus de l'Académie des sciences, Série générale, La Vie des sciences, 3: 385–397, MR 0867115, Zbl 0607.01017, available from Gallica.
A historical account of the development of the theory of capacities by the founder of the theory and one of its main contributors; an English translation of the title reads "The birth of capacity theory: reflections on a personal experience". A biography by the Académie des Sciences. A short biography. A commemorative section in No. 111 of the Gazette des Mathématiciens of the Société Mathématique de France. Gustave Choquet at the Mathematics Genealogy Project
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, and task parallelism. Parallelism has long been employed in high-performance computing, but it has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. Parallel computing is closely related to concurrent computing; the two are frequently used together, and often conflated, though they are distinct: it is possible to have parallelism without concurrency, and concurrency without parallelism. In parallel computing, a computational task is broken down into several, often many, similar sub-tasks that can be processed independently and whose results are combined upon completion.
In contrast, in concurrent computing, the various processes often do not address related tasks. Parallel computers can be classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law. Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is implemented as a serial stream of instructions; these instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time; after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements to solve a problem; this is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. Historically, parallel computing was used for scientific computing and the simulation of scientific problems in the natural and engineering sciences, such as meteorology.
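The bound given by Amdahl's law can be illustrated with a short sketch (the function name and the example parameters here are illustrative, not taken from the original text):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: the maximum speed-up of a program when a fraction
    p of its runtime is parallelizable across n processors is
    1 / ((1 - p) + p / n)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# If 90% of the work parallelizes, 10 processors give only about 5.3x,
# and no number of processors can ever exceed 1 / (1 - 0.9) = 10x.
```

The takeaway is that even a small serial fraction caps the achievable speed-up, which is why reducing the serial portion of a program matters as much as adding processors.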
This led to the design of parallel software, as well as high performance computing. Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction. Holding everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction; an increase in frequency thus decreases runtime for all compute-bound programs. However, the power consumption P of a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle, V is the voltage, and F is the processor frequency. Increases in frequency therefore increase the amount of power used in a processor. Increasing processor power consumption led to Intel's May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. To deal with the problem of power consumption and overheating, the major central processing unit manufacturers started to produce power-efficient processors with multiple cores.
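The power equation above can be checked numerically; the numbers below are illustrative, not taken from any particular processor:

```python
def dynamic_power(capacitance, voltage, frequency):
    """Dynamic power dissipation of a switching chip: P = C * V^2 * F."""
    return capacitance * voltage ** 2 * frequency

# Doubling the frequency doubles power, but power scales with the
# *square* of the voltage, so halving V cuts power to a quarter.
base = dynamic_power(1e-9, 1.2, 2e9)   # hypothetical 2 GHz chip
```

This is why multi-core designs won out: two cores at a lower frequency and voltage can deliver more aggregate throughput per watt than one core pushed to a higher frequency.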
The core is the computing unit of the processor, and in multi-core processors each core is independent and can access the same memory concurrently. Multi-core processors have brought parallel computing to desktop computers; parallelisation of serial programs has thus become a mainstream programming task. In 2012 quad-core processors became standard for desktop computers, while servers had 10- and 12-core processors. From Moore's law it can be predicted that the number of cores per processor will double every 18–24 months; this could mean that after 2020 a typical processor will have dozens or hundreds of cores. An operating system can ensure that different tasks and user programs are run in parallel on the available cores. However, for a serial software program to take full advantage of the multi-core architecture, the programmer needs to restructure and parallelise the code. A speed-up of application software runtime will no longer be achieved through frequency scaling; instead, programmers will need to parallelise their software code to take advantage of the increasing number of cores.
Moravia is a historical region in the Czech Republic and one of the historical Czech lands, together with Bohemia and Czech Silesia. The medieval and early modern Margraviate of Moravia was a crown land of the Lands of the Bohemian Crown, an imperial state of the Holy Roman Empire, later a crown land of the Austrian Empire, and one of the 17 crown lands of the Cisleithanian part of the Austro-Hungarian Empire from 1867 to 1918. During the early 20th century, Moravia was one of the five lands of Czechoslovakia, from 1918 to 1928. Moravia has an area of over 22,000 km2 and about 3 million inhabitants, roughly 2/7 (about 30%) of the whole Czech Republic; statistics from 1921 state that the whole area of Moravia, including the enclaves in Silesia, covered 22,623.41 km2. The people are named Moravians, a subgroup of Czechs. The land takes its name from the Morava river, its major stream, which rises in the northern tip of the region and flows southward to the opposite end. Moravia's largest city and historical capital is Brno.
Before being sacked by the Swedish army during the Thirty Years' War, Olomouc served as another capital. Though abolished by an administrative reform in 1949, Moravia is still acknowledged as a distinct land within the Czech Republic. Moravian people are aware of their Moravian identity, and there is some rivalry between them and the Czechs from Bohemia. The region and former margraviate of Moravia, Morava in Czech, is named after its principal river, the Morava. It is theorized that the river's name is derived from Proto-Indo-European *mori, "waters", or indeed any word denoting water or a marsh. The German name for Moravia is Mähren, again from the river's German name, March. This might hint at a different etymology, as "march" is a term used in medieval times for an outlying territory, a border, or a frontier. Moravia occupies most of the eastern part of the Czech Republic. Moravian territory is largely delimited, in fact, by the Morava river basin, bounded by mountains in the west and in the east, where all the rivers rise.
Moravia occupies an exceptional position in Central Europe. All the highlands in the west and east of this part of Europe run west-east and therefore form a kind of filter, making north-south movement more difficult. Only Moravia, with the depression of the westernmost Outer Subcarpathia, 14–40 kilometers wide, between the Bohemian Massif and the Outer Western Carpathians, provides a comfortable connection between the Danubian and Polish regions. This area is thus of great importance for the possible migration routes of large mammals, both for the periodically recurring seasonal migrations triggered by climatic oscillations in prehistory and later, when permanent settlement started. Moravia borders Bohemia in the west, Lower Austria in the south, Slovakia in the southeast, Poland for a short stretch in the north, and Czech Silesia in the northeast. Its natural boundary is formed by the Sudetes mountains in the north, the Carpathians in the east, and the Bohemian-Moravian Highlands in the west.
The Thaya river meanders along the border with Austria, and the tripoint of Moravia, Austria, and Slovakia is at the confluence of the Thaya and Morava rivers. The northeast border with Silesia runs along the Moravice and Ostravice rivers. Between 1782 and 1850, Moravia also included a small portion of the former province of Silesia, the Austrian Silesia. Today Moravia includes the South Moravian Region, the Zlín Region, the vast majority of the Olomouc Region, the southeastern half of the Vysočina Region, and parts of the Moravian-Silesian and South Bohemian regions. Geologically, Moravia covers a transitional area between the Bohemian Massif and the Carpathians, and between the Danube basin and the North European Plain. Its core geomorphological features are three wide valleys, namely the Dyje-Svratka Valley, the Upper Morava Valley, and the Lower Morava Valley. The first two form the westernmost part of the Outer Subcarpathia, the last the northernmost part of the Vienna Basin; the valleys surround the low range of the Central Moravian Carpathians.
The highest mountains of Moravia are situated on its northern border in Hrubý Jeseník, whose highest peak is Praděd. The second highest is the massif of Králický Sněžník; the third highest are the Moravian-Silesian Beskids in the east, with Smrk, and to the south of them the Javorníky. The White Carpathians along the southeastern border rise up to 970 m at Velká Javořina. The spacious but moderate Bohemian-Moravian Highlands in the west reach 837 m at Javořice. The fluvial system of Moravia is cohesive, as the region's border is similar to the watershed of the Morava river; thus the entire area is drained by a single stream. The Morava's biggest tributaries are the Thaya and the Bečva.
Jaroslav Nešetřil is a Czech mathematician working at Charles University in Prague. His research areas include combinatorics, graph theory, partially ordered sets, and computer science. Nešetřil received his Ph.D. from Charles University in 1973 under the supervision of Aleš Pultr and Gert Sabidussi. He is the author of more than 300 publications. Since 2006 he has been chairman of the Committee of Mathematics of the Czech Republic. Jaroslav Nešetřil is editor-in-chief of Computer Science Review and of INTEGERS: the Electronic Journal of Combinatorial Number Theory, and honorary editor of the Electronic Journal of Graph Theory and Applications. Since 2008 he has been a member of the Advisory Board of the Academia Sinica. He was awarded the state prize for a collection of papers in Ramsey theory. The book Sparsity - Graphs and Algorithms, which he co-authored with Patrice Ossona de Mendez, was included in ACM Computing Reviews' list of Notable Books and Articles of 2012. Nešetřil has been a corresponding member of the German Academy of Sciences since 1996 and was declared Doctor Honoris Causa of the University of Alaska in 2002.
He was declared Doctor Honoris Causa of the University of Bordeaux 1 in 2009. He received the Medal of Merit of the Czech Republic in 2010 and the Gold Medal of the Faculty of Mathematics and Physics, Charles University, in 2011. In 2012 he was elected to the Academia Europaea, and in 2013 he was elected an honorary member of the Hungarian Academy of Sciences. He was an invited speaker of the European Congress of Mathematics in Amsterdam in 2008, and an invited speaker in the Combinatorics session of the International Congress of Mathematicians in Hyderabad in 2010. In 2018, on the occasion of the 670th anniversary of the establishment of Charles University, Nešetřil received from the rector of Charles University the Donatio Universitatis Carolinae prize "for his contribution to mathematics and for his leading role in establishing a world-renowned group in discrete mathematics at Charles University". Hell, Pavol. Graphs and Homomorphisms. Oxford University Press. ISBN 0-19-852817-5. Matoušek, Jiří. Invitation to Discrete Mathematics.
Oxford University Press. ISBN 0-19-850207-9. Matoušek, Jiří. Diskrete Mathematik: Eine Entdeckungsreise. Springer. ISBN 3-540-42386-9. Matoušek, Jiří. Introduction aux mathématiques discrètes. Springer. ISBN 228720010X. Nešetřil, Jaroslav. Sparsity - Graphs and Algorithms. Springer. ISBN 978-3-642-27874-7. Nešetřil, Jaroslav. Mathematics of Ramsey Theory. Springer. ISBN 0-387-18191-1. Official website
In numerical analysis, hill climbing is a mathematical optimization technique which belongs to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found. For example, hill climbing can be applied to the travelling salesman problem: it is easy to find an initial solution that visits all the cities, but it will likely be poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited; eventually, a much shorter route is likely to be obtained. Hill climbing finds optimal solutions for convex problems; for other problems it will find only local optima, which are not necessarily the best possible solution out of all possible solutions. Examples of algorithms that solve convex problems by hill-climbing include the simplex algorithm for linear programming and binary search.
To attempt to avoid getting stuck in local optima, one could use restarts, or more complex schemes based on iterations, on memory, or on memory-less stochastic modifications. The relative simplicity of the algorithm makes it a popular first choice amongst optimizing algorithms; it is used widely in artificial intelligence for reaching a goal state from a starting node. Different choices for next nodes and starting nodes are used in related algorithms. Although more advanced algorithms such as simulated annealing or tabu search may give better results, in some situations hill climbing works just as well. Hill climbing can often produce a better result than other algorithms when the amount of time available to perform a search is limited, such as with real-time systems, so long as a small number of increments typically converges on a good solution. At the other extreme, bubble sort can be viewed as a hill climbing algorithm, yet this approach is far from efficient for even modest N, as the number of exchanges required grows quadratically.
Hill climbing is an anytime algorithm: it can return a valid solution even if it is interrupted at any time before it ends. Hill climbing attempts to maximize a target function f, where x is a vector of continuous and/or discrete values. At each iteration, hill climbing will adjust a single element in x and determine whether the change improves the value of f. Any change that improves f is accepted, and the process continues until no change can be found to improve the value of f; x is then said to be "locally optimal". In discrete vector spaces, each possible value for x may be visualized as a vertex in a graph. Hill climbing will follow the graph from vertex to vertex, always locally increasing the value of f, until a local maximum x_m is reached. In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill climbing all successors are compared and the one closest to the solution is chosen. Both forms fail if there is no closer node, which may happen if there are local maxima in the search space which are not solutions.
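The loop just described — adjust one element of x, keep any change that improves f, stop when no change helps — can be sketched as follows (a minimal illustration; the function name, fixed step size, and iteration cap are assumptions, not from the original text):

```python
def hill_climb(f, x, step=1.0, max_iters=1000):
    """Simple hill climbing: adjust one element of x at a time,
    accept the first change that improves f, and stop when no
    single-element change improves f (a local optimum)."""
    x = list(x)
    for _ in range(max_iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                candidate = list(x)
                candidate[i] += delta
                if f(candidate) > f(x):
                    x = candidate          # accept the improving move
                    improved = True
                    break
            if improved:
                break
        if not improved:                   # no move improves f:
            break                          # x is locally optimal
    return x
```

For a concave function such as f(x) = -(x0 - 3)^2 - (x1 + 1)^2, starting from [0, 0] the loop walks one coordinate at a time until it reaches the maximizer [3, -1].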
Steepest ascent hill climbing is similar to best-first search, which tries all possible extensions of the current path instead of only one. Stochastic hill climbing does not examine all neighbors before deciding how to move. Rather, it selects a neighbor at random and decides whether to move to that neighbor or to examine another. Coordinate descent does a line search along one coordinate direction at the current point in each iteration; some versions of coordinate descent randomly pick a different coordinate direction each iteration. Random-restart hill climbing is a meta-algorithm built on top of the hill climbing algorithm; it is also known as shotgun hill climbing. It iteratively does hill-climbing, each time with a random initial condition x_0; the best x_m is kept: if a new run of hill climbing produces a better x_m than the stored state, it replaces the stored state. Random-restart hill climbing is a surprisingly effective algorithm in many cases; it turns out that it is often better to spend CPU time exploring the space than carefully optimizing from a single initial condition.
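Random-restart ("shotgun") hill climbing can be sketched as below; the search range, step size, restart count, and the steepest-ascent inner climb are illustrative assumptions, not prescribed by the text:

```python
import random

def random_restart_hill_climb(f, dim, n_restarts=20, step=1.0, max_iters=100):
    """Shotgun hill climbing: run hill climbing from several random
    initial conditions x_0 and keep the best local optimum x_m found."""
    def climb(x):
        for _ in range(max_iters):
            # steepest ascent: examine every single-element change
            neighbors = [x[:i] + [x[i] + d] + x[i + 1:]
                         for i in range(dim) for d in (step, -step)]
            best = max(neighbors, key=f)
            if f(best) <= f(x):
                return x                    # local optimum reached
            x = best
        return x
    best_x = None
    for _ in range(n_restarts):
        x0 = [random.uniform(-10, 10) for _ in range(dim)]  # random x_0
        xm = climb(x0)
        if best_x is None or f(xm) > f(best_x):
            best_x = xm                     # keep the best x_m seen so far
    return best_x
```

Each restart only costs one more local search, so for landscapes with many poor local maxima the restarts buy a much better chance of landing near the global optimum.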
Hill climbing will not necessarily find the global maximum, but may instead converge on a local maximum.
Otakar Borůvka was a Czech mathematician best known today for his work in graph theory, long before it was an established mathematical discipline. Borůvka was born in a town in Moravia, the son of a school headmaster. He attended the grammar school in Uherské Hradiště beginning in 1910. In 1916, influenced by the ongoing World War I, he moved to the military school in Hranice, and then enrolled in the military technical academy in Mödling near Vienna. When the war ended, Borůvka returned to Uherské Hradiště, finished his studies at the Gymnasium there in 1918, and became a student at the Imperial Czech Technical University of Franz Joseph in Brno, studying civil engineering. In 1920, when Masaryk University opened in Brno, Borůvka began taking courses there. He became an assistant to Mathias Lerch at Masaryk in 1921, but Lerch died in 1922. At Čech's suggestion, Borůvka visited Élie Cartan in Paris from 1926 to 1927. He earned his habilitation from Masaryk University in 1927 and became a docent there in 1928.
He continued to travel abroad through the late 1920s and early 1930s, to Cartan in Paris again as well as to Wilhelm Blaschke in Hamburg. He was promoted to assistant professor at Masaryk in 1934, given a chair in 1940, and made an ordinary professor in 1946. In 1965, he founded the new journal Archivum Mathematicum, and in 1969 he became a founding member of the Institute of Mathematics of the Czechoslovak Academy of Sciences, splitting his time between the Institute and his professorship at Masaryk. The problem of designing efficient electric distribution networks had been suggested to Borůvka by his friend Jindřich Saxel, an employee of an electrical utility, during World War I. In his 1926 paper O jistém problému minimálním ("On a certain minimal problem"), Borůvka solved this problem by modeling it mathematically as a minimum spanning tree problem, and described the first known algorithm for finding the minimum spanning tree of a metric space. Now called Borůvka's algorithm, his method works by adding a connection between each subtree of the minimum spanning tree found so far and its nearest neighboring subtree.
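The round-based idea — every component of the forest built so far attaches itself to its nearest neighboring component — can be sketched as follows (a minimal illustration on an explicit weighted edge list; the function names and the union-find helper are assumptions for this sketch, not taken from Borůvka's paper):

```python
def boruvka_mst(n, edges):
    """Borůvka's algorithm: in each round, find the cheapest edge
    leaving every component and add all of them, merging components,
    until one component remains.
    n: number of vertices (0..n-1); edges: list of (weight, u, v)."""
    parent = list(range(n))
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, components = [], n
    while components > 1:
        cheapest = [None] * n          # cheapest outgoing edge per component root
        for e in edges:
            w, u, v = e
            ru, rv = find(u), find(v)
            if ru == rv:
                continue               # edge stays inside one component
            if cheapest[ru] is None or w < cheapest[ru][0]:
                cheapest[ru] = e
            if cheapest[rv] is None or w < cheapest[rv][0]:
                cheapest[rv] = e
        progressed = False
        for e in cheapest:
            if e is None:
                continue
            w, u, v = e
            ru, rv = find(u), find(v)
            if ru != rv:               # re-check: a merge this round may apply
                parent[ru] = rv        # merge the two components
                mst.append(e)
                components -= 1
                progressed = True
        if not progressed:             # graph is disconnected
            break
    return mst
```

Because the number of components at least halves in each round, the loop runs O(log n) times, which is also what makes the algorithm attractive for the distributed and parallel settings mentioned below.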
The same algorithm has been rediscovered repeatedly. It is more suitable for distributed and parallel computation than many other minimum spanning tree algorithms, can achieve linear time complexity on planar graphs and, more generally, in minor-closed graph families, and plays a central role in the randomized linear-time algorithm of Karger, Klein & Tarjan. From 1924 to 1935, Borůvka's primary interest was in differential geometry; his work in this area concerned analytic correspondences between projective planes, the normal curvature of high-dimensional surfaces, and the Frenet formulas for curves in high-dimensional spaces. Beginning in the 1930s, Borůvka's interests shifted to abstract algebra, in particular the theory of groups. He was one of the first to study a generalization of groups, called by him "groupoids" but now more commonly referred to as magmas. A textbook by him on groups and groupoids, published in Czech in 1944, went through several expansions and translations, including an English edition in 1976. Following the war, Borůvka shifted gears again, from algebra to the theory of differential equations.
He published several research papers on this subject, as well as a monograph on second-order differential equations, published in 1971. Borůvka became a corresponding member of the Czechoslovak Academy of Sciences at its creation in 1953 and an ordinary member in 1965. In 1969, Comenius University in Bratislava gave him an honorary doctorate, and in 1994 he received a second honorary doctorate from Masaryk University in Brno. He was given medals by the Free University of Brussels, the University of Liège, Jagiellonian University, Comenius University, Palacký University of Olomouc, Jan Evangelista Purkyně University in Ústí nad Labem, the German Academy of Sciences at Berlin, the Academy of Sciences of the USSR, and the Czechoslovak Academy of Sciences. Borůvka, Czech Digital Mathematics Library