A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic. The algorithm uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of random bits. Formally, the algorithm's performance will be a random variable determined by the random bits. One has to distinguish between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms) or which may fail to produce a result, either by signaling a failure or by failing to terminate. In the second case, with random performance and random output, the term "algorithm" for a procedure is somewhat questionable: in the case of random output, it is no longer formally effective. However, in some cases, probabilistic algorithms are the only practical means of solving a problem. In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits.
As a motivating example, consider the problem of finding an 'a' in an array of n elements. Input: an array of n ≥ 2 elements, in which half are 'a's and the other half are 'b's. Output: the index of an 'a' in the array. We give one Las Vegas algorithm and one Monte Carlo algorithm. Las Vegas algorithm: repeatedly pick a random index until an 'a' is found. This algorithm succeeds with probability 1; the number of iterations varies and can be arbitrarily large, but the expected number of iterations is lim_{n→∞} ∑_{i=1}^{n} i/2^i = 2. Since this is constant, the expected run time over many calls is Θ(1). Monte Carlo algorithm: pick a random index at most k times; if an 'a' is found, the algorithm succeeds, else the algorithm fails. After k iterations, the probability of finding an 'a' is 1 − (1/2)^k. This algorithm does not guarantee success, but the run time is bounded: the number of iterations is always at most k. Taking k to be constant, the run time is Θ(1). Randomized algorithms are useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm, such as in the Prisoner's dilemma.
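The two algorithms above can be sketched as follows; this is a minimal illustration, and the function names are chosen for this sketch rather than taken from any standard source:

```python
import random

def find_a_las_vegas(array):
    """Las Vegas: always returns the index of an 'a'; only the number
    of iterations is random (expected about 2 when half the entries
    are 'a')."""
    while True:
        i = random.randrange(len(array))
        if array[i] == 'a':
            return i

def find_a_monte_carlo(array, k):
    """Monte Carlo: at most k random probes; returns None (failure)
    with probability (1/2)**k when half the entries are 'a'."""
    for _ in range(k):
        i = random.randrange(len(array))
        if array[i] == 'a':
            return i
    return None
```

Note that the Las Vegas version never returns a wrong answer but has no worst-case bound on iterations, while the Monte Carlo version always stops after k probes but may fail.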
It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing. In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable; the Monte Carlo algorithm is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter k, but it allows a small probability of error. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly until a correct answer is obtained.
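The Monte Carlo-to-Las Vegas conversion can be sketched generically. The helper below is illustrative (the names are not from any library), and it assumes the verifier is cheap relative to the algorithm itself:

```python
import random

def monte_carlo_to_las_vegas(monte_carlo, is_correct):
    """Rerun a Monte Carlo algorithm until an efficient verifier
    accepts its answer; the result is a Las Vegas algorithm whose
    output is always correct but whose running time is random."""
    while True:
        answer = monte_carlo()
        if is_correct(answer):
            return answer

# Toy usage: a "Monte Carlo" coin flip, accepted only when it is 1.
flip = lambda: random.randrange(2)
result = monte_carlo_to_las_vegas(flip, lambda x: x == 1)
```

The expected number of reruns is the reciprocal of the Monte Carlo algorithm's success probability, so the conversion preserves efficiency whenever that probability is bounded away from zero.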
Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, the class of decision problems for which there is an efficient randomized algorithm which recognizes NO-instances with absolute certainty and recognizes YES-instances with probability at least 1/2. The complement class for RP is co-RP. Problem classes having algorithms with polynomial expected running time whose output is always correct are said to be in ZPP. The class of problems for which both YES- and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms. Historically, the first randomized algorithm was a method developed by Michael O. Rabin for the closest pair problem in computational geometry. The study of randomized algorithms was spurred by the 1977 discovery of a randomized primality test by Robert M. Solovay and Volker Strassen.
Soon afterwards Michael O. Rabin demonstrated that Miller's 1976 primality test can be turned into a randomized algorithm. At that time, no practical deterministic algorithm for primality was known. The Miller–Rabin primality test relies on a binary relation between two positive integers k and n that can be expressed by saying that k "is a witness to the compositeness of" n. It can be shown that if there is a witness to the compositeness of n, then n is indeed composite, and that if n is composite, at least three-quarters of the natural numbers less than n are witnesses to its compositeness.
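A sketch of the Miller–Rabin test along these lines follows; the parameter k is the number of independent random candidate witnesses tried, and 4**(−k) is the standard error bound for this test:

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin test: False means n is certainly composite;
    True means n is prime with error probability at most 4**(-k)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)  # random candidate witness
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness to the compositeness of n
    return True
```

This is a Monte Carlo algorithm with one-sided error: a "composite" verdict is always correct, while a "prime" verdict can be wrong only if every sampled candidate failed to be a witness.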
Vitali Davidovich Milman is a mathematician specializing in analysis. He is a professor at Tel Aviv University. In the past he was President of the Israel Mathematical Union and a member of the "Aliyah" committee of Tel Aviv University. Milman received his Ph.D. from Kharkiv State University in 1965 under the direction of Boris Levin. In a 1971 paper, Milman gave a new proof of Dvoretzky's theorem, which states that every convex body in dimension N has a section of dimension d, with d tending to infinity with N, that is arbitrarily close to being isometric to an ellipsoid. Milman's proof gives the optimal bound d ≥ const·log N. In this proof, Milman put forth the concentration of measure phenomenon, which has since found numerous applications. Milman has made important contributions to the study of Banach spaces of large dimension, which led to the development of asymptotic geometric analysis; his results in this field include Milman's reverse Brunn–Minkowski inequality and the quotient of subspace theorem.
He holds several positions, including advisor to the Israel Ministry of Science on the immigration of scientists and member of the European Mathematical Union. He is an editor of several journals, including Geometric and Functional Analysis, and has published a monograph and eleven edited books. He has delivered lectures at universities and institutes including MIT, IAS Princeton, Berkeley, IHES Paris, and Cambridge. Milman received the Landau Prize in Mathematics in 2002 and the EMET Prize in mathematics in 2007. In 2012 he became a fellow of the American Mathematical Society. Mathematics runs in the Milman family: his father was the mathematician David Milman, his brother is the mathematician Pierre Milman, and his son is the mathematician Emanuel Milman.
Peter Clive Sarnak is a South African-born mathematician with dual South African and American nationality. He has been Eugene Higgins Professor of Mathematics at Princeton University since 2002, succeeding Andrew Wiles, and is an editor of the Annals of Mathematics. He is known for his work in analytic number theory. Sarnak is also on the permanent faculty at the School of Mathematics of the Institute for Advanced Study, and he sits on the Board of Adjudicators and the selection committee for the mathematics award given under the auspices of the Shaw Prize. Sarnak graduated from the University of the Witwatersrand and from Stanford University, where he worked under the direction of Paul Cohen. Sarnak's highly cited work applied deep results in number theory to Ramanujan graphs, with connections to combinatorics and computer science. Sarnak has made major contributions to number theory and is recognised internationally as one of the leading analytic number theorists of his generation. His early work on the existence of cusp forms led to the disproof of a conjecture of Atle Selberg.
He has obtained some of the strongest known bounds towards the Ramanujan–Petersson conjectures for sparse graphs, and he was one of the first to exploit connections between certain questions of theoretical physics and analytic number theory. He has made fundamental contributions to arithmetical quantum chaos, a term which he introduced, and to the relationship between random matrix theory and the zeros of L-functions; his work on subconvexity for Rankin–Selberg L-functions led to the resolution of Hilbert's eleventh problem. During his career he has held numerous appointments, including Assistant Professor, 1980–83. His publications include: "Spectral Behavior of Quasi Periodic Potentials", Commun. Math. Phys. 84: 377–401, doi:10.1007/bf01208483; Some Applications of Modular Forms, 1990; Extremal Riemann Surfaces, 1997; Random Matrices, Frobenius Eigenvalues and Monodromy, 1998; "Some problems in Number Theory and Mathematical Physics", in V. I. Arnold, M. Atiyah, P. Lax, B. Mazur (eds.), Mathematics: frontiers and perspectives, American Mathematical Society.
pp. 261–269, ISBN 978-0821826973; Selected Works of Ilya Piatetski-Shapiro, 2000; Elementary Number Theory, Group Theory and Ramanujan Graphs, 2003; Selected Papers Volume I - Peter Lax, 2005; Automorphic Forms and Applications, 2007. Peter Sarnak was awarded the Pólya Prize of the Society for Industrial and Applied Mathematics in 1998, the Ostrowski Prize in 2001, the Levi L. Conant Prize in 2003, the Frank Nelson Cole Prize in Number Theory in 2005, and a Lester R. Ford Award in 2012; he is the recipient of the 2014 Wolf Prize in Mathematics. The University of the Witwatersrand conferred an honorary doctorate on him on 2 July 2014 for his distinguished contribution to the field of mathematics. He was elected a member of the National Academy of Sciences and a Fellow of the Royal Society in 2002. He was awarded an honorary doctorate by the Hebrew University of Jerusalem in 2010 and by the University of Chicago in 2015. He was elected to the 2018 class of fellows of the American Mathematical Society.
Alexander Lubotzky is an Israeli mathematician and former politician, a professor at the Hebrew University of Jerusalem and an adjunct professor at Yale University. He served as a member of the Knesset for The Third Way party between 1996 and 1999. In 2018 he won the Israel Prize for his accomplishments in mathematics and computer science. Lubotzky was born in Tel Aviv to Holocaust survivors; his father, Iser Lubotzky, was the legal advisor of Herut. After school, Lubotzky did his IDF national service as a captain in a special intelligence and communications unit. He studied mathematics at Bar-Ilan University during high school, gaining a BA, and continued directly to a PhD. He has worked as a professor of mathematics at the Hebrew University and has been a visiting professor at the Institute for Advanced Study in Princeton and the University of Chicago, with visits at Columbia, Yale, NYU and ETH Zurich. Lubotzky holds the Maurice and Clara Weil Chair in mathematics at the Einstein Institute of Mathematics of the Hebrew University of Jerusalem.
He is known for contributions to geometric group theory, the study of lattices in Lie groups, representation theory of discrete groups and Kazhdan's property (T), the study of subgroup growth, and applications of group theory to combinatorics, computer science, and error-correcting codes. Lubotzky received the Erdős Prize in 1990. In the years 1994–1996 Lubotzky was the chairman of the Einstein Institute of Mathematics at the Hebrew University of Jerusalem. In 1992 Lubotzky was a recipient of the Ferran Sunyer i Balaguer Prize from the Institut d'Estudis Catalans for his book Discrete Groups, Expanding Graphs and Invariant Measures, and again in 2002 with Dan Segal for their book Subgroup Growth. In 2002 he received the Rothschild Prize in mathematics. Lubotzky has been listed as an ISI highly cited researcher in mathematics since 2003. Lubotzky was elected a foreign member of the American Academy of Arts and Sciences in 2005. In 2005–06 he led a year-long program on "Pro-finite groups and the congruence subgroup problem" at the Institute for Advanced Study in Princeton.
In 2006, he received an honorary degree from the University of Chicago for his contribution to modern mathematics. In 2008 Lubotzky received the European Research Council advanced grant for exceptional established research leaders. In 2011 Lubotzky was chosen to be the keynote speaker at the joint meeting of the American Mathematical Society and the Mathematical Association of America in New Orleans. Lubotzky's keynote address in front of the conference's 6,000 attendees marked the first time that an Israeli was the keynote speaker at one of these conferences. In 2012 he was a visiting researcher at Microsoft Research. In 2014, he was elected to the Israel Academy of Sciences and Humanities. In 2015 Lubotzky again received the European Research Council advanced grant for exceptional established research leaders, becoming one of the few researchers to receive the grant twice. In honor of Lubotzky's sixtieth birthday, the Israel Institute for Advanced Studies hosted a conference from 5 November through 11 November 2016, with scholars from around the world convening to celebrate his work and collaborations.
In 2018 Lubotzky received the Israel Prize for mathematics. He gave a plenary lecture at the 2018 International Congress of Mathematicians in Rio de Janeiro, Brazil. A founding member of The Third Way in March 1996, he chaired its secretariat and was elected to the Knesset in the May 1996 elections; he served as a member of the Knesset's Foreign Affairs and Defense Committee. As an MK, Lubotzky was known for his compromise proposals on religious issues and pluralism; he was involved in setting up a solution to the conversion bill crisis via the Ne'eman Commission, working to avoid a conflict between Israel and the Jewish Diaspora. He co-drafted a comprehensive proposal for a new covenant for religion-state affairs in Israel with MK Yossi Beilin. Lubotzky married Yardenna, a lecturer in art history and English, in 1980; the couple have six children.
Awards and honors: 1991: Erdős Prize for the best Israeli mathematician/computer scientist under the age of 40; 1993: Ferran Sunyer i Balaguer Prize for the book Discrete Groups, Expanding Graphs and Invariant Measures; 2002: the Rothschild Prize; 2002: Ferran Sunyer i Balaguer Prize for the book Subgroup Growth; 2003: ISI list of Highly Cited Researchers; 2005: elected Foreign Honorary Member of the American Academy of Arts and Sciences; 2006: honorary doctoral degree from the University of Chicago; 2014: elected to the Israel Academy of Sciences and Humanities; 2018: Israel Prize for mathematics.
Books: Varieties of Representations of Finitely Generated Groups, 1985; Discrete Groups, Expanding Graphs and Invariant Measures, 1994; Tree Lattices, 2000; Subgroup Growth, 2001; מבנים אלגבריים: חבורות, חוגים ושדות (Algebraic Structures: Groups, Rings and Fields), 2018.
Computational complexity theory
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and on relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying their computational complexity, i.e. the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication, the number of gates in a circuit, and the number of processors. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, belongs to the field of computational complexity.
Related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter asks what kinds of problems can, in principle, be solved algorithmically. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself.
In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing; the instance is a number and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is a string over an alphabet. The alphabet is taken to be the binary alphabet, thus the strings are bitstrings; as in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Though some proofs of complexity-theoretic theorems assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, the non-members are those instances whose output is no.
The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the given graph is connected. The formal language associated with this decision problem is the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. A function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem—that is, the output isn't just yes or no. Notable examples include the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not the case, since function problems can be recast as decision problems.
For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
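The recasting can be made concrete; the membership test below is a trivial illustration (the function name is chosen for this sketch):

```python
def in_multiplication_language(a, b, c):
    """Membership test for the language of triples (a, b, c)
    satisfying a * b = c: the decision version of multiplication."""
    return a * b == c
```

For fixed a and b, exactly one value of c is accepted, so an algorithm deciding this language also determines the product.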
Eigenvalues and eigenvectors
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that changes by only a scalar factor when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation T(v) = λv, where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v. If the vector space V is finite-dimensional, the linear transformation T can be represented as a square matrix A, and the vector v by a column vector, rendering the above mapping as a matrix multiplication on the left-hand side and a scaling of the column vector on the right-hand side in the equation Av = λv. There is a direct correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself, given any basis of the vector space.
For this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real nonzero eigenvalue points in a direction in which it is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for "proper" or "characteristic". Originally utilized to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a non-zero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue.
This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The Mona Lisa example pictured at right provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter, by the transformation. Notice that points along the horizontal axis do not move at all. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction.
Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as (d/dx) e^(λx) = λ e^(λx). Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices that are also referred to as eigenvectors. If the linear transformation is expressed in the form of an n by n matrix A, the eigenvalue equation above can be rewritten as the matrix multiplication Av = λv, where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many related mathematical concepts, and the prefix eigen- is applied liberally when naming them: the set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.
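The shear example can be checked numerically. The sketch below assumes NumPy is available; the shear slope 0.5 is an arbitrary choice for illustration:

```python
import numpy as np

# A horizontal shear, as in the Mona Lisa illustration.
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
# Both eigenvalues equal 1, and the first eigenvector points along
# the horizontal axis, which the shear leaves fixed.
v = eigenvectors[:, 0]
```

Checking A @ v against eigenvalues[0] * v confirms the eigenvalue equation Av = λv for this transformation.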
The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, this basis is called an eigenbasis. Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
Connectivity (graph theory)
In mathematics and computer science, connectivity is one of the basic concepts of graph theory: it asks for the minimum number of elements (nodes or edges) that need to be removed to separate the remaining nodes into isolated subgraphs. It is closely related to the theory of network flow problems. The connectivity of a graph is an important measure of its resilience as a network. An undirected graph is connected when there is a path between every pair of vertices. In a connected graph, there are no unreachable vertices. A graph that is not connected is disconnected. An undirected graph G is said to be disconnected if there exist two vertices in G such that no path in G has those vertices as endpoints. A graph with just one vertex is connected. An edgeless graph with two or more vertices is disconnected. In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v; otherwise, they are called disconnected. If the two vertices are additionally connected by a path of length 1, i.e. by a single edge, the vertices are called adjacent. A graph is said to be connected if every pair of vertices in the graph is connected.
A connected component is a maximal connected subgraph of G. Each vertex belongs to exactly one connected component, as does each edge. A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected graph. It is unilaterally connected if it contains a directed path from u to v or a directed path from v to u for every pair of vertices u, v. It is strongly connected, or simply strong, if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v. The strong components are the maximal strongly connected subgraphs. A cut, vertex cut, or separating set of a connected graph G is a set of vertices whose removal renders G disconnected. The connectivity or vertex connectivity κ(G) is the size of a minimal vertex cut. A graph is called k-vertex-connected if its vertex connectivity is k or greater. More precisely, any graph G is said to be k-connected if it contains at least k+1 vertices, but does not contain a set of k − 1 vertices whose removal disconnects the graph.
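Strong connectivity can be tested directly from the definition. The sketch below is an illustrative helper (not an optimized method such as Tarjan's algorithm): it checks that a single vertex reaches every vertex in the graph and in its reverse, which together imply a directed path between every ordered pair.

```python
def is_strongly_connected(adj):
    """adj: dict mapping each vertex to a list of its successors
    (every vertex must appear as a key). Returns True if the directed
    graph is strongly connected."""
    def reachable(graph, start):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    vertices = list(adj)
    if not vertices:
        return True
    # Build the reverse graph (all edges flipped).
    reverse = {u: [] for u in vertices}
    for u in vertices:
        for v in adj[u]:
            reverse[v].append(u)
    s = vertices[0]
    return (len(reachable(adj, s)) == len(vertices)
            and len(reachable(reverse, s)) == len(vertices))
```

Replacing the two reachability checks by a single check on the undirected version of the graph would test weak connectivity instead.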
In particular, a complete graph with n vertices, denoted Kn, has no vertex cuts at all, but by convention κ(Kn) = n − 1. A vertex cut for two vertices u and v is a set of vertices whose removal from the graph disconnects u and v. The local connectivity κ(u, v) is the size of a smallest vertex cut separating u and v. Local connectivity is symmetric for undirected graphs. Moreover, except for complete graphs, κ(G) equals the minimum of κ(u, v) over all nonadjacent pairs of vertices u, v. 2-connectivity is also called biconnectivity and 3-connectivity is called triconnectivity. A graph G which is connected but not 2-connected is sometimes called separable. Analogous concepts can be defined for edges. In the simple case in which cutting a single, specific edge would disconnect the graph, that edge is called a bridge. More generally, an edge cut of G is a set of edges whose removal renders the graph disconnected. The edge-connectivity λ(G) is the size of a smallest edge cut, and the local edge-connectivity λ(u, v) of two vertices u, v is the size of a smallest edge cut disconnecting u from v. Again, local edge-connectivity is symmetric.
A graph is called k-edge-connected if its edge connectivity is k or greater. A graph is said to be maximally connected if its connectivity equals its minimum degree, and maximally edge-connected if its edge-connectivity equals its minimum degree. A graph is said to be super-κ if every minimum vertex cut isolates a vertex. A graph is said to be hyper-connected or hyper-κ if the deletion of each minimum vertex cut creates exactly two components, one of which is an isolated vertex. A graph is semi-hyper-connected or semi-hyper-κ if any minimum vertex cut separates the graph into exactly two components. More precisely: a connected graph G is said to be super-connected or super-κ if all minimum vertex-cuts consist of the vertices adjacent with one vertex. A connected graph G is said to be super-edge-connected or super-λ if all minimum edge-cuts consist of the edges incident on some vertex. A cutset X of G is called a non-trivial cutset if X does not contain the neighborhood N(u) of any vertex u ∉ X. The superconnectivity κ1 of G is the size of a smallest non-trivial cutset: κ1(G) = min{ |X| : X is a non-trivial cutset of G }. A non-trivial edge-cut and the edge-superconnectivity λ1(G) are defined analogously. One of the most important facts about connectivity in graphs is Menger's theorem, which characterizes the connectivity and edge-connectivity of a graph in terms of the number of independent paths between vertices.
If u and v are vertices of a graph G, then a collection of paths between u and v is called independent if no two of them share a vertex other than u and v themselves. The collection is edge-independent if no two paths in it share an edge. The number of mutually independent paths between u and v is written as κ′(u, v), and the number of mutually edge-independent paths between u and v is written as λ′(u, v). Menger's theorem asserts that for distinct vertices u, v, λ(u, v) equals λ′(u, v), and if u is not adjacent to v then κ(u, v) equals κ′(u, v). This fact is a special case of the max-flow min-cut theorem. The problem of determining whether two vertices in a graph are connected can be solved efficiently using a search algorithm, such as breadth-first search. More generally, it is easy to determine computationally whether a graph is connected, or to count the number of connected components.
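The search-based test described above can be sketched as a breadth-first search from an arbitrary start vertex; the adjacency-dict representation is an assumption of this sketch:

```python
from collections import deque

def is_connected(adjacency):
    """adjacency: dict mapping each vertex of an undirected graph to an
    iterable of its neighbours (every vertex appears as a key).
    Returns True if the graph is connected."""
    vertices = list(adjacency)
    if not vertices:
        return True  # convention: the empty graph is connected
    seen = {vertices[0]}
    queue = deque(seen)
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    # The graph is connected iff the search reached every vertex.
    return len(seen) == len(vertices)
```

Running the search to exhaustion and restarting from each unvisited vertex would count the connected components instead of merely testing connectivity.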