Computational complexity theory
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and on relating these classes to each other. A computational problem is a task solved by a computer; such a problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e. the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication, the number of gates in a circuit and the number of processors. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do; the P versus NP problem, one of the seven Millennium Prize Problems, is part of the field of computational complexity.
Related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kind of problems can, in principle, be solved algorithmically. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance; the input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself.
In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
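The primality-testing problem above can be sketched concretely. The following Python snippet (illustrative only: simple trial division, not a state-of-the-art primality test) decides individual instances, returning the "yes"/"no" answer as a boolean.

```python
def is_prime(n: int) -> bool:
    """Decide one instance of the primality problem: is n prime?"""
    if n < 2:
        return False
    d = 2
    while d * d <= n:        # a composite n has a divisor at most sqrt(n)
        if n % d == 0:
            return False     # "no": d is a nontrivial divisor
        d += 1
    return True              # "yes": no nontrivial divisor found

print(is_prime(13))  # True  ("yes")
print(is_prime(15))  # False ("no")
```

Each call decides one instance; the problem itself is the infinite family of all such questions.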
When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, so the strings are bitstrings; as in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Though some proofs of complexity-theoretic theorems assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no.
The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the given graph is connected. The formal language associated with this decision problem is the set of all connected graphs; to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. A function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem—that is, the output is not just yes or no. A notable example is the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not the case, since function problems can be recast as decision problems.
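As a sketch of how an algorithm decides the language of connected graphs, the Python snippet below uses breadth-first search; the adjacency-list dictionary is one arbitrary choice among the graph encodings discussed above.

```python
from collections import deque

def is_connected(adj: dict) -> bool:
    """Accept the input graph iff every vertex is reachable from every other."""
    if not adj:
        return True  # convention used here: the empty graph is connected
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:                 # breadth-first search from one vertex
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)  # accept iff the search reached everything

path = {0: [1], 1: [0, 2], 2: [1]}        # one component: accept
split = {0: [1], 1: [0], 2: [3], 3: [2]}  # two components: reject
print(is_connected(path), is_connected(split))  # True False
```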
For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
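This recasting is direct to state in code; a minimal sketch of the membership test for the multiplication language:

```python
def in_mult_language(a: int, b: int, c: int) -> bool:
    """Decide whether the triple (a, b, c) is in {(a, b, c) : a * b = c}."""
    return a * b == c

print(in_mult_language(3, 4, 12))  # True: the triple is in the language
print(in_mult_language(3, 4, 13))  # False: it is not
```

Answering such yes/no queries for all candidate values of c carries the same information as computing the product itself.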
In mathematics, the Fibonacci numbers, denoted Fn, form a sequence, called the Fibonacci sequence, such that each number is the sum of the two preceding ones, starting from 0 and 1. That is, F0 = 0, F1 = 1, and Fn = Fn−1 + Fn−2 for n > 1. One has F2 = 1. In some older books the value F0 = 0 is omitted, and the Fibonacci sequence starts with F1 = F2 = 1; the beginning of the sequence is then: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, … Fibonacci numbers are related to the golden ratio: Binet's formula expresses the nth Fibonacci number in terms of n and the golden ratio, and implies that the ratio of two consecutive Fibonacci numbers tends to the golden ratio as n increases. Fibonacci numbers are named after the Italian mathematician Leonardo of Pisa, known as Fibonacci. They appear to have first arisen as early as 200 BC in work by Pingala on enumerating possible patterns of poetry formed from syllables of two lengths. In his 1202 book Liber Abaci, Fibonacci introduced the sequence to Western European mathematics, although the sequence had been described earlier in Indian mathematics.
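The recurrence and the golden-ratio limit can be checked directly; a minimal Python sketch:

```python
def fibonacci(n: int) -> int:
    """Compute Fn iteratively from F0 = 0, F1 = 1, Fn = Fn-1 + Fn-2."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The start of the sequence as listed in the text:
print([fibonacci(n) for n in range(1, 13)])
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]

# Ratios of consecutive terms approach the golden ratio (1 + sqrt(5)) / 2:
golden = (1 + 5 ** 0.5) / 2
print(abs(fibonacci(30) / fibonacci(29) - golden) < 1e-10)  # True
```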
Fibonacci numbers appear unexpectedly often in mathematics, so much so that there is an entire journal dedicated to their study, the Fibonacci Quarterly. Applications of Fibonacci numbers include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed systems. They also appear in biological settings, such as branching in trees, the arrangement of leaves on a stem, the fruit sprouts of a pineapple, the flowering of an artichoke, an uncurling fern and the arrangement of a pine cone's bracts. Fibonacci numbers are closely related to Lucas numbers Ln in that they form a complementary pair of Lucas sequences Un = Fn and Vn = Ln; Lucas numbers are also intimately connected with the golden ratio. The Fibonacci sequence appears in Indian mathematics in connection with Sanskrit prosody, as pointed out by Parmanand Singh in 1985. In the Sanskrit poetic tradition, there was interest in enumerating all patterns of long syllables of 2 units duration, juxtaposed with short syllables of 1 unit duration.
Counting the different patterns of successive L and S with a given total duration results in the Fibonacci numbers: the number of patterns of duration m units is Fm + 1. Knowledge of the Fibonacci sequence was expressed as early as Pingala. Singh cites Pingala's cryptic formula misrau cha and scholars who interpret it in context as saying that the number of patterns for m beats is obtained by adding a short syllable to the Fm cases and a long syllable to the Fm−1 cases. Bharata Muni also expresses knowledge of the sequence in the Natya Shastra. However, the clearest exposition of the sequence arises in the work of Virahanka, whose own work is lost but is available in a quotation by Gopala: Variations of two earlier meters... For example, for four, variations of meters of two three being mixed, five happens.... In this way, the process should be followed in all mātrā-vṛttas. Hemachandra is credited with knowledge of the sequence as well, writing that "the sum of the last and the one before the last is the number... of the next mātrā-vṛtta."
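The prosody count can be verified computationally. The sketch below (illustrative; the function names are introduced here) counts sequences of short (1-unit) and long (2-unit) syllables with total duration m and checks the claim in the text that the count equals Fm + 1.

```python
def count_patterns(m: int) -> int:
    """Number of S/L sequences of total duration m (S = 1 unit, L = 2 units)."""
    if m == 0:
        return 1  # the empty pattern
    if m == 1:
        return 1  # the single pattern "S"
    # a pattern of duration m ends in S (leaving m-1) or in L (leaving m-2)
    return count_patterns(m - 1) + count_patterns(m - 2)

def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Duration 4 admits 5 patterns: SSSS, SSL, SLS, LSS, LL.
print(count_patterns(4))  # 5
print(all(count_patterns(m) == fib(m + 1) for m in range(10)))  # True
```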
Outside India, the Fibonacci sequence first appears in the book Liber Abaci by Fibonacci, who uses it to calculate the growth of rabbit populations. Fibonacci considers the growth of a hypothetical, idealized rabbit population, assuming that a newly born pair of rabbits, one male and one female, is put in a field, and posed the puzzle: how many pairs will there be in one year? At the end of the first month, they mate. At the end of the second month the female produces a new pair, so now there are 2 pairs of rabbits in the field. At the end of the third month, the original female produces a second pair, making 3 pairs in all in the field. At the end of the fourth month, the original female has produced yet another new pair, and the female born two months ago produces her first pair, making 5 pairs. At the end of the nth month, the number of pairs of rabbits is equal to the number of new pairs plus the number of pairs alive last month; this is the nth Fibonacci number. The name "Fibonacci sequence" was first used by the 19th-century number theorist Édouard Lucas.
Number theory is a branch of pure mathematics devoted to the study of the integers. The German mathematician Carl Friedrich Gauss said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." Number theorists study prime numbers as well as the properties of objects made out of integers or defined as generalizations of the integers. Integers can be considered either in themselves or as solutions to equations. Questions in number theory are often best understood through the study of analytical objects that encode properties of the integers, primes or other number-theoretic objects in some fashion. One may also study real numbers in relation to rational numbers, for example, as approximated by the latter. The older term for number theory is arithmetic; by the early twentieth century, it had been superseded by "number theory". The use of the term arithmetic for number theory regained some ground in the second half of the 20th century, arguably in part due to French influence. In particular, arithmetical is often preferred as an adjective to number-theoretic.
The first historical find of an arithmetical nature is a fragment of a table: the broken clay tablet Plimpton 322 contains a list of "Pythagorean triples", that is, integers (a, b, c) such that a² + b² = c². The triples are too large to have been obtained by brute force. The heading over the first column reads: "The takiltum of the diagonal, subtracted such that the width..." The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity ((x − 1/x)/2)² + 1 = ((x + 1/x)/2)², implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by c/a for actual use as a "table", for example, with a view to applications; it is not known whether there could have been any such applications. It has been suggested instead that the table was a source of numerical examples for school problems. While Babylonian number theory—or what survives of Babylonian mathematics that can be called thus—consists of this single, striking fragment, Babylonian algebra was exceptionally well developed. Late Neoplatonic sources state that Pythagoras learned mathematics from the Babylonians.
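For illustration only (not a claim about the tablet's actual method): taking a rational parameter x = p/q in an identity of this kind and clearing denominators yields the familiar parametrization of Pythagorean triples, which a few lines of Python can check.

```python
def triple(p: int, q: int):
    """Pythagorean triple from integer parameters p > q > 0:
    (p^2 - q^2)^2 + (2*p*q)^2 == (p^2 + q^2)^2."""
    return p * p - q * q, 2 * p * q, p * p + q * q

for p, q in [(2, 1), (3, 2), (4, 1)]:
    a, b, c = triple(p, q)
    print((a, b, c), a * a + b * b == c * c)
# (3, 4, 5) True
# (5, 12, 13) True
# (15, 8, 17) True
```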
Much earlier sources state that Pythagoras traveled and studied in Egypt. Euclid IX 21–34 is probably Pythagorean. The Pythagorean mystics gave great importance to the odd and the even; the discovery that √2 is irrational is credited to the early Pythagoreans. By revealing that numbers could be irrational, this discovery seems to have provoked the first foundational crisis in mathematical history; it forced a distinction between numbers, on the one hand, and lengths and proportions, on the other. The Pythagorean tradition also spoke of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc. are now seen as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period. We know of no arithmetical material in ancient Egyptian or Vedic sources, though there is some algebra in both. The Chinese remainder theorem appears as an exercise in the Sunzi Suanjing. There is some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere.
Like the Pythagoreans' perfect numbers, magic squares have passed from superstition into recreation. Aside from a few fragments, the mathematics of Classical Greece is known to us either through the reports of contemporary non-mathematicians or through mathematical works from the early Hellenistic period.
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, producing "output" and terminating at a final ending state; the transition from one state to the next is not necessarily deterministic. The concept of algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers and in the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized Algoritmi.
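The sieve of Eratosthenes mentioned above can be sketched in a few lines of Python:

```python
def sieve(n: int):
    """List all primes up to n by marking off multiples of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]   # 0 and 1 are not prime
    p = 2
    while p * p <= n:
        if is_prime[p]:
            # every composite <= n has a prime factor <= sqrt(n),
            # so marking multiples starting from p*p suffices
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [i for i, prime in enumerate(is_prime) if prime]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```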
A partial formalization of what would become the modern concept of algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in the Latinization of the name of Muhammad ibn Musa al-Khwarizmi in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, English 'algorism', the corruption of his name, meant the "decimal number system"; in English, algorism was first used in about 1230 and then by Chaucer in 1391, English having adopted the French term. In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century; it was not until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris. Which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice (Talibus Indorum), or Hindu numerals.
An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. A program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm to determine the greatest common divisor of two integers. Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough† to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human capable of carrying out only elementary operations on symbols.
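The Euclidean algorithm cited as the prototypical example can be sketched as follows; the classical worked instance gcd(1071, 462) = 21 is included as a check.

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of two nonnegative integers
    by the Euclidean algorithm."""
    while b != 0:
        # invariant: gcd(a, b) is unchanged when (a, b) -> (b, a mod b)
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```

The loop terminates because the second argument strictly decreases on every iteration, which is what makes this a genuine algorithm in the sense above: it always stops.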
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large; thus an algorithm can be an algebraic equation such as y = m + n, with two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: Precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =... and "effectively" produce, in a "reasonable" time, an output integer y at a specified place and in a specified format.
Hendrik Willem Lenstra Jr. is a Dutch mathematician. Lenstra received his doctorate from the University of Amsterdam in 1977 and became a professor there in 1978. In 1987 he was appointed to the faculty of the University of California, Berkeley. Lenstra has worked principally in computational number theory. He is well known for co-discovering the Lenstra–Lenstra–Lovász lattice basis reduction algorithm in 1982 and for discovering the elliptic curve factorization method in 1987. In 1992, he computed all solutions to the inverse Fermat equation. The Cohen–Lenstra heuristics, a set of precise conjectures about the structure of class groups of quadratic fields, are named after him. Three of his brothers, Arjen Lenstra, Andries Lenstra and Jan Karel Lenstra, are also mathematicians; Jan Karel Lenstra is a former director of the Centrum Wiskunde & Informatica in the Netherlands. Hendrik Lenstra was the Chairman of the Program Committee of the International Congress of Mathematicians in 2010. In 1984 Lenstra became a member of the Royal Netherlands Academy of Arts and Sciences.
He won the Fulkerson Prize in 1985 for his research using the geometry of numbers to solve integer programs with few variables in time polynomial in the number of constraints. He was awarded the Spinoza Prize in 1998, and on 24 April 2009 he was made a Knight of the Order of the Netherlands Lion. In 2009 he delivered the Gauss Lecture of the German Mathematical Society, and in 2012 he became a fellow of the American Mathematical Society. Selected publications:
Euclidean Number Fields, Parts 1–3, Mathematical Intelligencer, 1980.
Factoring integers with elliptic curves, Annals of Mathematics, vol. 126, 1987, pp. 649–673.
With A. K. Lenstra: Algorithms in Number Theory, pp. 673–716 in Jan van Leeuwen (ed.), Handbook of Theoretical Computer Science, Vol. A: Algorithms and Complexity, Elsevier and MIT Press, 1990, ISBN 0-444-88071-2, ISBN 0-262-22038-5.
Algorithms in Algebraic Number Theory, Bulletin of the AMS, vol. 26, 1992, pp. 211–244.
Primality testing algorithms, Séminaire Bourbaki, 1981.
With Stevenhagen: Artin reciprocity and Mersenne primes, Nieuw Archief voor Wiskunde, 2000.
With Stevenhagen: Chebotarev and his density theorem, Mathematical Intelligencer, 1992.
Profinite Fibonacci numbers, December 2005.
Arjen Klaas Lenstra is a Dutch mathematician. He studied mathematics at the University of Amsterdam; he is a professor at EPFL in the Laboratory for Cryptologic Algorithms, and has worked for Citibank and Bell Labs. Lenstra is active in cryptography and computational number theory, in areas such as integer factorization. With Mark Manasse, he was the first to seek volunteers over the internet for a large-scale scientific distributed computing project. Such projects became more common after the factorization of RSA-129, a high-publicity distributed factoring success led by Lenstra along with Derek Atkins, Michael Graff and Paul Leyland, and he was a leader in the successful factorizations of several other RSA numbers. Lenstra was also involved in the development of the number field sieve. With coauthors, he showed the great potential of the algorithm early on by using it to factor the ninth Fermat number, which was far out of reach of other factoring algorithms of the time. He has since been involved with several other number field sieve factorizations, including the current record, RSA-768.
Lenstra's most cited scientific result is the first polynomial-time algorithm to factor polynomials with rational coefficients, in the seminal paper that introduced the LLL lattice reduction algorithm with Hendrik Willem Lenstra and László Lovász. Lenstra is also co-inventor of the XTR cryptosystem. His brother Hendrik Lenstra is a professor of mathematics at Leiden University, and his brother Jan Karel Lenstra is a former director of the Centrum Wiskunde & Informatica. On 1 March 2005, Arjen Lenstra, Xiaoyun Wang and Benne de Weger of Eindhoven University of Technology demonstrated the construction of two X.509 certificates with different public keys and the same MD5 hash, a demonstrably practical hash collision; the construction included private keys for both public keys. Lenstra is the recipient of the 2008 RSA Award for Excellence in Mathematics.
Mathematics includes the study of such topics as quantity, structure and change. Mathematicians seek and use patterns to formulate new conjectures; when mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity from as far back as written records exist; the research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Since the pioneering work of Giuseppe Peano, David Hilbert and others on axiomatic systems in the late 19th century, it has become customary to view mathematical research as establishing truth by rigorous deduction from appropriately chosen axioms and definitions. Mathematics developed at a relatively slow pace until the Renaissance, when mathematical innovations interacting with new scientific discoveries led to a rapid increase in the rate of mathematical discovery that has continued to the present day.
Mathematics is essential in many fields, including natural science, medicine and the social sciences. Applied mathematics has led to new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, without having any application in mind, but practical applications for what began as pure mathematics are often discovered later. The history of mathematics can be seen as an ever-increasing series of abstractions. The first abstraction, shared by many animals, was that of numbers: the realization that a collection of two apples and a collection of two oranges have something in common, namely the quantity of their members. As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may also have recognized how to count abstract quantities, like time – days, years. Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic and geometry for taxation and other financial calculations, for building and construction, and for astronomy.
The most ancient mathematical texts from Mesopotamia and Egypt are from 2000–1800 BC. Many early texts mention Pythagorean triples, and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical development after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic first appears in the archaeological record. The Babylonians possessed a place-value system and used a sexagesimal numeral system, which is still in use today for measuring angles and time. Beginning in the 6th century BC with the Pythagoreans, the Ancient Greeks began a systematic study of mathematics as a subject in its own right. Around 300 BC, Euclid introduced the axiomatic method still used in mathematics today, consisting of definition, axiom, theorem and proof; his textbook Elements is widely considered the most successful and influential textbook of all time. The greatest mathematician of antiquity is often held to be Archimedes of Syracuse, who developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus.
Other notable achievements of Greek mathematics are conic sections, trigonometry (Hipparchus of Nicaea) and the beginnings of algebra. The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics. Other notable developments of Indian mathematics include the modern definition of sine and cosine, and an early form of infinite series. During the Golden Age of Islam, in the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics; the most notable achievement of Islamic mathematics was the development of algebra. Other notable achievements of the Islamic period are advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system. Many notable mathematicians from this period were Persian, such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī. During the early modern period, mathematics began to develop at an accelerating pace in Western Europe.
The development of calculus by Newton and Leibniz in the 17th century revolutionized mathematics. Leonhard Euler was the most notable mathematician of the 18th century, contributing numerous theorems and discoveries. The foremost mathematician of the 19th century was the German mathematician Carl Friedrich Gauss, who made numerous contributions to fields such as algebra, differential geometry, matrix theory, number theory and statistics. In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show that any consistent axiomatic system powerful enough to describe arithmetic will contain true propositions that cannot be proved. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both.