1.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time, and in a well-defined formal language, for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem.

In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it was not until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: "Algorism is the art by which at present we use those Indian figures, which number two times five." The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice (Talibus Indorum), or Hindu numerals.

An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. Even so, humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term.

Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. Gurevich argues that Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine. According to Savage, an algorithm is a computational process defined by a Turing machine.

Typically, when an algorithm is associated with processing information, data can be read from an input source, written to an output device, and stored for further processing. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures. For such a computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case by case.
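The intuitive notion described above can be made concrete with a classic example. The sketch below (illustrative, not from the text) implements Euclid's method for the greatest common divisor: a finite, unambiguous sequence of steps that halts on every valid input.

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, well-defined procedure that
    is guaranteed to terminate for any non-negative integers."""
    while b != 0:
        # Each step strictly decreases b, so the loop must halt.
        a, b = b, a % b
    return a
```

Termination is what separates this procedure from an arbitrary computational process: the remainder `a % b` is strictly smaller than `b`, so the loop variable decreases toward zero on every iteration.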
2.
Randomized algorithm
–
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior. Formally, the algorithm's performance will be a random variable determined by the random bits; thus either the running time, or the output (or both), are random variables. In the second case, with random output, the use of the term "algorithm" for such a procedure is somewhat questionable, since it is no longer formally effective. However, in some cases, probabilistic algorithms are the only practical means of solving a problem.

As a motivating example, consider the problem of finding an 'a' in an array of n elements. Input: an array of n ≥ 2 elements, in which half are 'a's and the other half are 'b's. Output: find an 'a' in the array. We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.

The Las Vegas algorithm repeatedly probes random positions until an 'a' is found; it succeeds with probability 1, but its running time is random. The Monte Carlo algorithm probes at most k random positions; after k iterations, the probability of finding an 'a' is 1 − (1/2)^k. This algorithm does not guarantee success, but the run time is bounded: the number of iterations is less than or equal to k. Taking k to be constant, the run time is Θ(1).

Randomized algorithms are particularly useful when faced with a malicious adversary or attacker who deliberately tries to feed a bad input to the algorithm, such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing. In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable.
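The two versions of the find-an-'a' procedure can be sketched as follows (the function names and array encoding are our own illustration):

```python
import random

def find_a_las_vegas(arr):
    """Las Vegas: always returns an index holding 'a';
    only the running time is random."""
    n = len(arr)
    while True:
        i = random.randrange(n)
        if arr[i] == 'a':
            return i

def find_a_monte_carlo(arr, k):
    """Monte Carlo: at most k probes, so the run time is bounded,
    but the search fails with probability (1/2)**k."""
    n = len(arr)
    for _ in range(k):
        i = random.randrange(n)
        if arr[i] == 'a':
            return i
    return None  # bounded time, but no guarantee of success
```

Since half the entries are 'a', each probe succeeds with probability 1/2; the Las Vegas version therefore needs 2 probes in expectation, while the Monte Carlo version succeeds with probability 1 − (1/2)^k.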
The Monte Carlo algorithm is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter k. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time.

Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic such class is RP, the class of decision problems for which an efficient randomized algorithm recognizes NO-instances with certainty and YES-instances with probability at least 1/2; the complement class for RP is co-RP. Problem classes having algorithms whose expected running time is polynomial and whose output is always correct are said to be in ZPP. The class of problems for which both YES- and NO-instances are allowed to be identified with some error is called BPP.
3.
Michael O. Rabin
–
Michael Oser Rabin is an Israeli computer scientist and a recipient of the Turing Award. Rabin was born in 1931 in Breslau, Germany (now Wrocław, Poland), the son of a rabbi. In 1935, he emigrated with his family to Mandate Palestine. After high school, he was drafted into the army during the 1948 Arab–Israeli War. The mathematician Abraham Fraenkel, who was a professor of mathematics in Jerusalem, intervened with the army command, and Rabin was discharged to study at the university in 1949. He received an M.Sc. from the Hebrew University of Jerusalem in 1953. He became Associate Professor of Mathematics at the University of California, Berkeley, and at MIT before moving to Harvard University as Gordon McKay Professor of Computer Science in 1981.

In the late 1950s, he was invited for a summer to do research for IBM at the Lamb Estate in Westchester County, New York, with other promising mathematicians and scientists. It was there that he and Dana Scott wrote the paper Finite Automata and Their Decision Problems. Soon, using nondeterministic automata, they were able to re-prove Kleene's result that finite state machines exactly accept regular languages. This work was among the origins of what was to become computational complexity theory: nondeterministic machines have become a key concept in that theory, particularly with the description of the complexity classes P and NP.

Rabin then returned to Jerusalem, researching logic and working on the foundations of what would later be known as computer science. He was a professor and the head of the Institute of Mathematics at the Hebrew University at 29 years old. Rabin recalls: "There was absolutely no appreciation of the work on the issues of computing. Mathematicians did not recognize the emerging new field." In 1960, he was invited by Edward F. Moore to work at Bell Labs, where Rabin introduced probabilistic automata that employ coin tosses in order to decide which state transitions to take. He showed examples of languages that required a very large number of states.
In 1969, Rabin proved that the second-order theory of n successors is decidable. A key component of the proof implicitly showed determinacy of parity games. In 1975, Rabin finished his tenure as Rector of the Hebrew University of Jerusalem and went to the Massachusetts Institute of Technology in the USA as a visiting professor. Gary Miller was also there and had his polynomial time test for primality based on the extended Riemann hypothesis. While there, Rabin invented the Miller–Rabin primality test, a randomized algorithm that can determine very quickly, with a tiny probability of error, whether a number is prime. In 1976 he was invited by Joseph Traub to meet at Carnegie Mellon University and present the new test. After he gave that lecture, Traub said, "No, no, this is revolutionary, and it's going to become very important." In 1979, Rabin invented the Rabin cryptosystem, the first asymmetric cryptosystem whose security was proved equivalent to the intractability of integer factorization. In 1987, Rabin, together with Richard Karp, created one of the most well-known efficient string search algorithms, the Rabin–Karp algorithm. Rabin's more recent research has concentrated on computer security.
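The Miller–Rabin test mentioned above can be sketched as follows (the round count and the small-prime pre-check are our own implementation choices): a composite number survives each random round with probability at most 1/4, so the error shrinks geometrically with the number of rounds.

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Return False if n is certainly composite,
    True if n is probably prime (error < (1/4)**rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True
```

The test never misclassifies a prime; the only possible error is declaring a composite "probably prime", which is what makes it a Monte Carlo algorithm.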
4.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication and the number of gates in a circuit. One of the roles of computational complexity theory is to determine the limits on what computers can and cannot do. Closely related fields in computer science are the analysis of algorithms and computability theory.

More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances.

When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer. Mathematical objects other than bitstrings must be suitably encoded; for example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. Keeping the theory independent of the choice of encoding can be achieved by ensuring that different representations can be transformed into each other efficiently.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly the instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. Primality testing, discussed above, is one example of such a decision problem.
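To make the language view concrete, here is an illustrative sketch (the names are our own) of primality testing as a decision problem over binary strings: the decider accepts exactly the strings that encode primes.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check (exponential in the bit length,
    but a correct decider)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def decide_primes(instance: str) -> bool:
    """Accept iff `instance` is the binary encoding of a prime,
    i.e., iff the string belongs to the language PRIMES."""
    return is_prime(int(instance, 2))
```

The decider returning True corresponds to "accept" and False to "reject"; the language PRIMES is the set of all strings on which it accepts.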
5.
CC (complexity)
–
In computational complexity theory, CC is the complexity class containing decision problems which can be solved by comparator circuits of polynomial size. The most important problem which is complete for CC is a variant of the stable marriage problem.

A comparator circuit is a network of wires and gates. Each comparator gate, which is a directed edge connecting two wires, takes its two inputs and outputs them in sorted order. The input to any wire can be either a variable, its negation, or a constant, and one of the wires is designated as the output wire. The comparator circuit value problem (CCVP) is the problem of evaluating a comparator circuit given an encoding of the circuit and its input. The complexity class CC is defined as the class of problems logspace reducible to CCVP; an equivalent definition is the class of problems AC0 reducible to CCVP. Since there are sorting networks which can be constructed in AC0, this shows that the majority function is in CC. A problem in CC is CC-complete if every problem in CC can be reduced to it using a logspace reduction; the comparator circuit value problem is CC-complete.

In the stable marriage problem, there is an equal number of men and women. Each person ranks all members of the opposite sex. A matching between men and women is stable if there are no unpaired man and woman who prefer each other over their current partners. Among the stable matchings, there is one in which each woman gets the best man that she ever gets in any stable matching. The decision version asks, given the rankings of all men and women, whether a given man and a given woman are matched in this stable matching. Although the classical Gale–Shapley algorithm cannot be implemented as a comparator circuit, this variant of the problem is nevertheless complete for CC. Another problem which is CC-complete is lexicographically-first maximal matching. In this problem, we are given a bipartite graph with an order on the vertices, together with a designated edge.
The lexicographically-first maximal matching is obtained by successively matching vertices from the first bipartition to the minimal available vertices from the second bipartition; the problem asks whether the given edge belongs to this matching. Scott Aaronson showed that a pebble-based model of computation inspired by the Digi-Comp II is CC-complete: the problem is to decide whether any pebbles are present in a particular pile after executing the program. He used this to show that the problem of deciding whether any balls reach a designated sink vertex in a Digi-Comp II-like device is also CC-complete.

The comparator circuit evaluation problem can be solved in polynomial time, so CC is contained in P. On the other hand, comparator circuits can solve directed reachability, and so CC contains NL. There is a relativized world in which CC and NC are incomparable.
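Evaluating a comparator circuit is straightforward, which is why CCVP sits inside P. The sketch below uses our own assumed encoding (a list of input bits plus a list of gate pairs; the first wire of each gate receives the smaller value), not a standard one:

```python
def evaluate_comparator_circuit(inputs, gates, output_wire):
    """Evaluate a comparator circuit in one pass over its gates.

    inputs: initial bit on each wire (variables/negations/constants
            are assumed to have been substituted already).
    gates:  list of (i, j) pairs; each gate rewrites wires i and j
            with their values in sorted order (min on i, max on j).
    """
    wires = list(inputs)
    for i, j in gates:
        lo, hi = min(wires[i], wires[j]), max(wires[i], wires[j])
        wires[i], wires[j] = lo, hi
    return wires[output_wire]
```

For example, the three-wire sorting network with gates [(0, 1), (1, 2), (0, 1)] fully sorts its inputs, so the middle wire ends up holding the median, i.e., the majority of the three input bits; this mirrors the remark above that sorting networks place the majority function in CC.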
6.
Independence (probability theory)
–
In probability theory, two events are independent (statistically independent, or stochastically independent) if the occurrence of one does not affect the probability of occurrence of the other. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other.

Two events A and B are independent if and only if their joint probability equals the product of their probabilities: P(A ∩ B) = P(A)P(B). Although the derived expressions, such as P(A | B) = P(A), may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if P(A) or P(B) is 0. Furthermore, the preferred definition makes clear by symmetry that when A is independent of B, B is also independent of A.

A finite set of events is pairwise independent if every pair of events is independent, that is, if and only if P(A_i ∩ A_j) = P(A_i)P(A_j) for all distinct pairs of events A_i, A_j. A finite set of events is mutually independent if every event is independent of any intersection of the other events, that is, if and only if for every n-element subset {A_1, …, A_n}, P(A_1 ∩ ⋯ ∩ A_n) = P(A_1)⋯P(A_n). This is called the multiplication rule for independent events. Note that it is not a condition involving only the product of the probabilities of all single events; it must hold for every subset of events. For more than two events, a mutually independent set of events is pairwise independent, but the converse is not necessarily true.

Two random variables X and Y are independent if and only if the elements of the π-system generated by them are independent; that is to say, for every a and b, the events {X ≤ a} and {Y ≤ b} are independent events. A set of random variables is pairwise independent if and only if every pair of random variables is independent. A set of random variables is mutually independent if and only if for any finite subset X_1, …, X_n and any finite sequence of numbers a_1, …, a_n, the events {X_1 ≤ a_1}, …, {X_n ≤ a_n} are mutually independent events. The measure-theoretically inclined may prefer to substitute events {X ∈ A} for events {X ≤ a} in the above definition, and that definition is exactly equivalent to the one above when the values of the random variables are real numbers.
It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space.

Intuitively, two random variables X and Y are conditionally independent given Z if, once Z is known, the value of Y adds no further information about X. For instance, two measurements X and Y of the same underlying quantity Z are not independent, but they are conditionally independent given Z. The formal definition of conditional independence is based on the idea of conditional distributions. If X, Y, and Z are discrete random variables, then X and Y are conditionally independent given Z if and only if P(X = x, Y = y | Z = z) = P(X = x | Z = z) P(Y = y | Z = z) for any x, y, and z with P(Z = z) > 0. Equivalently, the conditional distribution for X given Y and Z is the same as that given Z alone.
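Two of the distinctions above can be checked numerically on tiny discrete distributions. Both worked examples below are our own illustrations, not taken from the text: first, three events that are pairwise but not mutually independent; second, two variables that are dependent yet conditionally independent given a third.

```python
from itertools import product
from fractions import Fraction

# Example 1: pairwise but not mutually independent events.
# Sample space: two fair coin flips, each outcome with probability 1/4.
omega = list(product('HT', repeat=2))

def prob(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == 'H'   # first flip is heads
B = lambda w: w[1] == 'H'   # second flip is heads
C = lambda w: w[0] == w[1]  # the two flips agree

pairwise = all(
    prob(lambda w: e(w) and f(w)) == prob(e) * prob(f)
    for e, f in [(A, B), (A, C), (B, C)]
)
# Mutual independence fails: P(A ∩ B ∩ C) = 1/4, not 1/8.
mutual = prob(lambda w: A(w) and B(w) and C(w)) == prob(A) * prob(B) * prob(C)

# Example 2: dependent, but conditionally independent given Z.
# Z is a fair bit; X and Y are independent noisy copies of Z,
# each flipped with probability 1/4.
def joint(z, x, y):
    noise = lambda v: Fraction(1, 4) if v != z else Fraction(3, 4)
    return Fraction(1, 2) * noise(x) * noise(y)

def p(pred):
    return sum(joint(z, x, y)
               for z, x, y in product((0, 1), repeat=3) if pred(z, x, y))

# Unconditionally, X and Y are dependent:
dependent = (p(lambda z, x, y: x == 1 and y == 1)
             != p(lambda z, x, y: x == 1) * p(lambda z, x, y: y == 1))
# But given Z = 0, the joint conditional probability factorizes:
pz0 = p(lambda z, x, y: z == 0)
cond_indep = (p(lambda z, x, y: z == 0 and x == 1 and y == 1) / pz0
              == (p(lambda z, x, y: z == 0 and x == 1) / pz0)
              * (p(lambda z, x, y: z == 0 and y == 1) / pz0))
```

Using exact fractions rather than floats makes the equalities in the definitions hold exactly, so the checks are not subject to rounding error.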