In computational complexity theory, CC is the complexity class containing decision problems that can be solved by comparator circuits of polynomial size. A comparator circuit is a network of wires and gates, similar to a sorting network but with directed comparator gates: each comparator gate is a directed edge connecting two wires; it takes its two inputs and outputs them in sorted order, with the smaller value on one designated wire and the larger on the other. Each wire is initialized with an input variable, its negation, or a constant, and one of the wires is distinguished as the output wire. The function computed by the circuit is evaluated by initializing the wires according to the input, executing the comparator gates in order, and outputting the value carried by the output wire. The comparator circuit value problem (CCVP) is the problem of evaluating a comparator circuit given an encoding of the circuit and the input to the circuit. The most important problem complete for CC is a decision variant of the stable marriage problem.
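As a concrete illustration, CCVP can be evaluated directly by simulating the gates in order. The following is a minimal sketch, not from any standard library; the function name and gate convention (smaller value to the first wire) are illustrative:

    def evaluate_comparator_circuit(inputs, gates, output_wire):
        """Evaluate a comparator circuit.

        inputs      -- list of initial Boolean wire values
        gates       -- sequence of (i, j) pairs; each gate rewrites
                       wires i and j so that wire i carries the smaller
                       value and wire j the larger
        output_wire -- index of the designated output wire
        """
        wires = list(inputs)
        for i, j in gates:
            # On Booleans, min is AND and max is OR.
            wires[i], wires[j] = min(wires[i], wires[j]), max(wires[i], wires[j])
        return wires[output_wire]

    # A single gate computes AND on wire 0 and OR on wire 1.
    assert evaluate_comparator_circuit([True, False], [(0, 1)], 0) is False
    assert evaluate_comparator_circuit([True, False], [(0, 1)], 1) is True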
The complexity class CC is defined as the class of problems logspace-reducible to CCVP; an equivalent definition is the class of problems AC0-reducible to CCVP. As an example, a sorting network can be used to compute majority by designating the middle wire as the output wire: if the middle wire is designated as output and the wires are annotated with n different input variables (for odd n), the resulting comparator circuit computes the majority function. Since there are sorting networks that can be constructed in AC0, this shows that the majority function is in CC. A problem in CC is CC-complete if every problem in CC can be reduced to it using a logspace reduction; the comparator circuit value problem is CC-complete. In the stable marriage problem, there are equal numbers of men and women, and each person ranks all members of the opposite sex. A matching between men and women is stable if there is no unpaired man and woman who prefer each other over their current partners. A stable matching always exists. Among the stable matchings, there is one in which each woman gets the best man that she gets in any stable matching; it is called the woman-optimal stable matching.
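Although, as noted below, the Gale–Shapley algorithm cannot itself be implemented as a comparator circuit, it is the standard way to compute the woman-optimal stable matching: when the women propose, each woman receives the best partner she has in any stable matching. A minimal sketch, with illustrative names:

    def woman_optimal_matching(men_prefs, women_prefs):
        """Gale-Shapley with women proposing; the proposing side gets
        its optimal stable partners, so this yields the woman-optimal
        stable matching. Preference lists are most-preferred first."""
        # rank[m][w] = position of w in m's list (lower = better)
        rank = {m: {w: r for r, w in enumerate(prefs)}
                for m, prefs in men_prefs.items()}
        free = list(women_prefs)            # women not yet matched
        next_choice = {w: 0 for w in women_prefs}
        fiancee = {}                        # current partner of each man
        while free:
            w = free.pop()
            m = women_prefs[w][next_choice[w]]
            next_choice[w] += 1
            if m not in fiancee:
                fiancee[m] = w
            elif rank[m][w] < rank[m][fiancee[m]]:
                free.append(fiancee[m])     # m prefers w; jilts his partner
                fiancee[m] = w
            else:
                free.append(w)              # m rejects w; she proposes again
        return {w: m for m, w in fiancee.items()}

    pairs = woman_optimal_matching(
        {"m1": ["w1", "w2"], "m2": ["w2", "w1"]},
        {"w1": ["m2", "m1"], "w2": ["m1", "m2"]})
    # Each woman gets her top choice here.
    assert pairs == {"w1": "m2", "w2": "m1"}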
The decision version of the stable matching problem asks, given the rankings of all men and women, whether a given man and a given woman are matched in the woman-optimal stable matching. Although the classical Gale–Shapley algorithm cannot be implemented as a comparator circuit, Subramanian devised a different algorithm showing that the problem is in CC; the problem is also CC-complete. Another CC-complete problem is lexicographically-first maximal matching. In this problem, we are given a bipartite graph with an order on the vertices, and an edge; the lexicographically-first maximal matching is obtained by successively matching vertices from the first bipartition to the smallest available vertices from the second bipartition. The problem asks whether the given edge belongs to this matching. Scott Aaronson showed that the following pebble problem is CC-complete. In this problem, we are given a starting number of pebbles and a description of a program which may contain only two types of instructions: combine two piles of sizes y and z to get a new pile of size y + z, or split a pile of size y into piles of size ⌈ y / 2 ⌉ and ⌊ y / 2 ⌋.
The problem is to decide whether any pebbles are present in a particular pile after executing the program. Aaronson used this to show that the problem of deciding whether any balls reach a designated sink vertex in a Digi-Comp II-like device is also CC-complete. The comparator circuit value problem can be solved in polynomial time, so CC is contained in P. On the other hand, comparator circuits can solve directed reachability, so CC contains NL. There is a relativized world in which CC and NC are incomparable, which suggests, but does not prove, that both containments are strict.
Complexity Zoo: CC
Hex (board game)
Hex is a strategy board game for two players, played on a hexagonal grid, theoretically of any size and several possible shapes, but traditionally as an 11×11 rhombus. Players alternate placing markers or stones on unoccupied spaces in an attempt to link their own opposite sides of the board in an unbroken chain. One player must win; the game has deep strategy, sharp tactics and a profound mathematical underpinning related to the Brouwer fixed-point theorem. It was invented in the 1940s independently by Piet Hein and John Nash. The game was first marketed as a board game in Denmark under the name Con-tac-tix, and Parker Brothers marketed a version of it in 1952 called Hex. Hex can also be played with paper and pencil on hexagonally ruled graph paper. Hex-related research is current in the areas of topology and matroid theory, game theory and artificial intelligence. Hex is a connection game and can be classified as a Maker-Breaker game, a particular type of positional game. The game can never end in a draw; in other words, Hex is a "determined game".
Hex is a finite, perfect-information game, and an abstract strategy game that belongs to the general category of connection games. When played on a generalized graph, it is equivalent to the Shannon switching game. The game was invented by the Danish mathematician Piet Hein, who introduced it in 1942 at the Niels Bohr Institute. Although Hein called it Con-tac-tix, it became known in Denmark under the name Polygon due to an article by Hein in the December 26, 1942 edition of the Danish newspaper Politiken, the first published description of the game, in which he used that name. The game was independently re-invented in 1948 by the mathematician John Nash at Princeton University. According to Martin Gardner, who featured Hex in his July 1957 Mathematical Games column, Nash's fellow players called the game either Nash or John, with the latter name referring to the fact that the game could be played on hexagonal bathroom tiles. In 1952, Parker Brothers marketed a version under the name Hex, and the name stuck.
Hex was issued as one of the games in the 1974 3M Paper Games Series. About 1950, American mathematician and electrical engineer Claude Shannon and E. F. Moore constructed an analog Hex playing machine, a resistance network with resistors for edges and lightbulbs for vertices; the move to be made corresponded to a certain specified saddle point in the network. The machine played a reasonably good game of Hex. Later researchers attempting to solve the game and develop Hex-playing computer algorithms emulated Shannon's network to make strong automatons. In 1952 John Nash gave an existence proof that on symmetrical boards, the first player has a winning strategy. In 1964, mathematician Alfred Lehman showed that Hex cannot be represented as a binary matroid, so a determinate winning strategy like that for the Shannon switching game on a regular rectangular grid was unavailable. The game was later shown to be PSPACE-complete. In 2002, the first explicit winning strategy on a 7×7 board was described.
In the 2000s, using brute-force search computer algorithms, Hex boards up to size 9×9 were solved. In the early 1980s Dolphin Microware published Hexmaster, an implementation for Atari 8-bit computers. Various paradigms resulting from research into the game have been used to create digital computer Hex-playing automatons starting about 2000. The first implementations used evaluation functions that emulated Shannon and Moore's electrical circuit model, embedded in an alpha-beta search framework with hand-crafted knowledge-based patterns. Starting about 2006, Monte Carlo tree search methods borrowed from successful computer implementations of Go were introduced and soon dominated the field. Hand-crafted patterns were supplemented by machine learning methods for pattern discovery; these programs are now competitive against skilled human players. Elo-based ratings have been assigned to the various programs and can be used to measure technical progress as well as to assess playing strength against Elo-rated humans.
Current research is published in the quarterly ICGA Journal and the annual Advances in Computer Games series. Each player has an allocated color, conventionally White and Black. Players take turns placing a stone of their color on an unoccupied space. The goal for each player is to form a connected path of their own stones linking the opposing sides of the board marked by their color, before their opponent connects his or her sides in a similar fashion. The first player to complete his or her connection wins the game; the four corner hexagons each belong to both adjacent sides. Since the first player to move in Hex has a distinct advantage, the pie rule is implemented for fairness; this rule allows the second player to choose whether to switch positions with the first player after the first player makes the first move. From the proof of a winning strategy for the first player, it is known that the Hex board must have a complex type of connectivity which has never been solved. Play consists of creating small patterns which have a simpler type of connectivity called "safely connected", and joining them into sequences that form a "path".
One of the players will succeed in forming a safely connected path of stones and spaces between his sides of the board and win. The final stage of the game, if necessary, consists of filling in the empty spaces in the path.
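The winning condition, a chain of one player's stones linking that player's two sides, is straightforward to test in code. Below is a minimal sketch (illustrative, not from any Hex library) that uses depth-first search over the six neighbors of each cell to decide whether White's stones connect the left and right edges of an n×n rhombus:

    def white_wins(board):
        """board[r][c] is 'W', 'B', or '.' on an n-by-n rhombus.
        White's goal: connect column 0 to column n-1."""
        n = len(board)
        # The six neighbors of a hexagonal cell in axial coordinates.
        deltas = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]
        stack = [(r, 0) for r in range(n) if board[r][0] == 'W']
        seen = set(stack)
        while stack:
            r, c = stack.pop()
            if c == n - 1:
                return True                 # reached the far side
            for dr, dc in deltas:
                nr, nc = r + dr, c + dc
                if (0 <= nr < n and 0 <= nc < n and board[nr][nc] == 'W'
                        and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    stack.append((nr, nc))
        return False

    board = ["WWW",
             "BB.",
             ".BB"]
    assert white_wins(board)   # White's top row spans left to right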
David Eppstein
David Arthur Eppstein is an American computer scientist and mathematician. He is a Chancellor's Professor of computer science at the University of California, Irvine. He is known for his work in computational geometry, graph algorithms, and recreational mathematics. In 2012, he was named an ACM Fellow. Eppstein received a B.S. in mathematics from Stanford University in 1984, and an M.S. and Ph.D. in computer science from Columbia University, after which he took a postdoctoral position at Xerox's Palo Alto Research Center. He joined the UC Irvine faculty in 1990 and was co-chair of the Computer Science Department there from 2002 to 2005. In 2014, he was named a Chancellor's Professor. In October 2017, Eppstein was one of 396 members elected as Fellows of the American Association for the Advancement of Science. In computer science, Eppstein's research is focused on computational geometry and graph algorithms: minimum spanning trees, shortest paths, dynamic graph data structures, graph coloring, graph drawing and geometric optimization.
He has published in application areas such as finite element meshing, which is used in engineering design, and in computational statistics, particularly in robust, nonparametric statistics. Eppstein served as the program chair for the theory track of the ACM Symposium on Computational Geometry in 2001, the program chair of the ACM-SIAM Symposium on Discrete Algorithms in 2002, and the co-chair for the International Symposium on Graph Drawing in 2009.
Selected publications:
Eppstein, David. "Finding the k shortest paths". SIAM Journal on Computing 28: 652–673. CiteSeerX 10.1.1.39.3901. doi:10.1109/SFCS.1994.365697. ISBN 978-0-8186-6580-6.
Eppstein, D. "Sparsification—a technique for speeding up dynamic graph algorithms". Journal of the ACM 44: 669–696. doi:10.1145/265910.265914.
Amenta, N.; Bern, M.; Eppstein, D. "The Crust and the β-Skeleton: Combinatorial Curve Reconstruction". Graphical Models and Image Processing 60: 125–135. doi:10.1006/gmip.1998.0465.
Bern, Marshall; Eppstein, David. "Mesh generation and optimal triangulation". Technical Report CSL-92-1. Xerox PARC. Republished in:
Du, D.-Z.; Hwang, F. (eds.), Computing in Euclidean Geometry. World Scientific. pp. 23–90.
Eppstein, D. Media Theory. Springer-Verlag. ISBN 978-3-642-09083-7.
External links:
Eppstein's algorithm
David Eppstein's profile at the University of California, Irvine
David Eppstein at DBLP Bibliography Server
David Eppstein publications indexed by Google Scholar
David Eppstein's Wikipedia userpage
Computers and Intractability
In computer science, more specifically computational complexity theory, Computers and Intractability: A Guide to the Theory of NP-Completeness is an influential textbook by Michael Garey and David S. Johnson. It was the first book on the theory of NP-completeness and computational intractability. The book features an appendix providing a thorough compendium of NP-complete problems. The book is now outdated in some respects, as it does not cover more recent developments such as the PCP theorem. It is still in print and is regarded as a classic: in a 2006 study, the CiteSeer search engine listed the book as the most cited reference in computer science literature. Another appendix of the book featured a list of twelve problems for which it was not known whether they were NP-complete or in P. The problems are:
1. Graph isomorphism: known to be in NP, but it is unknown whether it is NP-complete.
2. Subgraph homeomorphism
3. Graph genus
4. Chordal graph completion
5. Chromatic index
6. Spanning tree parity problem
7. Partial order dimension
8. Precedence constrained 3-processor scheduling: still open as of 2016.
9. Linear programming
10. Total unimodularity
11. Composite number: testing for compositeness is known to be in P, but the complexity of the related integer factorization problem remains open.
12. Minimum length triangulation: problem 12 is known to be NP-hard, but it is unknown whether it is in NP.
Soon after it appeared, the book received positive reviews from reputed researchers in the area of theoretical computer science. In his review, Ronald V. Book recommended the book to "anyone who wishes to learn about the subject of NP-completeness", explicitly mentioned the "extremely useful" appendix with over 300 NP-hard computational problems, and concluded: "Computer science needs more books like this one." Harry R. Lewis praised the mathematical prose of the authors: "Garey and Johnson's book is a thorough and practical exposition of NP-completeness. In many respects it is hard to imagine a better treatment of the subject." He considered the appendix "unique" and "a starting point in attempts to show new problems to be NP-complete".
Twenty-three years after the book appeared, Lance Fortnow, editor-in-chief of the scientific journal ACM Transactions on Computation Theory, stated: "I consider Garey and Johnson the single most important book on my office bookshelf. Every computer scientist should have this book on their shelves as well. Garey and Johnson has the best introduction to computational complexity I have seen."
List of NP-complete problems
List of important publications in theoretical computer science
Chess
Chess is a two-player strategy board game played on a chessboard, a checkered gameboard with 64 squares arranged in an 8×8 grid. The game is played by millions of people worldwide. Chess is believed to be derived from the Indian game chaturanga some time before the 7th century. Chaturanga is also the ancestor of the Eastern strategy games xiangqi and shogi. Chess reached Europe by the 9th century, due to the Umayyad conquest of Hispania; the pieces assumed their current powers in Spain in the late 15th century with the introduction of "Mad Queen Chess". Play does not involve hidden information; each player begins with 16 pieces: one king, one queen, two rooks, two knights, two bishops, and eight pawns. Each of the six piece types moves differently, with the most powerful being the queen and the least powerful the pawn. The objective is to checkmate the opponent's king by placing it under an inescapable threat of capture. To this end, a player's pieces are used to attack and capture the opponent's pieces, while supporting each other.
During the game, play typically involves exchanging pieces for the opponent's similar pieces, and finding and engineering opportunities to trade advantageously or to get a better position. In addition to checkmate, a player wins the game if the opponent resigns or, in timed games, runs out of time. There are several ways that a game can end in a draw. The first recognized World Chess Champion, Wilhelm Steinitz, claimed his title in 1886. Since 1948, the World Championship has been regulated by the Fédération Internationale des Échecs (FIDE), the game's international governing body. FIDE awards lifetime master titles to skilled players, the highest of which is grandmaster. Many national chess organizations have a title system of their own. FIDE also organizes the Women's World Championship, the World Junior Championship, the World Senior Championship, the Blitz and Rapid World Championships, and the Chess Olympiad, a popular competition among international teams. FIDE is a member of the International Olympic Committee, which can be considered a recognition of chess as a sport.
Several national sporting bodies also recognize chess as a sport; chess was included in the 2010 Asian Games. There is also a Correspondence Chess World Championship and a World Computer Chess Championship. Online chess has opened professional competition to a wide and varied group of players. Since the second half of the 20th century, chess engines have been programmed to play chess with increasing success, to the point where the strongest personal computers play at a higher level than the best human players. Since the 1990s, computer analysis has contributed significantly to chess theory, particularly in the endgame. The IBM computer Deep Blue was the first machine to overcome a reigning World Chess Champion in a match when it defeated Garry Kasparov in 1997. The rise of strong chess engines runnable on hand-held devices has led to increasing concern about cheating during tournaments. There are many variants of chess that utilize different rules, pieces, or boards. One of these, Chess960, incorporates the standard rules but employs 960 different possible starting positions, thus negating any advantage in opening preparation.
Chess960 has gained widespread popularity as well as some FIDE recognition. The rules of chess are published by FIDE, chess's international governing body, in its Handbook. Rules published by national governing bodies, or by unaffiliated chess organizations, commercial publishers, etc., may differ. FIDE's rules were most recently revised in 2017. Chess is played on a square board of eight rows (ranks) and eight columns (files); the 64 squares alternate in color and are referred to as light and dark squares. The chessboard is placed with a light square at the right-hand end of the rank nearest to each player. By convention, the game pieces are divided into white and black sets, and the players are referred to as White and Black, respectively. Each player begins the game with 16 pieces of the specified color, consisting of one king, one queen, two rooks, two bishops, two knights, and eight pawns. The pieces are set out in the standard starting arrangement, with each queen on a square of her own color. In competitive games, the colors are allocated by the organizers; the player with the white pieces moves first.
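As a concrete illustration of the initial arrangement, the following sketch prints the standard starting position, using uppercase letters for White and lowercase for Black in the style of Forsyth–Edwards notation:

    # Initial chess position, rank 8 (Black's back rank) first.
    # Uppercase = White, lowercase = Black, '.' = empty square.
    START = [
        "rnbqkbnr",   # Black: rook, knight, bishop, queen, king, bishop, knight, rook
        "pppppppp",   # Black pawns
        "........",
        "........",
        "........",
        "........",
        "PPPPPPPP",   # White pawns
        "RNBQKBNR",   # White back rank; each queen starts on her own color
    ]
    for rank in START:
        print(" ".join(rank))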
After the first move, players alternate turns. Pieces are moved either to an unoccupied square or to a square occupied by an opponent's piece, which is thereby captured and removed from play. With the sole exception of en passant, all pieces capture by moving to the square that the opponent's piece occupies. A player may not make any move that would leave the player's own king under attack. A player cannot "pass" a turn. If the player to move has no legal move, the game is over: it is a loss by checkmate if the king is under attack, and a draw by stalemate otherwise. Each piece has its own way of moving; most pieces may not move through squares occupied by pieces of either color. The king moves one square in any direction. The king also has a special move, called castling, that involves moving a rook as well.
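As an illustration of piece movement, here is a minimal sketch (the function name and coordinates are illustrative) that generates the king's one-square moves, ignoring check and castling:

    def king_moves(file, rank, own_pieces):
        """Squares a king on (file, rank) can move to, using 0-7 coordinates.
        own_pieces is a set of squares occupied by the king's own side."""
        moves = []
        for df in (-1, 0, 1):
            for dr in (-1, 0, 1):
                if df == dr == 0:
                    continue                # staying put is not a move
                f, r = file + df, rank + dr
                if 0 <= f < 8 and 0 <= r < 8 and (f, r) not in own_pieces:
                    moves.append((f, r))
        return moves

    # A king on e1 (file 4, rank 0) with its rooks still on a1 and h1:
    print(king_moves(4, 0, {(0, 0), (7, 0)}))  # d1, d2, e2, f1, f2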
Logarithm
In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised to produce that number x. In the simplest case, the logarithm counts repeated multiplication of the same factor; the logarithm of x to base b is denoted log_b(x). More generally, exponentiation allows any positive real number to be raised to any real power, always producing a positive result, so the logarithm for any two positive real numbers b and x, where b is not equal to 1, is always a unique real number y. More explicitly, the defining relation between exponentiation and logarithm is: log_b(x) = y exactly if b^y = x. For example, log_2(64) = 6, as 2^6 = 64. The logarithm to base 10 is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e as its base; the binary logarithm uses base 2 and is used in computer science. Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations.
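These definitions are easy to verify numerically; a short sketch using Python's standard math module:

    import math

    assert math.isclose(math.log(64, 2), 6.0)    # log_2(64) = 6, since 2**6 == 64
    assert math.isclose(math.log10(1000), 3.0)   # common logarithm, base 10
    assert math.isclose(math.log(math.e), 1.0)   # natural logarithm, base e
    assert math.log2(8) == 3.0                   # binary logarithm, base 2

    # The defining relation: b**y == x exactly when log_b(x) == y.
    b, x = 3.0, 81.0
    y = math.log(x, b)
    assert math.isclose(b ** y, x)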
They were adopted by navigators, scientists and others to perform computations more easily using slide rules and logarithm tables. Tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition because of the fact, important in its own right, that the logarithm of a product is the sum of the logarithms of the factors: log_b(xy) = log_b(x) + log_b(y), provided that b, x and y are all positive and b ≠ 1. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century. Logarithmic scales reduce wide-ranging quantities to tiny scopes. For example, the decibel is a unit used to express ratios as logarithms, mostly for signal power and amplitude. In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They help to describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting.
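A quick numerical check of the product rule, which is exactly what made logarithm tables and slide rules useful (multiplication becomes addition):

    import math

    x, y, b = 123.0, 456.0, 10.0
    lhs = math.log(x * y, b)
    rhs = math.log(x, b) + math.log(y, b)
    assert math.isclose(lhs, rhs)           # log_b(xy) == log_b(x) + log_b(y)

    # A table-lookup-style multiplication: add the logs, then exponentiate.
    product = b ** (math.log(x, b) + math.log(y, b))
    assert math.isclose(product, x * y)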
In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant. Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. Addition, the simplest of these, can be undone by subtraction: adding, say, 2 to 3 gives 5; the process of adding 2 can be undone by subtracting 2: 5 − 2 = 3. Multiplication, the next-simplest operation, can be undone by division: doubling a number x, i.e. multiplying x by 2, gives 2x, and to get back x it is necessary to divide by 2. For example, 2 ⋅ 3 = 6 and the process of multiplying by 2 is undone by dividing by 2: 6 / 2 = 3. The idea and purpose of logarithms is to undo the third fundamental arithmetic operation, namely raising a number to a certain power, an operation known as exponentiation. For example, raising 2 to the third power yields 8, because 8 is the product of three factors of 2: 2^3 = 2 × 2 × 2 = 8. The logarithm of 8 to base 2 is 3, reflecting the fact that 2 was raised to the third power to get 8.
This subsection contains a short overview of the exponentiation operation, which is fundamental to understanding logarithms. Raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b; the n-th power of b is written b^n, so that b^n = b × b × ⋯ × b (n factors). Exponentiation may be extended to b^y, where b is a positive number and the exponent y is any real number. For example, b^(−1) is the reciprocal of b, and b^(1/2) is the square root of b. More generally, raising b to a rational power p/q, where p and q are integers, gives b^(p/q), the q-th root of b^p. Any irrational number y can be approximated to arbitrary precision by rational numbers; this can be used to compute the y-th power of b: for example, √2 ≈ 1.414..., and 2^√2 can be approximated to any desired accuracy by rational powers such as 2^1.4, 2^1.41, 2^1.414, and so on.
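A sketch of this approximation in practice, raising 2 to successively better rational approximations of √2:

    import math

    target = 2 ** math.sqrt(2)              # about 2.6651441...
    for approx in (1.4, 1.41, 1.414, 1.4142, 1.41421):
        # Each rational exponent p/q gives the q-th root of 2**p;
        # the values converge to 2**sqrt(2).
        print(approx, 2 ** approx, abs(2 ** approx - target))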
Regular expression
A regular expression (regex or regexp) is a sequence of characters that defines a search pattern. This pattern is used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. It is a technique developed in formal language theory. The concept arose in the 1950s when the American mathematician Stephen Cole Kleene formalized the description of a regular language; the concept came into common use with Unix text-processing utilities. Since the 1980s, different syntaxes for writing regular expressions have existed, one being the POSIX standard and another, widely used, being the Perl syntax. Regular expressions are used in search engines, in search-and-replace dialogs of word processors and text editors, in text-processing utilities such as sed and AWK, and in lexical analysis. Many programming languages provide regex capabilities, either built-in or via libraries. The phrase regular expressions, or regexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text.
Each character in a regular expression is either a metacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regex a., a is a literal character that matches just 'a', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'a ', or 'ax', or 'a0'. Together, metacharacters and literal characters can be used to identify text of a given pattern, or to process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example, . is a very general pattern, [a-z] (matching any lowercase letter from 'a' to 'z') is less general, and a is a precise pattern. The metacharacter syntax is designed to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standard ASCII keyboard. A simple case of a regular expression in this syntax is to locate a word spelled two different ways in a text editor: the regular expression seriali[sz]e matches both "serialise" and "serialize".
Wildcards also achieve this, but are more limited in what they can pattern, as they have fewer metacharacters and a simple language base. The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexes are employed in applications that pattern-match text strings in general. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is [+-]?(\d+(\.\d+)?|\.\d+)([eE][+-]?\d+)?. A regex processor translates a regular expression in the above syntax into an internal representation which can be executed and matched against a string representing the text being searched in. One possible approach is Thompson's construction algorithm, which builds a nondeterministic finite automaton (NFA) that is then made deterministic, and the resulting deterministic finite automaton (DFA) is run on the target text string to recognize substrings that match the regular expression. In this construction, the NFA for a regular expression of the form s* is obtained recursively from the NFA for the simpler regular expression s.
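The patterns above can be tried directly; a short sketch using Python's re module (the patterns are those quoted in this article):

    import re

    # 'a.' pairs a literal character with the '.' metacharacter.
    assert re.match(r"a.", "ax") and re.match(r"a.", "a0")
    assert not re.match(r"a.", "a\n")        # '.' does not match a newline

    # A bracket expression matches both spellings of one word.
    assert re.fullmatch(r"seriali[sz]e", "serialise")
    assert re.fullmatch(r"seriali[sz]e", "serialize")

    # Strip excess whitespace at the beginning or end of a line.
    assert re.sub(r"^[ \t]+|[ \t]+$", "", "  padded\t") == "padded"

    # A pattern matching any numeral (integer, decimal, or exponent form).
    numeral = re.compile(r"[+-]?(\d+(\.\d+)?|\.\d+)([eE][+-]?\d+)?")
    assert numeral.fullmatch("-3.14e10")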
Regular expressions originated in 1951, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular sets. These arose in theoretical computer science, in the subfields of automata theory and the description and classification of formal languages. Other early implementations of pattern matching include the SNOBOL language, which did not use regular expressions, but instead its own pattern matching constructs. Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor and lexical analysis in a compiler. Among the first appearances of regular expressions in program form was when Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. For speed, Thompson implemented regular expression matching by just-in-time compilation to IBM 7094 code on the Compatible Time-Sharing System, an important early example of JIT compilation, he added this capability to the Unix editor ed, which led to the popular search tool grep's use of regular expressions.
Around the same time that Thompson developed QED, a group of researchers including Douglas T. Ross implemented a tool based on regular expressions that was used for lexical analysis in compiler design. Many variations of these original forms of regular expressions were used in Unix programs at Bell Labs in the 1970s, including vi, sed, AWK, and expr, and in other programs such as Emacs. Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in the POSIX.2 standard in 1992. In the 1980s, more complicated regexes arose in Perl, which derived from a regex library written by Henry Spencer, who later wrote an implementation of Advanced Regular Expressions for Tcl. The Tcl library is a hybrid NFA/DFA implementation with improved performance characteristics. Software projects that have adopted Spencer's Tcl regular expression implementation include PostgreSQL. Perl later expanded on Spencer's original library.