1.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; it positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). Turing machines prove fundamental limitations on the power of mechanical computation. Turing completeness is the ability of a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is because the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the informal notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly. If δ is not defined on the current state and the current tape symbol, then the machine halts. q0 ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states. The initial tape contents is said to be accepted by M if the machine eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine. The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}; Γ = {0, 1}; b = 0 (the blank symbol); Σ = {1}; q0 = A; F = {HALT}; δ = see state-table below. Initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6), "The set-theoretical object provides only partial information on how the machine will behave and what its computations will look like." For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
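The operation described above can be sketched as a small simulator. The code below is an illustrative sketch (the function name and sparse-tape encoding are our own choices, not from the text); the transition table is the 3-state busy beaver mentioned above, with states A, B, C plus HALT and tape alphabet {0, 1}.

```python
# Minimal Turing machine simulator sketch: the head reads the scanned
# symbol, writes a symbol, moves left or right, and changes state,
# halting when it enters a halt state (or when delta is undefined).
def run_turing_machine(delta, state, halt_states, step_limit=10_000):
    tape = {}          # sparse tape: cell index -> symbol, blank = 0
    head = 0
    while state not in halt_states and step_limit > 0:
        symbol = tape.get(head, 0)
        write, move, state = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        step_limit -= 1
    return tape, state

# 3-state busy beaver: (state, read symbol) -> (write, move, next state)
BB3 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "C"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "B"),
    ("C", 0): (1, "L", "B"), ("C", 1): (1, "R", "HALT"),
}

tape, final = run_turing_machine(BB3, "A", {"HALT"})
print(sum(tape.values()), final)  # 6 HALT – it writes six 1s, then halts
```

Starting from the all-0 tape, this machine halts after writing six 1s, which is why it is called a "busy beaver": no 3-state, 2-symbol machine that halts writes more.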

2.
Finite-state machine
–
A finite-state machine (FSM) or finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is a machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to external inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. The finite-state machine has less computational power than some other models of computation such as the Turing machine: there are tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory. An example of a mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry and preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted. Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push). In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, the machine stays in the locked state. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; however, a customer pushing through the arms, giving a push input, shifts the state back to Locked.
Each state is represented by a node, and edges show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition; an input that doesn't cause a change of state is represented by a circular arrow returning to the original state. The arrow into the Locked node from the dot indicates it is the initial state.
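The turnstile described above fits in a few lines of code. This is a minimal sketch (the dict-based transition table and function name are illustrative choices, not prescribed by the text): states, inputs, and transitions match the description exactly.

```python
# Turnstile FSM: two states (Locked, Unlocked), two inputs (coin, push).
TRANSITIONS = {
    ("Locked", "coin"): "Unlocked",   # a coin unlocks the arms
    ("Locked", "push"): "Locked",     # pushing a locked arm has no effect
    ("Unlocked", "coin"): "Unlocked", # extra coins have no effect
    ("Unlocked", "push"): "Locked",   # passing through re-locks the arms
}

def run_turnstile(inputs, state="Locked"):
    """Feed a sequence of inputs to the FSM, return the final state."""
    for event in inputs:
        state = TRANSITIONS[(state, event)]
    return state

print(run_turnstile(["push", "coin", "coin", "push"]))  # Locked
```

Note that the machine's entire "memory" is the single current state, which is exactly why an FSM cannot, for example, count an unbounded number of coins.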

3.
Avraham Trahtman
–
Avraham Naumovich Trahtman is a mathematician at Bar-Ilan University. In 2007, Trahtman solved a problem in combinatorics that had been open for 37 years. Trahtman's solution to the road coloring problem was accepted in 2007 and published in 2009 by the Israel Journal of Mathematics. The problem arose in the subfield of symbolic dynamics, a part of the field of dynamical systems. The road coloring problem was raised by R. L. Adler and L. W. Goodwyn from the United States; the proof used results from earlier work. The problem of estimating the length of a synchronizing word has a long history and was posed independently by several authors. In 1964 Jan Černý conjectured that (n − 1)² is the bound for the length of the shortest synchronizing word for any n-state complete DFA. If this is true, it would be tight: in his 1964 paper, Černý exhibited a class of automata attaining this length. In 2011 Trakhtman published a proof of the upper bound n(7n² + 6n − 16)/48, but then he found an error in it. The conjecture holds in many cases; see, for instance, Kari. The finite basis problem for semigroups of order less than six in the theory of semigroups was posed by Alfred Tarski in 1966. In 1983, Trahtman solved this problem by proving that all semigroups of order less than six are finitely based. In the theory of varieties of semigroups and universal algebras, the problem of the existence of covering elements in the lattice of varieties was posed by Evans in 1971; the positive solution of the problem was found by Trahtman. He also found a six-element semigroup that generates a variety with a continuum of subvarieties. The theory of locally testable automata can be based on the theory of varieties of locally testable semigroups. Trahtman found the precise estimation on the order of local testability of finite automata. There are also results in theoretical mechanics and in the promising area of extracting moisture from the air mentioned in New Scientist.

4.
Seymour Papert
–
Seymour Aubrey Papert was a South African-born American mathematician, computer scientist, and educator, who spent most of his career teaching and researching at MIT. He was one of the pioneers of artificial intelligence and of the constructionist movement in education, and he was co-inventor, with Wally Feurzeig and Cynthia Solomon, of the Logo programming language. Papert attended the University of the Witwatersrand, receiving a Bachelor of Arts degree in philosophy in 1949 followed by a PhD in mathematics in 1952. He then went on to receive a second doctorate, also in mathematics, at the University of Cambridge, supervised by Frank Smithies. At MIT, Papert went on to create the Epistemology and Learning Research Group at the MIT Architecture Machine Group, which later became the MIT Media Lab. Here, he was the developer of a theory on learning called constructionism. Papert had worked with Piaget at the University of Geneva from 1958 to 1963 and was one of Piaget's protégés; Piaget himself once said that "no one understands my ideas as well as Papert". Papert rethought how schools should work, based on these theories of learning. Papert used Piaget's work in his development of the Logo programming language while at MIT, and he created Logo as a tool to improve the way children think and solve problems. A small mobile robot called the Logo Turtle was developed. A main purpose of the Logo Foundation research group is to strengthen the ability to learn knowledge. Papert insisted that a simple language or program that children can learn—like Logo—can also have advanced functionality for expert users. His published works include Counter-Free Automata (1971, ISBN 0-262-13076-9); Perceptrons (MIT Press, 1969, ISBN 0-262-63111-3); Mindstorms: Children, Computers, and Powerful Ideas; and Constructionism: research reports and essays 1985–1990 by the Epistemology and Learning Research Group, the Media Lab, Massachusetts Institute of Technology (Ablex Pub.).
He was one of the principals for the One Laptop Per Child initiative to manufacture and distribute inexpensive laptops to children in developing countries. Papert also collaborated with the construction toy manufacturer Lego on their Logo-programmable Lego Mindstorms robotics kits, which were named after his groundbreaking 1980 book. He was a figure in the revolutionary socialist circle around Socialist Review while living in London in the 1950s. Papert was also a prominent activist against South African apartheid policies during his university education. Papert was married to Dona Strauss, and later to Androula Christofides Henriques. Papert's third wife was MIT professor Sherry Turkle, and together they wrote the influential paper "Epistemological Pluralism". In his final 24 years, Papert was married to Suzanne Massie. He was moved to a hospital closer to his home in January 2007, but then contracted septicemia, which damaged a heart valve that was later replaced. By 2008 he had returned home, could think and communicate clearly and walk almost unaided, and his rehabilitation team used some of the very principles of experiential, hands-on learning that he had pioneered. Papert died at his home in Blue Hill, Maine, on July 31, 2016. Papert's work has been used by other researchers in the fields of education and computer science. In 1981, Papert, along with others in the Logo group at MIT, founded Logo Computer Systems Inc.

5.
Theoretical computer science
–
It is not easy to circumscribe the theoretical areas precisely. Work in this field is often distinguished by its emphasis on mathematical technique. Despite this broad scope, the "theory people" in computer science self-identify as different from the "applied people". Some characterize themselves as doing the science underlying the field of computing; other theory-applied people suggest that it is impossible to separate theory and application. This means that the theory people regularly use experimental science done in less-theoretical areas such as software system research. It also means that there is more cooperation than mutually exclusive competition between theory and application. These developments have led to the modern study of logic and computability. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin, proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory. With the development of quantum mechanics in the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction. In other words, one could compute functions on multiple states simultaneously. Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed. An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning. An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function.
The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications. For example, databases use B-tree indexes for small percentages of data retrieval, and compilers and databases use dynamic hash tables as lookup tables. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms; some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage.
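The claim that efficient data structures are key to efficient algorithms can be made concrete with a small, illustrative experiment (our own example, not from the text): membership testing in a plain list scans every element (linear time), while a hash-based set answers in roughly constant time.

```python
# Compare membership-test cost: list scan (O(n)) vs. hash set (O(1) avg).
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

probe = n - 1  # worst case for the list: the element scanned last
t_list = timeit.timeit(lambda: probe in as_list, number=100)
t_set = timeit.timeit(lambda: probe in as_set, number=100)

print(probe in as_set)   # True – both structures hold the same data
print(t_list > t_set)    # True – the linear scan is measurably slower
```

Both containers store identical data; only the organization differs, which is precisely the sense in which the choice of data structure, not just the algorithm around it, determines efficiency.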

6.
International Standard Serial Number
–
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication. The ISSN is especially helpful in distinguishing between serials with the same title. ISSNs are used in ordering, cataloging, interlibrary loans, and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization (ISO) international standard in 1971; ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively. The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits. The last code digit, which may be 0–9 or an X, is a check digit. Formally, the form of the ISSN code can be expressed as follows: NNNN-NNNC, where N is a digit character and C is a digit or the character X. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit. For calculations, an upper case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number counting from the right; the modulus 11 of the sum must be 0. There is an online ISSN checker that can validate an ISSN. ISSN codes are assigned by a network of ISSN National Centres, usually located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide; at the end of 2016, the ISSN Register contained records for 1,943,572 items.
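The check-digit rule above is easy to implement. The sketch below (the function name is our own; the algorithm is the one described in the text) multiplies each digit by its position counted from the right and requires the sum to be divisible by 11.

```python
# ISSN validation: weighted sum of all eight digits, weights 8..1 from
# the left (equivalently, each digit times its position from the right),
# must be congruent to 0 modulo 11. X stands for a check digit of 10.
def issn_is_valid(issn: str) -> bool:
    digits = issn.replace("-", "").upper()
    if len(digits) != 8:
        return False
    total = 0
    for pos_from_right, ch in enumerate(reversed(digits), start=1):
        if ch == "X":
            if pos_from_right != 1:
                return False      # X is only allowed as the check digit
            value = 10
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += value * pos_from_right
    return total % 11 == 0

print(issn_is_valid("0378-5955"))  # True – the Hearing Research example
print(issn_is_valid("0378-5954"))  # False – wrong check digit
```

For 0378-5955 the weighted sum is 0·8 + 3·7 + 7·6 + 8·5 + 5·4 + 9·3 + 5·2 + 5·1 = 165 = 15 × 11, so the number validates.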
ISSN and ISBN codes are similar in concept. Where ISBNs are assigned to individual books, an ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an identifier associated with a serial title. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Separate ISSNs are needed for serials in different media; thus, the print and electronic versions of a serial need separate ISSNs. Also, a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved; however, the same ISSN can be used for different file formats of the same online serial.

7.
Pushdown automaton
–
In computer science, a pushdown automaton (PDA) is a type of automaton that employs a stack. Pushdown automata are used in theories about what can be computed by machines; they are more capable than finite-state machines but less capable than Turing machines. The term "pushdown" refers to the fact that the stack can be regarded as being "pushed down" like a tray dispenser at a cafeteria. A stack automaton, by contrast, does allow access to and operations on deeper elements, and stack automata can recognize a larger set of languages than pushdown automata. A nested stack automaton allows full access, and also allows stacked values to be entire sub-stacks rather than just single finite symbols. A finite-state machine just looks at the input signal and the current state: it has no stack to work with. It chooses a new state, the result of following the transition. A pushdown automaton differs from a finite-state machine in two ways: it can use the top of the stack to decide which transition to take, and it can manipulate the stack as part of performing a transition. A pushdown automaton reads a given input string from left to right. In each step, it chooses a transition by indexing a table by input symbol, current state, and the symbol at the top of the stack. A pushdown automaton can also manipulate the stack as part of performing a transition. The manipulation can be to push a particular symbol to the top of the stack, or to pop off the top of the stack; the automaton can alternatively ignore the stack and leave it as it is. Put together: given an input symbol, current state, and stack symbol, the automaton can follow a transition to another state, and optionally manipulate the stack. If, in every situation, at most one such transition action is possible, then the automaton is called a deterministic pushdown automaton. In general, several actions are possible, and the automaton is called a general, or nondeterministic, PDA. We use standard formal language notation: Γ∗ denotes the set of strings over alphabet Γ, and ε denotes the empty string.
q0 ∈ Q is the start state, Z ∈ Γ is the initial stack symbol, and F ⊆ Q is the set of accepting states. An element (p, a, A, q, α) ∈ δ is a transition of M. The ε component of the relation is used to formalize that the PDA can either read a letter from the input, or proceed leaving the input untouched. One writes, for example, δ(p, a, A) = {(q, BA)} precisely when (p, a, A, q, BA) ∈ δ. Note that "finite" in this definition is essential. Computations: in order to formalize the semantics of the pushdown automaton, a description of the current situation is introduced. Any 3-tuple (p, w, β) ∈ Q × Σ∗ × Γ∗ is called an instantaneous description of M, which includes the current state, the part of the input tape that has not been read, and the current stack contents. The transition relation δ defines the step-relation ⊢M of M on instantaneous descriptions: for an instruction (p, a, A, q, α) ∈ δ there exists a step (p, ax, Aγ) ⊢M (q, x, αγ), for every x ∈ Σ∗ and every γ ∈ Γ∗. In general, pushdown automata are nondeterministic, meaning that in an instantaneous description there may be several possible steps.
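The semantics above can be exercised with a tiny nondeterministic PDA. The sketch below is illustrative (state names p, q, r, the dict encoding, and the acceptance convention are our own choices): it recognizes the classic non-regular language { aⁿbⁿ : n ≥ 0 }, pushing one stack symbol per a and popping one per b, exactly the kind of counting a finite-state machine cannot do.

```python
# Nondeterministic PDA for { a^n b^n : n >= 0 }.
# A configuration (instantaneous description) is (state, unread input,
# stack as a string with the top at index 0); acceptance is by reaching
# the accepting state r with all input consumed.
def pda_accepts(word):
    # (state, input symbol or "" for an epsilon move, stack top)
    #   -> list of (next state, string pushed in place of the top)
    delta = {
        ("p", "a", "Z"): [("p", "AZ")],   # push one A per leading a
        ("p", "a", "A"): [("p", "AA")],
        ("p", "b", "A"): [("q", "")],     # start popping one A per b
        ("q", "b", "A"): [("q", "")],
        ("p", "", "Z"): [("r", "Z")],     # empty word: accept right away
        ("q", "", "Z"): [("r", "Z")],     # all As matched: accept
    }
    configs = [("p", word, "Z")]          # start state p, stack symbol Z
    while configs:
        state, rest, stack = configs.pop()
        if state == "r" and rest == "":
            return True
        if not stack:
            continue
        top = stack[0]
        for nxt, push in delta.get((state, "", top), []):    # epsilon moves
            configs.append((nxt, rest, push + stack[1:]))
        if rest:
            for nxt, push in delta.get((state, rest[0], top), []):  # reads
                configs.append((nxt, rest[1:], push + stack[1:]))
    return False

print(pda_accepts("aaabbb"), pda_accepts("aab"))  # True False
```

The search over configurations directly mirrors the step-relation ⊢M: each transition either consumes one input letter or is an ε-move, and replaces the topmost stack symbol with a (possibly empty) string.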

8.
Subset
–
In mathematics, especially in set theory, a set A is a subset of a set B, or equivalently B is a superset of A, if A is "contained" inside B, that is, all elements of A are also elements of B. The relationship of one set being a subset of another is called inclusion or sometimes containment; the subset relation defines a partial order on sets. The algebra of subsets forms a Boolean algebra in which the relation is called inclusion. For any set S, the inclusion relation ⊆ is a partial order on the set P(S) of all subsets of S, defined by A ≤ B ⟺ A ⊆ B. We may also partially order P(S) by reverse set inclusion, by defining A ≤ B ⟺ B ⊆ A. When quantified, A ⊆ B is represented as ∀x (x ∈ A → x ∈ B). Some authors use the symbols ⊂ and ⊃ with the same meanings as ⊆ and ⊇; so for example, for these authors, it is true of every set A that A ⊂ A. Other authors prefer to use the symbols ⊂ and ⊃ to indicate proper subset and superset, respectively; this usage makes ⊆ and ⊂ analogous to the inequality symbols ≤ and <. For example, if x ≤ y then x may or may not equal y, but if x < y, then x definitely does not equal y, and is less than y. Similarly, using the convention that ⊂ is proper subset, if A ⊆ B, then A may or may not equal B. The set A = {1, 2} is a proper subset of B = {1, 2, 3}; thus both expressions A ⊆ B and A ⊊ B are true. The set D = {1, 2, 3} is a subset of E = {1, 2, 3}, but not a proper subset; thus D ⊆ E is true. Any set is a subset of itself, but not a proper subset. The empty set, denoted by ∅, is also a subset of any given set X; it is also always a proper subset of any set except itself. The set of natural numbers is a proper subset of the set of rational numbers; these are two examples in which both the subset and the whole set are infinite, and the subset has the same cardinality as the whole. The set of rational numbers is a proper subset of the set of real numbers. In this example, both sets are infinite, but the latter set has a larger cardinality than the former set. Another example can be shown in an Euler diagram. Inclusion is the canonical partial order, in the sense that every partially ordered set is isomorphic to some collection of sets ordered by inclusion.
The ordinal numbers are a simple example: if each ordinal n is identified with the set [n] of all ordinals less than or equal to n, then a ≤ b if and only if [a] ⊆ [b]. For the power set P(S) of a set S, the inclusion partial order is the Cartesian product of k = |S| copies of the partial order on {0, 1} for which 0 < 1. This can be illustrated by enumerating S = {s1, s2, …, sk} and associating with each subset T ⊆ S the k-tuple from {0, 1}^k of which the ith coordinate is 1 if and only if si is a member of T.
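The subset-to-tuple correspondence just described is short to write down. This sketch (function name and example set are our own) builds the characteristic k-tuple of a subset and checks that inclusion of subsets matches the coordinatewise order on tuples.

```python
# Map each subset T of S to its characteristic tuple: the i-th
# coordinate is 1 exactly when S[i] is a member of T.
def characteristic_tuple(S, T):
    return tuple(1 if s in T else 0 for s in S)

S = ["a", "b", "c"]  # an example set with k = 3 elements

print(characteristic_tuple(S, {"a", "c"}))  # (1, 0, 1)

# Subset inclusion corresponds to the coordinatewise (product) order:
# {"b"} ⊆ {"a", "b"} iff each coordinate of t1 is <= that of t2.
t1 = characteristic_tuple(S, {"b"})
t2 = characteristic_tuple(S, {"a", "b"})
print(all(x <= y for x, y in zip(t1, t2)))  # True
```

Since there are 2^k such tuples, this also makes concrete why P(S) has 2^|S| elements.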

9.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations, which charge a fee. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
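The claim above that an SBN converts to an ISBN-10 without recalculating the check digit can be verified in code. The sketch below (the function name is ours; the modulus-11 rule is the standard ISBN-10 check) shows why: the prefixed 0 is multiplied by the highest weight and so contributes nothing to the weighted sum.

```python
# ISBN-10 validation: the digits, weighted 10 down to 1 from the left,
# must sum to a multiple of 11. X (= 10) may appear only as the final
# check digit. A leading 0 adds 0 * 10 = 0, so an SBN prefixed with 0
# keeps its original check digit.
def isbn10_is_valid(isbn: str) -> bool:
    digits = isbn.replace("-", "").replace(" ", "").upper()
    if len(digits) != 10 or "X" in digits[:-1]:
        return False
    total = 0
    for weight, ch in zip(range(10, 0, -1), digits):
        if ch == "X":
            value = 10
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += weight * value
    return total % 11 == 0

print(isbn10_is_valid("0-340-01381-8"))  # True – the SBN example above
print(isbn10_is_valid("0-340-01381-9"))  # False – wrong check digit
```

For 0-340-01381-8 the sum is 0·10 + 3·9 + 4·8 + 0·7 + 0·6 + 1·5 + 3·4 + 8·3 + 1·2 + 8·1 = 110 = 10 × 11, so it validates.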

10.
Synchronizing word
–
Not every DFA has a synchronizing word; for instance, a DFA with two states, one for words of even length and one for words of odd length, can never be synchronized. Given a DFA, the problem of determining whether it has a synchronizing word can be solved in polynomial time using a theorem due to Černý. A simple approach considers the set of states of the DFA, and builds a directed graph where nodes belong to the power set of the states. A path from the node of all states to a singleton state shows the existence of a synchronizing word; this algorithm is exponential in the number of states. The problem of estimating the length of synchronizing words has a long history and was posed independently by several authors. In 1964 Jan Černý conjectured that (n − 1)² is the bound for the length of the shortest synchronizing word for any n-state complete DFA. If this is true, it would be tight: in his 1964 paper, Černý exhibited a class of automata for which the shortest synchronizing word has this length. The best upper bound known is (n³ − n)/6, far from the lower bound. For n-state DFAs over a k-letter input alphabet, an algorithm by David Eppstein finds a synchronizing word of length at most 11n³/48 + O(n²). This algorithm does not always find the shortest possible synchronizing word for an automaton, as Eppstein also shows. The road coloring problem is the problem of labeling the edges of a directed graph with the symbols of a k-letter input alphabet in order to form a synchronizable DFA. A transformation semigroup is synchronizing if it contains an element of rank 1, that is, an element whose image has cardinality 1; a DFA corresponds to a transformation semigroup with a distinguished generator set.
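The power-set construction described above translates directly into a breadth-first search. The sketch below is illustrative (the function name and DFA encoding are our own): starting from the set of all states, it applies each letter to the whole set and stops at the first singleton, so the word it returns is a shortest synchronizing word. It is run on the 4-state Černý automaton, whose shortest synchronizing word has length (n − 1)² = 9, and which is exactly the family showing the conjectured bound would be tight.

```python
# Exponential power-set search for a synchronizing word, as in the text:
# nodes are subsets of states, edges apply one input letter to a subset;
# BFS from the full state set to any singleton yields a shortest word.
from collections import deque

def synchronizing_word(states, alphabet, delta):
    """delta: dict (state, letter) -> state. Returns a word, or None."""
    start = frozenset(states)
    queue = deque([(start, "")])
    seen = {start}
    while queue:
        subset, word = queue.popleft()
        if len(subset) == 1:
            return word
        for a in alphabet:
            nxt = frozenset(delta[(q, a)] for q in subset)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + a))
    return None  # e.g. the even/odd word-length DFA from the text

# Černý's 4-state automaton: 'a' rotates the states cyclically,
# 'b' maps state 0 to 1 and fixes the other states.
n = 4
delta = {}
for q in range(n):
    delta[(q, "a")] = (q + 1) % n
    delta[(q, "b")] = 1 if q == 0 else q

w = synchronizing_word(range(n), "ab", delta)
print(len(w))  # 9 – matching Černý's (n − 1)² lower bound for n = 4
```

On the two-state even/odd automaton (a single letter that swaps the two states), the full set maps back to itself forever and the function returns None, matching the non-synchronizable example in the text.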