1.
Theory of computation
–
In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, and how efficiently they can be solved. In order to study computation rigorously, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Its potentially infinite memory capacity might seem an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory, so in principle any problem that can be solved by a Turing machine can be solved by a computer with a sufficient, finite amount of memory. The theory of computation can be considered the creation of models of all kinds in the field of computer science; accordingly, it draws on mathematics and logic. In the last century it separated from mathematics and became an independent academic discipline. Pioneers of the theory of computation include Alonzo Church, Kurt Gödel, Alan Turing, Stephen Kleene, John von Neumann and Claude Shannon.
Automata theory is the study of abstract machines and the problems that can be solved using these machines. These abstract machines are called automata; the word comes from the Greek for something that acts by itself. Automata theory is closely related to formal language theory, as automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may itself be an infinite set. Automata are used as models for computing machines, and are used for proofs about computability. Formal language theory is a branch of mathematics concerned with describing languages as a set of operations over an alphabet. It is closely linked with automata theory, as automata are used to generate and recognize formal languages.
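The link between automata and formal languages described above can be sketched in code. The following is a minimal illustration, not drawn from the text: a deterministic finite automaton (DFA) that recognizes the regular language of binary strings containing an even number of 1s; the state names and helper function are illustrative choices.

```python
def make_dfa(alphabet, transition, start, accepting):
    """Return a recognizer for the formal language accepted by the DFA."""
    def accepts(word):
        state = start
        for symbol in word:
            if symbol not in alphabet:
                return False            # symbols outside the alphabet are rejected
            state = transition[(state, symbol)]
        return state in accepting
    return accepts

# Two states: "even" (an even number of 1s seen so far) and "odd".
even_ones = make_dfa(
    alphabet={"0", "1"},
    transition={
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
    start="even",
    accepting={"even"},
)

print(even_ones("1010"))   # True  - two 1s
print(even_ones("111"))    # False - three 1s
print(even_ones(""))       # True  - zero 1s is even
```

The automaton itself is a finite object (two states, four transitions), yet the language it recognizes is an infinite set of strings, which is the sense in which an automaton can finitely represent an infinite formal language.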
Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed.
Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. Much of computability theory builds on the halting problem result, and many mathematicians and computational theorists who study recursion theory refer to it as computability theory. Complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently it can be solved. To that end, one analyzes how much time and space a given algorithm requires. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, and the list is not sorted or indexed in any way, we may have to look at every number in order to find the one we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grows linearly in the size of the problem.
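The search example above can be made concrete. This sketch counts the comparisons a straightforward linear search performs; in the worst case (the target is absent from an unsorted list of n numbers), every element is examined, so the step count grows linearly with n.

```python
def linear_search(numbers, target):
    """Return (index of target, comparisons made); index is -1 if absent."""
    steps = 0
    for i, value in enumerate(numbers):
        steps += 1                  # one comparison per element examined
        if value == target:
            return i, steps
    return -1, steps                # worst case: all n elements examined

data = list(range(1000))            # an unsorted/unindexed list stands in here
index, steps = linear_search(data, -1)
print(index, steps)                 # -1 1000: every one of the n=1000 elements was checked
```

Doubling the list length doubles the worst-case step count, which is exactly what "grows linearly in the size of the problem" (O(n) time) means.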

2.
Distributed computing
–
Distributed computing is a field of computer science that studies distributed systems. A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages; the components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications. A computer program that runs in a distributed system is called a distributed program. There are many alternatives for the message-passing mechanism, including pure HTTP and RPC-like connectors.
Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing. The terms are also used in a much wider sense, even referring to autonomous processes that run on the same physical computer. The entities communicate with each other by message passing, and a distributed system may have a common goal, such as solving a large computational problem; the user then perceives the collection of autonomous processors as a unit. Other typical properties of distributed systems include the following: the system has to tolerate failures in individual computers; the structure of the system is not known in advance and the system may consist of different kinds of computers and network links; and each computer has only a limited, incomplete view of the system and may know only one part of the input. Distributed systems are groups of networked computers which share a common goal for their work.
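The division of a problem into tasks solved by message-passing entities can be sketched in a self-contained way. In this illustration (an assumption for demonstration, not the text's own example), summing a large list is split into chunks; worker threads stand in for networked computers, and queues stand in for the network, so workers share no state and communicate only by messages.

```python
import threading
import queue

def worker(tasks, results):
    """Take a task message, compute privately, send back a result message."""
    while True:
        chunk = tasks.get()
        if chunk is None:              # sentinel message: no more work
            break
        results.put(sum(chunk))        # communicate only via the message queue

def distributed_sum(numbers, n_workers=4):
    tasks, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    # Divide the problem into tasks and send each as a message.
    chunk_size = max(1, len(numbers) // n_workers)
    n_chunks = 0
    for i in range(0, len(numbers), chunk_size):
        tasks.put(numbers[i:i + chunk_size])
        n_chunks += 1
    for _ in threads:
        tasks.put(None)                # one stop message per worker
    for t in threads:
        t.join()
    # Combine the partial results received as messages.
    return sum(results.get() for _ in range(n_chunks))

print(distributed_sum(list(range(101))))   # 5050
```

To the caller, the collection of autonomous workers appears as a single unit, which mirrors how the user of a distributed system perceives the cooperating processors.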
The terms concurrent computing, parallel computing, and distributed computing have a lot of overlap: the same system may be characterized both as parallel and distributed, and the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a tightly coupled form of distributed computing. In distributed computing, each processor has its own private memory, and information is exchanged by passing messages between the processors. The figure on the right illustrates the difference between distributed and parallel systems, showing a parallel system in which each processor has direct access to a shared memory. The situation is further complicated by the traditional uses of the terms parallel algorithm and distributed algorithm, which do not quite match the above definitions of parallel and distributed systems.
The use of concurrent processes that communicate by message passing has its roots in operating system architectures studied in the 1960s. The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s.
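The shared-memory versus message-passing distinction drawn above can be shown side by side. This is an illustrative sketch (the function names and the summing task are assumptions): the "parallel" version lets all workers update one shared total directly, guarded by a lock, while the "distributed" version keeps every worker's state private and exchanges only messages.

```python
import threading
import queue

def parallel_shared_sum(chunks):
    """Parallel style: workers coordinate through shared memory."""
    total = [0]                        # memory shared by all workers
    lock = threading.Lock()
    def work(chunk):
        s = sum(chunk)
        with lock:                     # direct access to the shared memory
            total[0] += s
    threads = [threading.Thread(target=work, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

def distributed_message_sum(chunks):
    """Distributed style: private memory per worker, messages only."""
    inbox = queue.Queue()              # the sole channel between workers
    def work(chunk):
        inbox.put(sum(chunk))          # private computation, then a message
    threads = [threading.Thread(target=work, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(inbox.get() for _ in chunks)

chunks = [[1, 2, 3], [4, 5], [6]]
print(parallel_shared_sum(chunks), distributed_message_sum(chunks))  # 21 21
```

Both compute the same answer; what differs is the coordination mechanism, which is exactly the tightly coupled versus loosely coupled distinction the text describes.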