1.
Parallel computing
–
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture; specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law.

Traditionally, computer software has been written for serial computation: to solve a problem, an algorithm is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit of one computer; only one instruction may execute at a time, and after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem. This is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, or specialized hardware.

Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction.
Maintaining everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction; an increase in frequency thus decreases runtime for all compute-bound programs. However, power consumption P by a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle, V is voltage, and F is the processor frequency (cycles per second). Increases in frequency therefore increase the amount of power used in a processor. Moore's law is the observation that the number of transistors in a microprocessor doubles every 18 to 24 months. Despite power consumption issues, and repeated predictions of its end, Moore's law is still in effect; with the end of frequency scaling, these additional transistors can be used to add extra hardware for parallel computing. Optimally, the speedup from parallelization would be linear: doubling the number of processing elements should halve the runtime. However, very few parallel algorithms achieve optimal speedup.
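Amdahl's law can be sketched numerically. The following is a minimal illustration in Python; the function name and the sample fractions are illustrative choices, not taken from the source:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: the maximum speedup of a program in which only
    parallel_fraction of the work can be parallelized, with the
    remaining (1 - parallel_fraction) forced to run serially."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with an enormous processor count, a program that is 90%
# parallelizable cannot exceed a 10x speedup: 1 / (1 - 0.9) = 10.
for n in (2, 8, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

This also illustrates why very few parallel programs achieve linear speedup: the serial fraction dominates as processors are added.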

2.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem.

In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu, which begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: "Algorism is the art by which at present we use those Indian figures, which number two times five." The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals.

An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. Humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimension. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term.

Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. According to Gurevich, Turing's informal argument in favor of his thesis justifies a stronger thesis; according to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case by case.
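A concrete instance of an effective method expressed as a finite, well-defined sequence of steps is Euclid's algorithm for the greatest common divisor, a classic example often used in this context. A minimal sketch:

```python
def gcd(a, b):
    """Euclid's algorithm. Each iteration replaces (a, b) with
    (b, a mod b); b strictly decreases toward zero, so the procedure
    always terminates, meeting the informal requirement that a
    program is only an algorithm if it stops eventually."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

The termination argument in the comment is exactly the kind of case-by-case rigor the definition above demands: every state transition is specified, and a decreasing quantity guarantees the process halts.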

3.
Big O notation
–
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann and Edmund Landau. In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, and ω. Big O notation is also used in many other fields to provide similar estimates.

Let f and g be two functions defined on some subset of the real numbers. One writes f(x) = O(g(x)) as x → ∞ if and only if there exist a positive real number M and a real number x₀ such that |f(x)| ≤ M|g(x)| for all x ≥ x₀. In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated. If f is a product of several factors, any constants (factors that do not depend on x) can be omitted. For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x⁴, −2x³, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x⁴. Now one may apply the rule above: 6x⁴ is a product of 6 and x⁴, in which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴; thus, we say that f(x) is a "big O" of x⁴. Mathematically, we can write f(x) = O(x⁴). One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5 and g(x) = x⁴.
Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion |f(x)| ≤ M|x⁴| for all x ≥ x₀, for some choice of x₀ and M. To prove this, let x₀ = 1 and M = 13: for all x ≥ 1, |6x⁴ − 2x³ + 5| ≤ 6x⁴ + 2x³ + 5 ≤ 6x⁴ + 2x⁴ + 5x⁴ = 13x⁴. Big O notation has two main areas of application. In mathematics, it is used to describe how closely a finite series approximates a given function. In computer science, it is useful in the analysis of algorithms. In both applications, the function g appearing within the O is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
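The bound with M = 13 and x₀ = 1 can also be checked numerically over a range of sample points; a minimal sketch (the sampling range is an arbitrary illustrative choice):

```python
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

def g(x):
    return x**4

M, x0 = 13, 1

# The expansion of f(x) = O(x^4): |f(x)| <= M * |g(x)| for all x >= x0.
# A spot check over integer sample points (not a proof, just evidence
# consistent with the triangle-inequality argument above).
assert all(abs(f(x)) <= M * abs(g(x)) for x in range(x0, 10_000))
print("bound holds on all sampled points")
```

The numeric check complements, but does not replace, the proof: the inequality chain 6x⁴ + 2x³ + 5 ≤ 13x⁴ for x ≥ 1 is what establishes the bound for all real x ≥ 1.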

4.
Distributed computing
–
Distributed computing is a field of computer science that studies distributed systems. A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages; the components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications. A computer program that runs in a distributed system is called a distributed program. There are many alternatives for the message passing mechanism, including pure HTTP, RPC-like connectors, and message queues. Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing.

The terms are used in a much wider sense, even referring to autonomous processes that run on the same physical computer and communicate with each other by message passing. A distributed system may have a common goal, such as solving a large computational problem; the user then perceives the collection of autonomous processors as a unit. Other typical properties of distributed systems include the following: the system has to tolerate failures in individual computers; the structure of the system is not known in advance; the system may consist of different kinds of computers and network links; and each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input. Distributed systems are groups of networked computers which share a common goal for their work.
The terms concurrent computing, parallel computing, and distributed computing have a lot of overlap: the same system may be characterized both as parallel and distributed, and the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a tightly coupled form of distributed computing. In distributed computing, each processor has its own private memory (distributed memory), and information is exchanged by passing messages between the processors. The figure on the right illustrates the difference between distributed and parallel systems; the figure shows a parallel system in which each processor has direct access to a shared memory. The situation is further complicated by the traditional uses of the terms parallel algorithm and distributed algorithm, which do not quite match the above definitions of parallel and distributed systems. The use of concurrent processes that communicate by message-passing has its roots in operating system architectures studied in the 1960s; the first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s.
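The distinction between private memory and message passing can be sketched with Python's multiprocessing module, where each worker process has its own address space and exchanges information only through queues. This is a minimal single-machine illustration of the message-passing model, not a networked distributed system; the worker function and task values are illustrative:

```python
from multiprocessing import Process, Queue

def worker(task_queue, result_queue):
    """Each worker process has its own private memory; the only way
    it exchanges information with others is by passing messages
    through the two queues."""
    for n in iter(task_queue.get, None):   # None is the stop signal
        result_queue.put(n * n)

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    procs = [Process(target=worker, args=(tasks, results)) for _ in range(2)]
    for p in procs:
        p.start()
    for n in range(5):                     # divide the problem into tasks
        tasks.put(n)
    for _ in procs:
        tasks.put(None)                    # one stop signal per worker
    print(sorted(results.get() for _ in range(5)))  # [0, 1, 4, 9, 16]
    for p in procs:
        p.join()
```

Note that results arrive in nondeterministic order, a direct consequence of the concurrency and lack of a global clock described above; sorting restores a canonical order.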

5.
Cloud computing
–
Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources. Cloud computing relies on sharing of resources to achieve coherence and economy of scale. Advocates claim that cloud computing allows companies to avoid up-front infrastructure costs, and that it enables organizations to focus on their core businesses instead of spending time and money on computer infrastructure. Cloud providers typically use a pay-as-you-go model; this can lead to unexpectedly high charges if administrators do not adapt to the pricing model. Companies can scale up as computing needs increase and then scale down again as demands decrease.

The origin of the term cloud computing is unclear. The word cloud was used as a metaphor for the Internet, and later to depict the Internet in computer network diagrams; with this simplification, the implication is that the specifics of how the end points of a network are connected are not relevant for the purposes of understanding the diagram. The cloud symbol was used to represent networks of computing equipment in the original ARPANET as early as 1977, and the term cloud has been used to refer to platforms for distributed computing. References to cloud computing in its modern sense appeared as early as 1996, with the earliest known mention in a Compaq internal document. The popularization of the term can be traced to 2006, when Amazon.com introduced its Elastic Compute Cloud. During the 1960s, the initial concepts of time-sharing became popularized via remote job entry (RJE); this terminology was mostly associated with large vendors such as IBM and DEC.
Full time-sharing solutions were available by the early 1970s on such platforms as Multics and Cambridge CTSS, yet the data center model, where users submitted jobs to operators to run on IBM mainframes, was overwhelmingly predominant. Network providers could switch traffic as they saw fit to balance server use, and they began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extended this boundary to cover all servers as well as the network infrastructure.

As computers became more diffused, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing. They experimented with algorithms to optimize the infrastructure, platform, and applications to prioritize CPUs and increase efficiency for end users. Since 2000, cloud computing has come into existence and is expected to result in growth in IT products in some areas. In August 2006, Amazon introduced its Elastic Compute Cloud. Microsoft Azure was announced as "Azure" in October 2008 and was released on 1 February 2010 as Windows Azure, before being renamed to Microsoft Azure on 25 March 2014.

6.
Supercomputer
–
A supercomputer is a computer with a high level of computing performance compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of instructions per second. As of 2015, there are supercomputers which can perform up to quadrillions of FLOPS. As of June 2016, the Sunway TaihuLight tops the rankings in the TOP500 supercomputer list; its emergence is also notable for its use of indigenous chips, and as of June 2016, China, for the first time, had more computers on the TOP500 list than the United States. However, U.S.-built computers held ten of the top 20 positions, and in November 2016 the U.S. had five of the top 10. Throughout their history, supercomputers have been essential in the field of cryptanalysis. The use of multi-core processors combined with centralization is an emerging trend.

The history of supercomputing goes back to the 1960s, with the Atlas at the University of Manchester and a series of computers at Control Data Corporation (CDC), designed by Seymour Cray. These used innovative designs and parallelism to achieve superior computational peak performance. Cray left CDC in 1972 to form his own company, Cray Research; four years after leaving CDC, he delivered the 80 MHz Cray-1 in 1976. The Cray-2, released in 1985, was an 8-processor liquid-cooled computer, and Fluorinert was pumped through it as it operated. It performed at 1.9 gigaflops and was the second fastest after the M-13 supercomputer in Moscow. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a speed of 1.7 gigaFLOPS per processor. The Hitachi SR2201 obtained a performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.
The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations; the Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface. Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems; supercomputers of the 21st century can use over 100,000 processors connected by fast connections. The Connection Machine CM-5 supercomputer is a parallel processing computer capable of many billions of arithmetic operations per second.

Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers; the large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components. There have been diverse approaches to heat management, from pumping Fluorinert through the system to hybrid liquid-air cooling. Systems with a massive number of processors generally take one of two paths.
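The FLOPS metric used throughout this section can be illustrated with a crude single-core measurement. This is only a sketch of what the metric means; real rankings such as the TOP500 use carefully tuned benchmarks (LINPACK), and the loop below, running in interpreted Python, will report a figure many orders of magnitude below a machine's hardware peak:

```python
import time

def estimate_flops(n=1_000_000):
    """Rough, single-core FLOPS estimate: time n iterations of a
    multiply-add loop (2 floating-point operations per iteration)
    and divide the operation count by the elapsed wall-clock time."""
    x = 1.0000001
    acc = 0.0
    t0 = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0        # 1 multiply + 1 add
    elapsed = time.perf_counter() - t0
    return 2 * n / elapsed

print(f"{estimate_flops():.2e} FLOPS (interpreter overhead included)")
```

A quadrillion FLOPS (a petaFLOPS) is 10¹⁵ such operations per second, which conveys why massively parallel designs with tens of thousands of processors became necessary.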