In computer programming, flow-based programming (FBP) is a programming paradigm that defines applications as networks of "black box" processes, which exchange data across predefined connections by message passing, where the connections are specified externally to the processes. These black box processes can be reconnected endlessly to form different applications without having to be changed internally. FBP is thus component-oriented. FBP is a particular form of dataflow programming based on bounded buffers, information packets with defined lifetimes, named ports, and the separate definition of connections. Flow-based programming defines applications using the metaphor of a "data factory": it views an application not as a single, sequential process that starts at a point in time and does one thing at a time until it is finished, but as a network of asynchronous processes communicating by means of streams of structured data chunks, called "information packets" (IPs). In this view, the focus is on the application data and the transformations applied to it to produce the desired outputs.
The network is defined externally to the processes, as a list of connections which is interpreted by a piece of software called the "scheduler". The processes communicate by means of fixed-capacity connections. A connection is attached to a process by means of a port, which has a name agreed upon between the process code and the network definition. More than one process can execute the same piece of code. At any point in time, a given IP can only be "owned" by a single process, or be in transit between two processes. Ports may either be simple or array-type, as used e.g. for the input port of the Collate component described below. It is the combination of ports with asynchronous processes that allows many long-running primitive functions of data processing, such as Sort, Summarize, etc., to be supported in the form of software black boxes. Because FBP processes can continue executing as long as they have data to work on and somewhere to put their output, FBP applications generally run in less elapsed time than conventional programs and make optimal use of all the processors on a machine, with no special programming required to achieve this.
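A minimal sketch of these ideas, assuming a POSIX threads environment, is shown below; the process names generate and print_ips, the connection capacity of 4, and the use of plain integers as information packets are illustrative assumptions rather than part of any actual FBP implementation or scheduler.

```c
#include <pthread.h>
#include <stdio.h>

#define CAPACITY 4          /* the connection's fixed capacity (bounded buffer) */
#define END (-1)            /* sentinel marking the end of the stream */

struct connection {
    int slots[CAPACITY];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
};

static void put(struct connection *c, int ip) {      /* send an "information packet" */
    pthread_mutex_lock(&c->lock);
    while (c->count == CAPACITY)                      /* block while the connection is full */
        pthread_cond_wait(&c->not_full, &c->lock);
    c->slots[c->tail] = ip;
    c->tail = (c->tail + 1) % CAPACITY;
    c->count++;
    pthread_cond_signal(&c->not_empty);
    pthread_mutex_unlock(&c->lock);
}

static int get(struct connection *c) {                /* receive an "information packet" */
    pthread_mutex_lock(&c->lock);
    while (c->count == 0)                             /* block while the connection is empty */
        pthread_cond_wait(&c->not_empty, &c->lock);
    int ip = c->slots[c->head];
    c->head = (c->head + 1) % CAPACITY;
    c->count--;
    pthread_cond_signal(&c->not_full);
    pthread_mutex_unlock(&c->lock);
    return ip;
}

static struct connection conn = {
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .not_full = PTHREAD_COND_INITIALIZER,
    .not_empty = PTHREAD_COND_INITIALIZER
};

static void *generate(void *arg) {                    /* upstream process: OUT port -> conn */
    (void)arg;
    for (int i = 1; i <= 10; i++)
        put(&conn, i);
    put(&conn, END);
    return NULL;
}

static void *print_ips(void *arg) {                   /* downstream process: conn -> IN port */
    (void)arg;
    for (int ip = get(&conn); ip != END; ip = get(&conn))
        printf("%d\n", ip);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, generate, NULL);
    pthread_create(&b, NULL, print_ips, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

Each process blocks only when its connection is full or empty, which is what allows many such asynchronous processes to run concurrently under a scheduler.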
The network definition is usually diagrammatic and is converted into a connection list in some lower-level language or notation. FBP is thus a visual programming language at this level. More complex network definitions have a hierarchical structure, being built up from subnets with "sticky" connections. Many other flow-based languages and runtimes are built around more traditional programming languages; one notable example is RaftLib, which uses C++ iostream-like operators to specify the flow graph. FBP has much in common with the Linda language in that it is, in Gelernter and Carriero's terminology, a "coordination language": it is language-independent. Indeed, given a scheduler written in a sufficiently low-level language, components written in different languages can be linked together in a single network. FBP thus lends itself to the concept of domain-specific languages or "mini-languages". FBP exhibits "data coupling", described in the article on coupling as the loosest type of coupling between components.
The concept of loose coupling is in turn related to that of service-oriented architectures, and FBP fits a number of the criteria for such an architecture, albeit at a more fine-grained level than most examples of this architecture. FBP promotes a high-level, functional style of specification that simplifies reasoning about system behavior. An example of this is the distributed data flow model for constructively specifying and analyzing the semantics of distributed multi-party protocols. Flow-Based Programming was invented by J. Paul Morrison in the early 1970s and initially implemented in software for a Canadian bank. FBP at its inception was influenced by some IBM simulation languages of the period, in particular GPSS, but its roots go all the way back to Conway's seminal paper on what he called coroutines. FBP has undergone a number of name changes over the years: the original implementation was called AMPS. One large application in Canada went live in 1975 and, as of 2013, has been in continuous production use, running daily, for almost 40 years.
Because IBM considered the ideas behind FBP "too much like a law of nature" to be patentable, they instead put the basic concepts of FBP into the public domain by means of a Technical Disclosure Bulletin, "Data Responsive Modular, Interleaved Task Programming System", in 1971. An article describing its concepts and the experience of using it was published in 1978 in the IBM Systems Journal under the name DSLM. A second implementation was done as a joint project of IBM Canada and IBM Japan under the name "Data Flow Development Manager" and was marketed in Japan in the late '80s under the name "Data Flow Programming Manager". The concepts were referred to within IBM as "Data Flow", but this term was felt to be too general, and the name "Flow-Based Programming" was adopted. From the early '80s to 1993, J. Paul Morrison and IBM architect Wayne Stevens refined and promoted the concepts behind FBP. Stevens wrote several articles describing and supporting the FBP concept and included material about it in several of his books.
In 1994 Morrison published a book describing FBP and providing empirical evidence that FBP led to reduced development times. The following diagram shows the major entities of an FBP diagram; such a diagram can be converted directly into a list of connections, which can then be executed by an appropriate engine. A, B and C are processes executing code components. O1 and O2, and the two INs, are ports connecting the connections to their respective processes.
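Such a connection list might be rendered as data along the following lines; this is a hypothetical sketch, and the wiring shown (A's O1 feeding B's IN, and B's O2 feeding C's IN) and the capacities are illustrative assumptions rather than the topology of the original diagram.

```c
#include <stdio.h>

/* Hypothetical rendering of a connection list as data. */
struct connection_def {
    const char *from_process, *from_port;   /* upstream process and its output port */
    const char *to_process,   *to_port;     /* downstream process and its input port */
    int capacity;                           /* fixed capacity of the bounded buffer */
};

static const struct connection_def network[] = {
    { "A", "O1", "B", "IN", 10 },
    { "B", "O2", "C", "IN", 10 },
};

int main(void) {
    /* A scheduler would read this list, create the bounded-buffer connections,
     * and attach them to the named ports of the processes. */
    for (size_t i = 0; i < sizeof network / sizeof network[0]; i++)
        printf("%s.%s -> %s.%s (capacity %d)\n",
               network[i].from_process, network[i].from_port,
               network[i].to_process, network[i].to_port,
               network[i].capacity);
    return 0;
}
```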
Automata-based programming is a programming paradigm in which the program, or part of it, is thought of as a model of a finite state machine or any other formal automaton. Sometimes an infinite set of possible states is introduced; such a set can have a complicated structure, not just an enumeration. FSM-based programming is generally the same but, formally speaking, does not cover all possible variants, as FSM stands for finite state machine and automata-based programming does not necessarily employ FSMs in the strict sense. The following properties are key indicators for automata-based programming: the time period of the program's execution is clearly divided into the steps of the automaton; each of the steps is an execution of a code section which has a single entry point (such a section can be a function or other routine, or just a cycle body); the step section might be divided into subsections to be executed depending on different states, although this is not necessary; and any communication between the steps is only possible via the explicitly noted set of variables named the state.
Between any two steps, the program cannot have implicit components of its state, such as local variables' values, return addresses, the current instruction pointer, etc. That is, the state of the whole program, taken at any two moments of entering a step of the automaton, can only differ in the values of the variables being considered as the state of the automaton; the whole execution of the automata-based code is a cycle of the automaton's steps. Another reason for using the notion of automata-based programming is that the programmer's style of thinking about the program in this technique is similar to the style of thinking used to solve mathematical tasks using Turing machines, Markov algorithms, etc. Consider a program in C that reads text from the standard input stream, line by line, and prints the first word of each line. It is clear that we first need to read and skip the leading spaces, if any; then read the characters of the first word and print them until the word ends; and finally read and skip all the remaining characters until the end-of-line character is encountered.
Upon reaching the end-of-line character, we restart the algorithm from the beginning; upon encountering the end-of-file condition, we terminate the program. A program which solves the example task in traditional style can look something like the first sketch below. The same task can be solved by thinking in terms of finite state machines. Note that line parsing has three stages: skipping the leading spaces, printing the word, and skipping the trailing characters. Let's call these states before, inside and after; the program may now look like the second sketch below. Although the code now looks longer, it has at least one significant advantage: there is only one reading instruction in the program. Besides that, there is only one loop instead of the four in the traditional version. In this program, the body of the while loop is the automaton step, and the loop itself is the cycle of the automaton's work; the program implements the work of a finite state machine in which N denotes the end-of-line character, S denotes spaces, and A stands for all the other characters.
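A minimal sketch of the traditional-style program for the task as stated above (variable names and the treatment of a file that does not end in a newline are illustrative choices):

```c
#include <stdio.h>

/* Traditional style: nested loops, several reading instructions. */
int main(void) {
    int c = getchar();
    while (c != EOF) {
        while (c == ' ')                       /* skip the leading spaces */
            c = getchar();
        while (c != EOF && c != ' ' && c != '\n') {
            putchar(c);                        /* print the first word */
            c = getchar();
        }
        putchar('\n');
        while (c != EOF && c != '\n')          /* skip the rest of the line */
            c = getchar();
        if (c != EOF)
            c = getchar();                     /* move past the end-of-line character */
    }
    return 0;
}
```

And a sketch of the finite-state-machine version, with the three states before, inside and after:

```c
#include <stdio.h>

enum state { before, inside, after };

/* Automata-based style: a single loop with a single reading instruction;
 * the body of the loop is one step of the automaton. */
int main(void) {
    int c;
    enum state s = before;
    while ((c = getchar()) != EOF) {
        switch (s) {
        case before:                 /* skipping the leading spaces */
            if (c == '\n')
                putchar('\n');
            else if (c != ' ') {
                putchar(c);
                s = inside;
            }
            break;
        case inside:                 /* inside the first word */
            if (c == '\n') {
                putchar('\n');
                s = before;
            } else if (c == ' ')
                s = after;
            else
                putchar(c);
            break;
        case after:                  /* skipping the rest of the line */
            if (c == '\n') {
                putchar('\n');
                s = before;
            }
            break;
        }
    }
    return 0;
}
```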
The automaton follows one arrow on each step, depending on the current state and the encountered character. Some state switches are accompanied by printing the character. However, it is not strictly necessary to divide the code into separate handlers for each unique state. Furthermore, in some cases the notion of the state can be composed of several variables' values, so that it could be impossible to handle each possible state explicitly. In the discussed program it is possible to reduce the code length by noticing that the actions taken in response to the end-of-line character are the same for all the possible states; the resulting program is equivalent to the previous one but a bit shorter. The most important property of such a program is that the automaton step code section is localized; with a separate function for the step, we can better demonstrate this property, as in the sketch below. This example demonstrates the basic properties of automata-based code: time periods of automaton step executions may not overlap, and the only information passed from the previous step to the next is the explicitly specified automaton state. A finite automaton can be defined by an explicit state transition table.
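A minimal sketch of the refactored program, with the end-of-line character handled uniformly and the automaton step moved into its own function (names are illustrative):

```c
#include <stdio.h>

enum state { before, inside, after };

/* One automaton step: all communication between steps goes through the
 * explicitly passed state; end of line is handled uniformly for every state. */
static enum state step(enum state s, int c) {
    if (c == '\n') {
        putchar('\n');
        return before;
    }
    switch (s) {
    case before:
        if (c != ' ') {
            putchar(c);
            return inside;
        }
        return before;
    case inside:
        if (c == ' ')
            return after;
        putchar(c);
        return inside;
    case after:
        return after;
    }
    return s;                        /* unreachable; keeps compilers quiet */
}

int main(void) {
    int c;
    enum state s = before;
    while ((c = getchar()) != EOF)   /* the cycle of the automaton's work */
        s = step(s, c);
    return 0;
}
```

And a sketch of the table-driven variant, with the transitions stored in an array named the_table; the classification of input characters into three classes and the struct layout are illustrative assumptions:

```c
#include <stdio.h>

enum state { before, inside, after };

/* Rows are the three states; columns are character classes
 * (0 = end of line, 1 = space, 2 = any other character).
 * Each entry holds the new state and a flag saying whether to print. */
struct branch { enum state new_state; int should_put; };

static const struct branch the_table[3][3] = {
                 /*  '\n'            ' '             other        */
    /* before */ { { before, 1 }, { before, 0 }, { inside, 1 } },
    /* inside */ { { before, 1 }, { after,  0 }, { inside, 1 } },
    /* after  */ { { before, 1 }, { after,  0 }, { after,  0 } },
};

int main(void) {
    int c;
    enum state s = before;
    while ((c = getchar()) != EOF) {
        int cls = (c == '\n') ? 0 : (c == ' ') ? 1 : 2;   /* classify the character */
        const struct branch *b = &the_table[s][cls];
        if (b->should_put)
            putchar(c);
        s = b->new_state;
    }
    return 0;
}
```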
Strictly speaking, automata-based program code can naturally reflect this approach. In the second sketch above there is an array named the_table; the rows of the table stand for the three states. For every possible combination of state and input character class, the table contains the new state number and a flag which determines whether the automaton must print the symbol. In a real-life task, this could be more complicated. Automata-based programming closely matches the programming needs found in the field of automation. A production cycle is modelled as a sequence of stages stepping according to input data, together with a set of actions performed depending on the current stage. Various dedicated programming languages allow expressing such a model in more or less sophisticated ways; the example presented above could be expressed according to this view like in the following program. Here pseudo-c
Macro (computer science)
A macro in computer science is a rule or pattern that specifies how a certain input sequence should be mapped to a replacement output sequence according to a defined procedure. The mapping process that instantiates a macro use into a specific sequence is known as macro expansion. A facility for writing macros may be provided as part of a software application or as a part of a programming language. In the former case, macros are used to make tasks using the application less repetitive. In the latter case, they are a tool that allows a programmer to enable code reuse or to design domain-specific languages. Macros are used to make a sequence of computing instructions available to the programmer as a single program statement, making the programming task less tedious and less error-prone. Macros allow positional or keyword parameters that dictate what the conditional assembler program generates and have been used to create entire programs or program suites according to such variables as operating system, platform or other factors.
The term derives from "macro instruction"; such expansions were originally used in generating assembly language code. Keyboard macros and mouse macros allow short sequences of keystrokes and mouse actions to be transformed into other, more time-consuming sequences of keystrokes and mouse actions. In this way, frequently used or repetitive sequences of keystrokes and mouse movements can be automated. Separate programs for creating these macros are called macro recorders. During the 1980s, macro programs – SmartKey, SuperKey, KeyWorks, Prokey – were popular, first as a means to automatically format screenplays, then for a variety of user input tasks. These programs were based on the TSR (terminate and stay resident) mode of operation and applied to all keyboard input, no matter in which context it occurred. They have to some extent fallen into obsolescence following the advent of mouse-driven user interfaces and the availability of keyboard and mouse macros in applications such as word processors and spreadsheets, making it possible to create application-sensitive keyboard macros.
Keyboard macros have in more recent times come to life as a method of exploiting the economy of massively multiplayer online role-playing games (MMORPGs). By tirelessly performing a boring but low-risk action, a player running a macro can earn a large amount of the game's currency or resources; this effect is even larger when a macro-using player operates multiple accounts, or operates the accounts for a large amount of time each day. As this money is generated without human intervention, it can upset the economy of the game. For this reason, use of macros is a violation of the TOS or EULA of most MMORPGs, and administrators fight a continual war to identify and punish macro users. Keyboard and mouse macros that are created using an application's built-in macro features are sometimes called application macros; they are created by letting the application record the user's actions. An underlying macro programming language, most commonly a scripting language, with direct access to the features of the application, may also exist.
The programmers' text editor Emacs follows this idea to a conclusion: in effect, most of the editor is made of macros. Emacs was originally devised as a set of macros in the editing language TECO. Another programmers' text editor, Vim, has a full implementation of macros; it can record into a register what a person types on the keyboard, and the recording can then be replayed or edited, much like VBA macros for Microsoft Office. Vim also has a scripting language called Vimscript for creating macros. Visual Basic for Applications (VBA) is a programming language included in Microsoft Office from Office 97 through Office 2019. Its function has evolved from, and replaced, the macro languages that were originally included in some of these applications. VBA executes when documents are opened; this makes it relatively easy to write computer viruses in VBA, known as macro viruses. In the mid-to-late 1990s, this became one of the most common types of computer virus. However, during the late 1990s and since, Microsoft has been updating its programs, and current anti-virus programs counteract such attacks.
A parameterized macro is a macro that is able to insert given objects into its expansion. This gives the macro some of the power of a function. As a simple example, in the C programming language this is a typical macro that is not a parameterized macro: #define PI 3.14159 This causes the string "PI" to be replaced with "3.14159" wherever it occurs; it will always be replaced by this string, and the resulting string cannot be modified in any way. An example of a parameterized macro, on the other hand, is this: #define pred(x) ((x)-1) What this macro expands to depends on what argument x is passed to it. Here are some possible expansions: pred(2) → ((2)-1), pred(y+3) → ((y+3)-1), pred(f(5)) → ((f(5))-1). Parameterized macros are a useful source-level mechanism for performing in-line expansion, but in languages such as C, where they use simple textual substitution, they have a number of severe disadvantages over other mechanisms for performing in-line expansion, such as inline functions (one common pitfall is sketched below); the parameterized macros used in languages such
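As an illustration of the kind of pitfall that simple textual substitution creates, consider the following sketch; the SQUARE macro and square function here are hypothetical examples, not taken from the text above.

```c
#include <stdio.h>

/* The argument is pasted in verbatim, so operator precedence inside the
 * argument can produce surprising results. */
#define SQUARE(x) x * x

/* An inline function evaluates its argument once, as an expression. */
static inline int square(int x) { return x * x; }

int main(void) {
    int n = 3;
    printf("%d\n", SQUARE(n + 1));   /* expands to n + 1 * n + 1, i.e. 7, not 16 */
    printf("%d\n", square(n + 1));   /* 16, as intended */
    return 0;
}
```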
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, and task parallelism. Parallelism has long been employed in high-performance computing, but it has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, in the form of multi-core processors. Parallel computing is closely related to concurrent computing—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency, and concurrency without parallelism. In parallel computing, a computational task is typically broken down into several, often many, similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion.
In contrast, in concurrent computing, the various processes often do not address related tasks. Parallel computers can be classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law. Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is implemented as a serial stream of instructions; these instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time—after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem; this is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. Historically, parallel computing was used for scientific computing and the simulation of scientific problems in the natural and engineering sciences, such as meteorology.
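As a concrete illustration of the Amdahl's law bound mentioned at the start of this passage, the following minimal sketch computes the maximum speed-up 1 / ((1 − p) + p/n) for a program in which a fraction p of the work can be parallelized across n processors; the fraction 0.95 and the processor counts are arbitrary illustrative values.

```c
#include <stdio.h>

/* Amdahl's law: speed-up on n processors when a fraction p is parallelizable. */
static double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    /* Even with 95% of the work parallelized, the speed-up can never exceed 20x. */
    for (int n = 1; n <= 4096; n *= 8)
        printf("n = %4d  speedup = %6.2f\n", n, amdahl_speedup(0.95, n));
    return 0;
}
```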
The use of parallelism in scientific computing led to the design of parallel software, as well as high-performance computing. Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction. Holding everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction; an increase in frequency thus decreases runtime for all compute-bound programs. However, power consumption P by a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle, V is the voltage, and F is the processor frequency. Increases in frequency therefore increase the amount of power used in a processor. Increasing processor power consumption led to Intel's May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. To deal with the problem of power consumption and overheating, the major central processing unit manufacturers started to produce power-efficient processors with multiple cores.
The core is the computing unit of the processor, and in multi-core processors each core is independent and can access the same memory concurrently. Multi-core processors have brought parallel computing to desktop computers; thus parallelisation of serial programmes has become a mainstream programming task. In 2012 quad-core processors became standard for desktop computers, while servers had 10- and 12-core processors. From Moore's law it can be predicted that the number of cores per processor will double every 18–24 months; this could mean that after 2020 a typical processor will have hundreds of cores. An operating system can ensure that different tasks and user programmes are run in parallel on the available cores. However, for a serial software programme to take full advantage of the multi-core architecture, the programmer needs to restructure and parallelise the code. A speed-up of application software runtime will no longer be achieved through frequency scaling; instead, programmers will need to parallelise their software code to take
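A minimal sketch of what such parallelisation can look like in practice, using OpenMP as one common approach (the array, its size, and the use of a reduction are illustrative assumptions; compile with an OpenMP-capable compiler, e.g. gcc -fopenmp):

```c
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = i * 0.5;

    /* Each core works on a chunk of the iterations; the partial sums are
     * combined by the reduction clause. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);
    return 0;
}
```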
Literate programming is a programming paradigm introduced by Donald Knuth in which a program is given as an explanation of the program logic in a natural language, such as English, interspersed with snippets of macros and traditional source code, from which compilable source code can be generated. The literate programming paradigm, as conceived by Knuth, represents a move away from writing programs in the manner and order imposed by the computer, and instead enables programmers to develop programs in the order demanded by the logic and flow of their thoughts. Literate programs are written as an uninterrupted exposition of logic in an ordinary human language, much like the text of an essay, in which macros are included to hide abstractions and traditional source code. Literate programming tools are used to obtain two representations from a literate source file: one suitable for further compilation or execution by a computer, the "tangled" code, and another for viewing as formatted documentation, which is said to be "woven" from the literate source.
While the first generation of literate programming tools were computer-language-specific, the later ones are language-agnostic and exist above the programming languages. Literate programming was first introduced by Donald E. Knuth in 1984; the main intention behind this approach was to treat a program as literature understandable to human beings. This approach was implemented at Stanford University as a part of research on algorithms and digital typography. The implementation was called “WEB” by Knuth since he believed that it was one of the few three-letter words of English that hadn’t been applied to computing. However, it also resembles the complicated nature of software delicately pieced together from simple materials. Literate programming is writing out the program logic in a human language with included code snippets and macros. Macros in a literate source file are title-like or explanatory phrases in a human language that describe human abstractions created while solving the programming problem, hiding chunks of code or lower-level macros.
These macros are similar to the algorithms in pseudocode used in teaching computer science. These arbitrary explanatory phrases become precise new operators, created on the fly by the programmer, forming a meta-language on top of the underlying programming language. A preprocessor is used to substitute arbitrary hierarchies, or rather "interconnected 'webs' of macros", to produce the compilable source code with one command ("tangle") and documentation with another ("weave"). The preprocessor also provides an ability to write out the content of the macros and to add to already created macros in any place in the text of the literate program source file, thereby disposing of the need to keep in mind the restrictions imposed by traditional programming languages or to interrupt the flow of thought. According to Knuth, literate programming provides higher-quality programs, since it forces programmers to explicitly state the thoughts behind the program, making poorly thought-out design decisions more obvious. Knuth also claims that literate programming provides a first-rate documentation system, which is not an add-on, but is grown naturally in the process of exposition of one's thoughts during a program's creation.
The resulting documentation allows the author to restart his own thought processes at any time, and allows other programmers to understand the construction of the program more easily. This differs from traditional documentation, in which a programmer is presented with source code that follows a compiler-imposed order and must decipher the thought process behind the program from the code and its associated comments. The meta-language capabilities of literate programming are also claimed to facilitate thinking, giving a higher "bird's eye view" of the code and increasing the number of concepts the mind can retain and process. Applicability of the concept to programming on a large scale, that of commercial-grade programs, is proven by an edition of the TeX code as a literate program. Knuth also claims that literate programming can lead to easy porting of software to multiple environments, and cites the implementation of TeX as an example. Literate programming is often misunderstood to refer only to formatted documentation produced from a common file with both source code and comments – which is properly called documentation generation – or to voluminous commentaries included with code.
This is backwards: well-documented code or documentation extracted from code follows the structure of the code, with documentation embedded in the code; in literate programming, code is embedded in documentation, with the code following the structure of the documentation. This misconception has led to claims that comment-extraction tools, such as the Perl Plain Old Documentation or Java Javadoc systems, are "literate programming tools". However, because these tools do not implement the "web of abstract concepts" hiding behind the system of natural-language macros, or provide an ability to change the order of the source code from a machine-imposed sequence to one convenient to the human mind, they cannot properly be called literate programming tools in the sense intended by Knuth. Implementing literate programming consists of two steps: weaving, which generates a comprehensive document about the program and its maintenance, and tangling, which generates machine-executable code. Weaving and tangling are done on the same source so that they are consistent with each other. A classic example of literate programming is the literate implementation of the standard Unix wc word-counting program.
Knuth presented a CWEB version of this example in Chapter 12 of his Literate Programming book. The same example was rewritten for the noweb lit