Literate programming
Literate programming is a programming paradigm introduced by Donald Knuth in which a program is given as an explanation of the program logic in a natural language, such as English, interspersed with snippets of macros and traditional source code, from which compilable source code can be generated. The literate programming paradigm, as conceived by Knuth, represents a move away from writing programs in the manner and order imposed by the computer, and instead enables programmers to develop programs in the order demanded by the logic and flow of their thoughts. Literate programs are written as an uninterrupted exposition of logic in an ordinary human language, much like the text of an essay, in which macros are included to hide abstractions and traditional source code. Literate programming tools are used to obtain two representations from a literate source file: one suitable for further compilation or execution by a computer, the "tangled" code, and another for viewing as formatted documentation, which is said to be "woven" from the literate source.
While the first generation of literate programming tools were computer language-specific, the later ones are language-agnostic and exist above the programming languages. Literate programming was first introduced by Donald E. Knuth in 1984; the main intention behind this approach was to treat a program as literature understandable to human beings. The approach was implemented at Stanford University as a part of research on algorithms and digital typography. The implementation was called "WEB" by Knuth, since he believed that it was one of the few three-letter words of English that had not already been applied to computing. However, the name aptly suggests the complicated nature of software, delicately pieced together from simple materials. Literate programming consists of writing out the program logic in a human language with included code snippets and macros. Macros in a literate source file are title-like or explanatory phrases in a human language that describe human abstractions created while solving the programming problem, hiding chunks of code or lower-level macros.
These macros are similar to the algorithms in pseudocode used in teaching computer science. These arbitrary explanatory phrases become precise new operators, created on the fly by the programmer, forming a meta-language on top of the underlying programming language. A preprocessor is used to substitute arbitrary hierarchies, or rather "interconnected 'webs' of macros", to produce the compilable source code with one command and documentation with another; the preprocessor also provides the ability to write out the content of the macros and to add to already-created macros at any place in the text of the literate program source file, thereby disposing of the need to keep in mind the restrictions imposed by traditional programming languages or to interrupt the flow of thought. According to Knuth, literate programming provides higher-quality programs, since it forces programmers to explicitly state the thoughts behind the program, making poorly thought-out design decisions more obvious. Knuth also claims that literate programming provides a first-rate documentation system, not as an add-on, but grown naturally in the process of exposing one's thoughts during a program's creation.
The resulting documentation allows the author to restart his own thought processes at any later time, and allows other programmers to understand the construction of the program more easily. This differs from traditional documentation, in which a programmer is presented with source code that follows a compiler-imposed order and must decipher the thought process behind the program from the code and its associated comments. The meta-language capabilities of literate programming are also claimed to facilitate thinking, giving a higher "bird's eye view" of the code and increasing the number of concepts the mind can retain and process. Applicability of the concept to programming on a large scale, that of commercial-grade programs, is demonstrated by an edition of the TeX source code as a literate program. Knuth also claims that literate programming can lead to easy porting of software to multiple environments, and cites the implementation of TeX as an example. Literate programming is often misunderstood to refer only to formatted documentation produced from a common file with both source code and comments (properly called documentation generation) or to voluminous commentaries included with code.
This is backwards: well-documented code or documentation extracted from code follows the structure of the code, with documentation embedded in the code; in literate programming, by contrast, code is embedded in documentation, with the code following the structure of the documentation. This misconception has led to claims that comment-extraction tools, such as the Perl Plain Old Documentation or Java Javadoc systems, are "literate programming tools". However, because these tools do not implement the "web of abstract concepts" hiding behind the system of natural-language macros, or provide the ability to change the order of the source code from a machine-imposed sequence to one convenient to the human mind, they cannot properly be called literate programming tools in the sense intended by Knuth. Implementing literate programming consists of two steps: weaving, which generates a comprehensive document about the program and its maintenance, and tangling, which generates machine-executable code. Weaving and tangling are done on the same source so that they are consistent with each other. A classic example of literate programming is the literate implementation of the standard Unix wc word counting program.
Knuth presented a CWEB version of this example in Chapter 12 of his Literate Programming book. The same example was later rewritten for the noweb literate programming tool.
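To give the flavor of the notation, here is a minimal noweb-style sketch of a character-counting program (an invented illustration, not Knuth's published wc program); the chunk names are arbitrary phrases made up for this example, and the embedded code is C:

    @ This program counts the characters on its standard input.
    The top level is an ordinary C main program:

    <<*>>=
    #include <stdio.h>

    int main(void)
    {
        <<count the characters and report the total>>
        return 0;
    }

    @ Counting is a single loop that runs until end of input:

    <<count the characters and report the total>>=
    long count = 0;
    while (getchar() != EOF)
        count++;
    printf("%ld\n", count);

Running notangle on this file substitutes the macro into place and emits the compilable C program (tangling), while noweave typesets the prose and code together as documentation (weaving).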
Record (computer science)
In computer science, a record is a basic data structure. Records in a database or spreadsheet are called "rows". A record is a collection of fields, possibly of different data types, in a fixed number and sequence; the fields of a record may also be called members, particularly in object-oriented programming. For example, a date could be stored as a record containing a numeric year field, a month field represented as a string, and a numeric day-of-month field. A personnel record might contain a name, a salary, and a rank. A circle record might contain a center and a radius; in this instance, the center itself might be represented as a point record containing x and y coordinates. Records are distinguished from arrays by the fact that their number of fields is typically fixed, each field has a name, and each field may have a different type. A record type is a data type that describes such variables. Most modern computer languages allow the programmer to define new record types; the definition includes specifying the data type of each field and an identifier by which it can be accessed.
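In C, for example, such a definition is written with the struct keyword; the following sketch uses the date example above (the field sizes and names are illustrative):

    #include <stdio.h>

    /* A record type: a fixed sequence of named fields of differing types. */
    struct date {
        int  year;       /* numeric year field                  */
        char month[10];  /* month field represented as a string */
        int  day;        /* numeric day-of-month field          */
    };

    int main(void)
    {
        struct date d = { 1984, "May", 21 };           /* create a record */
        printf("%d %s %d\n", d.day, d.month, d.year);  /* access by name  */
        return 0;
    }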
In type theory, product types (with no field names) are generally preferred due to their simplicity, but proper record types are studied in languages such as System F-sub. Since type-theoretical records may contain first-class function-typed fields in addition to data, they can express many features of object-oriented programming. Records can exist in any storage medium, including main memory and mass storage devices such as magnetic tapes or hard disks. Records are a fundamental component of most data structures, especially linked data structures. Many computer files are organized as arrays of logical records, often grouped into larger physical records or blocks for efficiency. The parameters of a function or procedure can be viewed as the fields of a record variable; in the call stack, used to implement procedure calls, each entry is an activation record or call frame, containing the procedure parameters and local variables, the return address, and other internal fields. An object in an object-oriented language is essentially a record that contains procedures specialized to handle that record.
Indeed, in most object-oriented languages, records are just special cases of objects, and are known as plain old data structures, to contrast with objects that use object-oriented features. A record can be viewed as the computer analog of a mathematical tuple, although a tuple may or may not be considered a record, and vice versa, depending on conventions and the specific programming language. In the same vein, a record type can be viewed as the computer language analog of the Cartesian product of two or more mathematical sets, or the implementation of an abstract product type in a specific language. A record may have zero or more keys. A key is a set of fields in the record that serves as an identifier. A unique key is called the primary key, or the record key. For example, an employee file might contain employee number, name, department, and salary; the employee number would be the primary key. Depending on the storage medium and file organization, the employee number might be indexed, that is, stored in a separate file to make lookup faster.
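As a sketch in C (the record layout and all names are invented for illustration), the employee number serves as the primary key, and without an index a lookup must scan the records one by one:

    #include <stdio.h>

    struct employee {
        int    number;   /* primary key: uniquely identifies the record */
        char   name[32];
        char   dept[8];  /* department code: not necessarily unique     */
        double salary;
    };

    /* Without an index, finding a record by key is a linear scan. */
    const struct employee *find_by_number(const struct employee *recs,
                                          int n, int key)
    {
        for (int i = 0; i < n; i++)
            if (recs[i].number == key)
                return &recs[i];
        return NULL;
    }

    int main(void)
    {
        struct employee file[] = {
            { 1001, "Ada",   "ENG", 5200.0 },
            { 1002, "Grace", "NAV", 5800.0 },
        };
        const struct employee *e = find_by_number(file, 2, 1002);
        if (e != NULL)
            printf("%s (%s) earns %.2f\n", e->name, e->dept, e->salary);
        return 0;
    }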
The department code may not be unique; it may also be indexed. If it is not indexed, the entire employee file would have to be scanned to produce a listing of all employees in a specific department. The salary field would not normally be considered usable as a key. Indexing is one factor considered when designing a file. The concept of a record can be traced to various types of tables and ledgers used in accounting since remote times. The modern notion of records in computer science, with fields of well-defined type and size, was already implicit in 19th-century mechanical calculators, such as Babbage's Analytical Engine. The original machine-readable medium used for data was the punched card, used for the records of the 1890 United States Census: each punched card was a single record. Records were well established in the first half of the 20th century, when most data processing was done using punched cards; each record of a data file would be recorded in one punched card, with specific columns assigned to specific fields.
A record was the smallest unit that could be read in from external storage. Most machine language implementations and early assembly languages did not have special syntax for records, but the concept was available through the use of index registers, indirect addressing, and self-modifying code; some early computers, such as the IBM 1620, had hardware support for delimiting records and fields, and special instructions for copying such records. The concept of records and fields was central in some early file sorting and tabulating utilities, such as IBM's Report Program Generator. COBOL was the first widespread programming language to support record types, and its record definition facilities were quite sophisticated at the time; the language allows for the definition of nested records with alphanumeric and fractional fields of arbitrary size and precision, as well as fields that automatically format any value assigned to them (e.g. insertion of currency symbols and decimal points).
Floating-point arithmetic
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers and which require fast processing times. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. A number that can be represented exactly is of the following form: significand × base^exponent, where the significand is an integer, the base is an integer greater than or equal to two, and the exponent is an integer. For example, 1.2345 = 12345 × 10^−4, where 12345 is the significand, 10 is the base, and −4 is the exponent. The term floating point refers to the fact that a number's radix point can "float"; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A floating-point system can be used to represent, with a fixed number of digits, numbers of different orders of magnitude: e.g. the distance between galaxies or the diameter of an atomic nucleus can be expressed with the same unit of length.
The result of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s the most commonly encountered representations are those defined by the IEEE. The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system for applications that involve intensive mathematical calculations. A floating-point unit (FPU) is a part of a computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number as a string of digits. There are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character there. If the radix point is not specified, the string implicitly represents an integer, and the unstated radix point would be off the right-hand end of the string, next to the least significant digit.
In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. In scientific notation, the given number is scaled by a power of 10, so that it lies within a certain range, typically between 1 and 10, with the radix point appearing immediately after the first digit; the scaling factor, as a power of ten, is then indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047 × 10^5 seconds. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of a signed digit string of a given length in a given base; this digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit, or to the right of the rightmost (least significant) digit.
This article follows the convention that the radix point is set just after the most significant (leftmost) digit. The second component is a signed integer exponent, which modifies the magnitude of the number. To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent: to the right if the exponent is positive, or to the left if the exponent is negative. Using base 10 as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047 × 10^5, or 152,853.5047. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers and can thus be inferred. Symbolically, this final value is (s / b^(p−1)) × b^e, where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base, and e is the exponent.
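To make the encoding concrete, the following C sketch unpacks the three fields from a double, assuming the widely used IEEE 754 binary64 format (1 sign bit, 11 biased exponent bits, 52 fraction bits); the variable names are illustrative:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        double x = 152853.5047;

        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);   /* view the stored bit pattern */

        unsigned sign     = (unsigned)(bits >> 63);       /* 1 bit   */
        int      exponent = (int)((bits >> 52) & 0x7FF);  /* 11 bits */
        uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;    /* 52 bits */

        /* For normal numbers the significand has an implicit leading 1,
           and the stored exponent is biased by 1023. */
        printf("sign=%u  unbiased exponent=%d  fraction=0x%013llx\n",
               sign, exponent - 1023, (unsigned long long)fraction);
        return 0;
    }

For this value the unbiased exponent printed is 17, since 2^17 = 131,072 is the largest power of two not exceeding 152,853.5047.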
Unix
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, Unix was licensed by AT&T to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including the University of California, Microsoft, IBM, and Sun Microsystems. In the early 1990s, AT&T sold its rights in Unix to Novell, which then sold its Unix business to the Santa Cruz Operation in 1995. The UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification. As of 2014, the Unix version with the largest installed base is Apple's macOS. Unix systems are characterized by a modular design, sometimes called the "Unix philosophy"; this concept entails that the operating system provides a set of simple tools, each of which performs a limited, well-defined function, with a unified filesystem as the main means of communication, and a shell scripting and command language to combine the tools to perform complex workflows.
Unix distinguishes itself from its predecessors as the first portable operating system: the entire operating system is written in the C programming language, thus allowing Unix to reach numerous platforms. Unix was meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers; the system grew larger as the operating system started spreading in academic circles, and as users added their own tools to the system and shared them with colleagues. At first, Unix was not designed to be multi-tasking; it later gained portability, multi-tasking, and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of inter-process communication as files; and the use of many small software tools that can be strung together through a command-line interpreter using pipes. These concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves".
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as I/O was linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores, and network sockets were added to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes; the Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
Under Unix, the operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space, although in microkernel implementations, like MINIX or Redox, functions such as network protocols may run in user space.
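For instance, a user program requests these kernel services through system calls; the following minimal POSIX C sketch (the choice of ls as the child program is arbitrary) asks the kernel to start another program and then waits for it to stop:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();          /* kernel creates a new process        */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {              /* child: replace its image with ls    */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");        /* reached only if exec failed         */
            _exit(1);
        }
        waitpid(pid, NULL, 0);       /* parent: wait until the child exits  */
        return 0;
    }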
The origins of Unix date back to the mid-1960s, when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations but presented severe problems. Frustrated by the size and complexity of Multics, though not by its goals, individual researchers at Bell Labs started withdrawing from the project. The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing, and also without a name; it was a single-tasking system. In 1970, the group coined the name Unics, for Uniplexed Information and Computing Service, as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix. Dennis Ritchie, Doug McIlroy, and Peter G. Neumann also credit Kernighan. The operating system was originally written in assembly language, but in 1973 Version 4 Unix was rewritten in C. Version 4 Unix, however, still contained much PDP-11-dependent code and was not suitable for porting; the first port to another platform was made five years later, for the Interdata 8/32.