The Linux Programming Interface
The Linux Programming Interface: A Linux and UNIX System Programming Handbook is a book written by Michael Kerrisk, which documents the APIs of the Linux kernel and of the GNU C Library. It covers a wide array of topics dealing with the Linux operating system and operating systems in general, and provides a brief history of Unix and how it led to the creation of Linux. The book includes many samples of code written in the C programming language and provides learning exercises at the end of many chapters. Kerrisk is a former writer for Linux Weekly News and the current maintainer of the Linux man-pages project. The Linux Programming Interface is regarded as the definitive work on Linux system programming and has been translated into several languages. Jake Edge, writer for LWN.net, said in his review of the book: "I found it to be useful and expect to return to it frequently. Anyone who has an interest in programming for Linux will feel the same way." Federico Lucifredi, the product manager for the SUSE Linux Enterprise and openSUSE distributions, praised the book, saying that "The Linux Programming Encyclopedia would have been an adequate title for it in my opinion" and calling it "…a work of encyclopedic breadth and depth, spanning in great detail concepts spread in a multitude of medium-sized books…" Lennart Poettering, the software engineer best known for PulseAudio and systemd, advises people to "get yourself a copy of The Linux Programming Interface, ignore everything it says about POSIX compatibility and hack away your amazing Linux software".
At FOSDEM 2016, Michael Kerrisk, the author of The Linux Programming Interface, explained some of the issues that he and others perceive with the Linux kernel's user-space API. It is littered with design errors: APIs that are non-extensible, overly complex, limited in purpose, or in violation of standards, and that are inconsistent with one another. Most of those mistakes cannot be fixed, because doing so would break the ABI that the kernel presents to user-space binaries.
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code—a form consisting of instructions that the computer can directly execute. Alternatively, a computer program may be executed with the aid of an interpreter. A collection of computer programs and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software and system software; the underlying method used for some calculation or manipulation is known as an algorithm. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards. Patterns could be repeated by arranging the cards.
In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled; the device would have had a "store"—memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the "store" would have been transferred to the "mill" for processing, and a "thread" was the execution of programmed instructions by the device. It was programmed using two sets of perforated cards—one to direct the operation and the other for the input variables. However, after more than £17,000 of the British government's money had been spent, the thousands of cogged wheels and gears never worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea; the memoir covered the Analytical Engine. The translation contained Note G, which detailed a method for calculating Bernoulli numbers using the Analytical Engine.
This note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation that can be performed on a Turing complete computing machine; its control unit is a finite-state machine. The machine can move the tape back and forth, changing its contents as it performs an algorithm; the machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. This machine is considered by some to be the origin of the stored-program computer, used by John von Neumann for the "Electronic Computing Instrument" that now bears the von Neumann architecture name. The Z3 computer, invented by Konrad Zuse in Germany, was a programmable digital computer; a digital computer uses electricity as the calculating component. The Z3 contained 2,400 relays to create the circuits, which provided a floating-point, nine-instruction computer. Programming the Z3 was done through a specially designed keyboard and punched tape.
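The cycle described above for a Turing machine (read a symbol, consult a transition table, write, move the head, and change state until the halt state is reached) can be sketched in a few lines of Python. The machine below simply inverts a string of bits; the state names and transition table are illustrative examples, not taken from Turing's paper.

```python
def run_turing_machine(tape):
    # (state, symbol) -> (symbol to write, head movement, next state)
    transitions = {
        ("invert", "0"): ("1", +1, "invert"),
        ("invert", "1"): ("0", +1, "invert"),
        ("invert", "_"): ("_", 0, "halt"),  # blank cell: stop
    }
    cells = list(tape) + ["_"]   # the tape, with "_" as the blank symbol
    head, state = 0, "invert"    # initial head position and initial state
    while state != "halt":       # step until the halt state is entered
        symbol = cells[head]
        write, move, state = transitions[(state, symbol)]
        cells[head] = write      # change the tape's contents...
        head += move             # ...and move the head along the tape
    return "".join(cells).rstrip("_")

print(run_turing_machine("1011"))  # prints "0100"
```

The transition table is the whole "program": a different table on the same loop yields a different machine, which is the observation the Universal Turing machine formalizes.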
The Electronic Numerical Integrator And Computer (ENIAC) was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied 1,800 square feet, and consumed $650 per hour in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables needed to be rolled to fixed function panels and connected to them using heavy black cables; each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. The programmers of the ENIAC were women who were known collectively as the "ENIAC girls." The ENIAC featured parallel operations: different sets of accumulators could work on different algorithms. It used punched card machines for input and output, and it was controlled with a clock signal. It ran for eight years, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.
The Manchester Baby was a stored-program computer. Programming transitioned away from setting dials. Only three bits of memory were available to store each instruction, so it was limited to eight instructions; 32 switches were available for programming. Computers of this era were programmed from a front panel: the computer program was written on paper for reference, and an instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed; this process was repeated for each instruction. Computer programs were also manually input via paper tape or punched cards. After the medium was loaded, the starting address was set via switches and the execute button was pressed. In 1961, the Burroughs B5000 was built to be programmed in the ALGOL 60 language; the hardware featured circuits to ease the compile phase. In 1964, the IBM System/360 was a line of six computers, each having the same instruction set architecture; the Model 30 was the least expensive. Customers could retain the same application software across models; each System/360 model featured multiprogramming.
With operating system support, multiple programs could be in memory at once; when one was waiting for input/output, another could compute. Each model could also emulate other computers, so customers could upgrade to the System/360 and retain their existing application software.
Michael Kerrisk is a technical author, programmer and, since 2004, maintainer of the Linux man-pages project, succeeding Andries Brouwer. He was born in 1961 in New Zealand and lives in Munich, Germany. Kerrisk has worked for Digital Equipment, The Linux Foundation and, as an editor and writer, for LWN.net. He works as a freelance consultant and trainer, and is best known for his book The Linux Programming Interface, published by No Starch Press in 2010. This book is regarded as the definitive work on Linux system programming and has been translated into several languages. As the maintainer of the Linux man-pages project, Kerrisk has authored or co-authored about a third of the man pages and worked on improving the project's infrastructure. For his contributions he received a Special Award at the 2016 New Zealand Open Source Awards.
Computing is any activity that uses computers. It includes developing hardware and software, and using computers to manage and process information and to entertain. Computing is a critically important, integral component of modern industrial technology. Major computing disciplines include computer engineering, software engineering, computer science, information systems, and information technology. The ACM Computing Curricula 2005 defined "computing" as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; the list is endless, the possibilities are vast." It defines five sub-disciplines of the computing field: computer science, computer engineering, information systems, information technology, and software engineering. However, Computing Curricula 2005 also recognizes that the meaning of "computing" depends on the context: computing has other meanings that are more specific, based on the context in which the term is used.
For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult; because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The term "computing" has sometimes been narrowly defined, as in a 1989 ACM report on Computing as a Discipline: the discipline of computing is the systematic study of algorithmic processes that describe and transform information: their theory, design, efficiency and application. The fundamental question underlying all computing is "What can be automated?" The term "computing" is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. The history of computing is longer than the history of computing hardware and modern computing technology; it includes the history of methods intended for pen and paper or for chalk and slate, with or without the aid of tables.
Computing is intimately tied to the representation of numbers. But long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization; these concepts include one-to-one correspondence, comparison to a standard, and the 3-4-5 right triangle. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC; its original style of usage was by lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today; the abacus was the first known calculation aid, preceding Greek methods by 2,000 years. The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program.
The program has an executable form. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions is converted to machine instructions according to the central processing unit type. The execution process carries out the instructions in a computer program; each instruction triggers sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just "software", is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs and procedures, together with its documentation, concerned with the operation of a data processing system.
Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the older term hardware; in contrast to hardware, software is intangible. Software is sometimes used in a more narrow sense, meaning application software only. Application software, also known as an "application" or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software, and media players. Many application programs deal principally with documents. Apps may be published separately, and some users need never install one. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user.
Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra and functional programming. The term was introduced by Benjamin Peirce in the context of elements of algebras that remain invariant when raised to a positive integer power; it means "(the quality of having) the same power", from idem + potence (same + power). An element x of a magma (M, •) is said to be idempotent if x • x = x. If all elements are idempotent with respect to •, then • itself is called idempotent; the formula ∀x, x • x = x is called the idempotency law for •. The natural number 1 is an idempotent element with respect to multiplication, and so is 0, but no other natural number is. For the latter reason, multiplication of natural numbers is not an idempotent operation. More formally, in the monoid of the natural numbers under multiplication, the idempotent elements are just 0 and 1. In a magma, an identity element e or an absorbing element a, if it exists, is idempotent.
Indeed, e • e = e and a • a = a. In a group (G, •), the identity element e is the only idempotent element: if x is an element of G such that x • x = x, then x • x = x • e, and x = e follows by multiplying on the left by the inverse element of x. Taking the intersection x ∩ y of two sets x and y is an idempotent operation, since x ∩ x always equals x; this means that the idempotency law ∀x, x ∩ x = x is true. Taking the union of two sets is likewise an idempotent operation. Formally, in the monoids (P(E), ∪) and (P(E), ∩) of the power set of the set E with set union ∪ and set intersection ∩, all elements are idempotent. In the monoids ({0, 1}, ∨) and ({0, 1}, ∧) of the Boolean domain with logical disjunction ∨ and logical conjunction ∧, all elements are idempotent. In a Boolean ring, multiplication is idempotent. In the monoid of functions from a set E to a subset F of E with function composition ∘, the idempotent elements are the functions f: E → F such that f ∘ f = f, in other words such that f(f(x)) = f(x) for all x in E. For example, taking the absolute value abs of an integer x is an idempotent function, because abs(abs(x)) = abs(x) is true for every integer x.
This means that abs ∘ abs = abs holds, that is, abs is an idempotent element with respect to function composition. Therefore, abs satisfies the above definition of an idempotent function. Other examples include: the identity function is idempotent. If the set E has n elements, we can partition it into k chosen fixed points and n − k non-fixed points under f, and then k^(n−k) is the number of different idempotent functions with exactly those fixed points. Hence, taking into account all possible partitions, ∑_{k=0}^{n} C(n, k) k^(n−k) is the total number of possible idempotent functions on the set. The integer sequence of the number of idempotent functions, as given by the sum above for n = 0, 1, 2, 3, 4, 5, 6, 7, 8, …, starts with 1, 1, 3, 10, 41, 196, 1057, 6322, 41393, …. Neither the property of being idempotent nor that of not being idempotent is preserved under function composition. As an example of the former, f(x) = x mod 3 and g(x) = max(x, 5) are both idempotent, but f ∘ g is not, although g ∘ f happens to be. As an example of the latter, the negation function ¬ on the Boolean domain is not idempotent, but ¬ ∘ ¬ is.
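The counting formula above can be checked by brute force on small sets. This short Python sketch (the function names are mine) enumerates every function on an n-element set and compares the count of idempotent ones with the sum over fixed-point partitions.

```python
from itertools import product
from math import comb

def count_idempotent_bruteforce(n):
    # Enumerate all n**n functions on {0, ..., n-1}, each represented
    # as a tuple f where f[x] is the image of x, and count those
    # satisfying f(f(x)) == f(x) for every x.
    return sum(
        1
        for f in product(range(n), repeat=n)
        if all(f[f[x]] == f[x] for x in range(n))
    )

def count_idempotent_formula(n):
    # Sum over the number k of fixed points: C(n, k) * k**(n - k).
    # (Python evaluates 0**0 as 1, which is the convention needed here.)
    return sum(comb(n, k) * k ** (n - k) for k in range(n + 1))

for n in range(6):
    assert count_idempotent_bruteforce(n) == count_idempotent_formula(n)
print([count_idempotent_formula(n) for n in range(9)])
# prints [1, 1, 3, 10, 41, 196, 1057, 6322, 41393]
```

The brute-force check is limited to small n because the search space grows as n^n; the closed-form sum reproduces the sequence quoted in the text.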
Unary negation − of real numbers is not idempotent, but − ∘ − is. In computer science, the term idempotence may have a different meaning depending on the context in which it is applied. In imperative programming, a subroutine with side effects is idempotent if the system state remains the same after one or several calls; in other words, the function from the system state space to itself associated with the subroutine is idempotent in the mathematical sense given in the definition above. This is a useful property in many situations, as it means that an operation can be repeated or retried as often as necessary without causing unintended effects. With non-idempotent operations, the algorithm may have to keep track of whether the operation was performed or not. A function looking up a customer's name and address in a database is idempotent, since this will not cause the database to change. Changing a customer's address to a given value is idempotent, because the final address will be the same no matter how many times it is submitted.
However, placing an order for a cart for the customer is not idempotent, since running the call several times will place several orders.
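The contrast can be sketched with a toy in-memory store; the class and method names below are hypothetical, chosen only to mirror the customer examples above. Retrying the address update is harmless, while retrying the order placement duplicates state.

```python
class CustomerStore:
    """Toy in-memory store illustrating idempotent vs. non-idempotent calls."""

    def __init__(self):
        self.addresses = {}  # customer id -> address
        self.orders = []     # list of (customer id, item) records

    def get_address(self, cid):
        # Read-only lookup: idempotent, never changes the store.
        return self.addresses.get(cid)

    def set_address(self, cid, addr):
        # Overwrites with a fixed value: idempotent, since repeating
        # the call leaves the final state unchanged.
        self.addresses[cid] = addr

    def place_order(self, cid, item):
        # Appends a new record on every call: NOT idempotent.
        self.orders.append((cid, item))

store = CustomerStore()
store.set_address(42, "1 Main St")
store.set_address(42, "1 Main St")   # a retry changes nothing
print(store.addresses)               # prints {42: '1 Main St'}

store.place_order(42, "book")
store.place_order(42, "book")        # a retry places a second order
print(len(store.orders))             # prints 2
```

This is why retry logic can blindly re-issue idempotent calls, while non-idempotent calls need deduplication, for example an order identifier checked before appending.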
Chris Wright (programmer)
Chris Wright is a Linux kernel developer and the vice president and chief technology officer at Red Hat. He is the current Linux kernel co-maintainer for the -stable branch with Greg Kroah-Hartman, is involved in Linux kernel security related topics, and is the maintainer of the LSM framework. Wright serves as the Chair of the OpenDaylight Project Board of Directors.
In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler to deal with the event. This interruption is temporary: after the interrupt handler finishes, the processor resumes normal activities. There are two types of interrupts: hardware interrupts and software interrupts. Hardware interrupts are used by devices to communicate that they require attention from the operating system. Internally, hardware interrupts are implemented using electronic alerting signals that are sent to the processor from an external device, either a part of the computer itself, such as a disk controller, or an external peripheral. For example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position.
Unlike the software type, hardware interrupts are asynchronous and can occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ). A software interrupt is caused either by an exceptional condition in the processor itself, or by a special instruction in the instruction set which causes an interrupt when it is executed. The former is called a trap or exception and is used for errors or events occurring during program execution that are exceptional enough that they cannot be handled within the program itself. For example, a divide-by-zero exception will be thrown if the processor's arithmetic logic unit is commanded to divide a number by zero, since the operation has no defined result. The operating system will catch this exception and can decide what to do about it, for example aborting the process and displaying an error message. Software interrupt instructions can function similarly to subroutine calls and are used for a variety of purposes, such as requesting services from device drivers, like interrupts sent to and from a disk controller to request reading or writing of data to and from the disk.
Each interrupt has its own interrupt handler. The number of hardware interrupts is limited by the number of interrupt request lines to the processor, but there may be hundreds of different software interrupts. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing; such a system is said to be interrupt-driven. Interrupts are similar to signals, the difference being that signals are used for inter-process communication, mediated by the kernel and handled by processes, while interrupts are mediated by the processor and handled by the kernel; the kernel may pass an interrupt on as a signal to a process. Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops that wait for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions. Interrupts may be implemented in hardware as a distinct system with control lines, or they may be integrated into the memory subsystem.
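The signal analogy can be observed from user space on a Unix-like system with Python's standard signal module. Here the process asks the kernel to deliver SIGUSR1 to itself, the registered handler runs much as an interrupt handler would, and normal execution then resumes; the handler and list names are illustrative.

```python
import os
import signal

caught = []  # records the signals the handler has seen

def handler(signum, frame):
    # Runs when the kernel delivers the signal, analogous to an
    # interrupt handler; control then returns to the interrupted code.
    caught.append(signum)

# Install the handler, then ask the kernel to deliver SIGUSR1 to this
# very process, loosely analogous to a device raising an interrupt line.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)

print(caught == [signal.SIGUSR1])  # prints True: the handler ran once
```

Note that SIGUSR1 is Unix-specific, and CPython runs signal handlers only in the main thread between bytecode instructions, which mirrors the text's point that delivery is mediated by the kernel rather than the hardware.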
If implemented in hardware, an interrupt controller circuit such as the IBM PC's Programmable Interrupt Controller (PIC) may be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space. Interrupts can be categorized into these different types: Maskable interrupt: a hardware interrupt that may be ignored by setting a bit in an interrupt mask register's bit-mask. Non-maskable interrupt (NMI): a hardware interrupt that lacks an associated bit-mask, so that it can never be ignored. NMIs are used for the highest-priority tasks, such as timers, especially watchdog timers. Inter-processor interrupt (IPI): a special case of interrupt, generated by one processor to interrupt another processor in a multiprocessor system. Software interrupt: an interrupt generated within a processor by executing an instruction. Software interrupts are often used to implement system calls, because they result in a subroutine call with a CPU ring level change.
Spurious interrupt: an unwanted hardware interrupt. Spurious interrupts are generated by system conditions such as electrical interference on an interrupt line or incorrectly designed hardware. Processors have an internal interrupt mask which allows software to ignore all external hardware interrupts while it is set. Setting or clearing this mask may be faster than accessing an interrupt mask register in a PIC or disabling interrupts in the device itself. In some cases, such as the x86 architecture, disabling and enabling interrupts on the processor itself act as a memory barrier. An interrupt that leaves the machine in a well-defined state is called a precise interrupt; such an interrupt has four properties: The program counter (PC) is saved in a known place. All instructions before the one pointed to by the PC have fully executed. No instruction beyond the one pointed to by the PC has been executed, or any such instructions are undone before handling the interrupt. The execution state of the instruction pointed to by the PC is known.
An interrupt that does not meet these requirements is called an imprecise interrupt.
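The mask-register mechanism for maskable interrupts described earlier amounts to simple bit manipulation. The sketch below models a hypothetical 8-bit mask register in Python; the IRQ line assignments are invented for illustration and do not correspond to any real controller.

```python
# Hypothetical 8-bit interrupt mask register: bit i set means IRQ i is masked.
TIMER_IRQ, KEYBOARD_IRQ, DISK_IRQ = 0, 1, 2

def mask_irq(mask, line):
    return mask | (1 << line)      # set the bit: ignore this IRQ line

def unmask_irq(mask, line):
    return mask & ~(1 << line)     # clear the bit: allow this IRQ line

def is_delivered(mask, line):
    # A pending request is delivered only if its mask bit is clear.
    return (mask >> line) & 1 == 0

mask = 0b00000000                          # nothing masked initially
mask = mask_irq(mask, KEYBOARD_IRQ)
print(is_delivered(mask, KEYBOARD_IRQ))    # prints False: masked off
print(is_delivered(mask, TIMER_IRQ))       # prints True: still delivered
mask = unmask_irq(mask, KEYBOARD_IRQ)
print(is_delivered(mask, KEYBOARD_IRQ))    # prints True again
```

A non-maskable interrupt, by contrast, simply has no bit in such a register, which is why it can never be filtered out this way.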