Computer programming is the process of designing and building an executable computer program for accomplishing a specific computing task. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and implementing algorithms in a chosen programming language. The source code of a program is written in one or more languages that are intelligible to programmers, rather than in machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task on a computer, often for solving a given problem. The process of programming thus requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. Tasks accompanying and related to programming include testing, source code maintenance, implementation of build systems, and management of derived artifacts such as the machine code of computer programs.
These might be considered part of the programming process, but the term software development is used for this larger process, with the terms programming, implementation, or coding reserved for the actual writing of code. Software engineering combines engineering techniques with software development practices. Reverse engineering is the opposite process. A hacker is any skilled computer expert who uses their technical knowledge to overcome a problem, though in common usage the word can mean a security hacker. Programmable devices have existed at least as far back as 1206 AD, when the automata of Al-Jazari were programmable, via pegs and cams, to play various rhythms and drum patterns. However, the first computer program is dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. Women would continue to dominate the field of computer programming until the mid-1960s. In the 1880s Herman Hollerith invented the concept of storing data in machine-readable form.
A control panel added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604 were programmed by control panels in a similar way. However, with the concept of the stored-program computer, introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine code was the language of early programs, written in the instruction set of the particular machine in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format, with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets have different assembly languages. Kathleen Booth created one of the first assembly languages in 1950 for various computers at Birkbeck College. High-level languages allow the programmer to write programs in terms that are syntactically richer and more capable of abstracting the code, making it targetable to varying machine instruction sets via compilation declarations and heuristics.
The first compiler for a programming language was developed by Grace Hopper. When Hopper went to work on UNIVAC in 1949, she brought the idea of using compilers with her. Compilers harness the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation, for example. FORTRAN, the first widely used high-level language to have a functional implementation, which permitted the abstraction of reusable blocks of code, came out in 1957. In 1951 Frances E. Holberton developed the first sort-merge generator, which ran on the UNIVAC I. Another woman working at UNIVAC, Adele Mildred Koss, developed a program that was a precursor to report generators. In the USSR, Kateryna Yushchenko developed the Address programming language for the MESM in 1955. The idea for the creation of COBOL started in 1959 when Mary K. Hawes, who worked for Burroughs Corporation, set up a meeting to discuss creating a common business language. She invited six people, including Grace Hopper.
Hopper was involved in developing COBOL as a business language and creating "self-documenting" programming. Hopper's contribution to COBOL was based on her programming language, called FLOW-MATIC. In 1961, Jean E. Sammet developed FORMAC and later published Programming Languages: History and Fundamentals, which went on to be a standard work on programming languages. Programs were still entered using punched cards or paper tape; see computer programming in the punch card era. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Frances Holberton created a code to allow keyboard inputs while she worked at UNIVAC. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. Sister Mary Kenneth Keller worked on developing the programming language BASIC while she was a graduate student at Dartmouth in the 1960s. Smalltalk, one of the first object-oriented programming languages, was developed by seven programmers, including Adele Goldberg, in the 1970s.
In 1985, Radia Perlman developed the Spanning Tree Protocol, which is fundamental to the operation of network bridges.
Subtraction is an arithmetic operation that represents the operation of removing objects from a collection. The result of a subtraction is called a difference. Subtraction is signified by the minus sign. For example, in the adjacent picture, there are 5 − 2 apples—meaning 5 apples with 2 taken away, leaving a total of 3 apples. Therefore, the difference of 5 and 2 is 3, that is, 5 − 2 = 3. Subtraction represents removing or decreasing physical and abstract quantities using different kinds of objects, including negative numbers, irrational numbers, decimals and matrices. Subtraction follows several important patterns: it is anticommutative, meaning that changing the order of the terms changes the sign of the answer. It is not associative, meaning that when one subtracts more than two numbers, the order in which subtraction is performed matters. Because 0 is the additive identity, subtraction of it does not change a number. Subtraction also obeys predictable rules concerning related operations such as addition and multiplication. All of these rules can be proven, starting with the subtraction of integers and generalizing up through the real numbers and beyond.
General binary operations that continue these patterns are studied in abstract algebra. Performing subtraction is one of the simplest numerical tasks, and subtraction of small numbers is accessible to young children. In primary education, students are taught to subtract numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. In advanced algebra and in computer algebra, an expression involving subtraction like A − B is treated as a shorthand notation for the addition A + (−B). Thus, A − B contains two terms, namely A and −B; this allows an easier use of commutativity and associativity. Subtraction is written using the minus sign "−" between the terms; the result is expressed with an equals sign. For example, 2 − 1 = 1, 4 − 2 = 2, 6 − 3 = 3, and 4 − 6 = −2. There are situations where subtraction is "understood" even though no symbol appears: a column of two numbers, with the lower number in red, indicates that the lower number in the column is to be subtracted, with the difference written below, under a line.
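The "A − B as A + (−B)" shorthand described above can be sketched in a few lines of Python; the function name `subtract` is an illustrative choice, not anything from the text:

```python
def subtract(a, b):
    """Subtract b from a by adding the additive inverse of b,
    mirroring the algebraic view of A - B as A + (-B)."""
    return a + (-b)

assert subtract(5, 2) == 3    # 5 - 2 = 3
assert subtract(4, 6) == -2   # 4 - 6 = -2
# Anticommutativity: reversing the terms negates the result.
assert subtract(2, 1) == -subtract(1, 2)
```

Treating every difference as a sum is what lets the commutativity and associativity of addition be applied freely to expressions containing minus signs.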
This is most common in accounting. Formally, the number being subtracted is known as the subtrahend, while the number it is subtracted from is the minuend; the result is the difference. All of this terminology derives from Latin. "Subtraction" is an English word derived from the Latin verb subtrahere, in turn a compound of sub "from under" and trahere "to pull". Using the gerundive suffix -nd results in "subtrahend", "thing to be subtracted". From minuere "to reduce or diminish", one gets "minuend", "thing to be diminished". Imagine a line segment of length b with the left end labeled a and the right end labeled c. Starting from a, it takes b steps to the right to reach c; this movement to the right is modeled mathematically by addition: a + b = c. From c, it takes b steps to the left to get back to a; this movement to the left is modeled by subtraction: c − b = a. Now consider a line segment labeled with the numbers 1, 2, and 3. From position 3, it takes no steps to the left to stay at 3, so 3 − 0 = 3; it takes 2 steps to the left to get to position 1, so 3 − 2 = 1.
This picture is inadequate to describe what would happen after going 3 steps to the left of position 3. To represent such an operation, the line must be extended. To subtract arbitrary natural numbers, one begins with a line containing every natural number. From 3, it takes 3 steps to the left to get to 0, so 3 − 3 = 0, but 3 − 4 is still invalid; the natural numbers alone are therefore not a fully satisfactory context for subtraction. The solution is to consider the integer number line. From 3, it takes 4 steps to the left to get to −1: 3 − 4 = −1. Subtraction of natural numbers is not closed; the difference is not a natural number unless the minuend is greater than or equal to the subtrahend. For example, 26 cannot be subtracted from 11 to give a natural number. Such a case uses one of two approaches: say that 26 cannot be subtracted from 11, or give the answer as an integer representing a negative number, so that the result of subtracting 26 from 11 is −15. Subtraction of real numbers is defined as addition of signed numbers: a number is subtracted by adding its additive inverse.
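The two approaches to the closure problem can be contrasted in Python; `natural_subtract`, which refuses to leave the natural numbers, is a hypothetical helper introduced here for illustration:

```python
def natural_subtract(minuend, subtrahend):
    """Subtraction restricted to the natural numbers: fails whenever
    the subtrahend exceeds the minuend, since the naturals are not
    closed under subtraction."""
    if subtrahend > minuend:
        raise ValueError("result would not be a natural number")
    return minuend - subtrahend

assert natural_subtract(26, 11) == 15
# 11 - 26 is invalid among the naturals, but fine among the integers:
assert 11 - 26 == -15
```

Python's built-in integers behave like the integer number line, so the second approach (answering with a negative number) is what ordinary arithmetic in the language does.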
We have 3 − π = 3 + (−π). This helps to keep the ring of real numbers "simple" by avoiding the introduction of "new" operators such as subtraction. Ordinarily a ring only has two operations defined on it; a ring has the concept of additive inverses, but it does not have any notion of a separate subtraction operation, so the use of signed addition as subtraction allows us to apply the ring axioms to subtraction without needing to prove anything. Subtraction is anti-commutative, meaning that if one reverses the terms in a difference left-to-right, the result is the negative of the original result. Symbolically, if a and b are any two numbers, then a − b = −(b − a). Subtraction is non-associative. Should the expression "a − b − c" be taken to mean (a − b) − c or a − (b − c)? These two interpretations give different answers; to resolve the ambiguity, by convention subtraction is evaluated left to right, so a − b − c = (a − b) − c.
In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised to produce that number x. In the simplest case, the logarithm counts repeated multiplication of the same factor; the logarithm of x to base b is denoted as logb(x). More generally, exponentiation allows any positive real number to be raised to any real power, always producing a positive result, so the logarithm for any two positive real numbers b and x, where b is not equal to 1, is always a unique real number y. More explicitly, the defining relation between exponentiation and logarithm is: logb(x) = y exactly if b^y = x. For example, log2(64) = 6, as 2^6 = 64. The logarithm to base 10 is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e as its base; the binary logarithm uses base 2 and is used in computer science. Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations.
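The defining relation logb(x) = y exactly if b^y = x, and the three standard bases mentioned above, can be checked directly with Python's standard `math` module:

```python
import math

# log_b(x) = y exactly when b**y == x; here b = 2, x = 64, y = 6.
assert math.log2(64) == 6.0
assert 2 ** math.log2(64) == 64.0   # exponentiation undoes the logarithm

# Common (base-10), natural (base-e), and binary (base-2) logarithms:
assert abs(math.log10(1000) - 3.0) < 1e-12
assert abs(math.log(math.e) - 1.0) < 1e-12
assert math.log2(8) == 3.0
```

The tolerances acknowledge that, away from exact powers, floating-point logarithms are only correct to within rounding error.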
They were adopted by navigators, scientists and others to perform computations more easily using slide rules and logarithm tables. Tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition because of the fact—important in its own right—that the logarithm of a product is the sum of the logarithms of the factors: logb(xy) = logb(x) + logb(y), provided that b, x and y are all positive and b ≠ 1. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century. Logarithmic scales reduce wide-ranging quantities to tiny scopes. For example, the decibel is a unit used to express ratios as logarithms, for signal power and amplitude. In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae and in measurements of the complexity of algorithms and of geometric objects called fractals; they help describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting.
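The product rule that made logarithm tables useful can be demonstrated in Python; the numbers 12 and 34 are arbitrary examples chosen here:

```python
import math

x, y = 12.0, 34.0

# The logarithm of a product is the sum of the logarithms of the factors.
assert abs(math.log10(x * y) - (math.log10(x) + math.log10(y))) < 1e-12

# Slide-rule / table-lookup multiplication: add the logs, exponentiate back.
product = 10 ** (math.log10(x) + math.log10(y))
assert abs(product - 408.0) < 1e-9
```

This is exactly the trick that replaced tedious multi-digit multiplication with two table look-ups and an addition.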
In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant. Addition, multiplication, and exponentiation are three fundamental arithmetic operations. Addition, the simplest of these, can be undone by subtraction: adding, say, 2 to 3 gives 5; the process of adding 2 can be undone by subtracting 2: 5 − 2 = 3. Multiplication, the next-simplest operation, can be undone by division: doubling a number x, i.e. multiplying x by 2, gives 2x, and to get back x it is necessary to divide by 2. For example, 2 ⋅ 3 = 6, and the process of multiplying by 2 is undone by dividing by 2: 6 / 2 = 3. The idea and purpose of logarithms is to undo the third fundamental arithmetic operation, namely raising a number to a certain power, an operation known as exponentiation. For example, raising 2 to the third power yields 8, because 8 is the product of three factors of 2: 2^3 = 2 × 2 × 2 = 8. The logarithm of 8 to base 2 is 3, reflecting the fact that 2 was raised to the third power to get 8.
This subsection contains a short overview of the exponentiation operation, which is fundamental to understanding logarithms. Raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b; the n-th power of b is written b^n, so that b^n = b × b × ⋯ × b (n factors). Exponentiation may be extended to b^y, where b is a positive number and the exponent y is any real number. For example, b^(−1) is the reciprocal of b, and raising b to the power 1/2 gives the square root of b. More generally, raising b to a rational power p/q, where p and q are integers, is given by b^(p/q) = (b^p)^(1/q), the q-th root of b^p. Any irrational number y can be approximated to arbitrary precision by rational numbers; this can be used to compute the y-th power of b: for example, √2 ≈ 1.414..., and b^√2 can be approximated to any desired accuracy by b^1.4, b^1.41, b^1.414, and so on.
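The extensions of exponentiation sketched above (negative, rational, and irrational exponents) can be checked numerically in Python; the base 2 and the approximation list are illustrative choices:

```python
import math

b = 2.0

# b**(-1) is the reciprocal; b**(1/2) is the square root.
assert b ** -1 == 0.5
assert abs(b ** 0.5 - math.sqrt(2.0)) < 1e-15

# b**(p/q) is the q-th root of b**p: here 2**(3/2) = sqrt(2**3) = sqrt(8).
assert abs(b ** 1.5 - math.sqrt(8.0)) < 1e-12

# An irrational exponent via rational approximations: 2**sqrt(2).
approximations = [1.4, 1.41, 1.414, 1.4142]
values = [b ** r for r in approximations]
assert abs(values[-1] - b ** math.sqrt(2.0)) < 1e-3
```

Each successive rational approximation of the exponent brings the computed power closer to the true irrational power, which is the idea behind defining b^y for real y.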
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning, and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. The concept of algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers, and in the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized as Algoritmi.
A partial formalization of what would become the modern concept of algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Later formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in the Latinizing of the name of Muhammad ibn Musa al-Khwarizmi, in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, English 'algorism', the corruption of his name, meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century. In English, algorism was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris. Which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals.
An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm to determine the greatest common divisor of two integers. Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough† to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only elementary operations on symbols.
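The Euclidean algorithm named above as the prototypical example can be written in a few lines of Python; this is the standard remainder-based formulation:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero; the last nonzero value
    is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

assert gcd(252, 105) == 21
assert gcd(17, 5) == 1   # coprime inputs give 1
```

It also illustrates the informal definition: a finite set of rules, applied to arbitrary input integers, that provably terminates with the desired output.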
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large. Thus an algorithm can be an algebraic equation such as y = m + n, with two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =, and "effectively" produce, in a "reasonable" time, an output-integer y at a specified place and in a specified format.
Etymology is the study of the history of words. By extension, the term "the etymology" means the origin of a particular word; for place names, there is a specific term, toponymy. For languages with a long written history, such as Greek, etymologists make use of texts, and texts about the language, to gather knowledge about how words were used during earlier periods and when they entered the language. Etymologists also apply the methods of comparative linguistics to reconstruct information about languages that are too old for any direct information to be available. By analyzing related languages with a technique known as the comparative method, linguists can make inferences about their shared parent language and its vocabulary. In this way, word roots have been found that can be traced all the way back to the origin of, for instance, the Indo-European language family. Though etymological research originally grew from the philological tradition, much current etymological research is done on language families where little or no early documentation is available, such as Uralic and Austronesian.
The word etymology derives from the Greek word ἐτυμολογία, itself from ἔτυμον, meaning "true sense", and the suffix -logia, denoting "the study of". In linguistics, the term etymon refers to a word or morpheme from which a later word derives. For example, the Latin word candidus, which means "white", is the etymon of English candid. Etymologists apply a number of methods to study the origins of words, some of which are: philological research, in which changes in the form and meaning of the word are traced with the aid of older texts, if such are available; making use of dialectological data, since the form or meaning of the word might show variations between dialects, which may yield clues about its earlier history; the comparative method, in which a systematic comparison of related languages lets etymologists detect which words derive from their common ancestor language and which were instead borrowed from another language; and the study of semantic change, in which etymologists must make hypotheses about changes in the meaning of particular words.
Such hypotheses are tested against the general knowledge of semantic shifts. For example, the assumption of a particular change of meaning may be substantiated by showing that the same type of change has occurred in other languages as well. Etymological theory recognizes that words originate through a limited number of basic mechanisms, the most important of which are language change and borrowing. While the origin of newly emerged words is often more or less transparent, it tends to become obscured through time due to sound change or semantic change. Due to sound change, it is not obvious that the English word set is related to the word sit, and it is even less obvious that bless is related to blood. Semantic change may also occur. For example, the English word bead originally meant "prayer"; it acquired its modern meaning through the practice of counting the recitation of prayers by using beads. English derives from Old English, a West Germanic variety, although its current vocabulary includes words from many languages. The Old English roots may be seen in the similarity of numbers in English and German: seven/sieben, eight/acht, nine/neun, ten/zehn.
Pronouns are also cognate: I/mine/me and ich/mein/mich. However, language change has eroded many grammatical elements, such as the noun case system, which is greatly simplified in modern English, and certain elements of vocabulary, some of which are borrowed from French. Although many of the words in the English lexicon come from Romance languages, most of the common words used in English are of Germanic origin. When the Normans conquered England in 1066, they brought their Norman language with them. During the Anglo-Norman period, which united insular and continental territories, the ruling class spoke Anglo-Norman, while the peasants spoke the vernacular English of the time. Anglo-Norman was the conduit for the introduction of French into England, aided by the circulation of Langue d'oïl literature from France. This led to many paired words of French and English origin. For example, beef is related, through borrowing, to modern French bœuf, veal to veau, pork to porc, and poultry to poulet. All these words, French and English, refer to the meat rather than to the animal.
Words that refer to farm animals, on the other hand, tend to be cognates of words in other Germanic languages. For example: swine/Schwein, cow/Kuh, calf/Kalb, sheep/Schaf. The variant usage has been explained by the proposition that it was the Norman rulers who ate the meat and the Anglo-Saxons who farmed the animals, though this explanation has been disputed. English has proved accommodating to words from many languages. Scientific terminology, for example, relies on words of Latin and Greek origin, but there are a great many non-scientific examples. Spanish has contributed many words, particularly in the southwestern United States; examples include buckaroo and rodeo, and state names such as Colorado and Florida. Albino, lingo and coconut come from Portuguese. Modern French has contributed café, naive and many more. Smorgasbord and slalom come from Scandinavian languages.
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics; the language of set theory can be used to define nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. After the discovery of paradoxes in naive set theory, such as Russell's paradox, numerous axiom systems were proposed in the early twentieth century, of which the Zermelo–Fraenkel axioms, with or without the axiom of choice, are the best-known. Set theory is commonly employed as a foundational system for mathematics in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right, with an active research community. Contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals.
Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor: "On a Property of the Collection of All Real Algebraic Numbers". Since the 5th century BC, beginning with Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East, mathematicians had struggled with the concept of infinity. Especially notable is the work of Bernard Bolzano in the first half of the 19th century. The modern understanding of infinity began in 1870–1874 and was motivated by Cantor's work in real analysis. An 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day. While Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. Cantorian set theory eventually became widespread, due to the utility of Cantorian concepts, such as one-to-one correspondence among sets, his proof that there are more real numbers than integers, and the "infinity of infinities" resulting from the power set operation.
This utility of set theory led to the article "Mengenlehre", contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia. The next wave of excitement in set theory came around 1900, when it was discovered that some interpretations of Cantorian set theory gave rise to several contradictions, called antinomies or paradoxes. Bertrand Russell and Ernst Zermelo independently found the simplest and best-known paradox, now called Russell's paradox: consider "the set of all sets that are not members of themselves", which leads to a contradiction since it must be a member of itself and not a member of itself. In 1899 Cantor had himself posed the question "What is the cardinal number of the set of all sets?", and obtained a related paradox. Russell used his paradox as a theme in his 1903 review of continental mathematics in his The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by husband and wife William Henry Young and Grace Chisholm Young, published by Cambridge University Press.
The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment. The work of Zermelo in 1908 and the work of Abraham Fraenkel and Thoralf Skolem in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory; the work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory, which has since become woven into the fabric of modern mathematics. Set theory is commonly used as a foundational system, although in some areas—such as algebraic geometry and algebraic topology—category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member (or element) of A, the notation o ∈ A is used. Since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B, denoted A ⊆ B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself.
For cases where this possibility is unsuitable or would make sense to be rejected, the term proper subset is defined: A is called a proper subset of B if and only if A is a subset of B, but A is not equal to B. Note that 1, 2, and 3 are members of the set {1, 2, 3}, but are not subsets of it; in turn, the subsets, such as {1}, are not members of the set {1, 2, 3}. Just as arithmetic features binary operations on numbers, set theory features binary operations on sets. The union of the sets A and B, denoted A ∪ B, is the set of all objects that are a member of A, or B, or both; the union of {1, 2, 3} and {2, 3, 4} is the set {1, 2, 3, 4}. The intersection of the sets A and B, denoted A ∩ B, is the set of all objects that are members of both A and B; the intersection of {1, 2, 3} and {2, 3, 4} is the set {2, 3}. The set difference of U and A, denoted U \ A, is the set of all members of U that are not members of A; the set difference {1, 2, 3} \ {2, 3, 4} is {1}, while, conversely, the set difference {2, 3, 4} \ {1, 2, 3} is {4}. When A is a subset of U, the set difference U \ A is also called the complement of A in U. In this case, if the choice of U is clear from the context, the notation Ac is sometimes used instead of U \ A, particularly if U is a universal set as in the study of Venn diagrams.
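These binary operations map directly onto Python's built-in `set` type; the particular sets A, B, and the universe U are illustrative choices:

```python
A = {1, 2, 3}
B = {2, 3, 4}
U = {1, 2, 3, 4, 5}   # a universal set for the complement example

assert A | B == {1, 2, 3, 4}   # union A ∪ B
assert A & B == {2, 3}         # intersection A ∩ B
assert A - B == {1}            # set difference A \ B
assert B - A == {4}            # set difference B \ A
assert A <= U                  # A is a subset of U (A ⊆ U)
assert U - A == {4, 5}         # complement of A in U
```

Python also distinguishes subset (`<=`) from proper subset (`<`), matching the proper-subset definition given above.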
The symmetric difference of sets A and B, denoted A △ B or A ⊖ B, is the set of all objects that are a member of exactly one of A and B, that is, elements which are in one of the sets but not in both. For instance, for the sets {1, 2, 3} and {2, 3, 4}, the symmetric difference is {1, 4}.