1.
Distributivism
–
In the early 21st century, some observers speculated on Pope Francis's position on distributism after his denouncement of unfettered capitalism in his apostolic exhortation Evangelii gaudium. Distributism advocates a society marked by widespread property ownership; co-operative economist Race Mathews argues that such a system is key to bringing about a just social order. Distributism has often been described in opposition to both socialism and capitalism, which distributists see as equally flawed and exploitative. Thomas Storck argues that both socialism and capitalism are products of the European Enlightenment and are thus modernizing and anti-traditional forces. Further, some distributists argue that socialism is the logical conclusion of capitalism, as capitalism's concentrated powers eventually capture the state, resulting in a form of socialism. In contrast, distributism seeks to subordinate economic activity to human life as a whole. Particularly influential in the development of distributist theory were the Catholic authors G. K. Chesterton and Hilaire Belloc, the "Chesterbelloc", two of distributism's earliest and strongest proponents. The mid-to-late 19th century witnessed an increase in the popularity of political Catholicism across Europe; according to historian Michael A. Riff, a common feature of these movements was opposition not only to secularism but also to both capitalism and socialism. Common and government property ownership was expressly dismissed as a means of helping the poor. Around the start of the 20th century, Chesterton and Belloc began to articulate distributism as a distinct position. In the United States in the 1930s, distributism was treated in numerous essays by Chesterton, Belloc and others in The American Review. Pivotal among Belloc's and Chesterton's works on distributism are The Servile State and The Outline of Sanity.
It also influenced the thought behind the Antigonish Movement, which implemented cooperatives, and its practical implementation in the form of local cooperatives has been documented by Race Mathews in his 1999 book Jobs of Our Own: Building a Stakeholder Society. The position of distributists when compared to other political philosophies is somewhat paradoxical: it converges with certain elements of traditional Toryism, especially an appreciation of the Middle Ages and of organic society. Much of Dorothy L. Sayers's writing on social and economic matters also has affinity with distributism. Under such a system, most people would be able to earn a living without having to rely on the use of the property of others to do so. Examples of people earning a living in this way would be farmers who own their own land and related machinery, plumbers who own their own tools, software developers who own their own computer, and so on. The cooperative approach advances beyond this perspective to recognise that such property and equipment may be co-owned by local communities larger than a family. Chesterton set out his views in his 1910 book What's Wrong with the World. Chesterton believed that whilst God has limitless capabilities, man has limited abilities in terms of creation; as such, man is entitled to own property and to treat it as he sees fit. He states: "Property is merely the art of the democracy. It means that every man should have something that he can shape in his own image, as he is shaped in the image of heaven. But because he is not God, but only an image of God, his self-expression must deal with limits; properly with limits that are strict and even small." Chesterton summed up his distributist views in the phrase "Three acres and a cow".
2.
Abstract algebra
–
In algebra, which is a broad division of mathematics, abstract algebra is the study of algebraic structures. Algebraic structures include groups, rings, fields, modules, vector spaces and lattices. The term abstract algebra was coined in the early 20th century to distinguish this area of study from the other parts of algebra. Algebraic structures, with their associated homomorphisms, form mathematical categories; category theory is a formalism that allows a unified way of expressing properties and constructions that are similar for various structures. Universal algebra is a related subject that studies types of algebraic structures as single objects; for example, the structure of groups is a single object in universal algebra. As in other parts of mathematics, concrete problems and examples have played important roles in the development of abstract algebra. Through the end of the nineteenth century, many – perhaps most – of these problems were in some way related to the theory of algebraic equations. Numerous textbooks in abstract algebra start with axiomatic definitions of various algebraic structures. This creates the impression that in algebra axioms had come first and then served as a motivation; the true order of development was almost exactly the opposite. For example, the hypercomplex numbers of the nineteenth century had kinematic and physical motivations. An archetypical example of this progressive synthesis can be seen in the history of group theory. There were several threads in the early development of group theory, in modern language loosely corresponding to number theory, theory of equations, and geometry. Leonhard Euler considered algebraic operations on numbers modulo an integer, that is, modular arithmetic. Lagrange's goal was to understand why equations of third and fourth degree admit formulae for solutions, and he identified as key objects permutations of the roots. An important novel step taken by Lagrange in this paper was the abstract view of the roots, i.e., as symbols rather than as numbers.
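As an illustrative sketch (not taken from the text), the modular arithmetic that Euler considered already exhibits the group structure that abstract algebra later axiomatized. The modulus n = 7 below is an arbitrary choice:

```python
n = 7
elements = list(range(n))

def add_mod(a, b):
    """Addition modulo n."""
    return (a + b) % n

# Closure: the result always stays inside {0, ..., n-1}.
assert all(add_mod(a, b) in elements for a in elements for b in elements)

# Associativity holds for every triple of elements.
assert all(add_mod(add_mod(a, b), c) == add_mod(a, add_mod(b, c))
           for a in elements for b in elements for c in elements)

# 0 is the identity, and every element a has an inverse, (n - a) mod n.
assert all(add_mod(a, 0) == a for a in elements)
assert all(add_mod(a, (n - a) % n) == 0 for a in elements)
```

These checks verify exactly the group axioms: closure, associativity, identity and inverses.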
However, he did not consider composition of permutations. Serendipitously, the first edition of Edward Waring's Meditationes Algebraicae appeared in the same year, with an expanded version published in 1782. Waring proved the theorem on symmetric functions, and specially considered the relation between the roots of a quartic equation and its resolvent cubic. Kronecker claimed in 1888 that the study of modern algebra began with this first paper of Vandermonde; Cauchy stated quite clearly that Vandermonde had priority over Lagrange for this remarkable idea, which eventually led to the study of group theory. Paolo Ruffini was the first person to develop the theory of permutation groups; his goal was to establish the impossibility of an algebraic solution to a general algebraic equation of degree greater than four.
3.
Formal logic
–
Logic, originally meaning "the word" or "what is spoken", is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a specific relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics; more recently it has also been studied in computer science, linguistics and psychology. The concept of form is central to logic: the validity of an argument is determined by its logical form, not by its content. Traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural language arguments; the study of fallacies is an important branch of informal logic. Since much informal argument is not strictly speaking deductive, on some conceptions of logic, informal logic is not logic at all. Formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as an application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known study of logic. Modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference; it is divided into two main branches, propositional logic and predicate logic. Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference.
Simply put, formalising means translating English sentences into the language of logic; this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form. Among other steps, certain parts of the sentence must be replaced with schematic letters. Thus, for example, the expression "all Ps are Qs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on. The schema can further be condensed into the formula A(P,Q), where the letter A indicates the judgement "all - are -". The importance of form was recognised from ancient times.
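A hypothetical sketch of the point being made: once the sentences are translated into a formal representation (here, Python sets), the single abstract schema "all Ps are Qs" can be checked by one rule, regardless of the concrete content. The sets and names below are invented for illustration:

```python
def all_are(P, Q):
    """The schema A(P,Q): every member of P is also a member of Q."""
    return P <= Q  # set inclusion

# Two concrete instances of the same logical form.
men = {"socrates", "plato"}
mortals = {"socrates", "plato", "fido"}
cats = {"tom", "felix"}
carnivores = {"tom", "felix", "rex"}

# One abstract rule covers both sentences.
assert all_are(men, mortals)        # "all men are mortals"
assert all_are(cats, carnivores)    # "all cats are carnivores"
assert not all_are(mortals, men)    # the converse does not follow
```

The point is that validity depends only on the form A(P,Q), not on whether P happens to contain men or cats.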
4.
Boolean algebra (structure)
–
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets; it is also a special case of a De Morgan algebra and a Kleene algebra. The term Boolean algebra honors George Boole, a self-educated English mathematician. Boole's formulation differs from that described above in some important respects; for example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing. A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. It follows from the last three pairs of axioms above, or from the absorption axiom, that a = b ∧ a if and only if a ∨ b = b. The relation ≤, defined by a ≤ b if these equivalent conditions hold, is a partial order with least element 0. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra, one obtains another Boolean algebra with the same elements. Furthermore, every possible input-output behavior can be modeled by a suitable Boolean expression. In the Boolean algebra of subsets of a set S, the smallest element 0 is the empty set and the largest element 1 is the set S itself. Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra. This construction yields a Boolean algebra; it is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Interval algebras are useful in the study of Lindenbaum-Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice; this lattice is a Boolean algebra if and only if n is square-free.
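The divisor example can be made concrete. The following sketch (the choice n = 30 is illustrative; any square-free n works) checks the Boolean-algebra laws for the divisors of 30 ordered by divisibility, with meet given by gcd, join by lcm, and the complement of a by n/a:

```python
from math import gcd

n = 30  # square-free: 30 = 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]

def meet(a, b):
    return gcd(a, b)            # infimum under divisibility

def join(a, b):
    return a * b // gcd(a, b)   # lcm: supremum under divisibility

def complement(a):
    return n // a

# Complement laws: a meet a' is the bottom element 1, a join a' is the top element n.
assert all(meet(a, complement(a)) == 1 for a in divisors)
assert all(join(a, complement(a)) == n for a in divisors)

# One distributive law, checked on all triples of divisors.
assert all(meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
           for a in divisors for b in divisors for c in divisors)
```

Were n not square-free (say n = 12), the complement laws would fail, matching the if-and-only-if condition stated above.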
5.
Elementary algebra
–
Elementary algebra encompasses some of the basic concepts of algebra, one of the main branches of mathematics. It is typically taught to secondary school students and builds on their understanding of arithmetic. Whereas arithmetic deals with specified numbers, algebra introduces quantities without fixed values, known as variables. This use of variables entails a use of algebraic notation and an understanding of the general rules of the operators introduced in arithmetic. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Many quantitative relationships in science and mathematics are expressed as algebraic equations. Algebraic notation describes how algebra is written. It follows certain rules and conventions, and has its own terminology. A term is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators. By convention, letters at the beginning of the alphabet are used to represent constants; they are usually written in italics. Algebraic operations work in the same way as arithmetic operations, such as addition, subtraction, multiplication, division and exponentiation, and are applied to algebraic variables and terms. Multiplication symbols are usually omitted, and implied when there is no space between two variables or terms, or when a coefficient is used. For example, 3 × x² is written as 3x². Usually terms with the highest power (exponent) are written on the left; for example, x² is written to the left of x. When a coefficient is one, it is usually omitted; likewise when the exponent is one.
When the exponent is zero, the result is always 1; however 0⁰, being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents. Other types of notation are used in algebraic expressions when the required formatting is not available, or cannot be implied, such as where only letters and symbols are available. For example, exponents are usually formatted using superscripts, as in x². In plain text, and in the TeX mark-up language, the symbol ^ represents exponents, so x² is written as x^2. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x² is written as x**2. Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used; for example, 3x is written as 3*x. Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general numbers. This is useful for several reasons. Variables may represent numbers whose values are not yet known. For example, if the temperature of the current day, C, is 20 degrees higher than the temperature of the previous day, P, then the problem can be described algebraically as C = P + 20. Variables allow one to describe general problems, without specifying the values of the quantities that are involved. For example, it can be stated specifically that 5 minutes is equivalent to 60 × 5 = 300 seconds.
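The temperature example and the notation conventions above can be restated directly in Python (the function name is invented for illustration); * is the explicit multiplication sign and ** the exponent operator, exactly as described:

```python
def todays_temperature(P):
    """The relation C = P + 20: today is 20 degrees warmer than yesterday."""
    return P + 20

# The relation holds whatever value the variable P takes.
assert todays_temperature(5) == 25
assert todays_temperature(-3) == 17

# Notation: 3x² must be written with explicit operators as 3*x**2.
x = 4
assert 3 * x ** 2 == 48

# 5 minutes expressed in seconds, as in the text.
assert 60 * 5 == 300
```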
6.
Propositional calculus
–
Logical connectives are found in natural languages; in English, for example, some examples are "and", "or" and "not". The following is an example of a very simple inference within the scope of propositional logic. Premise 1: If it's raining then it's cloudy. Premise 2: It's raining. Conclusion: It's cloudy. Both premises and the conclusion are propositions; the premises are taken for granted, and then with the application of modus ponens the conclusion follows. Not only that, but they will also correspond with any other inference of this form. Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted to represent propositions. A system of inference rules and axioms allows certain formulas to be derived. These derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such formulas is known as a derivation or proof, and the last formula of the sequence is the theorem. The derivation may be interpreted as proof of the proposition represented by the theorem. When a formal system is used to represent formal logic, only statement letters are represented directly. Usually in truth-functional propositional logic, formulas are interpreted as having either a truth value of true or a truth value of false. Truth-functional propositional logic, and systems isomorphic to it, are considered to be zeroth-order logic. Although propositional logic had been hinted at by earlier philosophers, it was developed into a formal logic by Chrysippus in the 3rd century BC and expanded by his successor Stoics. The logic was focused on propositions; this advancement was different from the traditional syllogistic logic, which was focused on terms. However, later in antiquity, the propositional logic developed by the Stoics was no longer understood; consequently, the system was essentially reinvented by Peter Abelard in the 12th century.
Propositional logic was eventually refined using symbolic logic. The 17th/18th-century mathematician Gottfried Leibniz has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his work was the first of its kind, it was unknown to the larger logical community; consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independently of Leibniz. Just as propositional logic can be considered an advancement from the earlier syllogistic logic, one author describes predicate logic as combining the distinctive features of syllogistic logic and propositional logic. Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction. Natural deduction was invented by Gerhard Gentzen and Jan Łukasiewicz; truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of controversial attribution. Within the works of Frege and Bertrand Russell are ideas influential to the invention of truth tables; the actual tabular structure itself is credited to either Ludwig Wittgenstein or Emil Post.
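A small sketch, not from the text, of the truth-functional interpretation discussed above: every formula of propositional logic can be tabulated by evaluating it under all assignments of true and false to its statement letters. Material implication, the connective behind the modus ponens example, serves here as the formula:

```python
from itertools import product

def implies(p, q):
    """Truth-functional material implication: false only when p and not q."""
    return (not p) or q

# Build the truth table over all assignments to the two statement letters.
table = [(p, q, implies(p, q)) for p, q in product([True, False], repeat=2)]
for p, q, value in table:
    print(p, q, value)

# Implication is false in exactly one row: p true, q false.
assert [v for _, _, v in table] == [True, False, True, True]
```

Each row of the table corresponds to one interpretation of the formula, which is the idea credited above to Wittgenstein or Post in tabular form.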
7.
Logical conjunction
–
In logic and mathematics, and is the truth-functional operator of logical conjunction; the and of a set of operands is true if and only if all of its operands are true. The logical connective that represents this operator is typically written as ∧ or ⋅. "A and B" is true only if A is true and B is true. An operand of a conjunction is a conjunct. Related concepts in other fields are: in natural language, the coordinating conjunction "and"; in programming languages, the short-circuit and control structure. And is usually denoted by an infix operator: in mathematics and logic, ∧ or ×; in electronics, ⋅. In Jan Łukasiewicz's prefix notation for logic, the operator is K. Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both of its operands are true. The conjunctive identity is 1 (true), which is to say that AND-ing an expression with 1 will never change the value of the expression. In keeping with the concept of vacuous truth, when conjunction is defined as an operator or function of arbitrary arity, the empty conjunction is often defined as having the result true. As a rule of inference, conjunction introduction is a classically valid, simple argument form. The argument form has two premises, A and B. Intuitively, it permits the inference of their conjunction: A, B; therefore, A and B. Or, in logical operator notation: A, B ⊢ A ∧ B. Here is an example of an argument that fits the form conjunction introduction: Bob likes apples. Bob likes oranges. Therefore, Bob likes apples and oranges. Conjunction elimination is another classically valid, simple argument form. Intuitively, it permits the inference from any conjunction of either element of that conjunction: A and B; therefore, A. Or alternately: A and B; therefore, B. In logical operator notation: A ∧ B ⊢ A. Conjunction is falsehood-preserving: when all inputs are false, the output is false. If using binary values for true (1) and false (0), then logical conjunction works exactly like normal arithmetic multiplication.
Many languages also provide short-circuit control structures corresponding to logical conjunction. Logical conjunction is used for bitwise operations, where 0 corresponds to false and 1 to true: 0 AND 0 = 0, 0 AND 1 = 0, 1 AND 0 = 0, 1 AND 1 = 1. The operation can also be applied to two binary words viewed as bitstrings of equal length, by taking the bitwise AND of each pair of bits at corresponding positions. For example, 11000110 AND 10100011 = 10000010. This can be used to select part of a bitstring using a bit mask. For example, 10011101 AND 00001000 = 00001000 extracts the fifth bit of an 8-bit bitstring.
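The bitwise examples above can be checked directly with Python's & operator, using binary literals for the 8-bit words from the text:

```python
# Bitwise AND of two 8-bit words, as in the text.
a = 0b11000110
b = 0b10100011
assert a & b == 0b10000010

# A bit mask selects part of a bitstring: here the fifth bit from the left.
word = 0b10011101
mask = 0b00001000
assert word & mask == 0b00001000

# The single-bit cases behave like arithmetic multiplication.
assert (0 & 0, 0 & 1, 1 & 0, 1 & 1) == (0, 0, 0, 1)
```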
8.
Logical disjunction
–
In logic and mathematics, or is the truth-functional operator of disjunction, also known as alternation; the or of a set of operands is true if and only if one or more of its operands is true. The logical connective that represents this operator is typically written as ∨ or +. "A or B" is true if A is true, or if B is true, or if both A and B are true. In logic, or by itself means the inclusive or, distinguished from an exclusive or. An operand of a disjunction is called a disjunct. Related concepts in other fields are: in natural language, the coordinating conjunction "or"; in programming languages, the short-circuit or control structure. Or is usually expressed with an infix operator: in mathematics and logic, ∨; in electronics, +; and in most programming languages, |, ||, or or. In Jan Łukasiewicz's prefix notation for logic, the operator is A. Logical disjunction is an operation on two logical values, typically the values of two propositions, that has a value of false if and only if both of its operands are false. More generally, a disjunction is a logical formula that can have one or more literals separated only by ors. A single literal is often considered to be a degenerate disjunction. The disjunctive identity is false, which is to say that the or of an expression with false has the same value as the original expression. In keeping with the concept of vacuous truth, when disjunction is defined as an operator or function of arbitrary arity, the empty disjunction is often defined as having the result false. Disjunction is falsehood-preserving: the interpretation under which all variables are assigned a truth value of false produces a truth value of false as a result of disjunction. The mathematical symbol for logical disjunction varies in the literature. In addition to the word "or", and the formula Apq, the symbol ∨, deriving from the Latin word vel, is commonly used for disjunction. For example, "A ∨ B" is read as "A or B". Such a disjunction is false if both A and B are false; in all other cases it is true. All of the following are disjunctions: A ∨ B, ¬A ∨ B, A ∨ ¬B ∨ ¬C ∨ D ∨ ¬E.
The corresponding operation in set theory is the set-theoretic union. Operators corresponding to logical disjunction exist in most programming languages. Disjunction is often used for bitwise operations; for example, x = x | 0b00000001 will force the final bit to 1 while leaving other bits unchanged. Logical disjunction is usually short-circuited: that is, if the first operand evaluates to true, then the second operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point. In a parallel language, it is possible to evaluate both sides: they are evaluated in parallel, and if one terminates with value true, the other is interrupted.
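Both behaviours described above, bitwise OR and short-circuit evaluation, can be demonstrated in Python (the helper function and its name are invented for illustration):

```python
# Bitwise OR: force the final bit to 1, leaving other bits unchanged.
x = 0b10100110
x = x | 0b00000001
assert x == 0b10100111

# Short-circuiting: record which operands actually get evaluated.
calls = []

def noisy(value):
    """Return value, logging that this operand was evaluated."""
    calls.append(value)
    return value

result = noisy(True) or noisy(False)
assert result is True
assert calls == [True]  # the second operand was never evaluated
```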
9.
Formal proof
–
A formal proof or derivation is a finite sequence of sentences, each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference. The last sentence in the sequence is a theorem of the formal system. The notion of theorem is not in general effective; therefore there may be no method by which we can always find a proof of a given sentence or determine that none exists. The concept of natural deduction is a generalization of the concept of proof, in which the theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. Formal proofs are often constructed with the help of computers in interactive theorem proving; significantly, these proofs can be checked automatically, also by computer. Checking formal proofs is usually simple, while the problem of finding proofs is usually computationally intractable and/or only semi-decidable. A formal language is a set of finite sequences of symbols. Such a language can be defined without reference to any meanings of any of its expressions; it can exist before any interpretation is assigned to it, that is, before it has any meaning. Formal proofs are expressed in some formal language. A formal grammar is a description of the well-formed formulas of a formal language. It is synonymous with the set of strings over the alphabet of the language which constitute well-formed formulas; however, it does not describe their semantics. A formal system consists of a formal language together with a deductive apparatus. The deductive apparatus may consist of a set of transformation rules or a set of axioms. A formal system is used to derive one expression from one or more other expressions. An interpretation of a formal system is the assignment of meanings to the symbols, and values to the sentences, of the formal system. The study of interpretations is called formal semantics; giving an interpretation is synonymous with constructing a model.
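A minimal sketch of the definition above, with invented encodings: a derivation is a sequence of sentences, each justified as an axiom, an assumption, or by a rule of inference applied to earlier lines. Only one rule, modus ponens, is implemented, and implications are encoded as tuples ("->", p, q):

```python
def is_derivation(sequence, axioms, assumptions):
    """Check that every line is an axiom, an assumption,
    or follows from earlier lines by modus ponens."""
    derived = []
    for sentence in sequence:
        justified = (
            sentence in axioms
            or sentence in assumptions
            # Modus ponens: from p and ("->", p, sentence), conclude sentence.
            or any(p in derived and ("->", p, sentence) in derived
                   for p in derived)
        )
        if not justified:
            return False
        derived.append(sentence)
    return True

# A two-step proof of "q" from the assumption "p" and the axiom p -> q.
proof = ["p", ("->", "p", "q"), "q"]
assert is_derivation(proof, axioms={("->", "p", "q")}, assumptions={"p"})

# An unjustified line makes the sequence fail the check.
assert not is_derivation(["q"], axioms=set(), assumptions={"p"})
```

As the text notes, checking a given proof like this is simple; it is finding proofs that is hard in general.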
10.
Arithmetic
–
Arithmetic is a branch of mathematics that consists of the study of numbers, especially the properties of the traditional operations between them: addition, subtraction, multiplication and division. Arithmetic is an elementary part of number theory, and number theory is considered to be one of the top-level divisions of modern mathematics, along with algebra and geometry. The terms arithmetic and higher arithmetic were used until the beginning of the 20th century as synonyms for number theory and are still used to refer to a wider part of number theory. The earliest written records indicate the Egyptians and Babylonians used all the elementary arithmetic operations as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting. In both cases, this origin resulted in values that used a decimal base but did not include positional notation. Complex calculations with Roman numerals required the assistance of a counting board or the Roman abacus to obtain the results. Early number systems that included positional notation were not decimal, including the sexagesimal (base 60) system for Babylonian numerals. Because of this positional concept, the ability to reuse the same digits for different values contributed to simpler and more efficient methods of calculation. The continuous historical development of modern arithmetic starts with the Hellenistic civilization of ancient Greece. Prior to the works of Euclid around 300 BC, Greek studies in mathematics overlapped with philosophical and mystical beliefs. For example, Nicomachus summarized the viewpoint of the earlier Pythagorean approach to numbers. Greek numerals were used by Archimedes, Diophantus and others in a positional notation not very different from ours.
Because the ancient Greeks lacked a symbol for zero, they used three separate sets of symbols: one set for the units place, one for the tens place, and one for the hundreds. Then for the thousands place they would reuse the symbols for the units place, and so on. Their addition algorithm was identical to ours, and their multiplication algorithm was only very slightly different. Their long division algorithm was the same, and the square root algorithm that was once taught in school was known to Archimedes. He preferred it to Hero's method of successive approximation because, once computed, a digit doesn't change, and the square roots of perfect squares, such as 7485696, terminate immediately as 2736. For numbers with a fractional part, such as 546.934, the same digit-by-digit process applies. The ancient Chinese used a similar positional notation. Because they also lacked a symbol for zero, they had one set of symbols for the units place and a second set for the tens place.
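Hero's method of successive approximation, mentioned above, can be sketched in a few lines: each step replaces a guess g by the average of g and n/g, which converges rapidly to √n. The function name and step count below are illustrative choices:

```python
def hero_sqrt(n, steps=40):
    """Approximate the square root of n by Hero's method:
    repeatedly average the current guess with n / guess."""
    guess = n / 2.0 if n >= 2 else 1.0
    for _ in range(steps):
        guess = (guess + n / guess) / 2.0
    return guess

# The perfect square from the text: the square root of 7485696 is 2736.
assert round(hero_sqrt(7485696)) == 2736

# The method also handles non-squares, converging to an approximation.
assert abs(hero_sqrt(2) - 1.41421356) < 1e-6
```

Note the contrast the text draws: in Hero's method every iteration revises the whole approximation, whereas in the digit-by-digit school algorithm a digit, once computed, never changes.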
11.
Real number
–
In mathematics, a real number is a value that represents a quantity along a line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and the complex numbers include the real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics. All these definitions satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than that of the reals is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC; the Vedic Sulba Sutras (c. 600 BC) include early treatments of such quantities. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation. In the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational; Adrien-Marie Legendre completed the proof. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory.
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and has finally been made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the entire set of real numbers without having defined them cleanly. The first rigorous definition was given by Georg Cantor in 1871. In 1874, he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry. From the structuralist point of view all these constructions are on equal footing.
12.
Multiplication
–
Multiplication is one of the four elementary mathematical operations of arithmetic, with the others being addition, subtraction and division. Multiplication can also be visualized as counting objects arranged in a rectangle or as finding the area of a rectangle whose sides have given lengths. The area of a rectangle does not depend on which side is measured first, which illustrates the commutative property. The product of two measurements is a new type of measurement; for example, multiplying the lengths of the two sides of a rectangle gives its area. This is the subject of dimensional analysis. The inverse operation of multiplication is division. For example, since 4 multiplied by 3 equals 12, then 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number. Multiplication is also defined for other types of numbers, such as complex numbers, and for more abstract constructs, like matrices. For these more abstract constructs, the order in which the operands are multiplied sometimes does matter. A listing of the many different kinds of products that are used in mathematics is given in the product page. In arithmetic, multiplication is often written using the sign × between the terms, that is, in infix notation. There are other mathematical notations for multiplication: it is also denoted by dot signs, usually a middle-position dot, as in 5 ⋅ 2, or sometimes a period, as in 5 . 2. The middle dot notation, encoded in Unicode as U+22C5 ⋅ (dot operator), is standard in the United States and the United Kingdom; when the dot operator character is not accessible, the interpunct (·) is used. Other countries use a comma as a decimal mark, which affects which dot signs are available for multiplication. In algebra, multiplication involving variables is often written as a juxtaposition, e.g. xy or 5x. The notation can also be used for quantities that are surrounded by parentheses. In matrix multiplication, there is a distinction between the cross and the dot symbols.
The cross symbol generally denotes taking the cross product of two vectors, yielding a vector as the result, while the dot denotes taking the dot product of two vectors, resulting in a scalar. In computer programming, the asterisk is still the most common notation; this is because most computers historically were limited to small character sets that lacked a multiplication sign, while the asterisk appeared on every keyboard. This usage originated in the FORTRAN programming language. The numbers to be multiplied are generally called the factors. The number to be multiplied is called the multiplicand, while the number of times the multiplicand is to be multiplied is the multiplier. Usually the multiplier is placed first and the multiplicand second; however, sometimes the first factor is the multiplicand, and there are some sources in which the term multiplicand is regarded as a synonym for factor. In algebra, a number that is the multiplier of a variable or expression is called a coefficient. The result of a multiplication is called a product. A product of integers is a multiple of each factor; for example, 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5.
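As a minimal sketch (variable names chosen here for illustration, not from the source), the terminology and the inverse relationship described above can be expressed in Python:

```python
# Illustrative sketch: multiplier, multiplicand, product, and multiples.
multiplier, multiplicand = 3, 5
product = multiplier * multiplicand          # 15

# The product is a multiple of each factor: division leaves no remainder.
assert product % multiplier == 0
assert product % multiplicand == 0

# Division is the inverse operation: multiplying by 3 and then dividing
# by 3 yields the original number.
n = 4
assert (n * 3) / 3 == n
print(product)  # 15
```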
13.
Addition
–
Addition is one of the four basic operations of arithmetic, with the others being subtraction, multiplication and division. The addition of two numbers is the total amount of those quantities combined. For example, in the picture on the right, there is a combination of three apples and two apples together, making a total of five apples. This observation is equivalent to the mathematical expression 3 + 2 = 5, i.e., 3 add 2 is equal to 5. Besides counting fruits, addition can also represent combining other physical objects. In arithmetic, rules for addition involving fractions and negative numbers have been devised, among others; in algebra, addition is studied more abstractly. It is commutative, meaning that order does not matter, and it is associative. Repeated addition of 1 is the same as counting; addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even by some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits. Mechanical aids range from the ancient abacus to the modern computer. Addition is written using the plus sign + between the terms, that is, in infix notation. The result is expressed with an equals sign; for example, 3½ = 3 + ½ = 3.5. This notation can cause confusion, since in most other contexts juxtaposition denotes multiplication instead. The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example, ∑_{k=1}^{5} k² = 1² + 2² + 3² + 4² + 5² = 55. The numbers or the objects to be added are collectively referred to as the terms, the addends or the summands.
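The capital-sigma example above, along with the commutative and associative properties, can be checked directly in a short Python sketch:

```python
# Sum of k**2 for k = 1..5, matching the sigma-notation example above.
total = sum(k**2 for k in range(1, 6))
assert total == 1 + 4 + 9 + 16 + 25 == 55

# Commutativity and associativity: order and grouping do not matter.
assert 3 + 2 == 2 + 3
assert (1 + 2) + 3 == 1 + (2 + 3)
print(total)  # 55
```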
This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend; in fact, during the Renaissance, many authors did not consider the first addend an addend at all. Today, due to the commutative property of addition, augend is rarely used, and both terms are generally called addends. All of the above terminology derives from Latin. Using the gerundive suffix -nd results in addend, "thing to be added". Likewise, from augere, "to increase", one gets augend, "thing to be increased". Sum and summand derive from the Latin noun summa, "the highest, the top", and the associated verb summare.
14.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as a primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3}, D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally. In the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in a definition, a set member can be listed two or more times, for example, {11, 6, 6}. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, the set {11, 6, 6} is identical to the set {11, 6}.
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}.
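The two points above (repeated listings and ordering are both irrelevant) map directly onto Python's built-in set type, as this small sketch shows:

```python
# Listing a member more than once does not change the set...
assert {11, 6, 6} == {11, 6}
# ...and the order in which elements are listed is irrelevant.
assert {2, 4, 6} == {6, 4, 2}

# Intensional (rule-based) vs. extensional (listed) definitions of
# "the first four positive integers":
A = {n for n in range(1, 5)}   # by rule
C = {1, 2, 3, 4}               # by listing
assert A == C
```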
15.
Commutative property
–
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says 3 + 4 = 4 + 3 or 2 × 5 = 5 × 2, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it. The commutative property is a property associated with binary operations and functions. If the commutative property holds for a pair of elements under a binary operation, then the two elements are said to commute under that operation. The term commutative is used in several related senses. Putting on socks resembles a commutative operation, since which sock is put on first is unimportant; either way, the result is the same. In contrast, putting on underwear and trousers is not commutative. The commutativity of addition is observed when paying for an item with cash: regardless of the order in which the bills are handed over, they always give the same total. The multiplication of real numbers is commutative, since yz = zy for all y, z ∈ R; for example, 3 × 5 = 5 × 3. Some binary truth functions are also commutative, since the truth tables for the functions are the same when one changes the order of the operands. For example, the logical biconditional function p ↔ q is equivalent to q ↔ p; this function is also written as p IFF q, or as p ≡ q, or as Epq. Further examples of commutative binary operations include addition and multiplication of complex numbers, and addition and scalar multiplication of vectors. Concatenation, the act of joining character strings together, is a noncommutative operation. Rotating a book 90° around a vertical axis then 90° around a horizontal axis produces a different orientation than when the rotations are performed in the opposite order. The twists of the Rubik's Cube are noncommutative; this can be studied using group theory.
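The contrast drawn above between commutative and non-commutative operations can be checked concretely; string concatenation stands in for the noncommutative case:

```python
# Addition and multiplication of numbers commute:
assert 3 + 4 == 4 + 3
assert 2 * 5 == 5 * 2

# Subtraction and division do not:
assert 5 - 2 != 2 - 5
assert 8 / 2 != 2 / 8

# String concatenation, mentioned above, is also non-commutative:
assert "ab" + "cd" != "cd" + "ab"
```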
Records of the use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have assumed the commutative property of multiplication in his book Elements.
16.
Number
–
Numbers that answer the question "How many?" are 0, 1, 2, 3 and so on; when used to indicate position in a sequence they are ordinal numbers. To the Pythagoreans and the Greek mathematician Euclid, the numbers were 2, 3, 4, 5, and so on; Euclid did not consider 1 to be a number. Numbers like 3 + 1/7 = 22/7, expressible as fractions in which the numerator and denominator are whole numbers, are rational numbers; these make it possible to measure such quantities as two and a quarter gallons and six and a half miles. What we today would consider a proof that a number is irrational, Euclid called a proof that two lengths arising in geometry have no common measure, or are incommensurable; Euclid included proofs of incommensurability of lengths arising in geometry in his Elements. In the Rhind Mathematical Papyrus, a pair of legs walking forward marked addition. The Chinese were the first known civilization to use negative numbers. Negative numbers came into widespread use as a result of their utility in accounting; they were used by late medieval Italian bankers. By 1740 BC, the Egyptians had a symbol for zero in accounting texts. In the Maya civilization, zero was a numeral with a shell shape as a symbol. The ancient Egyptians represented all fractions in terms of sums of fractions with numerator 1; for example, 2/5 = 1/3 + 1/15. Such representations are known as Egyptian fractions or unit fractions. The earliest written approximations of π are found in Egypt and Babylon. In Babylon, a clay tablet dated 1900–1600 BC has a geometrical statement that, by implication, treats π as 25/8 = 3.1250. In Egypt, the Rhind Papyrus, dated around 1650 BC, gives another early approximation. Astronomical calculations in the Shatapatha Brahmana use a fractional approximation of 339/108 ≈ 3.139. Other Indian sources by about 150 BC treat π as √10 ≈ 3.1622. The first references to the constant e were published in 1618 in the table of an appendix of a work on logarithms by John Napier.
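The fraction identities quoted above can be verified exactly with Python's `fractions` module (a sketch using the standard library, not from the source):

```python
from fractions import Fraction

# The Egyptian-fraction identity quoted above: 2/5 = 1/3 + 1/15.
assert Fraction(2, 5) == Fraction(1, 3) + Fraction(1, 15)

# And the rational number 3 + 1/7 = 22/7, a classical approximation of pi.
assert 3 + Fraction(1, 7) == Fraction(22, 7)
```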
However, this did not contain the constant itself, but simply a list of logarithms calculated from the constant; it is assumed that the table was written by William Oughtred. The discovery of the constant itself is credited to Jacob Bernoulli. The first known use of the constant, represented by the letter b, was in correspondence from Gottfried Leibniz to Christiaan Huygens in 1690 and 1691. Leonhard Euler introduced the letter e as the base for natural logarithms; Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and the first appearance of e in a publication was Euler's Mechanica. While in the subsequent years some researchers used the letter c, e was more common. The first numeral system known is the Babylonian numeral system, which has base 60; it was introduced around 3100 BC and is the first positional numeral system known.
17.
Summation
–
In mathematics, summation is the addition of a sequence of numbers; the result is their sum or total. If numbers are added sequentially from left to right, any intermediate result is a partial sum, or prefix sum. The numbers to be summed may be integers, rational numbers, real numbers or complex numbers. Besides numbers, other types of values can be added as well: vectors, matrices, polynomials and, in general, elements of any additive group. For finite sequences of such elements, summation always produces a well-defined sum. The summation of an infinite sequence of values is called a series. A value of such a series may often be defined by means of a limit; another notion involving limits of finite sums is integration. The summation of the sequence [1, 2, 4, 2] is an expression whose value is the sum of each of the members of the sequence; in the example, 1 + 2 + 4 + 2 = 9. Addition is also commutative, so permuting the terms of a sequence does not change its sum. There is no special notation for the summation of such explicit sequences. If, however, the terms of the sequence are given by a regular pattern, possibly of variable length, a summation operator may be useful or even essential. For the summation of the sequence of integers from 1 to 100, one may use an ellipsis: 1 + 2 + ... + 100. In this case, the reader can easily guess the pattern. However, for more complicated patterns, one needs to be precise about the rule used to find successive terms. Using this sigma notation, the above summation is written as ∑_{i=1}^{100} i; the value of this summation is 5050. It can be found without performing 99 additions, since it can be shown that ∑_{i=1}^{n} i = (n² + n)/2 for all natural numbers n. More generally, formulae exist for many summations of terms following a regular pattern. By contrast, summation as discussed in this article is called definite summation. When it is necessary to clarify that numbers are added with their signs, the term algebraic sum is used. Mathematical notation uses a symbol that compactly represents summation of many terms: the summation symbol, ∑. The i = m under the symbol means that the index i starts out equal to m.
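The closed form above can be checked in one line: the 100-term sum agrees with the formula, with no need to perform 99 additions.

```python
# Sum of 1..n equals (n**2 + n) / 2, here verified for n = 100.
n = 100
assert sum(range(1, n + 1)) == (n * n + n) // 2 == 5050
```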
18.
Subtraction
–
Subtraction is a mathematical operation that represents the operation of removing objects from a collection. It is signified by the minus sign (−). For example, in the picture on the right, there are 5 − 2 apples, meaning 5 apples with 2 taken away, leaving a total of 3 apples. It is anticommutative, meaning that changing the order changes the sign of the answer, and it is not associative, meaning that when one subtracts more than two numbers, the order in which subtraction is performed matters. Subtraction of 0 does not change a number. Subtraction also obeys predictable rules concerning related operations such as addition and multiplication. All of these rules can be proven, starting with the subtraction of integers and generalizing up through the real numbers; general binary operations that continue these patterns are studied in abstract algebra. Performing subtraction is one of the simplest numerical tasks; subtraction of very small numbers is accessible to young children. In primary education, students are taught to subtract numbers in the decimal system, starting with single digits. Subtraction is written using the minus sign − between the terms, that is, in infix notation; the result is expressed with an equals sign. Subtraction may also be indicated without any symbol, by writing the subtrahend under the minuend with the difference below a line; this is most common in accounting. Formally, the number being subtracted is known as the subtrahend, while the number it is subtracted from is the minuend. All of this terminology derives from Latin. Subtraction is an English word derived from the Latin verb subtrahere, which is in turn a compound of sub "from under" and trahere "to pull"; thus to subtract is to draw from below, or take away. Using the gerundive suffix -nd results in subtrahend, "thing to be subtracted"; likewise, from minuere "to reduce or diminish", one gets minuend, "thing to be diminished". Imagine a line segment of length b with the left end labeled a and the right end labeled c. Starting from a, it takes b steps to the right to reach c.
This movement to the right is modeled mathematically by addition: a + b = c. From c, it takes b steps to the left to get back to a; this movement to the left is modeled by subtraction: c − b = a. Now consider a line segment labeled with the numbers 1, 2, and 3. From position 3, it takes no steps to the left to stay at 3, and it takes 2 steps to the left to get to position 1, so 3 − 2 = 1. This picture is inadequate to describe what would happen after going 3 steps to the left of position 3; to represent such an operation, the line must be extended. To subtract arbitrary natural numbers, one begins with a line containing every natural number. From 3, it takes 3 steps to the left to get to 0, so 3 − 3 = 0. But 3 − 4 is still invalid, since it again leaves the line.
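The number-line steps above can be sketched as a small function (the name `subtract_naturals` is illustrative, not from the source): moving b steps left from a is valid only while the result stays on the natural-number line.

```python
def subtract_naturals(a, b):
    """Subtraction restricted to the natural-number line: valid only
    when the result does not fall off the left end at 0."""
    if b > a:
        raise ValueError("result would leave the natural-number line")
    return a - b

assert subtract_naturals(3, 2) == 1   # 2 steps left from 3
assert subtract_naturals(3, 3) == 0   # reaches the left end exactly

# 3 - 4 is invalid among the naturals, as the text notes:
try:
    subtract_naturals(3, 4)
    raised = False
except ValueError:
    raised = True
assert raised
```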
19.
Ring (mathematics)
–
In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series and matrices. The conceptualization of rings started in the 1870s and was completed in the 1920s. Key contributors include Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. Afterward, they proved to be useful in other branches of mathematics such as geometry. A ring is an abelian group with a second binary operation that is associative and is distributive over the abelian group operation. By extension from the integers, the abelian group operation is called addition. Whether a ring is commutative or not has profound implications on its behavior as an abstract object; as a result, commutative ring theory, commonly known as commutative algebra, is a key topic in ring theory. Its development has been greatly influenced by problems and ideas occurring naturally in algebraic number theory. The most familiar example of a ring is the set of all integers, Z: ..., −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, ... The familiar properties of addition and multiplication of integers serve as a model for the axioms for rings. A ring is a set R equipped with two binary operations + and · satisfying the following three sets of axioms, called the ring axioms. 1. R is an abelian group under addition, meaning that (a + b) + c = a + (b + c) for all a, b, c in R; a + b = b + a for all a, b in R; there is an element 0 in R such that a + 0 = a for all a in R; and for each a in R there exists −a in R such that a + (−a) = 0. 2. R is a monoid under multiplication, meaning that (a · b) · c = a · (b · c) for all a, b, c in R.
There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R. 3. Multiplication is distributive with respect to addition: a · (b + c) = (a · b) + (a · c) for all a, b, c in R, and (b + c) · a = (b · a) + (c · a) for all a, b, c in R. As explained in § History below, many authors follow an alternative convention in which a ring is not defined to have a multiplicative identity. This article adopts the convention that, unless otherwise stated, a ring is assumed to have such an identity.
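As a hedged sketch (not from the source), the ring axioms above can be verified by brute force for a small finite ring, here the integers mod 6:

```python
# Brute-force check of the ring axioms for Z/6Z (integers mod 6).
R = range(6)

def add(a, b):
    return (a + b) % 6

def mul(a, b):
    return (a * b) % 6

for a in R:
    assert add(a, 0) == a                       # additive identity
    assert any(add(a, x) == 0 for x in R)       # additive inverse exists
    assert mul(a, 1) == a and mul(1, a) == a    # multiplicative identity
    for b in R:
        assert add(a, b) == add(b, a)           # addition commutes
        for c in R:
            assert add(add(a, b), c) == add(a, add(b, c))          # + associative
            assert mul(mul(a, b), c) == mul(a, mul(b, c))          # * associative
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributive
```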
20.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5½, and √2 are not. The set of integers consists of zero, the positive natural numbers, also called whole numbers or counting numbers, and their additive inverses. This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen. ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the (rational) integers are the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication; that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers and, importantly, 0, Z is also closed under subtraction. The integers form a unital ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not. The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g., there is no integer x such that 2x = 1, because the left-hand side is even.
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions are true in Z for all values of variables which are true in any unital commutative ring; note that certain non-zero integers map to zero in certain rings. The lack of zero-divisors in the integers means that the commutative ring Z is an integral domain.
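The closure remarks above can be illustrated in Python (values chosen here for illustration): the integers are closed under subtraction, but division and negative exponentiation can leave them.

```python
a, b = 7, 3
assert isinstance(a - b, int)   # subtraction stays in Z

q = a / b                       # division may leave Z:
assert q != int(q)              # 7/3 is not an integer

# 2**(-1) = 0.5 is not an integer, so Z is not closed under exponentiation.
assert 2 ** -1 == 0.5
```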
21.
Field (mathematics)
–
In mathematics, a field is a set on which are defined addition, subtraction, multiplication, and division, which behave as they do when applied to rational and real numbers. A field is thus a fundamental algebraic structure, which is widely used in algebra and number theory. The best known fields are the field of rational numbers and the field of real numbers. The field of complex numbers is also widely used, not only in mathematics. Finite fields are used in most cryptographic protocols used for computer security. Any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Formally, a field is a set together with two operations, the addition and the multiplication, which have the properties called the axioms of fields. An operation is a mapping that associates an element of the set to every pair of its elements. The result of the addition of a and b is called the sum of a and b and is denoted a + b; similarly, the result of the multiplication of a and b is called the product of a and b. Associativity of addition and multiplication: for all a, b and c in F, one has a + (b + c) = (a + b) + c and a · (b · c) = (a · b) · c. Commutativity of addition and multiplication: for all a and b in F, one has a + b = b + a and a · b = b · a. Existence of additive and multiplicative identity elements: there exists an element 0 in F, called the additive identity, such that a + 0 = a for all a in F; there is an element 1, different from 0 and called the multiplicative identity, such that a · 1 = a for all a in F. Existence of additive inverses and multiplicative inverses: for every a in F, there exists an element in F, denoted −a, such that a + (−a) = 0; for every a ≠ 0 in F, there exists an element in F, denoted a⁻¹ or 1/a, such that a · a⁻¹ = 1. Distributivity of multiplication over addition: for all a, b and c in F, one has a · (b + c) = (a · b) + (a · c). The elements 0 and 1 being required to be distinct, a field has at least two elements. For every a in F, one has −a = (−1) ⋅ a; thus, the additive inverse of every element is known as soon as one knows the additive inverse of 1.
Subtraction and division are defined in every field by a − b = a + (−b) and a / b = a · b⁻¹ for b ≠ 0. A subfield E of a field F is a subset of F that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. It is straightforward to verify that a subfield is indeed a field. Two groups are associated to every field. The field itself is an abelian group under addition; when considering this group structure rather than the field structure, one talks of the additive group of the field.
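The definitions above (subtraction as addition of the additive inverse, division as multiplication by the multiplicative inverse) can be checked exactly in the field of rationals, using Python's `Fraction` type as a stand-in:

```python
from fractions import Fraction

a, b = Fraction(3, 4), Fraction(2, 5)   # illustrative field elements
assert a - b == a + (-b)                # a - b = a + (-b)
assert a / b == a * (1 / b)             # a / b = a * b**(-1)

# The inverses behave as the axioms require:
assert b + (-b) == 0
assert b * (1 / b) == 1
```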
22.
Rational number
–
In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number. The decimal expansion of a rational number always either terminates after a finite number of digits or begins to repeat the same finite sequence of digits over and over. Moreover, any repeating or terminating decimal represents a rational number. These statements hold true not just for base 10, but also for any other integer base. A real number that is not rational is called irrational; irrational numbers include √2, π, e, and φ. The decimal expansion of an irrational number continues without repeating. Since the set of rational numbers is countable, and the set of real numbers is uncountable, almost all real numbers are irrational. Rational numbers can be defined as equivalence classes of pairs of integers (p, q) such that q ≠ 0, for the equivalence relation defined by (p1, q1) ~ (p2, q2) if p1q2 = p2q1. The rational numbers together with addition and multiplication form a field which contains the integers and is contained in any field containing the integers. Finite extensions of Q are called algebraic number fields, and the algebraic closure of Q is the field of algebraic numbers. In mathematical analysis, the rational numbers form a dense subset of the real numbers. The real numbers can be constructed from the rational numbers by completion, using Cauchy sequences or Dedekind cuts. The term rational in reference to the set Q refers to the fact that a rational number represents a ratio of two integers. In mathematics, rational is often used as a noun abbreviating rational number; the adjective rational sometimes means that the coefficients are rational numbers. However, a rational curve is not a curve defined over the rationals, but a curve which can be parameterized by rational functions. Any integer n can be expressed as the rational number n/1. Two fractions are equal, a/b = c/d, if and only if ad = bc. Where both denominators are positive, a/b < c/d if and only if ad < bc.
If either denominator is negative, the fractions must first be converted into equivalent forms with positive denominators, through the equations −a/−b = a/b and a/−b = −a/b. Two fractions are added as follows: a/b + c/d = (ad + bc)/bd. Similarly, a/b − c/d = (ad − bc)/bd. The rule for multiplication is a/b · c/d = ac/bd. Where c ≠ 0, a/b ÷ c/d = ad/bc; note that division is equivalent to multiplying by the reciprocal of the divisor fraction: ad/bc = a/b × d/c. Additive and multiplicative inverses exist in the rational numbers: −(a/b) = −a/b = a/−b, and (a/b)⁻¹ = b/a if a ≠ 0.
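The arithmetic rules above can be verified for sample values with Python's exact `Fraction` type (the values a, b, c, d are chosen here for illustration):

```python
from fractions import Fraction

a, b, c, d = 1, 2, 3, 4   # a/b = 1/2 and c/d = 3/4

assert Fraction(a, b) + Fraction(c, d) == Fraction(a*d + b*c, b*d)  # addition
assert Fraction(a, b) - Fraction(c, d) == Fraction(a*d - b*c, b*d)  # subtraction
assert Fraction(a, b) * Fraction(c, d) == Fraction(a*c, b*d)        # multiplication
assert Fraction(a, b) / Fraction(c, d) == Fraction(a*d, b*c)        # division
```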
23.
Boolean algebra
–
In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. It is thus a formalism for describing logical relations in the way that ordinary algebra describes numeric relations. Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic. According to Huntington, the term Boolean algebra was first suggested by Sheffer in 1913. Boolean algebra has been fundamental in the development of digital electronics; it is also used in set theory and statistics. Boole's algebra predated the modern developments in abstract algebra and mathematical logic. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder and Huntington. In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today, there is little need to consider other Boolean algebras, so switching algebra and Boolean algebra are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra; thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers. The closely related model of computation known as a Boolean circuit relates time complexity to circuit complexity. Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true. These values are represented with the bits 0 and 1.
Addition and multiplication then play the Boolean roles of XOR and AND, respectively. Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used such function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. The basic operations of Boolean calculus are as follows. AND, denoted x∧y, satisfies x∧y = 1 if x = y = 1 and x∧y = 0 otherwise. OR, denoted x∨y, satisfies x∨y = 0 if x = y = 0 and x∨y = 1 otherwise. NOT, denoted ¬x, satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0. Alternatively, the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables as follows. The first derived operation, x → y, or Cxy, is called material implication. If x is true, then the value of x → y is taken to be that of y.
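The basic operations defined above can be sketched directly over the bits {0, 1} (the uppercase names are illustrative, not from the source):

```python
def AND(x, y):  # x AND y: 1 only when both operands are 1
    return x & y

def OR(x, y):   # x OR y: 0 only when both operands are 0
    return x | y

def NOT(x):     # NOT x: flips the bit
    return 1 - x

def XOR(x, y):  # plays the role of addition, as noted above
    return x ^ y

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 0) == 0 and OR(0, 1) == 1
assert NOT(1) == 0 and NOT(0) == 1
# XOR agrees with addition mod 2:
for x in (0, 1):
    for y in (0, 1):
        assert XOR(x, y) == (x + y) % 2
```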
24.
Matrix multiplication
–
In mathematics, matrix multiplication or the matrix product is a binary operation that produces a matrix from two matrices. The definition is motivated by linear equations and linear transformations on vectors, which have applications in applied mathematics and physics. When two linear transformations are represented by matrices, the matrix product represents the composition of the two transformations. The matrix product is not commutative in general, although it is associative and is distributive over matrix addition. The identity element of the matrix product is the identity matrix, and a square matrix may have an inverse matrix. Determinant multiplicativity applies to the matrix product. The matrix product is also important for matrix groups, and for the theory of group representations and irreps. Computing matrix products is both a central operation in many numerical algorithms and potentially time consuming, making it one of the most well-studied problems in numerical computing. Various algorithms have been devised for computing C = AB, especially for large matrices. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The i, j entry of matrix A is indicated by (A)ij or Aij, whereas a numerical label on a collection of matrices is subscripted only, e.g. A1, A2. Assume two matrices A and B are to be multiplied, where A has m columns and B has m rows. The i, j entry of the product is obtained by multiplying entries of row i of A with entries of column j of B and summing the results over k: (AB)ij = ∑_{k=1}^{m} Aik Bkj. Thus the product AB is defined if the number of columns in A is equal to the number of rows in B. Each entry may be computed one at a time. Sometimes, the summation convention is used, as it is understood to sum over the repeated index k. To prevent any ambiguity, this convention will not be used in this article. Usually the entries are numbers or expressions, but they can even be matrices themselves.
The matrix product can still be calculated exactly the same way. See below for details on how the matrix product can be calculated in terms of blocks taking the forms of rows and columns. The figure to the right illustrates diagrammatically the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B. Note that AB and BA can be two entirely different matrices: if A is a row vector and B is a column vector of length 3, the first product is a 1 × 1 matrix while the second is a 3 × 3 matrix, and for other shapes AB may be defined while BA is not defined at all. The product of a square matrix multiplied by a column matrix arises naturally in linear algebra, for solving linear equations. By choosing a, b, c, p, q, r, u, v, w in A appropriately, A can represent a variety of transformations such as rotations, scaling and reflections, and shears.
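The index definition above, and the row-times-column example, can be sketched in plain Python (the helper name `matmul` is illustrative):

```python
def matmul(A, B):
    """(AB)[i][j] = sum over k of A[i][k] * B[k][j]; defined only when
    the number of columns of A equals the number of rows of B."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

row = [[1, 2, 3]]                  # a 1 x 3 row vector
col = [[4], [5], [6]]              # a 3 x 1 column vector
assert matmul(row, col) == [[32]]  # 1 x 1 matrix: 1*4 + 2*5 + 3*6
assert len(matmul(col, row)) == 3  # 3 x 3 matrix: AB and BA differ
```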
25.
Ordinal number
–
In set theory, an ordinal number, or ordinal, is one generalization of the concept of a natural number that is used to describe a way to arrange a collection of objects in order, one after another. Any finite collection of objects can be put in order just by the process of counting: labeling the objects with distinct whole numbers. Ordinal numbers are thus the labels needed to arrange collections of objects in order. An ordinal number is used to describe the order type of a well-ordered set. Whereas ordinals are useful for ordering the objects in a collection, they are distinct from cardinal numbers. Although the distinction between ordinals and cardinals is not always apparent in finite sets, different infinite ordinals can describe the same cardinal. Like other kinds of numbers, ordinals can be added and multiplied. A natural number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. When restricted to finite sets, these two concepts coincide: there is only one way, up to isomorphism, to put a finite set into a linear sequence. This is because any finite set has only one size, whereas there are many nonisomorphic well-orderings of any infinite set. Whereas the notion of cardinal number is associated with a set with no particular structure on it, the ordinals are intimately linked with well-ordered sets. A well-ordered set is a totally ordered set in which there is no infinite decreasing sequence; equivalently, every non-empty subset has a least element. Ordinals may be used to label the elements of any given well-ordered set, and the "length" of such a labeling is called the order type of the set. Any ordinal is defined by the set of ordinals that precede it; in fact, the most common definition of ordinals identifies each ordinal as the set of ordinals that precede it. For example, the ordinal 42 is the order type of the ordinals less than it, i.e., the ordinals from 0 to 41. Conversely, any set S of ordinals that is downward-closed, meaning that for any ordinal α in S and any ordinal β < α, β is also in S, is an ordinal.
There are infinite ordinals as well: the smallest infinite ordinal is ω, which is the order type of the natural numbers. After ω come ω + 1, ω + 2, and so on; after all of these come ω·2, ω·2 + 1, ω·2 + 2, and so on, then ω·3. Now the set of ordinals formed in this way must itself have an ordinal associated with it, and that is ω². Further on, there will be ω³, then ω⁴, and so on, and ω^ω, then ω^(ω^ω), then later ω^(ω^(ω^ω)), and this can be continued indefinitely far. The smallest uncountable ordinal is the set of all countable ordinals, denoted ω₁. In a well-ordered set, every non-empty subset contains a smallest element. Given the axiom of dependent choice, this is equivalent to saying that the set is totally ordered and there is no infinite decreasing sequence
26.
Cross product
–
In mathematics and vector algebra, the cross product or vector product is a binary operation on two vectors in three-dimensional space and is denoted by the symbol ×. Given two linearly independent vectors a and b, the cross product, a × b, is a vector that is perpendicular to both a and b and therefore normal to the plane containing them. It has many applications in mathematics, physics, and engineering, and it should not be confused with the dot product. If two vectors have the same direction or if either one has zero length, then their cross product is zero. The cross product is anticommutative and is distributive over addition. The space R³ together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Like the dot product, it depends on the metric of Euclidean space. If the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions; if one adds the further requirement that the product be uniquely defined, it exists only in three dimensions. The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b; in physics, sometimes the notation a ∧ b is used. The cross product can be defined by the formula a × b = |a| |b| sin(θ) n, where θ is the angle between a and b and n is a unit vector perpendicular to the plane containing them. If the vectors a and b are parallel, then by this formula the cross product of a and b is the zero vector 0. The direction of n is given by the right-hand rule: point the forefinger of the right hand in the direction of a and the middle finger in the direction of b; then the vector n comes out of the thumb. Using this rule implies that the cross product is anti-commutative, i.e. b × a = −(a × b), as can be seen by pointing the forefinger toward b first and then pointing the middle finger toward a. Using the cross product requires the handedness of the coordinate system to be taken into account. If a left-handed coordinate system is used, the direction of the vector n is given by the left-hand rule. This, however, creates a problem, because transforming from one arbitrary reference system to another should not change the direction of n. The problem is clarified by realizing that the cross product of two vectors is not a true vector, but rather a pseudovector. 
See cross product and handedness for more detail. In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced both the dot product and the cross product, using a period (a · b) and an "×" (a × b), respectively, to denote them. The alternative names scalar product and vector product are widely used in the literature. Both the cross notation and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a ⋅ b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of the determinant of a special 3 × 3 matrix
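The component formula obtained from that determinant expansion can be coded directly. The following is a small illustrative sketch (not from the article; the function name `cross` is ours):

```python
def cross(a, b):
    """Cross product of two 3-vectors, from the determinant expansion:
    (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

i, j = (1, 0, 0), (0, 1, 0)   # standard basis vectors

print(cross(i, j))   # (0, 0, 1), i.e. the basis vector k
# Anti-commutativity: b × a = −(a × b)
print(cross(j, i))   # (0, 0, -1)
# Parallel vectors give the zero vector
print(cross(i, (2, 0, 0)))   # (0, 0, 0)
```

Note that each output component multiplies only non-corresponding components of a and b, as remarked above.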
27.
Euclidean vector
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by A B →. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier". It was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers, such as addition and subtraction, have close analogues for vectors; these operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can be described with vectors, and many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. The concept of vector, as we know it today, evolved gradually over a period of more than 200 years; about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence. Working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector v. 
Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise on Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product; this approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901, Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction. It is formally defined as a directed line segment, or arrow
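The two defining features of a Euclidean vector, magnitude and direction, and the basic operation of vector addition can be illustrated with a minimal Python sketch (added here for illustration; the helper names `add` and `magnitude` are ours):

```python
import math

def add(u, v):
    """Component-wise vector addition (the tip-to-tail / parallelogram rule)."""
    return tuple(a + b for a, b in zip(u, v))

def magnitude(v):
    """Euclidean length of v: the square root of the sum of squared components."""
    return math.sqrt(sum(c * c for c in v))

u = (3, 0, 0)   # displacement of 3 units along the x-axis
v = (0, 4, 0)   # displacement of 4 units along the y-axis

w = add(u, v)
print(w)              # (3, 4, 0)
print(magnitude(w))   # 5.0, by the Pythagorean theorem
```

The magnitude of the sum (5) is less than the sum of the magnitudes (3 + 4 = 7) because the two displacements point in different directions.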
28.
Union (set theory)
–
In set theory, the union of a collection of sets is the set of all elements in the collection. It is one of the fundamental operations through which sets can be combined and related to each other. For an explanation of the symbols used in this article, refer to the table of mathematical symbols. The union of two sets A and B is the set of elements which are in A, in B, or in both. For example, if A = {1, 3, 5} and B = {1, 2, 4}, then A ∪ B = {1, 2, 3, 4, 5}. Sets cannot have duplicate elements, so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}; multiple occurrences of identical elements have no effect on the cardinality of a set or its contents. Binary union is an associative operation; that is, A ∪ (B ∪ C) = (A ∪ B) ∪ C. The operations can be performed in any order, and the parentheses may be omitted without ambiguity. Similarly, union is commutative, so the sets can be written in any order. The empty set is an identity element for the operation of union; that is, A ∪ ∅ = A, for any set A. This follows from analogous facts about logical disjunction. Since sets with unions and intersections form a Boolean algebra, intersection distributes over union, A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), and union distributes over intersection, A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C). One can take the union of several sets simultaneously; for example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C. In mathematics a finite union means any union carried out on a finite number of sets. The most general notion is the union of an arbitrary collection of sets. If M is a set whose elements are themselves sets, then x is an element of the union of M if and only if there is at least one element A of M such that x is an element of A. In symbols, x ∈ ⋃M ⟺ ∃A ∈ M, x ∈ A. This idea subsumes the preceding sections, in that A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set. The notation for the general concept can vary considerably. For a finite union of sets S1, S2, S3, …, Sn one often writes S1 ∪ S2 ∪ S3 ∪ ⋯ ∪ Sn or ⋃_{i=1}^{n} Si. 
In the case that the index set I is the set of natural numbers, notation analogous to that of an infinite series is used. Whenever the symbol ∪ is placed before other symbols, instead of between them, it is rendered in a larger size
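The basic properties of union listed above can be checked with Python's built-in sets. This is an illustrative sketch added here, not part of the article:

```python
A = {1, 3, 5}
B = {1, 2, 4}

# Binary union as an infix operator: A | B == {1, 2, 3, 4, 5}
print(A | B)

# Union of several sets at once: set().union(A, B, {6}) == {1, 2, 3, 4, 5, 6}
print(set().union(A, B, {6}))

# The empty set is an identity element: A ∪ ∅ = A
print((A | set()) == A)          # True

# Commutativity and associativity
print((A | B) == (B | A))        # True
print(((A | B) | {6}) == (A | (B | {6})))   # True
```

Because Python sets cannot hold duplicates, the repeated element 1 appears only once in the union, matching the remark about multiple occurrences above.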
29.
Intersection (set theory)
–
In mathematics, the intersection A ∩ B of two sets A and B is the set that contains all elements of A that also belong to B, but no other elements. For an explanation of the symbols used in this article, refer to the table of mathematical symbols. The intersection of A and B is written A ∩ B. Formally, A ∩ B = {x : x ∈ A and x ∈ B}; that is, x ∈ A ∩ B if and only if x ∈ A and x ∈ B. For example, the intersection of the sets {1, 2, 3} and {2, 3, 4} is {2, 3}; the number 9 is not in the intersection of the set of prime numbers {2, 3, 5, 7, 11, …} and the set of odd numbers {1, 3, 5, 7, 9, 11, …}, because 9 is not prime. More generally, one can take the intersection of several sets at once; the intersection of A, B, C, and D, for example, is A ∩ B ∩ C ∩ D. Intersection is an associative operation; thus, A ∩ (B ∩ C) = (A ∩ B) ∩ C. Additionally, intersection is commutative, thus A ∩ B = B ∩ A. Inside a universe U one may define the complement Ac of A to be the set of all elements of U not in A. We say that A intersects (meets) B at an element x if x belongs to both A and B; A intersects B if their intersection is inhabited. We say that A and B are disjoint if A does not intersect B; in plain language, they have no elements in common. A and B are disjoint if their intersection is empty, denoted A ∩ B = ∅. For example, the sets {1, 2} and {3, 4} are disjoint, while the set of even numbers intersects the set of multiples of 3 at 0, 6, 12, 18 and other numbers. The most general notion is the intersection of an arbitrary nonempty collection of sets. If M is a nonempty set whose elements are themselves sets, then x is an element of the intersection of M if and only if, for every element A of M, x is an element of A. The notation for this last concept can vary considerably. Set theorists will sometimes write ⋂M, while others will instead write ⋂A∈M A. The latter notation can be generalized to ⋂i∈I Ai, which refers to the intersection of the collection {Ai : i ∈ I}. Here I is a nonempty set, and Ai is a set for every i in I. In the case that the index set I is the set of natural numbers, notation analogous to that of an infinite series may be seen. When formatting is difficult, this can also be written A1 ∩ A2 ∩ A3 ∩ ⋯, even though strictly speaking this means A1 ∩ (A2 ∩ (A3 ∩ ⋯)). 
Finally, let us note that whenever the symbol ∩ is placed before other symbols, instead of between them, it should be of a larger size. Note that in the previous section we excluded the case where M was the empty set
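As with union, Python's built-in sets illustrate these definitions directly. The following sketch is added here for illustration and is not from the article:

```python
A = {1, 2, 3, 4}
B = {2, 4, 6, 8}

# Binary intersection as an infix operator: A & B == {2, 4}
print(A & B)

# A is disjoint from {5, 7}: their intersection is empty
print(A.isdisjoint({5, 7}))      # True

# Intersection of a nonempty collection of sets:
# set.intersection(A, B, {4, 5}) == {4}
M = [A, B, {4, 5}]
print(set.intersection(*M))

# Commutativity: A ∩ B = B ∩ A
print((A & B) == (B & A))        # True
```

Taking the intersection over a collection requires the collection to be nonempty, just as noted above for the set M.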
30.
Greatest common divisor
–
In mathematics, the greatest common divisor (GCD) of two or more integers, when at least one of them is not zero, is the largest positive integer that is a divisor of each of the given numbers. For example, the GCD of 8 and 12 is 4. The greatest common divisor is also known as the greatest common factor, highest common factor, greatest common measure, or highest common divisor. This notion can be extended to polynomials and other commutative rings. In this article we will denote the greatest common divisor of two integers a and b as gcd(a, b). What is the greatest common divisor of 54 and 24? The number 54 can be expressed as a product of two integers in several different ways: 54 × 1 = 27 × 2 = 18 × 3 = 9 × 6. Thus the divisors of 54 are 1, 2, 3, 6, 9, 18, 27, 54. Similarly, the divisors of 24 are 1, 2, 3, 4, 6, 8, 12, 24. The numbers that these two lists share in common are the common divisors of 54 and 24: 1, 2, 3, 6. The greatest of these is 6; that is, the greatest common divisor of 54 and 24. The greatest common divisor is useful for reducing fractions to lowest terms. For example, gcd(42, 56) = 14; therefore, 42/56 = (3 ⋅ 14)/(4 ⋅ 14) = 3/4. Two numbers are called relatively prime, or coprime, if their greatest common divisor equals 1; for example, 9 and 28 are relatively prime. For example, a 24-by-60 rectangular area can be divided into a grid of 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares, or 12-by-12 squares; therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can thus be divided into a grid of 12-by-12 squares. In practice, this method is only feasible for small numbers; computing prime factorizations in general takes far too long. Here is another example, illustrated by a Venn diagram. Suppose it is desired to find the greatest common divisor of 48 and 180. First, find the prime factorizations of the two numbers: 48 = 2 × 2 × 2 × 2 × 3 and 180 = 2 × 2 × 3 × 3 × 5. What they share in common is two 2s and a 3: Greatest common divisor = 2 × 2 × 3 = 12; Least common multiple = 2 × 2 × 2 × 2 × 3 × 3 × 5 = 720. 
To compute gcd(48, 18), divide 48 by 18 to get a quotient of 2 and a remainder of 12. Then divide 18 by 12 to get a quotient of 1 and a remainder of 6. Then divide 12 by 6 to get a remainder of 0, which means that 6 is the gcd. Note that we ignored the quotient in each step except to notice when the remainder reached 0, signalling that we had arrived at the answer. Formally the algorithm can be described as gcd(a, 0) = a and gcd(a, b) = gcd(b, a mod b). It is not known whether the GCD can be computed efficiently in parallel; in this sense the GCD problem is analogous to e.g. the integer factorization problem, which has no known polynomial-time algorithm, but is not known to be NP-complete. Shallcross et al. showed that a related problem is NC-equivalent to the problem of integer linear programming with two variables; if either problem is in NC or is P-complete, the other is as well
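The Euclidean algorithm described above translates almost verbatim into Python. This minimal sketch is added for illustration (Python's standard library already provides the same functionality as `math.gcd`):

```python
def gcd(a, b):
    """Euclidean algorithm: gcd(a, 0) = a and gcd(a, b) = gcd(b, a mod b).

    Only the remainders matter; the quotients are discarded, just as in
    the worked example above.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))    # 6, matching the step-by-step computation above
print(gcd(54, 24))    # 6
print(gcd(48, 180))   # 12
```

Unlike trial division over prime factorizations, this runs quickly even for very large inputs, which is why it is preferred in practice.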
31.
Least common multiple
–
The least common multiple (LCM) of two integers a and b is the smallest positive integer that is divisible by both a and b. Since division of integers by zero is undefined, this definition has meaning only if a and b are both different from zero; however, some authors define lcm(a, 0) as 0 for all a. The LCM of the denominators of two fractions is the lowest common denominator that must be determined before the fractions can be added, subtracted, or compared. The LCM of more than two integers is also well-defined: it is the smallest positive integer that is divisible by each of them. A multiple of a number is the product of that number and an integer. For example, 10 is a multiple of 5 because 5 × 2 = 10; because 10 is the smallest positive integer that is divisible by both 5 and 2, it is the least common multiple of 5 and 2. By the same principle, 10 is the least common multiple of −5 and −2 as well. In this article we will denote the least common multiple of two integers a and b as lcm(a, b); the programming language J expresses it as a *. b. What is the LCM of 4 and 6? Multiples of 4 are 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, … and the multiples of 6 are 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, …. Common multiples of 4 and 6 are simply the numbers that are in both lists: 12, 24, 36, 48, 60, 72, …. So, from this list of the first few common multiples of the numbers 4 and 6, their least common multiple is 12. When adding, subtracting, or comparing simple fractions, the least common multiple of the denominators is used; for instance, 2/21 + 1/6 = 4/42 + 7/42 = 11/42, where the denominator 42 was used because it is the least common multiple of 21 and 6. The formula lcm(a, b) = |a ⋅ b| / gcd(a, b) reduces the computation of the least common multiple to that of the greatest common divisor. This formula is also valid when exactly one of a and b is 0. However, if both a and b are 0, this formula would cause division by zero; lcm(0, 0) = 0 is a special case. There are fast algorithms for computing the GCD that do not require the numbers to be factored, such as the Euclidean algorithm. To return to the example above, lcm(21, 6) = 21 ⋅ 6 / gcd(21, 6) = 126 / gcd(21, 6) = 126 / 3 = 42. Because gcd(a, b) is a divisor of both a and b, it is more efficient to compute the LCM by dividing before multiplying. This reduces the size of one input for both the division and the multiplication, and reduces the required storage needed for intermediate results. 
Because gcd(a, b) is a divisor of both a and b, the division is guaranteed to yield an integer, so the intermediate result can be stored in an integer. Done this way, the previous example becomes lcm(21, 6) = (21 / gcd(21, 6)) ⋅ 6 = (21 / 3) ⋅ 6 = 7 ⋅ 6 = 42. The unique factorization theorem says that every positive integer greater than 1 can be written in only one way as a product of prime numbers
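The divide-before-multiply strategy just described can be sketched in Python using the standard library's `math.gcd`; the `lcm` wrapper below is our own illustration (recent Python versions also ship `math.lcm`):

```python
from math import gcd

def lcm(a, b):
    """LCM via the GCD, dividing before multiplying to keep numbers small."""
    if a == 0 and b == 0:
        return 0                      # conventional special case lcm(0, 0) = 0
    # a // gcd(a, b) is exact, since gcd(a, b) divides a
    return abs(a // gcd(a, b) * b)

print(lcm(21, 6))    # 42, matching the worked example above
print(lcm(4, 6))     # 12
print(lcm(-5, -2))   # 10: the LCM is taken to be positive
print(lcm(7, 0))     # 0, under the convention lcm(a, 0) = 0
```

Dividing first means the intermediate value (here 21/3 = 7) stays no larger than the inputs, which matters when a and b are huge.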
32.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables (also called indeterminates) and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single indeterminate x is x² − 4x + 7; an example in three variables is x³ + 2xyz² − yz + 1. Polynomials appear in a wide variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra and algebraic geometry. The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. The word polynomial was first used in the 17th century. The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value; it is thus more correct to call it an indeterminate. However, when one considers the function defined by the polynomial, then x represents the argument of the function; many authors use these two words interchangeably. It is a common convention to use uppercase letters for the indeterminates. A polynomial P in the indeterminate x defines a function that maps each value a to the result of substituting a for x; one may use this function over any domain where addition and multiplication are defined. In particular, when a is the indeterminate x itself, the image of x by this function is the polynomial P itself. This equality allows writing "let P be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x". A polynomial is an expression that can be built from constants and indeterminates by means of addition, subtraction, and multiplication. The word indeterminate means that x represents no particular value, although any value may be substituted for it; the mapping that associates the result of substitution to the substituted value is a function, called a polynomial function. A polynomial in a single indeterminate can be expressed concisely using summation notation as ∑_{k=0}^{n} a_k x^k; that is, a polynomial can either be zero or can be written as the sum of a finite number of non-zero terms. 
Each term consists of the product of a number, called the coefficient of the term, and a finite number of indeterminates raised to non-negative integer powers. Because x = x¹, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial; the degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial, 0, is generally treated as not defined
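Evaluating the sum ∑ a_k x^k at a given value can be sketched in a few lines of Python using Horner's rule; this example is an illustration added here, and the function name `eval_poly` is ours:

```python
def eval_poly(coeffs, x):
    """Evaluate the polynomial sum(coeffs[k] * x**k) at x by Horner's rule.

    coeffs lists the coefficients a_0, a_1, ..., a_n from degree 0 upward.
    Horner's rule rewrites a_0 + a_1*x + a_2*x^2 as a_0 + x*(a_1 + x*a_2),
    using one multiplication and one addition per coefficient.
    """
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# P(x) = x^2 - 4x + 7, the single-indeterminate example above
print(eval_poly([7, -4, 1], 2))   # 3, since 4 - 8 + 7 = 3
print(eval_poly([7, -4, 1], 0))   # 7, the constant term
```

Substituting different values for x yields the polynomial function associated with P, as described above.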
33.
Complex number
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i² = −1. In this expression, a is the real part and b is the imaginary part of the complex number. If z = a + bi, then ℜz = a and ℑz = b. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have practical applications in many fields, including physics, chemistry, biology, economics, and electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them "fictitious" during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation (x + 1)² = −9 has no real solution, since the square of a real number cannot be negative. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i² = −1, so that solutions to equations like the preceding one can be found; in this case the solutions are −1 + 3i and −1 − 3i. According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi; for example, −3.5 + 2i is a complex number. The real number a is called the real part of the complex number a + bi, and the real number b is called its imaginary part. By this convention the imaginary part does not include the imaginary unit: hence b, not bi, is the imaginary part. The real part of a complex number z is denoted by Re(z) or ℜ(z), and the imaginary part by Im(z) or ℑ(z). For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z) ⋅ i. 
This expression is known as the Cartesian form of z. A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0
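Python has complex numbers built in, which makes the definitions above easy to check; note that Python writes the imaginary unit as j rather than i. This sketch is an illustration added here, not part of the article:

```python
z = -3.5 + 2j          # the example complex number -3.5 + 2i

print(z.real)          # -3.5, the real part Re(z)
print(z.imag)          # 2.0, the imaginary part Im(z): b, not bi

# The defining property of the imaginary unit: i^2 = -1
print((1j) ** 2)       # (-1+0j)

# A solution of (x + 1)^2 = -9, which has no real solution
x = -1 + 3j
print((x + 1) ** 2)    # (-9+0j)

# Cartesian form: z = Re(z) + Im(z) * i
print(z == complex(z.real, z.imag))   # True
```

The check on the last line reconstructs z from its real and imaginary parts, mirroring the identity z = Re(z) + Im(z) · i.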