In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series and functions. A ring is an abelian group with a second binary operation that is associative, is distributive over the abelian group operation, and has an identity element. By extension from the integers, the abelian group operation is called addition and the second binary operation is called multiplication. Whether a ring is commutative or not has profound implications on its behavior as an abstract object; as a result, commutative ring theory, known as commutative algebra, is a key topic in ring theory. Its development has been influenced by problems and ideas occurring in algebraic number theory and algebraic geometry. Examples of commutative rings include the set of integers equipped with the addition and multiplication operations, the set of polynomials equipped with their addition and multiplication, the coordinate ring of an affine algebraic variety, and the ring of integers of a number field.
Examples of noncommutative rings include the ring of n × n real square matrices with n ≥ 2, group rings in representation theory, operator algebras in functional analysis, rings of differential operators in the theory of differential operators, and the cohomology ring of a topological space in topology. The conceptualization of rings was completed in the 1920s. Key contributors include Dedekind, Hilbert and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. Afterward, they proved to be useful in other branches of mathematics such as geometry and mathematical analysis. The most familiar example of a ring is the set of all integers, Z, consisting of the numbers …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, … The familiar properties for addition and multiplication of integers serve as a model for the axioms for rings. A ring is a set R equipped with two binary operations + and · satisfying the following three sets of axioms, called the ring axioms. R is an abelian group under addition, meaning that: (a + b) + c = a + (b + c) for all a, b, c in R. a + b = b + a for all a, b in R.
There is an element 0 in R such that a + 0 = a for all a in R. For each a in R there exists −a in R such that a + (−a) = 0. R is a monoid under multiplication, meaning that: (a · b) · c = a · (b · c) for all a, b, c in R. There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R. Multiplication is distributive with respect to addition, meaning that: a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c) for all a, b, c in R. (b + c) ⋅ a = (b ⋅ a) + (c ⋅ a) for all a, b, c in R. As explained in § History below, many authors follow an alternative convention in which a ring is not defined to have a multiplicative identity; this article adopts the convention that, unless otherwise stated, a ring is assumed to have such an identity. A structure satisfying all the axioms except the requirement that there exists a multiplicative identity element is called a rng. For example, the set of even integers with the usual + and ⋅ is a rng, but not a ring. The operations + and ⋅ are called addition and multiplication, respectively. The multiplication symbol ⋅ is often omitted, so that the juxtaposition of ring elements is interpreted as multiplication.
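The three sets of axioms above can be checked exhaustively for a small finite ring. As an illustrative sketch (not part of the original text), the following code verifies every axiom for the integers modulo 4, with addition and multiplication taken mod 4:

```python
# Exhaustively verify the ring axioms for Z/4Z, the integers modulo 4.
# Illustrative sketch: the element set and operations are ordinary
# modular arithmetic, chosen here only as a small concrete example.

R = range(4)
add = lambda a, b: (a + b) % 4
mul = lambda a, b: (a * b) % 4

# R is an abelian group under addition.
assert all(add(add(a, b), c) == add(a, add(b, c)) for a in R for b in R for c in R)
assert all(add(a, b) == add(b, a) for a in R for b in R)
assert all(add(a, 0) == a for a in R)                   # additive identity 0
assert all(any(add(a, x) == 0 for x in R) for a in R)   # additive inverses exist

# R is a monoid under multiplication.
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a in R for b in R for c in R)
assert all(mul(a, 1) == a == mul(1, a) for a in R)      # multiplicative identity 1

# Multiplication distributes over addition on both sides.
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) for a in R for b in R for c in R)
assert all(mul(add(b, c), a) == add(mul(b, a), mul(c, a)) for a in R for b in R for c in R)

print("Z/4Z satisfies the ring axioms")
```

The same brute-force check works for any finite ring presented as a set with two operation tables.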
For example, xy means x ⋅ y. Although ring addition is commutative, ring multiplication is not required to be commutative: ab need not equal ba. Rings that satisfy commutativity for multiplication are called commutative rings. Books on commutative algebra or algebraic geometry often adopt the convention that ring means commutative ring, to simplify terminology. In a ring, elements are not required to have multiplicative inverses; a commutative ring in which every nonzero element has a multiplicative inverse is called a field. The additive group of a ring is the underlying set equipped with only the operation of addition. Although the definition requires that the additive group be abelian, this can be inferred from the other ring axioms. Some basic properties of a ring follow from the axioms: the additive identity, the additive inverse of each element, and the multiplicative identity are unique. For any element x in a ring R, one has x0 = 0 = 0x and (−1)x = −x. If 0 = 1 in a ring R, then R has only one element and is called the zero ring. The binomial formula holds for any commuting pair of elements. Equip the set Z/4Z = {0, 1, 2, 3} with the following operat
Mathematics includes the study of such topics as quantity, structure and change. Mathematicians use patterns to formulate new conjectures; when mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Since the pioneering work of Giuseppe Peano, David Hilbert and others on axiomatic systems in the late 19th century, it has become customary to view mathematical research as establishing truth by rigorous deduction from appropriately chosen axioms and definitions. Mathematics developed at a slow pace until the Renaissance, when mathematical innovations interacting with new scientific discoveries led to a rapid increase in the rate of mathematical discovery that has continued to the present day.
Mathematics is essential in many fields, including natural science, medicine and the social sciences. Applied mathematics has led to new mathematical disciplines, such as statistics and game theory. Mathematicians may engage in pure mathematics without having any application in mind, but practical applications for what began as pure mathematics are often discovered later. The history of mathematics can be seen as an ever-increasing series of abstractions. The first abstraction, shared by many animals, was that of numbers: the realization that a collection of two apples and a collection of two oranges have something in common, namely the quantity of their members. As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may have also recognized how to count abstract quantities, like time: days and years. Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic and geometry for taxation and other financial calculations, for building and construction, and for astronomy.
The most ancient mathematical texts from Mesopotamia and Egypt are from 2000–1800 BC. Many early texts mention Pythagorean triples, and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical development after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic first appears in the archaeological record. The Babylonians possessed a place-value system and used a sexagesimal numeral system, still in use today for measuring angles and time. Beginning in the 6th century BC with the Pythagoreans, the Ancient Greeks began a systematic study of mathematics as a subject in its own right. Around 300 BC, Euclid introduced the axiomatic method still used in mathematics today, consisting of definition, axiom, theorem and proof. His textbook Elements is considered the most successful and influential textbook of all time. The greatest mathematician of antiquity is often held to be Archimedes of Syracuse, who developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola via the summation of an infinite series, in a manner not too dissimilar from modern calculus.
Other notable achievements of Greek mathematics are conic sections, trigonometry (Hipparchus of Nicaea) and the beginnings of algebra. The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics. Other notable developments of Indian mathematics include the modern definition of sine and cosine, and an early form of infinite series. During the Golden Age of Islam in the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics; the most notable achievement of Islamic mathematics was the development of algebra. Other notable achievements of the Islamic period are advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system. Many notable mathematicians from this period were Persian, such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī. During the early modern period, mathematics began to develop at an accelerating pace in Western Europe.
The development of calculus by Newton and Leibniz in the 17th century revolutionized mathematics. Leonhard Euler was the most notable mathematician of the 18th century, contributing numerous theorems and discoveries. The foremost mathematician of the 19th century was the German mathematician Carl Friedrich Gauss, who made numerous contributions to fields such as algebra, differential geometry, matrix theory, number theory and statistics. In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show that any consistent axiomatic system powerful enough to describe arithmetic will contain true propositions that cannot be proved within it. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to
Fundamental theorem of arithmetic
In number theory, the fundamental theorem of arithmetic, also called the unique factorization theorem or the unique-prime-factorization theorem, states that every integer greater than 1 either is a prime number itself or can be represented as the product of prime numbers, and that this representation is unique, up to the order of the factors. For example, 1200 = 2^4 × 3^1 × 5^2 = 2 × 2 × 2 × 2 × 3 × 5 × 5 = 5 × 2 × 5 × 2 × 3 × 2 × 2 = ... The theorem says two things for this example: first, that 1200 can be represented as a product of primes, and second, that no matter how this is done, there will always be four 2s, one 3, two 5s, and no other primes in the product. The requirement that the factors be prime is necessary: factorizations containing composite numbers may not be unique (for example, 12 = 2 × 6 = 3 × 4). This theorem is one of the main reasons why 1 is not considered a prime number: if 1 were prime, factorization into primes would not be unique. Book VII, propositions 30, 31 and 32, and Book IX, proposition 14 of Euclid's Elements are essentially the statement and proof of the fundamental theorem.
If two numbers by multiplying one another make some number, and any prime number measure the product, it will also measure one of the original numbers. Proposition 30 is referred to as Euclid's lemma, and it is the key in the proof of the fundamental theorem of arithmetic. Any composite number is measured by some prime number. Proposition 31 is proved directly by infinite descent. Any number either is prime or is measured by some prime number. Proposition 32 is derived from proposition 31, and proves that the decomposition is possible. If a number be the least that is measured by prime numbers, it will not be measured by any other prime number except those originally measuring it. Book IX, proposition 14 is derived from Book VII, proposition 30, and proves that the decomposition is unique – a point critically noted by André Weil. Indeed, in this proposition the exponents are all equal to one, so nothing is said for the general case. Article 16 of Gauss' Disquisitiones Arithmeticae is an early modern statement and proof employing modular arithmetic.
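The existence half of the theorem can be made concrete with a short factorization routine. The sketch below (the function name and output format are my own choices, not from the source) finds the prime factorization of a positive integer by trial division:

```python
# Compute the prime factorization of a positive integer by trial division.
# Illustrative sketch: correctness rests on the fact that any remaining
# cofactor greater than 1 after dividing out all d <= sqrt(n) is prime.

def canonical_form(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:               # divide out d as many times as possible
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                           # whatever remains is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(canonical_form(1200))   # {2: 4, 3: 1, 5: 2}
print(canonical_form(999))    # {3: 3, 37: 1}
print(canonical_form(1001))   # {7: 1, 11: 1, 13: 1}
```

Uniqueness is what makes the returned dictionary well-defined: every order of trial divisors would produce the same exponents.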
Every positive integer n > 1 can be represented in exactly one way as a product of prime powers: n = p1^n1 p2^n2 ⋯ pk^nk = ∏_{i=1}^{k} pi^ni, where p1 < p2 < ... < pk are primes and the ni are positive integers. This representation is extended to all positive integers, including 1, by the convention that the empty product is equal to 1. This representation is called the canonical representation of n, or the standard form of n. For example, 999 = 3^3 × 37, 1000 = 2^3 × 5^3, 1001 = 7 × 11 × 13. Note that factors p^0 = 1 may be inserted without changing the value of n. In fact, any positive integer can be uniquely represented as an infinite product taken over all the positive prime numbers: n = 2^n1 3^n2 5^n3 7^n4 ⋯ = ∏_{i=1}^{∞} pi^ni, where a finite number of the ni are positive integers and the rest are zero. Allowing negative exponents provides a canonical form for positive rational numbers. The canonical representations of the product, greatest common divisor and least common multiple of two numbers a and b can be expressed in terms of the canonical representations of a and b themselves: a ⋅ b = 2^(a1 + b1) 3^(a2 + b2) 5^(a3 + b3) 7^(a4 + b4) ⋯ = ∏ pi^(ai + bi), gcd(a, b) = 2^min(a1, b1) 3^min(a2, b2
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. The concept of algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers, and in the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized Algoritmi.
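One of the ancient algorithms just mentioned, the sieve of Eratosthenes, can be sketched in a few lines of code (the implementation details here are my own illustrative choices, not from the source text):

```python
# Sieve of Eratosthenes: find all primes up to a limit by repeatedly
# crossing out the multiples of each prime found so far.

def sieve(limit):
    """Return a list of all prime numbers up to and including limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False          # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p, starting at p*p, as composite.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The procedure illustrates the definition above: a finite description that, from the input `limit`, passes through well-defined states and terminates with a definite output.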
A partial formalization of what would become the modern concept of algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in Latinizing the name of Muhammad ibn Musa al-Khwarizmi in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.
Al-Khwarizmi was the most read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, the English 'algorism', a corruption of his name, meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century. In English, algorism was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it was not until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris. Which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals.
An informal definition could be "a set of rules that defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. A program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm for determining the greatest common divisor of two integers. Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough† to list all members of an enumerably infinite set by writing out their names, one after another, in some notation, but humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human, capable of carrying out only elementary operations on symbols.
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus, Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large. Thus an algorithm can be an algebraic equation such as y = m + n – two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: Precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format.
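The quoted idea – explicit instructions for determining the nth member of an enumerably infinite set – can be sketched in code. The example below (an illustration of the concept with my own naming, not from the source) enumerates the set of prime numbers:

```python
# Explicit instructions for determining the nth member of an enumerably
# infinite set: here, the primes. Illustrative sketch only.

def nth_prime(n):
    """Return the nth prime (n = 1 gives 2), by testing candidates in order."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # A candidate is prime if no d in [2, sqrt(candidate)] divides it.
        if all(candidate % d != 0 for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

print([nth_prime(n) for n in range(1, 8)])   # [2, 3, 5, 7, 11, 13, 17]
```

No finite listing of the primes exists, but these finitely many instructions determine any member on demand, which is exactly what the quotation asks of an algorithm.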
In mathematics, a group is a set equipped with a binary operation which combines any two elements to form a third element in such a way that four conditions called group axioms are satisfied, namely closure, associativity, identity and invertibility. One of the most familiar examples of a group is the set of integers together with the addition operation, but groups are encountered in numerous areas within and outside mathematics, and help focus on essential structural aspects by detaching them from the concrete nature of the subject of study. Groups share a fundamental kinship with the notion of symmetry. For example, a symmetry group encodes symmetry features of a geometrical object: the group consists of the set of transformations that leave the object unchanged and the operation of combining two such transformations by performing one after the other. Lie groups are the symmetry groups used in the Standard Model of particle physics. The concept of a group arose from the study of polynomial equations, starting with Évariste Galois in the 1830s.
After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view of representation theory and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become an active area in group theory. The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4.
The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots. The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois' ideas were rejected by his contemporaries, and his work was published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy. Arthur Cayley's On the theory of groups, as depending on the symbolic equation θ^n = 1 gives the first abstract definition of a finite group. Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884. The third field contributing to group theory was number theory.
Certain abelian group structures had been used implicitly in Carl Friedrich Gauss' number-theoretical work Disquisitiones Arithmeticae, and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer made early attempts to prove Fermat's Last Theorem by developing groups describing factorization into prime numbers. The convergence of these various sources into a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques. Walther von Dyck introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an "abstract group", in the terminology of the time. As of the 20th century, groups gained wide recognition through the pioneering work of Ferdinand Georg Frobenius and William Burnside, who worked on representation theory of finite groups, as well as Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups, was studied by Hermann Weyl, Élie Cartan and many others.
Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley and later by the work of Armand Borel and Jacques Tits. The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by Aschbacher and Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research is ongoing to simplify the proof of this classification. These days, group theory is still an active mathematical branch, impacting many other fields. One of the most familiar groups is the set of integers Z, which consists of the numbers ..., −4, −3, −2, −1, 0, 1, 2, 3, 4, ... together with addition. The following properties of integer addition serve as a model for the group axioms given in the definition below.
For any two integers a and b, the sum a + b is an integer. That is, addition of integers always yields an integer; this property is known as closure under addition. For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words
In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two numbers, the largest number that divides both of them without leaving a remainder. It is named after the ancient Greek mathematician Euclid. It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105, and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal.
When that occurs, they are the GCD of the original two numbers. By reversing the steps, the GCD can be expressed as a sum of the two original numbers each multiplied by a positive or negative integer, e.g. 21 = 5 × 105 + (−2) × 252. The fact that the GCD can always be expressed in this way is known as Bézout's identity. The version of the Euclidean algorithm described above can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two. With this improvement, the algorithm never requires more steps than five times the number of digits of the smaller integer. This was proven by Gabriel Lamé in 1844, and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century. The Euclidean algorithm has many practical applications.
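Both procedures described above can be sketched side by side. The following code (an illustrative sketch; the function names are my own) implements the slow subtraction version and the faster remainder version:

```python
# Two versions of the Euclidean algorithm for the greatest common divisor.
# Illustrative sketch of the procedures described in the text.

def gcd_subtraction(a, b):
    """Original version: replace the larger number by the difference."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

def gcd_remainder(a, b):
    """Improved version: replace the larger number by its remainder."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_subtraction(252, 105))   # 21
print(gcd_remainder(252, 105))     # 21
```

The remainder version collapses many consecutive subtractions of the same number into a single division step, which is why its step count is bounded in terms of the number of digits of the smaller input.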
It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and also of methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. It can also be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations. The original algorithm was described only for natural numbers and geometric lengths, but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains.
The Euclidean algorithm calculates the greatest common divisor of two natural numbers a and b. The greatest common divisor g is the largest natural number that divides both a and b without leaving a remainder. Synonyms for the GCD include the greatest common factor, the highest common factor, the highest common divisor, and the greatest common measure. The greatest common divisor is written as gcd(a, b) or, more simply, as (a, b), although the latter notation is ambiguous, being also used for concepts such as an ideal in the ring of integers, which is related to the GCD. If gcd(a, b) = 1, then a and b are said to be coprime. This property does not imply that a or b are themselves prime numbers. For example, neither 6 nor 35 is a prime number, since they both have two prime factors: 6 = 2 × 3 and 35 = 5 × 7. Nevertheless, 6 and 35 are coprime. No natural number other than 1 divides both 6 and 35, since they have no prime factors in common. Let g = gcd(a, b). Since a and b are both multiples of g, they can be written a = mg and b = ng, and there is no larger number G > g for which this is true. The natural numbers m and n must be coprime, since any common factor could be factored out of m and n to make g greater.
Thus, any other number c that divides both a and b must also divide g. The greatest common divisor g of a and b is the unique positive common divisor of a and b that is divisible by any other common divisor c. The GCD can be visualized as follows. Consider a rectangular area a by b, and any common divisor c that divides both a and b exactly. The sides of the rectangle can be divided into segments of length c, which divides the rectangle into a grid of squares of side length c. The greatest common divisor g is the largest value of c for which this is possible. For illustration, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can be divided into a grid of 12-by-12 squares, with two squares along one edge and five squares along the other. The GCD of two numbers a and b is the product of the prime factors shared by the two numbers, where the same prime factor can be used multiple times, but only as long as the product of these factors divides both a and b.
For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the greatest common divisor of 1386 and 3213 eq