1.
Sequence
–
In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed. Like a set, it contains members, and the number of elements is called the length of the sequence. Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Formally, a sequence can be defined as a function whose domain is either the set of the natural numbers (for an infinite sequence) or the set of the first n natural numbers (for a sequence of finite length n). The position of an element in a sequence is its rank or index; whether the first element has index 0 or 1 depends on the context or on a specific convention. For example, (M, A, R, Y) is a sequence of letters with the letter 'M' first and 'Y' last. Likewise, a sequence that contains the number 1 at two different positions, such as (1, 2, 1, 3), is a valid sequence. Sequences can be finite, as in these examples, or infinite. The empty sequence is included in most notions of sequence, but may be excluded depending on the context.

A sequence can be thought of as a list of elements with a particular order. Sequences are useful in a number of mathematical disciplines for studying functions, spaces, and other mathematical structures using the convergence properties of sequences. In particular, sequences are the basis for series, which are important in differential equations and analysis. Sequences are also of interest in their own right and can be studied as patterns or puzzles, such as in the study of prime numbers.

There are a number of ways to denote a sequence, some of which are useful for specific types of sequences. One way to specify a sequence is to list the elements; for example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation can be used for infinite sequences as well: for instance, the sequence of positive odd integers can be written (1, 3, 5, 7, …). Listing is most useful for sequences with a pattern that can be easily discerned from the first few elements.
Other ways to denote a sequence are discussed after the examples. The prime numbers are the natural numbers greater than 1 that have no divisors other than 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, …). The prime numbers are widely used in mathematics, and specifically in number theory. The Fibonacci numbers are the integer sequence whose elements are the sum of the previous two elements. The first two elements are either 0 and 1 or 1 and 1, so that the sequence is (0, 1, 1, 2, 3, 5, 8, 13, …). For a large list of examples of integer sequences, see the On-Line Encyclopedia of Integer Sequences.
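A sequence defined by a rule can be realized directly as a function of its index. The following sketch generates the three example sequences from the passage above (the odd numbers, the primes, and the Fibonacci numbers, here starting 1, 1); the trial-division primality test is a deliberately simple illustration, not an efficient one.

```python
from itertools import count, islice

def odds():
    """The infinite sequence of positive odd integers: 1, 3, 5, 7, ..."""
    return count(1, 2)

def primes():
    """The prime numbers, by simple trial division."""
    for n in count(2):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n

def fibonacci():
    """The Fibonacci sequence, taking the first two elements as 1 and 1."""
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(odds(), 4)))       # the first four odd numbers
print(list(islice(primes(), 6)))     # the first six primes
print(list(islice(fibonacci(), 7)))  # the first seven Fibonacci numbers
```

Each generator is a function from indices to terms, which is exactly the formal definition of a sequence given above.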
2.
On-Line Encyclopedia of Integer Sequences
–
The On-Line Encyclopedia of Integer Sequences (OEIS), also cited simply as Sloane's, is an online database of integer sequences. It was created and is maintained by Neil Sloane, initially while he was a researcher at AT&T Labs; Sloane continues to be involved in the OEIS in his role as President of the OEIS Foundation. The OEIS records information on integer sequences of interest to professional mathematicians and amateurs, and is widely cited. As of 30 December 2016 it contained nearly 280,000 sequences; the database is searchable by keyword and by subsequence.

Neil Sloane started collecting integer sequences as a student in 1965 to support his work in combinatorics. The database was at first stored on punched cards, and he published selections from it in book form twice: A Handbook of Integer Sequences, containing 2,372 sequences in lexicographic order and assigned numbers from 1 to 2372, and The Encyclopedia of Integer Sequences (with Simon Plouffe), containing 5,488 sequences. These books were well received and, especially after the second publication, mathematicians supplied Sloane with a steady flow of new sequences. The collection became unmanageable in book form, and when the database had reached 16,000 entries Sloane decided to go online, first as an e-mail service. As a spin-off from the database work, Sloane founded the Journal of Integer Sequences in 1998. The database continues to grow at a rate of some 10,000 entries a year. Sloane personally managed his sequences for almost 40 years, but starting in 2002 a board of associate editors and volunteers has helped maintain the database. In 2004, Sloane celebrated the addition of the 100,000th sequence to the database, A100000. In 2006, the user interface was overhauled and more advanced search capabilities were added. In 2010 an OEIS wiki at OEIS.org was created to simplify the collaboration of the OEIS editors and contributors.

Besides integer sequences, the OEIS also catalogs sequences of fractions, the digits of transcendental numbers, complex numbers and so on by transforming them into integer sequences. Sequences of rationals are represented by two sequences: the sequence of numerators and the sequence of denominators. Important irrational numbers such as π = 3.1415926535897… are catalogued under representative integer sequences such as decimal expansions, binary expansions, or continued fraction expansions. The OEIS was limited to plain ASCII text until 2011, yet it still uses a form of conventional mathematical notation; Greek letters are represented by their full names, e.g. mu for μ. Every sequence is identified by the letter A followed by six digits, sometimes referred to without the leading zeros. Individual terms of sequences are separated by commas; digit groups are not separated by commas, periods, or spaces. In comments and formulas, a(n) represents the nth term of the sequence. Zero is often used to represent non-existent sequence elements. For example, A104157 enumerates the smallest prime of n² consecutive primes to form an n×n magic square of least magic constant, or 0 if no such magic square exists. The value of a(1) is 2, and a(3) is 1480028129, but there is no such 2×2 magic square, so a(2) is 0.
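Two of the conventions described above, representing rationals as a pair of integer sequences and identifying sequences by an A-number of six digits, are easy to illustrate. The sketch below is purely illustrative and does not touch the OEIS itself; the helper names are my own, and the harmonic numbers are used only as a convenient example of a sequence of rationals.

```python
from fractions import Fraction

def split_rationals(rationals):
    """Represent a sequence of rationals by two integer sequences:
    the numerators and the denominators, each fraction in lowest terms."""
    fracs = [Fraction(r) for r in rationals]
    return [f.numerator for f in fracs], [f.denominator for f in fracs]

def a_number(n):
    """Format an OEIS-style identifier: the letter A followed by six digits."""
    return f"A{n:06d}"

# The harmonic numbers 1, 3/2, 11/6, 25/12 as a pair of integer sequences:
nums, dens = split_rationals([Fraction(1), Fraction(3, 2),
                              Fraction(11, 6), Fraction(25, 12)])
print(nums, dens)        # [1, 3, 11, 25] [1, 2, 6, 12]
print(a_number(100000))  # A100000
```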
3.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope and definition. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.

Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right, with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω (manthanō), while the modern Greek equivalent is μαθαίνω. In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
4.
Natural number
–
In mathematics, the natural numbers are those used for counting and ordering. In common language, words used for counting are cardinal numbers and words used for ordering are ordinal numbers. Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, but in other writings that term is used instead for the integers. These chains of extensions make the natural numbers canonically embedded in the other number systems. Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics.

The most primitive method of representing a natural number is to put down a mark for each object. Later, a set of objects could be tested for equality, excess or shortage by striking out a mark and removing an object from the set. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones, and similarly for the number 4,622. A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation dates back as early as 700 BC by the Babylonians; the Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628. The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes.

Some Greek mathematicians treated the number 1 differently than larger numbers; independent studies also occurred at around the same time in India, China, and Mesoamerica. In 19th-century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. A school of Naturalism stated that the natural numbers were a direct consequence of the human psyche. Henri Poincaré was one of its advocates, as was Leopold Kronecker, who summarized his belief as "God made the integers, all else is the work of man." In opposition to the Naturalists, the constructivists saw a need to improve the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural but a consequence of definitions. Later, two classes of such formal definitions were constructed; still later, they were shown to be equivalent in most practical applications. The second class of definitions was introduced by Giuseppe Peano and is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several systems of set theory.
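The successor-and-predecessor structure underlying Peano arithmetic can be made concrete in a few lines. The sketch below is an illustrative encoding of my own choosing (zero as the empty tuple, successor as one more level of nesting), not a formalization of Peano's axioms; it shows how addition is defined purely from the successor operation.

```python
# Zero is the empty tuple; succ(n) wraps n in one more tuple.
ZERO = ()

def succ(n):
    """Successor S(n): each natural number has one."""
    return (n,)

def pred(n):
    """Unique predecessor of a non-zero natural number."""
    if n == ZERO:
        raise ValueError("zero has no predecessor")
    return n[0]

def add(m, n):
    """Addition defined recursively: m + 0 = m, and m + S(n) = S(m + n)."""
    return m if n == ZERO else succ(add(m, pred(n)))

def to_int(n):
    """Translate back to an ordinary Python integer for display."""
    return 0 if n == ZERO else 1 + to_int(pred(n))

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # 5
```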
5.
Product (mathematics)
–
In mathematics, a product is the result of multiplying, or an expression that identifies factors to be multiplied. Thus, for instance, 6 is the product of 2 and 3. The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, however, the product can depend on the order of the factors: matrix multiplication, for example, and multiplication in other algebras is in general non-commutative. There are many different kinds of products in mathematics, besides being able to multiply just numbers, polynomials or matrices; an overview of these different kinds of products is given here.

Placing several stones into a rectangular pattern with r rows and s columns gives r ⋅ s = ∑_{i=1}^{s} r = ∑_{j=1}^{r} s stones. Integers allow positive and negative numbers. The product of two quaternions can be found in the article on quaternions; it is interesting to note that in this case the product is in general not commutative. The product operator for the product of a sequence is denoted by the capital Greek letter pi, ∏. The product of a sequence consisting of only one number is just that number itself; the product of no factors at all is known as the empty product, and is equal to 1. Commutative rings have a product operation. Under the Fourier transform, convolution becomes point-wise function multiplication. Other products have very different names but convey essentially the same idea; a brief overview of these is given here. By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map R × V → V. A scalar product is a map ⋅ : V × V → R satisfying certain conditions (bilinearity, symmetry, and positive definiteness). From the scalar product, one can define a norm by letting ∥v∥ = √(v ⋅ v).

Now consider the composition of two linear mappings between finite-dimensional vector spaces. Let the linear mapping f map V to W, and let the linear mapping g map W to U. The composition g ∘ f then maps V to U, with (g ∘ f)(v) = g(f(v)). In matrix form this is g ∘ f = G F v, in which the i-row, j-column element of F, denoted by F_ij, is f_ji; the composition of more than two linear mappings can be similarly represented by a chain of matrix multiplications. If A is the matrix representing f and B is the matrix representing g with respect to chosen bases of U, V and W, then the matrix product B ⋅ A is the matrix representing g ∘ f. In other words, the matrix product is the description in coordinates of the composition of linear functions. For infinite-dimensional vector spaces, one also has the tensor product of Hilbert spaces and the topological tensor product. The tensor product, outer product and Kronecker product all convey the same general idea.
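Several of the products above can be checked concretely: the product of a sequence (including the one-element and empty cases), the scalar product with its induced norm, and the fact that the matrix product describes composition of linear maps. A minimal sketch in plain Python, with hand-rolled helpers rather than any linear-algebra library:

```python
from math import prod, sqrt

# Product of a sequence (capital-pi notation), including the edge cases:
print(prod([2, 3, 4]))  # 24
print(prod([7]))        # a one-element product is that number itself: 7
print(prod([]))         # the empty product is 1

# Scalar (dot) product, and the norm it induces: ||v|| = sqrt(v . v)
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(v):
    return sqrt(dot(v, v))

print(dot([1, 2], [3, 4]))  # 1*3 + 2*4 = 11
print(norm([3, 4]))         # 5.0

# Matrix product as composition of linear maps: B.A represents g∘f.
def matmul(B, A):
    """Product of matrices B (m×n) and A (n×p), an m×p matrix."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def apply(M, v):
    """Apply the linear map represented by matrix M to vector v."""
    return [dot(row, v) for row in M]

A = [[1, 2], [3, 4]]  # matrix of f
B = [[0, 1], [1, 0]]  # matrix of g (swaps coordinates)
v = [1, 1]
print(apply(matmul(B, A), v))  # (g∘f)(v)
print(apply(B, apply(A, v)))   # g(f(v)), the same vector
```

The last two lines print the same vector, which is the coordinate statement that B·A represents g ∘ f.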
6.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75 and 5 1⁄2 are not. The set of integers consists of zero, the positive natural numbers (also called whole numbers or counting numbers), and their additive inverses. This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen. ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the (rational) integers are the algebraic integers that are also rational numbers.

Like the natural numbers, Z is closed under the operations of addition and multiplication; that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers and, importantly, 0, Z (unlike the natural numbers) is also closed under subtraction. The integers form a unital ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not.

The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g., there is no integer x such that 2x = 1, because the left hand side is even. This means that Z under multiplication is not a group.

All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions are true in Z for all values of variables which are true in any unital commutative ring; note that certain non-zero integers map to zero in certain rings. The lack of zero-divisors in the integers means that the commutative ring Z is an integral domain.
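The closure properties above can be demonstrated directly: sums, differences and products of integers are integers, but quotients, negative powers, and the equation 2x = 1 all lead outside Z. A small sketch:

```python
# Integers are closed under addition, subtraction, and multiplication:
a, b = 7, -3
print(a + b, a - b, a * b)  # 4 10 -21, all integers

# ...but not under division or exponentiation:
print(7 / 2)    # 3.5, not an integer
print(2 ** -1)  # 0.5, not an integer

# No integer x satisfies 2x = 1 (the left-hand side is always even),
# so 2 has no multiplicative inverse in Z:
print(any(2 * x == 1 for x in range(-10**6, 10**6)))  # False
```

The finite search in the last line is only an illustration, of course; the parity argument in the text is the actual proof.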
7.
Combinatorics
–
Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general methods were developed. One of the oldest and most accessible parts of combinatorics is graph theory. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist or a combinatorist.

Basic combinatorial concepts and enumerative results appeared throughout the ancient world. The Greek historian Plutarch discusses an argument between Chrysippus and Hipparchus over a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. In the Ostomachion, Archimedes considers a tiling puzzle. In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra provided formulae for the number of permutations and combinations. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations. During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth; works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J. J. Sylvester and Percy MacMahon helped lay the foundation for enumerative combinatorics. Graph theory also enjoyed an explosion of interest at the same time, especially in connection with the four color problem. In the second half of the 20th century, combinatorics enjoyed a rapid growth. In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc.

These connections blurred the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description; the Fibonacci numbers are the basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. Partition theory studies various enumeration and asymptotic problems related to integer partitions; originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory. Graphs are basic objects in combinatorics.
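A standard illustration of the Fibonacci numbers as an enumerative problem (one classic instance, not necessarily the one the text has in mind) is counting the tilings of a 1×n strip by 1×1 squares and 1×2 dominoes: a tiling either ends in a square, leaving a strip of length n−1, or in a domino, leaving a strip of length n−2, so the counts satisfy the Fibonacci recurrence.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tilings(n):
    """Count tilings of a 1×n strip by squares and dominoes.
    The counts satisfy t(n) = t(n-1) + t(n-2), the Fibonacci recurrence."""
    if n in (0, 1):
        return 1  # the empty tiling, or a single square
    return tilings(n - 1) + tilings(n - 2)

print([tilings(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```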
8.
Algebra
–
Algebra is one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating these symbols; as such, it includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine; abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians.

Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are unknown or allowed to take on many values. For example, in x + 2 = 5 the letter x is unknown, but the law of inverses can be used to discover its value: x = 3. In E = mc², the letters E and m are variables, and the letter c is a constant, the speed of light in a vacuum. Algebra gives methods for solving equations and expressing formulas that are much easier than the older method of writing everything out in words.

The word algebra is also used in certain specialized ways. A special kind of mathematical object in abstract algebra is called an "algebra". A mathematician who does research in algebra is called an algebraist. The word algebra comes from the Arabic الجبر (al-jabr), from the title of the book Ilm al-jabr wa'l-muḳābala by the Persian mathematician and astronomer al-Khwarizmi. The word entered the English language during the fifteenth century, from either Spanish, Italian, or Medieval Latin. It originally referred to the surgical procedure of setting broken or dislocated bones; the mathematical meaning was first recorded in the sixteenth century. The word algebra has several related meanings in mathematics, as a single word or with qualifiers.

As a single word without an article, "algebra" names a broad part of mathematics. As a single word with an article or in the plural, "an algebra" or "algebras" denotes a specific mathematical structure, whose precise definition depends on the author. Usually the structure has an addition, a multiplication, and a scalar multiplication. When some authors use the term "algebra", they make a subset of the following additional assumptions: associative, commutative, unital, and/or finite-dimensional. In universal algebra, the word "algebra" refers to a generalization of the above concept. With a qualifier, there is the same distinction: without an article, it means a part of algebra, such as linear algebra or elementary algebra; with an article, it means an instance of some abstract structure, like a Lie algebra. Sometimes both meanings exist for the same qualifier, as in the sentence: Commutative algebra is the study of commutative rings, which are commutative algebras over the integers.
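The "law of inverses" step used to solve x + 2 = 5 above can be written out mechanically: adding the additive inverse of 2 to both sides isolates x. A trivial sketch (the function name is my own):

```python
def solve_linear(a, b):
    """Solve a + x = b for x by the law of inverses: x = b + (-a)."""
    return b - a

x = solve_linear(2, 5)  # the equation x + 2 = 5
print(x)                # 3
assert x + 2 == 5       # the solution satisfies the original equation
```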
9.
Mathematical analysis
–
Mathematical analysis is the branch of mathematics dealing with limits and related theories, such as differentiation, integration, measure, infinite series, and analytic functions. These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness or of specific distances between objects.

Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems. In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century AD to find the area of a circle. Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century. The Indian mathematician Bhāskara II gave examples of the derivative and used what is now known as Rolle's theorem in the 12th century. In the 14th century, Madhava of Sangamagrama developed infinite series expansions, like the power series; his followers at the Kerala school of astronomy and mathematics further expanded his works, up to the 16th century. The modern foundations of analysis were established in 17th-century Europe. During this period, calculus techniques were applied to approximate discrete problems by continuous ones. In the 18th century, Euler introduced the notion of a mathematical function. Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816.

In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work; instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition-of-limit approach, thus founding the modern field of mathematical analysis. In the middle of the 19th century Riemann introduced his theory of integration. The last third of the century saw the arithmetization of analysis by Weierstrass, who thought that geometric reasoning was inherently misleading, and introduced the epsilon-delta definition of limit. Then, mathematicians started worrying that they were assuming the existence of a continuum of real numbers without proof. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the size of the set of discontinuities of real functions. Also, "monsters", such as continuous but nowhere differentiable functions, began to be investigated.
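The epsilon-delta definition mentioned above can be exercised numerically on a concrete limit. For f(x) = x² at a = 2 (with limit L = 4), the choice δ = min(1, ε/5) works, because |x − 2| < δ ≤ 1 forces |x + 2| < 5, hence |x² − 4| = |x − 2|·|x + 2| < 5δ ≤ ε. The random sampling below is only a spot check of that algebraic argument, not a proof:

```python
import random

def check_epsilon_delta(eps, trials=10_000):
    """Spot-check the (ε, δ) claim for lim_{x→2} x² = 4 with δ = min(1, ε/5)."""
    delta = min(1.0, eps / 5.0)
    for _ in range(trials):
        x = 2 + random.uniform(-delta, delta)  # any x with |x − 2| ≤ δ
        if abs(x * x - 4) >= eps:              # would falsify the claim
            return False
    return True

print(all(check_epsilon_delta(eps) for eps in (1.0, 0.1, 0.001)))  # True
```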
10.
Permutation
–
In mathematics, the notion of permutation relates to the act of arranging all the members of a set into some sequence or order. Permutations differ from combinations, which are selections of some members of a set where order is disregarded. For example, written as tuples, there are six permutations of the set {1, 2, 3}, namely (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), and (3, 2, 1); these are all the possible orderings of this three-element set. As another example, an anagram of a word, all of whose letters are different, is a permutation of its letters; in this example, the letters are already ordered in the original word and the anagram is a reordering of the letters. The study of permutations of finite sets is an important topic in the field of combinatorics. Permutations occur, in more or less prominent ways, in almost every area of mathematics. For similar reasons permutations arise in the study of sorting algorithms in computer science. The number of permutations of n distinct objects is n factorial, usually written as n!, which means the product of all positive integers less than or equal to n.

In algebra, and particularly in group theory, a permutation of a set S is defined as a bijection from S to itself; that is, it is a function from S to S for which every element occurs exactly once as an image value. This is related to the rearrangement of the elements of S in which each element s is replaced by the corresponding f(s). The collection of such permutations forms a group called the symmetric group of S. The key to this group's structure is the fact that the composition of two permutations results in another rearrangement. Permutations may act on structured objects by rearranging their components, or by certain replacements of symbols. In elementary combinatorics, the k-permutations, or partial permutations, are the ordered arrangements of k distinct elements selected from a set. When k is equal to the size of the set, these are the permutations of the set. Fabian Stedman in 1677 described factorials when explaining the number of permutations of bells in change ringing.

Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 12 and 21. He then explains that with three bells there are "three times two figures to be produced out of three", which again is illustrated. His explanation involves "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain". He then moves on to four bells and repeats the casting-away argument, showing that there will be four different sets of three. Effectively, this is a recursive process. He continues with five bells using the casting-away method and tabulates the resulting 120 combinations. At this point he gives up and remarks, "Now the nature of these methods is such …". In modern mathematics there are many similar situations in which understanding a problem requires studying certain permutations related to it. There are two equivalent common ways of regarding permutations, sometimes called the "active" and "passive" forms, or in older terminology "substitutions" and "permutations"; which form is preferable depends on the type of questions being asked in a given discipline. The active way to regard permutations of a set S is to think of them as the bijections from S to itself. Thus, the permutations are thought of as functions which can be composed with each other, forming groups of permutations.
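Stedman's casting-away argument translates almost line for line into a recursive enumeration: fix each bell in turn ("cast it away") and recursively vary the rest, which yields n! orderings for n bells. A minimal sketch in that spirit:

```python
from math import factorial

def variations(bells):
    """Enumerate all orderings of a row of bells by the 'casting away'
    argument: cast away each bell in turn, recursively vary the rest."""
    if len(bells) <= 1:
        return [list(bells)]
    rows = []
    for i, b in enumerate(bells):          # cast away bell b...
        rest = bells[:i] + bells[i + 1:]   # ...and the rest will remain
        for tail in variations(rest):
            rows.append([b] + tail)
    return rows

rows = variations([1, 2, 3, 4, 5])
print(len(rows), factorial(5))  # 120 120: five bells give 120 changes
```

The recursion mirrors Stedman's observation that the changes on n bells comprise n copies of the changes on n − 1 bells.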
11.
Fabian Stedman
–
Fabian Stedman was a leading figure in the early history of campanology, particularly in the field of method ringing. His two books Tintinnalogia and Campanalogia are the first two publications on the subject. Fabian Stedman was the third son of the Reverend Francis Stedman. His father, Francis Stedman, was born in Aston Munslow, Shropshire, in 1598, and took Holy Orders in 1625 at the parish of Yarkhill, Herefordshire. Francis had seven children by two wives; the eldest was Francis Junior, who followed his father and became Rector of the parish of Stoke Lacy, Herefordshire, in 1660. Fabian Stedman was born in 1640 and baptised at Yarkhill Church on 7 December of that year. At the age of 15 he went to London to learn the trade of master printing, apprenticed to Daniel Pakeman. It was while in London, however, that he became a well-known bellringer. While in London, Fabian became a member of the Scholars of Cheapside, a society of ringing that practised at St Mary-le-Bow, the famous "great bell of Bow" from the nursery rhyme. He acted as their treasurer in 1662. It seems the society then disbanded, and he applied to be a member of the Ancient Society of College Youths; the College Youths accepted him in 1664, at the age of 23. Fabian Stedman acted as publisher of the first book on change ringing, Tintinnalogia, written by Richard Duckworth. The book was published in 1667 and seems to have been popular, as a second printing followed a year later. It is said that he was appointed clerk to St Benet's Church in Cambridge in 1670, and to have instructed the ringers there. Campanalogia was written solely by Fabian in 1677, also the year he became steward to the College Youths; in 1682 he became Master of the College Youths. Of his later life little is known, other than that it seems not to have involved ringing; he changed jobs and became auditor to Customs and Excise for the Crown.

He wrote his will on 17 October 1713 and died later that year; he was buried at the parish church of St Andrew Undershaft in the City of London on 16 November. The exact date of his death is not known. On the first page of Tintinnalogia are the words "by a lover of that art, F. Stedman". Fabian will be remembered for his principle, which is commonly rung as much today as it was in the 17th century: Stedman Doubles to Cinques is rung in many parish churches in the British Isles and in other countries which practise the English style of method ringing. Bells in English churches, though very carefully tuned in the scale, are not used for tunes. If rung in order downwards they are said to be "ringing rounds"; if the order changes according to a predetermined pattern, they are "ringing the changes" – hence the activity of church bell ringing is usually simply known as change-ringing. Because a bell's swing takes a time which cannot be much altered by the ringer, a bell can move only gradually through the striking order, exchanging places with a neighbouring bell from one row to the next.
12.
Change ringing
–
Change ringing is the art of ringing a set of tuned bells in a controlled manner to produce variations in their striking sequences. This culminated in the custom of ringing bells through a full circle, which enabled ringers to accurately ring ordered sequences. The considerable weights of full-circle tower bells also mean they cannot be easily stopped or started. Change ringing is practised worldwide, but it is by far most common on church bells in English churches. Today, some towers have as many as sixteen bells that can be rung together, though six or eight bells are more common. The highest-pitched bell is known as the treble, and the lowest is the tenor. For convenience, the bells are referred to by number, with the treble being number 1. The bells are usually tuned to a diatonic major scale, with the tenor bell being the tonic note of the scale. Some towers contain additional bells so that different subsets of the full number can be rung. For instance, many 12-bell towers have a flat sixth, which if rung instead of the normal number 6 bell allows bells 2 to 9 to be rung as a light diatonic octave. The bells in a tower reside in the bell chamber or belfry, usually with louvred windows to enable the sound to escape. The bells are mounted within a bellframe of steel or wood. Each bell is suspended from a headstock fitted on trunnions mounted to the belfry framework so that the bell assembly can rotate. The headstock is fitted with a stay, which, in conjunction with a slider, limits the bell's rotation to a little more than a full circle. To the headstock a large wheel is fitted, to which a rope is attached. The rope wraps and unwraps as the bell rotates backwards and forwards; this is full-circle ringing, quite different from fixed or limited-motion bells, which chime. Within the bell the clapper is constrained to swing in the direction that the bell swings. The clapper is a rigid steel or wrought iron bar with a large ball to strike the bell. 
The thickest part of the mouth of the bell is called the soundbow. Beyond the ball is a flight, which controls the speed of the clapper; in very small bells this can be nearly as long as the rest of the clapper. Below the bell chamber there may be one or more sound chambers, through which the rope passes before it drops into the ringing chamber or room. Typically, the length of the rope is such that it falls close to or onto the floor of the ringing chamber. About 5 feet from the floor, the rope has a grip called the sally, while the lower end of the rope is doubled over to form an easily held tail-end
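The "predetermined pattern" of changes mentioned above can be illustrated in code. Plain hunt, the simplest such pattern, is not named in this passage, but it generates changes by alternately swapping adjacent pairs of bells. The following Python sketch (the function name and structure are my own, not from the source) produces one full course on six bells:

```python
def plain_hunt(n):
    """Generate one full course of plain hunt on n bells.

    Each change swaps adjacent pairs of bells, alternating between
    swaps starting at position 0 and swaps starting at position 1.
    """
    row = list(range(1, n + 1))   # start from "rounds": 1 2 3 ... n
    rows = [row[:]]
    for i in range(2 * n):        # plain hunt returns to rounds after 2n changes
        start = i % 2             # alternate (0,1)(2,3)... with (1,2)(3,4)...
        for j in range(start, n - 1, 2):
            row[j], row[j + 1] = row[j + 1], row[j]
        rows.append(row[:])
    return rows

for r in plain_hunt(6):
    print(*r)
```

Printed out, each bell (the treble included) "hunts" from the front of the row to the back and home again, and the course ends back in rounds.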
13.
Multiplication
–
Multiplication is one of the four elementary mathematical operations of arithmetic, the others being addition, subtraction and division. Multiplication can also be visualized as counting objects arranged in a rectangle or as finding the area of a rectangle whose sides have given lengths. The area of a rectangle does not depend on which side is measured first, which illustrates the commutative property. The product of two measurements is a new type of measurement: for instance, multiplying the lengths of the two sides of a rectangle gives its area; this is the subject of dimensional analysis. The inverse operation of multiplication is division. For example, since 4 multiplied by 3 equals 12, 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number. Multiplication is also defined for other types of numbers, such as complex numbers, and for more abstract constructs, like matrices. For these more abstract constructs, the order in which the operands are multiplied sometimes does matter; a listing of the many different kinds of products used in mathematics is given on the product page. In arithmetic, multiplication is often written using the sign × between the terms, that is, in infix notation. There are other mathematical notations for multiplication: it is also denoted by dot signs, usually a middle-position dot (5 ⋅ 2), or sometimes a period (5 . 2). The middle dot notation, encoded in Unicode as U+22C5 ⋅ DOT OPERATOR, is standard in the United States and the United Kingdom; when the dot operator character is not accessible, the interpunct (·) is used. In countries that use a comma as a decimal mark, a period or middle dot may be used for multiplication instead. In algebra, multiplication involving variables is often written as a juxtaposition; the notation can also be used for quantities that are surrounded by parentheses. In vector multiplication, there is a distinction between the cross and the dot symbols. 
The cross symbol generally denotes taking the cross product of two vectors, yielding a vector as the result, while the dot denotes taking the dot product of two vectors, resulting in a scalar. In computer programming, the asterisk (*) is still the most common notation; this is due to the fact that most computers historically were limited to small character sets that lacked a multiplication sign, while the asterisk appeared on every keyboard. This usage originated in the FORTRAN programming language. The numbers to be multiplied are generally called the factors. The number to be multiplied is called the multiplicand, while the number of times the multiplicand is to be multiplied is the multiplier. Usually the multiplier is placed first and the multiplicand second; however, sometimes the first factor is the multiplicand. Additionally, there are some sources in which the term multiplicand is regarded as a synonym for factor. In algebra, a number that is the multiplier of a variable or expression is called a coefficient. The result of a multiplication is called a product. A product of integers is a multiple of each factor; for example, 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5
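The properties described above (the asterisk notation, commutativity, division as the inverse, and the rectangle picture) can be checked in a few lines of Python; this is an illustrative sketch, not code from the source:

```python
# Multiplication is written with * in most programming languages (FORTRAN legacy).
a, b = 4, 3

product = a * b
assert product == 12
assert product / b == a        # division undoes multiplication
assert a * b == b * a          # commutative property: order of factors is irrelevant

# The rectangle picture: counting unit squares in an a-by-b grid
# gives the same result whichever side is counted first.
area = sum(1 for _ in range(a) for _ in range(b))
assert area == product
```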
14.
Empty set
–
In mathematics, and more specifically set theory, the empty set is the unique set having no elements; its size or cardinality is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set; in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set. Null set was once a synonym for empty set, but is now a technical term in measure theory. The empty set may also be called the void set. Common notations for the empty set include { }, ∅, and Ø. The latter two symbols were introduced by the Bourbaki group in 1939, inspired by the letter Ø in the Norwegian and Danish alphabets. Although now considered an improper use of notation, in the past 0 was occasionally used as a symbol for the empty set. The empty-set symbol ∅ is found at Unicode point U+2205; in LaTeX, it is coded as \emptyset for ∅ or \varnothing for ⌀. In standard axiomatic set theory, by the principle of extensionality, two sets are equal if they have the same elements; hence there is but one empty set, and we speak of the empty set rather than an empty set. The mathematical symbols employed below are explained here. In this context, zero is modelled by the empty set. For any property: for every element of ∅ the property holds (vacuously); there is no element of ∅ for which the property holds. Conversely, if for some property and some set V the two statements hold – for every element of V the property holds, and there is no element of V for which the property holds – then V = ∅. By the definition of subset, the empty set is a subset of any set A. That is, every element x of ∅ belongs to A. Indeed, since there are no elements of ∅ at all, there is no element of ∅ that is not in A. Any statement that begins "for every element of ∅" is not making any substantive claim; this is often paraphrased as "everything is true of the elements of the empty set". When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set is zero; the reason for this is that zero is the identity element for addition. 
Similarly, the product of the elements of the empty set should be considered to be one, since one is the identity element for multiplication. A derangement of a set is a permutation of the set that leaves no element in the same position. The empty set is a derangement of itself, as no element can be found that retains its original position. Since the empty set has no members, when it is considered as a subset of any ordered set, every member of that set will be an upper bound and a lower bound for the empty set. For example, when considered as a subset of the real numbers, with their usual ordering, represented by the real number line, every real number is both an upper and a lower bound for the empty set
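The conventions above – vacuous truth, the empty sum being zero, the empty product being one, and the empty set being a subset of every set – all correspond directly to Python built-in behaviour. A minimal sketch:

```python
import math

empty = []

# Vacuous truth: any property holds for every element of the empty set,
# yet no property holds for some element of it.
assert all(x > 0 for x in empty)       # "for every element ... the property holds"
assert not any(x > 0 for x in empty)   # "there is no element ... for which it holds"

# Empty sum and empty product follow the identity-element conventions.
assert sum(empty) == 0                 # 0 is the identity for addition
assert math.prod(empty) == 1           # 1 is the identity for multiplication

# The empty set is a subset of any set.
assert set().issubset({1, 2, 3})
```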
15.
Exponential function
–
In mathematics, an exponential function is a function of the form f(x) = b^x, in which the input variable x occurs as an exponent. A function of the form f(x) = b^(x+c), where c is a constant, is also considered an exponential function. As functions of a real variable, exponential functions are uniquely characterized by the fact that the growth rate of such a function is directly proportional to the value of the function. The constant of proportionality of this relationship is the natural logarithm of the base b. The argument of the exponential function can be any real or complex number, or even an entirely different kind of mathematical object. Its ubiquitous occurrence in pure and applied mathematics has led mathematician W. Rudin to opine that the exponential function is "the most important function in mathematics". In applied settings, exponential functions model a relationship in which a constant change in the independent variable gives the same proportional change in the dependent variable. The graph of y = e^x is upward-sloping, and increases faster as x increases. The graph always lies above the x-axis but can get arbitrarily close to it for negative x; thus, the x-axis is a horizontal asymptote. The slope of the tangent to the graph at each point is equal to its y-coordinate at that point, as implied by its derivative function. Its inverse function is the natural logarithm, denoted log, ln, or log_e; because of this, the exponential function is sometimes called the antilogarithm. The exponential function exp: C → C can be characterized in a variety of equivalent ways; the constant e is then defined as e = exp(1) = ∑_{k=0}^∞ 1/k!. The exponential function arises whenever a quantity grows or decays at a rate proportional to its current value. One such situation is continuously compounded interest, and in fact it was this observation that led Jacob Bernoulli in 1683 to the number lim_{n→∞} (1 + 1/n)^n, now known as e. Later, in 1697, Johann Bernoulli studied the calculus of the exponential function. 
If instead interest is compounded daily, this becomes (1 + x/365)^365. Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function, exp(x) = lim_{n→∞} (1 + x/n)^n, first given by Euler. This is one of a number of characterizations of the exponential function; from any of these definitions it can be shown that the exponential function obeys the basic exponentiation identity exp(x + y) = exp(x) ⋅ exp(y), which is why it can be written as e^x. The derivative of the exponential function is the exponential function itself. More generally, a function with a rate of change proportional to the function itself is expressible in terms of the exponential function; this property leads to exponential growth and exponential decay. The exponential function extends to a function on the complex plane. Euler's formula relates its values at purely imaginary arguments to trigonometric functions. The exponential function also has analogues for which the argument is a matrix, or even an element of a Banach algebra or a Lie algebra
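The compound-interest limit described above is easy to observe numerically. This sketch (helper name exp_approx is my own) shows (1 + x/n)^n approaching exp(x) as the number of compounding periods grows:

```python
import math

def exp_approx(x, n):
    """Compound interest with n compounding periods per year at rate x."""
    return (1 + x / n) ** n

# Yearly, monthly, daily, and near-continuous compounding of 100% interest:
for n in (1, 12, 365, 10**6):
    print(n, exp_approx(1.0, n))

# The values converge to e = exp(1).
assert abs(exp_approx(1.0, 10**6) - math.e) < 1e-5
```

With one period the factor is exactly 2.0; by a million periods it agrees with e ≈ 2.718281828 to about six digits, matching the limit definition exp(x) = lim (1 + x/n)^n.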
16.
Gamma function
–
In mathematics, the gamma function is an extension of the factorial function, with its argument shifted down by 1, to real and complex numbers. That is, if n is a positive integer, Γ(n) = (n − 1)!. The gamma function is defined for all complex numbers except the non-positive integers. The gamma function can be seen as a solution to the interpolation problem of finding a smooth curve that connects the points given by y = (x − 1)! at the positive integer values of x. The simple formula for the factorial, x! = 1 × 2 × … × x, cannot be used directly for non-integer values of x; a good solution to this is the gamma function. There are infinitely many continuous extensions of the factorial to non-integers; the gamma function is the most useful solution in practice, being analytic (except at the non-positive integers), and it can be characterized in several ways. The Bohr–Mollerup theorem proves that these properties, together with the assumption that f be logarithmically convex, uniquely determine f for positive, real inputs; from there, the gamma function can be extended to all real and complex values (except the non-positive integers) by using the unique analytic continuation of f. Also see Euler's infinite product definition below, where the properties f(1) = 1 and f(x + 1) = x f(x), together with the requirement that lim_{n→+∞} n! n^x / f(x + n) = 1, uniquely define the same function. The notation Γ is due to Legendre. If the real part of the complex number z is positive, then the integral Γ(z) = ∫_0^∞ x^(z−1) e^(−x) dx converges absolutely, and is known as the Euler integral of the second kind. The identity Γ(z + 1) = z Γ(z) can be used to extend the integral formulation for Γ to a meromorphic function defined for all complex numbers z except the non-positive integers. It is this extended version that is commonly referred to as the gamma function. When seeking to approximate z! for a complex number z, it turns out that it is effective to first compute n! for some large integer n, and then use the relation m! = m ⋅ (m − 1)! backwards n times to unwind this into an approximation for z!. Furthermore, this approximation is exact in the limit as n goes to infinity. Specifically, for a fixed integer m, it is the case that lim_{n→+∞} n! n^m / (n + m)! = 1, and we can ask that the same formula be obeyed when the arbitrary integer m is replaced by an arbitrary complex number z: lim_{n→+∞} n! n^z / (n + z)! = 1. 
Multiplying both sides by z! gives z! = lim_{n→+∞} n! n^z / ((z + 1)(z + 2) ⋯ (z + n)) = ∏_{n=1}^∞ (1 + 1/n)^z / (1 + z/n). Similarly for the gamma function, the definition as an infinite product due to Euler is valid for all complex numbers z except the non-positive integers: Γ(z) = (1/z) ∏_{n=1}^∞ (1 + 1/n)^z / (1 + z/n). By this construction, the gamma function is the unique function that simultaneously satisfies Γ(1) = 1 and Γ(z + 1) = z Γ(z) for all complex numbers z except the non-positive integers, together with the limit condition above
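Both the factorial-shift property and Euler's limit definition can be checked numerically with Python's standard library. In this sketch, gamma_limit is my own helper name for the limit formula Γ(z) ≈ n! n^z / (z(z+1)⋯(z+n)), evaluated in log space to avoid overflow:

```python
import math

# Γ(n) = (n − 1)! at positive integers.
for n in range(1, 7):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

# The functional equation Γ(z + 1) = z Γ(z) also holds at non-integer arguments.
z = 2.5
assert math.isclose(math.gamma(z + 1), z * math.gamma(z))

def gamma_limit(z, n=10**5):
    """Approximate Γ(z) via Euler's limit  n! · n**z / (z (z+1) ⋯ (z+n))."""
    log_val = math.lgamma(n + 1) + z * math.log(n)        # log(n!) + z·log(n)
    log_val -= sum(math.log(z + k) for k in range(n + 1))  # log of the denominator
    return math.exp(log_val)

# Γ(1/2) = √π; the limit formula converges to it (slowly).
assert abs(gamma_limit(0.5) - math.sqrt(math.pi)) < 1e-3
```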
17.
Calculator
–
An electronic calculator is a small, portable electronic device used to perform operations ranging from basic arithmetic to complex mathematics. The first solid-state electronic calculator was created in the 1960s, building on the history of tools such as the abacus; it was developed in parallel with the computers of the day. Pocket-sized devices became available in the 1970s, especially after the first microprocessor, and they later became commonly used within the petroleum industry. Modern electronic calculators vary from cheap, give-away, credit-card-sized models to sturdy desktop models with built-in printers. They became popular in the mid-1970s; by the end of the decade, calculator prices had dropped to a point where a basic calculator was affordable to most. In addition to general-purpose calculators, there are those designed for specific markets. For example, there are scientific calculators, which include trigonometric and statistical calculations; some calculators even have the ability to do computer algebra. Graphing calculators can be used to graph functions defined on the real line. As of 2016, basic calculators cost little, but the scientific and graphing models tend to cost more. In 1986, calculators still represented an estimated 41% of the world's general-purpose hardware capacity to compute information; by 2007, this had diminished to less than 0.05%. Modern electronic calculators contain a keyboard with buttons for digits and arithmetical operations. Most basic calculators assign only one digit or operation to each button; however, in more specialized calculators, a button can perform multiple functions through key combinations. Large-sized figures and comma separators are used to improve readability. Various symbols for function commands may also be shown on the display. Fractions such as 1⁄3 are displayed as decimal approximations, for example rounded to 0.33333333. 
Also, some fractions can be difficult to recognize in decimal form. Calculators also have the ability to store numbers into computer memory. Basic types store only one number at a time; more advanced types can store many numbers in variables. The variables can also be used for constructing formulas. Some models have the ability to extend memory capacity to store more numbers; the extended memory address is termed an array index. Power sources of calculators are batteries, solar cells or mains electricity. Some models even have no turn-off button, but they provide some other way to switch off. Crank-powered calculators were also common in the early computer era
18.
Mathematical software
–
Mathematical software is software used to model, analyze or calculate numeric, symbolic or geometric data. It is a type of software used for solving mathematical problems or for mathematical study. Since there are various views of what mathematics is, there are correspondingly various views of the category of software used for it. Mathematical software is also often built into other scientific software as a component, and the most fundamental routines are frequently built into general-purpose systems as middleware. In this sense, mathematical software is not only stand-alone application software, and that is one of its characteristics. Much mathematical software has a good user interface for educational purposes, but its core solver depends directly on algorithms derived from mathematical knowledge. It is therefore generally the case that such software cannot process a problem that has not first been worked out in its mathematical construction; this is a typical difference between mathematical software and other application software. In particular, attention must be paid to cases where a problem can be solved theoretically but is very hard to solve in practice by computer, for example because the computation cannot be completed in polynomial time; encryption software relies on this second case. Numerical analysis and symbolic computation have been the most important areas of the subject, and useful mathematical knowledge, such as algorithms that existed before the invention of electronic computers, aided the development of mathematical software. On the other hand, with the growth of computing power, the progress of mathematical information representation such as TeX or MathML will demand an evolution from formula-manipulation languages to true mathematics-manipulation languages, so the diversity of software will be maintained. 
A software calculator allows the user to perform simple mathematical operations, like addition, multiplication and exponentiation. Data input is typically manual, and the output is a text label. Many mathematical suites are computer algebra systems that use symbolic mathematics; they are designed to solve classical algebra equations and problems in human-readable notation. Many tools are available for statistical analysis of data; see also Comparison of statistical packages. TK Solver is a mathematical modeling and problem-solving software system based on a declarative, rule-based language, commercialized by Universal Technical Systems, Inc. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran. Commercial products implementing many different numerical algorithms include the IMSL, NMath and NAG libraries; a free alternative is the GNU Scientific Library
19.
Maple (software)
–
Maple is a symbolic and numeric computing environment, and is also a multi-paradigm programming language. Developed by Maplesoft, Maple also covers other aspects of technical computing, including visualization, data analysis and matrix computation. A toolbox, MapleSim, adds functionality for physical modeling. Users can enter mathematics in traditional mathematical notation; custom user interfaces can also be created. There is support for numeric computations to arbitrary precision, as well as for symbolic computation and visualization; examples of symbolic computations are given below. Maple incorporates a dynamically typed programming language which resembles Pascal. The language permits variables of lexical scope. There are also interfaces to other languages, as well as an interface to Excel. Maple supports MathML 2.0, a W3C format for representing and interpreting mathematical expressions, including their display in Web pages. Maple is based on a kernel written in C. Most functionality is provided by libraries, which come from a variety of sources; most of the libraries are written in the Maple language, and these have viewable source code. Many numerical computations are performed by the NAG Numerical Libraries and ATLAS libraries; different functionality in Maple requires numerical data in different formats. Symbolic expressions are stored in memory as directed acyclic graphs. The standard interface and calculator interface are written in Java. The first concept of Maple arose from a meeting in November 1980 at the University of Waterloo: researchers at the university wished to purchase a computer powerful enough to run Macsyma. Instead, it was decided that they would develop their own computer algebra system that would be able to run on lower-cost computers. The first limited version appeared in December 1980, and Maple was first demonstrated at conferences beginning in 1982. The name is a reference to Maple's Canadian heritage. 
By the end of 1983, over 50 universities had copies of Maple installed on their machines. In 1984, the research group arranged with Watcom Products Inc to license and distribute the first commercially available version, Maple 3.3. In 1988 Waterloo Maple Inc. was founded; the company's original goal was to manage the distribution of the software. In 1989, the first graphical user interface for Maple was developed and included with version 4.3 for the Macintosh. X11 and Windows versions of the new interface followed in 1990 with Maple V
20.
Wolfram Mathematica
–
Wolfram Mathematica is a mathematical symbolic computation program, sometimes termed a computer algebra system or program, used in many scientific, engineering, mathematical, and computing fields. It was conceived by Stephen Wolfram and is developed by Wolfram Research of Champaign, Illinois. The Wolfram Language is the programming language used in Mathematica. The kernel interprets expressions and returns result expressions. All content and formatting can be generated algorithmically or edited interactively. Standard word processing capabilities are supported, including real-time multi-lingual spell-checking. Documents can be structured using a hierarchy of cells, which allow for outlining and sectioning of a document and support automatic numbering index creation. Documents can be presented in an environment for presentations. Notebooks and their contents are represented as Mathematica expressions that can be created, modified or analyzed by Mathematica programs, or converted to other formats. The front end includes development tools such as a debugger, input completion, and automatic syntax highlighting. Among the alternative front ends is the Wolfram Workbench, an Eclipse-based integrated development environment; it provides project-based code development tools for Mathematica, including revision management, debugging, profiling, and testing. There is a plugin for IntelliJ IDEA-based IDEs to work with Wolfram Language code which, in addition to syntax highlighting, can analyse and auto-complete local variables. The Mathematica Kernel also includes a command line front end. Other interfaces include JMath, based on GNU readline, and MASH, which runs self-contained Mathematica programs from the UNIX command line. Version 5.2 added automatic multi-threading when computations are performed on multi-core computers. This release included CPU-specific optimized libraries. In addition, Mathematica is supported by third-party specialist acceleration hardware such as ClearSpeed. 
Support for CUDA and OpenCL GPU hardware was added in 2010. Also, since version 8, it can generate C code, which is automatically compiled by a system C compiler, such as GCC or Microsoft Visual Studio. A free-of-charge version, Wolfram CDF Player, is provided for running Mathematica programs that have been saved in the Computable Document Format. It can also view standard Mathematica files, but not run them, and it includes plugins for common web browsers on Windows and Macintosh. WebMathematica allows a web browser to act as a front end to a remote Mathematica server; it is designed to allow a user-written application to be remotely accessed via a browser on any platform, though it may not provide full access to Mathematica. Due to bandwidth limitations, interactive 3D graphics is not fully supported within a web browser. Wolfram Language code can be converted to C code or to an automatically generated DLL. Wolfram Language code can be run on a Wolfram cloud service as a web-app or as an API, either on Wolfram-hosted servers or in an installation of the Wolfram Enterprise Private Cloud. Communication with other applications occurs through a protocol called Wolfram Symbolic Transfer Protocol; it allows communication between the Wolfram Mathematica kernel and front-end, and also provides a general interface between the kernel and other applications
21.
Fraction (mathematics)
–
A fraction represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar, or simple fraction consists of an integer numerator displayed above a line and a non-zero integer denominator displayed below that line. Numerators and denominators are also used in fractions that are not common, including compound fractions, complex fractions, and mixed numerals. The numerator represents a number of equal parts, and the denominator indicates how many of those parts make up a whole. For example, in the fraction 3/4, the numerator, 3, tells us that the fraction represents 3 equal parts, and the denominator, 4, tells us that 4 parts make up a whole. The picture to the right illustrates 3⁄4 of a cake. Fractional numbers can also be written without using explicit numerators or denominators, by using decimals or percent signs. An integer such as the number 7 can be thought of as having an implicit denominator of one: 7 equals 7/1. Other uses for fractions are to represent ratios and to represent division; thus the fraction 3⁄4 is also used to represent the ratio 3:4 and the division 3 ÷ 4. The test for a number being a rational number is that it can be written in that form. In a fraction, the number of equal parts being described is the numerator, and the type or variety of the parts is the denominator. Informally, they may be distinguished by placement alone, but in formal contexts they are separated by a fraction bar. The fraction bar may be horizontal, oblique, or diagonal; these marks are respectively known as the horizontal bar, the slash or stroke, the division slash, and the fraction slash. In typography, horizontal fractions are known as en or nut fractions, and diagonal fractions as em fractions. The denominators of English fractions are generally expressed as ordinal numbers. When the denominator is 1, it may be expressed in terms of "wholes" but is commonly ignored. 
When the numerator is one, it may be omitted. A fraction may be expressed as a single composition, in which case it is hyphenated, or as a number of fractions with a numerator of one, in which case they are not. Fractions should always be hyphenated when used as adjectives. Alternatively, a fraction may be described by reading it out as the numerator "over" the denominator, with the denominator expressed as a cardinal number. The term "over" is used even in the case of solidus fractions. Fractions with large denominators that are not powers of ten are often rendered in this fashion, while those with denominators divisible by ten are typically read in the normal ordinal fashion. A simple fraction is a number written as a/b or a⁄b
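The numerator/denominator structure described above maps directly onto Python's standard fractions module; this is an illustrative sketch, not part of the source:

```python
from fractions import Fraction

# A common fraction: integer numerator over non-zero integer denominator.
three_quarters = Fraction(3, 4)
assert three_quarters.numerator == 3
assert three_quarters.denominator == 4

# A fraction also represents a division: 3 ÷ 4 as a decimal.
assert float(three_quarters) == 0.75

# An integer has an implicit denominator of one: 7 equals 7/1.
assert Fraction(7) == Fraction(7, 1)

# Fractions are kept in lowest terms automatically.
assert Fraction(6, 8) == three_quarters
```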
22.
Combination
–
In mathematics, a combination is a way of selecting items from a collection, such that the order of selection does not matter. In smaller cases it is possible to count the number of combinations. More formally, a k-combination of a set S is a subset of k distinct elements of S. The set of all k-combinations of a set S is sometimes written (S choose k). Combinations refer to the combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-selection, k-multiset, or k-combination with repetition are often used. If, in the example, it were possible to have two of any one kind of fruit, there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears. Although the set of three fruits was small enough to write a complete list of combinations, with large sets this becomes impractical. For example, a poker hand can be described as a 5-combination of cards from a 52-card deck: the 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1/2,598,960. The same number, however, occurs in many other mathematical contexts, where it is denoted C(n, k); notably it occurs as a coefficient in the binomial formula. One can define C(n, k) for all natural numbers k at once by the relation (1 + X)^n = ∑_{k≥0} C(n, k) X^k. Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to (1 + X)^n, one can use the recursion relation C(n, k) = C(n−1, k−1) + C(n−1, k), for 0 < k < n, which follows from (1 + X)^n = (1 + X)^(n−1) (1 + X); this leads to the construction of Pascal's triangle. For determining an individual binomial coefficient, it is more practical to use the formula C(n, k) = n(n−1)⋯(n−k+1)/k!. When k exceeds n/2, this formula contains factors common to the numerator and the denominator, and cancelling them gives the symmetry C(n, k) = C(n, n−k). 
This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n−k)-combination. Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember: C(n, k) = n!/(k!(n−k)!). It is obtained from the previous formula by multiplying denominator and numerator by (n−k)!, so it is inferior as a method of computation to that formula. The last formula can be understood directly, by considering the n! permutations of all the elements of S: each such permutation gives a k-combination by selecting its first k elements. For example, C(52, 5) = 52 × 51 × 50 × 49 × 48 × 47!/(5! × 47!) = (52 × 51 × 50 × 49 × 48)/5!. Another alternative computation, equivalent to the first, is based on writing C(n, k) = (n/1) × ((n−1)/2) × ((n−2)/3) × ⋯ × ((n−k+1)/k), which gives C(52, 5) = (52/1) × (51/2) × (50/3) × (49/4) × (48/5) = 2,598,960
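The poker-hand count and the symmetry above can be verified directly with Python's standard library; a minimal sketch:

```python
import math

# 5-card poker hands are 5-combinations of a 52-card deck.
hands = math.comb(52, 5)
assert hands == 2_598_960

# The falling product divided by 5! gives the same count:
# 52 × 51 × 50 × 49 × 48 / 5! = C(52, 5).
falling = 52 * 51 * 50 * 49 * 48
assert falling // math.factorial(5) == hands

# Symmetry C(n, k) = C(n, n−k): choosing 5 cards to keep
# is the same as choosing 47 cards to discard.
assert math.comb(52, 47) == hands

# Chance of drawing any one particular hand at random.
print(1 / hands)
```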
23.
Binomial coefficient
–
In mathematics, any of the positive integers that occurs as a coefficient in the binomial theorem is a binomial coefficient. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0 and is written C(n, k). It is the coefficient of the x^k term in the polynomial expansion of the binomial power (1 + x)^n, and the value of the coefficient is given by the expression n!/(k!(n−k)!). Arranging binomial coefficients into rows for successive values of n, in which k ranges from 0 to n, gives a triangular array called Pascal's triangle. The properties of binomial coefficients have led to extending the definition beyond the case of integers n ≥ k ≥ 0. Andreas von Ettingshausen introduced the notation in 1826, although the numbers were known centuries earlier: the earliest known detailed discussion of binomial coefficients is in a tenth-century commentary, by Halayudha, on an ancient Sanskrit text, Pingala's Chandaḥśāstra. In about 1150, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients in his book Līlāvatī. Alternative notations include C(n, k), nCk, Ckn, and Cn,k, in all of which the C stands for combinations or choices. Many calculators use variants of the C notation because they can represent it on a single-line display. In this form the binomial coefficients are easily compared to k-permutations of n, written as P(n, k), etc. For natural numbers n and k, the binomial coefficient can be defined as the coefficient of the monomial X^k in the expansion of (1 + X)^n. The same coefficient also occurs in the binomial formula, which explains the name binomial coefficient. This shows in particular that C(n, k) is a natural number for any natural numbers n and k. Most of these interpretations are easily seen to be equivalent to counting k-combinations. Several methods exist to compute the value of C(n, k) without actually expanding a binomial power or counting k-combinations. 
One method uses the recursive, purely additive formula C(n, k) = C(n−1, k−1) + C(n−1, k). It also follows from tracing the contributions to X^k in (1 + X)^(n−1) (1 + X). As there is no X^(n+1) or X^(−1) term in (1 + X)^n, one might extend the definition beyond the above boundaries to include C(n, k) = 0 when either k > n or k < 0. This recursive formula then allows the construction of Pascal's triangle, surrounded by white spaces where the zeros, or the trivial coefficients, would be. A more efficient method to compute individual binomial coefficients is given by the formula C(n, k) = n(n−1)⋯(n−k+1)/(k(k−1)⋯1) = ∏_{i=1}^{k} (n+1−i)/i, in which the numerator is a falling factorial. This formula is easiest to understand from the combinatorial interpretation of binomial coefficients: the numerator gives the number of ways to select a sequence of k distinct objects, retaining the order of selection, while the denominator counts the number of distinct sequences that define the same k-combination when order is disregarded. Due to the symmetry of the binomial coefficient with regard to k and n−k, calculation may be optimised by setting the upper limit of the product above to the smaller of k and n−k. Finally, there is the factorial formula C(n, k) = n!/(k!(n−k)!), which follows from the formula above by multiplying numerator and denominator by (n−k)!. As a consequence it involves many factors common to numerator and denominator, and it is less practical for explicit computation unless common factors are first cancelled
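Both computation methods above can be sketched in a few lines of Python (the helper names pascal_triangle and binom_product are my own, for illustration): the additive recursion builds Pascal's triangle row by row, while the multiplicative formula computes a single coefficient with the symmetry optimisation.

```python
import math

def pascal_triangle(rows):
    """Build Pascal's triangle with the recursion C(n,k) = C(n-1,k-1) + C(n-1,k)."""
    triangle = [[1]]
    for n in range(1, rows):
        prev = triangle[-1]
        # Out-of-range coefficients count as zero, so each row starts/ends with 1.
        row = [1] + [prev[k - 1] + prev[k] for k in range(1, n)] + [1]
        triangle.append(row)
    return triangle

def binom_product(n, k):
    """Multiplicative formula: product of (n+1-i)/i for i = 1..min(k, n-k)."""
    k = min(k, n - k)                        # symmetry C(n, k) = C(n, n-k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n + 1 - i) // i   # exact: each step yields C(n, i)
    return result

# Both methods agree with math.comb on every entry.
for n, row in enumerate(pascal_triangle(8)):
    assert row == [math.comb(n, k) for k in range(n + 1)]
assert binom_product(52, 5) == 2_598_960
```

The integer division in binom_product is exact at every step because the running value after step i is itself the binomial coefficient C(n, i).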
24.
Binomial formula
–
In elementary algebra, the binomial theorem describes the algebraic expansion of powers of a binomial. For example, (x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4; the coefficient a in the term a x^b y^c is known as the binomial coefficient. These coefficients for varying n and b can be arranged to form Pascal's triangle, and these numbers also arise in combinatorics, where the binomial coefficient gives the number of different combinations of b elements that can be chosen from an n-element set. Special cases of the theorem were known from ancient times. The Greek mathematician Euclid mentioned the case of the binomial theorem for exponent 2. There is evidence that the theorem for cubes was known by the 6th century in India. Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting k objects out of n without replacement, were of interest to the ancient Hindus. The earliest known reference to this combinatorial problem is the Chandaḥśāstra by the Hindu lyricist Pingala. The commentator Halayudha, from the 10th century A.D., explains this method using what is now known as Pascal's triangle. By the 6th century A.D. the Hindu mathematicians probably knew how to express this as the quotient n!/((n − k)!k!). The binomial theorem as such can be found in the work of the 11th-century Persian mathematician Al-Karaji, who described the triangular pattern of the binomial coefficients. He also provided a proof of both the binomial theorem and Pascal's triangle, using a primitive form of mathematical induction. The Persian poet and mathematician Omar Khayyam was probably familiar with the formula to higher orders; binomial expansions of small degrees were known in the 13th-century mathematical works of Yang Hui and also Chu Shih-Chieh. Yang Hui attributes the method to a much earlier 11th-century text of Jia Xian. In 1544, Michael Stifel introduced the term binomial coefficient and showed how to use such coefficients to express (1 + a)^n in terms of (1 + a)^(n − 1), via Pascal's triangle.
Blaise Pascal studied the eponymous triangle comprehensively in the treatise Traité du triangle arithmétique; however, the pattern of numbers was already known to the European mathematicians of the late Renaissance, including Stifel, Niccolò Fontana Tartaglia, and Simon Stevin. Isaac Newton is generally credited with the generalized binomial theorem, valid for any rational exponent. The result is also referred to as the binomial formula or the binomial identity. Using summation notation, it can be written as (x + y)^n = ∑_{k=0}^{n} C(n, k) x^(n − k) y^k = ∑_{k=0}^{n} C(n, k) x^k y^(n − k). A simple variant of the formula is obtained by substituting 1 for y.
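As a quick numeric sanity check of the theorem, a sketch using Python's math.comb for the coefficients:

```python
from math import comb

n, x, y = 4, 2, 3

# row n = 4 of Pascal's triangle: the coefficients 1, 4, 6, 4, 1
coeffs = [comb(n, k) for k in range(n + 1)]

# (x + y)^n computed term by term from the binomial formula
expanded = sum(comb(n, k) * x ** (n - k) * y ** k for k in range(n + 1))
assert expanded == (x + y) ** n   # 5^4 = 625 either way
```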
25.
Average
–
In colloquial language, an average is the sum of a list of numbers divided by the number of numbers in the list. In mathematics and statistics, this would be called the arithmetic mean. In statistics, mean, median, and mode are all known as measures of central tendency. The most common type of average is the arithmetic mean: for the list 2 and 8, one finds that A = (2 + 8)/2 = 5. Switching the order of 2 and 8 to read 8 and 2 does not change the value obtained for A. The mean 5 is not less than the minimum 2 nor greater than the maximum 8. If we increase the number of terms in the list to 2, 8, and 11, one finds that A = (2 + 8 + 11)/3 = 7. Along with the arithmetic mean above, the geometric mean and the harmonic mean are known collectively as the Pythagorean means. The geometric mean of n numbers is obtained by multiplying them all together and taking the nth root. See Inequality of arithmetic and geometric means; thus for the above harmonic mean example, AM = 50, GM ≈ 49, and HM = 48 km/h. The mode, the median, and the mid-range are often used in addition to the mean as estimates of central tendency in descriptive statistics. The most frequently occurring number in a list is called the mode; for example, if 3 occurs in a list more often than any other number, then 3 is the mode of that list. It may happen that there are two or more numbers which occur equally often and more often than any other number. In this case there is no agreed definition of mode: some authors say they are all modes and some say there is no mode. The median is the middle number of the group when the numbers are ranked in order. Thus to find the median, order the list according to its elements' magnitude and repeatedly remove the pair consisting of the current largest and smallest values; if exactly one value is left, it is the median; if two values remain, the median is the arithmetic mean of these two. This method takes the list 1, 7, 3, 13, orders it, and then the 1 and 13 are removed to obtain the list 3, 7. Since there are two elements in this remaining list, the median is their arithmetic mean, (3 + 7)/2 = 5. The table of mathematical symbols explains the symbols used below.
Other more sophisticated averages are the trimean, the trimedian, and the normalized mean. One can create one's own average metric using the generalized f-mean, y = f^(−1)((f(x1) + f(x2) + ⋯ + f(xn))/n), where f is any invertible function. The harmonic mean is an example of this using f(x) = 1/x. However, this method for generating means is not general enough to capture all averages.
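The generalized f-mean is easy to sketch. The speeds 40 and 60 km/h below are assumed example values, chosen because they reproduce the AM = 50 and HM = 48 figures quoted above:

```python
def f_mean(values, f, f_inv):
    """Generalized f-mean: y = f_inv((f(x1) + ... + f(xn)) / n)."""
    return f_inv(sum(f(x) for x in values) / len(values))

speeds = [40, 60]  # assumed example values (km/h)
harmonic = f_mean(speeds, lambda x: 1 / x, lambda y: 1 / y)   # ~48
arithmetic = f_mean(speeds, lambda x: x, lambda y: y)         # ~50
```

With f the identity, the f-mean reduces to the arithmetic mean; with f(x) = 1/x it is the harmonic mean.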
26.
Calculus
–
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two branches, differential calculus and integral calculus; these two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the notions of convergence of infinite sequences and infinite series to a well-defined limit. Generally, modern calculus is considered to have been developed in the 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science, engineering and economics. Calculus is a part of modern mathematics education; a course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits. Calculus has historically been called the calculus of infinitesimals, or infinitesimal calculus. The word calculus is also used for naming some methods of calculation or theories of computation, such as propositional calculus, the calculus of variations, and the lambda calculus. The ancient period introduced some of the ideas that led to integral calculus; the method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere. Indian mathematicians gave a non-rigorous method of a sort of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers and used the results to carry out what would now be called an integration. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory. In other work, Newton developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable. These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz. He is now regarded as an independent inventor of and contributor to calculus. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts. Leibniz and Newton are usually both credited with the invention of calculus. Newton was the first to apply calculus to general physics, and Leibniz developed much of the notation used in calculus today.
27.
Taylor's theorem
–
In calculus, Taylor's theorem gives an approximation of a k-times differentiable function around a given point by a k-th order Taylor polynomial. For analytic functions, the Taylor polynomials at a given point are finite-order truncations of the Taylor series. The exact content of Taylor's theorem is not universally agreed upon; indeed, there are several versions of it applicable in different situations, and some of them contain explicit estimates on the approximation error of the function by its Taylor polynomial. Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1712, yet an explicit expression of the error was not provided until much later on by Joseph-Louis Lagrange. An earlier version of the result was already mentioned in 1671 by James Gregory. Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. Within pure mathematics it is the starting point of more advanced asymptotic analysis. Taylor's theorem also generalizes to multivariate and vector-valued functions f : R^n → R^m on any dimensions n and m, and this generalization of Taylor's theorem is the basis for the definition of so-called jets, which appear in differential geometry and partial differential equations. If a real-valued function f is differentiable at the point a, then it has a linear approximation at the point a. This means that there exists a function h1 such that f(x) = f(a) + f′(a)(x − a) + h1(x)(x − a), with lim_{x→a} h1(x) = 0. Here P1(x) = f(a) + f′(a)(x − a) is the linear approximation of f at the point a. The graph of y = P1(x) is the tangent line to the graph of f at x = a. The error in the approximation is R1(x) = f(x) − P1(x) = h1(x)(x − a). Note that this goes to zero a little bit faster than x − a as x tends to a. If we wanted a better approximation to f, we might instead try a quadratic polynomial instead of a linear function.
Instead of just matching one derivative of f at a, we can match two derivatives, thus producing a polynomial that has the same slope and concavity as f at a. The quadratic polynomial in question is P2(x) = f(a) + f′(a)(x − a) + (f″(a)/2)(x − a)^2. Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of the point a, a better approximation than the linear approximation. Specifically, f(x) = P2(x) + h2(x)(x − a)^2, with lim_{x→a} h2(x) = 0. Here the error in the approximation is R2(x) = f(x) − P2(x) = h2(x)(x − a)^2, which, given the limiting behavior of h2, goes to zero faster than (x − a)^2 as x tends to a. Similarly, we might get still better approximations to f if we use polynomials of higher degree. In general, the error in approximating a function by a polynomial of degree k will go to zero a little bit faster than (x − a)^k as x tends to a. A natural problem is then to find the smallest degree k for which the polynomial Pk approximates f to within a given error on a given interval.
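A small numeric sketch (assuming f = exp around a = 0, so that f, f′ and f″ all equal exp) shows the quadratic Taylor polynomial beating the linear one near a:

```python
import math

a, x = 0.0, 0.1
f = math.exp                          # f(a) = f'(a) = f''(a) = 1 at a = 0

p1 = f(a) + f(a) * (x - a)            # P1(x), the tangent line
p2 = p1 + (f(a) / 2) * (x - a) ** 2   # P2(x), adds the concavity term

r1 = abs(f(x) - p1)   # error R1 of the linear approximation
r2 = abs(f(x) - p2)   # error R2 of the quadratic approximation
# r2 is markedly smaller than r1 at this distance from a
```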
28.
Derivative
–
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value; for this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables, and it can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation; the reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation is the action of computing a derivative. The derivative of a function y = f(x) of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x. It is called the derivative of f with respect to x. If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point.
The simplest case, apart from the trivial case of a constant function, is when y is a linear function of x, so that y = f(x) = mx + b for real numbers m and b, and the slope m is given by m = Δy/Δx. This formula is true because y + Δy = f(x + Δx) = m(x + Δx) + b = mx + mΔx + b = y + mΔx. Thus, since y + Δy = y + mΔx, it follows that Δy = mΔx, so Δy/Δx = m, and this gives an exact value for the slope of the line. If the function f is not linear, however, then the change in y divided by the change in x varies; differentiation is a method to find an exact value for this rate of change at any given value of x. The idea, illustrated by Figures 1 to 3, is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small.
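The ratio Δy/Δx can be sketched directly; the functions and sample points below are arbitrary illustrations:

```python
def difference_quotient(f, x, dx):
    """Average rate of change of f over [x, x + dx]: dy / dx."""
    return (f(x + dx) - f(x)) / dx

# for a linear function f(x) = m*x + b the quotient is exactly m, for any dx
m, b = 3.0, 1.0
slope = difference_quotient(lambda t: m * t + b, 2.0, 0.5)   # 3.0

# for a nonlinear function the quotient varies, but tends to the
# derivative as dx shrinks: here toward 2, the slope of x^2 at x = 1
approx = [difference_quotient(lambda t: t * t, 1.0, dx)
          for dx in (0.1, 0.01, 0.001)]
```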
29.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. It is not possible to predict precisely the results of random events; however, the aggregate behavior of many repeated random events exhibits patterns that can be studied and predicted. Two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state; a great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. Christiaan Huygens published a book on the subject in 1657, and in the 19th century Pierre Laplace completed what is today considered the classic interpretation. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory, and this culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the more mathematically advanced, measure theory-based treatment of probability covers the discrete, the continuous, and any mixture of these cases. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results; one collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls.
In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results be assigned a value of one. For a fair die, the probability that any one of the events {1, 6}, {3}, or {2, 4} will occur is 5/6. This is the same as saying that the probability of the event {1, 2, 3, 4, 6} is 5/6, and this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, that is, absolute certainty. Discrete probability theory deals with events that occur in countable sample spaces. Modern definition: the modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω.
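The die example can be sketched with exact fractions:

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}   # one roll of an honest die

def probability(event):
    """Classical probability of an event (a subset of the sample space)."""
    return Fraction(len(event & sample_space), len(sample_space))

odd = {1, 3, 5}                    # the die falls on some odd number
not_five = sample_space - {5}      # any number except five being rolled
```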
30.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single indeterminate x is x^2 − 4x + 7; an example in three variables is x^3 + 2xyz^2 − yz + 1. Polynomials appear in a wide variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra and algebraic geometry. The word polynomial joins two diverse roots: the Greek poly, meaning many, and the Latin nomen, or name. It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. The word polynomial was first used in the 17th century. The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value; it is thus more correct to call it an indeterminate. However, when one considers the function defined by the polynomial, then x represents the argument of the function; many authors use these two words interchangeably. It is a common convention to use uppercase letters for the indeterminates. A polynomial may be evaluated over any domain where addition and multiplication are defined; in particular, when a is the indeterminate x, then the image of x under this evaluation is the polynomial P itself. This equality allows writing "let P(x) be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x". A polynomial is an expression that can be built from constants and indeterminates; the word indeterminate means that x represents no particular value, although any value may be substituted for it. The mapping that associates the result of substitution to the substituted value is a function. A polynomial can be expressed concisely by using summation notation: ∑_{k=0}^{n} a_k x^k.
Each term consists of the product of a number, called the coefficient of the term, and a finite number of indeterminates raised to non-negative integer powers. Because x = x^1, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial. The degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial, 0, is generally treated as not defined.
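Evaluating the summation-notation form is direct; the polynomial below is the single-indeterminate example from the text:

```python
def eval_poly(coeffs, x):
    """Evaluate sum_{k=0}^{n} a_k * x**k, with coeffs[k] = a_k."""
    return sum(a * x ** k for k, a in enumerate(coeffs))

# P(x) = x^2 - 4x + 7, stored lowest degree first: a0 = 7, a1 = -4, a2 = 1
p = [7, -4, 1]
value_at_2 = eval_poly(p, 2)    # 4 - 8 + 7 = 3
```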
31.
Exponential growth
–
Exponential growth occurs when the rate of change of a quantity is proportional to the quantity's current value; exponential decay occurs in the same way when the growth rate is negative. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay. In either exponential growth or exponential decay, the ratio of the rate of change of the quantity to its current size remains constant over time. The formula for exponential growth of a variable x at the growth rate r, as time t goes on in discrete intervals, is x_t = x_0(1 + r)^t, where x_0 is the value of x at time 0. This formula is transparent when the exponents are converted to multiplication: with a growth rate of five percent, for instance, each increase in the exponent by a full interval can be seen to increase the previous total by another five percent. Since the time variable, which is the input to the function, occurs as the exponent, this is an exponential function of time. Biology: the number of microorganisms in a culture will increase exponentially until an essential nutrient is exhausted; typically the first organism splits into two daughter organisms, which then each split to form four, which split to form eight, and so on. Because exponential growth indicates constant growth rate, it is often assumed that exponentially growing cells are at a steady state. However, cells can grow exponentially at a constant rate while remodelling their metabolism. A virus typically will spread exponentially at first, if no artificial immunization is available; each infected person can infect multiple new people. Human population would grow in this way if the number of births and deaths per person per year were to remain at current levels; this means that the doubling time of the American population is approximately 50 years. Physics: in avalanche breakdown within a dielectric material, a free electron becomes sufficiently accelerated by an externally applied electrical field that it frees up additional electrons as it collides with atoms or molecules of the dielectric media. These secondary electrons also are accelerated, creating larger numbers of free electrons; the resulting exponential growth of electrons and ions may rapidly lead to complete dielectric breakdown of the material.
Each uranium nucleus that undergoes fission in a nuclear chain reaction produces multiple neutrons, each of which can be absorbed by adjacent uranium atoms, causing them to fission in turn. Due to the exponential rate of increase, at any point in the chain reaction 99% of the energy will have been released in the last 4.6 generations. It is an approximation to think of the first 53 generations as a latency period leading up to the actual explosion. Economics: economic growth is expressed in percentage terms, implying exponential growth. For example, U.S. GDP per capita has grown at a rate of approximately two percent since World War II. Finance: compound interest at a constant interest rate provides exponential growth of the capital; pyramid schemes or Ponzi schemes also show this type of growth, resulting in high profits for a few initial investors and losses among great numbers of investors.
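The discrete growth formula and a doubling-time estimate can be sketched as follows; the 5% rate matches the example in the text, while the initial value 100 is an arbitrary assumption:

```python
import math

def grow(x0, r, t):
    """Discrete exponential growth: x_t = x0 * (1 + r)**t."""
    return x0 * (1 + r) ** t

one_step = grow(100.0, 0.05, 1)     # one full interval adds five percent
two_steps = grow(100.0, 0.05, 2)    # compounding: ~110.25

# number of intervals needed to double at rate r
doubling = math.log(2) / math.log(1 + 0.05)   # about 14.2 intervals at 5%
```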
32.
Double exponential function
–
A double exponential function is a constant raised to the power of an exponential function. The general formula is f(x) = a^(b^x). For example, if a = b = 10, then f(0) = 10, f(1) = 10^10, f(2) = 10^100 = googol, f(3) = 10^1000, and f(100) = 10^(10^100) = googolplex. Factorials grow more quickly than exponential functions, but much more slowly than doubly exponential functions; tetration and the Ackermann function grow even faster. See Big O notation for a comparison of the rate of growth of various functions. The inverse of the double exponential function is the double logarithm ln(ln(x)). Aho and Sloane observed that in several important integer sequences, each term is a constant plus the square of the previous term. They show that such sequences can be formed by rounding to the nearest integer the values of a doubly exponential function in which the middle exponent is two. Integer sequences with this behavior include the Fermat numbers F(m) = 2^(2^m) + 1; the harmonic primes, the primes p at which the sum 1/2 + 1/3 + 1/5 + 1/7 + ⋯ + 1/p first exceeds 0, 1, 2, 3, …, whose first few members, starting with 0, are 2, 5, 277, 5195977; the double Mersenne numbers MM(p) = 2^(2^p − 1) − 1; and the elements of Sylvester's sequence s(n) = ⌊E^(2^(n+1)) + 1/2⌋, where E ≈ 1.264084735305302 is Vardi's constant. Additional sequences of this type include the prime numbers 2, 11, 1361, …, given by a(n) = ⌊A^(3^n)⌋, where A ≈ 1.306377883863 is Mills' constant. In the worst case, a Gröbner basis may have a number of elements which is exponential in the number of variables; on the other hand, the worst-case complexity of Gröbner basis algorithms is doubly exponential in the number of variables as well as in the entry size. Other doubly exponential problems include finding a complete set of associative-commutative unifiers and satisfying CTL+; quantifier elimination on real closed fields also takes doubly exponential time. Double exponential functions also appear as test values in some algorithms: an example is Chan's algorithm for computing convex hulls, which performs a sequence of computations using test values h_i = 2^(2^i); thus, the overall time for the algorithm is O(n log h), where h is the actual output size.
Some number-theoretic bounds are double exponential: odd perfect numbers with n distinct prime factors are known to be at most 2^(4^n), a result of Nielsen, and the maximal volume of a d-dimensional polytope with k ≥ 1 interior lattice points is bounded by an expression doubly exponential in d, at most 15^(d·2^(2d+1)) up to a factor depending on k and d, a result of Pikhurko. The largest known prime number in the electronic era has grown roughly as a double exponential function of the year since Miller and Wheeler found a 79-digit prime on EDSAC1 in 1951. In population dynamics the growth of human population is sometimes supposed to be double exponential.
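A sketch of f(x) = a^(b^x) with a = b = 10, together with the Fermat numbers as a doubly exponential integer sequence:

```python
def double_exp(a, b, x):
    """A constant a raised to the power of an exponential function b**x."""
    return a ** (b ** x)

values = [double_exp(10, 10, x) for x in (0, 1, 2)]
# 10, 10**10, and 10**100 (a googol); Python integers are unbounded

fermat = [2 ** (2 ** m) + 1 for m in range(5)]   # F(m) = 2^(2^m) + 1
```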
33.
Natural logarithm
–
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is generally written as ln x, loge x, or sometimes, if the base e is implicit, simply log x. Parentheses are sometimes added for clarity, giving ln(x), loge(x), or log(x); this is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity. The natural logarithm of x is the power to which e would have to be raised to equal x. The natural log of e itself, ln(e), is 1, because e^1 = e, while the natural logarithm of 1, ln(1), is 0, since e^0 = 1. The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/x from 1 to a. The simplicity of this definition, which is matched in many other formulas involving the natural logarithm, leads to the term "natural". Like all logarithms, the natural logarithm maps multiplication into addition: ln(xy) = ln(x) + ln(y). Logarithms in other bases differ only by a constant multiplier from the natural logarithm; for instance, the binary logarithm is the natural logarithm divided by ln(2), the natural logarithm of 2. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity; for example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and the sciences and are used in finance to solve problems involving compound interest. By the Lindemann–Weierstrass theorem, the natural logarithm of any positive algebraic number other than 1 is a transcendental number. The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa; their work involved the quadrature of the hyperbola xy = 1 by determination of the area of hyperbolic sectors.
Their solution generated the requisite hyperbolic logarithm function, having properties now associated with the natural logarithm. The notations ln x and loge x both refer unambiguously to the natural logarithm of x; log x without an explicit base may also refer to the natural logarithm. This usage is common in mathematics and some scientific contexts as well as in many programming languages. In some other contexts, however, log x can be used to denote the common logarithm. Historically, the notations l. and l were in use at least since the 1730s; finally, in the twentieth century, the notations Log and logh are attested. The graph of the logarithm function enables one to glean some of the basic characteristics that logarithms to any base have in common. Chief among them is that the logarithm of one is zero. What makes natural logarithms unique is to be found at that point, x = 1, where all logarithms are zero: at that specific point the slope of the curve of the graph of the natural logarithm is also precisely one.
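The area definition can be approximated numerically. This midpoint-rule sketch is for illustration only (math.log is the practical tool):

```python
import math

def ln_via_area(a, n=100_000):
    """Approximate ln(a), for a > 0, as the signed area under y = 1/x
    from 1 to a, using the midpoint rule with n slabs."""
    h = (a - 1) / n
    return h * sum(1 / (1 + (i + 0.5) * h) for i in range(n))

ln2 = ln_via_area(2.0)                      # close to math.log(2)
# product-to-sum property: ln(6) = ln(2) + ln(3)
residual = ln_via_area(6.0) - (ln_via_area(2.0) + ln_via_area(3.0))
```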
34.
Linear function
–
In linear algebra and functional analysis, a linear function is a linear map. In calculus, analytic geometry, and related areas, a linear function is a polynomial of degree one or less. When the function is of one variable, it is of the form f(x) = ax + b, where a and b are constants. The graph of such a function of one variable is a nonvertical line; a is frequently referred to as the slope of the line, and b as the intercept. For a function f of any finite number of independent variables, the general formula is f(x1, …, xk) = b + a1x1 + … + akxk. A constant function is also considered linear in this context, as it is a polynomial of degree zero or is the zero polynomial; its graph, when there is only one independent variable, is a horizontal line. In this context, the other meaning, a linear map, may be referred to as a homogeneous linear function or a linear form. In the context of linear algebra, the polynomial meaning is a special kind of affine map. In linear algebra, a linear function is a map f between two vector spaces that preserves vector addition and scalar multiplication: f(x + y) = f(x) + f(y) and f(ax) = a f(x). Here a denotes a constant belonging to some field K of scalars, and x and y are elements of a vector space. Some authors use the term linear function only for linear maps that take values in the scalar field; these are also called linear functionals. The linear functions of calculus qualify as linear maps when b = 0, or, equivalently, when f(0) = 0; geometrically, the graph of the function must pass through the origin.
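The distinction between the two meanings can be sketched numerically. The sample points below are arbitrary dyadic values, chosen so that the floating-point comparisons are exact:

```python
def is_linear_map(f, samples):
    """Check f(x + y) == f(x) + f(y) and f(a*x) == a*f(x) on sample points."""
    additive = all(f(x + y) == f(x) + f(y) for x in samples for y in samples)
    homogeneous = all(f(a * x) == a * f(x) for a in samples for x in samples)
    return additive and homogeneous

samples = [0.0, 1.0, -2.0, 3.5]
line_through_origin = lambda x: 3.0 * x        # b = 0: also a linear map
affine_line = lambda x: 3.0 * x + 1.0          # b != 0: not a linear map
```

Checking finitely many sample points cannot prove linearity in general; the sketch only illustrates how the affine term b breaks additivity (f(0 + 0) = 1 while f(0) + f(0) = 2).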
35.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. Given a function f of a real variable x and an interval [a, b] of the real line, the definite integral is informally the signed area of the region bounded by the graph of f, the x-axis, and the vertical lines x = a and x = b: the area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative, a function F whose derivative is the given function f. In this case, it is called an indefinite integral and is written F(x) = ∫ f(x) dx. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. Systematic computation of areas goes back to the ancient method of exhaustion; this method was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui. This method was later used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng. The next significant advances in integral calculus did not begin to appear until the 17th century; further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
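The thin-slab idea behind the Riemann integral can be sketched as a midpoint-rule sum (an illustration, not a production integrator):

```python
def riemann_sum(f, a, b, n):
    """Approximate the definite integral of f over [a, b] as a sum of
    n thin vertical slabs, sampling f at each slab's midpoint."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

area = riemann_sum(lambda x: x * x, 0.0, 1.0, 10_000)   # close to 1/3
# signed area: the part below the x-axis subtracts from the total
signed = riemann_sum(lambda x: x, -1.0, 1.0, 10_000)    # close to 0
```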
36.
Big O notation
–
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates. Big O notation is also used in many other fields to provide similar estimates. Let f and g be two functions defined on some subset of the real numbers. One writes f(x) = O(g(x)) as x → ∞ if and only if there exist a real number M and a real number x0 such that |f(x)| ≤ M|g(x)| for all x ≥ x0. In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated. If f is a product of several factors, any constants (factors that do not depend on x) can be omitted. For example, let f(x) = 6x^4 − 2x^3 + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x^4, −2x^3, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x^4. Now one may apply the rule: 6x^4 is a product of 6 and x^4, in which the first factor does not depend on x. Omitting this factor results in the simplified form x^4. Thus, we say that f(x) is a "big O" of x^4; mathematically, we can write f(x) = O(x^4). One may confirm this calculation using the formal definition: let f(x) = 6x^4 − 2x^3 + 5 and g(x) = x^4.
Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion, |f(x)| ≤ M |x⁴| for all x ≥ x0, for some choice of x0 and M. To prove this, let x0 = 1 and M = 13; then, for all x ≥ 1, |6x⁴ − 2x³ + 5| ≤ 6x⁴ + 2x³ + 5 ≤ 6x⁴ + 2x⁴ + 5x⁴ = 13x⁴ = 13 |x⁴|. Big O notation has two main areas of application. In mathematics, it is used to describe how closely a finite series approximates a given function. In computer science, it is useful in the analysis of algorithms. In both applications, the function g appearing within the O(…) is typically chosen to be as simple as possible, omitting constant factors and lower order terms
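The witnesses x0 = 1 and M = 13 from the proof above can be spot-checked numerically. This sketch (not from the original article) samples the inequality over a finite range rather than proving it:

```python
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

M, x0 = 13, 1  # the constants chosen in the proof above

# |f(x)| <= M * |x**4| should hold for every x >= x0;
# collect any sampled x where the bound fails.
violations = [x for x in range(x0, 1001) if abs(f(x)) > M * abs(x**4)]
print(violations)  # an empty list means the bound held on the sample
```

An exhaustive check is impossible by sampling, but the algebraic argument in the text shows the bound holds for all real x ≥ 1.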
37.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication or the number of gates in a circuit. One of the roles of computational complexity theory is to determine the limits on what computers can and cannot do. Closely related fields in computer science are analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, so that, as in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. One tries to keep the discussion independent of the choice of encoding, and this can be achieved by ensuring that different representations can be transformed into each other efficiently. 
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly those instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise, it is said to reject the input. An example of a decision problem is the following
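As an illustrative sketch (the function names here are my own, not from the article), the primality-testing problem mentioned above can be phrased as deciding membership in the language of strings that encode prime numbers:

```python
def is_prime(n: int) -> bool:
    # Trial division: correct, though far from the fastest primality test
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def accepts(instance: str) -> bool:
    # Decide membership: accept iff the decimal string encodes a prime.
    return is_prime(int(instance))

print(accepts("13"), accepts("15"))
```

The algorithm accepts the instance "13" and rejects "15"; the decision problem itself is the infinite collection of all such instances with their yes/no answers.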
38.
Sorting algorithm
–
A sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. More formally, the output must satisfy two conditions: the output is in nondecreasing order, and the output is a permutation of the input. Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956. Comparison sorting algorithms have a fundamental requirement of Ω(n log n) comparisons; algorithms not based on comparisons, such as counting sort, can have better performance. Sorting algorithms are classified by: computational complexity in terms of the size of the list (n); for typical serial sorting algorithms good behavior is O(n log n), with parallel sort in O(log² n); ideal behavior for a serial sort is O(n), but this is not possible in the average case; optimal parallel sorting is O(log n); comparison-based sorting algorithms need at least Ω(n log n) comparisons for most inputs. Memory usage: in particular, some sorting algorithms are in-place; strictly, an in-place sort needs only O(1) memory beyond the items being sorted. Recursion: some algorithms are either recursive or non-recursive, while others may be both. Stability: stable sorting algorithms maintain the relative order of records with equal keys. Whether or not they are a comparison sort: a comparison sort examines the data only by comparing two elements with a comparison operator. General method: insertion, exchange, selection, merging, etc.; exchange sorts include bubble sort and quicksort; selection sorts include shaker sort and heapsort. Also whether the algorithm is serial or parallel; the remainder of this discussion almost exclusively concentrates upon serial algorithms. Adaptability: whether or not the presortedness of the input affects the running time; algorithms that take this into account are known to be adaptive. 
When sorting some kinds of data, only part of the data is examined when determining the sort order. For example, in the card sorting example to the right, the cards are being sorted by their rank, and their suit is being ignored. This allows the possibility of multiple different correctly sorted versions of the original list. More formally, the data being sorted can be represented as a record or tuple of values, and the part of the data that is used for sorting is called the key. In the card example, cards are represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable if, whenever there are two records R and S with the same key and R appears before S in the original list, R will always appear before S in the sorted list
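A short sketch of the definition (the card data is invented for illustration): sorting (rank, suit) records by rank with a stable algorithm keeps equal-rank cards in their original relative order. Python's built-in sort is documented to be stable.

```python
# (rank, suit) records; we sort by rank only, so among equal ranks
# the original suit order must be preserved by a stable sort.
cards = [(5, "hearts"), (3, "spades"), (5, "clubs"), (3, "diamonds")]

by_rank = sorted(cards, key=lambda card: card[0])
print(by_rank)
# Among the 3s, "spades" still precedes "diamonds"; among the 5s,
# "hearts" still precedes "clubs", exactly as stability requires.
```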
39.
Comparison sort
–
The only requirement is that the operator obey two of the properties of a total order: if a ≤ b and b ≤ c then a ≤ c (transitivity), and for all a and b, either a ≤ b or b ≤ a (totality). It is possible that both a ≤ b and b ≤ a; in this case either may come first in the sorted list. In a stable sort, the input order determines the sorted order in this case. A metaphor for thinking about comparison sorts is that someone has a set of unlabelled weights and a balance scale; their goal is to line up the weights in order by weight without any information except that obtained by placing two weights on the scale and seeing which one is heavier. A comparison sort must have a lower bound of Ω(n log n) comparison operations. This is a consequence of the limited information available through comparisons alone. In this sense, mergesort, heapsort, and introsort are asymptotically optimal in terms of the number of comparisons they must perform. Non-comparison sorts can achieve O(n) performance by using operations other than comparisons, allowing them to sidestep this lower bound. Note that comparison sorts may run faster on some lists; many adaptive sorts such as insertion sort run in O(n) time on an already-sorted or nearly-sorted list. The Ω(n log n) lower bound applies only to the case in which the input list can be in any possible order. Comparison sorts generally adapt more easily to complex orders such as the order of floating-point numbers. Additionally, once a comparison function is written, any comparison sort can be used without modification; non-comparison sorts typically require specialized versions for each datatype. This flexibility, together with the efficiency of the above comparison sorting algorithms on modern computers, has led to widespread preference for comparison sorts in most practical work. Some sorting problems admit a strictly faster solution than the Ω(n log n) bound for comparison sorting; an example is integer sorting. When the keys form a small range, counting sort is an example algorithm that runs in linear time. 
Other integer sorting algorithms, such as radix sort, are not asymptotically faster than comparison sorting. The problem of sorting pairs of numbers by their sum is not subject to the Ω(n² log n) bound either; the best known algorithm still takes O(n² log n) time, but only O(n²) comparisons. The number of comparisons that a comparison sort algorithm requires increases in proportion to n log n, where n is the number of elements in the list. Given a list of distinct numbers, there are n factorial (n!) permutations, exactly one of which is the list in sorted order. The sort algorithm must gain enough information from the comparisons to identify the correct permutation. If the algorithm always completes after at most f(n) steps, it cannot distinguish more than 2^f(n) cases, because the keys are distinct and each comparison has only two possible outcomes. Therefore, 2^f(n) ≥ n!, or equivalently f(n) ≥ log₂(n!). From Stirling's approximation we know that log₂(n!) = n log₂ n − n log₂ e + O(log n) = Ω(n log n)
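The counting argument above can be evaluated directly. This sketch (my own, not from the article) computes the information-theoretic minimum f(n) ≥ ⌈log₂ n!⌉ for a few list sizes:

```python
import math

def min_comparisons(n: int) -> int:
    # Any comparison sort must distinguish all n! permutations; f comparisons
    # yield at most 2**f distinct outcomes, so f >= ceil(log2(n!)).
    return math.ceil(math.log2(math.factorial(n)))

for n in (4, 8, 16):
    print(n, min_comparisons(n))
```

The values grow in proportion to n log₂ n, matching the Ω(n log n) bound stated above.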
40.
Stirling's approximation
–
In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good-quality approximation, leading to accurate results even for small values of n, and it is named after James Stirling, though it was first stated by Abraham de Moivre. The formula as used in applications is ln n! = n ln n − n + O(ln n) or, changing the base of the logarithm (for instance in the lower bound for comparison sorting), log₂ n! = n log₂ n − n log₂ e + O(log₂ n). The ratio n! / (n^(n+1/2) e^(−n)) is always between √(2π) = 2.5066... and e = 2.71828...

The formula, together with precise estimates of its error, can be derived as follows. Instead of approximating n!, one considers its natural logarithm, as this is a slowly varying function: ln n! = Σ_{j=1}^{n} ln j. Applying the Euler–Maclaurin formula to this sum and taking limits gives ln n! = n ln n − n + (1/2) ln n + y + Σ_{k=2}^{m} (−1)^k B_k / (k(k−1) n^(k−1)) + O(n^(−m)), where the B_k are Bernoulli numbers and y is a constant. Taking the exponential of both sides, and choosing any positive integer m, we get a formula involving an unknown quantity e^y. For m = 1, the formula is n! ≈ e^y √n (n/e)^n. The quantity e^y can be found by taking the limit on both sides as n tends to infinity and using Wallis' product, which shows that e^y = √(2π). Therefore, we get Stirling's formula: n! ∼ √(2πn) (n/e)^n. The formula may also be obtained by repeated integration by parts, and the leading term can be found through Laplace's method. Stirling's formula without the factor √(2πn), which is often irrelevant in applications, can be quickly obtained by approximating the sum ln n! = Σ_{j=1}^{n} ln j with an integral: ln n! ≈ ∫₁ⁿ ln x dx = n ln n − n + 1.

An alternative formula for n! uses the gamma function: n! = ∫₀^∞ x^n e^(−x) dx. Rewriting and changing variables x = ny, one gets n! = ∫₀^∞ e^(n ln x − x) dx = e^(n ln n) n ∫₀^∞ e^(n(ln y − y)) dy. Applying Laplace's method we have ∫₀^∞ e^(n(ln y − y)) dy ∼ √(2π/n) e^(−n), which recovers Stirling's formula: n! ∼ e^(n ln n) n √(2π/n) e^(−n) = √(2πn) (n/e)^n. In fact, further corrections can also be obtained using Laplace's method; for example, computing the two-order expansion yields ∫₀^∞ e^(n(ln y − y)) dy ∼ √(2π/n) e^(−n) (1 + 1/(12n)) and gives Stirling's formula to two orders: n! ∼ √(2πn) (n/e)^n (1 + 1/(12n)). Stirling's formula is in fact the first approximation to the following series: n! ∼ √(2πn) (n/e)^n (1 + 1/(12n) + 1/(288n²) − ⋯). 
An explicit formula for the coefficients in this series was given by G. Nemes. The first graph in this section shows the relative error versus n, for 1 through all 5 terms listed above. As n → ∞, the error in the truncated series is asymptotically equal to the first omitted term
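A quick numerical check (my own sketch, not from the article) of the leading-order formula n! ≈ √(2πn)(n/e)^n shows the relative error shrinking roughly like 1/(12n):

```python
import math

def stirling(n: int) -> float:
    # Leading-order Stirling approximation: sqrt(2*pi*n) * (n/e)**n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (1, 10, 100):
    exact = math.factorial(n)
    rel_err = abs(exact - stirling(n)) / exact
    print(n, rel_err)
```

Even at n = 1 the approximation is within about 8%, consistent with the claim that the formula is accurate for small n.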
41.
Srinivasa Ramanujan
–
Srinivasa Iyengar Ramanujan FRS was an Indian mathematician and autodidact who lived during the British Raj. Though he had almost no formal training in pure mathematics, he made substantial contributions to mathematical analysis, number theory, and infinite series. Ramanujan initially developed his own mathematical research in isolation; it was quickly recognized by Indian mathematicians, and his skills eventually became obvious and known to the wider mathematical community, centred in Europe at the time. The Cambridge professor G. H. Hardy realized that Ramanujan had produced new theorems in addition to rediscovering previously known ones. During his short life, Ramanujan independently compiled nearly 3,900 results. Nearly all his claims have now been proven correct, and his original and highly unconventional results, such as the Ramanujan prime and the Ramanujan theta function, have inspired a vast amount of further research. The Ramanujan Journal, a scientific journal, was established to publish work in all areas of mathematics influenced by Ramanujan. Deeply religious, Ramanujan credited his substantial mathematical capacities to divinity: "An equation for me has no meaning unless it expresses a thought of God", he once said. The name Ramanujan means "younger brother of the god Rama". Iyengar is a caste of Hindu Brahmins of Tamil origin whose members follow the Visishtadvaita philosophy propounded by Ramanuja. Ramanujan was born on 22 December 1887 into a Tamil Brahmin Iyengar family in Erode, Madras Presidency, at the residence of his maternal grandparents. His father, K. Srinivasa Iyengar, worked as a clerk in a sari shop, and his mother, Komalatammal, was a housewife who also sang at a local temple. They lived in a traditional home on Sarangapani Sannidhi Street in the town of Kumbakonam. The family home is now a museum. When Ramanujan was a year and a half old, his mother gave birth to a son, Sadagopan, who died less than three months later. 
In December 1889, Ramanujan contracted smallpox but, unlike the thousands in the Thanjavur district who died of the disease that year, he recovered. He moved with his mother to her parents' house in Kanchipuram, near Madras. His mother gave birth to two more children, in 1891 and 1894, but both died in infancy. On 1 October 1892, Ramanujan was enrolled at the local school. After his maternal grandfather lost his job as a court official in Kanchipuram, Ramanujan and his mother moved back to Kumbakonam, and he was enrolled in the Kangayan Primary School. When his paternal grandfather died, he was sent back to his maternal grandparents, who were then living in Madras. He did not like school in Madras and tried to avoid attending; his family enlisted a local constable to make sure the boy attended school. Within six months, Ramanujan was back in Kumbakonam. Since Ramanujan's father was at work most of the day, his mother took care of the boy as a child, and he had a close relationship with her
42.
Personal computer
–
A personal computer is a multi-purpose electronic computer whose size, capabilities, and price make it feasible for individual use. PCs are intended to be operated directly by an end-user, rather than by a computer expert or technician. In the 2010s, PCs are typically connected to the Internet, allowing access to the World Wide Web. Personal computers may be connected to a local area network, either by a cable or a wireless connection. In the 2010s, a PC may be a multi-component desktop computer, designed for use in a fixed location; a laptop computer, designed for easy portability; or a tablet computer. In the 2010s, PCs run using an operating system, such as Microsoft Windows or Linux. The very earliest microcomputers, equipped with a front panel, required hand-loading of a bootstrap program to load programs from external storage. Before long, automatic booting from permanent read-only memory became universal. In the 2010s, users have access to a wide range of commercial software, free software, and free and open-source software, which are provided in ready-to-run or ready-to-compile form. Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry; these include Apple's OS X and free and open-source Unix-like operating systems such as Linux and the Berkeley Software Distribution. Advanced Micro Devices provides the main alternative to Intel's processors. PC is an initialism for personal computer. Some PCs, including the OLPC XOs, are equipped with x86 or x64 processors but are not designed to run Microsoft Windows. PC is also used in contrast with Mac, an Apple Macintosh computer; this sense of the word is used in the "Get a Mac" advertisement campaign that ran between 2006 and 2009, as well as its rival, the "I'm a PC" campaign, that appeared in 2008. 
Since Apple's transition to Intel processors starting in 2005, all Macintosh computers are now PCs in this hardware sense. One early prediction held that the "brain" may one day come down to our level and help with our income-tax and book-keeping calculations, "but this is speculation and there is no sign of it so far". In the history of computing there were many examples of computers designed to be used by one person, as opposed to terminals connected to mainframe computers. Using the narrow definition of "operated by one person", the first personal computer was the ENIAC, which became operational in 1946; it did not, however, meet further definitions of affordable or easy to use. An example of an early single-user computer was the LGP-30, created in 1956 by Stan Frankel and used for science; it came with a retail price of $47,000, equivalent to about $414,000 today. Introduced at the 1965 New York World's Fair, the Programma 101 was a programmable calculator described in advertisements as a desktop computer. It was manufactured by the Italian company Olivetti and invented by the Italian engineer Pier Giorgio Perotto. The Soviet MIR series of computers was developed from 1965 to 1969 in a group headed by Victor Glushkov