In set theory, a Cartesian product is a mathematical operation that returns a set from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B. Products can be specified using set-builder notation, e.g. A × B = {(a, b) | a ∈ A and b ∈ B}. A table can be created by taking the Cartesian product of a set of rows and a set of columns: if the Cartesian product rows × columns is taken, the cells of the table contain ordered pairs of the form (row value, column value). More generally, a Cartesian product of n sets, known as an n-fold Cartesian product, can be represented by an array of n dimensions, where each element is an n-tuple. An ordered pair is a 2-tuple, or couple. The Cartesian product is named after René Descartes, whose formulation of analytic geometry gave rise to the concept, which is further generalized in terms of the direct product. An illustrative example is the standard 52-card deck: the standard playing card ranks {A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2} form a 13-element set, and the card suits {♠, ♥, ♦, ♣} form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52 ordered pairs, which correspond to all 52 possible playing cards.
Ranks × Suits returns a set of the form {(A, ♠), (A, ♥), (A, ♦), (A, ♣), (K, ♠), ..., (2, ♦), (2, ♣)}. Suits × Ranks returns a set of the form {(♠, A), (♠, K), ..., (♣, 3), (♣, 2)}. These two sets are distinct, even disjoint. The main historical example is the Cartesian plane in analytic geometry. In order to represent geometrical shapes in a numerical way and extract numerical information from shapes' numerical representations, René Descartes assigned to each point in the plane a pair of real numbers, called its coordinates; such a pair's first and second components are called its x and y coordinates, respectively. The set of all such pairs, i.e. the Cartesian product R × R with R denoting the real numbers, is thus assigned to the set of all points in the plane. A formal definition of the Cartesian product from set-theoretical principles follows from a definition of ordered pair. The most common definition of ordered pairs, Kuratowski's definition, is (x, y) = {{x}, {x, y}}. Under this definition, (x, y) is an element of P(P(X ∪ Y)), and X × Y is a subset of that set, where P represents the power-set operator. Therefore, the existence of the Cartesian product of any two sets in ZFC follows from the axioms of pairing, power set, and specification.
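As a sketch, the deck example can be reproduced with Python's itertools.product; the particular rank and suit labels below are illustrative, since the text only fixes the sizes of the two sets:

```python
from itertools import product

# Illustrative labels: 13 ranks and 4 suits, as in the article.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]

# Ranks x Suits: 13 * 4 = 52 ordered pairs, one per playing card.
deck = set(product(ranks, suits))
assert len(deck) == 52

# Suits x Ranks is a different (here even disjoint) 52-element set,
# because the components of every ordered pair are swapped.
deck_swapped = set(product(suits, ranks))
assert deck.isdisjoint(deck_swapped)
```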
Since functions are usually defined as a special case of relations, and relations are usually defined as subsets of the Cartesian product, the definition of the two-set Cartesian product is prior to most other definitions. Let A, B, C, and D be sets. The Cartesian product A × B is not commutative, A × B ≠ B × A, because the ordered pairs are reversed unless at least one of the following conditions is satisfied: A is equal to B, or A or B is the empty set. For example, with A = {1, 2} and B = {3, 4}: A × B = {(1, 3), (1, 4), (2, 3), (2, 4)}, while B × A = {(3, 1), (3, 2), (4, 1), (4, 2)}. Strictly speaking, the Cartesian product is not associative either (unless one of the involved sets is empty): (A × B) × C ≠ A × (B × C). The Cartesian product behaves nicely with respect to intersections: (A ∩ B) × (C ∩ D) = (A × C) ∩ (B × D). In most cases, the analogous statement for unions does not hold: (A ∪ B) × (C ∪ D) ≠ (A × C) ∪ (B × D). In fact, we have that (A ∪ B) × (C ∪ D) = (A × C) ∪ (A × D) ∪ (B × C) ∪ (B × D).
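These identities can be spot-checked with small finite sets; the particular sets below are arbitrary choices:

```python
from itertools import product

def cartesian(X, Y):
    """X x Y as a set of ordered pairs."""
    return set(product(X, Y))

A, B, C, D = {1, 2}, {2, 3}, {4, 5}, {5, 6}

# Non-commutative: the pairs are reversed.
assert cartesian(A, C) != cartesian(C, A)

# Intersections distribute over the product.
assert cartesian(A & B, C & D) == cartesian(A, C) & cartesian(B, D)

# Unions do not: the left side is generally a strict superset.
assert cartesian(A | B, C | D) > cartesian(A, C) | cartesian(B, D)

# The full four-term expansion restores equality.
assert cartesian(A | B, C | D) == (cartesian(A, C) | cartesian(A, D)
                                   | cartesian(B, C) | cartesian(B, D))
```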
In computer science, a binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set containing the root; some authors allow the binary tree to be the empty set as well. From a graph theory perspective, binary trees as defined here are arborescences. A binary tree may thus be called a bifurcating arborescence, a term which appears in some old programming books, before the modern computer science terminology prevailed. It is also possible to interpret a binary tree as an undirected, rather than a directed, graph, in which case a binary tree is an ordered, rooted tree. Some authors use rooted binary tree instead of binary tree to emphasize the fact that the tree is rooted, but as defined above, a binary tree is always rooted. A binary tree is a special case of an ordered k-ary tree, where k is 2.
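A minimal sketch of the recursive definition in Python (the class and function names are illustrative): a tree is either empty (None here) or a root value with two subtrees.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A binary tree node, mirroring the tuple (L, S, R) of the
    recursive definition: a root value plus optional left and
    right subtrees, each a binary tree or empty."""
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def size(t: Optional[Node]) -> int:
    """Number of nodes, computed by the same recursion."""
    return 0 if t is None else 1 + size(t.left) + size(t.right)

tree = Node(1, Node(2, Node(4)), Node(3))
assert size(tree) == 4
```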
In mathematics, what is termed a binary tree can vary from author to author. Some use the definition used in computer science, but others define it as every non-leaf having exactly two children, and don't necessarily order the children either. In computing, binary trees are used in two different ways. First, as a means of accessing nodes based on some value or label associated with each node. Binary trees labelled this way are used to implement binary search trees and binary heaps, and are used for efficient searching and sorting. The designation of non-root nodes as left or right child, even when there is only one child present, matters in some of these applications; in particular, it is significant in binary search trees. However, the arrangement of particular nodes into the tree is not part of the conceptual information. For example, in a normal binary search tree the placement of nodes depends entirely on the order in which they were added, and nodes can be re-arranged without changing the meaning. Second, as a representation of data with a relevant bifurcating structure.
In such cases the particular arrangement of nodes under and/or to the left or right of other nodes is part of the information. Common examples occur with Huffman coding and cladograms; the everyday division of documents into chapters, paragraphs, and so on is an analogous example with n-ary rather than binary trees. To define a binary tree in general, we must allow for the possibility that only one of the children may be empty. An artifact, which in some textbooks is called an extended binary tree, is needed for that purpose. An extended binary tree is thus recursively defined as follows: the empty set is an extended binary tree; if T1 and T2 are extended binary trees, then denote by T1 • T2 the extended binary tree obtained by adding a root r connected on the left to T1 and on the right to T2, adding edges when these sub-trees are non-empty. Another way of imagining this construction is to consider, instead of the empty set, a different type of node, for instance square nodes if the regular ones are circles. A binary tree is a rooted tree that is also an ordered tree, in which every node has at most two children.
A rooted tree imparts a notion of levels, and thus for every node a notion of children may be defined as the nodes connected to it a level below. Ordering of these children makes it possible to distinguish the left child from the right child, but this still doesn't distinguish between a node with a left but no right child and one with a right but no left child. The necessary distinction can be made by first partitioning the edges, i.e. defining the binary tree as a triplet (V, E1, E2), where (V, E1 ∪ E2) is a rooted tree, E1 ∩ E2 is empty, and for all j ∈ {1, 2} every node has at most one Ej child. A more informal way of making the distinction is to say, quoting the Encyclopedia of Mathematics, that "every node has a left child, a right child, neither, or both" and to specify that these "are all different" binary trees. Tree terminology is not well standardized and so varies in the literature. A rooted binary tree has a root node and every node has at most two children. A full binary tree is a tree in which every node has either 0 or 2 children. Another way of defining a full binary tree is via a recursive definition.
A full binary tree is either: a single vertex, or a tree whose root node has two subtrees, both of which are full binary trees. In a complete binary tree every level, except possibly the last, is completely filled, and all nodes in the last level are as far left as possible; it can have between 1 and 2^h nodes at the last level h. An alternative definition is a perfect tree from which some rightmost leaves at the deepest level may have been removed. Some authors use the term complete to refer instead to a perfect binary tree as defined below, in which case they call this type of tree an almost complete binary tree or nearly complete binary tree. A complete binary tree can be efficiently represented using an array. A perfect binary tree is a binary tree in which all interior nodes have two children and all leaves have the same depth or same level. An example of a perfect binary tree is the ancestry chart of a person to a given depth, as each person has exactly two biological parents. In the infinite complete binary tree, every node has two children. The set of all nodes is countably infinite, but the set of all infinite paths from the root is uncountable.
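The array representation mentioned above can be sketched as follows; the index arithmetic is the standard binary-heap layout, and the sample values are illustrative:

```python
# In the array encoding of a complete binary tree, the node stored at
# index i has its children at indices 2*i + 1 and 2*i + 2, and its
# parent at (i - 1) // 2. Completeness is what makes this compact:
# every index below len(a) is occupied, so no slots are wasted.
heap = [1, 3, 6, 5, 9, 8]   # level-order listing of the nodes

def children(a, i):
    """Values of node i's children that actually exist."""
    return [a[j] for j in (2 * i + 1, 2 * i + 2) if j < len(a)]

assert children(heap, 0) == [3, 6]   # children of the root
assert children(heap, 2) == [8]      # last level filled from the left
assert children(heap, 3) == []       # a leaf
```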
A decimal separator is a symbol used to separate the integer part from the fractional part of a number written in decimal form. Different countries designate different symbols for the decimal separator; the choice of symbol for the decimal separator also affects the choice of symbol for the thousands separator used in digit grouping, so the latter is treated in this article as well. Any such symbol can be called a decimal marker or decimal sign, but symbol-specific names are also used. In many contexts, when a number is spoken, the function of the separator is assumed by the spoken name of the symbol: comma or point in most cases. In some specialized contexts, the word decimal is instead used for this purpose. In mathematics the decimal separator is a type of radix point, a term that also applies to number systems with bases other than ten. In the Middle Ages, before printing, a bar over the units digit was used to separate the integral part of a number from its fractional part, e.g. 9995 with the bar over the second 9, meaning 99.95. This practice derived from the decimal system used in Indian mathematics and was popularized by the Persian mathematician Al-Khwarizmi, when a Latin translation of his work on the Indian numerals introduced the decimal positional number system to the Western world.
His Compendious Book on Calculation by Completion and Balancing presented the first systematic solution of linear and quadratic equations in Arabic. A similar notation remains in common use as an underbar to superscript digits for monetary values without a decimal separator, e.g. 9995. Later, a "separatrix" between the units and tenths position became the norm among Arab mathematicians, while an L-shaped or vertical bar served as the separatrix in England. When this character was typeset, it was convenient to use the existing comma or full stop instead. Gerbert of Aurillac marked triples of columns with an arc when using his Hindu–Arabic numeral-based abacus in the 10th century. Fibonacci followed this convention when writing numbers in his influential work Liber Abaci in the 13th century. Tables of logarithms prepared by John Napier in 1614 and 1619 used the period as the decimal separator, which was then adopted by Henry Briggs in his influential 17th-century work. In France, the full stop was already in use in printing to make Roman numerals more readable, so the comma was chosen as the decimal separator.
Many other countries, such as Italy, also chose to use the comma to mark the decimal units position. It has been made standard by the ISO for international blueprints. However, English-speaking countries took the comma to separate sequences of three digits. In some countries, a raised dot or dash may be used as the decimal separator. In the United States, the full stop or period was used as the standard decimal separator. In the nations of the British Empire, although the full stop could be used in typewritten material and its use was not banned, the interpunct was preferred for the decimal separator in printing technologies that could accommodate it, e.g. 99·95. However, as the mid dot was already in common use in the mathematics world to indicate multiplication, the SI rejected its use as the decimal separator. During the beginning of British metrication in the late 1960s and with impending currency decimalisation, there was some debate in the United Kingdom as to whether the decimal comma or decimal point should be preferred: the British Standards Institution and some sectors of industry advocated the comma, and the Decimal Currency Board advocated for the point.
In the event, the point was chosen by the Ministry of Technology in 1968. When South Africa adopted the metric system, it adopted the comma as its decimal separator, although a number of house styles, including some English-language newspapers such as The Sunday Times, continue to use the full stop. The three most spoken international auxiliary languages, Esperanto, Ido, and Interlingua, all use the comma as the decimal separator. Interlingua has used the comma as its decimal separator since the publication of the Interlingua Grammar in 1951. Esperanto uses the comma as its official decimal separator, while thousands are separated by non-breaking spaces: 12 345 678,9. Ido's Kompleta Gramatiko Detaloza di la Linguo Internaciona Ido states that commas are used for the decimal separator while full stops are used to separate thousands. So the number 12,345,678.90123, for instance, would be written 12.345.678,90123 in Ido. The 1931 grammar of Volapük by Arie de Jong uses the comma as its decimal separator and the middle dot as the thousands separator.
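The separator conventions above are a presentation-layer concern; as a minimal sketch (the helper name is ours), Python's default English-style formatting can be re-rendered into the continental/Ido style by swapping the two separators:

```python
def format_decimal(x, decimal_sep=",", group_sep="."):
    """Render a number with a chosen decimal separator and digit grouping.

    Python's format() always emits '.' and ','; this sketch swaps them
    afterwards via a placeholder so the two separators don't collide.
    """
    s = f"{x:,.2f}"                      # e.g. '12,345,678.90'
    return (s.replace(",", "\0")
             .replace(".", decimal_sep)
             .replace("\0", group_sep))

assert format_decimal(12345678.9) == "12.345.678,90"           # Ido/continental
assert format_decimal(12345678.9, ".", ",") == "12,345,678.90"  # English
```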
In 1958, disputes between European and American delegates over the correct representation of the decimal separator nearly stalled the development of the ALGOL computer programming language. ALGOL ended up allowing different decimal separators, but most computer languages and standard data formats specify a dot. The 22nd General Conference on Weights and Measures declared in 2003 that "the symbol for the decimal marker shall be either the point on the line or the comma on the line". It further reaffirmed that "numbers may be divided in groups of three in order to facilitate reading; neither dots nor commas are ever inserted in the spaces between groups".
Adaptive mesh refinement
In numerical analysis, adaptive mesh refinement (AMR) is a method of adapting the accuracy of a solution within certain sensitive or turbulent regions of a simulation, dynamically, during the time the solution is being calculated. When solutions are calculated numerically, they are often limited to pre-determined quantified grids, as in the Cartesian plane, which constitute the computational grid, or 'mesh'. Many problems in numerical analysis, however, do not require a uniform precision in the numerical grids used for graph plotting or computational simulation, and would be better served if specific areas of the grids could be refined only in the regions requiring the added precision. Adaptive mesh refinement provides such a dynamic programming environment for adapting the precision of the numerical computation based on the requirements of a computation problem, in specific areas of multi-dimensional graphs which need precision, while leaving the other regions of the multi-dimensional graphs at lower levels of precision and resolution.
This dynamic technique of adapting computation precision to specific requirements has been credited to Marsha Berger, Joseph Oliger, and Phillip Colella, who developed an algorithm for dynamic gridding called local adaptive mesh refinement. AMR has since proved broadly useful and has been used in studying turbulence problems in hydrodynamics as well as in the study of large-scale structures in astrophysics, as in the Bolshoi Cosmological Simulation. In a series of papers, Marsha Berger, Joseph Oliger, and Phillip Colella developed the algorithm as follows. It begins with the entire computational domain covered with a coarsely resolved base-level regular Cartesian grid. As the calculation progresses, individual grid cells are tagged for refinement, using a criterion that can either be user-supplied or based on Richardson extrapolation. All tagged cells are then refined, meaning that a finer grid is overlaid on the coarse one. After refinement, individual grid patches on a single fixed level of refinement are passed off to an integrator which advances those cells in time.
A correction procedure is implemented to correct the transfer along coarse–fine grid interfaces, ensuring that the amount of any conserved quantity leaving one cell balances the amount entering the bordering cell. If at some point the level of refinement in a cell is greater than required, the high-resolution grid may be removed and replaced with a coarser grid. This allows the user to solve problems that are intractable on a uniform grid. Mesh adaptation has also been introduced via functionals, which provide a criterion to drive the adaptation; some advanced functionals include the modified Liao functionals. When calculating a solution to the shallow water equations, the solution might only be calculated for points every few feet apart, and one would assume that in between those points the height varies smoothly. The limiting factor to the resolution of the solution is thus the grid spacing: there will be no features of the numerical solution on scales smaller than the grid spacing.
Adaptive mesh refinement changes the spacing of grid points, to change how accurately the solution is known in each region. In the shallow water example, the grid might in general be spaced every few feet, but it could be adaptively refined to have grid points every few inches in places where there are large waves. If the region in which higher resolution is desired remains localized over the course of the computation, static mesh refinement can be used instead, in which the grid is more finely spaced in some regions than others but maintains its shape over time. The advantages of a dynamic gridding scheme are: increased computational savings over a static grid approach; increased storage savings over a static grid approach; complete control of grid resolution, compared to the fixed resolution of a static grid approach or the Lagrangian-based adaptivity of smoothed particle hydrodynamics; and, compared to pre-tuned static meshes, less detailed a priori knowledge required about the evolution of the solution.
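A toy illustration of the tag-and-refine idea in one dimension (a deliberately simplified sketch, not the Berger–Colella algorithm; the refinement criterion and tolerance are our own choices):

```python
def refine(xs, f, tol=0.5):
    """Insert a midpoint wherever |f| jumps too much between nodes.

    This mimics the 'tag cells, overlay a finer grid' step of local
    adaptive mesh refinement, using the solution jump as the criterion.
    """
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs(f(b) - f(a)) > tol:
            out.append((a + b) / 2)   # refine: halve the flagged cell
        out.append(b)
    return out

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
f = lambda x: 0.0 if x < 0.5 else 1.0   # a sharp front at x = 0.5

fine = refine(xs, f)
# Only the cell containing the front is refined; smooth regions stay coarse.
assert fine == [0.0, 0.25, 0.375, 0.5, 0.75, 1.0]
```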
The computational costs inherit properties of the physical system. Berger, M. J.; Colella, P. (1989). "Local adaptive mesh refinement for shock hydrodynamics". J. Comput. Phys. 82: 64–84. See also: Adaptive stepsize, Cactus Framework, Multigrid method, Silo.
Interval arithmetic, interval mathematics, interval analysis, or interval computation, is a method developed by mathematicians since the 1950s and 1960s as an approach to putting bounds on rounding errors and measurement errors in mathematical computation, and thus developing numerical methods that yield reliable results. Put simply, it represents each value as a range of possibilities. For example, instead of estimating the height of someone using standard arithmetic as 2.0 metres, using interval arithmetic we might be certain that that person is somewhere between 1.97 and 2.03 metres. This concept is suitable for a variety of purposes. The most common use is to keep track of and handle rounding errors directly during the calculation, and uncertainties in the knowledge of the exact values of physical and technical parameters. The latter arise from measurement errors and tolerances for components, or due to limits on computational accuracy. Interval arithmetic helps find reliable and guaranteed solutions to equations and optimization problems.
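The endpoint bookkeeping can be sketched with a toy interval class; real libraries additionally round the endpoints outward so that floating-point error stays inside the bounds:

```python
class Interval:
    """An interval [lo, hi]; each operation's endpoints bound every
    pointwise result of that operation on members of the operands."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sums are monotone, so endpoints add directly.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Signs can flip the extremes, so take min/max of all products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.97, 2.03)          # a height known only to within 6 cm
y = x + Interval(-0.1, 0.1)       # add an uncertain correction
assert abs(y.lo - 1.87) < 1e-9 and abs(y.hi - 2.13) < 1e-9

s = Interval(-1, 2) * Interval(3, 4)
assert (s.lo, s.hi) == (-4, 8)
```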
Mathematically, instead of working with an uncertain real x, we work with the two ends of an interval [a, b] that contains x: in interval arithmetic, the variable x lies between a and b, or could be one of them. A function f, when applied to x, is likewise uncertain; in interval arithmetic f produces an interval [c, d] containing all the possible values of f(x) for all x ∈ [a, b]. The main focus of interval arithmetic is the simplest way to calculate upper and lower endpoints for the range of values of a function in one or more variables. These endpoints are not necessarily the true supremum or infimum, since the precise calculation of those values can be difficult or impossible. Treatment is usually limited to real intervals, so quantities of the form [a, b] where a = −∞ or b = ∞ are allowed. As with traditional calculations with real numbers, simple arithmetic operations and functions on elementary intervals must first be defined. More complicated functions can then be calculated from these basic elements. Take as an example the calculation of body mass index (BMI); the BMI is the body weight in kilograms divided by the square of height in metres.
A bathroom scale may have a resolution of one kilogram. We do not know intermediate values – whether the weight is about 79.6 kg or 80.3 kg – but only information rounded to the nearest whole number. It is unlikely that when the scale reads 80 kg, someone weighs exactly 80.0 kg. In normal rounding to the nearest value, a scale showing 80 kg indicates a weight between 79.5 kg and 80.5 kg. The relevant range is that of all real numbers that are greater than or equal to 79.5 and less than or equal to 80.5, or in other words the interval [79.5, 80.5]. For a man who weighs 80 kg and is 1.80 m tall, the BMI is about 24.7. With a weight of 79.5 kg and the same height the value is 24.5, while 80.5 kilograms gives almost 24.9. So the actual BMI is in the range [24.5, 24.9]. The error in this case does not affect the conclusion, but this is not always so. For example, weight fluctuates in the course of a day, so that the BMI can vary between 24 and 25. Without a detailed analysis, it is not always possible to exclude the question of whether an error is large enough to have significant influence. Interval arithmetic states the range of possible outcomes explicitly.
Put simply, results are no longer stated as numbers, but as intervals that represent imprecise values. The size of the interval is similar to an error bar in expressing the extent of uncertainty. Simple arithmetic operations and functions, such as basic arithmetic and trigonometric functions, enable the calculation of outer limits of intervals. Returning to the earlier BMI example, in determining the body mass index, height and body weight both affect the result. For height, measurements are in round centimetres: a recorded measurement of 1.80 metres means a height somewhere between 1.795 m and 1.805 m. This uncertainty must be combined with the fluctuation range in weight between 79.5 kg and 80.5 kg. The BMI is defined as the weight in kilograms divided by the square of the height in metres. Using either 79.5 kg and 1.795 m or 80.5 kg and 1.805 m gives approximately 24.7. But the person in question may only be 1.795 m tall, with a weight of 80.5 kilograms, or 1.805 m and 79.5 kilograms: all combinations of all possible intermediate values must be considered.
Using the interval arithmetic methods described below, the BMI lies in the interval [79.5, 80.5] / [1.795, 1.805]² = [24.4, 25.0].
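As a numerical check of the worked example (weight in [79.5, 80.5] kg, height in [1.795, 1.805] m; since both quantities are positive, the extreme quotients pair opposite endpoints):

```python
w_lo, w_hi = 79.5, 80.5        # weight bounds, kg
h_lo, h_hi = 1.795, 1.805      # height bounds, m

bmi_lo = w_lo / h_hi**2        # lightest weight over tallest height
bmi_hi = w_hi / h_lo**2        # heaviest weight over shortest height

assert round(bmi_lo, 1) == 24.4
assert round(bmi_hi, 1) == 25.0
```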
Algebra is one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating these symbols; it includes everything from elementary equation solving to the study of abstractions such as groups and fields. The more basic parts of algebra are called elementary algebra. Elementary algebra is considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics. Abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians. Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are either unknown or allowed to take on many values. For example, in x + 2 = 5 the letter x is unknown, but the law of inverses can be used to discover its value: x = 3. In E = mc2, the letters E and m are variables, and the letter c is a constant, the speed of light in a vacuum.
Algebra gives methods for writing formulas and solving equations that are much clearer and easier than the older method of writing everything out in words. The word algebra is also used in certain specialized ways. A special kind of mathematical object in abstract algebra is called an "algebra", and the word is used, for example, in the phrases linear algebra and algebraic topology. A mathematician who does research in algebra is called an algebraist. The word algebra comes from the Arabic الجبر, from the title of the book Ilm al-jabr wa'l-muḳābala by the Persian mathematician and astronomer al-Khwarizmi. The word entered the English language during the fifteenth century, from either Spanish, Italian, or Medieval Latin, where it referred to the surgical procedure of setting broken or dislocated bones. The mathematical meaning was first recorded in the sixteenth century. The word "algebra" has several related meanings in mathematics, as a single word or with qualifiers. As a single word without an article, "algebra" names a broad part of mathematics.
As a single word with an article or in the plural, "an algebra" or "algebras" denotes a specific mathematical structure, whose precise definition depends on the author. The structure has an addition, a multiplication, and a scalar multiplication. When some authors use the term "algebra", they make a subset of the following additional assumptions: associative, unital, and/or finite-dimensional. In universal algebra, the word "algebra" refers to a generalization of the above concept, which allows for n-ary operations. With a qualifier, there is the same distinction: without an article, it means a part of algebra, such as linear algebra, elementary algebra, or abstract algebra; with an article, it means an instance of some abstract structure, like a Lie algebra, an associative algebra, or a vertex operator algebra. Sometimes both meanings exist for the same qualifier, as in the sentence: commutative algebra is the study of commutative rings, which are commutative algebras over the integers. Algebra began with letters standing for numbers.
This allowed proofs of properties that hold no matter which numbers are involved. For example, in the quadratic equation ax² + bx + c = 0, a, b, c can be any numbers whatsoever (with a nonzero), and the quadratic formula can be used to quickly find the values of the unknown quantity x which satisfy the equation. In current teaching, the study of algebra starts with the solving of equations such as the quadratic equation above. Then more general questions, such as "does an equation have a solution?", "how many solutions does an equation have?", "what can be said about the nature of the solutions?" are considered. These questions led to extending algebra to non-numerical objects, such as permutations, vectors, and polynomials; the structural properties of these non-numerical objects were then abstracted into algebraic structures such as groups and fields. Before the 16th century, mathematics was divided into only two subfields, arithmetic and geometry. Though some methods, developed much earlier, may be considered nowadays as algebra, the emergence of algebra and, soon thereafter, of infinitesimal calculus as subfields of mathematics only dates from the 16th or 17th century.
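The quadratic formula mentioned above can be sketched in a few lines; cmath keeps the formula total, so a negative discriminant simply yields a complex-conjugate pair of roots rather than an error:

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula
    x = (-b ± sqrt(b**2 - 4*a*c)) / (2*a), with a != 0."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r1, r2 = solve_quadratic(1, -3, 2)     # x**2 - 3x + 2 = (x - 1)(x - 2)
assert {r1, r2} == {1, 2}
```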
From the second half of the 19th century on, many new fields of mathematics appeared, most of which made use of both arithmetic and geometry, and almost all of which used algebra. Today, algebra has grown until it includes many branches of mathematics, as can be seen in the Mathematics Subject Classification, where none of the first-level areas is called simply algebra.
Maxima and minima
In mathematical analysis, the maxima and minima of a function, known collectively as extrema, are the largest and smallest values of the function, either within a given range or on the entire domain of the function. Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions. As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no maximum. A real-valued function f defined on a domain X has a global maximum point at x∗ if f(x∗) ≥ f(x) for all x in X. Similarly, the function has a global minimum point at x∗ if f(x∗) ≤ f(x) for all x in X. The value of the function at a maximum point is called the maximum value of the function, and the value of the function at a minimum point is called the minimum value of the function. Symbolically, this can be written as follows: x0 ∈ X is a global maximum point of the function f: X → R if f(x0) ≥ f(x) for all x in X.
The definition of a global minimum point proceeds similarly. If the domain X is a metric space, then f is said to have a local maximum point at the point x∗ if there exists some ε > 0 such that f(x∗) ≥ f(x) for all x in X within distance ε of x∗. Similarly, the function has a local minimum point at x∗ if f(x∗) ≤ f(x) for all x in X within distance ε of x∗. A similar definition can be used when X is a topological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows: let (X, dX) be a metric space and f: X → R a function. Then x0 ∈ X is a local maximum point of f if there exists ε > 0 such that dX(x, x0) < ε ⟹ f(x0) ≥ f(x). The definition of a local minimum point is similar. In both the global and local cases, the concept of a strict extremum can be defined. For example, x∗ is a strict global maximum point if, for all x in X with x ≠ x∗, we have f(x∗) > f(x), and x∗ is a strict local maximum point if there exists some ε > 0 such that, for all x in X within distance ε of x∗ with x ≠ x∗, we have f(x∗) > f(x). Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points.
A continuous real-valued function with a compact domain always has a maximum point and a minimum point. An important example is a function whose domain is a closed and bounded interval of real numbers. Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem global maxima and minima exist. Furthermore, a global maximum either must be a local maximum in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum is to look at all the local maxima in the interior, also look at the maxima of the points on the boundary, and take the largest one. The most important, yet quite obvious, feature of continuous real-valued functions of a real variable is that they decrease before local minima and increase afterwards, and likewise for maxima. A direct consequence of this is Fermat's theorem, which states that local extrema must occur at critical points. One can distinguish whether a critical point is a local maximum or local minimum by using the first derivative test, second derivative test, or higher-order derivative test, given sufficient differentiability.
For any function defined piecewise, one finds a maximum by finding the maximum of each piece separately, and then seeing which one is largest. The function x² has a unique global minimum at x = 0. The function x³ has no global or local maxima or minima: although its first derivative (3x²) is 0 at x = 0, this is an inflection point. The function x⁻ˣ has a unique global maximum over the positive real numbers at x = 1/e. The function x³/3 − x has first derivative x² − 1 and second derivative 2x. Setting the first derivative to 0 and solving for x gives stationary points at −1 and +1. From the sign of the second derivative, we can see that −1 is a local maximum and +1 is a local minimum.
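The worked example f(x) = x³/3 − x can be verified numerically, applying the second derivative test at the two stationary points:

```python
f = lambda x: x**3 / 3 - x
fp = lambda x: x**2 - 1       # first derivative: zero at x = -1 and x = +1
fpp = lambda x: 2 * x         # second derivative classifies each point

for x0, kind in [(-1.0, "max"), (1.0, "min")]:
    assert fp(x0) == 0                             # stationary point
    if kind == "max":
        assert fpp(x0) < 0                         # concave: local maximum
        assert f(x0) > f(x0 - 0.1) and f(x0) > f(x0 + 0.1)
    else:
        assert fpp(x0) > 0                         # convex: local minimum
        assert f(x0) < f(x0 - 0.1) and f(x0) < f(x0 + 0.1)
```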