1.
Analytic hierarchy process
–
The analytic hierarchy process (AHP) is a structured technique for organizing and analyzing complex decisions, based on mathematics and psychology. It was developed by Thomas L. Saaty in the 1970s and has been extensively studied and refined since then. Rather than prescribing a "correct" decision, the AHP helps decision makers find one that best suits their goal. Users of the AHP first decompose their decision problem into a hierarchy of more easily comprehended sub-problems, each of which can be analyzed independently. In making the comparisons, the decision makers can use data about the elements, but it is the essence of the AHP that human judgments, and not just the underlying information, can be used in performing the evaluations. The AHP converts these evaluations to numerical values that can be processed and compared over the entire range of the problem. A numerical weight or priority is derived for each element of the hierarchy, allowing diverse and often incommensurable elements to be compared to one another in a rational and consistent way; this capability distinguishes the AHP from other decision-making techniques. In the final step of the process, numerical priorities are calculated for each of the decision alternatives. These numbers represent the alternatives' relative ability to achieve the decision goal, so they allow a straightforward consideration of the various courses of action. Several firms supply computer software to assist in using the process. Decision situations to which the AHP can be applied include Choice – the selection of one alternative from a given set of alternatives, usually where there are multiple decision criteria involved. Other areas have included forecasting, total quality management, business process re-engineering, and quality function deployment. Many AHP applications are never reported to the world at large, because they take place at high levels of large organizations where security and privacy considerations prohibit their disclosure, but some uses of AHP are discussed in the literature.
In the US, it was recently applied to a project that uses video footage to assess the condition of highways in Virginia; highway engineers first used it to determine the scope of the project. The AHP is an important subject in the quality field, and is taught in many specialized courses, including Six Sigma and Lean Six Sigma. The value of the AHP is recognized in developed and developing countries around the world. China is an example: nearly a hundred Chinese universities offer courses in AHP, many doctoral students choose AHP as the subject of their research and dissertations, and over 900 papers have been published on the subject in China. The International Symposium on the Analytic Hierarchy Process holds biennial meetings of academics and practitioners interested in the field. A wide range of topics is covered; those in 2005 ranged from Establishing Payment Standards for Surgical Specialists, to Strategic Technology Roadmapping, to Infrastructure Reconstruction in Devastated Countries. At the 2007 meeting in Valparaíso, Chile, over 90 papers were presented from 19 countries, including the US, Germany, Japan, Chile, and Malaysia. A similar number of papers was presented at the 2009 symposium in Pittsburgh, Pennsylvania, when 28 countries were represented; subjects of the papers included Economic Stabilization in Latvia, Portfolio Selection in the Banking Sector, and Wildfire Management to Help Mitigate Global Warming. As can be seen in the material that follows, using the AHP involves the mathematical synthesis of numerous judgments about the decision problem at hand. It is not uncommon for these judgments to number in the dozens or even the hundreds. While the math can be done by hand or with a calculator, it is far more common to use one of several computerized methods for entering and synthesizing the judgments.
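The synthesis step described above can be sketched in a few lines of Python. This is a minimal illustration, not from the text itself: it uses the row geometric-mean method, a common stand-in for Saaty's principal-eigenvector calculation, and the comparison matrix shown is a hypothetical example on Saaty's 1–9 scale.

```python
import math

def ahp_priorities(matrix):
    """Approximate AHP priority weights from a pairwise comparison
    matrix using the row geometric-mean method (a common
    approximation of the principal-eigenvector priorities)."""
    n = len(matrix)
    # Geometric mean of each row, then normalize so weights sum to 1.
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical example: three criteria compared pairwise.
# matrix[i][j] states how much more important criterion i is than j,
# so matrix[j][i] is its reciprocal.
comparisons = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_priorities(comparisons)
```

For this matrix the first criterion receives the largest weight, matching the intuition that it dominated both pairwise comparisons.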
2.
Natural number
–
In mathematics, the natural numbers are those used for counting and ordering. In common language, words used for counting are cardinal numbers. Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, but in other writings, that term is used instead for the integers. These chains of extensions make the natural numbers canonically embedded in the other number systems. Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics. The most primitive method of representing a natural number is to put down a mark for each object. Later, a set of objects could be tested for equality, excess, or shortage by striking out a mark. The first major advance in abstraction was the use of numerals to represent numbers, which allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones, and similarly for the number 4,622. A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation dates back as early as 700 BC by the Babylonians. The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628. The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes.
Some Greek mathematicians treated the number 1 differently than larger numbers. Independent studies also occurred at around the same time in India, China, and Mesoamerica. In 19th century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. A school of Naturalism stated that the natural numbers were a direct consequence of the human psyche. Henri Poincaré was one of its advocates, as was Leopold Kronecker, who summarized his belief as "God made the integers, all else is the work of man." In opposition to the Naturalists, the constructivists saw a need to improve the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural but a consequence of definitions. Later, two classes of such formal definitions were constructed; still later, they were shown to be equivalent in most practical applications. The second class of definitions was introduced by Giuseppe Peano and is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor, and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several systems of set theory.
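Grassmann-style recursive definitions, of the kind later axiomatized by Peano, can be sketched in Python. This is an illustrative encoding only, using ordinary integers as stand-ins for the terms 0, succ(0), succ(succ(0)), …:

```python
def succ(n):
    """Successor: every natural number is built from 0 by
    repeatedly applying succ."""
    return n + 1

def add(m, n):
    """Grassmann's recursive addition:
    m + 0 = m,  m + succ(n) = succ(m + n)."""
    if n == 0:
        return m
    return succ(add(m, n - 1))

def mul(m, n):
    """Recursive multiplication built on addition:
    m * 0 = 0,  m * succ(n) = (m * n) + m."""
    if n == 0:
        return 0
    return add(mul(m, n - 1), m)
```

Evaluating `add(2, 3)` unfolds to five applications of `succ` to 0, which is exactly the sense in which the definitions reduce arithmetic to the successor operation.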
3.
PlanetMath
–
PlanetMath is a free, collaborative, online mathematics encyclopedia. The emphasis is on rigour, openness, pedagogy, real-time content, and interlinked content, and the project is intended to be comprehensive. It is currently hosted by the University of Waterloo, and the site is owned by a US-based nonprofit corporation, PlanetMath.org. The main PlanetMath focus is on encyclopedic entries and some forum discussions. In addition, the project hosts data about books and expositions, and a system for semi-private messaging among users is also in place. The project is also developing software recommendations for improved content authoring and editorial functions. PlanetMath content is licensed under the copyleft Creative Commons Attribution/Share-Alike License. All content is written in LaTeX, a typesetting system popular among mathematicians because of its support of the needs of mathematical typesetting. The software originally running PlanetMath, known as Noösphere, was written in Perl, ran on Linux, and has been released under the free BSD License. As of March 13, 2013, PlanetMath has retired Noösphere and now runs on software called Planetary. Encyclopedic content and bibliographic materials related to physics, mathematics, and mathematical physics are developed by PlanetPhysics. That site, launched in 2005, uses similar software but a significantly different moderation model, with emphasis on current research in physics and peer review.
4.
Real number
–
In mathematics, a real number is a value that represents a quantity along a continuous line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and the complex numbers include the real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics; several rigorous definitions were later developed. All these definitions satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than that of the reals is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC; the Vedic Sulba Sutras, in c. 600 BC, include what may be the first use of irrational numbers. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation. In the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory.
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and has finally been made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the entire set of real numbers without having defined them cleanly. The first rigorous definition was given by Georg Cantor in 1871. In 1874, he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry. From the structuralist point of view, all these constructions are on equal footing.
5.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8, and the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
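The check digits behind the formats above can be sketched as follows. This is a standard description rather than something stated in the text: an ISBN-10 check digit makes the sum of the ten digits, weighted 10 down to 1, divisible by 11, while an ISBN-13 check digit makes the sum of the thirteen digits, weighted alternately 1 and 3, divisible by 10. Prefixing an SBN with 0 adds a term of 0 × 10 to the ISBN-10 sum, which is why the check digit survives the conversion unchanged.

```python
def isbn10_check_digit(first9):
    """Check digit for a 9-digit SBN/ISBN-10 body: the full
    weighted sum (weights 10 down to 1) must be divisible by 11.
    A remainder of 10 is written as the Roman numeral X."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    r = (11 - total % 11) % 11
    return "X" if r == 10 else str(r)

def isbn13_check_digit(first12):
    """Check digit for a 12-digit ISBN-13 body: digits are
    weighted alternately 1 and 3, and the total including the
    check digit must be divisible by 10."""
    total = sum((1 if i % 2 == 0 else 3) * int(d)
                for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

# The SBN example from the text: prefixing 0 gives body 034001381,
# whose ISBN-10 check digit is still 8.
assert isbn10_check_digit("034001381") == "8"
```

Reissuing the same book under the Bookland prefix 978 changes the weighting scheme, so the ISBN-13 check digit must be recomputed from the new 12-digit body.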
6.
Clopen set
–
In topology, a clopen set in a topological space is a set which is both open and closed. That this is possible may seem counter-intuitive, as the common meanings of open and closed are antonyms, but their mathematical definitions are not mutually exclusive. A set is closed if its complement is open, which leaves the possibility of an open set whose complement is also open, making both sets clopen. In any topological space X, the empty set and the whole space X are both clopen. Now consider the space X which consists of the union of the two open intervals (0,1) and (2,3) of R. The topology on X is inherited as the subspace topology from the ordinary topology on the real line R. In X, the set (0,1) is clopen, as is the set (2,3). This is a quite typical example: whenever a space is made up of a finite number of disjoint connected components in this way, the components will be clopen. As a less trivial example, consider the space Q of all rational numbers with their ordinary topology, and the set A of all positive rational numbers whose square is bigger than 2. Using the fact that √2 is not in Q, one can show quite easily that A is a clopen subset of Q. A topological space X is connected if and only if the only clopen sets are the empty set and X itself. A set is clopen if and only if its boundary is empty. Any clopen set is a union of connected components; if all connected components of X are open, then a set is clopen in X if and only if it is a union of connected components. A topological space X is discrete if and only if all of its subsets are clopen. Using union and intersection as operations, the clopen subsets of a given topological space X form a Boolean algebra, and every Boolean algebra can be obtained in this way from a topological space.
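The rationals example can be made explicit. The following short derivation is not from the text above; it uses the standard description of the set A of positive rationals whose square exceeds 2:

```latex
A \;=\; \{\, q \in \mathbb{Q} : q > 0,\; q^2 > 2 \,\}
  \;=\; \mathbb{Q} \cap \bigl(\sqrt{2},\, \infty\bigr),
\qquad
\mathbb{Q} \setminus A \;=\; \mathbb{Q} \cap \bigl(-\infty,\, \sqrt{2}\bigr).
```

Both sets are intersections of Q with open rays of R, hence open in the subspace topology on Q. The decomposition works only because √2 is irrational: the single boundary point of each ray is missing from Q, so A and its complement are both open, and A is clopen.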
7.
Open set
–
In topology, an open set is an abstract concept generalizing the idea of an open interval in the real line. The defining conditions are very loose, and they allow enormous flexibility in the choice of open sets: in the two extremes, every set can be open, or no set can be open but the space itself and the empty set. In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. The notion of an open set provides a way to speak of nearness of points in a topological space. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness can be defined in terms of them. Each choice of open sets for a space is called a topology. Open sets and the topologies that they comprise are of central importance in point-set topology. Intuitively, an open set provides a method to distinguish two points: for example, if about one point in a topological space there exists an open set not containing another point, the two points are referred to as topologically distinguishable. In this manner, one may speak of whether two subsets of a topological space are near without concretely defining a metric on the topological space. Therefore, topological spaces may be seen as a generalization of metric spaces. In the set of all real numbers, one has the natural Euclidean metric, that is, a function which measures the distance between two real numbers: d(x, y) = |x − y|. Therefore, given a real number x, one can speak of the set of all points close to that real number, that is, within ε of x. In essence, points within ε of x approximate x to an accuracy of degree ε. Note that ε > 0 always, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1). However, with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1.
The previous discussion shows, for the case x = 0, that one may approximate 0 to higher and higher degrees of accuracy by choosing ε smaller and smaller. In particular, sets of the form (−ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use such sets to describe points close to x. If, at the other extreme, the only set used for "measuring distance" were R itself, there would be only one possible degree of accuracy in approximating 0: being a member of R. Thus, we find that in this sense, every real number is distance 0 away from 0. It may help in this case to think of the measure as being a binary condition: all things in R are equally close to 0. In general, one refers to the family of sets containing 0 that are used to approximate 0 as a neighborhood basis. In fact, one may generalize these notions to an arbitrary set, rather than just the real numbers. In this case, given a point x of that set, one may define a collection of sets "around" x, that is, containing x, used to approximate x. Of course, this collection would have to satisfy certain properties, for otherwise we may not have a well-defined method to measure distance.
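The ε-approximation idea above can be sketched directly. This is a minimal illustration, not from the text: membership in the open ε-ball around a center point under the Euclidean metric d(x, y) = |x − y|.

```python
def within_eps(x, center=0.0, eps=1.0):
    """True when x approximates `center` to accuracy eps under
    the Euclidean metric d(x, y) = |x - y|, i.e. when x lies in
    the open interval (center - eps, center + eps)."""
    return abs(x - center) < eps

# Shrinking eps selects better and better approximations of 0,
# mirroring the discussion of eps = 1 versus eps = 0.5 above.
assert within_eps(0.7, eps=1.0)        # 0.7 lies in (-1, 1)
assert not within_eps(0.7, eps=0.5)    # but not in (-0.5, 0.5)
```

In a general metric space, a set is then declared open exactly when every one of its points has some such ε-ball contained entirely in the set; the open sets of a topology abstract this property without reference to any metric.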