1.
Number
–
Numbers that answer the question "how many?" are 0, 1, 2, 3 and so on; when used to indicate position in a sequence they are ordinal numbers. To the Pythagoreans and to the Greek mathematician Euclid, the numbers were 2, 3, 4, 5 and so on: Euclid did not consider 1 to be a number. Numbers like 3 + 1/7 = 22/7, expressible as fractions in which the numerator and denominator are whole numbers, are rational numbers, and these make it possible to measure such quantities as two and a quarter gallons and six and a half miles. What we today would consider a proof that a number is irrational, Euclid called a proof that two lengths arising in geometry have no common measure, or are incommensurable; he included proofs of incommensurability of lengths arising in geometry in his Elements. In the Rhind Mathematical Papyrus, a pair of legs walking forward marked addition. The Chinese were the first known civilization to use negative numbers, which came into widespread use as a result of their utility in accounting and were used by late medieval Italian bankers. By 1740 BC, the Egyptians had a symbol for zero in accounting texts, and in the Maya civilization zero was a numeral in its own right, written as a shell-shaped symbol. The ancient Egyptians represented all fractions in terms of sums of fractions with numerator 1; for example, 2/5 = 1/3 + 1/15. Such representations are known as Egyptian fractions or unit fractions. The earliest written approximations of π are found in Egypt and Babylon. In Babylon, a clay tablet dated 1900–1600 BC has a geometrical statement that, by implication, treats π as 25/8 = 3.1250; in Egypt, the Rhind Papyrus, dated around 1650 BC, has a formula for the area of a circle that treats π as (16/9)² ≈ 3.1605. Astronomical calculations in the Shatapatha Brahmana use the fractional approximation 339/108 ≈ 3.139, and other Indian sources by about 150 BC treat π as √10 ≈ 3.1622. The first references to the constant e were published in 1618 in the table of an appendix of a work on logarithms by John Napier.
However, this did not contain the constant itself, but simply a list of logarithms calculated from the constant; it is assumed that the table was written by William Oughtred. The discovery of the constant itself is credited to Jacob Bernoulli, and the first known use of the constant, represented by the letter b, was in correspondence from Gottfried Leibniz to Christiaan Huygens in 1690 and 1691. Leonhard Euler introduced the letter e as the base for natural logarithms. Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and the first appearance of e in a publication was Euler's Mechanica. While in the subsequent years some researchers used the letter c, e became more common. The first known numeral system is the Babylonian numeral system, which has base 60; it was introduced around 3100 BC and is the first known positional numeral system.
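The Egyptian-fraction representation mentioned above (such as 2/5 = 1/3 + 1/15) can be illustrated with the greedy decomposition later described by Fibonacci; this is a modern reconstruction for illustration, not the Egyptians' own procedure. A minimal Python sketch:

```python
from fractions import Fraction

def egyptian(x: Fraction) -> list[Fraction]:
    """Greedy (Fibonacci-Sylvester) decomposition of 0 < x < 1
    into a sum of distinct unit fractions."""
    parts = []
    while x > 0:
        d = -(-x.denominator // x.numerator)  # ceiling of 1/x
        parts.append(Fraction(1, d))          # take the largest unit fraction <= x
        x -= Fraction(1, d)
    return parts

print(egyptian(Fraction(2, 5)))  # [Fraction(1, 3), Fraction(1, 15)]
```

The greedy method always terminates because the numerator of the remainder strictly decreases at each step.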
2.
Report
–
A report or account is any informational work made with the specific intention of relaying information or recounting certain events in a widely presentable form. Reports are often conveyed in writing, speech, television, or film, and they fill a vast array of informational needs for many of society's important organizations. Reports are used for keeping track of information, which may be used to make decisions. Written reports are documents which present focused, salient content, generally to a specific audience, and are used in government, business, education, and science. Reports use features such as graphics, images, voice, or specialized vocabulary in order to persuade that specific audience to undertake an action. One of the most common formats for presenting reports is IMRAD (Introduction, Methods, Results, and Discussion). This structure is standard for the genre because it mirrors the traditional publication of scientific research and summons the ethos and credibility of that discipline. Reports are not required to follow this pattern, however, and may use alternative patterns like the problem-solution format.
3.
Communication
–
Communication is the act of conveying intended meanings from one entity or group to another through the use of mutually understood signs and semiotic rules. The main steps inherent to all communication are: the forming of communicative motivation or reason; message composition and encoding; transmission of the encoded message as a sequence of signals using a specific channel or medium; the influence of noise sources, such as natural forces and in some cases human activity, on the quality of signals propagating from the sender to one or more receivers; reception of signals and reassembly of the message from the sequence of received signals; decoding of the encoded message; and interpretation and making sense of the original message. The channel of communication can be visual, auditory, tactile and haptic, olfactory, or electromagnetic. Human communication is unique for its extensive use of abstract language, and the development of civilization has been linked with progress in telecommunication. Nonverbal communication describes the process of conveying information in the form of non-linguistic representations; examples include haptic communication, chronemic communication, gestures, body language, facial expressions, eye contact, and how one dresses. Nonverbal communication also relates to the intent of a message: examples of intent are voluntary, intentional movements like shaking a hand or winking, as well as involuntary ones, such as sweating. Speech also contains nonverbal elements known as paralanguage, e.g. rhythm, intonation, and tempo; these affect communication most at the subconscious level and establish trust. Likewise, written texts include nonverbal elements such as handwriting style and the spatial arrangement of words. Nonverbal communication demonstrates one of Watzlawick's laws: you cannot not communicate.
Once proximity has formed awareness, living creatures begin interpreting any signals received. Nonverbal cues are heavily relied on to express communication and to interpret others' communication, and can replace or substitute verbal messages. There are several reasons why non-verbal communication plays a role in communication. Written communication can also have non-verbal attributes: e-mails and web chats give individuals the option to change text font colours, stationery, emoticons, and capitalization in order to capture non-verbal cues in a verbal medium. Many different non-verbal channels are engaged at the same time in communication acts. "Non-verbal behaviours may form a language system": smiling, crying, pointing, and caressing are non-verbal signals that allow the most basic form of communication when verbal communication is not effective due to language barriers. Verbal communication is the spoken or written conveyance of a message.
4.
Measurement
–
Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events. The scope and application of a measurement depend on the context and discipline: in fields such as statistics and the social and behavioral sciences, measurements can have multiple levels, which include nominal, ordinal, interval, and ratio scales. Measurement is a cornerstone of trade, science, and technology. Historically, many measurement systems existed for the varied fields of human existence to facilitate comparisons in these fields. Often these were achieved by local agreements between trading partners or collaborators; since the 18th century, developments progressed towards unifying, widely accepted standards that resulted in the modern International System of Units (SI). This system reduces all physical measurements to a combination of seven base units. The science of measurement is pursued in the field of metrology. The measurement of a property may be categorized by the following criteria: type, magnitude, unit, and uncertainty. These enable unambiguous comparisons between measurements. The type or level of measurement is a taxonomy for the methodological character of a comparison: for example, two states of a property may be compared by ratio, difference, or ordinal preference. The type is commonly not explicitly expressed, but implicit in the definition of a measurement procedure. The magnitude is the value of the characterization, usually obtained with a suitably chosen measuring instrument. A unit assigns a mathematical weighting factor to the magnitude that is derived as a ratio to the property of an artifact used as a standard, or to a natural physical quantity. An uncertainty represents the random and systemic errors of the measurement procedure; errors are evaluated by methodically repeating measurements and considering the accuracy and precision of the measuring instrument.
Measurements most commonly use the International System of Units as a comparison framework; the system defines seven fundamental units: kilogram, metre, candela, second, ampere, kelvin, and mole. Each of these units is now defined by reference to a fixed constant of nature, so a measurement unit can only ever change through increased accuracy in determining the value of the constant it is tied to. An early attempt to tie a unit to a natural standard was Charles Sanders Peirce's measurement of the metre against a wavelength of light; this directly influenced the Michelson–Morley experiment, and Michelson and Morley cite Peirce and improve on his method. With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements: nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of history, however, units of measurement became standardized, first for convenience and then for necessity. Laws regulating measurement were originally developed to prevent fraud in commerce; today such agreements fix units exactly (the international yard, for example, is defined as exactly 0.9144 metres). In the United States, the National Institute of Standards and Technology, a division of the United States Department of Commerce, regulates commercial measurements. Before SI units were adopted around the world, the British systems of English units and later imperial units were used in Britain and the Commonwealth. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries.
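The criteria above (magnitude, unit, uncertainty) can be sketched in Python; the readings below are invented for illustration, and the standard error of the mean is used as one simple way to express uncertainty from repeated measurements:

```python
import statistics

# Hypothetical repeated measurements of one quantity, in metres
readings = [9.78, 9.82, 9.80, 9.79, 9.81]

magnitude = statistics.mean(readings)        # the value of the characterization
spread = statistics.stdev(readings)          # sample standard deviation of the readings
uncertainty = spread / len(readings) ** 0.5  # standard error of the mean

print(f"{magnitude:.3f} m ± {uncertainty:.3f} m")
```

Repeating a measurement and averaging, as the text describes, is exactly what shrinks this uncertainty term.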
5.
Estimation
–
Estimation is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if the input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available. Typically, estimation involves using the value of a statistic derived from a sample to estimate the value of a corresponding population parameter: the sample provides information that can be projected, through formal or informal processes, onto the population. An estimate that turns out to be incorrect will be an overestimate if the estimate exceeded the actual result, and an underestimate if it fell short. Estimation is often done by sampling, which is counting a small number of examples of something and projecting that number onto a larger population. An example of estimation would be determining how many candies of a given size are in a glass jar; estimates can similarly be generated by projecting results from polls or surveys onto the entire population. In making an estimate, the goal is often to generate a range of possible outcomes that is precise enough to be useful but not so precise that it is likely to be inaccurate. A projection intended to pick the single value that is believed to be closest to the actual value is called a point estimate. A corresponding concept is an interval estimate, which captures a much larger range of possibilities. For example, if one were asked to estimate the percentage of people who like candy, a range of 0% to 100% would certainly be correct, but such an estimate would provide no guidance to somebody who is trying to determine how many candies to buy for a party to be attended by a hundred people. In statistics, an estimator is the name for the rule by which an estimate is calculated from data. This process is used in signal processing, for approximating an unobserved signal on the basis of an observed signal containing noise. For estimation of yet-to-be-observed quantities, forecasting and prediction are applied. Estimation is important in business and economics, because too many variables exist to figure out how large-scale activities will develop.
An informal estimate made when little information is available is called a guesstimate. The estimated sign, ℮, is used to designate that package contents are close to the nominal contents.
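The idea of projecting a sample statistic onto a population, and the contrast between a point estimate and an interval estimate, can be sketched in Python. The population size, the 62% figure, and the sample size are all invented for this example:

```python
import random

random.seed(0)
# Hypothetical population of 10,000 people, 62% of whom like candy
population = [True] * 6200 + [False] * 3800

sample = random.sample(population, 400)        # poll 400 people
point_estimate = sum(sample) / len(sample)     # single best guess for the proportion

# Interval estimate: a wider range of plausible values (normal approximation)
se = (point_estimate * (1 - point_estimate) / len(sample)) ** 0.5
interval = (point_estimate - 1.96 * se, point_estimate + 1.96 * se)

print(point_estimate, interval)
```

The point estimate picks one value; the interval trades that sharpness for a statement that is far more likely to contain the true proportion.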
6.
Accuracy and precision
–
Precision is a description of random errors, a measure of statistical variability. The two concepts are independent of each other, so a particular set of data can be accurate, precise, both, or neither. In the fields of science, engineering, and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value. The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. Although the two words precision and accuracy can be synonymous in colloquial use, they are contrasted in the context of the scientific method. A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy: the result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision. A measurement system is considered valid if it is both accurate and precise. Related terms include bias and error. The terminology is also applied to indirect measurements, that is, values obtained by a computational procedure from observed data. Statistical literature prefers to use the terms bias and variability instead of accuracy and precision: bias is the amount of inaccuracy and variability is the amount of imprecision. In military terms, accuracy refers primarily to the accuracy of fire. Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the true value. The accuracy and precision of a measurement process are established by repeatedly measuring some traceable reference standard.
Such standards are defined in the International System of Units and maintained by national standards organizations such as the National Institute of Standards and Technology in the United States. The same considerations apply when measurements are repeated and averaged; further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of the individual measurements. With regard to accuracy we can distinguish the difference between the mean of the measurements and the reference value (the bias), and the combined effect of that bias and precision. Establishing and correcting for bias is necessary for calibration. A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures: when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m, while a recording of 8,436 m would imply a margin of error of 0.5 m.
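The statistical reading of the two terms (bias as inaccuracy, variability as imprecision) can be separated numerically. In this Python sketch the reference value and readings are invented; the instrument is precise (small scatter) but not accurate (large offset):

```python
import statistics

reference_value = 100.0
# Hypothetical readings from an instrument with a systematic offset
readings = [102.1, 101.9, 102.0, 102.2, 101.8]

bias = statistics.mean(readings) - reference_value  # systematic error: inaccuracy
variability = statistics.stdev(readings)            # random scatter: imprecision

print(f"bias = {bias:.3f}, variability = {variability:.3f}")
```

Eliminating the 2.0 offset (calibration) would improve accuracy while leaving the variability, and hence the precision, unchanged.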
7.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5 1⁄2, and √2 are not. The set of integers consists of zero, the positive natural numbers, also called whole numbers or counting numbers, and their additive inverses. This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen ("numbers"). ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the (rational) integers are the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication; that is, the sum and product of any two integers are integers. With the inclusion of the negative natural numbers and, importantly, 0, Z is also closed under subtraction. The integers form the most basic ring, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into that ring. This universal property, namely being an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers (e.g. 1 divided by 2) need not be an integer, and although the natural numbers are closed under exponentiation, the integers are not. The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g. there is no integer x such that 2x = 1, because the left-hand side is even.
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions that are true in any unital commutative ring are true in Z for all values of the variables; note that certain non-zero integers map to zero in certain rings. The lack of zero-divisors in the integers means that the commutative ring Z is an integral domain.
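The closure properties above can be checked directly in Python, whose arbitrary-precision int type models Z:

```python
a, b = 7, -3

# Z is closed under addition, multiplication, and subtraction:
assert isinstance(a + b, int)
assert isinstance(a * b, int)
assert isinstance(a - b, int)

# but not under division, nor under exponentiation with a negative exponent:
assert not isinstance(a / b, int)    # 7 / -3 is not an integer
assert not isinstance(2 ** -1, int)  # 2**-1 == 0.5 lies outside Z

print("closure checks passed")
```

Python signals the failure of closure by changing type: the results of `/` and of a negative power are floats, not ints.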
8.
Square root
–
In mathematics, a square root of a number a is a number y such that y² = a; in other words, a number y whose square is a. For example, 4 and −4 are square roots of 16 because 4² = (−4)² = 16. Every nonnegative real number a has a unique nonnegative square root, called the principal square root, which is denoted by √a, where √ is called the radical sign or radix. For example, the square root of 9 is 3, denoted √9 = 3. The term whose root is being considered is known as the radicand; the radicand is the number or expression underneath the radical sign, in this example 9. Every positive number a has two square roots: √a, which is positive, and −√a, which is negative. Together, these two roots are denoted ±√a. Although the principal square root of a positive number is only one of its two square roots, the designation "the square root" is often used to refer to the principal square root. For positive a, the square root can also be written in exponent notation, as a^(1/2). Square roots of negative numbers can be discussed within the framework of complex numbers. In ancient India, the knowledge of theoretical and applied aspects of the square and square root was at least as old as the Sulba Sutras; a method for finding very good approximations to the square roots of 2 and 3 is given in the Baudhayana Sulba Sutra. Aryabhata, in the Aryabhatiya, gives a method for finding the square root of numbers having many digits. It was known to the ancient Greeks that square roots of positive whole numbers that are not perfect squares are always irrational numbers: numbers not expressible as a ratio of two integers. This is the theorem Euclid X, 9, almost certainly due to Theaetetus, dating back to circa 380 BC. The particular case √2 is assumed to date back earlier to the Pythagoreans and is traditionally attributed to Hippasus. Mahāvīra, a 9th-century Indian mathematician, was the first to state that square roots of negative numbers do not exist. A symbol for square roots, written as an elaborate R, was invented by Regiomontanus.
An R was also used for radix to indicate square roots in Gerolamo Cardano's Ars Magna. According to the historian of mathematics D. E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo in 1546. According to Jeffrey A. Oaks, Arabs used the letter jīm/ĝīm, the first letter of the word jadhr ("root"), to indicate square roots; the letter jīm resembles the present square root shape. Its usage goes as far as the end of the twelfth century in the works of the Moroccan mathematician Ibn al-Yasamin. The symbol √ for the square root was first used in print in 1525, in Christoph Rudolff's Coss.
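The historical approximation methods mentioned above are not reproduced here, but the same goal — a very good approximation to a square root — can be illustrated with Heron's method (repeated averaging, equivalent to Newton's iteration). This is a sketch of that classical technique, not of the Sulba Sutra or Aryabhata procedures themselves:

```python
def heron_sqrt(a: float, iterations: int = 6) -> float:
    """Approximate the principal square root of a > 0 by repeatedly
    averaging a guess y with a / y (Heron's method)."""
    y = a  # any positive starting guess works
    for _ in range(iterations):
        y = (y + a / y) / 2
    return y

print(heron_sqrt(2))  # converges rapidly toward 1.41421356...
```

Each iteration roughly doubles the number of correct digits, so a handful of steps already reaches the limits of double precision.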
9.
Logarithm
–
In mathematics, the logarithm is the inverse operation to exponentiation. That means the logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number. In simple cases the logarithm counts factors in multiplication: for example, the base-10 logarithm of 1000 is 3. The logarithm of x to base b, denoted logb(x), is the unique real number y such that b^y = x. For example, log2(64) = 6, as 64 = 2^6. The logarithm to base 10 is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e as its base; its use is widespread in mathematics and physics. The binary logarithm uses base 2 and is used in computer science. Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations, and they were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century. Logarithmic scales reduce wide-ranging quantities to tiny scopes; for example, the decibel is a unit quantifying signal power log-ratios and amplitude log-ratios, and in chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae and in measurements of the complexity of algorithms; they describe musical intervals, appear in formulas counting prime numbers, inform some models in psychophysics, and can aid in forensic accounting. In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant; it has uses in public-key cryptography. The idea of logarithms is to reverse the operation of exponentiation, that is, raising a number to a power. For example, the third power of 2 is 8, because 8 is the product of three factors of 2: 2³ = 2 × 2 × 2 = 8.
It follows that the logarithm of 8 with respect to base 2 is 3, and the third power of some number b is the product of three factors equal to b. More generally, raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b. The n-th power of b is written bⁿ, so that bⁿ = b × b × ⋯ × b (n factors). Exponentiation may be extended to b^y, where b is a positive number and the exponent y is any real number; for example, b⁻¹ is the reciprocal of b, that is, 1/b. The logarithm of a positive real number x with respect to base b, a positive real number not equal to 1, is the exponent by which b must be raised to yield x.
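The defining relationships above can be checked with Python's math module; the final assertion is the change-of-base identity, which lets any base be computed from natural logarithms:

```python
import math

# log_b(x) is the exponent y with b**y == x
assert abs(math.log10(1000) - 3) < 1e-12  # common (base-10) logarithm
assert abs(math.log2(64) - 6) < 1e-12     # binary (base-2) logarithm: 64 == 2**6
assert abs(math.log(math.e) - 1) < 1e-12  # natural logarithm, base e

# Change of base: log_b(x) = ln(x) / ln(b)
x, b = 81.0, 3.0
assert abs(math.log(x, b) - math.log(x) / math.log(b)) < 1e-12

print("logarithm identities hold")
```

This change-of-base identity is exactly how slide rules and log tables made one set of logarithms serve for every base.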
10.
Sine
–
In mathematics, the sine is a trigonometric function of an angle. The sine of an acute angle is defined in the context of a right triangle: for the specified angle, it is the ratio of the length of the side opposite that angle to the length of the hypotenuse. More generally, the definition of sine can be extended to any real value in terms of the length of a certain line segment in a unit circle. The function sine can be traced to the jyā and koṭi-jyā functions used in Gupta-period Indian astronomy, via translation from Sanskrit to Arabic and then from Arabic to Latin; the word "sine" comes from a Latin mistranslation of the Arabic jiba. To define the trigonometric functions for an acute angle α, start with any right triangle that contains an angle of measure α; in the accompanying figure, angle A in triangle ABC has measure α. The three sides of the triangle are named as follows: the opposite side is the side opposite to the angle of interest; the hypotenuse is the side opposite the right angle, in this case side h (the hypotenuse is always the longest side of a right-angled triangle); and the adjacent side is the remaining side, in this case side b, which forms a side of both the angle of interest and the right angle. Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse. As stated, the value sin(α) appears to depend on the choice of right triangle containing an angle of measure α; however, this is not the case: all such triangles are similar, and so the ratio is the same for each of them. The trigonometric functions can also be defined in terms of the rise, run, and slope of a line segment relative to horizontal. When the length of the line segment is 1, sine takes an angle and tells the rise; more generally, sine takes an angle and tells the rise per unit length of the line segment, so the rise is equal to sin θ multiplied by the length of the line segment. In contrast, cosine is used for telling the run from the angle, and arctan is used for telling the angle from the slope. The line segment is the equivalent of the hypotenuse in the right triangle. In trigonometry, a unit circle is the circle of radius one centered at the origin of the Cartesian coordinate system.
Let a line through the origin, making an angle of θ with the positive half of the x-axis, intersect the unit circle. The x- and y-coordinates of this point of intersection are equal to cos θ and sin θ, respectively; the point's distance from the origin is always 1. Unlike the definitions using the right triangle or slope, the unit-circle definition lets the angle be extended to the full set of real arguments. This can also be achieved by requiring certain symmetries and that sine be a periodic function. Exact identities (in radians) that apply for all values of θ include sin θ = cos(π/2 − θ). The reciprocal of sine is cosecant: the reciprocal of sin θ is csc θ (also written cosec θ), so csc θ = 1/sin θ. Cosecant gives the ratio of the length of the hypotenuse to the length of the opposite side. The inverse function of sine is arcsine or inverse sine.
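The unit-circle definition and the identities above can be verified numerically in Python, here for the familiar case θ = 30°:

```python
import math

theta = math.radians(30)
x, y = math.cos(theta), math.sin(theta)

# (cos θ, sin θ) lies on the circle of radius 1 centred at the origin
assert abs(x * x + y * y - 1) < 1e-12

# sin θ = cos(π/2 − θ), and cosecant is the reciprocal of sine
assert abs(math.sin(theta) - math.cos(math.pi / 2 - theta)) < 1e-12
csc = 1 / math.sin(theta)   # hypotenuse / opposite
assert abs(csc - 2) < 1e-9  # since sin 30° = 1/2

print("unit-circle identities hold")
```

Note that math.sin expects radians, which is why the degree value is converted first.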
11.
Floating-point arithmetic
–
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. For example, 1.2345 = 12345 × 10⁻⁴, with significand 12345, base 10, and exponent −4. The term floating point refers to the fact that a number's radix point can "float": that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A consequence of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s the most commonly encountered representation is that defined by the IEEE 754 Standard. A floating-point unit is a part of a computer system designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits, and there are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the string can be of any length; if the radix point is not specified, then the string implicitly represents an integer. In fixed-point systems, a position in the string is specified for the radix point: a fixed-point scheme might, for example, use a string of 8 decimal digits with the point in the middle. The scaling factor, as a power of ten, is then indicated separately at the end of the number. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: a signed digit string of a given length in a given base.
This digit string is referred to as the significand, mantissa, or coefficient; the length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit; this article generally follows the convention that the radix point is set just after the most significant digit. The second component is a signed integer exponent, which modifies the magnitude of the number. Using base 10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it is the same for the entire range of supported numbers. Symbolically, this value is s ÷ b^(p−1) × b^e, where s is the significand, p is the precision (the number of digits in the significand), b is the base, and e is the exponent.
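Python floats follow the IEEE 754 binary64 format, so the significand/exponent decomposition and the non-uniform spacing described above can be observed directly (math.ulp requires Python 3.9+):

```python
import math

# Decompose a float into significand m and exponent e: value == m * 2**e
m, e = math.frexp(6.5)
assert m == 0.8125 and e == 3
assert m * 2 ** e == 6.5

# The base-10 example from the text: 1.2345 = 12345 × 10^-4
assert abs(12345 * 10 ** -4 - 1.2345) < 1e-15

# Representable numbers are not uniformly spaced: the gap between
# adjacent floats (one "unit in the last place") grows with magnitude
assert math.ulp(1.0) < math.ulp(1e6)

print("IEEE 754 decomposition checks passed")
```

math.frexp normalizes the significand into [0.5, 1), one of several equivalent conventions for where to place the radix point.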
12.
Quantization (signal processing)
–
Quantization, in mathematics and digital signal processing, is the process of mapping a large set of input values to a smaller set. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding; quantization also forms the core of essentially all lossy compression algorithms. The difference between an input value and its quantized value is referred to as quantization error. A device or algorithmic function that performs quantization is called a quantizer; an analog-to-digital converter (ADC) is an example of a quantizer. Because quantization is a many-to-few mapping, it is an inherently non-linear and irreversible process. The set of input values may be infinitely large, and may possibly be continuous; the set of output values may be finite or countably infinite. The input and output involved in quantization can be defined in a rather general way: for example, vector quantization is the application of quantization to multi-dimensional input data. Outside the realm of signal processing, this category may simply be called rounding or scalar quantization. An ADC can be modeled as two processes: sampling and quantization. Sampling converts a voltage signal into a discrete-time signal, and quantization replaces each real number with an approximation from a set of discrete values. Most commonly, these values are represented as fixed-point words or floating-point words; common word lengths are 8-bit, 16-bit, 32-bit, and so on. Quantizing a sequence of numbers produces a sequence of quantization errors, which is sometimes modeled as an additive random signal called quantization noise because of its stochastic behavior. The more levels a quantizer uses, the lower is its quantization noise power.
In general, both ADC processes lose some information, so the discrete-valued signal is only an approximation of the continuous-valued discrete-time signal, which is itself only an approximation of the original continuous-valued continuous-time signal. Both types of error can, in theory, be made arbitrarily small by good design. In the second setting, lossy data compression, the amount of introduced distortion may be managed carefully by sophisticated techniques, and a quantizer designed for this purpose may be quite different and more elaborate in design than an ordinary rounding operation. It is in this domain that substantial rate–distortion theory analysis is likely to be applied; however, the same concepts actually apply in both use cases.
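A minimal uniform quantizer in Python shows rounding as quantization and the bounded error it introduces; the sample values and step size are invented for illustration:

```python
def quantize(x: float, step: float) -> float:
    """Uniform quantizer: map x to the nearest multiple of step."""
    return step * round(x / step)

signal = [0.12, 0.47, -0.83, 0.99]   # hypothetical sample values
step = 0.25                           # quantization step size
quantized = [quantize(s, step) for s in signal]
errors = [q - s for q, s in zip(quantized, signal)]

# Quantization error is bounded by half a step in magnitude
assert all(abs(err) <= step / 2 for err in errors)
print(quantized)  # [0.0, 0.5, -0.75, 1.0]
```

Halving the step (i.e. doubling the number of levels) halves the error bound, which is the sense in which more levels mean lower quantization noise power.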
13.
Digital signal (signal processing)
–
A digital signal is a discrete-time signal whose values are drawn from a discrete set. If that discrete set is finite, the discrete values can be represented with digital words of a finite width. Most commonly, these values are represented as fixed-point words or floating-point words. The process of analog-to-digital conversion produces a digital signal. Common practical digital signals are represented with 8-bit, 16-bit, 24-bit and 32-bit words, but the number of levels is not necessarily limited to powers of two.
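The relation between word width and level count is simply 2 raised to the number of bits; a quick Python check:

```python
# An n-bit word can represent 2**n distinct quantization levels
for bits in (8, 16, 24, 32):
    print(f"{bits}-bit: {2 ** bits} levels")

assert 2 ** 8 == 256          # e.g. 8-bit audio has 256 levels
assert 2 ** 16 == 65536       # 16-bit (CD audio) has 65,536 levels
```

As the text notes, hardware is not obliged to use all of them: a converter may expose fewer levels than its word width allows.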
14.
Equals sign
–
The equals sign or equality sign (=) is a mathematical symbol used to indicate equality. It was invented in 1557 by Robert Recorde. In an equation, the equals sign is placed between two expressions that have the same value. In Unicode and ASCII it is U+003D = EQUALS SIGN. The etymology of the word equal is from the Latin word æqualis, meaning uniform, identical, or equal, from aequus. The = symbol that is now accepted in mathematics for equality was first recorded by the Welsh mathematician Robert Recorde in The Whetstone of Witte. The original form of the symbol was much wider than the present form. Recorde explained his choice (in modernized spelling): "to avoid the tedious repetition of these words: is equal to: I will set … a pair of parallels, or Gemowe lines, of one length … because no two things can be more equal." According to the University of St Andrews (Scotland) History of Mathematics website, the symbol || was used by some, and æ, from the Latin word aequalis meaning equal, was widely used into the 1700s. In mathematics, the sign can be used as a simple statement of fact in a specific case, or to create definitions and conditional statements. The first important computer programming language to use the sign was the original version of Fortran, FORTRAN I, designed in 1954. In Fortran, = serves as an assignment operator: X = 2 sets the value of X to 2. This somewhat resembles the use of = in a definition, but with different semantics; for example, the assignment X = X + 2 increases the value of X by 2. A rival programming-language usage was pioneered by the original version of ALGOL, which was designed in 1958 and implemented in 1960. ALGOL included a relational operator that tested for equality, allowing constructions like if x = 2 with essentially the same meaning of = as the usage in mathematics; the equals sign was reserved for this usage. Both usages have remained common in different programming languages into the early 21st century.
As well as Fortran, = is used for assignment in such languages as C, Perl, Python, and awk, but = is used for equality, and not assignment, in the Pascal family, Ada, Eiffel, APL, and other languages. A few languages, such as BASIC and PL/I, have used the sign to mean both assignment and equality, distinguished by context. However, in most languages where = has one of these meanings, a different character, or more often a sequence of characters, is used for the other meaning. Following ALGOL, most languages that use = for equality use := for assignment, although APL, with its special character set, uses a left-pointing arrow. Fortran did not have an equality operator until FORTRAN IV was released in 1962. The language B introduced the use of == with this meaning, which has been copied by its descendant C and most later languages where = means assignment.
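The two rival usages can be seen side by side in Python, which follows the C/B convention of = for assignment and == for the equality test:

```python
x = 2          # "=" as an assignment operator, the Fortran-style usage: X = 2
x = x + 2      # re-assignment: increases the value of x by 2
print(x == 4)  # "==" tests equality, the ALGOL-style relational usage -> True
```

In a Pascal-family language the same pair would be written with := for the assignment and a bare = for the comparison.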
15.
Alfred George Greenhill
–
Sir George Greenhill, F.R.S., was a British mathematician. George Greenhill was educated at Christ's Hospital School and from there went up to St John's College, Cambridge in 1866. In 1876, Greenhill was appointed professor of mathematics at the Royal Military Academy at Woolwich, London, UK. He held this chair until his retirement in 1908. His 1892 textbook on applications of elliptic functions is of acknowledged excellence, and he was one of the leading experts on applications of elliptic integrals in electromagnetic theory. In 1879, Greenhill developed a rule of thumb for calculating the optimal twist rate for lead-core bullets; this shortcut uses the bullet's length, needing no allowances for weight or nose shape. Greenhill applied this theory to account for the steadiness of flight conferred upon an elongated projectile by rifling. The eponymous Greenhill formula, still used today, is

    Twist = (C × D²) / L × √(SG / 10.9)

where Twist is the twist rate in inches per turn, C = 150, D is the bullet's diameter in inches, L is the bullet's length in inches, and SG is the specific gravity of the bullet material (10.9 for a lead core). This works for velocities up to about 840 m/s; above those velocities, a C of 180 should be used. For instance, with a velocity of 600 m/s, a diameter of 0.5 inches and a length of 1.5 inches, the Greenhill formula would give a value of 25.
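A short sketch of the rule of thumb, assuming the standard form Twist = C·D²/L·√(SG/10.9) with C = 150 below roughly 840 m/s and 180 above, and SG = 10.9 for a lead core (the function name and parameters are illustrative, not from any library):

```python
import math

def greenhill_twist(diameter_in, length_in, specific_gravity=10.9, velocity_ms=None):
    """Greenhill's rule of thumb: twist (inches per turn) = C*D^2/L * sqrt(SG/10.9).

    C = 150 for muzzle velocities up to about 840 m/s, 180 above that.
    """
    c = 180 if (velocity_ms is not None and velocity_ms > 840) else 150
    return c * diameter_in ** 2 / length_in * math.sqrt(specific_gravity / 10.9)

# The example from the text: 600 m/s, 0.5 in diameter, 1.5 in length, lead core.
print(greenhill_twist(0.5, 1.5, velocity_ms=600))  # -> 25.0
```

Note how the bullet's weight and nose shape never appear: only diameter, length, and material density enter the formula.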
16.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variable is sometimes referred to as the argument of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function: some functions may be defined by a formula or algorithm that tells how to compute the output for a given input; others are given by a picture, called the graph of the function; in science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could also be described implicitly, for example as the inverse of another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word range is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers; the image of this function is then the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, and multiplication of functions; another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape has more than one color. This function will be referred to as the color-of-the-shape function. The input to a function is called the argument and the output is called the value.
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers; the function associates to any natural number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
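The two numeric examples above translate directly into code; the names `square` and `g` are illustrative:

```python
def square(x):
    """Relates each real number x to its square: exactly one output per input."""
    return x * x

def g(n):
    """The second example: natural numbers to integers, n -> 4 - n."""
    return 4 - n

print(square(-3), square(3))  # -> 9 9  (two different inputs may share one output)
print(g(1), g(10))            # -> 3 -6
```

The pair square(−3) = square(3) = 9 shows that "exactly one output per input" does not forbid two inputs from sharing an output.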
17.
Metric (mathematics)
–
In mathematics, a metric or distance function is a function that defines a distance between each pair of elements of a set. A set with a metric is called a metric space. A metric induces a topology on a set, but not all topologies can be generated by a metric; a topological space whose topology can be described by a metric is called metrizable. An important source of metrics in differential geometry are metric tensors, bilinear forms that map pairs of tangent vectors of a differentiable manifold to a scalar. A metric tensor allows distances along curves to be determined through integration; however, not every metric comes from a metric tensor in this way. A metric d on a set X is required to satisfy, for all x, y, z in X:

1. d(x, y) ≥ 0 (non-negativity);
2. d(x, y) = 0 if and only if x = y (identity of indiscernibles);
3. d(x, y) = d(y, x) (symmetry);
4. d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).

The first condition is implied by the others. For sets on which an addition + : X × X → X is defined, d is called a translation-invariant metric if d(x + a, y + a) = d(x, y) for all x, y and a in X. These conditions express intuitive notions about the concept of distance: for example, that the distance between distinct points is positive and that the distance from x to y is the same as the distance from y to x. The triangle inequality means that the distance from x to z via y is at least as great as the distance from x to z directly. Euclid in his work stated that the shortest distance between two points is a line; that was the triangle inequality for his geometry. If a modified triangle inequality 4*. d(x, y) ≤ d(x, z) + d(y, z) is used in the definition, then property 1 follows straight from property 4*; properties 2 and 4* give property 3, which in turn gives property 4. The discrete metric: d(x, y) = 0 if x = y, and d(x, y) = 1 otherwise. The Euclidean metric is translation and rotation invariant; the taxicab metric is translation invariant. More generally, any metric induced by a norm is translation invariant. If (pₙ) is a sequence of seminorms defining a (locally convex) topological vector space E, then d(x, y) = Σₙ 2⁻ⁿ pₙ(x − y) / (1 + pₙ(x − y)) is a metric defining the same topology. Graph metric: a metric defined in terms of distances in a certain graph.
The Hamming distance in coding theory. Riemannian metric: a type of metric function that is appropriate to impose on any differentiable manifold. For any such manifold, one chooses at each point p a symmetric, positive-definite bilinear form L : Tp × Tp → ℝ on the tangent space Tp at p; a smooth manifold equipped with a Riemannian metric is called a Riemannian manifold. The Fubini–Study metric on complex projective space is an example of a Riemannian metric. String metrics, such as the Levenshtein distance and other string edit distances; graph edit distance defines a distance function between graphs. For a given set X, two metrics d₁ and d₂ are called equivalent if the identity mapping id : (X, d₁) → (X, d₂) is a homeomorphism.
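The metric axioms can be checked mechanically on a finite sample of points. The sketch below (the helper name `is_metric` is illustrative) verifies two of the examples from the text, the discrete metric and the taxicab metric:

```python
def is_metric(d, points, tol=1e-12):
    """Check the metric axioms for d on a finite sample of points."""
    for x in points:
        if abs(d(x, x)) > tol:
            return False                        # d(x, x) must be 0
        for y in points:
            if x != y and d(x, y) <= 0:
                return False                    # positivity for distinct points
            if abs(d(x, y) - d(y, x)) > tol:
                return False                    # symmetry
            for z in points:
                if d(x, z) > d(x, y) + d(y, z) + tol:
                    return False                # triangle inequality
    return True

discrete = lambda x, y: 0 if x == y else 1
taxicab = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
print(is_metric(discrete, [1, 2, 3]))                 # -> True
print(is_metric(taxicab, [(0, 0), (1, 2), (3, 1)]))   # -> True
```

Of course a finite check cannot prove the axioms for an infinite set; it can only refute them or lend support on the sampled points.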
18.
Range (mathematics)
–
In mathematics, and more specifically in naive set theory, the range of a function refers to either the codomain or the image of the function, depending upon usage; modern usage almost always uses range to mean image. The codomain of a function is some arbitrary set: in real analysis it is the real numbers, and in complex analysis it is the complex numbers. The image of a function is the set of all outputs of the function; the image is always a subset of the codomain. As the term range can have different meanings, it is considered good practice to define it the first time it is used in a textbook or article. Older books, when they use the word range, tend to use it to mean what is now called the codomain; more modern books, if they use the word range at all, generally use it to mean what is now called the image. To avoid any confusion, a number of modern books don't use the word range at all. As an example of the two different usages, consider the function f(x) = x² as it is used in real analysis, that is, as a function that inputs a real number and outputs its square. In this case, its codomain is the set of real numbers R, but its image is the set of non-negative real numbers R⁺. For this function, if we use range to mean codomain, it refers to R; when we use range to mean image, it refers to R⁺. As an example where the range equals the codomain, consider the function f(x) = 2x, which inputs a real number and outputs its double. For this function the codomain and the image are the same, so the range is unambiguous. When range is used to mean codomain, the image of a function f is already implicitly defined: it is the subset of the range consisting of the values f actually attains. When range is used to mean image, the range of a function f is by definition the set of all its output values; in this case the codomain of f need not be specified. In both cases, image f ⊆ range f ⊆ codomain f, with at least one of the containments being equality.
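The codomain-versus-image distinction for f(x) = x² can be made concrete on a finite sample of the domain:

```python
# f(x) = x^2, with the codomain declared to be all real numbers.
domain_sample = [-2, -1, 0, 1, 2]
image = {x ** 2 for x in domain_sample}  # the outputs actually attained
print(image)  # -> {0, 1, 4}: a proper subset of the declared codomain
```

No negative number appears in the image, which is why "range as image" (R⁺) and "range as codomain" (R) disagree for this function.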
19.
Domain of a function
–
In mathematics, and more specifically in naive set theory, the domain of definition of a function is the set of input or argument values for which the function is defined. That is, the function provides an output or value for each member of the domain. Conversely, the set of values the function takes on as output is termed the image of the function, which is sometimes also referred to as the range of the function. For instance, the domain of cosine is the set of all real numbers. If the domain of a function is a subset of the real numbers and the function is represented in a Cartesian coordinate system, then the domain is represented on the x-axis. Given a function f : X → Y, the set X is the domain of f; in the expression f(x), x is the argument and f(x) is the value. One can think of an argument as a member of the domain that is chosen as an input to the function. The image of f is the set of all values assumed by f(x) for all possible x. The image of f can be the same set as the codomain, or it can be a proper subset of it; it is, in general, smaller than the codomain, and it is the whole codomain if and only if f is surjective. A well-defined function must map every element of its domain to an element of its codomain. For example, the function f defined by f(x) = 1/x has no value for f(0); thus, the set of all real numbers, R, cannot be its domain. In cases like this, the function is either defined on R \ {0}, or the gap is plugged by explicitly defining f(0). If we extend the definition of f to f(x) = 1/x for x ≠ 0 and f(0) = 0, then f is defined for all real numbers. Any function can be restricted to a subset of its domain; the restriction of g : A → B to S, where S ⊆ A, is written g|S : S → B. The natural domain of a function is the set of values for which the function is defined, typically within the reals.
For instance, the natural domain of the square root is the non-negative reals when considered as a real-number function. When considering a natural domain, the set of possible values of the function is typically called its range. There are two meanings in current mathematical usage for the notion of the domain of a partial function from X to Y, i.e. a function from a subset X′ of X to Y. Most mathematicians, including recursion theorists, use the term domain of f for the set X′ of all values x such that f(x) is defined. But some, particularly category theorists, consider the domain to be X. In category theory one deals with morphisms instead of functions; morphisms are arrows from one object to another, and the domain of any morphism is the object from which the arrow starts. In this context, many set-theoretic ideas about domains must be abandoned, or at least formulated more abstractly. For example, the notion of restricting a morphism to a subset of its domain must be modified.
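The two domain repairs discussed above, plugging a gap and restricting to a natural domain, look like this in code (the function names are illustrative):

```python
def f(x):
    """1/x with the gap at 0 plugged by explicitly defining f(0) = 0."""
    return 1 / x if x != 0 else 0

def sqrt_natural(x):
    """Real square root restricted to its natural domain, the non-negative reals."""
    if x < 0:
        raise ValueError("outside the natural domain of the real square root")
    return x ** 0.5

print(f(4), f(0))         # -> 0.25 0
print(sqrt_natural(9.0))  # -> 3.0
```

`f` enlarges the domain to all of R by decree; `sqrt_natural` instead rejects inputs outside the natural domain, which is the programming analogue of defining the function only on R⁺.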
20.
Cardinality
–
In mathematics, the cardinality of a set is a measure of the number of elements of the set. For example, a set containing three elements has cardinality 3. There are two approaches to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers. The cardinality of a set is also called its size, when no confusion with other notions of size is possible. The cardinality of a set A is usually denoted |A|, with a vertical bar on each side; this is the same notation as absolute value. Alternatively, the cardinality of a set A may be denoted by n(A) or card(A). While the cardinality of a finite set is just the number of its elements, extending the notion to infinite sets usually starts with defining the notion of comparison of arbitrary sets. Two sets A and B have the same cardinality if there exists a bijection between them; such sets are said to be equipotent, equipollent, or equinumerous. This relationship can also be denoted A ≈ B or A ~ B. For example, the set E of non-negative even numbers has the same cardinality as the set N of natural numbers, since the function f(n) = 2n is a bijection from N to E. A has cardinality less than or equal to the cardinality of B if there exists an injective function from A into B; A has cardinality strictly less than the cardinality of B if there is an injective, but no bijective, function from A to B. If |A| ≤ |B| and |B| ≤ |A| then |A| = |B|. The axiom of choice is equivalent to the statement that |A| ≤ |B| or |B| ≤ |A| for every A, B. Historically, the cardinality of a set was not defined as an object itself; however, such an object can be defined as follows. The relation of having the same cardinality is called equinumerosity, and this is an equivalence relation on the class of all sets. The equivalence class of a set A under this relation then consists of all sets which have the same cardinality as A. There are two ways to define the cardinality of a set: the cardinality of a set A may be defined as its equivalence class under equinumerosity.
A representative set is designated for each equivalence class; the most common choice is the initial ordinal in that class. This is usually taken as the definition of cardinal number in axiomatic set theory. Assuming the axiom of choice, the cardinalities of the infinite sets are denoted ℵ₀ < ℵ₁ < ℵ₂ < …. For each ordinal α, ℵ(α+1) is the least cardinal number greater than ℵα.
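The bijection f(n) = 2n from the text can be exhibited on a finite window of N; only a window, since no program can enumerate an infinite set, but the pairing rule itself is total:

```python
# f(n) = 2n pairs every natural number with a distinct non-negative even
# number, and every such even number is hit, so N and E are equinumerous.
window = range(10)                  # a finite window onto N
evens = [2 * n for n in window]     # its image under f
print(evens)  # -> [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

A proper subset having the same cardinality as the whole set is exactly what distinguishes infinite sets from finite ones.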
21.
Symmetry in mathematics
–
Symmetry occurs not only in geometry, but also in other branches of mathematics. Symmetry is a type of invariance: the property that something does not change under a set of transformations. Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases; for example, if X is a set with no additional structure, a symmetry is a bijective map from the set to itself. In general, every kind of structure in mathematics will have its own kind of symmetry. The types of symmetry considered in basic geometry are described more fully in the main article on symmetry. Let f be a real-valued function of a real variable. Then f is even if f(−x) = f(x) holds for all x. Geometrically speaking, the graph of an even function is symmetric with respect to the y-axis. Examples of even functions are |x|, x², x⁴, and cos(x). Again, let f be a real-valued function of a real variable. Then f is odd if −f(x) = f(−x) holds for all x and −x in the domain of f. Geometrically, the graph of an odd function has rotational symmetry with respect to the origin. Examples of odd functions are x, x³, sin(x), sinh(x), and erf(x). The integral of an odd function from −A to +A is zero; the integral of an even function from −A to +A is twice the integral from 0 to +A. The Maclaurin series of an even function includes only even powers, and the Maclaurin series of an odd function includes only odd powers. The Fourier series of an even function includes only cosine terms; the Fourier series of an odd function includes only sine terms. In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if A = Aᵀ, and, because the definition of matrix equality demands equality of their dimensions, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal: if the entries are written as A = (aᵢⱼ), then aᵢⱼ = aⱼᵢ.
Every square diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.
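The three symmetry notions above, even functions, odd functions, and symmetric matrices, can each be tested numerically; the sample points and the example matrix below are illustrative choices:

```python
def is_even(f, xs, tol=1e-12):
    """f(-x) = f(x) on the sample points xs."""
    return all(abs(f(-x) - f(x)) <= tol for x in xs)

def is_odd(f, xs, tol=1e-12):
    """-f(x) = f(-x) on the sample points xs."""
    return all(abs(f(-x) + f(x)) <= tol for x in xs)

def is_symmetric(a):
    """True if the square matrix a equals its transpose (a[i][j] == a[j][i])."""
    n = len(a)
    return all(a[i][j] == a[j][i] for i in range(n) for j in range(n))

xs = [0.5, 1.0, 2.0]
print(is_even(lambda x: x ** 2, xs), is_odd(lambda x: x ** 3, xs))  # -> True True
print(is_symmetric([[1, 7, 3], [7, 4, 5], [3, 5, 6]]))              # -> True
```

As with any finite sampling, passing the check supports, but does not prove, the symmetry over the whole domain.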
22.
Discrete mathematics
–
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous; it therefore excludes topics in mathematics such as calculus. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term discrete mathematics. Indeed, discrete mathematics is described less by what is included than by what is excluded: continuously varying quantities and related notions. The set of objects studied in discrete mathematics can be finite or infinite; the term finite mathematics is sometimes applied to parts of the field that deal with finite sets. Research in discrete mathematics increased partly in response to the development of digital computers, and conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from continuous mathematics are often employed as well. In university curricula, Discrete Mathematics appeared in the 1980s, initially as a computer science support course; some high-school-level discrete mathematics textbooks have appeared as well, and at this level discrete mathematics is often seen as a preparatory course. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics. The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852. In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible, at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution.
In 1970, Yuri Matiyasevich proved that this could not be done. At the same time, military requirements motivated advances in operations research. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades; operations research remained important as a tool in business and project management, with the critical path method being developed in the 1950s. The telecommunication industry has also motivated advances in discrete mathematics, particularly in graph theory. Formal verification of statements in logic has been necessary for the development of safety-critical systems, and computational geometry has been an important part of the computer graphics incorporated into modern video games. Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP.
23.
Monotonic function
–
In mathematics, a monotonic function is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus and was later generalized to the more abstract setting of order theory. In calculus, a function f defined on a subset of the real numbers with real values is called monotonic if it is either entirely non-increasing or entirely non-decreasing. That is, as per Fig. 1, a function that increases monotonically does not exclusively have to increase; it simply must not decrease. A function is called monotonically increasing if for all x and y such that x ≤ y one has f(x) ≤ f(y), so f preserves the order. Likewise, a function is called monotonically decreasing if, whenever x ≤ y, then f(x) ≥ f(y). If the order ≤ in the definition of monotonicity is replaced by the strict order <, then one obtains a stronger requirement; a function with this property is called strictly increasing. Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing. The terms non-decreasing and non-increasing should not be confused with the negative qualifications "not decreasing" and "not increasing". For example, the function of figure 3 first falls, then rises, then falls again; it is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing. The term monotonic transformation can also cause some confusion, because it refers to a transformation by a strictly increasing function. Notably, this is the case in economics with respect to the properties of a utility function being preserved across a monotonic transform. A function f is said to be absolutely monotonic over an interval if the derivatives of all orders of f are all nonnegative, or all nonpositive, at all points on the interval. A monotonic function f can only have jump discontinuities, and only countably many discontinuities in its domain; the discontinuities, however, do not necessarily consist of isolated points. These properties are the reason why monotonic functions are useful in technical work in analysis.
In addition, this result cannot be improved to countable: see the Cantor function. If f is a monotonic function defined on an interval, then f is Riemann integrable. An important application of monotonic functions is in probability theory: if X is a random variable, its cumulative distribution function F_X(x) = Prob(X ≤ x) is a monotonically increasing function. A function is unimodal if it is monotonically increasing up to some point and monotonically decreasing after it. When f is a strictly monotonic function, then f is injective on its domain, and if T is the range of f, then there is an inverse function on T for f. A map f : X → Y is said to be monotone if each of its fibers is connected, i.e. for each element y in Y the set f⁻¹(y) is connected. In functional analysis, a subset G of X × X∗ is said to be a monotone set if for every pair [x₁, w₁] and [x₂, w₂] in G, ⟨w₁ − w₂, x₁ − x₂⟩ ≥ 0. G is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion.
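The calculus definition of "monotonically increasing" can be checked on a sorted sample of points; as elsewhere, a finite check only samples the definition:

```python
def is_monotonically_increasing(f, xs):
    """Check f(x) <= f(y) whenever x <= y, over a finite sample xs."""
    ys = [f(x) for x in sorted(xs)]
    return all(a <= b for a, b in zip(ys, ys[1:]))

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(is_monotonically_increasing(lambda x: x ** 3, xs))  # -> True  (strictly increasing)
print(is_monotonically_increasing(lambda x: x ** 2, xs))  # -> False (falls, then rises)
```

Note that a constant function also passes the check: monotonically increasing means "must not decrease", not "must increase".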
24.
Pi
–
The number π is a mathematical constant, the ratio of a circle's circumference to its diameter, commonly approximated as 3.14159. It has been represented by the Greek letter π since the mid-18th century. Being an irrational number, π cannot be expressed exactly as a fraction; still, fractions such as 22/7 and other rational numbers are commonly used to approximate π. The digits appear to be randomly distributed; in particular, the digit sequence of π is conjectured to satisfy a specific kind of statistical randomness, but to date no proof of this has been discovered. Also, π is a transcendental number, i.e. a number that is not the root of any non-zero polynomial having rational coefficients. This transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. Ancient civilizations required fairly accurate computed values for π for practical reasons. It was calculated to seven digits, using geometrical techniques, in Chinese mathematics. The extensive calculations involved have also been used to test supercomputers. Because its definition relates to the circle, π is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses, and spheres. Because of its special role as an eigenvalue, π appears in many areas of mathematics; it is also found in cosmology, thermodynamics, and mechanics. Attempts to memorize the value of π with increasing precision have led to records of over 70,000 digits. In English, π is pronounced as "pie". In mathematical use, the lowercase letter π is distinguished from its capitalized and enlarged counterpart ∏, which denotes a product of a sequence, analogous to how ∑ denotes summation. π is commonly defined as the ratio of a circle's circumference C to its diameter d: π = C/d. The ratio C/d is constant, regardless of the circle's size.
For example, if a circle has twice the diameter of another circle, it will also have twice the circumference, preserving the ratio C/d. This definition of π implicitly makes use of flat (Euclidean) geometry; although the notion of a circle can be extended to any curved geometry, such circles need no longer satisfy the formula π = C/d. Here, the circumference of a circle is the arc length around the perimeter of the circle, a quantity which can be defined independently of geometry using limits. An integral of this kind was adopted as the definition of π by Karl Weierstrass; definitions of π such as these, which rely on a notion of circumference, and hence implicitly on concepts of the integral calculus, are no longer common in the literature. One such definition, due to Richard Baltzer and popularized by Edmund Landau, is the following: π is twice the smallest positive number at which the cosine function equals 0. The cosine can be defined independently of geometry as a power series, or as the solution of a differential equation.
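π can be computed to high precision from elementary series alone. One classical illustration (not one of the definitions discussed above) is Machin's 1706 formula π = 16·arctan(1/5) − 4·arctan(1/239), with arctan evaluated by its Taylor series:

```python
def arctan_series(x, terms=50):
    """Taylor series arctan(x) = x - x^3/3 + x^5/5 - ...  (valid for |x| < 1)."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
pi_approx = 16 * arctan_series(1 / 5) - 4 * arctan_series(1 / 239)
print(pi_approx)  # agrees with 3.14159265358979... to double precision
```

Because 1/5 and 1/239 are small, the series converge very quickly, which is why formulas of this type were long used for record digit computations.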
25.
Rational number
–
In mathematics, a rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is a rational number. The decimal expansion of a rational number always either terminates after a finite number of digits or begins to repeat the same finite sequence of digits over and over. Moreover, any repeating or terminating decimal represents a rational number. These statements hold true not just for base 10, but also for any other integer base. A real number that is not rational is called irrational; irrational numbers include √2, π, e, and φ. The decimal expansion of an irrational number continues without repeating. Since the set of rational numbers is countable, and the set of real numbers is uncountable, almost all real numbers are irrational. Rational numbers can be defined as equivalence classes of pairs of integers (p, q) with q ≠ 0, for the equivalence relation defined by (p₁, q₁) ~ (p₂, q₂) if p₁q₂ = p₂q₁. The rational numbers together with addition and multiplication form a field which contains the integers and is contained in any field containing the integers. Finite extensions of Q are called algebraic number fields, and the algebraic closure of Q is the field of algebraic numbers. In mathematical analysis, the rational numbers form a dense subset of the real numbers; the real numbers can be constructed from the rational numbers by completion, using Cauchy sequences or Dedekind cuts. The term rational in reference to the set Q refers to the fact that a rational number represents a ratio of two integers. In mathematics, rational is often used as a noun abbreviating rational number; the adjective rational sometimes means that the coefficients are rational numbers. However, a rational curve is not a curve defined over the rationals, but a curve which can be parameterized by rational functions. Any integer n can be expressed as the rational number n/1. Two fractions are equal, a/b = c/d, if and only if ad = bc. Where both denominators are positive, a/b < c/d if and only if ad < bc.
If either denominator is negative, the fractions must first be converted into equivalent forms with positive denominators, through the equations −a/−b = a/b and a/−b = −a/b. Two fractions are added as follows: a/b + c/d = (ad + bc)/bd, and subtracted similarly: a/b − c/d = (ad − bc)/bd. The rule for multiplication is a/b · c/d = ac/bd. Where c ≠ 0, a/b ÷ c/d = ad/bc; note that division is equivalent to multiplying by the reciprocal of the divisor fraction: ad/bc = a/b × d/c. Additive and multiplicative inverses exist in the rational numbers: −(a/b) = (−a)/b = a/(−b), and (a/b)⁻¹ = b/a if a ≠ 0.
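Exact rational arithmetic following exactly these rules is available in Python's standard library via `fractions.Fraction`, which also normalizes signs and reduces to lowest terms automatically:

```python
from fractions import Fraction

a, b = Fraction(1, 2), Fraction(1, 3)
print(a + b)             # -> 5/6   (a/b + c/d = (ad + bc)/bd)
print(a * b)             # -> 1/6   (a/b * c/d = ac/bd)
print(a / b)             # -> 3/2   (multiply by the reciprocal of the divisor)
print(Fraction(-1, -2))  # -> 1/2   (-a/-b = a/b: signs are normalized)
```

Because every value stays a ratio of two integers, there is no rounding error, unlike binary floating point.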
26.
Decimal
–
This article aims to be an accessible introduction; for the mathematical definition, see Decimal representation. The decimal numeral system has ten as its base, which, in decimal, is written 10, as is the base in every positional numeral system. It is the base most widely used by modern civilizations. Decimal fractions have terminating decimal representations, whereas other fractions have repeating decimal representations. Decimal notation is the writing of numbers in a base-ten numeral system. Examples are Brahmi numerals, Greek numerals, Hebrew numerals, and Roman numerals. Roman numerals have symbols for the decimal powers and secondary symbols for half these values. Brahmi numerals have symbols for the nine numbers 1–9, the nine decades 10–90, plus a symbol for 100. Chinese numerals have symbols for 1–9, and additional symbols for powers of ten, which in modern usage reach 10⁷². Positional decimal systems include a zero and use symbols for the ten values to represent any number; positional notation uses positions for each power of ten: units, tens, hundreds, thousands, etc. The position of each digit within a number denotes the power of ten multiplied by that digit; each position has a value ten times that of the position to its right. There were at least two independent sources of positional decimal systems in ancient civilization, one of them the Chinese counting rod system. Ten is the number which is the count of fingers and thumbs on both hands; the English word digit, as well as its translation in many languages, is also the anatomical term for fingers and toes. In English, decimal means tenth and decimate means reduce by a tenth. However, the symbols used in different areas are not identical; for instance, Western Arabic numerals differ from the forms used by other Arab cultures. A decimal fraction is a fraction whose denominator is a power of ten: e.g. the decimal fractions 8/10, 1489/100, 24/100000, and 58900/10000 are expressed in decimal notation as 0.8, 14.89, 0.00024, and 5.8900 respectively.
In English-speaking countries, some Latin American countries, and many Asian countries, a period or raised period is used as the decimal separator; in many other countries, particularly in Europe, a comma is used. The integer part, or integral part, of a decimal number is the part to the left of the decimal separator; the part from the separator to the right is the fractional part. It is usual for a number that consists only of a fractional part to have a leading zero in its notation. Any rational number with a denominator whose only prime factors are 2 and/or 5 may be expressed as a decimal fraction and has a finite decimal expansion: 1/2 = 0.5, 1/20 = 0.05, 1/5 = 0.2, 1/50 = 0.02, 1/4 = 0.25, 1/40 = 0.025, 1/25 = 0.04, 1/8 = 0.125, 1/125 = 0.008, 1/10 = 0.1.
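The "only prime factors 2 and/or 5" criterion for a finite decimal expansion is easy to test mechanically (the helper name is illustrative):

```python
from fractions import Fraction

def terminates_in_base_10(q):
    """A rational terminates in decimal iff its reduced denominator has only 2s and 5s."""
    d = Fraction(q).denominator  # Fraction reduces to lowest terms automatically
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(terminates_in_base_10(Fraction(1, 8)))  # -> True   (0.125)
print(terminates_in_base_10(Fraction(1, 3)))  # -> False  (0.333... repeats)
```

Reducing to lowest terms first matters: 3/6 terminates (it equals 1/2) even though 6 contains the factor 3.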
27.
Interval arithmetic
–
Very simply put, interval arithmetic represents each value as a range of possibilities. The concept suits a variety of purposes; the most common use is to keep track of and handle rounding errors directly during a calculation, and uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components, or from limits on computational accuracy. Interval arithmetic also helps find reliable and guaranteed solutions to equations and optimization problems. Mathematically, instead of working with an uncertain real x, we work with the two ends of an interval [a, b] that contains x: in interval arithmetic, any variable x lies between a and b, or could be one of them. A function f applied to x is then also uncertain: in interval arithmetic, f produces an interval [c, d] containing all the values f(x) for all x ∈ [a, b]. The main focus of interval arithmetic is the simplest way to calculate upper and lower endpoints for the range of values of a function in one or more variables. These endpoints are not necessarily the true supremum or infimum, since the exact calculation of those values can be difficult or impossible. As with traditional calculations on real numbers, simple arithmetic operations and functions on elementary intervals must first be defined; more complicated functions can then be calculated from these basic elements. Take as an example the calculation of body mass index (BMI): the BMI is the body weight in kilograms divided by the square of the height in metres. A bathroom scale may have a resolution of one kilogram, so we do not learn intermediate values such as 79.6 kg or 80.3 kg, but only information rounded to the nearest whole number. It is unlikely that when the scale reads 80 kg someone really weighs exactly 80.0 kg; in normal rounding to the nearest value, a scale showing 80 kg indicates a weight between 79.5 kg and 80.5 kg.
The relevant range is that of all weights that are greater than or equal to 79.5 kg and less than or equal to 80.5 kg. For a man who weighs 80 kg and is 1.80 m tall, the nominal BMI is about 24.7; with a weight of 79.5 kg and the same height the value is 24.5, while 80.5 kg gives almost 24.9. So the actual BMI lies in the range [24.5, 24.9]. The error in this case does not affect the conclusion, but this is not always the position: weight fluctuates in the course of a day, so the BMI can vary between 24 and 25, and without detailed analysis it is not always possible to exclude the question of whether an error is ultimately large enough to have significant influence. Interval arithmetic states the range of possible outcomes explicitly: simply put, results are no longer stated as single numbers but as intervals that represent imprecise values. The widths of the intervals are similar to error bars in expressing the extent of uncertainty.
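The BMI example can be sketched with two elementary interval operations. This is a minimal sketch, not a full interval library: it assumes all intervals are positive (which holds for weights and heights), so the multiplication and division rules simplify; the helper names interval_mul and interval_div are ours:

```python
def interval_mul(a, b):
    """Multiply two positive intervals (lo, hi)."""
    return (a[0] * b[0], a[1] * b[1])

def interval_div(a, b):
    """Divide positive interval a by positive interval b: the quotient is
    smallest at (a_lo / b_hi) and largest at (a_hi / b_lo)."""
    return (a[0] / b[1], a[1] / b[0])

# Weight read as 80 kg on a scale with 1 kg resolution: true value in [79.5, 80.5].
weight = (79.5, 80.5)
# Height assumed exactly known here (a simplification): 1.80 m.
height = (1.80, 1.80)

bmi = interval_div(weight, interval_mul(height, height))
print(f"BMI in [{bmi[0]:.1f}, {bmi[1]:.1f}]")  # BMI in [24.5, 24.8]
```

A real interval package would also round the lower endpoint down and the upper endpoint up in floating point, so that the computed interval is guaranteed to enclose the true one.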
28.
Stock index
–
A stock index or stock market index is a measurement of the value of a section of the stock market. It is computed from the prices of selected stocks, and it is a tool used by investors and financial managers to describe the market and to compare the return on specific investments. An index is a mathematical construct, so it may not be invested in directly; but many mutual funds and exchange-traded funds attempt to track an index. Stock market indices may be classified in many ways. A world or global stock market index, such as the MSCI World or the S&P Global 100, includes stocks from multiple regions; regions may be defined geographically or by levels of industrialization or income. A national index represents the performance of the stock market of a given nation and, by proxy, the state of its economy. Other indices may be regional, such as the FTSE Developed Europe Index or the FTSE Developed Asia Pacific Index. Indexes may be based on an exchange, such as the NASDAQ-100 or NYSE US 100, or on groups of exchanges, such as the Euronext 100 or OMX Nordic 40. The concept may be extended well beyond an exchange; Russell Investment Group added to its family of indices by launching the Russell Global Index. More specialized indices exist tracking the performance of specific sectors of the market. Some indices, such as the S&P 500, have multiple versions; these versions can differ based on how the components are weighted. The difference between the full capitalization, float-adjusted, and equal weight versions is in how index components are weighted. An index may also be classified according to the method used to determine its price. In a price-weighted index, the price of each component stock is the only consideration; in contrast, an index such as the Hang Seng Index factors in the size of the company. Thus, a small shift in the price of a large company will heavily influence the value of the index. Traditionally, capitalization- or share-weighted indices all had a full weighting; recently, many of them have changed to a float-adjusted weighting, which helps indexing.
An equal-weighted index is one in which all components are assigned the same value. For example, the Barron's 400 Index assigns a value of 0.25% to each of the 400 stocks included in the index. A modified capitalization-weighted index is a hybrid between capitalization weighting and equal weighting. Moreover, in 2005, Standard & Poor's introduced the S&P Pure Growth Style Index and S&P Pure Value Style Index, which are attribute-weighted: for these two indexes, a score is calculated for every stock, be it a growth score or a value score. One argument for capitalization weighting is that investors must, in aggregate, hold a capitalization-weighted portfolio anyway.
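The weighting schemes described above can be sketched with a toy universe of three stocks. All prices and share counts here are made up purely for illustration:

```python
# Hypothetical universe: name -> (price, shares_outstanding)
stocks = {
    "BigCo":   (100.0, 1_000_000),
    "MidCo":   ( 50.0,   400_000),
    "SmallCo": ( 10.0,   100_000),
}

market_caps = {name: p * s for name, (p, s) in stocks.items()}
total_cap = sum(market_caps.values())

# Capitalization-weighted: each stock's weight is its share of total market cap.
cap_weights = {name: cap / total_cap for name, cap in market_caps.items()}

# Equal-weighted: every component gets the same weight (as in the Barron's 400).
equal_weights = {name: 1 / len(stocks) for name in stocks}

# Price-weighted: weight proportional to share price alone.
total_price = sum(p for p, _ in stocks.values())
price_weights = {name: p / total_price for name, (p, _) in stocks.items()}

print(cap_weights)    # the largest company dominates the cap-weighted index
print(equal_weights)  # but counts the same as the others when equal-weighted
```

In the cap-weighted version, a 1% move in BigCo moves the index far more than a 1% move in SmallCo; in the equal-weighted version both moves contribute identically.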
29.
Vancouver Stock Exchange
–
The Vancouver Stock Exchange (VSE) was a stock exchange based in Vancouver, British Columbia. In 1989, Forbes magazine labelled the VSE the scam capital of the world; in 1991, it listed some 2,300 stocks. Some local figures stated that the majority of those stocks were either total failures or frauds, and a 1994 report by James Matkin made reference to shams and swindles. Regardless, it had roughly C$4 billion in annual trading in 1991. On November 29, 1999 the VSE was merged into the Canadian Venture Exchange (CDNX), along with the Alberta Stock Exchange, and the trading floor of the old VSE remained as the trading floor of the new CDNX. The history of the exchange's index provides a standard case example of large errors arising from seemingly innocuous floating point calculations. In January 1982 the index was initialized at 1000 and subsequently updated and truncated to three decimal places on each trade; this happened about 3000 times each day. The accumulated truncations led to an erroneous loss of around 25 points per month. Over the weekend of November 25–28, 1983, the error was corrected and the index was restated at roughly double its previously reported value.
See also: List of former stock exchanges in the Americas; List of stock exchange mergers in the Americas; List of stock exchanges; Toronto Stock Exchange.
Cruise, David; Griffiths, Alison. Fleecing the Lamb: The Inside Story of the Vancouver Stock Exchange. Douglas & McIntyre Ltd., 1987. ISBN 978-0888945587
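The downward drift can be reproduced with a toy simulation. This does not model the VSE's actual index formula; it only shows the mechanism: truncating (rather than rounding) to three decimal places after every update discards, on average, about 0.0005 per update, and at roughly 3000 updates a day that bias compounds into tens of index points per month. The per-trade changes below are invented for illustration:

```python
import random

def truncate3(x: float) -> float:
    """Truncate a positive value to three decimal places (no rounding)."""
    return int(x * 1000) / 1000

random.seed(1)
exact = truncated = 1000.0
for _ in range(3000 * 22):  # ~one month: ~3000 updates/day, ~22 trading days
    change = random.uniform(-0.1, 0.1)  # hypothetical small per-trade change
    exact += change
    truncated = truncate3(truncated + change)  # truncation applied every update

# The truncated index drifts well below the exactly-computed one.
print(round(exact - truncated, 1))
```

Rounding to the nearest thousandth instead of truncating would make the per-update error symmetric around zero, so the drift would all but vanish, which is essentially what the 1983 correction restored.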
30.
Sign function
–
In mathematics, the sign function or signum function is an odd mathematical function that extracts the sign of a real number. In mathematical expressions the sign function is represented as sgn. The signum function of a real number x is defined as follows: sgn x = −1 if x < 0, sgn x = 0 if x = 0, and sgn x = 1 if x > 0. Any real number can be expressed as the product of its absolute value and its sign: x = |x| · sgn x. The signum function is differentiable with derivative 0 everywhere except at 0, and d|x|/dx = sgn x for x ≠ 0. Using the identity sgn x = 2H(x) − 1, where H is the Heaviside step function, it is easy to derive the distributional derivative d sgn x / dx = 2 dH(x)/dx = 2δ(x). The signum can also be written using the Iverson bracket notation: sgn x = −[x < 0] + [x > 0]. The signum can also be written using the floor and absolute value functions. For k ≫ 1, an approximation of the sign function is sgn x ≈ tanh kx. Another approximation is sgn x ≈ x / √(x² + ε²), which gets sharper as ε → 0; note that this is the derivative of √(x² + ε²). This is inspired by the fact that the expression equals sgn x exactly for all nonzero x if ε = 0. See Heaviside step function – Analytic approximations. The signum function can be generalized to complex numbers as sgn z = z / |z| for any complex number z except z = 0; the signum of a complex number z is the point on the unit circle of the complex plane that is nearest to z. Then, for z ≠ 0, sgn z = e^(i arg z). A related generalization, csgn, satisfies csgn z = z / √(z²) = √(z²) / z. At real values of x, it is possible to define a generalized function–version of the signum, ε(x), such that ε(x)² = 1 everywhere, including at x = 0. This generalized signum allows construction of the algebra of generalized functions.
See also: Absolute value; Heaviside function; Negative number; Rectangular function; Sigmoid function; Step function; Three-way comparison; Zero crossing; Modulus function.
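The piecewise definition, the identity x = |x| · sgn x, and the smooth approximation x / √(x² + ε²) can all be sketched directly (the function names sgn and sgn_smooth are ours):

```python
import math

def sgn(x: float) -> float:
    """Signum: -1 for negative, 0 at zero, 1 for positive."""
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return 0.0

def sgn_smooth(x: float, eps: float = 1e-6) -> float:
    """Smooth approximation x / sqrt(x**2 + eps**2); sharper as eps -> 0."""
    return x / math.sqrt(x * x + eps * eps)

print(sgn(-3.7), sgn(0.0), sgn(12.0))  # -1.0 0.0 1.0
# Any real number is the product of its absolute value and its sign:
assert all(abs(x) * sgn(x) == x for x in (-5.0, 0.0, 3.25))
```

Unlike sgn, sgn_smooth is differentiable everywhere, which is why such approximations are used where a gradient is needed.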
31.
Floor and ceiling functions
–
In mathematics and computer science, the floor and ceiling functions map a real number to the greatest preceding or the least succeeding integer, respectively. More precisely, floor(x) = ⌊x⌋ is the greatest integer less than or equal to x, and ceiling(x) = ⌈x⌉ is the least integer greater than or equal to x. Carl Friedrich Gauss introduced the square bracket notation [x] for the floor function in his third proof of quadratic reciprocity. This remained the standard in mathematics until Kenneth E. Iverson introduced the names floor and ceiling and the notations ⌊x⌋ and ⌈x⌉; both notations are now used in mathematics, and this article follows Iverson. Some sources use [x] for the integer part of x, i.e. the value of x rounded to an integer towards 0. The language APL uses ⌊x; other computer languages commonly use notations like entier(x), INT(x), or floor(x). In mathematics, the floor can also be written with boldface or double brackets [[x]]. The ceiling function is usually denoted by ceil(x) or ceiling(x) in non-APL computer languages that have a notation for this function. The J programming language, a follow-on to APL that is designed to use standard keyboard symbols, uses >. for ceiling. In mathematics, there is another notation for the ceiling with reversed boldface or double brackets ]]x[[, or plain reversed brackets ]x[. The fractional part is the sawtooth function, denoted by {x} for real x and defined by the formula {x} = x − ⌊x⌋. HTML 4.0 uses the names &lfloor;, &rfloor;, &lceil;, and &rceil;; Unicode contains codepoints for these symbols at U+2308–U+230B: ⌈x⌉, ⌊x⌋. In the following formulas, x and y are real numbers, k, m, and n are integers, and Z is the set of integers. Floor and ceiling may be defined by the set equations ⌊x⌋ = max{n ∈ Z : n ≤ x}, ⌈x⌉ = min{n ∈ Z : n ≥ x}. Since there is exactly one integer in a half-open interval of length one, for any real x there are unique integers m and n satisfying x − 1 < m ≤ x ≤ n < x + 1; then ⌊x⌋ = m and ⌈x⌉ = n may also be taken as the definition of floor and ceiling. These formulas can be used to simplify expressions involving floors and ceilings. In the language of order theory, the floor function is a residuated mapping. Adding integers to the argument shifts the functions: ⌊x + n⌋ = ⌊x⌋ + n, ⌈x + n⌉ = ⌈x⌉ + n, {x + n} = {x}. Negating the argument complements the fractional part: {x} + {−x} = 0 if x ∈ Z, 1 if x ∉ Z.
The floor, ceiling, and fractional part functions are idempotent, and the result of nested floor or ceiling functions is the innermost function: ⌊⌈x⌉⌋ = ⌈x⌉, ⌈⌊x⌋⌉ = ⌊x⌋. If m and n are integers and n ≠ 0, then 0 ≤ {m/n} ≤ 1 − 1/|n|. If n is a positive integer, then ⌊(x + m)/n⌋ = ⌊(⌊x⌋ + m)/n⌋ and ⌈(x + m)/n⌉ = ⌈(⌈x⌉ + m)/n⌉. For n = 2 these imply the identity n = ⌊n/2⌋ + ⌈n/2⌉ for integer n.
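The identities above are easy to spot-check with Python's math.floor and math.ceil, a minimal sketch over a handful of sample values:

```python
import math

# Idempotence and nesting: the innermost function wins.
for x in (3.7, -3.7, 5.0, -0.25):
    assert math.floor(math.ceil(x)) == math.ceil(x)
    assert math.ceil(math.floor(x)) == math.floor(x)
    frac = x - math.floor(x)  # the fractional part {x}
    assert 0 <= frac < 1

# floor((x + m)/n) == floor((floor(x) + m)/n) for positive integer n.
for x in (3.7, -3.7, 10.2):
    for m in (-2, 0, 5):
        for n in (1, 2, 7):
            assert math.floor((x + m) / n) == math.floor((math.floor(x) + m) / n)

# n = floor(n/2) + ceil(n/2) for any integer n.
for n in range(-5, 6):
    assert n == math.floor(n / 2) + math.ceil(n / 2)

print("all identities hold")
```

Note that math.floor rounds toward negative infinity, while int() truncates toward zero; the two differ on negative non-integers, e.g. math.floor(-3.7) is -4 but int(-3.7) is -3.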
32.
Discrete uniform distribution
–
Another way of describing the discrete uniform distribution is: a known, finite number of outcomes, all equally likely to happen. A simple example of the discrete uniform distribution is throwing a fair die: the possible values are 1, 2, 3, 4, 5, 6, each with probability 1/6. If two dice are thrown and their values added, the resulting distribution is no longer uniform, since not all sums have equal probability. The discrete uniform distribution itself is inherently non-parametric; it is convenient, however, to represent its values generally by an integer interval [a, b], so that a and b become the main parameters of the distribution. A common problem is estimating the maximum of the distribution from a sample; this is known as the German tank problem, following the application of maximum likelihood estimation to estimates of German tank production during World War II. The UMVU estimator for the maximum is given by N̂ = ((k + 1)/k)·m − 1 = m + m/k − 1, where m is the sample maximum and k is the sample size. This can be seen as a simple case of maximum spacing estimation. The estimator has a variance of approximately N²/k² for small samples k ≪ N, so a standard deviation of approximately N/k. The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased. If samples are not numbered but are recognizable or markable, one can instead estimate population size via the capture–recapture method. See rencontres numbers for an account of the probability distribution of the number of fixed points of a uniformly distributed random permutation.
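The UMVU estimator N̂ = m + m/k − 1 can be sketched directly; the true maximum N = 300 and the sample size of 10 below are arbitrary illustrative choices:

```python
import random

def umvu_max_estimate(sample):
    """UMVU estimator for the maximum N of a discrete uniform {1, ..., N}:
    N_hat = m + m/k - 1, with m the sample maximum and k the sample size."""
    m, k = max(sample), len(sample)
    return m + m / k - 1

random.seed(42)
N = 300  # true (in practice unknown) population maximum
sample = random.sample(range(1, N + 1), 10)  # 10 serial numbers, no repeats

print(max(sample))                # the biased ML estimate: always <= N
print(umvu_max_estimate(sample))  # corrects upward by roughly one average gap
```

Intuitively, the correction m/k − 1 adds the average gap between observed serial numbers, since the true maximum is expected to sit about one gap above the largest observation.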
33.
Expected value
–
In probability theory, the expected value of a random variable is, intuitively, the long-run average value of repetitions of the experiment it represents. For example, the expected value in rolling a six-sided die is 3.5. Less roughly, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, or mean. More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values: each value the random variable can assume is multiplied by its probability of occurring, and the products are summed. The same principle applies to a continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The expected value does not exist for random variables having some distributions with large tails; for such random variables, the long tails of the distribution prevent the sum or integral from converging. The expected value is a key aspect of how one characterizes a probability distribution; by contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value. The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a good estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data; a formula is typically considered good in this context if it is an unbiased estimator, that is, if the expected value of the estimate can be shown to equal the true value of the desired parameter.
In decision theory, and in particular in choice under uncertainty, one example of using expected value in reaching optimal decisions is the Gordon–Loeb model of information security investment. According to the model, one can conclude that the amount a firm spends to protect information should generally be only a fraction of the expected loss. Suppose random variable X can take value x1 with probability p1, value x2 with probability p2, and so on, up to value xk with probability pk. Then the expectation of this random variable X is defined as E[X] = x1·p1 + x2·p2 + ⋯ + xk·pk. If all outcomes xi are equally likely, then the weighted average turns into the simple average; this is intuitive: the expected value of a random variable is the average of all values it can take, thus the expected value is what one expects to happen on average. If the outcomes xi are not equally probable, then the simple average must be replaced with the weighted average; the intuition, however, remains the same: the expected value of X is what one expects to happen on average. Let X represent the outcome of a roll of a fair six-sided die; more specifically, X will be the number of pips showing on the top face of the die after the toss.
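The die example works out directly from the definition E[X] = Σ xi·pi; a short exact computation using rational arithmetic:

```python
from fractions import Fraction

# A fair six-sided die: each face 1..6 occurs with probability 1/6.
values = range(1, 7)
p = Fraction(1, 6)

# Probability-weighted average of all possible values.
expected = sum(x * p for x in values)

print(expected)         # 7/2
print(float(expected))  # 3.5
```

Using Fraction keeps the result exact (7/2) rather than a rounded float, which makes the equal-probability case visibly reduce to the simple average (1 + 2 + ... + 6) / 6.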