1.
Units of measurement
–
A unit of measurement is a definite magnitude of a quantity, defined and adopted by convention or by law, that is used as a standard for measurement of the same kind of quantity. Any other quantity of that kind can be expressed as a multiple of the unit of measurement. For example, length is a physical quantity, and the metre is a unit of length that represents a definite predetermined length. When we say 10 metres, we actually mean 10 times the definite predetermined length called the metre. The definition, agreement, and practical use of units of measurement have played a crucial role in human endeavour from early ages up to this day. Different systems of units used to be very common; now there is a global standard, the International System of Units (SI), the modern form of the metric system. In trade, weights and measures are often a subject of regulation, to ensure fairness. The International Bureau of Weights and Measures is tasked with ensuring worldwide uniformity of measurements, and metrology is the science of developing nationally and internationally accepted units of weights and measures. In physics and metrology, units are standards for measurement of quantities that need clear definitions to be useful. Reproducibility of experimental results is central to the scientific method, and a standard system of units facilitates this. Scientific systems of units are a refinement of the concept of weights and measures. Science, medicine, and engineering often use larger and smaller units of measurement than those used in everyday life and indicate them more precisely. The judicious selection of the units of measurement can aid researchers in problem solving. In the social sciences there are no standard units of measurement, and the theory and practice of measurement are studied in psychometrics and the theory of conjoint measurement. Put simply, a unit of measurement is a standardized quantity of a physical property. 
Units of measurement were among the earliest tools invented by humans. Primitive societies needed rudimentary measures for many tasks: constructing dwellings of an appropriate size and shape, fashioning clothing, or bartering food or raw materials. Weights and measures are mentioned in the Bible, where it is a commandment to be honest and keep fair measures. As of the 21st century, multiple unit systems are used all over the world, such as the United States customary system and the British imperial system; however, the United States is the only industrialized country that has not yet completely converted to the metric system. The systematic effort to develop a universally acceptable system of units dates back to 1790, when the French National Assembly charged the French Academy of Sciences to come up with such a unit system. After the Metre Convention was signed in 1875, the General Conference on Weights and Measures (CGPM) produced the current SI system, which was adopted in 1954 at the 10th CGPM. Currently, the United States is a society that uses both the SI system and the US customary system.

2.
Units of information
–
In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are used to measure the information content or entropy of random variables. The most common units are the bit, the capacity of a system that can exist in exactly two states, and the byte, which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes or the newer IEC binary prefixes; information capacity itself is a dimensionless quantity. In particular, if b is a positive integer, then the corresponding unit is the amount of information that can be stored in a system with b possible states. When b is 2, the unit is the shannon, equal to the information content of one bit. A system with 8 possible states, for example, can store up to log2 8 = 3 bits of information. Other units that have been named include: for base b = 3, the unit is called the trit, and is equal to log2 3 ≈ 1.585 bits; for base b = 10, the unit is called the decimal digit, hartley, ban, decit, or dit; for base b = e, the base of natural logarithms, the unit is called the nat, nit, or nepit, and is worth log2 e ≈ 1.443 bits. Several conventional names are used for collections or groups of bits. A byte can represent 256 distinct values, such as the integers 0 to 255, or −128 to 127. The IEEE 1541-2002 standard specifies B as the symbol for byte. Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes. A group of four bits, or half a byte, is sometimes called a nibble or nybble; this unit is most often used in the context of number representations. Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is defined by the size of the registers in the computer's CPU. 
Some machine instructions and computer number formats use two words (a double word) or four words (a quad word). Computer memory caches usually operate on blocks of memory that consist of several consecutive words; these units are customarily called cache blocks or, in CPU caches, cache lines. Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages. Terms for large quantities of bits can be formed using the range of SI prefixes for powers of 10, e.g. kilo- = 10^3 = 1000 and mega- = 10^6 = 1000000. These prefixes are often used for multiples of bytes, as in kilobyte and megabyte.
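The relationship between a system's state count and the named units above can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not a standard API):

```python
import math

def info_content(states: int, base: float) -> float:
    """Information held by `states` equally likely states,
    expressed in base-`base` digits: log_base(states)."""
    return math.log(states) / math.log(base)

# A system with 8 possible states stores log2(8) = 3 bits.
bits = info_content(8, 2)
# The same capacity expressed in other named units:
trits = info_content(8, 3)        # 3 / log2(3)
nats = info_content(8, math.e)    # 3 * ln(2)
hartleys = info_content(8, 10)    # 3 * log10(2)
```

The change-of-base identity is all that distinguishes the units: a fixed capacity is one number of bits, a smaller number of hartleys, and a larger number of trits.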

3.
Binary number
–
The base-2, or binary, number system is a positional notation with a radix of 2. Because of its straightforward implementation in digital electronic circuitry using logic gates, it is used internally by almost all modern computers. Each binary digit is referred to as a bit. The modern binary number system was devised by Gottfried Leibniz in 1679 and appears in his article Explication de l'Arithmétique Binaire. Systems related to binary numbers have appeared earlier in multiple cultures, including ancient Egypt, China, and India; Leibniz was specifically inspired by the Chinese I Ching. The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions and Horus-Eye fractions, and the method used for ancient Egyptian multiplication is also closely related to binary numbers. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus. The I Ching dates from the 9th century BC in China. The binary notation in the I Ching is used to interpret its quaternary divination technique, and it is based on the Taoist duality of yin and yang. Eight trigrams and a set of 64 hexagrams, analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou Dynasty of ancient China. The Song Dynasty scholar Shao Yong rearranged the hexagrams in a format that resembles modern binary numbers. The Indian scholar Pingala developed a binary system for describing prosody, using binary numbers in the form of short and long syllables; Pingala's Hindu classic titled Chandaḥśāstra describes the formation of a matrix in order to give a unique value to each meter. The binary representations in Pingala's system increase towards the right. The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450. Slit drums with binary tones are used to encode messages across Africa, and sets of binary combinations similar to the I Ching have also been used in traditional African divination systems such as Ifá, as well as in medieval Western geomancy. 
The base-2 system utilized in geomancy had long been applied in sub-Saharan Africa. Leibniz's system uses 0 and 1, like the modern binary numeral system. Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own beliefs as a Christian. Binary numerals were central to Leibniz's theology; he believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing. A concept that is not easy to impart to the pagans, he wrote, is creation ex nihilo through God's almighty power. In 1854, British mathematician George Boole published a paper detailing an algebraic system of logic that would become known as Boolean algebra.
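The connection between ancient Egyptian multiplication and binary expansions mentioned above can be sketched as follows (a minimal modern illustration, not a historical reconstruction): the method doubles one factor while halving the other, adding the doubled value whenever the halved factor is odd, i.e. whenever the corresponding binary digit is 1.

```python
def egyptian_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by doubling and halving,
    summing a shifted copy of `a` for each 1-bit of `b`."""
    total = 0
    while b > 0:
        if b & 1:        # the current binary digit of b is 1
            total += a
        a <<= 1          # double a
        b >>= 1          # halve b, dropping the digit just handled
    return total

assert egyptian_multiply(19, 27) == 19 * 27
```

In effect, the scribes were decomposing one factor into its binary digits without ever writing a binary numeral.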

4.
Base e
–
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is written as ln x, loge x, or sometimes simply log x, when the base e is implicit. Parentheses are sometimes added for clarity, giving ln(x), loge(x), or log(x); this is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity. The natural logarithm of x is the power to which e would have to be raised to equal x. The natural log of e itself, ln(e), is 1, because e^1 = e, while the natural logarithm of 1, ln(1), is 0, since e^0 = 1. The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/x from 1 to a. The simplicity of this definition, which is matched in many other formulas involving the natural logarithm, motivates the name "natural". Like all logarithms, the natural logarithm maps multiplication into addition: ln(xy) = ln(x) + ln(y). Logarithms in other bases differ only by a constant multiplier from the natural logarithm; for instance, the binary logarithm is the natural logarithm divided by ln(2), the natural logarithm of 2. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity; for example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and the sciences and are used in finance to solve problems involving compound interest. By the Lindemann–Weierstrass theorem, the natural logarithm of any positive algebraic number other than 1 is a transcendental number. The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa; their work involved quadrature of the hyperbola xy = 1 by determination of the area of hyperbolic sectors. 
Their solution generated the requisite hyperbolic logarithm function, having properties now associated with the natural logarithm. The notations ln x and loge x both refer unambiguously to the natural logarithm of x, while log x without an explicit base may also refer to the natural logarithm. This usage is common in mathematics and some scientific contexts, as well as in many programming languages; in some other contexts, however, log x can be used to denote the common logarithm. Historically, the notations l. and l were in use at least since the 1730s; in the twentieth century, the notations Log and logh are also attested. The graph of the logarithm function enables one to glean some of the basic characteristics that logarithms to any base have in common. Chief among them: the logarithm of one is zero. What makes natural logarithms unique is found at that point where all logarithms are zero: at x = 1, the slope of the curve of the natural logarithm is precisely one.
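The area definition above lends itself to a quick numeric check (a sketch using simple midpoint-rule integration; not a production method for computing logarithms):

```python
import math

def ln_by_area(a: float, steps: int = 100_000) -> float:
    """Approximate ln(a) as the area under y = 1/x from 1 to a,
    via the midpoint rule with `steps` equal subintervals."""
    h = (a - 1) / steps
    return sum(h / (1 + (i + 0.5) * h) for i in range(steps))

# Agrees closely with math.log, and the product-to-sum identity
# ln(6) = ln(2) + ln(3) holds numerically.
approx = ln_by_area(2.0)
```

The same area construction recovers the additivity property: the region from 1 to xy splits into the region from 1 to x plus a rescaled copy of the region from 1 to y.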

5.
Decimal
–
This article aims to be an accessible introduction; for the mathematical definition, see Decimal representation. The decimal numeral system has ten as its base, which, in decimal, is written 10, as is the base in every positional numeral system. It is the base most widely used by modern civilizations. Decimal fractions have terminating decimal representations, while other fractions have repeating decimal representations. Decimal notation is the writing of numbers in a base-ten numeral system. Examples are Brahmi numerals, Greek numerals, Hebrew numerals, and Roman numerals. Roman numerals have symbols for the decimal powers and secondary symbols for half these values. Brahmi numerals have symbols for the nine numbers 1–9, the nine decades 10–90, plus a symbol for 100; Chinese numerals have symbols for 1–9 and additional symbols for powers of ten, which in modern usage reach 10^72. Positional decimal systems include a zero and use symbols for the ten values 0–9 to represent any number. Positional notation uses positions for each power of ten: units, tens, hundreds, thousands, etc. The position of each digit within a number denotes the power of ten by which that digit is multiplied; each position has a value ten times that of the position to its right. There were at least two independent sources of positional decimal systems in ancient civilization: the Chinese counting rod system and the Hindu–Arabic numeral system. Ten is the number which is the count of fingers and thumbs on both hands; the English word digit, as well as its translation in many languages, is also the anatomical term for fingers and toes. In English, decimal means tenth and decimate means reduce by a tenth. The symbols used in different areas are not identical; for instance, Western Arabic numerals differ from the forms used by other Arab cultures. A decimal fraction is a fraction whose denominator is a power of ten. E.g., the decimal fractions 8/10, 1489/100, 24/100000, and 58900/10000 are expressed in decimal notation as 0.8, 14.89, 0.00024, and 5.8900 respectively. 
In English-speaking countries, some Latin American countries, and many Asian countries, a period or raised period is used as the decimal separator; in many other countries, particularly in Europe, a comma is used instead. The integer part, or integral part, of a number is the part to the left of the decimal separator; the part from the separator to the right is the fractional part. It is usual for a number that consists only of a fractional part to have a leading zero in its notation. Any rational number with a denominator whose only prime factors are 2 and/or 5 may be expressed as a decimal fraction and has a finite decimal expansion: 1/2 = 0.5, 1/20 = 0.05, 1/5 = 0.2, 1/50 = 0.02, 1/4 = 0.25, 1/40 = 0.025, 1/25 = 0.04, 1/8 = 0.125, 1/125 = 0.008, 1/10 = 0.1.
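The rule above (a reduced denominator with no prime factors other than 2 and 5) is easy to check mechanically; the following is a small sketch, with a helper name of our own choosing:

```python
from fractions import Fraction

def has_finite_decimal(q: Fraction) -> bool:
    """True iff q, in lowest terms, has a terminating decimal
    expansion: its denominator's only prime factors are 2 and 5."""
    d = q.denominator          # Fraction reduces to lowest terms
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

assert has_finite_decimal(Fraction(1, 8))      # 1/8 = 0.125
assert has_finite_decimal(Fraction(1, 40))     # 1/40 = 0.025
assert not has_finite_decimal(Fraction(1, 3))  # 0.333... repeats
```

Because Fraction normalizes to lowest terms, a case like 3/6 is correctly classified via its reduced form 1/2.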

6.
Qubit
–
In quantum computing, a qubit or quantum bit is a unit of quantum information—the quantum analogue of the classical bit. A qubit is a two-state quantum-mechanical system, such as the polarization of a single photon. In a classical system, a bit would have to be in one state or the other; quantum mechanics, however, allows the qubit to be in a superposition of both states at the same time, a property that is fundamental to quantum computing. The concept of the qubit was unknowingly introduced by Stephen Wiesner in 1983, in his proposal for quantum money; the coining of the term qubit is attributed to Benjamin Schumacher. Schumacher's paper describes a way of compressing states emitted by a source of information so that they require fewer physical resources to store; this procedure is now known as Schumacher compression. The bit is the basic unit of classical information, used to represent information by computers. An analogy is a light switch: its off position can be thought of as 0 and its on position as 1. A qubit has a few similarities to a bit, but is overall very different. Like a bit, there are two possible outcomes for the measurement of a qubit—usually 0 and 1. The difference is that whereas the state of a bit is either 0 or 1, the state of a qubit can also be a superposition of both. It is possible to encode one bit in one qubit; however, a qubit can hold more information, e.g. up to two bits using superdense coding. For a system of n components, a complete description of its state in classical physics requires only n bits, whereas in quantum physics it requires 2^n − 1 complex numbers. The two states in which a qubit may be measured are known as basis states; as is the tradition with any sort of quantum states, they are represented by Dirac—or bra–ket—notation. This means that the two basis states are conventionally written as |0⟩ and |1⟩. A pure qubit state is a linear superposition of the basis states, written |ψ⟩ = α|0⟩ + β|1⟩. When we measure this qubit in the standard basis, the probability of outcome |0⟩ is |α|² and the probability of outcome |1⟩ is |β|². 
Because the absolute squares of the amplitudes equate to probabilities, α and β must be constrained by the equation |α|² + |β|² = 1. It might at first sight seem that there should be four degrees of freedom, as α and β are complex numbers with two degrees of freedom each; however, one degree of freedom is removed by this normalization constraint.
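The normalization constraint can be illustrated in a few lines of Python (a sketch of the Born rule for a single qubit; the function name is ours):

```python
import math

def measurement_probs(alpha: complex, beta: complex):
    """Born-rule outcome probabilities for the state a|0> + b|1>;
    the amplitudes must satisfy |a|^2 + |b|^2 = 1."""
    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
    if not math.isclose(p0 + p1, 1.0, rel_tol=1e-9):
        raise ValueError("state is not normalized")
    return p0, p1

# The equal superposition (|0> + |1>)/sqrt(2): each outcome is 50/50.
p0, p1 = measurement_probs(1 / math.sqrt(2), 1 / math.sqrt(2))
```

Note that multiplying both amplitudes by a phase such as e^(iθ) leaves both probabilities unchanged, which is why a global phase carries no physical information.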

7.
Quantum information
–
In physics and computer science, quantum information is information that is held in the state of a quantum system. Quantum information is the entity of study in quantum information theory. Quantum information differs strongly from classical information, epitomized by the bit, in many striking ways; among these are the following. The unit of quantum information is the qubit. Unlike classical digital states, a qubit is continuous-valued, describable by a direction on the Bloch sphere; despite being continuously valued in this way, a qubit is the smallest possible unit of quantum information. The reason for this indivisibility is the Heisenberg uncertainty principle: despite the qubit state being continuously valued, it is impossible to measure the value precisely. A qubit cannot be converted into classical bits; that is, it cannot be "read". This is the no-teleportation theorem. Despite its awkward name, the no-teleportation theorem does not prevent qubits from being moved from one physical particle to another by means of quantum teleportation; that is, qubits can be transported, independently of the underlying physical particle. An arbitrary qubit can neither be copied nor destroyed; this is the content of the no-cloning theorem and the no-deleting theorem. Although a single qubit can be transported from place to place, it cannot be delivered to multiple recipients; this is the no-broadcast theorem, and it is essentially implied by the no-cloning theorem. Qubits can be changed by applying linear transformations, or quantum gates, to them. Classical bits may be combined with and extracted from configurations of multiple qubits, through the use of quantum gates; that is, two or more qubits can be arranged in such a way as to convey classical bits. The simplest such configuration is the Bell state, which consists of two qubits and can convey two classical bits (the four Bell states encode log2 4 = 2 bits). Quantum information can be moved about in a quantum channel; quantum messages have a finite size, measured in qubits, and quantum channels have a finite channel capacity, measured in qubits per second. 
Multiple qubits can be used to carry classical bits; although n qubits can carry more than n classical bits of information, the greatest amount of classical information that can be retrieved is n bits. Quantum information, and changes in quantum information, can be quantitatively measured by using an analogue of Shannon entropy: given a statistical ensemble of quantum mechanical systems with density matrix ρ, it is measured by the von Neumann entropy. Many of the other entropy measures of classical information theory can also be generalized to the quantum case, such as the Holevo entropy. Quantum algorithms have a different computational complexity than classical algorithms; the most famous example of this is Shor's factoring algorithm, which is not known to have a polynomial-time classical counterpart but does have a polynomial-time quantum algorithm. Other examples include Grover's search algorithm, which gives a quadratic speed-up over the best possible classical algorithm. Quantum key distribution allows unconditionally secure transmission of information, unlike classical encryption.
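For a density matrix whose eigenvalues are known, the von Neumann entropy mentioned above reduces to Shannon's formula over those eigenvalues; a minimal sketch (assuming the eigenvalues are supplied directly rather than computed from a matrix):

```python
import math

def von_neumann_entropy(eigenvalues) -> float:
    """S(rho) = -sum(l * log2(l)) over the eigenvalues l of the
    density matrix rho, in shannons; 0 * log(0) is taken as 0."""
    return -sum(l * math.log2(l) for l in eigenvalues if l > 0)

pure = von_neumann_entropy([1.0, 0.0])    # a pure state: entropy 0
mixed = von_neumann_entropy([0.5, 0.5])   # maximally mixed qubit: 1
```

The two endpoints mirror the classical picture: a pure state is perfectly predictable, while the maximally mixed qubit is as uncertain as a fair coin.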

8.
Information
–
Information resolves uncertainty; in other words, it is the answer to a question of some kind. It is thus related to data and knowledge, as data represents values attributed to parameters. As regards data, information's existence is not necessarily coupled to an observer, while in the case of knowledge, information requires a cognitive observer. At its most fundamental, information is any propagation of cause and effect. Information can be encoded into various forms for transmission and interpretation, and it can also be encrypted for safe storage and communication. The uncertainty of an event is measured by its probability of occurrence and is inversely proportional to it: the more uncertain an event, the more information is required to resolve its uncertainty. The bit is a typical unit of information, but other units such as the nat may be used. Example: the information in one fair coin flip is log2 2 = 1 bit. The concept that information is the message has different meanings in different contexts. The English word was derived from the Latin stem of the nominative informatio; inform itself comes from the Latin verb informare, which means to give form. Eidos can also be associated with thought, proposition, or even concept. The ancient Greek word for information is πληροφορία, which derives from πλήρης (fully); it literally means fully bears or conveys fully. In modern Greek the word Πληροφορία is still in use and has the same meaning as the word information in English. In addition to this meaning, the word Πληροφορία as a symbol has deep roots in Aristotle's semiotic triangle; in this regard it can be interpreted to communicate information to the one decoding that specific type of sign. From the stance of information theory, information is taken as an ordered sequence of symbols from an alphabet, say an input alphabet χ and an output alphabet ϒ. Information processing consists of a function that maps any input sequence from χ into an output sequence from ϒ. 
The mapping may be probabilistic or deterministic, and it may have memory or be memoryless. Often, information can be viewed as a type of input to an organism or system. Inputs are of two kinds: some inputs are important to the function of the organism or system by themselves; in his book Sensory Ecology, Dusenbery called these causal inputs. Other inputs are important only because they are associated with causal inputs and can be used to predict the occurrence of a causal input at a later time. Some information is important because of its association with other information, but eventually there must be a connection to a causal input.
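The coin-flip example earlier in this section generalizes: the information gained by observing an event of probability p is −log2(p) bits. A minimal sketch:

```python
import math

def self_information(p: float) -> float:
    """Information, in bits, gained by observing an event that
    occurs with probability p (0 < p <= 1)."""
    return -math.log2(p)

fair_flip = self_information(0.5)    # one fair coin flip: 1 bit
rare = self_information(1 / 1024)    # a rarer event carries more bits
```

This matches the statement that information is inversely related to probability: halving an event's probability adds exactly one bit to the information its observation conveys.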

9.
Information entropy
–
In information theory, systems are modeled by a transmitter, channel, and receiver. The transmitter produces messages that are sent through the channel; the channel modifies the message in some way; and the receiver attempts to infer which message was sent. In this context, entropy is the expected value of the information contained in each message. Messages can be modeled by any flow of information. In a more technical sense, there are reasons to define information as the negative of the logarithm of the probability of a possible event or message. The amount of information of every event forms a random variable whose expected value is the entropy. Units of entropy are the shannon, nat, or hartley, depending on the base of the logarithm used to define it, though the shannon is commonly referred to as a bit. The logarithm of the probability is useful as a measure of information because it is additive for independent sources; for instance, the entropy of a single coin toss is 1 shannon, whereas that of m tosses is m shannons. Generally, you need log2(n) bits to represent a variable that can take one of n values, if n is a power of 2. If these values are equally probable, the entropy in shannons is equal to this number of bits. Equality between the number of bits and shannons holds only while all outcomes are equally probable. If one of the events is more probable than the others, observation of that event is less informative; conversely, rarer events provide more information when observed. Since the observation of less probable events occurs more rarely, the net effect is that the entropy received from non-uniformly distributed data is less than log2(n). Entropy is zero when one outcome is certain. Shannon entropy quantifies all these considerations exactly when a probability distribution of the source is known. The meaning of the events observed does not matter in the definition of entropy; generally, entropy refers to disorder or uncertainty. Shannon entropy was introduced by Claude E. 
Shannon in his 1948 paper A Mathematical Theory of Communication. Shannon entropy provides an absolute limit on the best possible average length of lossless encoding or compression of an information source; entropy is a measure of the unpredictability of the state or, equivalently, of its average information content. To get an intuitive understanding of these terms, consider the example of a political poll. Usually, such polls happen because the outcome of the poll is not already known. Now consider the case that the same poll is performed a second time shortly after the first poll: since the outcome of the first poll is already known, the result of the second can be anticipated, and it carries little new information. Next, consider the example of a coin toss. Assuming the probability of heads is the same as the probability of tails, the entropy of the coin toss is as high as it could be: such a coin toss has one shannon of entropy, since there are two possible outcomes that each occur with probability 1/2, and learning the actual outcome contains one shannon of information. Contrarily, a toss with a coin that has two heads and no tails has zero entropy, since the coin will always come up heads.
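The coin-toss reasoning above is exactly the Shannon entropy H = −Σ p·log2(p); as a small sketch:

```python
import math

def shannon_entropy(probs) -> float:
    """H = -sum(p * log2(p)), the expected information per message,
    in shannons; terms with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair_coin = shannon_entropy([0.5, 0.5])   # 1 shannon
two_headed = shannon_entropy([1.0])       # certain outcome: 0
biased = shannon_entropy([0.9, 0.1])      # between 0 and 1 shannon
```

The biased coin illustrates the non-uniform case discussed above: its entropy is strictly less than log2(2) = 1 because the common outcome is unsurprising more often than the rare one is surprising.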

10.
Natural logarithms
–
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is written as ln x, loge x, or sometimes simply log x, when the base e is implicit. Parentheses are sometimes added for clarity, giving ln(x), loge(x), or log(x); this is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity. The natural logarithm of x is the power to which e would have to be raised to equal x. The natural log of e itself, ln(e), is 1, because e^1 = e, while the natural logarithm of 1, ln(1), is 0, since e^0 = 1. The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/x from 1 to a. The simplicity of this definition, which is matched in many other formulas involving the natural logarithm, motivates the name "natural". Like all logarithms, the natural logarithm maps multiplication into addition: ln(xy) = ln(x) + ln(y). Logarithms in other bases differ only by a constant multiplier from the natural logarithm; for instance, the binary logarithm is the natural logarithm divided by ln(2), the natural logarithm of 2. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity; for example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and the sciences and are used in finance to solve problems involving compound interest. By the Lindemann–Weierstrass theorem, the natural logarithm of any positive algebraic number other than 1 is a transcendental number. The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa; their work involved quadrature of the hyperbola xy = 1 by determination of the area of hyperbolic sectors. 
Their solution generated the requisite hyperbolic logarithm function, having properties now associated with the natural logarithm. The notations ln x and loge x both refer unambiguously to the natural logarithm of x, while log x without an explicit base may also refer to the natural logarithm. This usage is common in mathematics and some scientific contexts, as well as in many programming languages; in some other contexts, however, log x can be used to denote the common logarithm. Historically, the notations l. and l were in use at least since the 1730s; in the twentieth century, the notations Log and logh are also attested. The graph of the logarithm function enables one to glean some of the basic characteristics that logarithms to any base have in common. Chief among them: the logarithm of one is zero. What makes natural logarithms unique is found at that point where all logarithms are zero: at x = 1, the slope of the curve of the natural logarithm is precisely one.
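The slope property described above can be checked numerically (a sketch using a central-difference derivative; the helper name is ours):

```python
import math

def slope_at_one(log_fn, h: float = 1e-6) -> float:
    """Central-difference estimate of a logarithm's slope at x = 1,
    the point where every logarithm equals zero."""
    return (log_fn(1 + h) - log_fn(1 - h)) / (2 * h)

natural = slope_at_one(math.log)     # ~1, the defining property of ln
binary = slope_at_one(math.log2)     # ~1/ln(2), about 1.4427
common = slope_at_one(math.log10)    # ~1/ln(10), about 0.4343
```

Every base-b logarithm has slope 1/ln(b) at x = 1; the natural logarithm is the unique one for which that slope is exactly 1.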

11.
Binary logarithm
–
In mathematics, the binary logarithm (log2 n) is the power to which the number 2 must be raised to obtain the value n. That is, for any number x, x = log2 n ⟺ 2^x = n. For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, and the binary logarithm of 4 is 2. The binary logarithm is the logarithm to the base 2; the binary logarithm function is the inverse function of the power-of-two function. As well as log2, alternative notations for the binary logarithm include lg, ld, lb, and log. Binary logarithms can be used to calculate the length of the representation of a number in the binary numeral system, and in computer science they count the number of steps needed for binary search. Other areas in which the binary logarithm is frequently used include combinatorics, bioinformatics, the design of sports tournaments, and photography. Binary logarithms are included in the standard C mathematical functions and in other software packages. The integer part of a binary logarithm can be found using the find-first-set operation on an integer value; the fractional part of the logarithm can also be calculated efficiently. The powers of two have been known since antiquity; for instance they appear in Euclid's Elements, Props. The binary logarithm of a power of two is just its position in the sequence of powers of two. On this basis, Michael Stifel has been credited with publishing the first known table of binary logarithms, in 1544; his book Arithmetica Integra contains several tables that show the integers with their corresponding powers of two, and reversing the rows of these tables allows them to be interpreted as tables of binary logarithms. Earlier than Stifel, the 8th century Jain mathematician Virasena is credited with a precursor to the binary logarithm: Virasena's concept of ardhacheda has been defined as the number of times a given number can be divided evenly by two. 
This definition gives rise to a function that coincides with the binary logarithm on the powers of two, but it is different for other integers, giving the 2-adic order rather than the logarithm. The modern form of the binary logarithm, applying to any number (not just powers of two), was considered explicitly by Leonhard Euler in 1739. Euler established the application of binary logarithms to music theory, long before their more significant applications in information theory; as part of his work in this area, Euler published a table of binary logarithms of the integers from 1 to 8, to seven decimal digits of accuracy. Alternatively, the binary logarithm may be defined as ln n / ln 2, where ln is the natural logarithm; using the complex logarithm in this definition allows the binary logarithm to be extended to the complex numbers.
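In Python, the integer part of the binary logarithm described above falls out of the built-in bit_length method (a stand-in for the find-first-set instruction mentioned earlier):

```python
import math

def ilog2(n: int) -> int:
    """Integer part (floor) of log2(n) for a positive integer n:
    one less than the number of bits in n's binary representation."""
    if n <= 0:
        raise ValueError("n must be positive")
    return n.bit_length() - 1

assert ilog2(1) == 0 and ilog2(4) == 2 and ilog2(1000) == 9
# The change-of-base definition ln(n)/ln(2) agrees with log2:
assert math.isclose(math.log(1000) / math.log(2), math.log2(1000))
```

The floor value is also, as the article notes, one less than the length of n's binary representation — e.g. 1000 is ten binary digits long, so its binary logarithm lies between 9 and 10.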

12.
Natural unit
–
In physics, natural units are physical units of measurement based only on universal physical constants. For example, the elementary charge e is a natural unit of electric charge. Using natural units precludes the interpretation of an expression in terms of physical constants such as e and c; in this case, the omitted powers of e, c, and so on must be reinserted by dimensional analysis when converting back to conventional units. Natural units are "natural" because the origin of their definition comes only from properties of nature and not from any human construct. Planck units are often, without qualification, called natural units, although they constitute only one of several systems of natural units, albeit the best known such system. As with other systems of units, the units of a set of natural units will include definitions and values for length, mass, time, and temperature. It is possible to disregard temperature as an independent physical quantity, since it states the energy per degree of freedom of a particle; virtually every system of natural units normalizes Boltzmann's constant kB to 1. There are two common ways to relate charge to mass, length, and time: in Lorentz–Heaviside units, Coulomb's law is F = q1q2/4πr², and in Gaussian units, Coulomb's law is F = q1q2/r². Both possibilities are incorporated into different natural unit systems. Here α is the fine-structure constant, approximately 0.007297, and αG is the gravitational coupling constant, approximately 1.752×10⁻⁴⁵. Natural units are most commonly used by setting the units to one; for example, many natural unit systems include the equation c = 1 in the unit-system definition, where c is the speed of light. If a velocity v is half the speed of light, then since v = c/2 and c = 1, the equation v = 1/2 means the velocity v has the value one-half when measured in Planck units, or the velocity v is one-half the Planck unit of velocity. The equation c = 1 can be plugged in anywhere else; for example, Einstein's equation E = mc² can be rewritten in Planck units as E = m. 
This equation means "the energy of a particle, measured in Planck units of energy, equals the mass of the particle, measured in Planck units of mass." By comparison, the special relativity equation E2 = p2c2 + m2c4 appears somewhat complicated. Physical interpretation: natural unit systems automatically subsume dimensional analysis. For example, in Planck units, the units are defined by properties of quantum mechanics and gravity; not coincidentally, the Planck unit of length is approximately the distance at which quantum gravity effects become important. Likewise, atomic units are based on the mass and charge of an electron. No prototypes: a prototype is a physical object that defines a unit, such as the International Prototype Kilogram, a physical cylinder of metal whose mass is by definition exactly one kilogram. A prototype definition always has imperfect reproducibility between different places and between different times, and it is an advantage of natural unit systems that they use no prototypes. Less precise measurements: SI units are designed for use in precision measurements; for example, the second is defined by an atomic transition frequency in cesium atoms, because this transition frequency can be precisely reproduced with atomic clock technology
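The bookkeeping described above can be sketched in a few lines (the function names are illustrative, not a standard API): velocities are recorded as dimensionless fractions of c, and powers of c are reinserted only when converting back to SI.

```python
C = 299_792_458.0  # speed of light in m/s (exact SI value)

def to_natural_velocity(v_si_m_per_s):
    """Express an SI velocity as a dimensionless fraction of c (i.e. set c = 1)."""
    return v_si_m_per_s / C

def rest_energy_joules(mass_kg):
    """In natural units E = m; reinserting the factor c**2 recovers E = m c**2 in SI."""
    return mass_kg * C**2

print(to_natural_velocity(C / 2))   # 0.5 -- "half the speed of light" is just 1/2
print(rest_energy_joules(1.0))      # about 9.0e16 J for one kilogram
```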

13.
International System of Units
–
The International System of Units (SI) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative begun in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the centimetre–gram–second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution with just the metre and kilogram as standards of length and mass; in the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and the metre to international control; in 1921, the Treaty was extended to include all physical quantities, including the electrical units originally defined in 1893. The base units associated with these quantities were the metre, kilogram, second, ampere, kelvin, and candela; in 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the committee proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass, respectively. 
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work in the 1830s, the strength of the earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions to the magnetic field based on mass, length, and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the convention's prime purpose was a periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the definitive version of all official documents published by or on behalf of the CGPM is the French-language version

14.
Joule
–
The joule, symbol J, is a derived unit of energy in the International System of Units. It is equal to the energy transferred to an object when a force of one newton acts on that object in the direction of its motion through a distance of one metre. It is also the energy dissipated as heat when a current of one ampere passes through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule. One joule can also be defined as: the work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one coulomb-volt (this relationship can be used to define the volt); or the work required to produce one watt of power for one second, or one watt-second (this relationship can be used to define the watt). This SI unit is named after James Prescott Joule. As with every SI unit named for a person, the first letter of its symbol is uppercase, but the unit name itself is written in lowercase; note that "degree Celsius" conforms to this rule because the "d" is lowercase. — Based on The International System of Units, section 5.2. The CGPM has given the unit of energy the name joule. The use of newton-metres for torque and joules for energy is helpful to avoid misunderstandings and miscommunications. The distinction may be seen also in the fact that energy is a scalar: the dot product of a force vector and a displacement vector. By contrast, torque is a vector: the cross product of a distance vector and a force vector. Torque and energy are related to one another by the equation E = τθ, where E is energy, τ is torque, and θ is the angle swept. Since radians are dimensionless, it follows that torque and energy have the same dimensions. One joule in everyday life represents approximately: the energy required to lift a medium-size tomato 1 m vertically from the surface of the Earth; the energy released when that same tomato falls back down to the ground; the energy required to accelerate a 1 kg mass at 1 m·s−2 through a 1 m distance in space. 
The heat required to raise the temperature of 1 g of water by 0.24 °C; the typical energy released as heat by a person at rest every 1/60 s; the kinetic energy of a 50 kg human moving very slowly; the kinetic energy of a 56 g tennis ball moving at 6 m/s; the kinetic energy of an object with mass 1 kg moving at √2 ≈ 1.4 m/s; the electrical energy required to light a 1 W LED for 1 s. Since the joule is also a watt-second and the unit for electricity sales to homes is the kW·h, one kW·h is 3.6 million joules (3.6 MJ). For additional examples, see Orders of magnitude. The zeptojoule is equal to one sextillionth of one joule; 160 zeptojoules is approximately one electronvolt. The nanojoule is equal to one billionth of one joule; one nanojoule is about 1/160 of the kinetic energy of a flying mosquito
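Several of the everyday figures above follow directly from E = (1/2)mv2 and E = mgh; a quick Python check (the tomato mass of 102 g is an assumed illustrative value):

```python
def kinetic_energy(mass_kg, speed_m_per_s):
    """Kinetic energy in joules: E = (1/2) m v^2."""
    return 0.5 * mass_kg * speed_m_per_s**2

def lifting_energy(mass_kg, height_m, g=9.81):
    """Work done against gravity near Earth's surface: E = m g h."""
    return mass_kg * g * height_m

print(kinetic_energy(0.056, 6.0))    # ~1.0 J: the 56 g tennis ball at 6 m/s
print(kinetic_energy(1.0, 2**0.5))   # ~1.0 J: 1 kg moving at sqrt(2) m/s
print(lifting_energy(0.102, 1.0))    # ~1.0 J: a ~102 g tomato lifted 1 m
```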

15.
Kelvin
–
The kelvin is a unit of measure for temperature based upon an absolute scale. It is one of the seven base units in the International System of Units and is assigned the unit symbol K. The kelvin is defined as the fraction 1⁄273.16 of the thermodynamic temperature of the triple point of water. In other words, it is defined such that the triple point of water is exactly 273.16 K. The Kelvin scale is named after the Belfast-born, Glasgow University engineer and physicist William Thomson, Lord Kelvin. Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or typeset as a degree. The kelvin is the primary unit of temperature measurement in the physical sciences, but is often used in conjunction with the degree Celsius. The definition implies that absolute zero is equivalent to −273.15 °C; Kelvin calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm. When reference is made to the Kelvin scale, the word kelvin, which is normally a noun, functions adjectivally to modify the noun scale and is capitalized. As with most other SI unit symbols, there is a space between the numeric value and the kelvin symbol. Before the 13th CGPM in 1967–1968, the unit kelvin was called a "degree", distinguished from the other scales with either the adjective suffix Kelvin or with absolute, and its symbol was °K. The latter term, which was the official name from 1948 until 1954, was ambiguous since it could also be interpreted as referring to the Rankine scale. Before the 13th CGPM, the plural form was "degrees absolute". The 13th CGPM changed the name to simply kelvin. Its measured value was 0.01028 °C with an uncertainty of 60 µK. The use of SI-prefixed forms of the degree Celsius to express a temperature interval has not been widely adopted. 
In 2005 the CIPM embarked on a programme to redefine the kelvin using a more experimentally rigorous methodology; the definition current as of 2016 is unsatisfactory for temperatures below 20 K and above 1300 K. In particular, the committee proposed redefining the kelvin such that Boltzmann's constant takes the exact value 1.3806505×10−23 J/K. From a scientific point of view, this will link temperature to the rest of the SI and result in a stable definition that is independent of any particular substance; from a practical point of view, the redefinition will pass unnoticed. The kelvin is often used in the measure of the colour temperature of light sources. Colour temperature is based upon the principle that a black-body radiator emits light whose colour depends on the temperature of the radiator; black bodies with temperatures below about 4000 K appear reddish, whereas those above about 7500 K appear bluish
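Because the kelvin and the degree Celsius have the same magnitude and differ only by an offset of 273.15, conversion is a one-liner; a small sketch:

```python
def celsius_to_kelvin(t_celsius):
    """The kelvin and degree Celsius differ only by an offset of 273.15."""
    return t_celsius + 273.15

def kelvin_to_celsius(t_kelvin):
    return t_kelvin - 273.15

print(celsius_to_kelvin(-273.15))  # 0.0 -- absolute zero
print(celsius_to_kelvin(0.01))     # ~273.16 -- the triple point of water
print(kelvin_to_celsius(300.0))    # ~26.85 -- a warm room
```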

16.
Heat capacity
–
Heat capacity or thermal capacity is a measurable physical quantity equal to the ratio of the heat added to an object to the resulting temperature change. The SI unit of heat capacity is the joule per kelvin (J/K). Specific heat is the amount of heat needed to raise the temperature of one kilogram of mass by 1 kelvin. Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. The molar heat capacity is the heat capacity per unit amount of a pure substance; in some engineering contexts, the volumetric heat capacity is used. Other contributions can come from magnetic and electronic degrees of freedom in solids; for quantum-mechanical reasons, at any given temperature, some of these degrees of freedom may be unavailable, or only partially available, to store thermal energy. In such cases the heat capacity is a fraction of the maximum. As the temperature approaches absolute zero, the heat capacity of a system approaches zero. Quantum theory can be used to predict the heat capacity of simple systems. In a theory common in the early modern period, heat was thought to be a measurement of an invisible fluid; bodies were thought capable of holding an amount of this fluid, hence the term heat capacity. Heat is no longer considered a fluid, but rather a transfer of disordered energy; nevertheless, at least in English, the term heat capacity survives. In some other languages, the term thermal capacity is preferred. In the International System of Units, heat capacity has the unit joules per kelvin. If the temperature change is sufficiently small, the heat capacity may be assumed to be constant: C = Q/ΔT. Heat capacity is an extensive property, meaning it depends on the extent or size of the physical system studied: a sample containing twice the amount of substance as another sample requires the transfer of twice the amount of heat to achieve the same change in temperature. For many purposes it is convenient to report heat capacity as an intensive property. 
In practice, this is most often an expression of the property in relation to a unit of mass; in science and engineering, international standards now recommend that the term specific heat capacity always refer to division by mass
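The definitions above (C = Q/ΔT for small changes, and division by mass for the specific form) translate directly into code; the water figure of about 4184 J/(kg·K) is a standard textbook value used here only for illustration:

```python
def heat_capacity(q_joules, delta_t_kelvin):
    """Extensive heat capacity C = Q / dT (valid for small temperature changes)."""
    return q_joules / delta_t_kelvin

def specific_heat_capacity(q_joules, mass_kg, delta_t_kelvin):
    """Intensive form: heat per unit mass per kelvin, in J/(kg*K)."""
    return q_joules / (mass_kg * delta_t_kelvin)

# About 4184 J warms 1 kg of water by 1 K, so 2 kg of water has twice the
# heat capacity but the same specific heat capacity.
print(heat_capacity(2 * 4184.0, 1.0))                # 8368.0 J/K
print(specific_heat_capacity(2 * 4184.0, 2.0, 1.0))  # 4184.0 J/(kg*K)
```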

17.
Entropy
–
In statistical thermodynamics, entropy is a measure of the number of microscopic configurations Ω that a thermodynamic system can have when in a state specified by certain macroscopic variables. Formally, S = kB ln Ω. For example, a gas in a container with known volume, pressure, and temperature could have an enormous number of possible configurations of the collection of individual gas molecules. Each instantaneous configuration of the gas may be regarded as random. Entropy may be understood as a measure of disorder within a macroscopic system. The second law of thermodynamics states that an isolated system's entropy never decreases; such systems spontaneously evolve towards thermodynamic equilibrium, the state with maximum entropy. Non-isolated systems may lose entropy, provided their environment's entropy increases by at least that amount. Since entropy is a function of the state of the system, a change in entropy of a system is determined by its initial and final states; this applies whether the process is reversible or irreversible. However, irreversible processes increase the combined entropy of the system and its environment. The definition in terms of heat transfer, ΔS = ∫ dQ/T, is called the macroscopic definition of entropy because it can be used without regard to any microscopic description of the contents of a system. The concept of entropy has been found to be generally useful and has several other formulations. Entropy was discovered when it was noticed to be a quantity that behaves as a function of state. It has the dimension of energy divided by temperature, which has a unit of joules per kelvin in the International System of Units, but the entropy of a substance is usually given as an intensive property: either entropy per unit mass or entropy per unit amount of substance. In statistical mechanics this reflects that the ground state of a system is generally non-degenerate. Understanding the role of entropy in various processes requires an understanding of how it changes. 
It is often said that entropy is an expression of the disorder, or randomness, of a system. The second law is now often seen as an expression of the fundamental postulate of statistical mechanics through the modern definition of entropy. In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy; Carnot made the analogy with the way water falls in a water wheel. This was an early insight into the second law of thermodynamics. Clausius described entropy as the transformation-content, i.e. dissipative energy use, of a thermodynamic system; this was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. Henceforth, the essential problem in statistical thermodynamics, according to Erwin Schrödinger, has been to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories
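The statistical definition S = kB ln Ω can be checked numerically; the sketch below also shows the additivity inherited from the logarithm (doubling the number of microstates adds exactly kB ln 2):

```python
import math

K_B = 1.38064852e-23  # Boltzmann constant in J/K

def boltzmann_entropy(omega):
    """Statistical entropy S = k_B ln(Omega) for Omega equally likely microstates."""
    return K_B * math.log(omega)

# Doubling the number of accessible microstates adds exactly k_B ln 2 of entropy.
s1 = boltzmann_entropy(10**6)
s2 = boltzmann_entropy(2 * 10**6)
print(s2 - s1)  # ~9.57e-24 J/K, i.e. k_B * ln 2
```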

18.
Boltzmann's constant
–
The Boltzmann constant, which is named after Ludwig Boltzmann, is a physical constant relating the average kinetic energy of particles in a gas with the temperature of the gas. It is the gas constant R divided by the Avogadro constant NA. The Boltzmann constant has the dimension of energy divided by temperature, the same as entropy. The accepted value in SI units is 1.38064852×10−23 J/K. The Boltzmann constant, k, is a bridge between macroscopic and microscopic physics. Introducing the Boltzmann constant transforms the ideal gas law into an alternative form, pV = NkT; for n = 1 mol, N is equal to the number of particles in one mole. Given a thermodynamic system at an absolute temperature T, the average thermal energy carried by each microscopic degree of freedom in the system is on the order of magnitude of kT/2. In classical statistical mechanics, this average is predicted to hold exactly for homogeneous ideal gases. Monatomic ideal gases possess three degrees of freedom per atom, corresponding to the three spatial directions, which means a thermal energy of 3kT/2 per atom. This corresponds very well with experimental data. The thermal energy can be used to calculate the root-mean-square speed of the atoms, which turns out to be inversely proportional to the square root of the atomic mass. The root-mean-square speeds found at room temperature accurately reflect this, ranging from 1370 m/s for helium down to 240 m/s for xenon. Kinetic theory gives the average pressure p for an ideal gas as p = (1/3)(N/V) m⟨v2⟩. Combination with the ideal gas law pV = NkT shows that the average translational kinetic energy is (1/2) m⟨v2⟩ = (3/2) kT. Considering that the translational motion velocity vector v has three degrees of freedom gives the energy per degree of freedom as one third of that, i.e. kT/2. Diatomic gases, for example, possess a total of six degrees of freedom per molecule that are related to atomic motion. 
Again, it is the energy-like quantity kT that takes central importance; consequences of this include the Arrhenius equation in chemical kinetics. In statistical mechanics, the entropy S of a system is S = k ln W, where W is the number of microstates. This equation, which relates the microscopic details, or microstates, of the system to its macroscopic state, is so important that it is inscribed on Boltzmann's tombstone. The constant of proportionality k serves to make the statistical-mechanical entropy equal to the classical thermodynamic entropy of Clausius, ΔS = ∫ dQ/T. One could choose instead a rescaled dimensionless entropy such that S′ = ln W and ΔS′ = ∫ dQ/(kT). This is a more natural form, and this rescaled entropy exactly corresponds to Shannon's subsequent information entropy. The characteristic energy kT is thus the energy required to increase the rescaled entropy by one nat. The iconic terse form of the equation S = k ln W on Boltzmann's tombstone is in fact due to Planck
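The relations pV = NkT and (1/2)m⟨v2⟩ = (3/2)kT above translate directly into a short sketch; the helium atomic mass of 6.65×10−27 kg is an assumed illustrative value:

```python
K_B = 1.38064852e-23  # Boltzmann constant in J/K, as quoted in the text

def ideal_gas_pressure(n_particles, volume_m3, temperature_k):
    """Ideal gas law pV = NkT, solved for the pressure p."""
    return n_particles * K_B * temperature_k / volume_m3

def rms_speed(mass_kg, temperature_k):
    """From (1/2) m <v^2> = (3/2) k T: v_rms = sqrt(3kT/m)."""
    return (3.0 * K_B * temperature_k / mass_kg) ** 0.5

# One mole (~6.022e23 particles) in 22.4 L at 273.15 K gives about 1 atm.
print(ideal_gas_pressure(6.022e23, 0.0224, 273.15))  # ~1.0e5 Pa
# rms speed of a helium atom at room temperature
print(rms_speed(6.65e-27, 298.0))  # ~1.36e3 m/s
```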

19.
Shannon entropy
–
In information theory, systems are modeled by a transmitter, channel, and receiver. The transmitter produces messages that are sent through the channel; the channel modifies the message in some way; the receiver attempts to infer which message was sent. In this context, entropy is the expected value of the information contained in each message. Messages can be modeled by any flow of information. In a more technical sense, there are reasons to define information as the negative of the logarithm of the probability of possible events or messages. The amount of information of every event forms a random variable whose expected value is the entropy. Units of entropy are the shannon, nat, or hartley, depending on the base of the logarithm used to define it, though the shannon is commonly referred to as a bit. The logarithm of the probability distribution is useful as a measure of entropy because it is additive for independent sources; for instance, the entropy of a single coin toss is 1 shannon, whereas that of m tosses is m shannons. Generally, you need log2(n) bits to represent a variable that can take one of n values if n is a power of 2. If these values are equally probable, the entropy is equal to this number of bits; equality between the number of bits and shannons holds only while all outcomes are equally probable. If one of the events is more probable than the others, observation of that event is less informative. Conversely, rarer events provide more information when observed. Since observation of less probable events occurs more rarely, the net effect is that the entropy received from non-uniformly distributed data is less than log2(n). Entropy is zero when one outcome is certain. Shannon entropy quantifies all these considerations exactly when a probability distribution of the source is known. The meaning of the events observed does not matter in the definition of entropy; generally, entropy refers to disorder or uncertainty. Shannon entropy was introduced by Claude E. 
Shannon in his 1948 paper "A Mathematical Theory of Communication". Shannon entropy provides an absolute limit on the best possible average length of lossless encoding or compression of an information source. Entropy is a measure of unpredictability of the state, or equivalently, of its average information content. To get an intuitive understanding of these terms, consider the example of a political poll. Usually, such polls happen because the outcome of the poll is not already known. Now, consider the case that the same poll is performed a second time shortly after the first poll; since the outcome of the first poll is already known, the result of the second poll carries little new information. Now consider the example of a coin toss: assuming the probability of heads is the same as the probability of tails, the entropy of the coin toss is as high as it could be. Such a coin toss has one shannon of entropy, since there are two possible outcomes that occur with equal probability, and learning the actual outcome contains one shannon of information. By contrast, a toss of a coin that has two heads and no tails has zero entropy, since the coin will always come up heads
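The coin-toss examples above can be reproduced with a direct implementation of Shannon's formula H = −Σ p log2 p (the function name is ours):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in shannons (bits)."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h + 0.0  # map IEEE -0.0 to 0.0 for a certain outcome

print(shannon_entropy([0.5, 0.5]))  # 1.0 -- a fair coin toss carries one shannon
print(shannon_entropy([1.0]))       # 0.0 -- a two-headed coin: outcome is certain
print(shannon_entropy([0.9, 0.1]))  # ~0.47 -- a biased coin is less informative
```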

20.
Natural logarithm
–
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is written as ln x, loge x, or sometimes, if the base e is implicit, simply log x. Parentheses are sometimes added for clarity, giving ln(x), loge(x) or log(x); this is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity. The natural logarithm of x is the power to which e would have to be raised to equal x. The natural log of e itself, ln(e), is 1, because e1 = e, while the natural logarithm of 1, ln(1), is 0, since e0 = 1. The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/x from 1 to a. The simplicity of this definition is matched in many other formulas involving the natural logarithm. Like all logarithms, the natural logarithm maps multiplication into addition: ln(xy) = ln(x) + ln(y). Logarithms in other bases differ only by a constant multiplier from the natural logarithm; for instance, the binary logarithm is the natural logarithm divided by ln 2, the natural logarithm of 2. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity; for example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and the sciences and are used in finance to solve problems involving compound interest. By the Lindemann–Weierstrass theorem, the natural logarithm of any positive algebraic number other than 1 is a transcendental number. The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa; their work involved quadrature of the hyperbola xy = 1 by determination of the area of hyperbolic sectors. 
Their solution generated the requisite hyperbolic logarithm function having properties now associated with the natural logarithm. The notations ln x and loge x both refer unambiguously to the natural logarithm of x, and log x without an explicit base may also refer to the natural logarithm. This usage is common in mathematics and some scientific contexts, as well as in many programming languages. In some other contexts, however, log x can be used to denote the common logarithm. Historically, the notations l. and l were in use at least since the 1730s; in the twentieth century, the notations Log and logh are attested. The graph of the logarithm function enables one to glean some of the basic characteristics that logarithms to any base have in common; chief among them, the logarithm of one is zero. What makes natural logarithms unique is found at that point where all logarithms are zero: at x = 1 the slope of the graph of the natural logarithm is precisely one
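A short numerical check of the properties just listed: multiplication maps to addition, other bases differ by the constant factor 1/ln 2, and the slope of ln at x = 1 is one:

```python
import math

# ln maps multiplication into addition: ln(xy) = ln(x) + ln(y)
a, b = 3.7, 12.5
print(math.isclose(math.log(a * b), math.log(a) + math.log(b)))  # True

# Other bases differ by a constant multiplier: log2(x) = ln(x) / ln(2)
x = 10.0
print(math.isclose(math.log2(x), math.log(x) / math.log(2)))     # True

# The slope of ln at x = 1 is exactly 1 (central-difference estimate)
h = 1e-6
slope = (math.log(1 + h) - math.log(1 - h)) / (2 * h)
print(round(slope, 6))  # 1.0
```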

21.
Alan Turing
–
Alan Mathison Turing OBE FRS was an English computer scientist, mathematician, logician, cryptanalyst and theoretical biologist. Turing is widely considered to be the father of computer science. During the Second World War, Turing worked for the Government Code and Cypher School at Bletchley Park; for a time he led Hut 8, the section responsible for German naval cryptanalysis. After the war, he worked at the National Physical Laboratory. He wrote a paper on the chemical basis of morphogenesis, and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s. Turing was prosecuted in 1952 for homosexual acts, which were then criminal offences under the Labouchere Amendment; he accepted chemical castration treatment, with DES, as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday; an inquest determined his death as suicide, but it has been noted that the known evidence is also consistent with accidental poisoning. In 2009, following an Internet campaign, British Prime Minister Gordon Brown made a public apology on behalf of the British government for the appalling way Turing was treated. Queen Elizabeth II granted him a pardon in 2013. The Alan Turing law is now a term for a 2017 law in the United Kingdom that retroactively pardons men cautioned or convicted under historical legislation that outlawed homosexual acts. Turing's father, Julius, was the son of a clergyman, the Rev. John Robert Turing, from a Scottish family of merchants that had been based in the Netherlands. Turing's mother, Julius's wife, was Ethel Sara, daughter of Edward Waller Stoney; the Stoneys were a Protestant Anglo-Irish gentry family from both County Tipperary and County Longford, while Ethel herself had spent much of her childhood in County Clare. Julius's work with the ICS brought the family to British India. Turing had an elder brother, John. 
At Hastings, Turing stayed at Baston Lodge, Upper Maze Hill, St Leonards-on-Sea. Very early in life, Turing showed signs of the genius that he was later to display prominently. His parents purchased a house in Guildford in 1927, and Turing lived there during school holidays; the location is also marked with a blue plaque. Turing's parents enrolled him at St Michael's, a day school at 20 Charles Road, St Leonards-on-Sea; the headmistress recognised his talent early on, as did many of his subsequent educators. From January 1922 to 1926, Turing was educated at Hazelhurst Preparatory School; in 1926, at the age of 13, he went on to Sherborne School, an independent school in the market town of Sherborne in Dorset. Turing's natural inclination towards mathematics and science did not earn him respect from some of the teachers at Sherborne, and his headmaster wrote to his parents: "I hope he will not fall between two stools. If he is to stay at school, he must aim at becoming educated."

22.
Luminance
–
Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area. The SI unit for luminance is the candela per square metre (cd/m2); a non-SI term for the same unit is the nit. The CGS unit of luminance is the stilb, which is equal to one candela per square centimetre, or 10 kcd/m2. Luminance is often used to characterize emission or reflection from flat, diffuse surfaces. The luminance indicates how much luminous power will be detected by an eye looking at the surface from a particular angle of view; luminance is thus an indicator of how bright the surface will appear. In this case, the solid angle of interest is the solid angle subtended by the eye's pupil. Luminance is used in the video industry to characterize the brightness of displays. A typical computer display emits between 50 and 300 cd/m2; the sun has a luminance of about 1.6×109 cd/m2 at noon. Luminance is invariant in geometric optics; this means that for an ideal optical system, the luminance at the output is the same as the input luminance. For real, passive optical systems, the output luminance is at most equal to the input. As an example, if one uses a lens to form an image that is smaller than the source object, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. The light at the image plane, however, fills a larger solid angle, so the luminance comes out to be the same, assuming there is no loss at the lens. The image can never be brighter than the source. If light travels through a lossless medium, the luminance does not change along a given light ray. In the case of a perfectly diffuse reflector, the luminance is isotropic, and the relationship is simply Lv = Ev R/π, where Ev is the illuminance and R the reflectance. A variety of units have been used for luminance besides the candela per square metre. 
One candela per square metre is equal to: 10−4 stilb, π apostilbs, π×10−4 lambert, and 0.292 foot-lambert. Retinal damage can occur when the eye is exposed to high luminance; damage can occur because of local heating of the retina. Photochemical effects can also cause damage, especially at short wavelengths.
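The diffuse-reflector relation Lv = Ev R/π and the foot-lambert conversion factor of 0.292 can be sketched as follows (the 500 lux illuminance and 80% reflectance are assumed, illustrative numbers):

```python
import math

def diffuse_luminance(illuminance_lux, reflectance):
    """Luminance of a perfectly diffuse (Lambertian) reflector: L_v = E_v R / pi."""
    return illuminance_lux * reflectance / math.pi

def cd_per_m2_to_foot_lamberts(luminance_cd_m2):
    """Convert cd/m^2 to foot-lamberts using the 0.292 factor."""
    return luminance_cd_m2 * 0.292

l_v = diffuse_luminance(500.0, 0.8)
print(round(l_v, 1))                              # ~127.3 cd/m^2
print(round(cd_per_m2_to_foot_lamberts(l_v), 1))  # ~37.2 foot-lamberts
```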

23.
International Electrotechnical Commission
–
The IEC also manages three global conformity assessment systems that certify whether equipment, systems or components conform to its International Standards. The first International Electrical Congress took place in 1881 at the International Exposition of Electricity; at that time the International System of Electrical and Magnetic Units was agreed to. The IEC was instrumental in developing and distributing standards for units of measurement, particularly the gauss and the hertz. It also first proposed a system of standards, the Giorgi System, which ultimately became the SI, or Système International d'unités. In 1938, it published a multilingual international vocabulary to unify terminology relating to electrical and electronic technology; this effort continues, and the International Electrotechnical Vocabulary remains an important work in the electrical and electronic industries. The CISPR (in English, the International Special Committee on Radio Interference) is one of the groups founded by the IEC. Originally located in London, the commission moved to its current headquarters in Geneva in 1948. It has regional centres in Asia-Pacific, Latin America and North America. Today, the IEC is the world's leading international organization in its field, and its standards are adopted as national standards by its members. The work is done by some 10,000 electrical and electronics experts from industry, government, academia and test labs. IEC standards have numbers in the range 60000–79999 and their titles take a form such as IEC 60417, Graphical symbols for use on equipment. Following the Dresden Agreement with CENELEC, the numbers of older IEC standards were converted in 1997 by adding 60000; for example, IEC 27 became IEC 60027. Standards of the 60000 series are also preceded by EN to indicate that the IEC standard is also adopted by CENELEC as a European standard. 
The IEC cooperates closely with the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). Standards developed jointly with ISO, such as ISO/IEC 26300, ISO/IEC 27001, and the CASCO ISO/IEC 17000 series, carry the acronym of both organizations. The use of the ISO/IEC prefix covers publications from ISO/IEC Joint Technical Committee 1 – Information Technology, as well as conformity assessment standards developed by ISO CASCO. Other standards developed in cooperation between IEC and ISO are assigned numbers in the 80000 series, such as IEC 82045-1. IEC standards are also being adopted by other certifying bodies such as BSI, CSA, UL & ANSI/INCITS, SABS, SAI, and SPC/GB; IEC standards adopted by other certifying bodies may have some noted differences from the original IEC standard. The IEC is made up of members, called national committees (NCs), which are constituted in different ways: some NCs are public sector only, and some are a combination of public and private sector. About 90% of those who prepare IEC standards work in industry

24.
New York City
–
The City of New York, often called New York City or simply New York, is the most populous city in the United States, with an estimated 2015 population of 8,550,405 distributed over an area of about 302.6 square miles. It is located at the southern tip of the state of New York. Home to the headquarters of the United Nations, New York is an important center for international diplomacy and has been described as the cultural and financial capital of the world. Situated on one of the world's largest natural harbors, New York City consists of five boroughs – Brooklyn, Queens, Manhattan, The Bronx, and Staten Island – which were consolidated into a single city in 1898. In 2013, the MSA produced a gross metropolitan product of nearly US$1.39 trillion, and in 2012 the CSA generated a GMP of over US$1.55 trillion; NYC's MSA and CSA GDPs are higher than those of all but 11 and 12 countries, respectively. New York City traces its origin to its 1624 founding in Lower Manhattan as a trading post by colonists of the Dutch Republic, and it was named New Amsterdam in 1626. The city and its surroundings came under English control in 1664 and were renamed New York after King Charles II of England granted the lands to his brother. New York served as the capital of the United States from 1785 until 1790 and has been the country's largest city since 1790. The Statue of Liberty greeted millions of immigrants as they came to the Americas by ship in the late 19th and early 20th centuries and is a symbol of the United States and its democracy. In the 21st century, New York has emerged as a global node of creativity, entrepreneurship, and social tolerance. Several sources have ranked New York the most photographed city in the world, and the names of many of the city's bridges, tapered skyscrapers, and parks are known around the world. 
Manhattan's real estate market is among the most expensive in the world, and Manhattan's Chinatown incorporates the highest concentration of Chinese people in the Western Hemisphere, with multiple signature Chinatowns developing across the city. Providing continuous 24/7 service, the New York City Subway is one of the most extensive metro systems worldwide, with 472 stations in operation. Over 120 colleges and universities are located in New York City, including Columbia University, New York University, and Rockefeller University. During the Wisconsinan glaciation, the New York City region was situated at the edge of a large ice sheet over 1,000 feet in depth. The ice sheet scraped away large amounts of soil, leaving the bedrock that serves as the foundation for much of New York City today. Later on, movement of the ice sheet would contribute to the separation of what are now Long Island and Staten Island. The first documented visit by a European was in 1524 by Giovanni da Verrazzano, a Florentine explorer in the service of the French crown, who claimed the area for France and named it Nouvelle Angoulême. Heavy ice kept a later explorer from going further, and he returned to Spain in August. Henry Hudson subsequently sailed up what the Dutch would name the North River, which Hudson had first named the Mauritius after Maurice, Prince of Orange

25.
Simon & Schuster
–
Simon & Schuster, Inc., a subsidiary of CBS Corporation, is an American publishing company founded in New York City in 1924 by Richard Simon and Max Schuster. As of 2016, Simon & Schuster publishes 2,000 titles annually under 35 different imprints. In 1924, Richard Simon's aunt, a crossword puzzle enthusiast, asked whether there was a book of New York World crossword puzzles, which were very popular at the time. After discovering that none had been published, Simon and Max Schuster decided to launch a company to exploit the opportunity; at the time, Simon was a piano salesman and Schuster was editor of an automotive trade magazine. They pooled US$8,000 to start a company to publish crossword puzzles. Fad publishing became the business model for the new publishing house, which set out to exploit current fads and trends and publish books with commercial appeal. Instead of signing authors with a manuscript, they came up with their own ideas. In the 1930s, the company moved to what was known as Publishers Row on Park Avenue in Manhattan. In 1939, with Robert Fair de Graff, Simon & Schuster founded Pocket Books. In 1942, Simon & Schuster, or Essandess as it was called in the initial announcement, launched the Little Golden Books series in cooperation with the Artists and Writers Guild. Simon & Schuster's partner in the venture was the Western Printing and Lithographing Company; Western Printing bought out Simon & Schuster's interest in 1958. In 1944, Marshall Field III, owner of the Chicago Sun, purchased Simon & Schuster. Following Field's death in 1957, his heirs sold the company back to Richard Simon and Max Schuster, while Leon Shimkin and James Jacobson acquired Pocket Books. In the 1950s and 1960s, many publishers, including Simon & Schuster, turned toward educational publishing due to the boom market. Pocket Books focused on paperbacks for the mass market instead of textbooks. 
By 1964 it had published over 200 titles and was expected to put out another 400 by the end of that year. Books published under the imprint included classic reprints such as Lorna Doone, Ivanhoe, Tom Sawyer, Huckleberry Finn, and Robinson Crusoe. In 1966, Max Schuster retired and sold his half of Simon & Schuster to Leon Shimkin, who then merged Simon & Schuster with Pocket Books under the name of Simon & Schuster. Among its many bestsellers was Joseph Heller's Catch-22. In 1976, Gulf+Western, headed by Charles Bluhdorn, acquired S&S, which was grossing about US$50 million a year, for $11 million, most of it in Gulf+Western stock. After the death of Bluhdorn in 1983, Simon & Schuster made the decision to diversify. Bluhdorn's successor Martin Davis told The New York Times, "Society was undergoing dramatic changes, so that there was a greater need for textbooks, maps and educational information. We saw the opportunity to diversify into areas which are more stable." In 1984, CEO Richard E. Snyder acquired Esquire Corporation, buying everything. Prentice Hall was brought into the company fold in 1985 for over $700 million, and Martin Davis said that Prentice Hall became the road map for remodeling the company and a catalyst for change. This acquisition was followed by Silver Burdett in 1986 and mapmaker Gousha in 1987. Part of the acquisition included educational publisher Allyn & Bacon, which, according to Michael Korda, became the nucleus of S&S's educational and informational business

26.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108; the United Kingdom continued to use the 9-digit SBN code until 1974. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISO online facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340013818, with 340 indicating the publisher and 01381 their serial number. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Numbers (EAN-13). 
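The SBN-to-ISBN conversion and the check-digit rule behind it can be sketched in a few lines of Python (an illustrative snippet; the function names are ours, not part of any standard library):

```python
def isbn10_check_digit(first9: str) -> str:
    """Compute the ISBN-10 check digit for the first nine digits.

    Digits are weighted 10 down to 2; the check digit brings the
    total weighted sum to a multiple of 11 (a value of 10 is 'X').
    """
    total = sum(int(d) * w for d, w in zip(first9, range(10, 1, -1)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def sbn_to_isbn10(sbn: str) -> str:
    """Convert a 9-digit SBN to an ISBN-10 by prefixing '0'.

    The check digit is unchanged, because the added leading zero
    contributes 0 * 10 to the weighted sum.
    """
    return "0" + sbn

# The Hodder example from the text: SBN 340013818 -> ISBN 0-340-01381-8
isbn10 = sbn_to_isbn10("340013818")
assert isbn10 == "0340013818"
assert isbn10_check_digit(isbn10[:9]) == isbn10[9]  # check digit 8 still valid
```

This also shows why no recalculation is needed when converting an SBN: the prefixed zero is multiplied by the highest weight and so adds nothing to the checksum.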
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is likewise done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for a country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations, such as bibliographic data providers, that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker
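Because 13-digit ISBNs are EAN-13 compatible, a 10-digit ISBN maps to its 13-digit form by prefixing the Bookland code 978 to its first nine digits and recomputing the check digit under EAN-13 rules. A minimal Python sketch of that conversion (function names are illustrative, not from any standard library):

```python
def ean13_check_digit(first12: str) -> str:
    """EAN-13 check digit: digits are weighted 1, 3, 1, 3, ... from
    the left; the check digit rounds the sum up to a multiple of 10."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert a 10-digit ISBN to its Bookland EAN-13 form: prefix
    978, keep the first nine data digits, recompute the check digit."""
    body = "978" + isbn10.replace("-", "")[:9]
    return body + ean13_check_digit(body)

# The Hodder example: ISBN 0-340-01381-8 becomes 978-0-340-01381-6
print(isbn10_to_isbn13("0-340-01381-8"))  # 9780340013816
```

Note that the check digit changes in the conversion (8 becomes 6), since ISBN-10 uses a modulus-11 scheme while EAN-13 uses modulus 10.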

27.
OCLC
–
The Online Computer Library Center (OCLC) is a US-based nonprofit cooperative organization dedicated to the public purposes of furthering access to the world's information and reducing information costs. It was founded in 1967 as the Ohio College Library Center. OCLC and its member libraries cooperatively produce and maintain WorldCat, the largest online public access catalog in the world. OCLC is funded mainly by the fees that libraries pay for its services. The group first met on July 5, 1967, on the campus of the Ohio State University to sign the articles of incorporation for the nonprofit organization, and it hired Frederick G. Kilgour, a former Yale University medical school librarian. Kilgour wished to merge the latest information storage and retrieval system of the time, the computer, with the oldest, the library. The goal of the network and database was to bring libraries together to cooperatively keep track of the world's information in order to best serve researchers and scholars. The first library to do online cataloging through OCLC was the Alden Library at Ohio University, on August 26, 1971; this was the first occurrence of online cataloging by any library worldwide. Membership in OCLC is based on use of services and contribution of data. Between 1967 and 1977, OCLC membership was limited to institutions in Ohio, but in 1978 a new governance structure was established that allowed institutions from other states to join. In 2002, the structure was again modified to accommodate participation from outside the United States. As OCLC expanded services in the United States outside of Ohio, it relied on establishing strategic partnerships with networks, organizations that provided training and support; by 2008, there were 15 independent United States regional service providers. 
OCLC networks played a key role in OCLC governance, with networks electing delegates to serve on the OCLC Members Council; in early 2009, OCLC negotiated new contracts with the former networks and opened a centralized support center. OCLC provides bibliographic, abstract, and full-text information to anyone. OCLC and its member libraries cooperatively produce and maintain WorldCat, the OCLC Online Union Catalog, the largest online public access catalog in the world, which has holding records from public and private libraries worldwide. In October 2005, the OCLC technical staff began a wiki project, WikiD, allowing readers to add commentary and structured-field information associated with any WorldCat record. The Online Computer Library Center acquired the trademark and copyrights associated with the Dewey Decimal Classification System when it bought Forest Press in 1988. A browser for books with their Dewey Decimal Classifications was available until July 2013, when it was replaced by the Classify Service. The reference management service QuestionPoint provides libraries with tools to communicate with users; this around-the-clock reference service is provided by a cooperative of participating global libraries. OCLC has produced catalog cards for members since 1971 with its shared online catalog. OCLC commercially sells software, e.g. CONTENTdm for managing digital collections. OCLC has been conducting research for the library community for more than 30 years. In accordance with its mission, OCLC makes its research outcomes known through various publications, including journal articles, reports, newsletters, and presentations, which are available through the organization's website; the most recent publications are displayed first, and archived resources remain available. Membership Reports comprise a number of significant reports on topics ranging from virtual reference in libraries to perceptions about library funding