1.
Units of measurement
–
A unit of measurement is a definite magnitude of a quantity, defined and adopted by convention or by law, that is used as a standard for measurement of the same kind of quantity. Any other quantity of that kind can be expressed as a multiple of the unit of measurement. For example, length is a physical quantity, and the metre is a unit of length that represents a definite predetermined length. When we say 10 metres, we actually mean 10 times the definite predetermined length called the metre. The definition, agreement, and practical use of units of measurement have played a crucial role in human endeavour from early ages up to this day. Different systems of units used to be very common; now there is a global standard, the International System of Units (SI), the modern form of the metric system. In trade, weights and measures are often a subject of regulation, to ensure fairness. The International Bureau of Weights and Measures is tasked with ensuring worldwide uniformity of measurements, and metrology is the science of developing nationally and internationally accepted units of weights and measures. In physics and metrology, units are standards for measurement of quantities that need clear definitions to be useful. Reproducibility of experimental results is central to the scientific method, and a standard system of units facilitates this. Scientific systems of units are a refinement of the concept of weights and measures. Science, medicine, and engineering often use larger and smaller units of measurement than those used in everyday life and indicate them more precisely. The judicious selection of units of measurement can aid researchers in problem solving. In the social sciences, there are no standard units of measurement, and the theory and practice of measurement is studied in psychometrics and the theory of conjoint measurement.
Units of measurement were among the earliest tools invented by humans. Primitive societies needed rudimentary measures for many tasks: constructing dwellings of an appropriate size and shape, fashioning clothing, or bartering food or raw materials. Weights and measures are mentioned in the Bible, where it is a commandment to be honest and use fair measures. As of the 21st century, multiple unit systems are used all over the world, such as the United States customary system and the British customary system; however, the United States is the only industrialized country that has not yet completely converted to the metric system. The systematic effort to develop a universally acceptable system of units dates back to 1790, when the French National Assembly charged the French Academy of Sciences to come up with such a unit system. After the Metre Convention was signed in 1875, the General Conference on Weights and Measures (CGPM) produced the current SI system, which was adopted in 1954 at the 10th CGPM. Currently, the United States is a society that uses both the SI system and the US customary system.
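The idea that any value of a quantity is a multiple of an agreed unit can be sketched in code. The following is a minimal illustration, not a full unit system: conversion passes through a base unit (the metre), and the factors for foot and inch are the exact values fixed by international agreement.

```python
# Minimal sketch of unit conversion: every length is a multiple of the
# base unit (the metre), so converting reduces to a ratio of factors.
TO_METRES = {
    "metre": 1.0,
    "kilometre": 1000.0,
    "foot": 0.3048,   # exact by the 1959 international yard and pound agreement
    "inch": 0.0254,   # exact: 1 in = 25.4 mm
}

def convert(value, from_unit, to_unit):
    """Convert a length by passing through the base unit (metres)."""
    return value * TO_METRES[from_unit] / TO_METRES[to_unit]

print(convert(10, "metre", "foot"))   # 10 m expressed in feet (about 32.81)
```

Adding a new unit only requires one new conversion factor to the base unit, which is exactly the economy a standard unit of measurement provides.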
2.
Units of information
–
In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are used to measure the information content or entropy of random variables. The most common units are the bit, the capacity of a system that can exist in only two states, and the byte, which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes or the newer IEC binary prefixes; information capacity is a dimensionless quantity. In particular, if b is an integer, then the unit is the amount of information that can be stored in a system with b possible states. When b is 2, the unit is the shannon, equal to the content of one bit. A system with 8 possible states, for example, can store up to log₂ 8 = 3 bits of information. Other units that have been named include: for base b = 3, the trit, equal to log₂ 3 ≈ 1.585 bits; for base b = 10, the decimal digit, also called the hartley, ban, decit, or dit; and for base b = e, the base of natural logarithms, the nat, nit, or nepit, worth log₂ e ≈ 1.443 bits. Several conventional names are used for collections or groups of bits. A byte can represent 256 distinct values, such as the integers 0 to 255, or −128 to 127. The IEEE 1541-2002 standard specifies B as the symbol for byte. Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes. A group of four bits, or half a byte, is sometimes called a nibble or nybble; this unit is most often used in the context of number representations. Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is defined by the size of the registers in the computer's CPU.
Some machine instructions and computer number formats use two words (a double word) or four words (a quad word). Computer memory caches usually operate on blocks of memory that consist of several consecutive words; these units are customarily called cache blocks or, in CPU caches, cache lines. Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages. Terms for large quantities of bits can be formed using the range of SI prefixes for powers of 10, e.g. kilo- = 10³ = 1000 and mega- = 10⁶ = 1000000. These prefixes are often used for multiples of bytes, as in kilobyte and megabyte.
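The unit conversions above all reduce to logarithms: a system with N states holds log_b N units of base-b information, and any named unit is worth log₂ b bits. A short sketch using the standard library:

```python
import math

# Capacity of a system with n equally likely states, in base-b units.
def capacity(n_states, base):
    return math.log(n_states, base)

print(capacity(8, 2))        # 3.0 bits: a system with 8 states stores log2(8) = 3 bits
print(math.log2(3))          # one trit    = log2(3)  ≈ 1.585 bits
print(math.log2(math.e))     # one nat     = log2(e)  ≈ 1.443 bits
print(math.log2(10))         # one hartley = log2(10) ≈ 3.322 bits
```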
3.
Binary number
–
The base-2 system is a positional notation with a radix of 2, which uses only two symbols, typically 0 and 1. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by almost all modern computers. Each digit is referred to as a bit. The modern binary number system was devised by Gottfried Leibniz in 1679 and appears in his article Explication de l'Arithmétique Binaire. Systems related to binary numbers have appeared earlier in multiple cultures, including ancient Egypt, China, and India; Leibniz was specifically inspired by the Chinese I Ching. The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions and Horus-Eye fractions, and the method used for ancient Egyptian multiplication is also closely related to binary numbers. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus. The I Ching dates from the 9th century BC in China. The binary notation in the I Ching is used to interpret its quaternary divination technique and is based on the taoistic duality of yin and yang. Eight trigrams and a set of 64 hexagrams, analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou Dynasty of ancient China. The Song Dynasty scholar Shao Yong rearranged the hexagrams in a format that resembles modern binary numbers. The Indian scholar Pingala developed a binary system for describing prosody, using binary numbers in the form of short and long syllables; Pingala's Hindu classic titled Chandaḥśāstra describes the formation of a matrix in order to give a unique value to each meter. The binary representations in Pingala's system increase towards the right. The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450. Slit drums with binary tones are used to encode messages across Africa, and sets of binary combinations similar to the I Ching have also been used in traditional African divination systems such as Ifá as well as in medieval Western geomancy.
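The ancient Egyptian multiplication method mentioned above survives today as "peasant multiplication", and its connection to binary is direct: one factor is repeatedly halved while the other is doubled, and the doublings that correspond to 1-bits of the first factor are summed. A minimal sketch:

```python
# Egyptian (peasant) multiplication: sum the doublings of b that line up
# with the 1-bits in the binary expansion of a.
def egyptian_multiply(a, b):
    total = 0
    while a > 0:
        if a & 1:          # lowest binary digit of a is 1
            total += b
        a >>= 1            # halve a (shift off the lowest binary digit)
        b <<= 1            # double b
    return total

print(egyptian_multiply(13, 21))   # 273, since 13 = 0b1101 = 8 + 4 + 1
```

The algorithm works precisely because every integer has a unique binary expansion, which is the sense in which the Rhind Papyrus method anticipates base 2.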
The base-2 system utilized in geomancy had long been applied in sub-Saharan Africa. Leibniz's system uses 0 and 1, like the modern binary numeral system. Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own beliefs as a Christian. Binary numerals were central to Leibniz's theology; he believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing, and wrote that what is not easy to impart to the pagans is creation ex nihilo through God's almighty power. In 1854, British mathematician George Boole published a paper detailing an algebraic system of logic that would become known as Boolean algebra.
4.
Base e
–
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is written as ln x, logₑ x, or sometimes simply log x if the base e is implicit. Parentheses are sometimes added for clarity, giving ln(x), logₑ(x), or log(x); this is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity. The natural logarithm of x is the power to which e would have to be raised to equal x. The natural log of e itself, ln e, is 1, because e¹ = e, while the natural logarithm of 1, ln 1, is 0, since e⁰ = 1. The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/x from 1 to a. The simplicity of this definition is matched in many other formulas involving the natural logarithm. Like all logarithms, the natural logarithm maps multiplication into addition: ln(xy) = ln x + ln y. Logarithms in other bases differ only by a constant multiplier from the natural logarithm; for instance, the binary logarithm is the natural logarithm divided by ln 2, the natural logarithm of 2. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity; for example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and the sciences and are used in finance to solve problems involving compound interest. By the Lindemann–Weierstrass theorem, the natural logarithm of any positive algebraic number other than 1 is a transcendental number. The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa; their work involved the quadrature of the hyperbola xy = 1 by determination of the area of hyperbolic sectors.
Their solution generated the requisite hyperbolic logarithm function, having properties now associated with the natural logarithm. The notations ln x and logₑ x both refer unambiguously to the natural logarithm of x, and log x without an explicit base may also refer to the natural logarithm. This usage is common in mathematics and some scientific contexts as well as in many programming languages; in some other contexts, however, log x can be used to denote the common logarithm (base 10). Historically, the notations l. and l were in use at least since the 1730s; in the twentieth century, the notations Log and logh are attested. The graph of the logarithm function enables one to glean some of the basic characteristics that logarithms to any base have in common. Chief among them: the logarithm of one is zero. What makes natural logarithms unique is to be found at that very point where all logarithms are zero: there, the slope of the curve of the natural logarithm is also precisely one.
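The properties listed above can be checked numerically with the standard library, which also illustrates the naming convention: in Python, as in many programming languages, `log` with no base means the natural logarithm.

```python
import math

print(math.log(math.e))    # ln(e) = 1.0, since e^1 = e
print(math.log(1))         # ln(1) = 0.0, since e^0 = 1

# Logarithms turn multiplication into addition: ln(xy) = ln(x) + ln(y).
x, y = 3.0, 7.0
print(math.log(x * y), math.log(x) + math.log(y))

# Other bases differ by a constant multiplier: log2(x) = ln(x) / ln(2).
print(math.log2(10), math.log(10) / math.log(2))
```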
5.
Decimal
–
This article aims to be an accessible introduction; for the mathematical definition, see Decimal representation. The decimal numeral system has ten as its base, which, in decimal, is written 10, as is the base in every positional numeral system. It is the base most widely used by modern civilizations. Decimal fractions have terminating decimal representations, while other fractions have repeating decimal representations. Decimal notation is the writing of numbers in a base-ten numeral system. Examples are Brahmi numerals, Greek numerals, Hebrew numerals, and Roman numerals. Roman numerals have symbols for the decimal powers and secondary symbols for half these values. Brahmi numerals have symbols for the nine numbers 1–9, the nine decades 10–90, plus a symbol for 100. Chinese numerals have symbols for 1–9 and additional symbols for powers of ten, which in modern usage reach 10⁷². Positional decimal systems include a zero and use symbols for the ten values to represent any number; positional notation uses positions for each power of ten: units, tens, hundreds, thousands, etc. The position of each digit within a number denotes the power of ten by which that digit is multiplied, and each position has a value ten times that of the position to its right. There were at least two independent sources of positional decimal systems in ancient civilization, one of them the Chinese counting rod system. Ten is the count of fingers and thumbs on both hands, and the English word digit, as well as its translation in many languages, is also the anatomical term for fingers and toes. In English, decimal means tenth and decimate means reduce by a tenth. The symbols used in different areas are not identical; for instance, Western Arabic numerals differ from the forms used by other Arab cultures. A decimal fraction is a fraction whose denominator is a power of ten. E.g., the decimal fractions 8/10, 1489/100, 24/100000, and 58900/10000 are expressed in decimal notation as 0.8, 14.89, 0.00024, and 5.8900 respectively.
In English-speaking, some Latin American, and many Asian countries, a period or raised period is used as the decimal separator; in many other countries, particularly in Europe, a comma is used. The integer part, or integral part, of a number is the part to the left of the decimal separator; the part from the separator to the right is the fractional part. It is usual for a number that consists only of a fractional part to have a leading zero in its notation. Any rational number with a denominator whose only prime factors are 2 and/or 5 may be expressed as a decimal fraction and has a finite decimal expansion: 1/2 = 0.5, 1/20 = 0.05, 1/5 = 0.2, 1/50 = 0.02, 1/4 = 0.25, 1/40 = 0.025, 1/25 = 0.04, 1/8 = 0.125, 1/125 = 0.008, 1/10 = 0.1.
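The terminating-expansion rule above is easy to test mechanically: a fraction 1/n terminates in decimal exactly when n has no prime factors other than 2 and 5. A short sketch:

```python
# 1/n has a finite decimal expansion iff n = 2^a * 5^b for some a, b >= 0.
def terminates(n):
    for p in (2, 5):
        while n % p == 0:
            n //= p
    return n == 1          # only factors of 2 and 5 were present

print([n for n in range(2, 13) if terminates(n)])   # [2, 4, 5, 8, 10]
```

Denominators such as 3, 6, 7, 9, 11, and 12 fail the test, which is why 1/3 = 0.333… and 1/7 = 0.142857… repeat rather than terminate.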
6.
Qubit
–
In quantum computing, a qubit or quantum bit is a unit of quantum information—the quantum analogue of the classical bit. A qubit is a two-state quantum-mechanical system, such as the polarization of a single photon. In a classical system, a bit would have to be in one state or the other; quantum mechanics, however, allows the qubit to be in a superposition of both states at the same time, a property that is fundamental to quantum computing. The concept of the qubit was unknowingly introduced by Stephen Wiesner in 1983, in his proposal for quantum money; the coining of the term qubit is attributed to Benjamin Schumacher. Schumacher's paper describes a way of compressing states emitted by a source of information so that they require fewer physical resources to store; this procedure is now known as Schumacher compression. The bit is the basic unit of classical information and is used to represent information by computers; an analogy is a light switch—its off position can be thought of as 0 and its on position as 1. A qubit has a few similarities to a bit, but is overall very different. Like a bit, there are two possible outcomes for the measurement of a qubit—usually 0 and 1. The difference is that whereas the state of a bit is either 0 or 1, the state of a qubit can also be a superposition of both. It is possible to encode one bit in one qubit, but a qubit can hold more information, e.g. up to two bits using superdense coding. For a system of n components, a description of its state in classical physics requires only n bits. The two states in which a qubit may be measured are known as basis states; as is the tradition with any sort of quantum states, they are represented by Dirac—or bra–ket—notation. This means that the two basis states are conventionally written as |0⟩ and |1⟩. A pure qubit state is a superposition of the basis states, written α|0⟩ + β|1⟩, where α and β are complex probability amplitudes. When we measure this qubit in the standard basis, the probability of outcome |0⟩ is |α|².
Because the absolute squares of the amplitudes equate to probabilities, it follows that α and β must be constrained by the equation |α|² + |β|² = 1. It might at first sight seem that there should be four degrees of freedom, as α and β are complex numbers with two degrees of freedom each; however, one degree of freedom is removed by the normalization constraint.
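The measurement rule and the normalization constraint can be sketched directly. The amplitudes below are an illustrative choice (an equal superposition), not a canonical example from the text:

```python
import math, random

# Measuring the qubit state a|0> + b|1> in the computational basis.
alpha = 1 / math.sqrt(2)          # amplitude of |0>  (illustrative values)
beta = 1j / math.sqrt(2)          # amplitude of |1>  (complex is allowed)

p0 = abs(alpha) ** 2              # probability of outcome |0>
p1 = abs(beta) ** 2               # probability of outcome |1>
assert abs(p0 + p1 - 1) < 1e-12   # normalization: |a|^2 + |b|^2 = 1

def measure():
    """Simulate one measurement: collapse to 0 or 1 with these probabilities."""
    return 0 if random.random() < p0 else 1

print(p0, p1)                     # both ≈ 0.5 for this equal superposition
```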
7.
Quantum information
–
In physics and computer science, quantum information is information that is held in the state of a quantum system. Quantum information is the main entity of study in quantum information theory. Quantum information differs strongly from classical information, epitomized by the bit, in many striking and unfamiliar ways, among which are the following. A unit of quantum information is the qubit. Unlike classical digital states, a qubit is continuous-valued, describable by a direction on the Bloch sphere; despite being continuously valued in this way, a qubit is the smallest possible unit of quantum information. The reason for this indivisibility is the Heisenberg uncertainty principle: despite the state being continuously valued, a qubit cannot be converted into classical bits; that is, it cannot be read. Despite the awkwardly named no-teleportation theorem, qubits can be moved from one physical particle to another by means of quantum teleportation; that is, qubits can be transported, independently of the underlying physical particle. An arbitrary qubit can neither be copied nor destroyed; this is the content of the no-cloning theorem and the no-deleting theorem. Although a single qubit can be transported from place to place, it cannot be delivered to multiple recipients; this is the no-broadcast theorem, and it is essentially implied by the no-cloning theorem. Qubits can be changed by applying linear transformations, or quantum gates, to them. Classical bits may be combined with and extracted from configurations of multiple qubits through the use of quantum gates; that is, two or more qubits can be arranged in such a way as to convey classical bits. The simplest such configuration is the Bell state, which consists of two qubits and four classical bits. Quantum information can be moved about in a quantum channel: quantum messages have a finite size, measured in qubits, and quantum channels have a finite channel capacity, measured in qubits per second.
Multiple qubits can be used to carry classical bits; although n qubits can carry more than n classical bits of information, the greatest amount of classical information that can be retrieved is n. Quantum information, and changes in quantum information, can be quantitatively measured by using an analogue of Shannon entropy, the von Neumann entropy, defined for a statistical ensemble of quantum mechanical systems with density matrix ρ. Many of the entropy measures of classical information theory can also be generalized to the quantum case, such as the Holevo entropy. Quantum algorithms have a different computational complexity than classical algorithms; the most famous example is Shor's factoring algorithm, which is not known to have a polynomial-time classical counterpart but does have a polynomial-time quantum algorithm. Other examples include Grover's search algorithm, which gives a quadratic speed-up over the best possible classical algorithm. Quantum key distribution allows unconditionally secure transmission of information, unlike classical encryption.
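The quantum analogue of Shannon entropy mentioned above, the von Neumann entropy S(ρ) = −Tr(ρ log₂ ρ), can be computed from the eigenvalues of the density matrix. A minimal sketch using NumPy, with two illustrative single-qubit density matrices:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)       # rho is Hermitian
    evals = evals[evals > 1e-12]          # convention: 0 * log(0) = 0
    return float(-np.sum(evals * np.log2(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state |0><0|: entropy 0
mixed = np.eye(2) / 2                       # maximally mixed qubit: entropy 1
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))
```

A pure state carries no classical uncertainty (entropy 0), while the maximally mixed qubit attains the classical maximum of 1 bit, mirroring the Shannon entropy of a fair coin.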
8.
Portmanteau
–
In linguistics, a portmanteau is defined as a single morph that represents two or more morphemes. A portmanteau also differs from a compound, which does not involve the truncation of parts of the stems of the blended words. For instance, starfish is a compound, not a portmanteau, of star and fish, whereas a hypothetical portmanteau of star and fish might be stish. In Lewis Carroll's Through the Looking-Glass, Humpty Dumpty explains the practice of combining words in various ways by telling Alice: for instance, take the two words fuming and furious. Make up your mind that you will say both words, but leave it unsettled which you will say first … In then-contemporary English, a portmanteau was a suitcase that opened into two equal sections. The etymology of the word is the French porte-manteau, from porter, to carry, and manteau, cloak. In modern French, a porte-manteau is a clothes valet, a coat-tree or similar article of furniture for hanging up jackets, hats, umbrellas and the like. It has also been used, especially in Europe, as a formal description for hat racks. An occasional synonym for portmanteau word is frankenword, an autological word exemplifying the phenomenon it describes, blending Frankenstein and word. Many neologisms are examples of blends, and many blends have become part of the lexicon. In Punch in 1896, the word brunch was introduced as a portmanteau word. In 1964, the newly independent African republic of Tanganyika and Zanzibar chose the portmanteau word Tanzania as its name. Similarly, Eurasia is a portmanteau of Europe and Asia. A scientific example is the liger, a cross between a male lion and a female tiger. Jeoportmanteau! is a category on the American television quiz show Jeopardy! The category's name is itself a portmanteau of the words Jeopardy and portmanteau; responses in the category are portmanteaus constructed by fitting two words together.
The term gerrymander has itself contributed to portmanteau terms such as bjelkemander and playmander. Oxbridge is a common portmanteau for the UK's two oldest universities, those of Oxford and Cambridge. Many portmanteau words receive some use but do not appear in all dictionaries; for example, a spork is an eating utensil that is a combination of a spoon and a fork, and a skort is an item of clothing that is part skirt, part shorts. On the other hand, turducken, a dish made by inserting a chicken into a duck, has entered dictionaries. Similarly, the word refudiate was first used by Sarah Palin when she misspoke; though initially a gaffe, the word was recognized as the New Oxford American Dictionary's Word of the Year in 2010. The business lexicon is replete with newly coined portmanteau words like permalance, advertainment, advertorial, and infotainment. A company name may be a portmanteau, as may a product name. By contrast, the public, including the media, use portmanteaux to refer to their favorite pairings as a way to "give people an essence of who they are within the same name". This is particularly seen in cases of fictional and real-life supercouples.
9.
Information
–
Information is that which resolves uncertainty; in other words, it is the answer to a question of some kind. It is thus related to data and knowledge, as data represents values attributed to parameters. As regards data, the information's existence is not necessarily coupled to an observer, while in the case of knowledge, the information requires a cognitive observer. At its most fundamental, information is any propagation of cause and effect within a system. Information can be encoded into various forms for transmission and interpretation, and it can also be encrypted for safe storage and communication. The uncertainty of an event is measured by its probability of occurrence and is inversely proportional to that probability: the more uncertain an event, the more information is required to resolve the uncertainty of that event. The bit is a typical unit of information, but other units such as the nat may be used. For example, the information in one fair coin flip is log₂ 2 = 1 bit. The concept that information is the message has different meanings in different contexts. The English word was derived from the Latin stem of the nominative informatio. Inform itself comes from the Latin verb informare, which means to give form. Eidos can also be associated with thought, proposition, or even concept. The ancient Greek word for information is πληροφορία, which transliterates from πλήρης (fully) and φέρω (bear); it literally means "fully bears" or "conveys fully". In the modern Greek language the word Πληροφορία is still in use and has the same meaning as the word information in English. In addition to its meaning, the word Πληροφορία as a symbol has deep roots in Aristotle's semiotic triangle; in this regard it can be interpreted to communicate information to the one decoding that specific type of sign. From the standpoint of information theory, information is taken as an ordered sequence of symbols from an alphabet, say an input alphabet χ and an output alphabet ϒ. Information processing consists of a function that maps any input sequence from χ into an output sequence from ϒ.
The mapping may be probabilistic or deterministic, and it may have memory or be memoryless. Information can often be viewed as a type of input to an organism or system. Inputs are of two kinds: some inputs are important to the function of the organism or system by themselves; in his book Sensory Ecology, Dusenbery called these causal inputs. Other inputs are important only because they are associated with causal inputs and can be used to predict the occurrence of a causal input at a later time. Some information is important because of its association with other information, but eventually there must be a connection to a causal input.
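The quantitative claim above — that rarer events carry more information, with the fair coin flip worth exactly one bit — can be sketched with the standard self-information formula I(p) = −log₂ p:

```python
import math

# Self-information of an event with probability p, in bits:
# rarer events carry more information.
def self_information(p):
    return -math.log2(p)

print(self_information(0.5))    # 1.0 bit: one fair coin flip
print(self_information(0.25))   # 2.0 bits: a 1-in-4 event resolves more uncertainty
```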
10.
Computing
–
Computing is any goal-oriented activity requiring, benefiting from, or creating a mathematical sequence of steps known as an algorithm—e.g. through computers. The field of computing includes computer engineering, software engineering, computer science, and information systems. The ACM Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers." For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The fundamental question underlying all computing is: what can be automated? The term computing is also synonymous with counting and calculating; in earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. Computing is intimately tied to the representation of numbers, but long before abstractions like number arose, there were mathematical concepts to serve the purposes of civilization. These concepts include one-to-one correspondence and comparison to a standard. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles; abaci of a more modern design are still used as calculation tools today. This was the first known computer and the most advanced system of calculation known to date, preceding Greek methods by 2,000 years. The first recorded idea of using electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams.
Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions; the same program in its source code form enables a programmer to study and develop the algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the CPU type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer; they trigger sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just software, is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs, procedures, and algorithms. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software.
11.
Communication
–
Communication is the act of conveying intended meanings from one entity or group to another through the use of mutually understood signs and semiotic rules. The main steps inherent to all communication are: the forming of communicative motivation or reason; message composition and encoding; transmission of the encoded message as a sequence of signals using a specific channel or medium; the influence of noise sources, such as natural forces and in some cases human activity, on the quality of signals propagating from the sender to one or more receivers; reception of signals and reassembling of the message from the sequence of received signals; decoding of the encoded message; and interpretation and making sense of the original message. The channel of communication can be visual, auditory, tactile and haptic, olfactory, or electromagnetic. Human communication is unique for its extensive use of abstract language. The development of civilization has been linked with progress in telecommunication. Nonverbal communication describes the process of conveying information in the form of non-linguistic representations; examples include haptic communication, chronemic communication, gestures, body language, facial expressions, eye contact, and how one dresses. Nonverbal communication also relates to the intent of a message; examples of intent are voluntary, intentional movements like shaking a hand or winking, as well as involuntary ones, such as sweating. Speech also contains nonverbal elements known as paralanguage, e.g. rhythm, intonation, and tempo; paralanguage affects communication most at the subconscious level and establishes trust. Likewise, written texts include nonverbal elements such as handwriting style and the spatial arrangement of words. Nonverbal communication demonstrates one of Watzlawick's laws: you cannot not communicate.
Once proximity has formed awareness, living creatures begin interpreting any signals received. Nonverbal cues are heavily relied on to express communication and to interpret others' communication, and can replace or substitute for verbal messages. There are several reasons why non-verbal communication plays a vital role in communication. Written communication can also have non-verbal attributes: e-mails and web chats allow individuals the option to change text font colours, stationery, emoticons, and capitalization in order to capture non-verbal cues in a verbal medium. Many different non-verbal channels are engaged at the same time in communication acts, and "non-verbal behaviours may form a language system": smiling, crying, pointing, and caressing are non-verbal signals that allow the most basic form of communication when verbal communication is not effective due to language barriers. Verbal communication is the spoken or written conveyance of a message.
12.
Truth value
–
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth. In classical logic, with its intended semantics, the truth values are true and untrue or false. This set of two values is called the Boolean domain. The corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables. The logical biconditional becomes the equality binary relation, and negation becomes a bijection that permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws. Assigning values to propositional variables is referred to as valuation. In intuitionistic logic, and more generally in constructive mathematics, statements are assigned a truth value only if they can be given a constructive proof. One starts with a set of axioms; a statement is true if one can build a proof of the statement from those axioms, and false if one can deduce a contradiction from it. This leaves open the possibility of statements that have not yet been assigned a truth value. Unproven statements in intuitionistic logic are not given a truth value; indeed, one can prove that they have no truth value. There are various ways of interpreting intuitionistic logic, including the Brouwer–Heyting–Kolmogorov interpretation; see also Intuitionistic logic § Semantics. Multi-valued logics allow for more than two values, possibly containing some internal structure; for example, on the unit interval such structure is a total order. Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions, but even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics. The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras, and intuitionistic type theory uses types in the place of truth values.
Topos theory uses truth values in a different sense: the truth values of a topos are the global elements of the subobject classifier. Having truth values in this sense does not make a logic truth-valuational
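The classical two-valued semantics above, in which the biconditional coincides with equality and negation permutes the two truth values, can be sketched directly over Python's booleans (a minimal illustration, not part of any logic library):

```python
# Truth-table check over the Boolean domain {True, False}: the biconditional
# behaves as the equality relation, and De Morgan's laws express the duality
# of conjunction and disjunction with respect to negation.
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

for p, q in product([True, False], repeat=2):
    # Biconditional (p <-> q) coincides with equality on the Boolean domain.
    assert (implies(p, q) and implies(q, p)) == (p == q)
    # De Morgan's laws.
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))
```

Running the loop over all four valuations exhausts the truth table, which is all that is needed to verify a law of classical propositional logic.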
13.
Signed number
–
In mathematics, the concept of sign originates from the property of every non-zero real number of being either positive or negative. Zero itself is signless, although in some contexts it makes sense to consider a signed zero. Along with its application to real numbers, change of sign is used throughout mathematics and physics to denote the additive inverse, even for quantities which are not real numbers. The word sign can also indicate aspects of mathematical objects that resemble positivity and negativity. A real number is said to be positive if its value is greater than zero, and negative if it is less than zero. The attribute of being positive or negative is called the sign of the number; zero itself is not considered to have a sign. Signs are also not defined for complex numbers, although the argument generalizes the notion in some sense. In common numeral notation, the sign of a number is often denoted by placing a plus sign or a minus sign before the number. For example, +3 denotes positive three, and −3 denotes negative three; when no plus or minus sign is given, the default interpretation is that the number is positive. Because of this notation, as well as the definition of negative numbers through subtraction, the minus sign is perceived as denoting the additive inverse; in this context, it makes sense to write −(−3) = +3. Any non-zero number can be changed to a positive one using the absolute value function. For example, the absolute value of −3 and the absolute value of 3 are both equal to 3. In symbols, this would be written |−3| = 3 and |3| = 3. The number zero is neither positive nor negative, and therefore has no sign. In arithmetic, +0 and −0 both denote the same number 0, which is the additive inverse of itself. Note that this definition is culturally determined: in France and Belgium, 0 is said to be both positive and negative, and the positive (resp. negative) numbers without zero are there said to be strictly positive (resp. strictly negative). In some contexts, such as signed number representations in computing, it makes sense to consider signed versions of zero, with positive zero and negative zero being different numbers. 
One also sees +0 and −0 in calculus and mathematical analysis when evaluating one-sided limits; this notation refers to the behaviour of a function as the input variable approaches 0 from positive or negative values respectively, and these behaviours are not necessarily the same. Because zero is neither positive nor negative, the following phrases are sometimes used to refer to the sign of an unknown number: a number is negative if it is less than zero; a number is non-negative if it is greater than or equal to zero; a number is non-positive if it is less than or equal to zero. Thus a non-negative number is either positive or zero, while a non-positive number is either negative or zero
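The sign and absolute-value conventions above can be sketched in a few lines (the `sign` helper is an illustrative name, not a standard library function):

```python
# Sketch of the sign conventions described above: zero is treated as signless,
# the minus sign denotes the additive inverse, and absolute value makes any
# non-zero number positive.
def sign(x):
    """Return +1 for positive, -1 for negative, 0 for the signless zero."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

assert abs(-3) == abs(3) == 3   # |−3| = |3| = 3
assert -(-3) == +3              # change of sign is the additive inverse
assert sign(0) == 0             # zero is neither positive nor negative
assert sign(-3) == -1 and sign(+3) == 1
```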
14.
Data storage device
–
A data storage device is a device for recording information. A storage device may hold information, process information, or both; a device that only holds information is a recording medium. Devices that process information may access either a separate portable recording medium or a permanent component to store and retrieve information. Electronic data storage requires electrical power to store and retrieve data; most storage devices that do not require vision and a brain to read data fall into this category. Electromagnetic data may be stored in either an analog or a digital format on a variety of media. Most electronically processed data storage media are considered permanent storage; in contrast, most electronically stored information within most types of semiconductor microcircuits is volatile memory, for it vanishes if power is removed
15.
Computing device
–
A computer is a device that can be instructed to carry out an arbitrary set of arithmetic or logical operations automatically. The ability of computers to follow a sequence of operations, called a program, makes them applicable to a wide range of tasks; such computers are used as control systems for a very wide variety of industrial and consumer devices. The Internet is run on computers, and it connects millions of other computers. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century, and the first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers has increased continuously and dramatically since then. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit, and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions; peripheral devices allow information to be retrieved from an external source, and they enable the result of operations to be saved and retrieved. Historically, the term computer referred to a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century; from the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning one who calculates, and states that the use of the term to mean calculating machine is from 1897. The Online Etymology Dictionary indicates that the use of the term to mean programmable digital electronic computer dates from 
1945 under this name (theoretical from 1937, as the Turing machine). Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick; later record-keeping aids throughout the Fertile Crescent included calculi which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks; the Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many forms of reckoning boards or tables have been invented; in a medieval European counting house, a checkered cloth would be placed on a table as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC
16.
Computer program
–
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function, and typically executes a program's instructions in a central processing unit. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code—a form consisting of instructions that the computer can directly execute. Alternatively, a program may be executed with the aid of an interpreter. A part of a program that performs a well-defined task is known as an algorithm. A collection of programs, libraries, and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software or system software. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards; patterns could be woven and repeated by arranging the cards. In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the device were borrowed from the textile industry, in which yarn was brought from the store to be milled. The device would have had a store—memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the store would then have been transferred to the mill for processing. It was programmed using two sets of perforated cards—one to direct the operation and the other for the input variables. However, after more than 17,000 pounds of the British government's money, the thousands of cogged wheels and gears never fully worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea; the memoir covered the Analytical Engine. 
The translation contained Note G, which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine; this note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine—a theoretical device that can model every computation that can be performed on a Turing-complete computing machine. It is a finite-state machine that has an infinitely long read/write tape; the machine can move the tape back and forth, changing its contents as it performs an algorithm
17.
Information theory
–
Information theory studies the quantification, storage, and communication of information. A key measure in information theory is entropy, which quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a coin flip provides less information than specifying the outcome of a roll of a die. Some other important measures in information theory are mutual information, channel capacity, and error exponents. Applications of fundamental topics of information theory include lossless data compression, lossy data compression, and channel coding. The field is at the intersection of mathematics, statistics, computer science, physics, and neurobiology. Information theory studies the transmission, processing, utilization, and extraction of information; abstractly, information can be thought of as the resolution of uncertainty. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, amongst which is the vital field of coding theory. These codes can be subdivided into data compression and error-correction techniques; in the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms, and concepts, methods, and results from coding theory and information theory are widely used in cryptography and cryptanalysis (see the article ban for a historical application). Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even musical composition. Prior to Shannon's paper, limited information-theoretic ideas had been developed at Bell Labs; the unit of information there was the decimal digit, much later renamed the hartley in Ralph Hartley's honour as a unit, scale, or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. 
Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann. Information theory is based on probability theory and statistics, and often concerns itself with measures of information of the distributions associated with random variables. Important quantities of information are entropy, a measure of the information in a single random variable, and mutual information, a measure of the information shared between two random variables. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit, based on the binary logarithm; other units include the nat, which is based on the natural logarithm, and the hartley, which is based on the common logarithm. In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0; this is justified because lim p→0+ p log p = 0 for any logarithmic base
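The entropy formula implied above, H = −Σ p log_b(p) with the convention that 0·log 0 = 0, can be sketched as follows; the function name `entropy` is illustrative, and the base argument selects the unit (2 for bits/shannons, e for nats, 10 for hartleys):

```python
# Minimal sketch of Shannon entropy with the p log p = 0 convention for p = 0,
# implemented by skipping zero-probability terms.
import math

def entropy(probs, base=2):
    """H = -sum(p * log_b(p)) over a probability distribution."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
fair_die = [1 / 6] * 6
print(entropy(fair_coin))            # 1.0 bit: a coin flip carries less information
print(entropy(fair_die))             # ~2.585 bits: than a roll of a fair die
print(entropy([1.0, 0.0]))           # 0.0: the zero term is dropped by convention
```

Skipping `p == 0` terms implements the limit lim p→0+ p log p = 0 without triggering a domain error in `math.log`.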
18.
Information entropy
–
In information theory, systems are modeled by a transmitter, channel, and receiver. The transmitter produces messages that are sent through the channel; the channel modifies the message in some way, and the receiver attempts to infer which message was sent. In this context, entropy is the expected value of the information contained in each message. Messages can be modeled by any flow of information. In a more technical sense, there are reasons to define information as the negative of the logarithm of the probability distribution of possible events or messages. The amount of information of every event forms a random variable whose expected value is the entropy. Units of entropy are the shannon, nat, or hartley, depending on the base of the logarithm used to define it, though the shannon is commonly referred to as a bit. The logarithm of the probability distribution is useful as a measure of entropy because it is additive for independent sources; for instance, the entropy of a single coin toss is 1 shannon, whereas that of m tosses is m shannons. Generally, log2(n) bits are needed to represent a variable that can take one of n values, if n is a power of 2. If these values are equally probable, the entropy in shannons is equal to the number of bits; equality between number of bits and shannons holds only while all outcomes are equally probable. If one of the events is more probable than the others, observation of that event is less informative; conversely, rarer events provide more information when observed. Since observation of less probable events occurs more rarely, the net effect is that the entropy received from non-uniformly distributed data is less than log2(n). Entropy is zero when one outcome is certain. Shannon entropy quantifies all these considerations exactly when a probability distribution of the source is known; the meaning of the events observed does not matter in the definition of entropy. Generally, entropy refers to disorder or uncertainty. Shannon entropy was introduced by Claude E. 
Shannon in his 1948 paper A Mathematical Theory of Communication. Shannon entropy provides an absolute limit on the best possible average length of lossless encoding or compression of an information source. Entropy is a measure of the unpredictability of the state, or equivalently, of its average information content. To get an intuitive understanding of these terms, consider the example of a political poll. Usually, such polls happen because the outcome of the poll is not already known. Now, consider the case that the same poll is performed a second time shortly after the first poll: since the outcome of the first poll is already known, the outcome of the second poll can be predicted well and should contain little new information. Now consider the example of a coin toss: assuming the probability of heads is the same as the probability of tails, the entropy of the coin toss is as high as it could be. Such a coin toss has one shannon of entropy, since there are two possible outcomes that occur with equal probability, and learning the actual outcome contains one shannon of information. Contrarily, a toss with a coin that has two heads and no tails has zero entropy, since the coin will always come up heads
19.
Quantum computing
–
Quantum computing studies theoretical computation systems that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors; a quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. The field of quantum computing was initiated by the work of Paul Benioff and Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm. A classical computer could in principle simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis; on the other hand, quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers. A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer maintains a sequence of qubits; in general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously. A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is zero or one, decomposing into a classical state. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability. Note that the term non-deterministic computing must not be used in this case to mean probabilistic. 
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states, spin down and spin up. This is possible because any such system can be mapped onto an effective spin-1/2 system. A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits; this difference shows, for example, when the state of the qubits is measured. To better understand this point, consider a classical computer that operates on a three-bit register. If there is no uncertainty over its state, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states, described by an eight-dimensional vector of probabilities. The state of a quantum computer is similarly described by an eight-dimensional vector. Here, however, the coefficients a_k are complex numbers, and it is the sum of the squares of the absolute values, ∑_i |a_i|^2, that must equal 1
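The normalization and measurement described above can be sketched in plain Python; the helpers `normalize` and `measure` are illustrative names, not part of any real quantum library:

```python
# Sketch of a 3-qubit state as an 8-dimensional vector of complex amplitudes
# a_k, normalized so that sum_i |a_i|^2 = 1. Measurement collapses the state to
# one basis state k with probability |a_k|^2, yielding at most n classical bits.
import random

def normalize(amplitudes):
    norm = sum(abs(a) ** 2 for a in amplitudes) ** 0.5
    return [a / norm for a in amplitudes]

def measure(amplitudes):
    """Pick one of the 2^n basis states with probability |a_k|^2."""
    probs = [abs(a) ** 2 for a in amplitudes]
    return random.choices(range(len(amplitudes)), weights=probs)[0]

# Equal superposition over all 2^3 = 8 basis states of three qubits.
state = normalize([1 + 0j] * 8)
assert abs(sum(abs(a) ** 2 for a in state) - 1.0) < 1e-12
outcome = measure(state)      # an integer 0..7, i.e. 3 classical bits
assert 0 <= outcome < 8
```

Note that this classically simulates the probabilities only; it does not capture interference between amplitudes, which is where quantum algorithms gain their power.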
20.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at the small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms; in one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light and colours. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. Planck's hypothesis that energy is radiated and absorbed in discrete quanta precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman; Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits. 
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency, E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself; in fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, Einstein interpreted the quantum hypothesis realistically in his 1905 explanation of the photoelectric effect, and he won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy that was dependent on its frequency. The Copenhagen interpretation of Niels Bohr became widely accepted; in the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons. From Einstein's simple postulation was born a flurry of debating, theorizing, and testing; thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927
21.
Quantum superposition
–
Quantum superposition is a fundamental principle of quantum mechanics. Mathematically, it refers to a property of solutions to the Schrödinger equation: since the Schrödinger equation is linear, any linear combination of solutions will also be a solution. An example of a physically observable manifestation of superposition is interference peaks from an electron wave in a double-slit experiment. Another example is a logical qubit state, as used in quantum information processing, which is a superposition of the basis states |0⟩ and |1⟩. Here |0⟩ is the Dirac notation for the quantum state that will always give the result 0 when converted to classical logic by a measurement; likewise |1⟩ is the state that will always convert to 1. The numbers that describe the amplitudes for different possibilities define the kinematics; the dynamics describes how these numbers change with time. The list of amplitudes is called the state vector, and formally it is an element of a Hilbert space. In the probabilistic analogy, the quantities that describe how probabilities change in time are the transition probabilities K_{x→y}(t), which give the probability that, starting at x, the particle ends up at y a time t later. When no time passes, nothing changes: for 0 elapsed time, K_{x→y}(0) = δ_{xy}, so the K matrix is zero except from a state to itself. In the case that the time is short, it is better to talk about the rate of change of the probability instead of the change in the probability. In quantum mechanics, the Hamiltonian H gives the rate at which amplitudes change in time, via U(dt) ≈ I − iH dt. The reason H is multiplied by i is that the condition that U is unitary, U†U = I, translates to the condition H† − H = 0, which says that H is Hermitian. The eigenvalues of the Hermitian matrix H are real quantities, which have a physical interpretation as energy levels. For a particle that has equal amplitude to move left and right, the Hermitian matrix H is zero except for nearest neighbors, where it has the value c. If the coefficient is everywhere constant, the condition that H is Hermitian demands that the amplitude to move to the left is the complex conjugate of the amplitude to move to the right. 
By redefining the phase of the wavefunction in time, ψ → ψ e^{i2ct}, the amplitudes are unchanged in magnitude, but this phase rotation introduces a linear term, giving the equation of motion i dψ_n/dt = c ψ_{n+1} − 2c ψ_n + c ψ_{n−1}. The analogy between quantum mechanics and probability is very strong, so that there are many mathematical links between them; the analogous expression in quantum mechanics to the probabilistic sum over paths is the path integral. A generic transition matrix in probability has a stationary distribution, which is the eventual probability to be found at any point no matter what the starting point. If there is a nonzero probability for any two paths to reach the same point at the same time, this stationary distribution does not depend on the initial conditions
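The stationary-distribution claim above can be sketched numerically: iterating a transition matrix drives very different initial distributions to the same limit. The matrix values here are made up for illustration:

```python
# Sketch: a generic stochastic transition matrix has a stationary distribution
# that does not depend on the starting point. We iterate two different initial
# distributions under the same 2-state chain and watch them converge.
def step(dist, P):
    # One Markov step: new[j] = sum_i dist[i] * P[i][j] (rows of P sum to 1).
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.5, 0.5]]                    # illustrative transition probabilities

d1, d2 = [1.0, 0.0], [0.0, 1.0]     # two opposite starting points
for _ in range(200):
    d1, d2 = step(d1, P), step(d2, P)

# Both converge to the stationary distribution pi = [5/6, 1/6] of this chain.
assert all(abs(a - b) < 1e-9 for a, b in zip(d1, d2))
```

For this particular P, solving π = πP gives π = [5/6, 1/6], which is what both iterations approach.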
22.
Classical mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering, and technology. Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, and stars. Within classical mechanics are fields of study that describe the behavior of solids, liquids, and gases. Classical mechanics also provides extremely accurate results as long as the domain of study is restricted to large objects and speeds that do not approach the speed of light. When both quantum and classical mechanics cannot apply, such as at the quantum level with speeds approaching that of light, quantum field theory becomes applicable. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude relativistic physics from classical mechanics; however, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and accurate form. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles; the motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size. 
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom. However, the results for point particles can be used to study such objects by treating them as composite objects; the center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes, such as where an object is in space; non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined with respect to a fixed reference point in space called the origin O. A simple coordinate system might describe the position of a point P by means of a vector, designated r, pointing from the origin O to P. In general, the point particle need not be stationary relative to O, so that r is a function of t, the time
23.
Byte
–
The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer. The size of the byte has historically been hardware-dependent, and no standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte, and the international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits, and the popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size. The unit symbol for the byte was designated as the upper-case letter B by the IEC and IEEE, in contrast to the lower-case b for the bit. Internationally, the unit octet, symbol o, explicitly denotes a sequence of eight bits, eliminating the ambiguity of the byte. The word byte is a respelling of bite, coined to avoid accidental mutation to bit. Early computers used a variety of four-bit binary-coded decimal representations; these representations included alphanumeric characters and special graphical symbols. Six-bit character codes were used in the U.S. Government and universities during the 1960s. The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony first on long-distance trunk lines; these used the eight-bit µ-law encoding. This large investment promised to reduce transmission costs for eight-bit data. The development of eight-bit microprocessors in the 1970s popularized this storage size. A four-bit quantity is called a nibble, also nybble. The term octet is used to unambiguously specify a size of eight bits, and is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well, at least in Western Europe; however, this usage is no longer common. 
The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. The unit symbol for the byte is specified in IEC 80000-13 and IEEE 1541 as the upper-case character B. In the International System of Quantities, B is also the symbol of the bel, a unit of logarithmic power ratios named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit
24.
Unit of information
–
In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are used to measure the information content or entropy of random variables. The most common units are the bit, the capacity of a system which can exist in only two states, and the byte, which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes or the newer IEC binary prefixes. Information capacity is a dimensionless quantity. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states. When b is 2, the unit is the shannon, equal to the information content of one bit. A system with N possible states, for example with N = 8, can store up to log2(8) = 3 bits of information. Other units that have been named include: base b = 3, the unit is called the trit, and is equal to log2(3) ≈ 1.585 bits; base b = 10, the unit is called the decimal digit, hartley, ban, decit, or dit; base b = e, the base of natural logarithms, the unit is called a nat, nit, or nepit, and is worth log2(e) ≈ 1.443 bits. Several conventional names are used for collections or groups of bits. A byte can represent 256 distinct values, such as the integers 0 to 255, or −128 to 127. The IEEE 1541-2002 standard specifies B as the symbol for byte. Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units; most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes. A group of four bits, or half a byte, is sometimes called a nibble or nybble; this unit is most often used in the context of number representations. Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is defined by the size of the registers in the computer's CPU. 
Some machine instructions and computer number formats use two words (a double word) or four words (a quad word). Computer memory caches usually operate on blocks of memory that consist of several consecutive words; these units are customarily called cache blocks or, in CPU caches, cache lines. Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages. Terms for large quantities of bits can be formed using the range of SI prefixes for powers of 10, e.g. kilo- = 10^3 = 1000 and mega- = 10^6 = 1000000. These prefixes are often used for multiples of bytes, as in kilobyte and megabyte
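The capacity rule above, that a system with N equally likely states holds log_b(N) units of base-b information, can be sketched numerically (the variable names here are illustrative):

```python
# Sketch of information capacity: a system with N equally likely states stores
# log2(N) bits, and the named units are just log2 of their base.
import math

assert math.log2(8) == 3            # 8 states -> 3 bits
assert math.log2(256) == 8          # one byte: 256 distinct values -> 8 bits
assert math.log2(16) == 4           # one nibble: 16 values -> 4 bits

trit = math.log2(3)                 # one trit    = log2(3)  ~ 1.585 bits
nat = math.log2(math.e)             # one nat     = log2(e)  ~ 1.443 bits
hartley = math.log2(10)             # one hartley = log2(10) ~ 3.322 bits
print(trit, nat, hartley)
```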
25.
Claude Shannon
–
Claude Elwood Shannon was an American mathematician, electrical engineer, and cryptographer known as the father of information theory. Shannon is noted for having founded information theory with a landmark paper, A Mathematical Theory of Communication. He also contributed to the field of cryptanalysis for national defense during World War II, including his work on codebreaking. Shannon was born in Petoskey, Michigan and grew up in Gaylord. His father, Claude, Sr., a descendant of early settlers of New Jersey, was a self-made businessman and, for a while, a Judge of Probate. Shannon's mother, Mabel Wolf Shannon, was a language teacher. Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things, and his best subjects were science and mathematics. At home he constructed such devices as models of planes and a model boat. While growing up, he worked under Andrew Coltrey as a messenger for the Western Union company. His childhood hero was Thomas Edison, whom he later learned was a distant cousin; both were descendants of John Ogden, a leader and an ancestor of many distinguished people. Shannon was apolitical and an atheist. In 1932, Shannon entered the University of Michigan, where he was introduced to the work of George Boole. He graduated in 1936 with two degrees, one in electrical engineering and the other in mathematics. In 1936, Shannon began his graduate studies in electrical engineering at MIT, where he worked on Vannevar Bush's differential analyzer. While studying the complicated ad hoc circuits of this analyzer, Shannon designed switching circuits based on Boole's concepts. In 1937, he wrote his master's degree thesis, A Symbolic Analysis of Relay and Switching Circuits; a paper from this thesis was published in 1938.
In this work, Shannon proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays that were then used in telephone call routing switches. Next, he expanded this concept, proving that these circuits could solve all problems that Boolean algebra could solve. In the last chapter, he presented diagrams of several circuits. Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Shannon's work became the foundation of digital circuit design as it became widely known in the electrical engineering community. The theoretical rigor of Shannon's work superseded the ad hoc methods that had prevailed previously. Howard Gardner called Shannon's thesis possibly the most important, and also the most noted, master's thesis of the century.
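The correspondence Shannon exploited — switches in series behave like AND, switches in parallel like OR — can be sketched in a few lines of Python (an illustrative model, not Shannon's own notation):

```python
def series(a: bool, b: bool) -> bool:
    # Two switches in series conduct only if both are closed: AND.
    return a and b

def parallel(a: bool, b: bool) -> bool:
    # Switches in parallel conduct if either one is closed: OR.
    return a or b

# Any Boolean expression maps to a network of switches, e.g. a two-way
# selector (a AND b) OR ((NOT a) AND c):
def selector(a: bool, b: bool, c: bool) -> bool:
    return parallel(series(a, b), series(not a, c))

print(selector(True, True, False))   # True: current flows via the a-b branch
print(selector(False, True, True))   # True: current flows via the not-a, c branch
```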
26.
Punched card
–
A punched card or punch card is a piece of stiff paper that can be used to contain digital information represented by the presence or absence of holes in predefined positions. The information might be data for data processing applications or, as in earlier examples, instructions for controlling automated machinery. The terms IBM card and Hollerith card specifically refer to punched cards used in semiautomatic data processing. Many early digital computers used punched cards, often prepared using keypunch machines. While punched cards are now obsolete as a recording medium, as of 2012 some voting machines still used punched cards to record votes. Basile Bouchon developed the control of a loom by punched holes in paper tape in 1725. In 1801 Joseph Marie Jacquard demonstrated a mechanism to automate loom operation: a number of punched cards were linked into a chain of any length, and each card held the instructions for shedding and selecting the shuttle for a single pass. It is considered an important step in the history of computing hardware. Semen Korsakov was reputedly the first to use punched cards in informatics, for information storage and search; Korsakov announced his new method and machines in September 1832, rather than seeking patents. Charles Babbage proposed the use of Number Cards, pierced with certain holes, that stand opposite levers connected with a set of figure wheels; advanced, they push in those levers opposite to which there are no holes on the card. Herman Hollerith invented the recording of data on a medium that could then be read by a machine; prior uses of machine-readable media, such as those above, had been for control, not data. After some initial trials with paper tape, he settled on punched cards, developing punched card data processing technology for the 1890 US census. Other companies entering the punched card business included the Powers Accounting Machine Company, Remington Rand, and Groupe Bull.
Both IBM and Remington Rand tied punched card purchases to machine leases; in 1932, the US government took both to court on this issue. IBM viewed its business as providing a service and held that the cards were part of the machine. IBM fought all the way to the Supreme Court and lost in 1936, the court ruling that IBM could only set card specifications. According to the IBM Archives, by 1937 IBM had 32 presses at work in Endicott, N.Y., printing, cutting and stacking five to 10 million punched cards every day. Punched cards were used as legal documents, such as U.S. Government checks. Punched card technology developed into a tool for business data-processing, and by 1950 punched cards had become ubiquitous in industry and government. "Do not fold, spindle or mutilate", a generalized version of the warning that appeared on some punched cards, became a motto for the post-World War II era. In 1955 IBM signed a consent decree requiring, amongst other things, limits on its punched card business. Tom Watson Jr.'s decision to sign this decree, where IBM saw the punched card provisions as the most significant point, completed the transfer of power to him from Thomas Watson, Sr. The UNITYPER introduced magnetic tape for data entry in the 1950s. During the 1960s, the punched card was gradually replaced as the primary means for data storage by magnetic tape, as better, more capable computers became available.
27.
Basile Bouchon
–
Basile Bouchon was a textile worker in the silk center in Lyon who invented a way to control a loom with a perforated paper tape in 1725. The son of an organ maker, Bouchon partially automated the tedious setting-up process of the drawloom, in which an operator lifted the warp threads using cords. This development is considered to be the first industrial application of a semi-automated machine. The cords of the warp were passed through the eyes of horizontal needles arranged to slide in a box. A needle was raised when there was no hole in the tape at that point, and left in place when there was a hole. This was similar to the piano roll developed at the end of the 19th century. Though this eliminated mistakes in the lifting of threads, it still needed an extra operator to control it. It was not until 1805 that the wildly successful Jacquard mechanism was finally produced. References: Poncelet, Jean-Victor, Travaux de la Commission Française, 3, part 1, section 2, pages 348–349. The Origins of Digital Computers: Selected Papers, 3rd ed., page 5. Eymard, Paul, "Historique du métier Jacquard", Annales des Sciences physiques et naturelles, 3rd series, vol. 7, pages 34–56, see especially page 37. L'introduction du machinisme dans l'industrie française. Revue d'histoire de Lyon, Études, Documents, Bibliographie, vol. The History and Principles of Weaving by Hand and by Power. History of Computers: photograph of a replica of the Bouchon loom.
28.
Joseph Marie Jacquard
–
Joseph Marie Charles dit Jacquard was a French weaver and merchant. In his grandfather's generation, several branches of the Charles family lived in Lyon's Couzon-au-Mont-d'Or suburb; to distinguish the various branches, the community gave them nicknames, and Joseph's branch was called the "Jacquard" Charles. Thus, Joseph's grandfather was Bartholomew Charles dit Jacquard. Joseph Marie Charles dit Jacquard was born into a conservative Catholic family in Lyon, France on 7 July 1752. He was one of nine children of Jean Charles dit Jacquard, a weaver of Lyon; however, only Joseph and his sister Clémence survived to adulthood. Although his father was a man of property, Joseph received no formal schooling and remained illiterate until he was 13. He was finally taught by his brother-in-law, Jean-Marie Barrett, who ran a printing business; Barrett also introduced Joseph to learned societies and scholars. His mother died in 1762, and when his father died in 1772, Joseph inherited his father's house, looms and workshop as well as a vineyard. Joseph then dabbled in real estate. In 1778, he listed his occupations as master weaver and silk merchant. There is some confusion about Jacquard's early work history: British economist Sir John Bowring met Jacquard, who told Bowring that at one time he had been a maker of straw hats. Eymard claimed that before becoming involved in the weaving of silk, Jacquard was a type-founder, a soldier, and a bleacher of straw hats. Barlow claims that before marrying, Jacquard had worked for a bookbinder, a type-founder, and a maker of cutlery, and that after marrying, Jacquard tried cutlery making, type-founding, and weaving; however, Barlow does not cite any sources for that information. Ballot stated that Jacquard initially helped his father operate his loom. On 26 July 1778, Joseph married Claudine Boichon, a widow from Lyon who owned property and had a substantial dowry.
However, Joseph soon fell deeply into debt and was brought to court. Barlow claims that after Jacquard's father died, Jacquard started a figure-weaving business but failed and lost all his wealth; however, Barlow cites no sources to support his claim. To settle his debts, Jacquard was obliged to sell his inheritance and to appropriate his wife's dowry. Fortunately, his wife retained a house in Oullins, where the couple resided. On 19 April 1779, the couple had their only child, a son, Jean Marie. Beyond his name and his date of birth, nothing is known about Jacquard's son. Charles Ballot stated that after the rebellion of Lyon in 1793 was suppressed, Jacquard and his son escaped from the city by joining the revolutionary army. They fought together in the Rhine campaign of 1795, serving in the Rhone-and-Loire battalion under General Jean Charles Pichegru. Jacquard's son was killed outside of Heidelberg.
29.
Semen Korsakov
–
Semen Nikolaevich Korsakov was a Russian government official, noted both as a homeopath and as an inventor who was involved with an early version of information technology. Korsakov was born in 1787 in what is now Kherson, Ukraine; his father was a military engineer. The family had migrated from Lithuania in the 14th century. From 1812 to 1814, Semen Korsakov took part in the Napoleonic Wars with the Russian Army. He later served as an official in the statistics department of the Russian Police Ministry in St. Petersburg, and was a recipient of the Order of St. Anna. Korsakov died in 1853 in the village of Tarusovo, then part of the Moscow Province. Though Korsakov was not formally trained as a doctor, he was interested in medicine. According to his journals he treated several thousand patients, at first using conventional medicine, but in 1829 switching to homeopathy at the urging of his relatives. Korsakov also used dilutions higher than those previously used; dilutions made using his method are commonly designated with the letter K, e.g. 15K. While working in the statistics department of the Police Ministry, Korsakov became intrigued with the possibility of using machinery to enhance natural intelligence. To this end, he devised several devices which he called machines for the comparison of ideas; these included the linear homeoscope with movable parts, the linear homeoscope without movable parts, the flat homeoscope, the ideoscope, and the simple comparator. The purpose of the devices was primarily to facilitate the search for information. Korsakov announced his new method in September 1832, and rather than seeking patents offered the machines for public use. The punched card had been introduced in 1805, but until that time had been used solely in the textile industry to control looms. Korsakov was reputedly the first to use the cards for information storage.
30.
Charles Babbage
–
Charles Babbage KH FRS was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage is best remembered for originating the concept of a programmable computer. His varied work in other fields has led him to be described as pre-eminent among the many polymaths of his century. Parts of Babbage's uncompleted mechanisms are on display in the Science Museum in London. In 1991, a perfectly functioning difference engine was constructed from Babbage's original plans, built to tolerances achievable in the 19th century; the success of the finished engine indicated that Babbage's machine would have worked. Babbage's birthplace is disputed, but according to the Oxford Dictionary of National Biography he was most likely born at 44 Crosby Row, Walworth Road, London; a blue plaque on the junction of Larcom Street and Walworth Road commemorates the event. His date of birth was given in his obituary in The Times as 26 December 1792, but the parish register of St. Mary's, Newington, London, shows that Babbage was baptised on 6 January 1792, supporting a birth year of 1791. Babbage was one of four children of Benjamin Babbage and Betsy Plumleigh Teape. His father was a banking partner of William Praed in founding Praed's & Co. of Fleet Street, London, in 1801. In 1808, the Babbage family moved into the old Rowdens house in East Teignmouth. Around the age of eight, Babbage was sent to a country school in Alphington near Exeter to recover from a life-threatening fever. For a short time he attended King Edward VI Grammar School in Totnes, South Devon; Babbage then joined the 30-student Holmwood academy, in Baker Street, Enfield, Middlesex, under the Reverend Stephen Freeman. The academy had a library that prompted Babbage's love of mathematics. He studied with two more private tutors after leaving the academy.
The first was a clergyman near Cambridge; through him Babbage encountered Charles Simeon and his evangelical followers. He was then brought home to study at the Totnes school, at age 16 or 17. The second was an Oxford tutor, under whom Babbage reached a level in Classics sufficient to be accepted by Cambridge. Babbage arrived at Trinity College, Cambridge, in October 1810. He was already self-taught in some parts of mathematics, having read Robert Woodhouse and Joseph-Louis Lagrange. As a result, he was disappointed in the standard mathematical instruction available at the university. Babbage, John Herschel, George Peacock, and several other friends formed the Analytical Society in 1812; they were also close to Edward Ryan. In 1812 Babbage transferred to Peterhouse, Cambridge. He was the top mathematician there, but did not graduate with honours; he instead received a degree without examination in 1814. He had defended a thesis that was considered blasphemous in the preliminary public disputation, but it is not known whether this fact is related to his not sitting the examination. Considering his reputation, Babbage quickly made progress: he lectured to the Royal Institution on astronomy in 1815, and was elected a Fellow of the Royal Society in 1816. After graduation, on the other hand, he applied for positions unsuccessfully.
31.
Herman Hollerith
–
Herman Hollerith was an American inventor who developed an electromechanical punched card tabulator to assist in summarizing information and, later, accounting. He was the founder of the Tabulating Machine Company, which was amalgamated in 1911 with three other companies to form a fifth company, the Computing-Tabulating-Recording Company, later renamed IBM. Hollerith is regarded as one of the seminal figures in the development of data processing; his invention of the punched card tabulating machine marks the beginning of the era of data processing systems. Herman Hollerith was born the son of German immigrant Prof. Georg Hollerith from Großfischlingen in Buffalo, New York. In 1882 Hollerith joined the Massachusetts Institute of Technology, where he taught mechanical engineering and conducted his first experiments with punched cards. He died in Washington, D.C. of a heart attack. At the urging of John Shaw Billings, Hollerith developed a mechanism using electrical connections to increment a counter, recording information. A key idea was that a datum could be recorded by the presence or absence of a hole at a specific location on a card; for example, a hole at a given location could indicate marital status. Hollerith determined that data punched in specified locations on a card, arranged in rows and columns, could be counted or sorted mechanically. A description of this system, An Electric Tabulating System, was submitted by Hollerith to Columbia University as his doctoral thesis. By then Hollerith had left teaching. His patent application, titled Art of Compiling Statistics, was filed on September 23, 1884; U.S. Patent 395,782 was granted on January 8, 1889. Hollerith initially did business under his own name, as The Hollerith Electric Tabulating System, specializing in punched card data processing equipment. He built tabulators and other machines under contract for the Census Office, and in 1896 Hollerith founded the Tabulating Machine Company.
Many major census bureaus around the world leased his equipment and purchased his cards, as did major insurance companies. Hollerith's machines were used for censuses in England, Italy, Germany, Russia, Austria, Canada, France, Norway, Puerto Rico, Cuba, and the Philippines. He invented the first automatic card-feed mechanism and the first keypunch. The 1890 Tabulator was hardwired to operate on 1890 Census cards; a control panel in his 1906 Type I Tabulator simplified rewiring for different jobs, and the 1920s removable control panel supported prewiring and near-instant job changing. These inventions were among the foundations of the data processing industry, and Hollerith's punched cards continued in use for almost a century. In 1911 four corporations, including Hollerith's firm, were amalgamated to form a fifth company, the Computing-Tabulating-Recording Company (CTR). Under the presidency of Thomas J. Watson, CTR was renamed International Business Machines Corporation in 1924.
32.
IBM
–
International Business Machines Corporation is an American multinational technology company headquartered in Armonk, New York, United States, with operations in over 170 countries. The company originated in 1911 as the Computing-Tabulating-Recording Company and was renamed International Business Machines in 1924. IBM manufactures and markets computer hardware, middleware and software, and offers hosting and consulting services in areas ranging from mainframe computers to nanotechnology. IBM is also a research organization, holding the record for most patents generated by a business for 24 consecutive years. IBM has continually shifted its business mix by exiting commoditizing markets and focusing on higher-value, more profitable markets. Also in 2014, IBM announced that it would go fabless, continuing to design semiconductors but offloading manufacturing to GlobalFoundries. Nicknamed Big Blue, IBM is one of 30 companies included in the Dow Jones Industrial Average and one of the world's largest employers, with nearly 380,000 employees, known as IBMers. IBM employees have been awarded five Nobel Prizes, six Turing Awards, and ten National Medals of Technology. In the 1880s, technologies emerged that would ultimately form the core of what would become International Business Machines. On June 16, 1911, four companies were amalgamated in New York State by Charles Ranlett Flint, forming a fifth company, the Computing-Tabulating-Recording Company based in Endicott, New York. The five companies had 1,300 employees and offices and plants in Endicott and Binghamton, New York; Dayton, Ohio; Detroit, Michigan; Washington, D.C.; and Toronto. They manufactured machinery for sale and lease, ranging from commercial scales and industrial time recorders, meat and cheese slicers, to tabulators and punched cards. Thomas J. Watson, Sr.,
fired from the National Cash Register Company by John Henry Patterson, called on Flint; Watson joined CTR as General Manager and then, 11 months later, was made President when court cases relating to his time at NCR were resolved. Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies, and his favorite slogan, THINK, became a mantra for each company's employees. During Watson's first four years, revenues more than doubled to $9 million. Watson had never liked the clumsy hyphenated title of the CTR, and in 1924 chose to replace it with the more expansive title International Business Machines. By 1933 most of the subsidiaries had been merged into one company. In 1937, IBM's tabulating equipment enabled organizations to process unprecedented amounts of data, its clients including the U.S. Government. During the Second World War the company produced small arms for the American war effort. In 1949, Thomas Watson, Sr. created IBM World Trade Corporation, a subsidiary of IBM focused on foreign operations, and in 1952 he stepped down after almost 40 years at the company helm. In 1957, the FORTRAN scientific programming language was developed. In 1961, IBM developed the SABRE reservation system for American Airlines, and in 1963 IBM employees and computers helped NASA track the orbital flight of the Mercury astronauts. A year later it moved its headquarters from New York City to Armonk. The latter half of the 1960s saw IBM continue its support of space exploration. On April 7, 1964, IBM announced the first computer system family, the IBM System/360.
33.
Paper tape
–
Punched tape or perforated paper tape is a form of data storage, consisting of a long strip of paper in which holes are punched to store data. Paper tapes constructed from punched cards were used throughout the 19th century for controlling looms. Perforated paper tapes were first used by Basile Bouchon in 1725 to control looms; however, these paper tapes were expensive to create, fragile, and difficult to repair. By 1801 Joseph Marie Jacquard had developed machines to create paper tapes by tying punched cards in a sequence. The resulting paper tape, also called a chain of cards, was stronger and simpler both to create and to repair. This led to the concept of communicating data not as a stream of individual cards, but as one continuous card, or a tape. In 1846, Alexander Bain used punched tape to send telegrams. In the 1880s, Tolbert Lanston invented the Monotype system, which consisted of a keyboard and a composition caster: the tape, punched with the keyboard, was read by the caster. The tape reader used compressed air, which passed through the holes and was directed into certain mechanisms of the caster. The system went into commercial use in 1897 and was in production well into the 1970s, undergoing several changes along the way. Data were represented by the presence or absence of a hole at a particular location. Tapes originally had five rows of holes for data; later tapes had 6, 7 and 8 rows. An early electro-mechanical calculating machine, the Automatic Sequence Controlled Calculator or Harvard Mark I, used paper tape with 24 rows. A row of smaller sprocket holes that were always punched served to feed the tape; later optical readers used the sprocket holes to generate timing pulses. The sprocket holes are offset a bit to one side, making it clear which way to orient the tape in the reader. Text was encoded in several ways.
The earliest standard character encoding was Baudot, which dates back to the 19th century and had 5 holes. The Baudot code was never used directly in teleprinters; instead, modifications such as the Murray code, the Western Union code, International Telegraph Alphabet No. 2, and the American Teletypewriter code were used. Other standards, such as Teletypesetter, FIELDATA and Flexowriter, had 6 holes. In the early 1960s, the American Standards Association led a project to develop a universal code for data processing, which became known as ASCII. This 7-level code was adopted by some users, including AT&T; others, such as Telex, stayed with the earlier codes. Tape for punching was 0.00394 inches (0.1 mm) thick. The two most common widths were 11/16 inch for five-bit codes, and 1 inch for tapes with six or more bits.
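As an illustration of how a character becomes a row of holes, the sketch below punches 7-bit ASCII codes into an 8-position row with an offset sprocket hole. The layout details are illustrative, not a reproduction of any specific tape standard:

```python
def punch(text: str) -> list[str]:
    """Render each character as one tape row: 'o' marks a hole,
    '.' marks the small sprocket feed hole, offset to one side."""
    rows = []
    for ch in text:
        code = ord(ch)                    # 7-bit ASCII fits 8 hole positions
        bits = [(code >> i) & 1 for i in range(8)]
        holes = ["o" if b else " " for b in bits]
        # Sprocket hole placed off-center, between positions 3 and 4,
        # so the reader can tell which way the tape is oriented.
        rows.append("".join(holes[:3]) + "." + "".join(holes[3:]))
    return rows

for row in punch("HI"):
    print(row)
```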
34.
Morse code
–
Morse code is a method of transmitting text information as a series of on-off tones, lights, or clicks that can be directly understood by a skilled listener or observer without special equipment. It is named for Samuel F. B. Morse, an inventor of the telegraph. Each Morse code symbol represents either a text character or a prosign and is represented by a unique sequence of dots and dashes; because many non-English natural languages use more than the 26 Roman letters, extensions of the Morse alphabet exist for those languages. The duration of a dash is three times the duration of a dot, and each dot or dash is followed by a short silence, equal to the dot duration. The letters of a word are separated by a space equal to three dots, and the words are separated by a space equal to seven dots. The dot duration is the basic unit of time measurement in Morse code transmission. To increase the speed of communication, the code was designed so that the length of each character in Morse varies approximately inversely with its frequency of occurrence in English; thus the most common letter in English, the letter E, has the shortest code, a single dot. Morse code is used by some amateur radio operators, although knowledge of and proficiency with it is no longer required for licensing in most countries. Pilots and air traffic controllers usually need only a cursory understanding. Aeronautical navigational aids, such as VORs and NDBs, constantly identify in Morse code. Compared to voice, Morse code is less sensitive to poor signal conditions, yet still comprehensible to humans without a decoding device. Morse is, therefore, an alternative to synthesized speech for sending automated data to skilled listeners on voice channels. Many amateur radio repeaters, for example, identify with Morse. In an emergency, Morse code can be sent by improvised methods that can be easily keyed on and off, making it one of the simplest and most versatile methods of telecommunication. The most common distress signal is SOS: three dots, three dashes, and three dots, internationally recognized by treaty.
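The timing rules above translate directly into code. The sketch below uses only a few letters of the Morse table for illustration:

```python
# Timing in dot units: dot = 1 on, dash = 3 on; gaps: 1 between symbols
# of a letter, 3 between letters, 7 between words (all "off").
MORSE = {"E": ".", "T": "-", "S": "...", "O": "---"}  # small excerpt

def keying(message: str) -> list[tuple[str, int]]:
    """Expand a message into a list of (state, duration-in-dots) pairs."""
    out = []
    for wi, word in enumerate(message.upper().split()):
        if wi:
            out.append(("off", 7))            # word gap
        for li, letter in enumerate(word):
            if li:
                out.append(("off", 3))        # letter gap
            for si, sym in enumerate(MORSE[letter]):
                if si:
                    out.append(("off", 1))    # gap inside a letter
                out.append(("on", 1 if sym == "." else 3))
    return out

print(keying("SOS"))  # dot dot dot / dash dash dash / dot dot dot
```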
Beginning in 1836, the American artist Samuel F. B. Morse, the American physicist Joseph Henry, and Alfred Vail developed an electrical telegraph system. This system sent pulses of current along wires which controlled an electromagnet that was located at the receiving end of the telegraph system. A code was needed to transmit natural language using only these pulses and the silence between them; around 1837, Morse therefore developed an early forerunner to the modern International Morse code. Around the same time, Carl Friedrich Gauss and Wilhelm Eduard Weber, as well as Carl August von Steinheil, had already used codes with varying lengths for their telegraphs. In 1837, William Cooke and Charles Wheatstone in England began using a telegraph that also used electromagnets in its receivers.
35.
Teletype
–
A teleprinter is an electromechanical typewriter that can be used to send and receive typed messages from point to point and from point to multipoint over various types of communications channels. Teleprinters were adapted to provide an interface to early mainframe computers and minicomputers, sending typed data to the computer; some models could also be used to create punched tape for data storage. Teleprinters could use a variety of different communication media, including a pair of wires, dedicated non-switched telephone circuits, and switched networks that operated similarly to the public telephone network. A teleprinter attached to a modem could also communicate through standard switched public telephone lines; this latter configuration was often used to connect teleprinters to remote computers, particularly in time-sharing environments. Teleprinters have largely been replaced by fully electronic computer terminals, which usually use a computer monitor instead of a printer. Teleprinters were invented in order to send and receive messages without the need for operators trained in the use of Morse code: a system of two teleprinters, with one operator trained to use a typewriter, replaced two trained Morse code operators. The teleprinter system improved message speed and delivery time, making it possible for messages to be flashed across a country with little manual intervention. In 1835 Samuel Morse devised a recording telegraph, and in 1841 Alexander Bain devised a printing telegraph. By 1846, the Morse telegraph service was operational between Washington, D.C. and New York. Royal Earl House patented his printing telegraph that same year. He linked two 28-key piano-style keyboards by wire; each piano key represented a letter of the alphabet and, when pressed, caused the corresponding letter to print at the receiving end. A shift key gave each main key two optional values. A 56-character typewheel at the sending end was synchronised to coincide with a similar wheel at the receiving end.
It was thus an early example of a data transmission system. House's equipment could transmit around 40 instantly readable words per minute; the printer could copy and print out up to 2,000 words per hour. This invention was first put in operation and exhibited at the Mechanics' Institute in New York in 1844. Landline teleprinter operations began in 1849, when a circuit was put in service between Philadelphia and New York City. In 1855, David Edward Hughes introduced a machine built on the work of Royal Earl House. Émile Baudot designed a system using a five-unit code in 1874; the Baudot system was adopted in France in 1877 and later used extensively elsewhere. During 1901 Baudot's code was modified by Donald Murray, prompted by his development of a typewriter-like keyboard.
36.
Stock ticker machine
–
Ticker tape was the earliest digital electronic communications medium, transmitting stock price information over telegraph lines, in use from around 1870 through 1970. The term ticker came from the sound made by the machine as it printed. Paper ticker tape became obsolete in the 1960s, as television and computers were increasingly used to transmit financial information. The concept of the stock ticker lives on, however, in the scrolling electronic tickers seen on brokerage walls. Ticker tape stock price telegraphs were invented in 1867 by Edward A. Calahan, an employee of the American Telegraph Company; the first stock price ticker system using a telegraphic printer is dated by some accounts to 1863, also credited to Calahan. Early versions of stock tickers provided the first mechanical means of conveying stock prices over a long distance via telegraph wiring. In its infancy, the ticker used the same symbols as Morse code as a medium for conveying messages. Previously, stock prices had been hand-delivered via written or verbal messages; since the useful time-span of individual quotes is very brief, they generally had not been sent long distances, and aggregated summaries, typically for one day, were sent instead. The increase in speed provided by the ticker allowed for faster, more exact sales. Since the ticker ran continuously, updates to a stock's price became effective much faster whenever the price changed; for the first time, trades were being done in what is now thought of as near real-time. By the 1880s, there were about a thousand stock tickers installed in the offices of New York bankers and brokers. In 1890, members of the exchange agreed to create the New York Quotation Co., buying up all other ticker companies to ensure accuracy of reporting of price. Stock ticker machines are an ancestor of the modern computer printer, being one of the first applications of transmitting text over a wire to a printing device, based on the printing telegraph.
This used the technology of the then-recently invented telegraph machines, with the advantage that the output was readable text instead of dots and dashes. A special typewriter designed for operation over telegraph wires was used at the opposite end of the wire from the ticker machine: text typed on the typewriter was displayed on the machine at the other end of the connection. The machines printed a series of ticker symbols, each followed by brief information about the price of that company's stock. The word "ticker" comes from the tapping noise the machines made while printing. Pulses on the line made a letter wheel turn step by step until the correct symbol was reached, at which point it was printed. A typical 32-symbol letter wheel had to turn on average about 15 steps before a letter could be printed, resulting in a very slow printing speed of roughly one character per second. Newer and more efficient tickers became available in the 1930s, but ticker machines became obsolete in the 1960s, replaced by computer networks; none have been manufactured for use in decades
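The average-steps figure quoted above can be checked with a small simulation. Under the assumptions that the wheel steps in one direction only and that successive symbols are uniformly random, rotating from one symbol to the next takes on average 15.5 forward steps on a 32-position wheel, in line with the article's figure:

```python
# Model a unidirectional 32-position letter wheel: to print the next
# symbol, the wheel steps forward until that symbol's position is reached.
# Assumptions: one-directional stepping, uniformly random symbol pairs.
N = 32  # symbols on the wheel

def steps(current: int, target: int, n: int = N) -> int:
    """Forward steps needed to rotate from `current` to `target`."""
    return (target - current) % n

# Average over all equally likely (current, target) pairs:
avg = sum(steps(c, t) for c in range(N) for t in range(N)) / (N * N)
print(avg)  # -> 15.5
```

At a few stepping pulses per second, fifteen-odd steps per character is exactly what limits such a printer to about one character per second.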
37.
Ralph Hartley
–
Ralph Vinton Lyon Hartley was an electronics researcher. He invented the Hartley oscillator and the Hartley transform, and contributed to the foundations of information theory. Hartley was born in Sprucemont, Nevada and attended the University of Utah, receiving an A.B. degree in 1909. He became a Rhodes Scholar at St John's College, Oxford, in 1910, received a B.A. degree in 1912, and married Florence Vail of Brooklyn on March 21, 1916. He returned to the United States and was employed at the Research Laboratory of the Western Electric Company; in 1915 he was in charge of radio receiver development for the Bell System transatlantic radiotelephone tests. For this he developed the Hartley oscillator and also a circuit to eliminate triode singing resulting from internal coupling. A patent for the oscillator was filed on June 1, 1915. During World War I, Hartley established the principles that led to sound-type directional finders. Following the war he returned to Western Electric, and he later worked at Bell Laboratories. His 1928 paper is considered the single most important prerequisite for Shannon's theory of information. After about 10 years of illness he returned to Bell Labs in 1939 as a consultant. His research there largely paralleled work being done at the same time in Soviet Russia by Leonid Mandelstam; a short review and extensive bibliography was published by Mumford in 1960. The Bell Laboratories work was carried on under Hartley's guidance during the 1930s and 1940s by John Burton and Eugene Peterson. During World War II he was involved with servomechanism problems. He retired from Bell Labs in 1950 and died on May 1, 1970. His legacy includes the naming of the hartley, a unit of information equal to one decimal digit, after him. 
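Hartley's 1928 information measure is H = n log₁₀ s for n selections from an alphabet of s equally likely symbols; taken in base 10, the unit is the hartley, so one decimal digit carries exactly one hartley. A small illustrative computation:

```python
import math

def hartleys(num_symbols: int, alphabet_size: int) -> float:
    """Information in hartleys of num_symbols equally likely selections
    from an alphabet of alphabet_size (Hartley's H = n * log10(s))."""
    return num_symbols * math.log10(alphabet_size)

print(hartleys(1, 10))  # one decimal digit = 1.0 hartley
print(hartleys(1, 2))   # one binary digit ~ 0.301 hartley
```

The same formula in base 2 gives the bit, which is the form Shannon later built on.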
Honors:
- IRE Medal of Honor, 1946, for his oscillator and his information proportionality law. This award from the Institute of Radio Engineers, which later merged into the Institute of Electrical and Electronics Engineers, became the IEEE Medal of Honor.
- Fellow of the American Association for the Advancement of Science.

Publications (probably incomplete):
- "The Function of Phase Difference in the Binaural Location of Pure Tones", Physical Review, Volume 13, Issue 6, pp. 373–385.
- with Fry, T. C., "The Binaural Location of Pure Tones", Physical Review, Volume 18, Issue 6.
- "Relations of Carrier and Side-Bands in Radio Transmission", Proceedings of the IRE, Volume 11, Issue 1, pp. 34–56.
- "Transmission of Information", Bell System Technical Journal, Volume 7, Number 3.
- "A Wave Mechanism of Quantum Phenomena", Physical Review, Volume 33, Issue 2, p. 289.
- "Oscillations in Systems with Non-Linear Reactance", Bell System Technical Journal, Volume 15, Number 3.
- "Excitation of Raman Spectra with the Aid of Optical Catalysers", Nature, Volume 139, p. 329.
- "Steady State Delay as Related to Aperiodic Signals", Bell System Technical Journal, Volume 20, Number 2.
- "A More Symmetrical Fourier Analysis Applied to Transmission Problems", Proceedings of the IRE, Volume 30, Number 2, pp. 144–150
38.
Claude E. Shannon
–
Claude Elwood Shannon was an American mathematician, electrical engineer, and cryptographer known as the father of information theory. Shannon is noted for having founded information theory with his 1948 paper, A Mathematical Theory of Communication. Shannon also contributed to the field of cryptanalysis for national defense during World War II, including his work on codebreaking. Shannon was born in Petoskey, Michigan, and grew up in Gaylord. His father, Claude Sr., a descendant of early settlers of New Jersey, was a self-made businessman and, for a while, a judge of probate; his mother, Mabel Wolf Shannon, was a language teacher. Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things, and his best subjects were science and mathematics. At home he constructed such devices as models of planes and a model boat. While growing up, he worked under Andrew Coltrey as a messenger for the Western Union company. His childhood hero was Thomas Edison, whom he later learned was a distant cousin; both were descendants of John Ogden, a colonial leader and an ancestor of many distinguished people. Shannon was apolitical and an atheist. In 1932, Shannon entered the University of Michigan, where he was introduced to the work of George Boole. He graduated in 1936 with two degrees, one in electrical engineering and the other in mathematics. In 1936, Shannon began his graduate studies in electrical engineering at MIT, where he worked on Vannevar Bush's differential analyzer. While studying the complicated ad hoc circuits of this analyzer, Shannon designed switching circuits based on Boole's concepts. In 1937, he wrote his master's degree thesis, A Symbolic Analysis of Relay and Switching Circuits; a paper from this thesis was published in 1938. 
In this work, Shannon proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays then used in telephone call routing switches. Next, he expanded this concept, proving that these circuits could solve all problems that Boolean algebra could solve; in the last chapter, he presents diagrams of several circuits. Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Shannon's work became the foundation of digital design as it became widely known in the electrical engineering community during and after World War II. The theoretical rigor of Shannon's work superseded the ad hoc methods that had prevailed previously. Howard Gardner called Shannon's thesis "possibly the most important, and also the most noted, master's thesis of the century"
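The correspondence Shannon established can be sketched in a few lines: relay contacts wired in series conduct only when all are closed (Boolean AND), while contacts in parallel conduct when any is closed (Boolean OR). The circuit below is a made-up illustration of that mapping, not an example from the thesis:

```python
# A closed relay contact maps to True, an open one to False.
def series(*contacts: bool) -> bool:
    """Series contacts conduct only if ALL are closed -> Boolean AND."""
    return all(contacts)

def parallel(*contacts: bool) -> bool:
    """Parallel contacts conduct if ANY is closed -> Boolean OR."""
    return any(contacts)

# A made-up circuit: contact a in series with (b in parallel with c),
# i.e. the Boolean expression a AND (b OR c).
def circuit(a: bool, b: bool, c: bool) -> bool:
    return series(a, parallel(b, c))

print(circuit(True, False, True))   # -> True  (a closed, c closed)
print(circuit(True, False, False))  # -> False (neither b nor c closed)
```

Simplifying such an expression with Boolean algebra directly tells the engineer how many physical relays the circuit actually needs, which is what made the thesis so useful for telephone switching.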
39.
A Mathematical Theory of Communication
–
A Mathematical Theory of Communication is an influential 1948 article by mathematician Claude E. Shannon, and was the founding work of the field of information theory. It was republished in 1949 as a book titled The Mathematical Theory of Communication, in which the article was renamed accordingly; the book contains an additional piece by Warren Weaver providing an overview of the theory for a more general audience. The full article is hosted by the IEEE
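The paper's central quantity is the entropy of an information source, H = −Σ pᵢ log₂ pᵢ, measured in bits. A minimal illustrative computation (the example distributions are arbitrary, chosen only to show the formula):

```python
import math

def entropy(probs) -> float:
    """Shannon entropy in bits: H = -sum(p * log2(p)) over p > 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # -> 1.0 (a fair coin carries 1 bit)
print(entropy([0.25] * 4))   # -> 2.0 (four equal outcomes carry 2 bits)
```

Entropy sets the limit for lossless compression of a source and, combined with a channel's noise characteristics, the maximum rate of reliable communication.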