1.
Digital electronics
–
Digital electronics or digital circuits are electronics that handle digital signals rather than the continuous ranges used in analog electronics. Any level within a band of values represents the same information state. In most cases the number of states is two, and they are represented by two voltage bands: one near a reference value and the other near the supply voltage. These correspond to the false and true values of the Boolean domain respectively. Digital techniques are useful because it is easier to get an electronic device to switch into one of a number of known states than to accurately reproduce a continuous range of values. Digital electronic circuits are made from large assemblies of logic gates. The binary number system was refined by Gottfried Wilhelm Leibniz, who established that by using the binary system the principles of arithmetic and logic could be joined. Digital logic as we know it was the brain-child of George Boole; Boole died young, but his ideas lived on. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits; eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification of the Fleming valve in 1907 can be used as an AND logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of the Tractatus Logico-Philosophicus. Walther Bothe, inventor of the coincidence circuit, received part of the 1954 Nobel Prize in Physics. Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic computers were developed; originally they were the size of a room, consuming as much power as several hundred modern personal computers. The Z3 was a computer designed by Konrad Zuse and finished in 1941. 
It was the world's first working programmable, fully automatic digital computer, and its operation was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, and the bipolar junction transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the second generation of computers
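The two-voltage-band abstraction described above can be sketched in a few lines. This is a minimal illustration, not taken from the text: the band limits VIL_MAX and VIH_MIN are hypothetical values loosely modeled on 5 V TTL conventions.

```python
# Hypothetical band limits (assumption: roughly 5 V TTL-style thresholds).
VIL_MAX = 0.8   # at or below this, the signal counts as logic 0 (near reference)
VIH_MIN = 2.0   # at or above this, the signal counts as logic 1 (near supply)

def logic_level(voltage):
    """Map a measured voltage to a Boolean state, or None if indeterminate."""
    if voltage <= VIL_MAX:
        return False
    if voltage >= VIH_MIN:
        return True
    return None  # forbidden zone between the bands: state is undefined

print(logic_level(0.3))   # False (logic 0)
print(logic_level(4.7))   # True  (logic 1)
print(logic_level(1.4))   # None  (between the bands)
```

Any voltage inside a band maps to the same state, which is why small amounts of analog noise do not change the information carried by a digital signal.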
2.
Finite-state machine
–
A finite-state machine (FSM) or finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to external inputs; such a change is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. The finite-state machine has less computational power than some other models of computation such as the Turing machine. This distinction means there are tasks that a Turing machine can do but an FSM cannot, because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory. An example of a mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry and preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted. Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two inputs that affect its state: putting a coin in the slot and pushing the arm. In the locked state, pushing on the arm has no effect, no matter how many times the input push is given. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; however, a customer pushing through the arms, giving a push input, shifts the state back to Locked. 
Each state is represented by a node, and edges show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition; an input that doesn't cause a change of state is represented by a circular arrow returning to the original state. The arrow into the Locked node from the dot indicates it is the initial state
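The turnstile described above can be written directly as a transition table. This is a minimal sketch: the state and input names mirror the text, and the dictionary maps each (state, input) pair to the next state.

```python
# Turnstile FSM: two states, two inputs, four transitions.
TRANSITIONS = {
    ("Locked", "coin"):   "Unlocked",  # a coin unlocks the arms
    ("Locked", "push"):   "Locked",    # pushing a locked arm has no effect
    ("Unlocked", "coin"): "Unlocked",  # extra coins have no effect
    ("Unlocked", "push"): "Locked",    # one customer passes; arms relock
}

def run(inputs, state="Locked"):
    """Feed a sequence of inputs to the machine and return the final state."""
    for event in inputs:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["push", "coin", "push"]))  # Locked
print(run(["coin", "coin"]))          # Unlocked
```

The table is the whole definition of the machine: listing the states, the initial state, and the transitions is exactly what the article says an FSM consists of.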
3.
Binary number
–
The base-2 system is a positional notation with a radix of 2. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used by almost all modern computers. Each digit is referred to as a bit. The modern binary number system was devised by Gottfried Leibniz in 1679 and appears in his article Explication de l'Arithmétique Binaire. Systems related to binary numbers have appeared earlier in multiple cultures including ancient Egypt, China, and India; Leibniz was specifically inspired by the Chinese I Ching. The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions and Horus-Eye fractions, and the method used for ancient Egyptian multiplication is also closely related to binary numbers. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus. The I Ching dates from the 9th century BC in China. The binary notation in the I Ching is used to interpret its quaternary divination technique and is based on the taoistic duality of yin and yang. Eight trigrams and a set of 64 hexagrams, analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou Dynasty of ancient China. The Song Dynasty scholar Shao Yong rearranged the hexagrams in a format that resembles modern binary numbers. The Indian scholar Pingala developed a binary system for describing prosody, using binary numbers in the form of short and long syllables; Pingala's Hindu classic titled Chandaḥśāstra describes the formation of a matrix in order to give a unique value to each meter. The binary representations in Pingala's system increase towards the right. The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450. Slit drums with binary tones are used to encode messages across Africa, and sets of binary combinations similar to the I Ching have also been used in traditional African divination systems such as Ifá as well as in medieval Western geomancy. 
The base-2 system utilized in geomancy had long been applied in sub-Saharan Africa. Leibniz's system uses 0 and 1, like the modern binary numeral system. Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own beliefs as a Christian. Binary numerals were central to Leibniz's theology; he believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing, writing that a concept that "is not easy to impart to the pagans, is the creation ex nihilo through God's almighty power". In 1854, British mathematician George Boole published a paper detailing an algebraic system of logic that would become known as Boolean algebra
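The ancient Egyptian multiplication method mentioned above is essentially binary arithmetic: one factor is repeatedly halved (reading off its bits) while the other is doubled, and the doublings corresponding to 1-bits are summed. A short sketch of the idea:

```python
def egyptian_multiply(a, b):
    """Multiply by repeated halving of a and doubling of b (peasant method)."""
    total = 0
    while a > 0:
        if a & 1:        # lowest remaining bit of a is 1
            total += b   # keep this doubling of b
        a >>= 1          # halve a (drop the bit just examined)
        b <<= 1          # double b
    return total

print(egyptian_multiply(13, 21))  # 273, since 13 = 0b1101 selects 21 + 84 + 168
```

Each halving step examines one binary digit of the multiplier, which is why the Rhind Papyrus technique is regarded as an early use of base-2 decomposition.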
4.
Gray code
–
The reflected binary code, also known as Gray code after Frank Gray, is a binary numeral system where two successive values differ in only one bit. The reflected binary code was originally designed to prevent spurious output from electromechanical switches. Today, Gray codes are used to facilitate error correction in digital communications such as digital terrestrial television. Bell Labs researcher Frank Gray introduced the term reflected binary code in his 1947 patent application, deriving the name from the fact that the code may be built up from the conventional binary code by a sort of reflection process. The code was named after Gray by others who used it. Two different 1953 patent applications use Gray code as a name for the reflected binary code; one of those also lists minimum error code. A 1954 patent application refers to the Bell Telephone Gray code. Many devices indicate position by closing and opening switches. In the transition between the two positions shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position; even without keybounce, the transition might look like 011 — 001 — 101 — 100. When the switches appear to be in position 001, the observer cannot tell if that is the real position 001 or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic, then the sequential system may store a false value. A Gray code avoids this problem because only one switch changes at a time, so there is never any ambiguity of position. In the standard Gray coding the least significant bit follows a repetitive pattern of 2 on, 2 off; the next digit a pattern of 4 on, 4 off. These codes are also known as single-distance codes, reflecting the Hamming distance of 1 between adjacent codes. Reflected binary codes were applied to mathematical puzzles before they became known to engineers; Martin Gardner wrote a popular account of the Gray code in his August 1972 Mathematical Games column in Scientific American. 
The French engineer Émile Baudot used Gray codes in telegraphy in 1878, and he received the French Legion of Honor medal for his work. The Gray code is sometimes attributed, incorrectly, to Elisha Gray. The method and apparatus were patented in 1953, and the name of Gray stuck to the codes. The PCM tube apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall. Gray codes are used in position encoders, in preference to straightforward binary encoding
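The standard reflected binary code has a particularly compact conversion from ordinary binary: n XOR (n shifted right by one). A short sketch, with the inverse conversion alongside:

```python
def binary_to_gray(n):
    """Convert an ordinary binary integer to its reflected Gray code."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the conversion by folding the bits back down with XOR."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [binary_to_gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```

Listing the first eight codes shows the single-distance property the article describes: each value differs from its neighbour in exactly one bit, so a position sensor stepping between adjacent values can never read a spurious intermediate code.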
5.
Binary decoder
–
In digital electronics, a binary decoder is a combinational logic circuit that converts a binary integer value to an associated pattern of output bits. Decoders are used in a variety of applications, including data demultiplexing and seven-segment displays. In addition to data inputs, some decoders also have one or more enable inputs; when the enable input is negated, all outputs are forced to their inactive states. Depending on its function, a binary decoder will convert binary information from n input signals to as many as 2^n unique output signals. Some decoders have fewer than 2^n output lines; in such cases, at least one output pattern will be repeated for different input values. A binary decoder is implemented as either a stand-alone integrated circuit or as part of a more complex IC. In the latter case the decoder may be synthesized by means of a hardware description language such as VHDL or Verilog. Widely used decoders are often available in the form of standardized ICs. A 1-of-n binary decoder has n output bits. This type of decoder asserts exactly one of its n output bits, or none of them; the address of the activated output is specified by the integer input value. For example, output bit number 0 is selected when the integer value 0 is applied to the inputs. Examples of this type of decoder include the 3-to-8 line decoder, which activates one of eight output bits for each input value from 0 to 7, the range of integer values that can be expressed in three bits; similarly, a 4-to-16 line decoder activates one of 16 outputs for each 4-bit input in the integer range. A BCD-to-decimal decoder has ten output bits. It accepts an input consisting of a binary-coded decimal integer value and activates one specific output for each input value. All outputs are held inactive when an invalid (non-BCD) value is applied to the inputs. 
A demultiplexer is a 1-of-n binary decoder that is used to route a data bit to one of its n outputs while all other outputs remain inactive. Code translators differ from 1-of-n decoders in that multiple output bits may be active at the same time. An example of this is a seven-segment decoder, which converts an integer into the combination of segment control signals needed to display the value on a seven-segment display digit. One variant of the seven-segment decoder is the BCD-to-seven-segment decoder; this decoder function is available in standard ICs such as the CMOS 4511. See also: Sum-addressed decoder, Multiplexer, Priority encoder
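The 1-of-n behaviour described above can be sketched as a behavioural model of a 3-to-8 line decoder with an active-high enable: exactly one of the 2^n outputs is asserted for each input value, and all outputs go inactive when enable is negated.

```python
def decode_3to8(value, enable=True):
    """Return the 8 output bits of a 3-to-8 line decoder for a 3-bit input."""
    if not enable:
        return [0] * 8          # enable negated: all outputs inactive
    return [1 if i == value else 0 for i in range(8)]

print(decode_3to8(5))           # [0, 0, 0, 0, 0, 1, 0, 0] - only output 5 asserted
print(decode_3to8(5, False))    # [0, 0, 0, 0, 0, 0, 0, 0]
```

Real parts such as the 74138 use active-low outputs and multiple enable pins, but the one-hot pattern is the same.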
6.
Ring counter
–
A ring counter is a type of counter composed of a circular shift register: the output of the last shift register is fed to the input of the first register. For example, in a 4-register one-hot counter with initial register values of 1000, the repeating pattern is 1000, 0100, 0010, 0001, 1000, and so on. Note that one of the registers must be pre-loaded with a 1 in order to operate properly; for example, a 4-register counter with initial values of 0000 would remain stuck at 0000. The Johnson counter generates a Gray code, a code in which adjacent states differ by one bit. The circuit of an Overbeck counter is shown here. Ring counters are used in logic design to create complicated finite-state machines. A binary counter will require an adder circuit which is substantially more complex than a ring counter, and the worst-case propagation delay of such a circuit will be proportional to the number of bits in the code, whereas the propagation delay of a ring counter is a constant regardless of the number of bits. The complex combinational logic of an adder can also create timing errors which may result in erratic hardware performance. Last, ring counters with a Hamming distance of 2 allow the detection of single-bit upsets that can occur in hazardous environments. The disadvantage of ring counters is that they are lower-density codes: a binary counter can represent 2^N states, where N is the number of bits in the code, whereas an Overbeck counter can represent only N states and a Johnson counter can represent only 2N states. This may be an important consideration in hardware implementations where registers are more expensive than combinational logic. See also: Counter, Ring oscillator, Linear feedback shift register
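The two counters discussed above differ only in what is fed back into the first register: a straight (Overbeck) ring counter recirculates the last bit, while a Johnson counter feeds back its complement, giving 2N states instead of N. A short simulation:

```python
def step_ring(bits):
    """One clock of an Overbeck ring counter: recirculate the last bit."""
    return [bits[-1]] + bits[:-1]

def step_johnson(bits):
    """One clock of a Johnson counter: feed back the complemented last bit."""
    return [1 - bits[-1]] + bits[:-1]

state = [1, 0, 0, 0]
for _ in range(4):
    print(state)                # 1000, 0100, 0010, 0001, then repeats
    state = step_ring(state)
```

Starting the ring counter from 0000 would shift zeros forever, which is why one register must be pre-loaded with a 1; the Johnson counter has no such requirement, and from 0000 it walks through all eight of its 2N states (1000, 1100, 1110, 1111, 0111, 0011, 0001, 0000).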
7.
Flip flop (electronics)
–
In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store state information; such a circuit is a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of electronic systems used in computers and communications. Flip-flops and latches are used as data storage elements. A flip-flop stores a single bit of data; one of its two states represents a one and the other represents a zero. Such data storage can be used for storage of state: when used in a finite-state machine, the output and next state depend not only on the current input, but also on the current state. A flip-flop can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal. Flip-flops can be either simple or clocked. Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive; that is, when a latch is enabled it becomes transparent, while a flip-flop's output only changes on a single type of clock edge. The first electronic flip-flop was invented in 1918 by the British physicists William Eccles and F. W. Jordan. It was initially called the Eccles–Jordan trigger circuit and consisted of two active elements. Early flip-flops were known variously as trigger circuits or multivibrators. According to P. L. Lindley, an engineer at the US Jet Propulsion Laboratory, Lindley was at the time working at Hughes Aircraft under Eldred Nelson, who had coined the term JK for a flip-flop which changed states when both inputs were on. The other names were coined by Phister, and they differ slightly from some of the definitions given below. Lindley explains that he heard the story of the JK flip-flop from Eldred Nelson; flip-flops in use at Hughes at the time were all of the type that came to be known as J-K. Flip-flops can be simple or clocked. 
The simple ones are described as latches, while the clocked ones are described as flip-flops. Clocked devices are designed for synchronous systems; such devices ignore their inputs except at the transition of a dedicated clock signal. Clocking causes the flip-flop either to change or to retain its output signal based upon the values of the input signals at the transition. Some flip-flops change output on the rising edge of the clock, others on the falling edge
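The latch/flip-flop distinction described above can be sketched behaviourally: the level-sensitive D latch is transparent while its enable is high, whereas the edge-triggered D flip-flop samples its input only on a rising clock edge.

```python
class DLatch:
    """Level-sensitive: output follows the input while enable is high."""
    def __init__(self):
        self.q = 0
    def update(self, d, enable):
        if enable:          # transparent when enabled
            self.q = d
        return self.q

class DFlipFlop:
    """Edge-sensitive: input is captured only on a rising clock edge."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0
    def update(self, d, clk):
        if clk and not self._prev_clk:  # rising edge detected
            self.q = d
        self._prev_clk = clk
        return self.q

ff = DFlipFlop()
print(ff.update(1, 0))  # 0: no clock edge yet
print(ff.update(1, 1))  # 1: input captured on the rising edge
print(ff.update(0, 1))  # 1: clock still high, input changes are ignored
```

This sketch models a positive-edge-triggered device; a negative-edge-triggered one would test for a falling edge instead.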
8.
Address decoder
–
In digital electronics, an address decoder is a binary decoder that has two or more inputs for address bits and one or more outputs for device selection signals. When the address for a particular device appears on the address inputs, the decoder asserts the selection output for that device. A dedicated, single-output address decoder may be incorporated into each device on an address bus, or a single address decoder with n address input bits can serve up to 2^n devices. Several members of the 7400 series of integrated circuits can be used as address decoders; for example, when used as an address decoder, the 74154 provides four address inputs and sixteen device selector outputs. An address decoder is a particular use of the binary decoder circuit known as a demultiplexer or demux. Address decoders are fundamental building blocks for systems that use buses. They are represented in all integrated circuit families and processes and in all standard FPGA and ASIC libraries, and they are discussed in textbooks on digital logic design
9.
Priority encoder
–
A priority encoder is a circuit or algorithm that compresses multiple binary inputs into a smaller number of outputs. The output of a priority encoder is the binary representation of the index (starting from zero) of the most significant active input bit. Priority encoders are often used to control interrupt requests by acting on the highest-priority interrupt input: if two or more inputs are asserted at the same time, the input having the highest priority takes precedence. An additional output V indicates whether any input is valid. The priority encoder is an improvement on a simple encoder circuit. A simple encoder circuit is a one-hot to binary converter; that is, if there are 2^n input lines, and at most one of them will ever be high, the binary code of that active line is produced on the n output lines. For example, a 4-to-2 simple encoder takes 4 input bits and produces 2 output bits. If the input circuit can guarantee at most a single active input, a simple encoder is a better choice than a priority encoder, since it requires less logic to implement
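The behaviour described above can be sketched as a small function: the output is the index of the highest-priority (here, most significant) active input, plus the valid flag V, which is false when no input is asserted.

```python
def priority_encode(inputs):
    """inputs[0] is bit 0 (lowest priority); returns (index, valid)."""
    for i in reversed(range(len(inputs))):  # scan from highest priority down
        if inputs[i]:
            return i, True
    return 0, False                          # nothing asserted: V is false

print(priority_encode([1, 0, 1, 0]))  # (2, True): input 2 outranks input 0
print(priority_encode([0, 0, 0, 0]))  # (0, False): no valid input
```

The first example shows the defining property: inputs 0 and 2 are asserted simultaneously, and the higher-priority input 2 wins.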
10.
Natural language processing
–
The history of NLP generally starts in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence". The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem. Little further research in machine translation was conducted until the late 1980s. ELIZA, a simulation of a Rogerian psychotherapist written by Joseph Weizenbaum in the mid-1960s, used almost no information about human thought or emotion yet sometimes provided a startlingly human-like interaction; when the "patient" exceeded the very small knowledge base, ELIZA might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?". During the 1970s many programmers began to write conceptual ontologies, which structured real-world information into computer-understandable data; examples are MARGIE, SAM, PAM, TaleSpin, QUALM, Politics, and Plot Units. During this time, many chatterbots were written, including PARRY and Racter. Up to the 1980s, most NLP systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in NLP with the introduction of machine learning algorithms for language processing. Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules. The cache language models upon which many speech recognition systems now rely are examples of statistical models. Many of the early successes occurred in the field of machine translation, due especially to work at IBM Research. However, most other systems depended on corpora specifically developed for the tasks implemented by those systems; as a result, a great deal of research has gone into methods of more effectively learning from limited amounts of data. Recent research has focused on unsupervised and semi-supervised learning algorithms. 
Such algorithms are able to learn from data that has not been hand-annotated with the desired answers. Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data; however, there is an enormous amount of non-annotated data available. Since the so-called statistical revolution in the late 1980s and mid-1990s, much NLP research has relied heavily on machine learning. Formerly, many language-processing tasks typically involved the direct hand coding of rules, which is not in general robust to natural-language variation. The machine-learning paradigm calls instead for using statistical inference to automatically learn such rules through the analysis of large corpora of typical real-world examples. Many different classes of machine learning algorithms have been applied to NLP tasks. These algorithms take as input a set of features that are generated from the input data. Some of the algorithms, such as decision trees, produced systems of hard if-then rules similar to the systems of hand-written rules that were then common
12.
Field-programmable gate array
–
A field-programmable gate array is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence field-programmable. The FPGA configuration is generally specified using a hardware description language (HDL). Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Contemporary field-programmable gate arrays have large resources of logic gates and RAM blocks to implement complex digital computations. As FPGA designs employ very fast I/Os and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time; floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function that an ASIC could perform. Some FPGAs have analog features in addition to digital functions; fairly common are differential comparators on input pins designed to be connected to differential signaling channels. The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field; however, their programmable logic was hard-wired between logic gates. In the late 1980s, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates; Casselman was successful, and a patent related to the system was issued in 1992. Some of the foundational concepts and technologies for programmable logic arrays, gates, and logic blocks are founded in earlier patents. Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. 
The XC2064 had 64 configurable logic blocks, each with two three-input lookup tables. More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention. Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s; by 1993, Actel was serving about 18 percent of the market. By 2010, Altera, Actel and Xilinx together represented approximately 77 percent of the FPGA market. The 1990s were an explosive period for FPGAs, both in sophistication and in volume of production. In the early 1990s, FPGAs were primarily used in telecommunications; by the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications. This work mirrors the architecture by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group, which combined a reconfigurable CPU architecture on a chip called the SB24; that work was done in 1982. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture
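FPGA logic blocks are commonly built around small lookup tables like the XC2064's three-input LUTs: a k-input LUT stores 2^k configuration bits and can therefore realise any Boolean function of its k inputs. This sketch configures one hypothetical three-input LUT as a majority function; the example function is an arbitrary illustration, not taken from the text.

```python
def make_lut3(truth_table):
    """Build a 3-input LUT. truth_table[i] is the output for inputs a,b,c
    packed as the index i = a*4 + b*2 + c (eight entries in total)."""
    def lut(a, b, c):
        return truth_table[a * 4 + b * 2 + c]
    return lut

# Configure the LUT as a 3-input majority function: output 1 when >= 2 inputs are 1.
majority = make_lut3([0, 0, 0, 1, 0, 1, 1, 1])
print(majority(1, 1, 0))  # 1: two of the three inputs are high
print(majority(1, 0, 0))  # 0
```

Reprogramming the FPGA amounts to loading different truth-table bits into each LUT and reconfiguring the interconnect between blocks, which is why a single part can implement any logic an ASIC could.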
13.
Programmable Array Logic
–
Programmable Array Logic is a family of programmable logic device semiconductors used to implement logic functions in digital circuits, introduced by Monolithic Memories, Inc. (MMI) in March 1978. MMI obtained a trademark on the term PAL for use in Programmable Semiconductor Logic Circuits; the trademark is now held by Lattice Semiconductor. PAL devices consisted of a small PROM core and additional output logic used to implement particular desired logic functions with few components. Using specialized machines, PAL devices were field-programmable. PALs were available in several variants: one-time programmable devices could not be updated and reused after initial programming; UV erasable versions had a window over the chip die and could be erased for reuse; later versions were flash erasable devices. In most applications, electrically-erasable GALs are now deployed as pin-compatible direct replacements for one-time programmable PALs. One PAL device would typically replace dozens of discrete logic packages, so PALs were used advantageously in many products, such as minicomputers. PALs were not the first commercial programmable logic devices; Signetics had been selling its field-programmable logic array (FPLA) since 1975. However, these devices were completely unfamiliar to most circuit designers and were perceived to be too difficult to use. The FPLA also had a relatively slow maximum operating speed, was expensive, and had a poor reputation for testability; another factor limiting its acceptance was its large package. The project to create the PAL device was managed by John Birkner, and the actual PAL circuit was designed by H. T. Chua. In a previous job, Birkner had developed a 16-bit processor using 80 standard logic devices, and his experience with standard logic led him to believe that user-programmable devices would be more attractive to users if the devices were designed to replace standard logic. This meant that the package sizes had to be typical of the existing devices. MMI intended PALs to be a low-cost part. 
However, they initially had severe manufacturing yield problems and had to sell the devices for over $50; this threatened the viability of the PAL as a commercial product, and MMI was forced to license the product line to National Semiconductor. PALs were later second-sourced by Texas Instruments and Advanced Micro Devices. Early PALs were 20-pin DIP components fabricated in silicon using bipolar transistor technology with one-time programmable titanium-tungsten programming fuses. Later devices were manufactured by Cypress, Lattice Semiconductor and Advanced Micro Devices using CMOS technology. The original 20- and 24-pin PALs were denoted by MMI as medium-scale integration devices. The PAL architecture consists of two components: a programmable logic plane and output logic macrocells
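The programmable logic plane described above computes sums of products: a programmable AND plane feeds a fixed OR gate, and "programming" the device means choosing which input literals stay connected to each product term. This sketch is a simplified behavioural model, not a description of any particular PAL part; the fuse-map format and the XOR example are illustrative assumptions.

```python
def pal_output(inputs, product_terms):
    """Evaluate one sum-of-products output. product_terms is a list of product
    terms; each term is a list of (input_index, inverted) literals that remain
    connected after programming."""
    def product(term):
        # A product term is true when every connected literal is satisfied.
        return all(bool(inputs[i]) != inverted for i, inverted in term)
    # The OR plane is fixed: the output is the OR of all product terms.
    return any(product(term) for term in product_terms)

# Program XOR(a, b) = a AND (NOT b), OR, (NOT a) AND b  -> two product terms.
xor_terms = [[(0, False), (1, True)],
             [(0, True),  (1, False)]]
print(pal_output([1, 0], xor_terms))  # True
print(pal_output([1, 1], xor_terms))  # False
```

A real PAL evaluates many such outputs in parallel, one macrocell per output pin, which is how a single device could replace dozens of discrete gate packages.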
14.
Bi-quinary coded decimal
–
Bi-quinary coded decimal is a numeral encoding scheme used in many abacuses and in some early computers, including the Colossus. The term bi-quinary indicates that the code comprises both a two-state (bi) and a five-state (quinary) component. The encoding resembles that used by many abaci, with four beads indicating either 0 through 4 or 5 through 9 and another bead indicating which of those ranges applies. Several human languages, most notably Khmer and Wolof, also use bi-quinary systems; for example, the Khmer word for 6, pram muoy, literally means "five one". Several different representations of bi-quinary coded decimal have been used by different machines: the two-state component is encoded as one or two bits, and the five-state component is encoded using three to five bits. Some examples are the Roman and Chinese abacuses; the IBM 650, which used seven bits – two bi bits (0 and 5) and five quinary bits (0, 1, 2, 3, 4) – with error checking, since exactly one bi bit and one quinary bit is set in a valid digit; and the Remington Rand 409, which used five bits – one quinary bit for each of 1, 3, 5 and 7, where the fifth bi bit represented 9 if none of the others were on and otherwise added 1 to the value represented by the other quinary bit
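The IBM 650 representation described above is easy to model: seven bits, two "bi" bits worth 0 and 5, and five "quinary" bits worth 0 through 4, with exactly one bit set in each group. That one-hot structure is what makes the error check cheap.

```python
def encode_650(digit):
    """Encode a decimal digit 0-9 as IBM 650 style bi-quinary (7 bits)."""
    bi, quinary = divmod(digit, 5)        # bi selects the 0 or 5 range
    bi_bits = [1 - bi, bi]                # one-hot over {0, 5}
    q_bits = [1 if i == quinary else 0 for i in range(5)]  # one-hot over 0-4
    return bi_bits + q_bits

def decode_650(bits):
    """Decode, rejecting any pattern that fails the one-hot validity check."""
    bi_bits, q_bits = bits[:2], bits[2:]
    assert sum(bi_bits) == 1 and sum(q_bits) == 1, "invalid bi-quinary digit"
    return 5 * bi_bits.index(1) + q_bits.index(1)

print(encode_650(7))              # [0, 1, 0, 0, 1, 0, 0]: bi bit 5 plus quinary 2
print(decode_650(encode_650(7)))  # 7
```

Any single-bit error breaks the exactly-one-bit-per-group invariant, so the hardware could flag corrupted digits immediately.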
15.
Unary numeral system
–
The unary numeral system is the bijective base-1 numeral system. It is the simplest numeral system for representing natural numbers: in order to represent a number N, a symbol representing 1 is repeated N times. For example, the numbers 1, 2, 3, 4, 5 would be represented in this system as 1, 11, 111, 1111, 11111. These numbers should be distinguished from repunits, which are also written as sequences of ones but have their usual decimal numerical interpretation. This system is used in tallying; for example, using the tally mark |, the number 3 is represented as |||. In East Asian cultures, the number three is represented as "三", a character that is drawn with three strokes. Addition and subtraction are particularly simple in the unary system, as they involve little more than string concatenation. The Hamming weight or population count operation, which counts the number of nonzero bits in a sequence of binary values, may also be interpreted as a conversion from unary to binary numbers. However, multiplication is more cumbersome and has often been used as a test case for the design of Turing machines. Compared to standard positional numeral systems, the unary system is inconvenient. It occurs in some decision problem descriptions in theoretical computer science, where a unary input is exponentially larger than the same input written in binary; therefore, while the run-time and space requirement in unary looks better as a function of the input size, it does not represent a more efficient solution. In computational complexity theory, unary numbering is used to distinguish strongly NP-complete problems from problems that are NP-complete but not strongly NP-complete; for such a problem, there exist hard instances in which all parameter values are at most polynomially large. Unary is used as part of data compression algorithms such as Golomb coding. It also forms the basis for the Peano axioms for formalizing arithmetic within mathematical logic, and a form of unary notation called Church encoding is used to represent numbers within lambda calculus
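The arithmetic claims above are easy to demonstrate: in unary, addition really is string concatenation, and converting to an ordinary number is just counting the marks (the population-count view).

```python
def to_unary(n):
    """Represent n by repeating the symbol '1' n times."""
    return "1" * n

def from_unary(s):
    """Convert back by counting marks - a popcount over the string."""
    return len(s)

a, b = to_unary(3), to_unary(4)
print(a + b)               # 1111111: addition is concatenation
print(from_unary(a + b))   # 7
```

The space cost is also visible here: writing N takes N symbols, exponentially more than the roughly log2(N) bits binary needs, which is why unary encodings make complexity bounds look artificially good as a function of input size.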
16.
XOR gate
–
The XOR gate is a digital logic gate that gives a true output when the number of true inputs is odd. An XOR gate implements an exclusive or: a true output results if one, and only one, of the inputs is true; if both inputs are false or both are true, a false output results. XOR represents the inequality function, i.e. the output is true if the inputs are not alike, otherwise the output is false. A way to remember XOR is "one or the other, but not both". XOR can also be viewed as addition modulo 2; as a result, XOR gates are used to implement binary addition in computers. A half adder consists of an XOR gate and an AND gate; other uses include subtractors, comparators, and controlled inverters. The algebraic expressions A·B̅ + A̅·B and (A + B)·(A̅ + B̅) both represent the XOR gate with inputs A and B, and the behavior of XOR is summarized in the truth table shown on the right. There are two symbols for XOR gates: the traditional distinctive-shape symbol and the IEEE rectangular symbol. For more information see Logic Gate Symbols. The logic symbols ⊕ and ⊻ can be used to denote XOR in algebraic expressions, and C-like languages use the symbol ^ to denote bitwise XOR. An XOR gate can be constructed using MOSFETs; here is a diagram of a pass-transistor-logic implementation of an XOR gate. Note: the Rss resistor prevents shunting current directly from A and B to the output. Without it, if the circuit that provides inputs A and B does not have the proper driving capability, the output might not swing rail to rail or might be severely slew-rate limited. The Rss resistor also limits the current from Vdd to ground, which protects the transistors. If a specific type of gate is not available, a circuit that implements the same function can be constructed from other available gates; for example, a circuit implementing an XOR function can be constructed from an XNOR gate followed by a NOT gate. If we consider the expression A·B̅ + A̅·B, we can construct an XOR gate circuit directly using AND, OR, and NOT gates; however, this approach requires five gates of three different kinds.
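The role of XOR as addition modulo 2 can be illustrated with a minimal half-adder sketch in Python, combining an XOR (sum bit) with an AND (carry bit) as described above. The function name is illustrative.

```python
def half_adder(a: int, b: int) -> tuple:
    """Half adder on single bits: XOR gives the sum modulo 2,
    AND gives the carry. Returns (sum_bit, carry_bit)."""
    return a ^ b, a & b

# Truth table: the sum bit matches addition modulo 2.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

The output shows that 1 + 1 yields sum 0 with carry 1, exactly the modulo-2 behavior that lets cascaded XOR/AND stages implement binary addition.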
An XOR gate circuit can be made from four NAND gates in the configuration shown below. In fact, both NAND and NOR gates are so-called universal gates, and any logical function can be constructed from either NAND logic or NOR logic alone. If the four NAND gates below are replaced by NOR gates, the result is an XNOR gate. A strict reading of the definition of exclusive or, or observation of the IEC rectangular symbol, raises the question of correct behaviour with additional inputs: a logic gate could accept three or more inputs and produce a true output if exactly one of those inputs were true. However, XOR is rarely implemented this way in practice; multi-input XOR gates are instead built as cascades of two-input gates, and the result is a circuit that outputs a 1 when the number of 1s at its inputs is odd, and a 0 when the number of incoming 1s is even.
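The four-NAND construction can be verified with a short Python model of the gate wiring: the first NAND feeds both second-stage gates, and a final NAND combines their outputs. This models the logic only, not any particular hardware, and the names are illustrative.

```python
def nand(a: int, b: int) -> int:
    """NAND on single bits: true unless both inputs are true."""
    return 1 - (a & b)

def xor_from_nand(a: int, b: int) -> int:
    """XOR built from exactly four NAND gates:
    the intermediate m = NAND(a, b) feeds two second-stage gates,
    whose outputs are combined by a final NAND."""
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

# Exhaustively check the construction against the ^ operator.
for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nand(a, b) == a ^ b
print("four-NAND XOR matches the truth table")
```

Exhaustive checking is feasible here because a two-input gate has only four input combinations; the same style of check extends to the NOR-based XNOR variant.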
17.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN was devised in 1967 in the United Kingdom by David Whitaker and in 1968 in the United States by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher and 01381 their serial number. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
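The SBN-to-ISBN conversion and the check digit of the example above can be sketched in Python. This is a minimal illustration, assuming the standard ISBN-10 rule: the first nine digits are weighted 10 down to 2, and the check digit makes the total a multiple of 11 (with a value of 10 written as "X"). The function names are illustrative.

```python
def isbn10_check_digit(first9: str) -> str:
    """Compute the ISBN-10 check digit for a 9-digit prefix.
    Weights run 10 down to 2; a result of 10 is written as 'X'."""
    total = sum(int(d) * w for d, w in zip(first9, range(10, 1, -1)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def sbn_to_isbn10(sbn: str) -> str:
    """An SBN becomes an ISBN-10 by prefixing '0'; because the added
    digit carries weight 10 and value 0, the check digit is unchanged."""
    return "0" + sbn

# The example from the text: SBN 340 01381 8 -> ISBN 0-340-01381-8.
print(isbn10_check_digit("034001381"))  # 8
print(sbn_to_isbn10("340013818"))       # 0340013818
```

Prefixing a 0 leaves the check digit valid precisely because the new leading digit contributes 0 × 10 = 0 to the weighted sum.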
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency responsible for a country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
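The 13-digit format mentioned above uses a different check scheme from ISBN-10: digits are weighted alternately 1 and 3, and the check digit makes the total a multiple of 10, matching EAN-13. A minimal sketch, assuming the standard Bookland "978" prefix for converting the earlier 10-digit example (the function names are illustrative):

```python
def isbn13_check_digit(first12: str) -> str:
    """ISBN-13 / EAN-13 check digit: digits are weighted alternately
    1 and 3; the check digit makes the total a multiple of 10."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

def isbn10_to_isbn13(isbn10: str) -> str:
    """Prefix the Bookland '978', drop the old check digit,
    and recompute the check under the ISBN-13 rule."""
    body = "978" + isbn10[:9]
    return body + isbn13_check_digit(body)

print(isbn10_to_isbn13("0340013818"))  # 9780340013816
```

Note that the check digit changes in the conversion (8 becomes 6), because the two formats use different moduli and weights.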
18.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation. Human–computer interaction considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to the study of computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has since seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.