1.
Digital electronics
–
Digital electronics or digital circuits are electronics that handle digital signals rather than the continuous ranges used in analog electronics. Any level within a band of values represents the same information state. In most cases the number of states is two, and they are represented by two voltage bands: one near a reference value, and the other near the supply voltage. These correspond to the false and true values of the Boolean domain respectively. Digital techniques are useful because it is easier to get an electronic device to switch into one of a number of known states than to accurately reproduce a continuous range of values. Digital electronic circuits are made from large assemblies of logic gates. The binary number system was refined by Gottfried Wilhelm Leibniz, who established that arithmetic and logic could be carried out in binary. Digital logic as we know it was the brain-child of George Boole; Boole died young, but his ideas lived on. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits; eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve could be used as an AND logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus. Walther Bothe, inventor of the coincidence circuit, received part of the 1954 Nobel Prize in Physics. Mechanical analog computers started appearing in the first century and were used in the medieval era for astronomical calculations. In World War II, mechanical computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic computers were developed; originally they were the size of a room, consuming as much power as several hundred modern personal computers. The Z3 was a computer designed by Konrad Zuse, finished in 1941.
It was the world's first working programmable, fully automatic digital computer. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents; the vacuum tube, invented in 1904 by John Ambrose Fleming, made this possible, and the bipolar junction transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in computer designs, giving rise to the second generation of computers.
2.
Addition
–
Addition is one of the four basic operations of arithmetic, with the others being subtraction, multiplication and division. The addition of two numbers is the total amount of those quantities combined. For example, a combination of three apples and two apples together makes a total of five apples. This observation is equivalent to the mathematical expression 3 + 2 = 5, i.e. 3 added to 2 is equal to 5. Besides counting fruit, addition can also represent combining other physical objects. In arithmetic, rules for addition involving fractions and negative numbers, amongst others, have been devised; in algebra, addition is studied more abstractly. Addition is commutative, meaning that order does not matter, and it is associative. Repeated addition of 1 is the same as counting, and addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits. Mechanical aids range from the ancient abacus to the modern computer. Addition is written using the plus sign + between the terms, that is, in infix notation. The result is expressed with an equals sign; for example, 3½ = 3 + ½ = 3.5. This notation can cause confusion, since in most other contexts juxtaposition denotes multiplication instead. The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example, ∑_{k=1}^{5} k² = 1² + 2² + 3² + 4² + 5² = 55. The numbers or the objects to be added are collectively referred to as the terms, the addends or the summands.
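The commutative and associative laws, and the capital-sigma example above, can be checked directly; a small Python sketch:

```python
# Sum of squares k^2 for k = 1..5, matching the sigma-notation example.
total = sum(k**2 for k in range(1, 6))
print(total)  # 55, i.e. 1 + 4 + 9 + 16 + 25

# Addition is commutative (order does not matter) and associative
# (grouping does not matter):
assert 3 + 2 == 2 + 3
assert (1 + 2) + 3 == 1 + (2 + 3)
```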
This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend; in fact, during the Renaissance, many authors did not consider the first addend an addend at all. Today, due to the commutative property of addition, augend is rarely used, and both terms are generally called addends. All of the above terminology derives from Latin: using the gerundive suffix -nd results in addend, "thing to be added". Likewise, from augere, "to increase", one gets augend, "thing to be increased". Sum and summand derive from the Latin noun summa, "the highest, the top", and the associated verb summare.
3.
Computer
–
A computer is a device that can be instructed to carry out an arbitrary set of arithmetic or logical operations automatically. The ability of computers to follow a sequence of operations, called a program, enables them to perform a wide range of tasks; such computers are used as control systems for a very wide variety of industrial and consumer devices. The Internet is run on computers and it connects millions of other computers. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century, and the first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers has increased continuously and dramatically since then. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit, and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions; they allow information to be retrieved from an external source and the results of operations to be saved and retrieved. Originally, the word computer referred to a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century; from the end of the 19th century it began to take on its more familiar meaning, a machine that carries out computations. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning "one who calculates", and states that the use of the term to mean "calculating machine" is from 1897.
The use of the term in its modern sense dates from 1945 under this name, and in a theoretical sense from 1937, as the Turing machine. Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick; later record-keeping aids throughout the Fertile Crescent included calculi which represented counts of items, probably livestock or grain, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks; the Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table. The Antikythera mechanism is believed to be the earliest mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC.
4.
Microprocessor
–
A microprocessor is a computer processor which incorporates the functions of a computer's central processing unit on a single integrated circuit, or at most a few integrated circuits. Microprocessors contain both combinational logic and sequential digital logic, and they operate on numbers and symbols represented in the binary numeral system. The integration of a whole CPU onto a single chip or a few chips greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated processes, resulting in a low per-unit cost. Single-chip processors also increase reliability, as there are fewer electrical connections that can fail. As microprocessor designs improve, the cost of manufacturing a chip generally stays the same. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits; microprocessors combined this into one or a few large-scale ICs. The internal arrangement of a microprocessor varies depending on the age of the design and the intended purposes of the microprocessor, and advancing technology makes more complex and powerful chips feasible to manufacture. A minimal hypothetical microprocessor might only include an arithmetic logic unit and a control logic section. The ALU performs arithmetic operations such as addition and subtraction, and logic operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation. The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction; a single operation code might affect many individual data paths, registers, and other elements of the processor.
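As a hypothetical illustration of the minimal microprocessor just described (an ALU plus control logic that sequences instruction codes), here is a toy Python sketch; the opcode names and the sample program are invented for the example:

```python
# A toy sketch of a control loop: fetch each instruction code from a
# "program", decode it, and have a minimal ALU carry it out.
def alu(op, a, b):
    if op == "ADD": return a + b
    if op == "SUB": return a - b
    if op == "AND": return a & b
    if op == "OR":  return a | b
    raise ValueError("unknown opcode: " + op)

# The "control logic": step through instruction codes in sequence.
program = [("ADD", 2, 3), ("AND", 0b1100, 0b1010), ("OR", 0b0001, 0b0100)]
for op, a, b in program:
    print(op, "->", alu(op, a, b))  # ADD -> 5, AND -> 8, OR -> 5
```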
As integrated circuit technology advanced, it was feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger: allowing more transistors on a chip allowed word sizes to increase from 4- and 8-bit words up to today's 64-bit words. Additional features were added to the architecture; more on-chip registers sped up programs. Floating-point arithmetic, for example, was not available on 8-bit microprocessors; integration of the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating-point calculations. Occasionally, physical limitations of integrated circuits made such practices as a bit-slice approach necessary: instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. With the ability to put large numbers of transistors on one chip, it became feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory, and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, except in the recent past. A microprocessor is a general-purpose system, but several specialized processing devices have followed from the technology: a digital signal processor is specialized for signal processing, and graphics processing units are processors designed primarily for realtime rendering of 3D images.
5.
Arithmetic logic unit
–
An arithmetic logic unit is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit, which operates on floating point numbers. An ALU is a fundamental building block of many types of computing circuits, including the central processing unit of computers, FPUs, and graphics processing units; a single CPU, FPU or GPU may contain multiple ALUs. In many designs, the ALU also exchanges additional information with a status register, which relates to the result of the current or previous operations. An ALU has a variety of input and output nets, which are the electrical conductors used to convey digital signals between the ALU and external circuitry. When an ALU is operating, external circuits apply signals to the ALU inputs and, in response, the ALU produces and conveys signals to external circuitry via its outputs. A basic ALU has three parallel data buses consisting of two input operands and a result output. Each data bus is a group of signals that conveys one binary integer number; typically, the A, B and Y bus widths are identical and match the native word size of the external circuitry. An opcode input selects the operation the ALU is to perform; the opcode size is related to the number of different operations the ALU can perform. For example, a four-bit opcode can specify up to sixteen different ALU operations. Generally, an ALU opcode is not the same as a machine language opcode. The status outputs are various individual signals that convey supplemental information about the result of an ALU operation. These outputs are usually stored in registers so they can be used in future ALU operations or for controlling conditional branching; the collection of bit registers that store the status outputs is often treated as a single, multi-bit register. Common status outputs include: zero, which indicates all bits of the output are logic zero; negative, which indicates the result of an operation is negative; and overflow, which indicates the result of an operation has exceeded the numeric range of the output.
Parity, which indicates whether an even or odd number of bits in the output are logic one, is another common status output. The status input allows additional information to be made available to the ALU when performing an operation; typically, this is a single carry-in bit that is the stored carry-out from a previous ALU operation. An ALU is a combinational logic circuit, meaning that its outputs will change asynchronously in response to input changes. In general, external circuitry controls an ALU by applying signals to its inputs. At the same time, the CPU also routes the ALU result output to a destination register that will receive the sum. The ALU's input signals, which are held stable until the next clock, are allowed to propagate through the ALU. When the next clock arrives, the destination register stores the ALU result, and since the ALU operation has completed, the inputs may be changed for the next operation. A number of basic arithmetic and bitwise logic functions are commonly supported by ALUs.
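As a concrete sketch of these status outputs, here is a hypothetical 8-bit ALU addition in Python; the function name and flag conventions are illustrative, not tied to any particular processor:

```python
# Hypothetical 8-bit ALU addition that also computes the status flags
# described above: carry, zero, negative, overflow, and parity.
def add8(a, b, carry_in=0):
    full = a + b + carry_in
    result = full & 0xFF                      # keep the low 8 bits
    flags = {
        "carry": full > 0xFF,                 # carry-out from the top bit
        "zero": result == 0,                  # all output bits are logic 0
        "negative": bool(result & 0x80),      # sign bit of the result
        # signed overflow: the operands share a sign that the result lacks
        "overflow": bool(~(a ^ b) & (a ^ result) & 0x80),
        "parity": bin(result).count("1") % 2 == 0,  # even count of 1 bits
    }
    return result, flags

result, flags = add8(0x7F, 0x01)  # 127 + 1 exceeds the signed 8-bit range
print(hex(result), flags["overflow"], flags["negative"])  # 0x80 True True
```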
6.
Address space
–
In computing, an address space defines a range of discrete addresses, each of which may correspond to a network host, peripheral device, disk sector, memory cell or other logical or physical entity. For software programs to save and retrieve stored data, each unit of data must have an address where it can be located, or else the program will be unable to find it. The number of addresses available will depend on the underlying address structure. Address spaces are created by combining enough uniquely identified qualifiers to make an address unambiguous. For a person's physical address, the address space would be a combination of locations, such as a neighborhood, town, city, or country. Some elements of a space may be the same, but if any element in the address is different, then the addresses will reference different entities. An address space usually provides a partitioning into several regions according to the structure it has: in the simplest case the addresses form a linear order, as for memory addresses, while a nested domain hierarchy appears in the case of a directed ordered tree, as for the Domain Name System or a directory structure. Another common feature of address spaces are mappings and translations, often forming numerous layers. This usually means that some higher-level address must be translated to lower-level ones in some way. For example, for a disk drive connected via Parallel ATA, each logical block address must be converted to a logical cylinder-head-sector address due to the interface's historical shortcomings; it is converted back to LBA by the controller and then, finally, to a physical cylinder, head, and sector. The Domain Name System maps its names to network-specific addresses, which in turn may be mapped to link layer network addresses via the Address Resolution Protocol. Also, network address translation may occur on the edge of different IP spaces, such as a local area network and the Internet.
An iconic example of virtual-to-physical address translation is virtual memory, where different pages of virtual address space map either to a page file or to main memory address space. It is possible that several different virtual addresses all refer to one physical address; it is also possible that a virtual address maps to zero, one, or more than one physical address. See also: Linear address space, Name space, Virtualization.
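The CHS/LBA translation mentioned above can be sketched in Python; the geometry constants here (16 heads, 63 sectors per track) are illustrative assumptions, and sector numbers are 1-based as in the CHS convention:

```python
# Illustrative logical CHS <-> LBA address translation with an assumed
# drive geometry. Real drives report their own geometry.
HEADS, SECTORS = 16, 63   # heads per cylinder, sectors per track

def chs_to_lba(c, h, s):
    # Sectors are numbered from 1, hence the (s - 1).
    return (c * HEADS + h) * SECTORS + (s - 1)

def lba_to_chs(lba):
    c, rem = divmod(lba, HEADS * SECTORS)
    h, s = divmod(rem, SECTORS)
    return c, h, s + 1

print(chs_to_lba(0, 0, 1))               # 0, the very first sector
print(lba_to_chs(chs_to_lba(2, 5, 10)))  # (2, 5, 10), a clean round trip
```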
7.
Binary-coded decimal
–
In computing and electronic systems, binary-coded decimal is a class of binary encodings of decimal numbers where each decimal digit is represented by a fixed number of bits, usually four or eight. Sometimes, special bit patterns are used for a sign or for other indications. The precise 4-bit encoding may vary, however; for technical reasons, the ten states representing a BCD decimal digit are sometimes called tetrades, with the unused don't-care states named pseudo-tetrads or pseudo-decimal digits. BCD's principal drawback is an increase in the complexity of the circuits needed to implement basic arithmetic. BCD was used in many early computers, and is implemented in the instruction set of machines such as the IBM System/360 series and its descendants. BCD takes advantage of the fact that any one decimal numeral can be represented by a four-bit pattern. The most obvious way of encoding digits is natural BCD, where each decimal digit is represented by its corresponding four-bit binary value, as shown in the following table; this is also called 8421 encoding. Other encodings are also used, including so-called 4221 and 7421, named after the weighting used for the bits, and excess-3. For example, the BCD digit 6, 0110b in 8421 notation, is 1100b in 4221 and 0110b in 7421. Packed: two numerals are encoded into a single byte, with one numeral in the least significant nibble and the other numeral in the most significant nibble. To represent numbers larger than the range of a single byte, any number of contiguous bytes may be used. Also note how packed BCD is more efficient in storage usage as compared to unpacked BCD: encoding the same number in unpacked format would consume twice the storage. Shifting and masking operations are used to pack or unpack a packed BCD digit; other logical operations are used to convert a numeral to its equivalent bit pattern or reverse the process.
BCD is very common in systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic. By employing BCD, the manipulation of data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the reality of display hardware; a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary. Most pocket calculators do all their calculations in BCD. The same argument applies when hardware of this type uses an embedded microcontroller or other small processor: often, smaller code results when representing numbers internally in BCD format. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities. In packed BCD, each of the two nibbles of each byte represents a decimal digit. Packed BCD has been in use since at least the 1960s and is implemented in all IBM mainframe hardware since then. Most implementations are big endian, i.e. with the more significant digit in the upper half of each byte.
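A minimal Python sketch of packed BCD, using the shifting and masking described above; the helper names are illustrative:

```python
# Packed BCD: two decimal digits per byte, packed and unpacked with
# shifts and masks, more significant digit in the upper nibble.
def pack_bcd(n):
    """Encode a non-negative integer as packed-BCD bytes (big endian)."""
    digits = [int(d) for d in str(n)]
    if len(digits) % 2:                    # pad to an even digit count
        digits.insert(0, 0)
    return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

def unpack_bcd(data):
    """Decode packed-BCD bytes back into an integer."""
    n = 0
    for byte in data:
        n = n * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    return n

encoded = pack_bcd(1234)
print(encoded.hex())        # '1234': each nibble holds one decimal digit
print(unpack_bcd(encoded))  # 1234
```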
8.
Binary number
–
The base-2 system is a positional notation with a radix of 2. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by almost all modern computers. Each digit is referred to as a bit. The modern binary number system was devised by Gottfried Leibniz in 1679 and appears in his article Explication de l'Arithmétique Binaire. Systems related to binary numbers have appeared earlier in multiple cultures, including ancient Egypt, China and India; Leibniz was specifically inspired by the Chinese I Ching. The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions and Horus-Eye fractions, and the method used for ancient Egyptian multiplication is also closely related to binary numbers. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus. The I Ching dates from the 9th century BC in China; its binary notation is used to interpret its quaternary divination technique and is based on the taoistic duality of yin and yang. Eight trigrams and a set of 64 hexagrams, analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou Dynasty of ancient China. The Song Dynasty scholar Shao Yong rearranged the hexagrams in a format that resembles modern binary numbers. The Indian scholar Pingala developed a binary system for describing prosody, using binary numbers in the form of short and long syllables; Pingala's Hindu classic titled Chandaḥśāstra describes the formation of a matrix in order to give a unique value to each meter. The binary representations in Pingala's system increase towards the right. The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450. Slit drums with binary tones are used to encode messages across Africa, and sets of binary combinations similar to the I Ching have also been used in traditional African divination systems such as Ifá, as well as in medieval Western geomancy.
The base-2 system utilized in geomancy had long been applied in sub-Saharan Africa. Leibniz's system uses 0 and 1, like the modern binary numeral system. Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own beliefs as a Christian. Binary numerals were central to Leibniz's theology; he believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing: "[A concept that] is not easy to impart to the pagans, is the creation ex nihilo through God's almighty power." In 1854, British mathematician George Boole published a paper detailing an algebraic system of logic that would become known as Boolean algebra.
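The positional interpretation of a base-2 numeral, each digit weighted by a power of two, can be illustrated with a short Python sketch:

```python
# Evaluate a binary numeral positionally: each bit's weight is a power
# of 2, counted from the least significant (rightmost) digit.
def from_binary(digits):
    return sum(bit * 2**i for i, bit in enumerate(reversed(digits)))

print(from_binary([1, 0, 1, 1]))  # 11 = 1*8 + 0*4 + 1*2 + 1*1

# Python's built-ins perform the same conversions:
print(int("1011", 2))  # 11
print(bin(11))         # '0b1011'
```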
9.
Negative number
–
In mathematics, a negative number is a real number that is less than zero. If positive represents movement to the right, negative represents movement to the left; if positive represents above sea level, then negative represents below sea level; if positive represents a deposit, negative represents a withdrawal. Negative numbers are often used to represent the magnitude of a loss or deficiency; a debt that is owed may be thought of as a negative asset. If a quantity may have either of two opposite senses, then one may choose to distinguish between those senses, perhaps arbitrarily, as positive and negative. In the medical context of fighting a tumor, an expansion could be thought of as a negative shrinkage. Negative numbers are used to describe values on a scale that goes below zero, such as the Celsius and Fahrenheit scales for temperature. The laws of arithmetic for negative numbers ensure that the common idea of an opposite is reflected in arithmetic. For example, −(−3) = 3, because the opposite of an opposite is the original thing. Negative numbers are usually written with a minus sign in front; for example, −3 represents a quantity with a magnitude of three, and is pronounced "minus three" or "negative three". To help tell the difference between a subtraction operation and a negative number, occasionally the negative sign is placed slightly higher than the minus sign. Conversely, a number that is greater than zero is called positive; the positivity of a number may be emphasized by placing a plus sign before it, e.g. +3. In general, the negativity or positivity of a number is referred to as its sign. Every real number other than zero is either positive or negative. The positive whole numbers are referred to as natural numbers, while the positive and negative whole numbers, together with zero, are referred to as integers. In bookkeeping, amounts owed are often represented by red numbers, or a number in parentheses. Liu Hui established rules for adding and subtracting negative numbers.
By the 7th century, Indian mathematicians such as Brahmagupta were describing the use of negative numbers. Islamic mathematicians further developed the rules of subtracting and multiplying negative numbers and solved problems with negative coefficients. Western mathematicians accepted the idea of negative numbers by the 17th century; prior to the concept of negative numbers, mathematicians such as Diophantus considered negative solutions to problems false. Negative numbers can be thought of as resulting from the subtraction of a larger number from a smaller one. For example, negative three is the result of subtracting three from zero: 0 − 3 = −3. In general, the subtraction of a larger number from a smaller yields a negative result, with the magnitude of the result being the difference between the two numbers.
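The arithmetic of opposites described above can be checked directly in Python:

```python
# The opposite of an opposite is the original thing:
assert -(-3) == 3

# Subtracting a larger number from a smaller yields a negative result,
# whose magnitude is the difference of the two numbers:
assert 0 - 3 == -3
assert 5 - 8 == -(8 - 5)

print(abs(-3))  # 3, the magnitude of negative three
```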
10.
Integer overflow
–
In computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is outside the range that can be represented with a given number of bits. The most common result of an overflow is that the least significant representable bits of the result are stored; the result is said to wrap around the maximum. An overflow condition gives incorrect results and, particularly if the possibility has not been anticipated, can compromise a program's reliability and security. The register width of a processor determines the range of values that can be represented. In particular, multiplying or adding two integers may result in a value that is unexpectedly small, and subtracting from a small integer may cause a wrap to a large positive value. If the variable has a signed integer type, a program may make the assumption that the variable always contains a positive value; an integer overflow can cause the value to wrap and become negative, violating that assumption. Most computers have two dedicated processor flags to check for overflow conditions. The carry flag is set when the result of an addition or subtraction, considering the operands and result as unsigned numbers, does not fit in the given number of bits; this indicates an overflow with a carry or borrow from the most significant bit. The overflow flag is set when an overflow has occurred on signed numbers, meaning the result represented in two's complement form would not fit in the given number of bits. Handling: if it is anticipated that overflow may occur, and detected when it happens, it can be handled; CPUs generally have a way of detecting this to support addition of numbers larger than their register size, typically using a status bit. Propagation: if a value is too large to be stored, it can be assigned a special value indicating that overflow has occurred; this is useful so that the problem can be checked for once at the end of a long calculation rather than after each step. This is often supported in floating-point hardware (FPUs). Run-time overflow detection implementations such as UBSan are also available for C compilers. Languages with native support for arbitrary-precision arithmetic avoid overflow altogether, and using such languages may thus be helpful to mitigate this issue; however, in some such languages, situations are still possible where an integer overflow can occur.
An example is explicit optimization of a code path which is considered a bottleneck by the profiler. In the case of Common Lisp, this is possible by using a declaration to type-annotate a variable to a machine-size word. In Java 8, there are overloaded methods, for example Math#addExact, which throw an exception on overflow. In computer graphics or signal processing, it is typical to work on data that ranges from 0 to 1 or from −1 to 1. An example of this is an image where 0 represents black and 1 represents white. One operation that one may want to support is brightening the image by multiplying every pixel by a constant. Unanticipated arithmetic overflow is a fairly common cause of program errors; such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for large input data sets.
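Since Python's own integers are arbitrary-precision and never overflow, the wrap-around behaviour described above has to be imitated by masking results to a fixed width; a sketch:

```python
# Imitate the wrap-around of a fixed-width 32-bit register by masking.
MASK = 0xFFFFFFFF  # keep only the low 32 bits

def to_signed32(x):
    """Reinterpret the low 32 bits of x as a signed two's-complement value."""
    x &= MASK
    return x - 0x100000000 if x & 0x80000000 else x

big = to_signed32(2**31 - 1)  # largest positive signed 32-bit value
print(to_signed32(big + 1))   # -2147483648: wrapped around to negative
print((0 - 1) & MASK)         # 4294967295: unsigned wrap to the maximum
```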
11.
XOR gate
–
The XOR gate is a digital logic gate that gives a true output when the number of true inputs is odd. An XOR gate implements an exclusive or: a true output results if one, and only one, of the inputs is true; if both inputs are false or both are true, a false output results. XOR represents the inequality function, i.e. the output is true if the inputs are not alike, otherwise the output is false. A way to remember XOR is "one or the other but not both". XOR can also be viewed as addition modulo 2; as a result, XOR gates are used to implement binary addition in computers. A half adder consists of an XOR gate and an AND gate. Other uses include subtractors, comparators, and controlled inverters. The algebraic expressions A ⋅ B̅ + A̅ ⋅ B and (A + B) ⋅ (A̅ + B̅) both represent the XOR gate with inputs A and B; the behavior of XOR is summarized in the truth table shown on the right. There are two symbols for XOR gates, the traditional symbol and the IEEE symbol; for more information see Logic Gate Symbols. The logic symbols ⊕ and ⊻ can be used to denote XOR in algebraic expressions, and C-like languages use the symbol ^ to denote bitwise XOR. An XOR gate can be constructed using MOSFETs; here is a diagram of a pass transistor logic implementation of an XOR gate. Note: the Rss resistor prevents shunting current directly from A and B to the output. Without it, if the circuit that provides inputs A and B does not have the proper driving capability, the output might not swing rail to rail or might be severely slew-rate limited. The Rss resistor also limits the current from Vdd to ground, which protects the transistors. If a specific type of gate is not available, a circuit that implements the same function can be constructed from other available gates. A circuit implementing an XOR function can be constructed from an XNOR gate followed by a NOT gate. If we consider the expression A ⋅ B̅ + A̅ ⋅ B, we can construct an XOR gate circuit directly using AND, OR and NOT gates; however, this approach requires five gates of three different kinds.
An XOR gate circuit can be made from four NAND gates in the configuration shown below. In fact, both NAND and NOR gates are so-called universal gates, and any logical function can be constructed from either NAND logic or NOR logic alone; if the four NAND gates are replaced by NOR gates, the result is an XNOR gate. A strict reading of the definition of exclusive or, or observation of the IEC rectangular symbol, raises the question of correct behaviour with additional inputs: a gate with three or more inputs might produce a true output only if exactly one of those inputs were true. However, it is rarely implemented this way in practice; instead, subsequent inputs are applied through a cascade of two-input XOR operations. The result is a circuit that outputs a 1 when the number of 1s at its inputs is odd, and a 0 when the number of incoming 1s is even.
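The half adder and the odd-parity behaviour described above can be sketched in Python, using ^ for XOR:

```python
# A half adder built from one XOR gate (sum) and one AND gate (carry),
# as described above.
def half_adder(a, b):
    return a ^ b, a & b  # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))

# Cascaded XOR acts as a parity function: the output is 1 for an odd
# number of 1 inputs.
print(1 ^ 1 ^ 1)  # 1: three true inputs, an odd count
print(1 ^ 1 ^ 0)  # 0: two true inputs, an even count
```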
12.
AND gate
–
The AND gate is a basic digital logic gate that implements logical conjunction; it behaves according to the truth table to the right. A HIGH output results only if both the inputs to the AND gate are HIGH; if neither or only one input is HIGH, a LOW output results. In another sense, the function of AND effectively finds the minimum between two binary digits, just as the OR function finds the maximum. Therefore, the output is always 0 except when all the inputs are 1. There are three symbols for AND gates: the American symbol and the IEC symbol, as well as the deprecated DIN symbol; for more information see Logic Gate Symbols. The AND gate with inputs A and B and output C implements the logical expression C = A ⋅ B. An AND gate is usually designed using N-channel or P-channel MOSFETs; the digital inputs a and b cause the output F to have the same result as the AND function. If no specific AND gates are available, one can be made from NAND or NOR gates, because NAND and NOR gates are considered universal gates. AND gates are available in IC packages; the 7408 IC is a well-known quad 2-input AND gate, containing four independent gates each of which performs the logic AND function. See also: OR gate, NOT gate, NAND gate, NOR gate, XOR gate, XNOR gate, IMPLY gate, Boolean algebra, Logic gate.
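The minimum/maximum view of AND and OR mentioned above can be checked directly in Python:

```python
# For binary digits, AND is the minimum of the inputs and OR is the
# maximum, so AND outputs 1 only when all inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        assert (a & b) == min(a, b)
        assert (a | b) == max(a, b)

print(1 & 1, 1 & 0, 0 & 0)  # 1 0 0
```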
13.
Transistor
–
A transistor is a semiconductor device used to amplify or switch electronic signals and electrical power. It is composed of semiconductor material, usually with at least three terminals for connection to an external circuit; a voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Today, some transistors are packaged individually, but many more are found embedded in integrated circuits. The transistor is the fundamental building block of modern electronic devices. Julius Edgar Lilienfeld patented a field-effect transistor in 1926, but it was not possible to construct a working device at that time. The first practically implemented device was a point-contact transistor invented in 1947 by American physicists John Bardeen and Walter Brattain. The transistor revolutionized the field of electronics and paved the way for smaller and cheaper radios, calculators, and computers, among other things. The transistor is on the list of IEEE milestones in electronics, and Bardeen, Brattain, and Shockley shared the 1956 Nobel Prize in Physics for its invention. The thermionic triode, a vacuum tube invented in 1907, enabled amplified radio technology and long-distance telephony; the triode, however, consumed a substantial amount of power. Physicist Julius Edgar Lilienfeld filed a patent for a transistor in Canada in 1925; he also filed patents in the United States in 1926 and 1928. However, Lilienfeld did not publish any research articles about his devices, nor did his patents cite any examples of a working prototype. In 1934, German inventor Oskar Heil patented a similar device in Europe. Solid State Physics Group leader William Shockley saw the potential in this line of work; the term transistor was coined by John R. Pierce as a contraction of the term transresistance. 
Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the first point-contact transistor. Mataré had previous experience in developing crystal rectifiers from silicon and germanium in the German radar effort during World War II; using this knowledge, he began researching the phenomenon of interference in 1947. Realizing that Bell Labs scientists had already invented the transistor before them, the company rushed to get its "transistron" into production for amplified use in France's telephone network. The first bipolar junction transistors were invented by Bell Labs' William Shockley. On April 12, 1950, Bell Labs chemists Gordon Teal and Morgan Sparks successfully produced a working bipolar NPN junction amplifying germanium transistor; Bell Labs announced this new "sandwich" transistor discovery in a press release on July 4, 1951. The first high-frequency transistor was the surface-barrier germanium transistor developed by Philco in 1953; these were made by etching depressions into an N-type germanium base from both sides with jets of indium sulfate until it was a few ten-thousandths of an inch thick
14.
OR gate
–
The OR gate is a digital logic gate that implements logical disjunction; it behaves according to the truth table to the right. A HIGH output results if one or both inputs to the gate are HIGH; if neither input is HIGH, a LOW output results. In another sense, the OR function effectively finds the maximum of two digits, just as the complementary AND function finds the minimum. There are two symbols for OR gates: the American symbol and the IEC symbol, as well as the deprecated DIN symbol; for more information see logic gate symbols. OR gates are basic logic gates, and as such they are available in TTL and CMOS. The standard 4000-series CMOS IC is the 4071, which includes four independent two-input OR gates. The ancestral TTL device is the 7432; there are many offshoots of the original 7432 OR gate, all having the same pinout but different internal architecture, allowing them to operate in different voltage ranges and/or at higher speeds. In addition to the standard 2-input OR gate, 3- and 4-input OR gates are also available. If no specific OR gates are available, one can be made from NAND or NOR gates in the configuration shown in the image below; any logic gate can be made from a combination of NAND or NOR gates. With active-low open-collector logic outputs, as used for control signals in many circuits, an OR function can be produced by wiring together several outputs. This arrangement is called a wired OR; this implementation of an OR function is typically also found in integrated circuits built on N-type-only or P-type-only transistor processes. See also: AND gate, NOT gate, NAND gate, NOR gate, XOR gate, XNOR gate, Boolean algebra, Logic gate
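The claim that OR can be made from NAND gates follows from De Morgan's law, a OR b = NOT(NOT a AND NOT b), which takes three NAND gates (two wired as inverters). A minimal sketch, with illustrative names:

```python
def nand(a, b):
    """Universal gate: NOT (a AND b), on bits 0/1."""
    return 1 - (a & b)

def or_gate(a, b):
    """OR from three NAND gates, via De Morgan's law:
    a OR b = NAND(NOT a, NOT b), with NOT x = NAND(x, x)."""
    return nand(nand(a, a), nand(b, b))
```

As the text notes, OR behaves as the maximum of its inputs: `or_gate(a, b) == max(a, b)` holds for every binary input pair.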
15.
Propagation delay
–
Propagation delay is a technical term that can have a different meaning depending on the context; it can relate to networking, electronics, or physics. In general, it is the length of time taken for the quantity of interest to reach its destination. In computer networks, propagation delay is the amount of time it takes for the head of the signal to travel from the sender to the receiver; it can be computed as the ratio between the link length and the propagation speed over the specific medium. Propagation delay is equal to d / s, where d is the distance and s is the propagation speed. In wireless communication, s = c, i.e. the speed of light; in copper wire, the speed s generally ranges from 0.59c to 0.77c. This delay is a major obstacle in the development of high-speed computers and is called the interconnect bottleneck in IC systems. On manufacturers' datasheets, propagation delay usually refers to the time required for the output to reach 50% of its final output level when the input changes to 50% of its final input level. Reducing gate delays in digital circuits allows them to process data at a faster rate. Determining the delay of a combinational circuit requires identifying the longest path of propagation delays from input to output. The difference in delays of logic elements is the major contributor to glitches in asynchronous circuits as a result of race conditions. The principle of logical effort uses propagation delays to compare designs implementing the same logical statement. Propagation delay increases with operating temperature, as the resistance of conductive materials tends to increase with temperature. Marginal increases in supply voltage can increase propagation delay, since the upper switching threshold voltage, VIH, increases with it. Increases in output load capacitance, often from placing increased fan-out loads on a wire, also increase propagation delay; if the output of a logic gate is connected to a long trace or used to drive many other gates, the propagation delay increases substantially. 
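The network-delay formula t = d / s above is simple enough to compute directly; the sketch below uses the velocity factors quoted in the text (the function name and example distance are illustrative):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def propagation_delay(distance_m, velocity_factor=1.0):
    """t = d / s, with s = velocity_factor * c.

    velocity_factor is 1.0 for free space (wireless) and
    roughly 0.59 to 0.77 for copper wire, per the text above.
    """
    return distance_m / (velocity_factor * C)

# 100 km of copper at 0.66c: roughly half a millisecond.
delay = propagation_delay(100_000, velocity_factor=0.66)
```

At these scales the medium matters: the same 100 km link over free space would take about a third less time than over copper at 0.66c.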
As a rule of thumb, wires have a propagation delay of roughly 1 ns for every 6 inches of length. Logic gates can have propagation delays ranging from more than 10 ns down to the picosecond range, depending on the technology used. In physics, particularly in electromagnetism, the propagation delay is the length of time it takes for a signal to travel to its destination; for example, in the case of an electric signal, it is the time taken for the signal to travel through a wire. See also: Contamination delay, Delay calculation, Latency, Transmission delay
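The earlier observation that a combinational circuit's delay is the longest input-to-output path can be sketched as a small recursive traversal. This is a simplified model with hypothetical gate delays, not a production timing analyzer:

```python
def critical_path_delay(gates, outputs):
    """Worst-case arrival time over the given outputs.

    gates maps a gate name to (delay, [input names]); names not
    in the map are primary inputs, assumed to arrive at t = 0.
    """
    memo = {}
    def arrival(node):
        if node not in gates:  # primary input
            return 0.0
        if node not in memo:
            delay, inputs = gates[node]
            memo[node] = delay + max(map(arrival, inputs))
        return memo[node]
    return max(map(arrival, outputs))

# Hypothetical netlist: g3 combines g1 and g2, which read a and b.
gates = {
    "g1": (2.0, ["a", "b"]),
    "g2": (3.0, ["b"]),
    "g3": (1.5, ["g1", "g2"]),
}
# Longest path is b -> g2 -> g3: 3.0 + 1.5 = 4.5 time units.
```

Memoizing arrival times keeps the traversal linear in circuit size, which matters because real netlists can have exponentially many distinct paths.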
16.
AND-OR-Invert
–
AND-OR-Invert (AOI) logic and AOI gates are two-level compound logic functions constructed from the combination of one or more AND gates followed by a NOR gate. Construction of AOI cells is particularly efficient using CMOS technology, where the transistor count compares favourably with the same construction using NAND logic or NOR logic. The complement of AOI logic is OR-AND-Invert (OAI) logic, where the OR gates precede a NAND gate. AOI gates perform one or more AND operations followed by an OR operation and then an inversion. AOI and OAI gates can be implemented readily in CMOS circuitry. AOI gates are particularly advantageous in that the number of transistors is less than if the AND, OR, and NOT functions were implemented separately. This results in increased speed, reduced power, and smaller area; for example, a 2-1 AOI gate can be constructed with 6 transistors in CMOS, compared to 10 transistors using a 2-input NAND gate, an inverter, and a 2-input NOR gate. In NMOS logic, only half of the CMOS circuit is used, in combination with a load device. AOI gates are similarly efficient in transistor–transistor logic; the TTL 7400 line included a number of AOI gate parts, such as the 7451 dual 2-wide 2-input AND-OR-invert gate and the 7464 4-2-3-2-input AND-OR-invert gate.
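The 2-1 AOI gate mentioned above computes an AND of two inputs, ORs in a third, and inverts the result. A behavioural sketch (the function name is illustrative; this models the logic, not the 6-transistor CMOS network itself):

```python
def aoi21(a, b, c):
    """2-1 AND-OR-Invert: F = NOT((a AND b) OR c), on bits 0/1.

    Realized in CMOS as a single 6-transistor compound gate,
    versus 10 transistors for discrete AND/OR/NOT stages.
    """
    return 1 - ((a & b) | c)
```

The output is low whenever c is high or both a and b are high, matching the AND-then-NOR structure described in the text.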