The electron is a subatomic particle, symbol e− or β−, whose electric charge is negative one elementary charge. Electrons belong to the first generation of the lepton particle family and are thought to be elementary particles because they have no known components or substructure; the electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ; as it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe experimentally than those of other particles such as neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions.
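The mass dependence of the de Broglie wavelength mentioned above can be made concrete with a small illustrative calculation (not from the article; constant names are my own). For a non-relativistic particle with kinetic energy E, the wavelength is λ = h/p with p = √(2mE), so at equal energy the lighter electron has a longer wavelength than the proton by a factor of √(m_p/m_e) ≈ 43:

```python
# Illustrative sketch: de Broglie wavelength lambda = h / p,
# with p = sqrt(2 m E) for a non-relativistic particle.
import math

H = 6.62607015e-34           # Planck constant, J*s
M_ELECTRON = 9.1093837e-31   # electron mass, kg
M_PROTON = 1.6726219e-27     # proton mass, kg
EV = 1.602176634e-19         # one electronvolt in joules

def de_broglie_wavelength(mass_kg: float, energy_j: float) -> float:
    """Wavelength of a non-relativistic particle with kinetic energy E."""
    momentum = math.sqrt(2.0 * mass_kg * energy_j)
    return H / momentum

energy = 100 * EV  # 100 eV, typical of electron diffraction experiments
lam_e = de_broglie_wavelength(M_ELECTRON, energy)
lam_p = de_broglie_wavelength(M_PROTON, energy)
# At the same kinetic energy the electron's wavelength is longer
# by sqrt(m_p / m_e), roughly a factor of 43.
```

A 100 eV electron comes out at roughly 0.12 nm, comparable to atomic spacings, which is why electron diffraction from crystals is readily observed.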
Since an electron has charge, it has a surrounding electric field, and if that electron is moving relative to an observer, the observer will also measure a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications such as electronics, cathode ray tubes, electron microscopes, radiation therapy, gaseous ionization detectors and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics; the Coulomb force between the positive protons within atomic nuclei and the negative electrons outside them binds the two together into atoms.
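The Lorentz force law referred to above is F = q(E + v × B). As a minimal sketch (the function names and field values are my own illustration), here is that formula applied to an electron moving through a magnetic field:

```python
# Sketch of the Lorentz force law F = q (E + v x B) on an electron.
ELECTRON_CHARGE = -1.602176634e-19  # coulombs

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, e_field, velocity, b_field):
    """Force in newtons on charge q (C) in fields E (V/m) and B (T)."""
    v_cross_b = cross(velocity, b_field)
    return tuple(q * (e + vb) for e, vb in zip(e_field, v_cross_b))

# Electron moving along +x at 1e6 m/s through a 1 T field along +z:
# v x B points along -y, and the negative charge flips it to +y.
f = lorentz_force(ELECTRON_CHARGE, (0.0, 0.0, 0.0), (1e6, 0.0, 0.0), (0.0, 0.0, 1.0))
```

The sign flip from the negative charge is exactly why electrons and positrons curve in opposite directions in the same magnetic field.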
Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897. Electrons can participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere; the antiparticle of the electron is called the positron. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electrica to refer to those substances with a property similar to that of amber, which attract small objects after being rubbed. Both electric and electricity are derived from the Latin ēlectrum, which came from the Greek word for amber, ἤλεκτρον. In the early 1700s, Francis Hauksbee and French chemist Charles François du Fay independently discovered what they believed were two kinds of frictional electricity—one generated from rubbing glass, the other from rubbing resin. From this, du Fay theorized that electricity consists of two electrical fluids, vitreous and resinous, that are separated by friction and that neutralize each other when combined. American scientist Ebenezer Kinnersley independently reached the same conclusion. A decade later, Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess or deficit.
He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier and which was a deficit. Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, whose interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion; he was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".
Stoney coined the term electron to describe these elementary charges.
A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single integrated circuit, or at most a few integrated circuits. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output. Microprocessors contain both combinational and sequential digital logic, and operate on numbers and symbols represented in the binary number system. The integration of a whole CPU onto a single or a few integrated circuits greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated processes, resulting in a low unit price. Single-chip processors also increase reliability because there are many fewer electrical connections that could fail; as microprocessor designs improve, the cost of manufacturing a chip generally stays the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits.
Microprocessors combined this into one or a few large-scale ICs. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete, with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit and a control logic section; the ALU performs arithmetic operations such as addition and logical operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation.
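The minimal ALU-plus-flags arrangement described above can be sketched as a toy model (entirely my own illustration, not any real instruction set): a few operations on fixed-width values, with zero and carry flags recording the outcome of the last operation.

```python
# Toy model of a minimal ALU: ADD, AND and OR on 8-bit values,
# setting zero and carry flags in a "status register" dictionary.
def alu(op: str, a: int, b: int, width: int = 8):
    mask = (1 << width) - 1
    carry = False
    if op == "ADD":
        raw = a + b
        carry = raw > mask      # carry flag: the sum overflowed the width
        result = raw & mask     # keep only the low `width` bits
    elif op == "AND":
        result = a & b
    elif op == "OR":
        result = a | b
    else:
        raise ValueError(f"unknown operation: {op}")
    flags = {"zero": result == 0, "carry": carry}
    return result, flags

# 0xF0 + 0x20 = 0x110: the low byte is 0x10 and the carry flag is set.
result, flags = alu("ADD", 0xF0, 0x20)
```

Real control logic would then test these flags to implement conditional branches, which is the subject of the next paragraph.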
The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths and other elements of the processor. As integrated circuit technology advanced, it became feasible to manufacture more and more complex processors on a single chip: the size of data objects became larger, and additional features were added to the processor architecture. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors and had to be carried out in software. Integration of the floating point unit, first as a separate integrated circuit and later as part of the same microprocessor chip, sped up floating point calculations. Physical limitations of integrated circuits once made such practices as a bit slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, say, 32-bit words using integrated circuits with a capacity for only four bits each.
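The bit-slice idea above can be sketched in a few lines (my own illustration of the principle, not any particular bit-slice part): eight 4-bit "slices" are chained by a carry signal to add two 32-bit words, with each slice only ever handling four bits at a time.

```python
# Sketch of bit slicing: a 32-bit add built from eight 4-bit slices
# connected by a ripple carry, as a bit-slice processor would do in hardware.
def slice_add(a4: int, b4: int, carry_in: int):
    """One 4-bit ALU slice: returns (4-bit sum, carry_out)."""
    raw = a4 + b4 + carry_in
    return raw & 0xF, raw >> 4

def add32_with_4bit_slices(a: int, b: int) -> int:
    result, carry = 0, 0
    for slice_index in range(8):                 # eight 4-bit slices
        a4 = (a >> (4 * slice_index)) & 0xF      # this slice's nibble of a
        b4 = (b >> (4 * slice_index)) & 0xF      # this slice's nibble of b
        s4, carry = slice_add(a4, b4, carry)     # carry ripples to next slice
        result |= s4 << (4 * slice_index)
    return result & 0xFFFFFFFF                   # final carry-out discarded

assert add32_with_4bit_slices(0xFFFFFFFF, 1) == 0  # wraps around like hardware
```

The "extra logic" the text mentions is exactly the carry chain here: each slice must pass its carry-out to the next slice's carry-in.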
The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. A microprocessor is a general-purpose device. Several specialized processing devices have followed: a digital signal processor is specialized for signal processing; graphics processing units are processors designed for realtime rendering of images; other specialized units exist for video processing and machine vision. Microcontrollers integrate a microprocessor with peripheral devices for control of embedded systems. Systems on chip integrate one or more microprocessor or microcontroller cores with other components. Microprocessors can be selected for differing applications based on their word size, which is a measure of their complexity.
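The speed advantage of on-chip cache can be quantified with the usual average-access-time model (the formula is standard; the latency and hit-rate numbers below are illustrative, not from the article): most accesses are served at cache speed, and only misses pay the full external-memory latency.

```python
# Back-of-envelope cache model: average access time under a given hit rate.
def average_access_time(hit_rate: float, cache_ns: float, memory_ns: float) -> float:
    """Hits are served by the cache, misses go to external memory."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * memory_ns

# Illustrative latencies: 1 ns on-chip cache, 60 ns external memory.
fast = average_access_time(0.95, 1.0, 60.0)  # high hit rate
slow = average_access_time(0.50, 1.0, 60.0)  # poor hit rate
```

With a 95% hit rate the average access drops to a few nanoseconds, an order of magnitude better than going to external memory every time, which is the effect the paragraph describes.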
Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are widely integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system on a chip or microcontroller applications that require low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both. Running 32-bit arithmetic on an 8-bit chip could end up using more power, as the chip must execute multiple instructions in software to carry out each 32-bit operation. Thousands of items that were traditionally not computer-related now include microprocessors.
Polycrystalline silicon, also called polysilicon or poly-Si, is a high-purity, polycrystalline form of silicon, used as a raw material by the solar photovoltaic and electronics industries. Polysilicon is produced from metallurgical grade silicon by a chemical purification process called the Siemens process; this process involves distillation of volatile silicon compounds and their decomposition into silicon at high temperatures. An emerging alternative process of refinement uses a fluidized bed reactor. The photovoltaic industry also produces upgraded metallurgical-grade silicon, using metallurgical instead of chemical purification processes. When produced for the electronics industry, polysilicon contains impurity levels of less than one part per billion, while polycrystalline solar grade silicon is generally less pure. A few companies from China, Germany, Japan, Korea and the United States, such as GCL-Poly, Wacker Chemie, OCI and Hemlock Semiconductor, as well as the Norwegian-headquartered REC, accounted for most of the worldwide production of about 230,000 tonnes in 2013.
The polysilicon feedstock – large rods, broken into chunks of specific sizes and packaged in clean rooms before shipment – is directly cast into multicrystalline ingots or submitted to a recrystallization process to grow single crystal boules. The boules are then sliced into thin silicon wafers and used for the production of solar cells, integrated circuits and other semiconductor devices. Polysilicon consists of small crystals known as crystallites, giving the material its typical metal flake effect. While polysilicon and multisilicon are often used as synonyms, multicrystalline usually refers to crystals larger than 1 mm. Multicrystalline solar cells are the most common type of solar cells in the fast-growing PV market and consume most of the worldwide produced polysilicon. About 5 tons of polysilicon is required to manufacture 1 megawatt of conventional solar modules. Polysilicon is distinct from monocrystalline silicon and amorphous silicon. In single crystal silicon, also known as monocrystalline silicon, the crystalline framework is homogeneous, which can be recognized by an even external colouring.
The entire sample is one single and unbroken crystal as its structure contains no grain boundaries. Large single crystals are rare in nature and can be difficult to produce in the laboratory. In contrast, in an amorphous structure the order in atomic positions is limited to short range. Polycrystalline and paracrystalline phases are composed of a number of smaller crystals or crystallites. Polycrystalline silicon is a material consisting of multiple small silicon crystals. Polycrystalline cells can be recognized by a visible grain, a "metal flake effect". Semiconductor grade polycrystalline silicon is converted to "single crystal" silicon – meaning that the randomly associated crystallites of silicon in "polycrystalline silicon" are converted to a large "single" crystal. Single crystal silicon is used to manufacture most Si-based microelectronic devices. Polycrystalline silicon can be as much as 99.9999% pure. Ultra-pure poly is used in the semiconductor industry, starting from poly rods that are two to three meters in length.
In the microelectronics industry, poly is used at both the macro and micro scales. Single crystals are grown using the Czochralski, float-zone and Bridgman techniques. At the component level, polysilicon has long been used as the conducting gate material in MOSFET and CMOS processing technologies. For these technologies it is deposited using low-pressure chemical-vapour deposition reactors at high temperatures and is usually heavily doped n-type or p-type. More recently, intrinsic and doped polysilicon is being used in large-area electronics as the active and/or doped layers in thin-film transistors. Although it can be deposited by LPCVD, plasma-enhanced chemical vapour deposition, or solid-phase crystallization of amorphous silicon in certain processing regimes, these processes still require relatively high temperatures of at least 300 °C; these temperatures make deposition of polysilicon possible for glass substrates but not for plastic substrates.
Therefore, a new technique called laser crystallization has been devised to crystallize a precursor amorphous silicon material on a plastic substrate without melting or damaging the plastic. Short, high-intensity ultraviolet laser pulses are used to heat the deposited a-Si material to above the melting point of silicon without melting the entire substrate; the molten silicon then crystallizes as it cools. By controlling the temperature gradients, researchers have been able to grow very large grains, of up to hundreds of micrometers in size in the extreme case, although grain sizes of 10 nanometers to 1 micrometer are also common. In order to create devices on polysilicon over large areas, however, a crystal grain size smaller than the device feature size is needed for homogeneity of the devices. Another method to produce poly-Si at low temperatures is metal-induced crystallization, where an amorphous-Si thin film can be crystallized at temperatures as low as 150 °C if annealed while in contact with another metal film such as aluminium, gold, or silver.
Polysilicon has many applications in VLSI manufacturing. One of its primary uses is as gate electrode material for MOS devices. A polysilicon gate's electrical conductivity may be increased by depositing a metal or a metal silicide (such as tungsten silicide) over the gate.
Copper is a chemical element with symbol Cu and atomic number 29. It is a soft, malleable and ductile metal with very high thermal and electrical conductivity. A freshly exposed surface of pure copper has a pinkish-orange color. Copper is used as a conductor of heat and electricity, as a building material, and as a constituent of various metal alloys, such as sterling silver used in jewelry, cupronickel used to make marine hardware and coins, and constantan used in strain gauges and thermocouples for temperature measurement. Copper is one of the few metals that can occur in nature in a directly usable metallic form; this led to very early human use in several regions, from c. 8000 BC. Thousands of years later, it was the first metal to be smelted from sulfide ores, c. 5000 BC; the first metal to be cast into a shape in a mold, c. 4000 BC; and the first metal to be purposefully alloyed with another metal, tin, to create bronze, c. 3500 BC. In the Roman era, copper was principally mined on Cyprus, the origin of the name of the metal, from aes cyprium (metal of Cyprus), corrupted to cuprum, from which the English word copper derived, first used around 1530.
The commonly encountered compounds are copper(II) salts, which often impart blue or green colors to such minerals as azurite, malachite and turquoise, and have been used widely and historically as pigments. Copper used in buildings, usually for roofing, oxidizes to form a green verdigris. Copper is sometimes used in decorative art, both in its elemental metal form and in compounds as pigments. Copper compounds are also used as bacteriostatic agents and wood preservatives. Copper is essential to all living organisms as a trace dietary mineral because it is a key constituent of the respiratory enzyme complex cytochrome c oxidase. In molluscs and crustaceans, copper is a constituent of the blood pigment hemocyanin, replaced by the iron-complexed hemoglobin in fish and other vertebrates. In humans, copper is found mainly in the liver, muscle and bone; the adult body contains between 1.4 and 2.1 mg of copper per kilogram of body weight. Copper, silver and gold are in group 11 of the periodic table; the filled d-shells in these elements contribute little to interatomic interactions, which are dominated by the s-electrons through metallic bonds.
Unlike metals with incomplete d-shells, metallic bonds in copper are lacking a covalent character and are relatively weak. This observation explains the low hardness and high ductility of single crystals of copper. At the macroscopic scale, introduction of extended defects into the crystal lattice, such as grain boundaries, hinders flow of the material under applied stress, thereby increasing its hardness. For this reason, copper is usually supplied in a fine-grained polycrystalline form, which has greater strength than monocrystalline forms. The softness of copper partly explains its high electrical conductivity and high thermal conductivity, second highest among pure metals at room temperature. This is because the resistivity to electron transport in metals at room temperature originates primarily from scattering of electrons on thermal vibrations of the lattice, which are relatively weak in a soft metal. The maximum permissible current density of copper in open air is approximately 3.1×10⁶ A/m² of cross-sectional area, above which it begins to heat excessively. Copper is one of a few metallic elements with a natural color other than gray or silver.
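The current-density figure quoted above translates directly into a maximum current for a conductor of a given cross-section. As a small worked example (the wire diameter is my own illustrative choice):

```python
# Maximum permissible current for a round copper wire in open air,
# using the current-density figure quoted in the text.
import math

MAX_CURRENT_DENSITY = 3.1e6  # A/m^2, copper in open air (from the text)

def max_current_for_wire(diameter_m: float) -> float:
    """Maximum current (A) for a round wire of the given diameter (m)."""
    area = math.pi * (diameter_m / 2.0) ** 2  # cross-sectional area, m^2
    return MAX_CURRENT_DENSITY * area

# A 2 mm diameter wire: cross-section ~3.14 mm^2, so roughly 10 A.
i_max = max_current_for_wire(2e-3)
```

Practical wiring codes use lower limits than this open-air maximum, since insulated and bundled conductors dissipate heat less readily.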
Pure copper acquires a reddish tarnish when exposed to air. The characteristic color of copper results from the electronic transitions between the filled 3d and half-empty 4s atomic shells – the energy difference between these shells corresponds to orange light. As with other metals, if copper is put in contact with another metal, galvanic corrosion will occur. Copper does not react with water, but it does react slowly with atmospheric oxygen to form a layer of brown-black copper oxide which, unlike the rust that forms on iron in moist air, protects the underlying metal from further corrosion. A green layer of verdigris can often be seen on old copper structures, such as the roofing of many older buildings and the Statue of Liberty. Copper tarnishes when exposed to some sulfur compounds, with which it reacts to form various copper sulfides. There are 29 isotopes of copper. 63Cu and 65Cu are stable, with 63Cu comprising approximately 69% of naturally occurring copper. The other isotopes are radioactive, with the most stable being 67Cu with a half-life of 61.83 hours.
Seven metastable isotopes have been characterized. Isotopes with a mass number above 64 decay by β−, whereas those with a mass number below 64 decay by β+. 64Cu, which has a half-life of 12.7 hours, decays both ways. 62Cu and 64Cu have significant applications; 62Cu is used in 62Cu-PTSM as a radioactive tracer for positron emission tomography. Copper is produced in massive stars and is present in the Earth's crust in a proportion of about 50 parts per million. In nature, copper occurs in a variety of minerals, including native copper; copper sulfides such as chalcopyrite, digenite and chalcocite; copper sulfosalts such as tetrahedrite-tennantite and enargite; copper carbonates such as azurite and malachite; and copper(I) or copper(II) oxides such as cuprite and tenorite, respectively. The largest mass of elemental copper discovered weighed 420 tonnes and was found in 1857 on the Keweenaw Peninsula in Michigan, US. Native copper is a polycrystal.
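The half-lives quoted above determine how quickly a sample of a radioactive copper isotope decays, via the standard relation that the remaining fraction after time t is (1/2)^(t/T½). A minimal sketch using the 67Cu half-life from the text:

```python
# Exponential decay: fraction of a radioactive sample remaining after
# t hours, given its half-life (values for 67Cu from the text).
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Remaining fraction of the original sample: (1/2)**(t / T_half)."""
    return 0.5 ** (t_hours / half_life_hours)

HALF_LIFE_67CU = 61.83  # hours, from the text

after_one_half_life = fraction_remaining(61.83, HALF_LIFE_67CU)  # exactly 0.5
after_one_week = fraction_remaining(7 * 24, HALF_LIFE_67CU)      # roughly 15%
```

This short half-life is why medical isotopes such as 62Cu for PET must be produced close to where and when they are used.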
In electronics, a self-aligned gate is a transistor manufacturing feature whereby a refractory gate electrode region of a MOSFET is used as a mask for the doping of the source and drain regions. This technique ensures that the gate only slightly overlaps the edges of the source and drain. The use of self-aligned gates is one of the many innovations that led to the large increase in computing power in the 1970s, and self-aligned gates are still used in most modern integrated circuit processes. The self-aligned gate eliminates the need to align the gate electrode to the source and drain regions of a MOS transistor during the fabrication process. With self-aligned gates, the parasitic overlap capacitances between gate and source and between gate and drain are reduced, leading to MOS transistors that are faster and more reliable than transistors made without them. After early experimentation with different gate materials, the industry universally adopted self-aligned gates made with polycrystalline silicon, the so-called Silicon Gate Technology, which had many additional benefits beyond the reduction of parasitic capacitances.
One important feature of SGT was that the silicon gate was buried under top-quality thermal oxide, making it possible to create new device types not feasible with conventional technology or with self-aligned gates made with other materials. Important examples are charge-coupled devices, used for image sensors, and non-volatile memory devices using floating silicon-gate structures; these devices considerably enlarged the range of functionality that could be achieved with solid state electronics. Certain innovations were required in order to make self-aligned gates practical, chief among them a gate material and a process sequence that could survive the high-temperature steps of fabrication. Prior to these innovations, self-aligned gates had been demonstrated on metal-gate devices, but their real impact was on silicon-gate devices. The aluminum-gate MOS process technology, developed in the mid-sixties, started with the definition and doping of the source and drain regions of MOS transistors, followed by the gate mask that defined the thin-oxide region of the transistors.
With additional processing steps, an aluminum gate would be formed over the thin-oxide region, completing the device fabrication. Due to the inevitable misalignment of the gate mask with respect to the source and drain mask, it was necessary to have a large overlap area between the gate region and the source and drain regions, to ensure that the thin-oxide region would bridge the source and drain under worst-case misalignment. This requirement resulted in gate-to-source and gate-to-drain parasitic capacitances that were large and variable from wafer to wafer, depending on the misalignment of the gate oxide mask with respect to the source and drain mask. The result was an undesirable spread in the speed of the integrated circuits produced, and a much lower speed than would have been theoretically possible if the parasitic capacitances could be reduced to a minimum. The overlap capacitance with the most adverse consequences on performance was the gate-to-drain parasitic capacitance, which, by the well-known Miller effect, augmented the gate-to-source capacitance of the transistor by Cgd multiplied by the gain of the circuit of which that transistor was a part.
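The Miller effect described above can be made concrete with the standard formula for the effective input capacitance of an inverting stage, C_in = C_gs + C_gd·(1 + |A|); the capacitance and gain values below are illustrative, not from the article:

```python
# The Miller effect: the gate-to-drain overlap capacitance C_gd appears
# at the input multiplied by (1 + |gain|) for an inverting stage.
def miller_input_capacitance(c_gs: float, c_gd: float, gain: float) -> float:
    """Effective input capacitance in farads: C_gs + C_gd * (1 + |A|)."""
    return c_gs + c_gd * (1.0 + abs(gain))

# Illustrative values: 2 pF gate-source, 0.5 pF gate-drain overlap,
# voltage gain of 10. The small 0.5 pF overlap adds 5.5 pF at the input.
c_in = miller_input_capacitance(2e-12, 0.5e-12, 10.0)
```

This multiplication is why the gate-to-drain overlap, even when physically small, dominated the speed penalty of misaligned aluminum gates.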
The impact was a considerable reduction in the switching speed of transistors. In 1966, Robert Bower realized that if the gate electrode was defined first, it would be possible not only to minimize the parasitic capacitances between gate and source and drain, but also to make them insensitive to misalignment. He proposed a method in which the aluminum gate electrode itself was used as a mask to define the source and drain regions of the transistor. However, since aluminum could not withstand the high temperature required for the conventional doping of the source and drain junctions, Bower proposed to use ion implantation, a new doping technique still in development at Hughes Aircraft, his employer, and not yet available at other labs. While Bower's idea was conceptually sound, in practice it did not work, because it was impossible to adequately passivate the transistors and repair the radiation damage done to the silicon crystal structure by the ion implantation, since these two operations would have required temperatures in excess of the ones survivable by the aluminum gate.
Thus his invention provided a proof of principle, but no commercial integrated circuit was produced with Bower's method; a more refractory gate material was needed. In 1967, John C. Sarace and collaborators at Bell Labs replaced the aluminum gate with an electrode made of vacuum-evaporated amorphous silicon and succeeded in building working self-aligned gate MOS transistors. However, the process, as described, was only a proof of principle, suitable only for the fabrication of discrete transistors and not for integrated circuits. In 1968, the MOS industry was predominantly using aluminum-gate transistors with high threshold voltage and desired a low threshold voltage MOS process in order to increase the speed and reduce the power dissipation of MOS integrated circuits. Low threshold voltage transistors with aluminum gates demanded the use of (100) silicon orientation, which however produced too low a threshold voltage for the parasitic MOS transistors (the MOS transistors created where aluminum traces ran over the thick field oxide between doped regions).
P-type metal-oxide-semiconductor (PMOS) logic uses p-channel metal-oxide-semiconductor field effect transistors to implement logic gates and other digital circuits. PMOS transistors operate by creating an inversion layer in an n-type transistor body; this inversion layer, called the p-channel, can conduct holes between p-type "source" and "drain" terminals. The p-channel is created by applying voltage to the third terminal, called the gate. Like other MOSFETs, PMOS transistors have four modes of operation: cut-off, triode, saturation and velocity saturation. While PMOS logic is easy to design and manufacture, it has several shortcomings as well. The worst problem is that there is a direct current through a PMOS logic gate when the pull-up network (PUN) is active, that is, whenever the output is high, which leads to static power dissipation even when the circuit sits idle. PMOS circuits are also slow to transition from high to low. When transitioning from low to high, the transistors provide low resistance, and the capacitive charge at the output accumulates quickly.
But the resistance between the output and the negative supply rail is much greater, so the high-to-low transition takes longer. Using a resistor of lower value will speed up the process, but also increases static power dissipation. Additionally, the asymmetric input logic levels make PMOS circuits susceptible to noise. Most PMOS integrated circuits require a power supply of 17 to 24 volts DC. The Intel 4004 PMOS microprocessor, however, uses PMOS logic with polysilicon rather than metal gates, allowing a smaller voltage differential; for compatibility with TTL signals, the 4004 uses a positive supply voltage VSS = +5 V and a negative supply voltage VDD = −10 V. Though easier to manufacture, PMOS logic was later supplanted by NMOS logic using n-channel field-effect transistors; NMOS is faster than PMOS because the electrons that carry charge in n-channel transistors have higher mobility than holes. Modern integrated circuits use CMOS logic, which combines both n-channel and p-channel transistors. In PMOS logic, the p-type MOSFETs are arranged in a so-called "pull-up network" between the logic gate output and the positive supply voltage, while a resistor is placed between the logic gate output and the negative supply voltage.
The circuit is designed such that if the desired output is high, the PUN will be active, creating a current path between the positive supply and the output. PMOS gates use the same series-parallel network arrangements as NMOS gates, with the roles of the transistor network and the resistor exchanged. Thus, for active-high logic, De Morgan's laws show that a PMOS NOR gate has the same structure as an NMOS NAND gate and vice versa.
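The De Morgan correspondence above can be checked exhaustively. A PMOS NOR pulls its output high through two p-channel transistors in series, each conducting when its input is low; that series pull-up computes (NOT a) AND (NOT b), which by De Morgan equals NOT (a OR b). The sketch below (my own illustration) verifies this over the full truth table:

```python
# De Morgan check behind the PMOS NOR / NMOS NAND correspondence:
# a series pull-up of p-channel devices (each on when its input is 0)
# computes (not a) and (not b), which equals not (a or b), i.e. NOR.
def nor_via_series_pullup(a: int, b: int) -> int:
    # Series PUN: the output is pulled high only when BOTH inputs are low.
    return int((not a) and (not b))

def nor_reference(a: int, b: int) -> int:
    return int(not (a or b))

for a in (0, 1):
    for b in (0, 1):
        assert nor_via_series_pullup(a, b) == nor_reference(a, b)
```

The same series topology used for the pull-down network of an NMOS NAND gives NOT (a AND b), which is the structural equivalence the text states.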
In electronic logic circuits, a pull-up resistor or pull-down resistor is a resistor used to ensure a known state for a signal. It is typically used in combination with components such as switches and transistors, which physically interrupt the connection of subsequent components to ground or to VCC. When the switch is closed, it creates a direct connection to ground or VCC, but when the switch is open, the rest of the circuit is left floating. For a switch that connects to ground, a pull-up resistor ensures a well-defined voltage across the remainder of the circuit when the switch is open. Conversely, for a switch that connects to VCC, a pull-down resistor ensures a well-defined ground voltage when the switch is open. An open switch is not equivalent to a component with infinite impedance, since in the former case the stationary voltage in any loop in which it is involved can no longer be determined by Kirchhoff's laws; the voltages across those critical components which lie only in loops involving the open switch are undefined, too.
A pull-up resistor establishes an additional loop over the critical components, ensuring that the voltage is well-defined even when the switch is open. For a pull-up resistor to serve only this purpose and not interfere with the circuit otherwise, a resistor with an appropriate amount of resistance must be used. For this, it is assumed that the critical components have infinite or sufficiently high impedance, which is guaranteed, for example, for logic gates made from FETs. In this case, when the switch is open, the voltage across a pull-up resistor with sufficiently low resistance vanishes, so that the resistor effectively looks like a wire to VCC. On the other hand, when the switch is closed, the pull-up resistor must have sufficiently high resistance in comparison to the closed switch so as not to affect the connection to ground. Together, these two conditions can be used to derive an appropriate value for the resistance of the pull-up resistor, although only a lower bound follows if the critical components are assumed to have truly infinite impedance.
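The switch-closed condition above is just a voltage divider: the pull-up and the closed switch's on-resistance divide VCC, so the pull-up must be large compared with the switch for the node to sit near ground. A minimal sketch with illustrative values (the resistances are my own assumptions, not from the text):

```python
# Voltage-divider view of a pull-up with the switch closed: the node
# voltage is VCC scaled by the switch's share of the total resistance.
def node_voltage_switch_closed(vcc: float, r_pullup: float, r_switch: float) -> float:
    """Node voltage when the switch ties the node to ground via r_switch."""
    return vcc * r_switch / (r_pullup + r_switch)

# Illustrative values: 5 V supply, 10 kOhm pull-up, 10 Ohm closed switch.
# The node sits at about 5 mV, far below any logic-low threshold.
v_low = node_voltage_switch_closed(5.0, 10_000.0, 10.0)
```

With the switch open and a high-impedance input, essentially no current flows through the pull-up, so the node sits at VCC, which is the other half of the argument.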
A resistor with low resistance is called a "strong" pull-up or pull-down; one with high resistance is called a "weak" pull-up or pull-down. A pull-up resistor may be used to give an otherwise floating input a well-defined default state. For example, an input signal may be pulled up by a resistor, and a switch or jumper strap can then be used to connect that input to ground; this can be used for configuration information, to select options or for troubleshooting of a device. Pull-up resistors may also be used at logic outputs where the logic device cannot source current, such as open-collector TTL logic devices; such outputs are used for driving external devices, for a wired-OR function in combinational logic, or as a simple way of driving a logic bus with multiple devices connected to it. For example, the circuit shown on the right uses 0 V logic level inputs to actuate a relay. If the input is left unconnected, pull-down resistor R1 ensures that the input is pulled down to a logic low. The 7407, an open-collector TTL buffer, outputs whatever it receives as input, but as an open-collector device, its output is left floating when outputting a "1".
Pull-up resistor R2 thus pulls the output all the way up to 12 V when the buffer outputs a "1", providing enough voltage to turn the power MOSFET fully on and actuate the relay. Pull-up resistors may be discrete devices mounted on the same circuit board as the logic devices. Many microcontrollers intended for embedded control applications have internal, programmable pull-up resistors for logic inputs so that minimal external components are needed. Some disadvantages of pull-up resistors are the extra power consumed when current is drawn through the resistor and the reduced speed of a pull-up compared to an active current source. Certain logic families are susceptible to power supply transients introduced into logic inputs through pull-up resistors, which may force the use of a separate filtered power source for the pull-ups. Pull-down resistors can be safely used with CMOS logic gates because the inputs are voltage-controlled. TTL logic inputs that are left unconnected inherently float high, and require a much lower-valued pull-down resistor to force the input low.
A standard TTL input at logic "1" is operated assuming a source current of 40 µA and a voltage level above 2.4 V, allowing a pull-up resistor of no more than about 50 kΩ. Holding unused TTL inputs low, by contrast, consumes more current; for that reason, pull-up resistors are preferred in TTL circuits. In bipolar logic families operating at 5 V DC, a typical pull-up resistor value will be 1000–5000 Ω, based on the requirement to provide the required logic level current over the full operating range of temperature and supply voltage. For CMOS and MOS logic, much higher values of resistor can be used, from several thousand to a million ohms, since the required leakage current at a logic input is small.
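The 50 kΩ figure above follows from Ohm's law: the pull-up must keep the input above V_IH while sourcing the input's high-level current. A minimal sketch (assuming, as a worst case of my own choosing, a supply drooping to 4.5 V):

```python
# Deriving the maximum TTL pull-up value: the drop across the resistor
# at the input's high-level current must not bring V_in below V_IH.
def max_pullup_ohms(v_supply: float, v_ih: float, i_ih: float) -> float:
    """Largest pull-up (ohms) keeping V_in >= v_ih at input current i_ih."""
    return (v_supply - v_ih) / i_ih

# 4.5 V worst-case supply (my assumption), 2.4 V and 40 uA from the text:
# (4.5 - 2.4) / 40e-6 = 52.5 kOhm, about 50 kOhm in round numbers.
r_max = max_pullup_ohms(4.5, 2.4, 40e-6)
```

The exact bound depends on the assumed worst-case supply; at a nominal 5.0 V supply the same formula gives 65 kΩ, so the quoted 50 kΩ includes margin.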