Photomask

A photomask is an opaque plate with holes or transparencies that allow light to shine through in a defined pattern. It is commonly used in photolithography. Lithographic photomasks are typically transparent fused silica blanks covered with a pattern defined with a chromium metal absorbing film. Photomasks are used at wavelengths of 365 nm, 248 nm, and 193 nm, and have also been developed for other forms of radiation such as 157 nm, 13.5 nm (EUV), X-rays, and ions. A set of photomasks, each defining a pattern layer in integrated circuit fabrication, is fed into a photolithography stepper or scanner and individually selected for exposure. In double patterning techniques, a photomask would correspond to a subset of the layer pattern.

In photolithography for the mass production of integrated circuit devices, the more correct term is photoreticle or simply reticle. In the case of a photomask, there is a one-to-one correspondence between the mask pattern and the wafer pattern; this was the standard for the 1:1 mask aligners that were succeeded by steppers and scanners with reduction optics.
As used in steppers and scanners, the reticle typically contains only one layer of the chip. The pattern is reduced ("shrunk") four or five times onto the wafer surface. To achieve complete wafer coverage, the wafer is repeatedly "stepped" from position to position under the optical column until full exposure is achieved.

Features 150 nm or below in size generally require phase-shifting to enhance the image quality to acceptable values; this can be achieved in many ways. The two most common methods are to use an attenuated phase-shifting background film on the mask to increase the contrast of small intensity peaks, or to etch the exposed quartz so that the edge between the etched and unetched areas can be used to image nearly zero intensity. In the second case, unwanted edges would need to be trimmed out with another exposure. The former method, known as attenuated phase-shifting, is considered a weak enhancement and requires special illumination for the most benefit, while the latter, known as alternating-aperture phase-shifting, is the most popular strong enhancement technique.
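For the alternating-aperture case, the etched region must impose a 180° phase shift relative to the unetched quartz. A standard relation (added here for illustration; it is not stated in the text above) gives the required etch depth d in terms of the exposure wavelength λ and the refractive index n of the quartz:

```latex
d = \frac{\lambda}{2(n - 1)}
\qquad\text{e.g. } \lambda = 193\,\text{nm},\ n \approx 1.56
\ \Rightarrow\ d \approx 172\,\text{nm}.
```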
As leading-edge semiconductor features shrink, photomask features that are four times larger must shrink as well. This poses challenges, since the absorber film must become thinner and hence less opaque. A study by IMEC, using state-of-the-art photolithography tools, found that thinner absorbers degrade image contrast and therefore contribute to line-edge roughness. One possibility is to eliminate absorbers altogether and use "chromeless" masks, relying solely on phase-shifting for imaging.

The emergence of immersion lithography has a strong impact on photomask requirements. The commonly used attenuated phase-shifting mask is more sensitive to the higher incidence angles applied in "hyper-NA" lithography, due to the longer optical path through the patterned film.

Leading-edge photomasks carry images of the final chip patterns magnified, commonly by four times. This magnification factor has been a key benefit in reducing pattern sensitivity to imaging errors. However, as features continue to shrink, two trends come into play: the first is that the mask error factor begins to exceed one, i.e., the dimension error on the wafer may be more than 1/4 the dimension error on the mask; the second is that as the mask feature becomes smaller, the dimension tolerance approaches a few nanometers. For example, a 25 nm wafer pattern should correspond to a 100 nm mask pattern, but the wafer tolerance could be 1.25 nm, which translates into 5 nm on the photomask. The variation of electron-beam scattering in directly writing the photomask pattern can easily exceed this.
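A minimal sketch of this tolerance arithmetic (the 4× magnification and the numbers are from the text above; folding the mask error factor into the formula, and the function and variable names, are illustrative assumptions):

```python
def mask_tolerance_nm(wafer_tolerance_nm: float, magnification: float = 4.0,
                      mask_error_factor: float = 1.0) -> float:
    """Mask-side dimension tolerance implied by a wafer-side tolerance.

    With a mask error factor (MEF) of 1, mask errors are reduced exactly by
    the magnification; with MEF > 1, wafer errors exceed mask_error divided
    by magnification, so the allowed mask error shrinks proportionally.
    """
    return wafer_tolerance_nm * magnification / mask_error_factor

# The example from the text: a 1.25 nm wafer tolerance on a 25 nm feature
# translates into a 5 nm tolerance on the 100 nm mask feature (MEF = 1).
print(mask_tolerance_nm(1.25))                          # 5.0
# If the MEF grows to 2, the mask tolerance tightens to 2.5 nm.
print(mask_tolerance_nm(1.25, mask_error_factor=2.0))   # 2.5
```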
The term "pellicle" means "film", "thin film", or "membrane". Beginning in the 1960s, thin films stretched on metal frames, known as "pellicles", were used as beam splitters for optical instruments. A pellicle has been used in a number of instruments to split a beam of light without causing an optical path shift, owing to its small film thickness. In 1978, Shea et al. at IBM patented a process to use a pellicle as a dust cover to protect a photomask or reticle. In the context of this entry, "pellicle" means "thin-film dust cover to protect a photomask".

Particle contamination can be a significant problem in semiconductor manufacturing. A photomask is protected from particles by a pellicle – a thin transparent film stretched over a frame that is glued over one side of the photomask.
The pellicle is far enough away from the mask patterns that moderate-to-small sized particles landing on the pellicle will be too far out of focus to print. Although pellicles are designed to keep particles away, they become a part of the imaging system, and their optical properties need to be taken into account. Pellicle materials include nitrocellulose, and pellicles are made for various transmission wavelengths.

The SPIE Annual Conference on Photomask Technology reports the SEMATECH Mask Industry Assessment, which includes current industry analysis and the results of its annual photomask manufacturers survey. The following companies are listed in order of their global market share: Dai Nippon Printing, Toppan Photomasks, Photronics Inc., Hoya Corporation, Taiwan Mask Corporation, and Compugraphics. Major chipmakers such as Intel, GlobalFoundries, IBM, NEC, TSMC, UMC, and Micron Technology have their own large maskmaking facilities or joint ventures with the abovementioned companies. The worldwide photomask market was estimated at $3.2 billion in 2012 and $3.1 billion in 2013.
Half of the market is held by captive mask shops – the in-house mask facilities of major chipmakers.
Arithmetic logic unit
An arithmetic logic unit (ALU) is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating-point numbers. An ALU is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs). A single CPU, FPU or GPU may contain multiple ALUs.

The inputs to an ALU are the data to be operated on, called operands, and a code indicating the operation to be performed. In many designs, the ALU also has status inputs or outputs, or both, which convey information about a previous operation or the current operation between the ALU and external status registers. An ALU has a variety of input and output nets, which are the electrical conductors used to convey digital signals between the ALU and external circuitry. When an ALU is operating, external circuits apply signals to the ALU inputs and, in response, the ALU produces and conveys signals to external circuitry via its outputs.
A basic ALU has three parallel data buses: two input operands (A and B) and a result output (Y). Each data bus is a group of signals that conveys one binary integer number; typically, the A, B and Y bus widths are identical and match the native word size of the external circuitry. The opcode input is a parallel bus that conveys to the ALU an operation selection code, an enumerated value that specifies the desired arithmetic or logic operation to be performed by the ALU. The opcode size determines the maximum number of different operations; for example, a four-bit opcode can specify up to sixteen different ALU operations. An ALU opcode is not the same as a machine language opcode, though in some cases it may be directly encoded as a bit field within a machine language opcode.

The status outputs are various individual signals that convey supplemental information about the result of the current ALU operation. General-purpose ALUs have status signals such as: Carry-out, which conveys the carry resulting from an addition operation, the borrow resulting from a subtraction operation, or the overflow bit resulting from a binary shift operation. Zero, which indicates all bits of Y are logic zero.
Negative, which indicates the result of an arithmetic operation is negative. Overflow, which indicates the result of an arithmetic operation has exceeded the numeric range of Y. Parity, which indicates whether an even or odd number of bits in Y are logic one.

At the end of each ALU operation, the status output signals are usually stored in external registers to make them available for future ALU operations or for controlling conditional branching. The collection of bit registers that store the status outputs is often treated as a single, multi-bit register, referred to as the "status register" or "condition code register". The status inputs allow additional information to be made available to the ALU when performing an operation; typically, this is a single "carry-in" bit that is the stored carry-out from a previous ALU operation.
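As a concrete illustration of the status flags described above, here is a small behavioral sketch; the 8-bit width, the function name, and the flag letters are assumptions for the example, not part of the text:

```python
def add_with_flags(a: int, b: int, width: int = 8):
    """Add two unsigned width-bit integers and derive typical ALU status flags."""
    mask = (1 << width) - 1
    raw = a + b
    y = raw & mask                        # result truncated to the Y bus width
    carry = raw > mask                    # unsigned carry-out
    zero = (y == 0)                       # all bits of Y are logic zero
    negative = bool(y >> (width - 1))     # sign bit of Y (two's complement view)
    # Signed overflow: both operands share a sign that differs from the result's.
    sa, sb, sy = a >> (width - 1), b >> (width - 1), y >> (width - 1)
    overflow = (sa == sb) and (sa != sy)
    parity_even = bin(y).count("1") % 2 == 0   # even number of one bits in Y
    return y, {"C": carry, "Z": zero, "N": negative, "V": overflow, "P": parity_even}

# 0x7F + 0x01 = 0x80: sets Negative and Overflow, but not Carry or Zero.
print(add_with_flags(0x7F, 0x01))
```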
An ALU is a combinational logic circuit, meaning that its outputs will change asynchronously in response to input changes. In normal operation, stable signals are applied to all of the ALU inputs and, when enough time has passed for the signals to propagate through the ALU circuitry, the result of the ALU operation appears at the ALU outputs. The external circuitry connected to the ALU is responsible for ensuring the stability of ALU input signals throughout the operation, and for allowing sufficient time for the signals to propagate through the ALU before sampling the ALU result.

In general, external circuitry controls an ALU by applying signals to its inputs. Typically, the external circuitry employs sequential logic to control the ALU operation, paced by a clock signal of a sufficiently low frequency to ensure enough time for the ALU outputs to settle under worst-case conditions. For example, a CPU begins an ALU addition operation by routing operands from their sources to the ALU's operand inputs, while the control unit applies a value to the ALU's opcode input, configuring it to perform addition. At the same time, the CPU routes the ALU result output to a destination register that will receive the sum. The ALU's input signals, which are held stable until the next clock, are allowed to propagate through the ALU and to the destination register while the CPU waits for the next clock.
When the next clock arrives, the destination register stores the ALU result and, since the ALU operation has completed, the ALU inputs may be set up for the next ALU operation.

A number of basic arithmetic and bitwise logic functions are commonly supported by ALUs. Basic, general-purpose ALUs include these operations in their repertoires. Add: A and B are summed and the sum appears at Y and carry-out. Add with carry: A, B and carry-in are summed and the sum appears at Y and carry-out. Subtract: B is subtracted from A and the difference appears at Y and carry-out; for this function, carry-out is effectively a "borrow" indicator, and the operation may be used to compare the magnitudes of A and B. Subtract with borrow: B is subtracted from A with borrow (carry-in) and the difference appears at Y and carry-out (borrow out).
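A minimal behavioral sketch of such a repertoire (the opcode mnemonics, the 8-bit width, and the dictionary-based dispatch are illustrative assumptions, not from the text):

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def alu(opcode: str, a: int, b: int, carry_in: int = 0):
    """Return (Y, carry_out) for a few basic operations on WIDTH-bit operands."""
    ops = {
        "add": a + b,                       # A plus B
        "adc": a + b + carry_in,            # add with carry
        "sub": a + (~b & MASK) + 1,         # subtract via two's complement
        "sbb": a + (~b & MASK) + carry_in,  # subtract with borrow
                                            # (carry_in = 1 means no pending borrow)
    }
    raw = ops[opcode]
    return raw & MASK, (raw >> WIDTH) & 1   # Y bus value and carry-out bit

y, c = alu("sub", 5, 7)
# 5 - 7 = -2, i.e. 0xFE on an 8-bit bus; carry-out of 0 signals a borrow.
print(hex(y), c)  # 0xfe 0
```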
Processor (computing)

In computing, a processor or processing unit is an electronic circuit which performs operations on some external data source, usually memory or some other data stream. The term is often used to refer to the central processor in a system, but typical computer systems combine a number of specialised "processors":

CPU – central processing unit. If designed conforming to the von Neumann architecture, it contains at least a control unit, an arithmetic logic unit and processor registers. In some contexts, the ALU and registers together are called the processing unit.
GPU – graphics processing unit
VPU – video processing unit
TPU – tensor processing unit
NPU – neural processing unit
PPU – physics processing unit
DSP – digital signal processor
ISP – image signal processor
SPU or SPE – synergistic processing element in the Cell microprocessor
FPGA – field-programmable gate array
sound chip
Intellectual property

Intellectual property is a category of property that includes intangible creations of the human intellect. Intellectual property encompasses two types of rights: industrial property rights and copyright. It was not until the 19th century that the term "intellectual property" began to be used, and not until the late 20th century that it became commonplace in the majority of the world.

The main purpose of intellectual property law is to encourage the creation of a large variety of intellectual goods. To achieve this, the law gives people and businesses property rights to the information and intellectual goods they create – usually for a limited period of time. This gives economic incentive for their creation, because it allows people to profit from the information and intellectual goods they create. These economic incentives are expected to stimulate innovation and contribute to the technological progress of countries, which depends on the extent of protection granted to innovators.

The intangible nature of intellectual property presents difficulties when compared with traditional property like land or goods.
Unlike traditional property, intellectual property is "indivisible" – an unlimited number of people can "consume" an intellectual good without it being depleted. Additionally, investments in intellectual goods suffer from problems of appropriation: a landowner can surround their land with a robust fence and hire armed guards to protect it, but a producer of information or an intellectual good can usually do little to stop their first buyer from replicating it and selling it at a lower price. Balancing rights so that they are strong enough to encourage the creation of intellectual goods but not so strong that they prevent the goods' wide use is the primary focus of modern intellectual property law.

The Statute of Monopolies (1624) and the British Statute of Anne (1710) are seen as the origins of patent law and copyright respectively, firmly establishing the concept of intellectual property. "Literary property" was the term predominantly used in the British legal debates of the 1760s and 1770s over the extent to which authors and publishers of works had rights deriving from the common law of property.
The first known use of the term intellectual property dates to this time, when a piece published in the Monthly Review in 1769 used the phrase. The first clear example of modern usage goes back as early as 1808, when it was used as a heading title in a collection of essays. The German equivalent was used with the founding of the North German Confederation, whose constitution granted legislative power over the protection of intellectual property to the confederation. When the administrative secretariats established by the Paris Convention and the Berne Convention merged in 1893, they located in Berne and adopted the term intellectual property in their new combined title, the United International Bureaux for the Protection of Intellectual Property. The organization subsequently relocated to Geneva in 1960 and was succeeded in 1967 with the establishment of the World Intellectual Property Organization (WIPO) by treaty as an agency of the United Nations. According to legal scholar Mark Lemley, it was only at this point that the term began to be used in the United States, and it did not enter popular usage there until passage of the Bayh-Dole Act in 1980.
"The history of patents does not begin with inventions, but rather with royal grants by Queen Elizabeth I for monopoly privileges... 200 years after the end of Elizabeth's reign, however, a patent represents a legal right obtained by an inventor providing for exclusive control over the production and sale of his mechanical or scientific invention... the evolution of patents from royal prerogative to common-law doctrine." The term can be found used in an October 1845 Massachusetts Circuit Court ruling in the patent case Davoll et al. v. Brown. In which Justice Charles L. Woodbury wrote that "only in this way can we protect intellectual property, the labors of the mind and interests are as much a man's own...as the wheat he cultivates, or the flocks he rears." The statement that "discoveries are..property" goes back earlier. Section 1 of the French law of 1791 stated, "All new discoveries are the property of the author. In Europe, French author A. Nion mentioned propriété intellectuelle in his Droits civils des auteurs, artistes et inventeurs, published in 1846.
Until recently, the purpose of intellectual property law was to give as little protection as possible in order to encourage innovation. Protections were therefore granted only when they were necessary to encourage invention, and were limited in time and scope. This is largely a result of knowledge being traditionally viewed as a public good, to be widely disseminated and improved upon.

The concept's origins can be traced back further. Jewish law includes several considerations whose effects are similar to those of modern intellectual property laws, though the notion of intellectual creations as property does not seem to exist – notably the principle of Hasagat Ge'vul (unfair encroachment) was used to justify limited-term publisher copyright in the 16th century. In 500 BCE, the government of the Greek state of Sybaris offered one year's patent "to all who should discover any new refinement in luxury". According to Jean-Frédéric Morin, "the global intellectual property regime is currently in a paradigm shift".
Computer hardware

Computer hardware includes the physical, tangible parts or components of a computer, such as the cabinet, central processing unit, keyboard, computer data storage, graphics card, sound card and motherboard. By contrast, software is instructions that can be stored and run by hardware. Hardware is so-termed because it is rigid with respect to changes or modifications. Intermediate between software and hardware is "firmware": software that is strongly coupled to the particular hardware of a computer system and thus the most difficult to change, but also among the most stable with respect to consistency of interface. The progression from levels of "hardness" to "softness" in computer systems parallels a progression of layers of abstraction in computing.

Hardware is directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components. The template for all modern computers is the von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann.
This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the von Neumann bottleneck, and it limits the performance of the system.

The personal computer, also known as the PC, is one of the most common types of computer due to its versatility and relatively low price. Laptops are generally very similar, although they may use lower-power or reduced-size components, resulting in lower performance. The computer case encloses most of the components of the system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives and power supplies, and controls and directs the flow of cooling air over internal components.
The case is also part of the system to control electromagnetic interference radiated by the computer and protects internal parts from electrostatic discharge. Large tower cases provide extra internal space for multiple disk drives or other peripherals and usually stand on the floor, while desktop cases provide less expansion room. All-in-one style designs include a video display built into the same case. Portable and laptop computers require cases that provide impact protection for the unit. A current development in laptop computers is a detachable keyboard, which allows the system to be configured as a touch-screen tablet. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding.

A power supply unit converts alternating current electric power to low-voltage direct current power for the internal components of the computer. Laptops are capable of running from a built-in battery, normally for a period of hours. The motherboard is the main component of a computer. It is a board with integrated circuitry that connects the other parts of the computer, including the CPU, the RAM and the disk drives, as well as any peripherals connected via the ports or the expansion slots.
Components directly attached to or part of the motherboard include: The CPU, which performs most of the calculations that enable a computer to function, and is sometimes referred to as the brain of the computer. It is usually cooled by a heatsink and fan, or a water-cooling system. Most newer CPUs include an on-die graphics processing unit. The clock speed of a CPU governs how fast it executes instructions and is measured in GHz. Many modern computers have the option to overclock the CPU, which enhances performance at the expense of greater thermal output and thus a need for improved cooling. The chipset, which includes the north bridge, mediates communication between the CPU and the other components of the system, including main memory. Random-access memory (RAM), which stores the code and data that are being actively accessed by the CPU; for example, when a web browser is opened on the computer it takes up memory. RAM typically comes on DIMMs in sizes such as 2 GB, 4 GB, and 8 GB, but can be much larger. Read-only memory (ROM), which stores the BIOS that runs when the computer is powered on or otherwise begins execution, a process known as bootstrapping, or "booting" or "booting up".
The BIOS includes boot firmware and power management firmware. Newer motherboards use Unified Extensible Firmware Interface (UEFI) instead of BIOS. Buses connect the CPU to various internal components and to expansion cards for graphics and sound. The CMOS battery, which powers the memory for the date and time in the BIOS chip; this battery is generally a watch battery. The video card, which processes computer graphics; more powerful graphics cards are better suited to handle strenuous tasks, such as playing intensive video games. An expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or backplane to add functionality to a computer system via the expansion bus.
Signal integrity

Signal integrity or SI is a set of measures of the quality of an electrical signal. In digital electronics, a stream of binary values is represented by a voltage (or current) waveform. However, digital signals are fundamentally analog in nature, and all signals are subject to effects such as noise and loss. Over short distances and at low bit rates, a simple conductor can transmit this with sufficient fidelity. At high bit rates and over longer distances or through various mediums, various effects can degrade the electrical signal to the point where errors occur and the system or device fails. Signal integrity engineering is the task of analyzing and mitigating these effects. It is an important activity at all levels of electronics packaging and assembly, from internal connections of an integrated circuit, through the package, the printed circuit board, the backplane, and inter-system connections. While there are some common themes at these various levels, there are also practical considerations, in particular the interconnect flight time versus the bit period, that cause substantial differences in the approach to signal integrity for on-chip connections versus chip-to-chip connections.
Some of the main issues of concern for signal integrity are ringing, crosstalk, ground bounce, signal loss, and power supply noise. Signal integrity involves the electrical performance of the wires and other packaging structures used to move signals about within an electronic product. Such performance is a matter of basic physics and as such has remained relatively unchanged since the inception of electronic signaling. The first transatlantic telegraph cable suffered from severe signal integrity problems, and analysis of those problems yielded many of the mathematical tools still used today to analyze signal integrity, such as the telegrapher's equations.
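For reference, the telegrapher's equations mentioned above relate the voltage V(x, t) and current I(x, t) along a transmission line with per-unit-length series resistance R and inductance L, and per-unit-length shunt conductance G and capacitance C:

```latex
\frac{\partial V}{\partial x} = -L \frac{\partial I}{\partial t} - R I,
\qquad
\frac{\partial I}{\partial x} = -C \frac{\partial V}{\partial t} - G V .
```

With R and G set to zero these reduce to the ideal lossless line; the loss terms account for the attenuation and dispersion that plagued the early cable.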
Products as old as the Western Electric crossbar telephone exchange, based on the wire-spring relay, suffered nearly all the effects seen today – ringing, ground bounce, and power supply noise – that plague modern digital products. On printed circuit boards, signal integrity became a serious concern when the transition times of signals started to become comparable to the propagation time across the board; roughly speaking, this happens when system speeds exceed a few tens of MHz. At first, only a few of the most important, or highest-speed, signals needed detailed analysis or design. As speeds increased, a larger and larger fraction of signals needed SI analysis and design practices. In modern circuit designs, essentially all signals must be designed with SI in mind.

For ICs, SI analysis became necessary as an effect of reduced design rules. In the early days of the modern VLSI era, digital chip circuit design and layout were manual processes. The use of abstraction and the application of automatic synthesis techniques have since allowed designers to express their designs using high-level languages and apply an automated design process to create complex designs, largely ignoring the electrical characteristics of the underlying circuits. However, scaling trends brought electrical effects back to the forefront in recent technology nodes. With the scaling of technology below 0.25 µm, wire delays have become comparable to or greater than gate delays.
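A rough sketch of why wire delay overtakes gate delay: the 50% delay of an unbuffered distributed RC wire grows with the square of its length. The 0.4·R·C·L² approximation is a standard Elmore-style estimate, but the per-unit-length values below are illustrative assumptions, not process data.

```python
# Rough distributed-RC wire delay estimate (Elmore-style 50% delay ~ 0.4*R*C*L^2).
# The per-unit-length values are illustrative assumptions, not process data.
R_PER_M = 2e5    # ohm/m  (~0.2 ohm per micron of narrow interconnect)
C_PER_M = 2e-10  # F/m    (~0.2 fF per micron)

def wire_delay_s(length_m: float) -> float:
    """Approximate 50% delay of an unbuffered distributed RC wire."""
    return 0.4 * R_PER_M * C_PER_M * length_m ** 2

for length_um in (100, 1_000, 10_000):
    d_ps = wire_delay_s(length_um * 1e-6) * 1e12
    print(f"{length_um:>6} um wire: {d_ps:8.2f} ps")  # 0.16, 16, 1600 ps
```

The quadratic growth with length is why long unbuffered wires came to dominate gate delays of a few tens of picoseconds.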
As a result, wire delays needed to be considered to achieve timing closure. In nanometer technologies at 0.13 µm and below, unintended interactions between signals became an important consideration for digital design. At these technology nodes, the performance and correctness of a design cannot be assured without considering noise effects.

Most of this article is about SI in relation to modern electronic technology, notably the use of integrated circuits and printed circuit board technology, but the principles of SI are not exclusive to the signalling technology used. SI existed long before the advent of either technology and will persist as long as electronic communications exist.

Signal integrity problems in modern integrated circuits can have many drastic consequences for digital designs: products can fail to operate at all or, worse yet, become unreliable in the field; the design may work, but only at speeds slower than planned; and yield may be lowered, sometimes drastically. The cost of these failures is high, and includes photomask costs, engineering costs, and opportunity cost due to delayed product introduction.
Therefore, electronic design automation tools have been developed to analyze and correct these problems. In integrated circuits, or ICs, the main cause of signal integrity problems is crosstalk. In CMOS technologies, this is primarily due to coupling capacitance, but in general it may be caused by mutual inductance, substrate coupling, non-ideal gate operation, and other sources. The fixes typically involve changing the sizes of drivers and/or the spacing of wires.

In analog circuits, designers are also concerned with noise that arises from physical sources, such as thermal noise, flicker noise, and shot noise. These noise sources on the one hand present a lower limit to the smallest signal that can be amplified, and on the other define an upper limit to the useful amplification. In digital ICs, noise in a signal of interest arises chiefly from coupling effects from the switching of other signals. Increasing interconnect density has led to each wire having neighbors that are physically closer together, leading to increased crosstalk between neighboring nets.
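A simple way to see the capacitive mechanism (the charge-divider model and the numbers below are illustrative assumptions, not from the text): when an aggressor net switches by ΔV, a weakly driven victim net is pulled along by roughly ΔV · Cc / (Cc + Cg), where Cc is the coupling capacitance to the aggressor and Cg is the victim's capacitance to ground.

```python
def crosstalk_peak(delta_v: float, c_couple: float, c_ground: float) -> float:
    """Peak victim-net noise from a capacitive charge divider (weak victim driver)."""
    return delta_v * c_couple / (c_couple + c_ground)

# Illustrative values: a 1.0 V aggressor swing, 10 fF coupling, 30 fF to ground.
print(crosstalk_peak(1.0, 10e-15, 30e-15))  # 0.25 V induced on the victim
```

Strengthening the victim's driver or spacing the wires apart (reducing Cc) both shrink this ratio, which is why driver sizing and wire spacing are the standard fixes mentioned above.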
As circuits have continued to shrink in accordance with Moore's law, several effects have conspired to make noise problems worse. To keep resistance tolerable despite decreased width, modern wire geometries are thicker in proportion to their width; this, along with the closer spacing of adjacent wires, increases the sideways coupling capacitance between neighboring nets.
Field-programmable gate array
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence the term "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is increasingly rare due to the advent of electronic design automation tools. FPGAs contain an array of programmable logic blocks, and a hierarchy of "reconfigurable interconnects" that allow the blocks to be "wired together", like many logic gates that can be inter-wired in different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.
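A sketch of the basic mechanism behind configurable logic blocks (this Python model is purely illustrative; real blocks are configured by a bitstream, not by code): a k-input lookup table stores 2^k output bits and can therefore implement any Boolean function of its k inputs, from AND and XOR up to arbitrary combinational functions.

```python
class LUT:
    """Model of a k-input FPGA lookup table: a 2**k-entry truth table."""
    def __init__(self, truth_table):
        self.table = truth_table             # one output bit per input combination

    def __call__(self, *inputs):
        index = 0
        for bit in inputs:                   # pack the input bits into a table index
            index = (index << 1) | bit
        return self.table[index]

# The same hardware implements different gates purely by changing its contents:
and3 = LUT([0, 0, 0, 0, 0, 0, 0, 1])        # 3-input AND
xor3 = LUT([0, 1, 1, 0, 1, 0, 0, 1])        # 3-input XOR (odd parity)
print(and3(1, 1, 1), xor3(1, 0, 1))         # prints: 1 0
```

The XC2064 mentioned below used exactly this scheme, with two three-input lookup tables per logic block.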
Contemporary field-programmable gate arrays have large resources of logic gates and RAM blocks to implement complex digital computations. As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time. Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function that an ASIC could perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design, and the low non-recurring engineering costs relative to an ASIC design offer advantages for many applications.

Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on heavily loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on lightly loaded pins on high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillators, on-chip resistance-capacitance oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management and for high-speed serializer-deserializer transmit clocks and receiver clock recovery.
Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few "mixed signal FPGAs" have integrated peripheral analog-to-digital converters and digital-to-analog converters with analog signal conditioning blocks, allowing them to operate as a system-on-a-chip. Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array, which carries analog values on its internal programmable interconnect fabric.

The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in the field; however, their programmable logic was hard-wired between logic gates. Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.
In December 2015, Intel acquired Altera. Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 had 64 configurable logic blocks, each with two three-input lookup tables. More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention. In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful, and a patent related to the system was issued in 1992. Altera and Xilinx continued unchallenged and grew rapidly from 1985 to the mid-1990s, when competitors sprouted up, eroding significant market share. By 1993, Actel was serving about 18 percent of the market. By 2013, Altera and Xilinx together represented 77 percent of the FPGA market.
The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer and industrial applications. A recent trend has been to take the coarse-grained architectural approach a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete "system on a programmable chip". This work mirrors the architecture created by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group in 1982, which combined a reconfigurable CPU architecture on a single chip called the SB24. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric, or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore.
The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core and analog peripherals, such as a multi-channel analog-to-digital converter and digital-to-analog converters, in their flash-based FPGA fabric.