A multi-core processor is a single computing component with two or more independent processing units called cores, which read and execute program instructions. The instructions are ordinary CPU instructions, but the single processor can run multiple instructions on separate cores at the same time, increasing overall speed for programs amenable to parallel computing. Manufacturers integrate the cores onto a single integrated circuit die or onto multiple dies in a single chip package; the microprocessors used in virtually all personal computers are multi-core. A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores. Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, vector processing, or multithreading.
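As an illustration of the kind of workload that benefits from multiple cores, here is a minimal Python sketch (illustrative only, not from the original text; the function and variable names are made up) that divides independent work across the available cores using the standard multiprocessing module:

```python
# Illustrative sketch: splitting an embarrassingly parallel workload
# across cores with Python's standard library.
from multiprocessing import Pool, cpu_count

def work(chunk):
    # Independent per-core work: sum of squares over one slice of the input.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    cores = cpu_count()
    # One slice per core; each slice is handled by a separate OS process,
    # so the pieces can run on separate cores at the same time.
    step = len(data) // cores + 1
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool(processes=cores) as pool:
        total = sum(pool.map(work, chunks))
    print(f"{cores} cores, result = {total}")
```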
Multi-core processors are used across many application domains, including general-purpose, network, digital signal processing, and graphics. The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can run in parallel on multiple cores. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache, avoiding the use of much slower main-system memory. Most applications, however, are not accelerated as much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem; the parallelization of software is a significant ongoing topic of research. The terms multi-core and dual-core most commonly refer to some sort of central processing unit, but are sometimes also applied to digital signal processors and systems on a chip.
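The limit imposed by the serial fraction of a program is commonly formalized as Amdahl's law; the text above alludes to this limit without naming it. A short sketch of the standard formula:

```python
# Amdahl's law: theoretical speedup of a program of which a fraction p
# can be parallelized, when run on n cores. (Standard formula; the text
# above describes this limit without stating it explicitly.)
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, the serial 5% dominates quickly:
for n in (2, 4, 8, 64):
    print(n, round(amdahl_speedup(0.95, n), 2))
# 2 -> 1.9, 4 -> 3.48, 8 -> 5.93, 64 -> 15.42
```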
The terms multi-core and dual-core are generally used to refer only to multi-core microprocessors that are manufactured on the same integrated circuit die. This article uses the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit, unless otherwise noted. In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing units; the terms many-core and massively multi-core are sometimes used to describe multi-core architectures with a high number of cores. Some systems use many soft microprocessor cores placed on a single FPGA; each "core" can be considered a "semiconductor intellectual property core" as well as a CPU core. As manufacturing technology improves, reducing the size of individual gates, physical limits of semiconductor-based microelectronics have become a major design concern; these physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance; some instruction-level parallelism (ILP) methods such as superscalar pipelining are suitable for many applications, but are inefficient for others that contain difficult-to-predict code.
Many applications are better suited to thread-level parallelism (TLP) methods, and multiple independent CPUs are commonly used to increase a system's overall TLP. A combination of increased available space and the demand for increased TLP led to the development of multi-core CPUs. Several business motives drive the development of multi-core architectures. For decades, it was possible to improve the performance of a CPU by shrinking the area of the integrated circuit, which reduced the cost per device on the IC. Alternatively, for the same circuit area, more transistors could be used in the design, which increased functionality, especially for complex instruction set computing (CISC) architectures. Clock rates also increased by orders of magnitude in the decades of the late 20th century, from several megahertz in the 1980s to several gigahertz in the early 2000s. As the rate of clock speed improvements slowed, increased use of parallel computing in the form of multi-core processors has been pursued to improve overall processing performance.
Multiple cores can be placed on the same CPU chip, which can lead to better sales of CPU chips with two or more cores. For example, Intel has produced a 48-core processor for research in cloud computing. Since computer manufacturers have long implemented symmetric multiprocessing (SMP) designs using discrete CPUs, the issues regarding implementing multi-core processor architecture and supporting it with software are well known. Additionally, using a proven processing-core design without architectural changes reduces design risk significantly. For general-purpose processors, much of the motivation for multi-core processors comes from diminished gains in processor performance from increasing the operating frequency; this is due to three primary factors: the memory wall, the ILP wall, and the power wall.
Dynamic random-access memory
Dynamic random-access memory (DRAM) is a type of random-access semiconductor memory that stores each bit of data in a separate tiny capacitor within an integrated circuit. The capacitor can be either charged or discharged; these two states represent the two values of a bit. The electric charge on the capacitors leaks off, so without intervention the data on the chip would soon be lost. To prevent this, DRAM requires an external memory refresh circuit which periodically rewrites the data in the capacitors, restoring them to their original charge; this refresh process is the defining characteristic of dynamic random-access memory, in contrast to static random-access memory (SRAM), which does not require data to be refreshed. Unlike flash memory, DRAM is volatile memory, since it loses its data when power is removed. However, DRAM does exhibit limited data remanence. DRAM is used in digital electronics where low-cost and high-capacity memory is required. One of the largest applications for DRAM is the main memory in modern computers and graphics cards; it is also used in many portable devices and video game consoles.
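To make the refresh requirement concrete, here is a toy Python model (purely illustrative; the leak rate, read threshold, and refresh interval are assumptions, not device figures) of a single cell losing charge and being periodically rewritten:

```python
# Toy model of why DRAM needs refresh: each cell is a capacitor whose
# charge decays over time; a refresh rewrites full charge before the
# stored value becomes unreadable.
LEAK_PER_MS = 0.90       # assumed: cell keeps 90% of its charge per millisecond
READ_THRESHOLD = 0.5     # assumed: charge above this still reads as a 1
REFRESH_INTERVAL_MS = 4  # assumed refresh period for this toy model

def simulate(cell_charge: float, total_ms: int, refresh: bool) -> float:
    for t in range(1, total_ms + 1):
        cell_charge *= LEAK_PER_MS
        if refresh and t % REFRESH_INTERVAL_MS == 0 and cell_charge > READ_THRESHOLD:
            cell_charge = 1.0  # rewrite the bit, restoring full charge
    return cell_charge

print(simulate(1.0, 20, refresh=False))  # decays to ~0.12: the bit is lost
print(simulate(1.0, 20, refresh=True))   # stays readable thanks to refresh
```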
In contrast, SRAM, which is faster and more expensive than DRAM, is used where speed is of greater concern than cost and size, such as the cache memories in processors. Due to its need of a system to perform refreshing, DRAM has more complicated circuitry and timing requirements than SRAM, but it is much more widely used. The advantage of DRAM is the structural simplicity of its memory cells: only one transistor and one capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities, making DRAM much cheaper per bit, and the transistors and capacitors used are extremely small. Due to the dynamic nature of its memory cells, DRAM consumes relatively large amounts of power, and there are various ways of managing the power consumption. DRAM had a 47% increase in price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, although in recent years the price has been going down. The cryptanalytic machine code-named "Aquarius" used at Bletchley Park during World War II incorporated a hard-wired dynamic memory.
Paper tape was read and the characters on it "were remembered in a dynamic store.... The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing cross and an uncharged capacitor dot. Since the charge leaked away, a periodic pulse was applied to top up those still charged". In 1964, Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell using a transistor gate and tunnel diode latch; they later replaced the latch with two transistors and two resistors, a configuration that became known as the Farber-Schlig cell. In 1965, Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber-Schlig cell, with 80 transistors, 64 resistors, and 4 diodes. In 1966, DRAM was invented by Dr. Robert Dennard at the IBM Thomas J. Watson Research Center; he was granted U.S. patent number 3,387,286 in 1968. Capacitors had also been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube.
The Toshiba "Toscal" BC-1411 electronic calculator, introduced in November 1966, used a form of DRAM built from discrete components. The first DRAM was introduced in 1969 by Advanced Memory system, Inc of Sunnyvale, CA; this 1000 bit chip was sold to Honeywell, Wang Computer, others. In 1969 Honeywell asked Intel to make a DRAM using a three-transistor cell; this became the Intel 1102 in early 1970. However, the 1102 had many problems, prompting Intel to begin work on their own improved design, in secrecy to avoid conflict with Honeywell; this became the first commercially available DRAM, the Intel 1103, in October 1970, despite initial problems with low yield until the fifth revision of the masks. The 1103 was laid out by Pat Earhart; the masks were cut by Judy Garcia. The first DRAM with multiplexed row and column address lines was the Mostek MK4096 4 Kbit DRAM designed by Robert Proebsting and introduced in 1973; this addressing scheme uses the same address pins to receive the low half and the high half of the address of the memory cell being referenced, switching between the two halves on alternating bus cycles.
This was a radical advance, effectively halving the number of address lines required, which enabled it to fit into packages with fewer pins, a cost advantage that grew with every jump in memory size. The MK4096 proved to be a robust design for customer applications. At the 16 Kbit density, the cost advantage increased. However, as density increased to 64 Kbit in the early 1980s, Mostek and other US manufacturers were overtaken by Japanese DRAM manufacturers dumping DRAMs on the US market. DRAM is usually arranged in a rectangular array of charge storage cells, each consisting of one capacitor and one transistor per data bit; a simple example would be a four-by-four cell matrix, while some DRAM matrices are many thousands of cells in width. The long horizontal lines connecting each row are known as word-lines. Each column of cells is composed of two bit-lines, each connected to every other storage cell in the column; they are known as the "+" and "−" bit lines. A sense amplifier is essentially a pair of cross-connected inverters between the bit-lines.
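To illustrate the multiplexed addressing scheme described above, here is a minimal Python sketch (illustrative only; the 12-bit pin width is an assumption, and the real RAS/CAS strobe timing is not modeled) of how one set of address pins carries both halves of a cell address:

```python
# Sketch of multiplexed row/column addressing: the full cell address is
# driven onto the same pins in two halves on alternating bus cycles.
ADDR_PINS = 12  # assumed pin count for illustration; real width varies by device

def split_address(addr: int) -> tuple[int, int]:
    """Return the (row, column) halves placed on the shared address pins."""
    mask = (1 << ADDR_PINS) - 1
    row = (addr >> ADDR_PINS) & mask   # driven first (latched by RAS)
    col = addr & mask                  # driven second (latched by CAS)
    return row, col

row, col = split_address(0x5A3C7)
print(f"row={row:#x} col={col:#x}")  # 24 address bits carried on only 12 pins
```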
Double Data Rate Synchronous Dynamic Random-Access Memory, abbreviated as DDR SDRAM, is a double data rate synchronous dynamic random-access memory class of memory integrated circuits used in computers. DDR SDRAM, retroactively called DDR1 SDRAM, has been superseded by DDR2 SDRAM, DDR3 SDRAM and DDR4 SDRAM. None of its successors are forward or backward compatible with DDR1 SDRAM, meaning DDR2, DDR3, and DDR4 memory modules will not work in DDR1-equipped motherboards, and vice versa. Compared to single data rate (SDR) SDRAM, the DDR SDRAM interface makes higher transfer rates possible by stricter control of the timing of the electrical data and clock signals. Implementations have to use schemes such as phase-locked loops and self-calibration to reach the required timing accuracy; the interface uses double pumping (transferring data on both the rising and falling edges of the clock signal) to double data bus bandwidth without a corresponding increase in clock frequency. One advantage of keeping the clock frequency down is that it reduces the signal integrity requirements on the circuit board connecting the memory to the controller.
The name "double data rate" refers to the fact that a DDR SDRAM with a certain clock frequency achieves nearly twice the bandwidth of a SDR SDRAM running at the same clock frequency, due to this double pumping. With data being transferred 64 bits at a time, DDR SDRAM gives a transfer rate of × 2 × 64 / 8. Thus, with a bus frequency of 100 MHz, DDR SDRAM gives a maximum transfer rate of 1600 MB/s. "Beginning in 1996 and concluding in June 2000, JEDEC developed the DDR SDRAM specification." JEDEC has set standards for data rates of DDR SDRAM, divided into two parts. The first specification is for memory chips, the second is for memory modules. Note: All above listed are specified by JEDEC as JESD79F. All RAM data rates in-between or above these listed specifications are not standardized by JEDEC—often they are manufacturer optimizations using tighter-tolerance or overvolted chips; the package sizes in which DDR SDRAM is manufactured are standardized by JEDEC. There is no architectural difference between DDR SDRAM designed for different clock frequencies, for example, PC-1600, designed to run at 100 MHz, PC-2100, designed to run at 133 MHz.
The number designates the data rate at which the chip is guaranteed to perform, hence DDR SDRAM is guaranteed to run at lower, and may possibly run at higher, clock rates than those for which it was made. DDR SDRAM modules for desktop computers, dual in-line memory modules (DIMMs), have 184 pins and can be differentiated from SDRAM DIMMs by the number of notches. DDR SDRAM modules for notebook computers, SO-DIMMs, have 200 pins, the same number of pins as DDR2 SO-DIMMs; these two specifications are notched very similarly, and care must be taken during insertion if unsure of a correct match. Most DDR SDRAM operates at a voltage of 2.5 V, compared to 3.3 V for SDRAM, which can significantly reduce power consumption. Chips and modules with the DDR-400/PC-3200 standard have a nominal voltage of 2.6 V. JEDEC Standard No. 21-C defines three possible operating voltages for 184-pin DDR, as identified by the key notch position relative to its centreline. Page 4.5.10-7 defines 2.5 V, 1.8 V, and TBD, while page 4.20.5-40 nominates 3.3 V for the right notch position.
The orientation of the module for determining the key notch position is with 52 contact positions to the left and 40 contact positions to the right. Increasing operating voltage can increase maximum speed, at the cost of higher power dissipation and heating, and at the risk of malfunctioning or damage. Many new chipsets use these memory types in multi-channel configurations. DRAM density: the size of the chip is measured in megabits. Most motherboards recognize only 1 GB modules if they contain 64M×8 chips; if 128M×4 1 GB modules are used, they most likely will not work. The JEDEC standard allows 128M×4 only for slower buffered/registered modules designed for some servers, but some generic manufacturers do not comply. Organization: the notation 64M×4 means that the memory matrix has 64 million 4-bit storage locations. There are ×4, ×8, and ×16 DDR chips; the ×4 chips allow the use of advanced error correction features like Chipkill, memory scrubbing and Intel SDDC in server environments, while the ×8 and ×16 chips are somewhat less expensive.
×8 chips are mostly used in desktops and notebooks but are making entry into the server market. There are 4 banks, and only one row can be active in each bank. Ranks: to increase memory capacity and bandwidth, chips are combined on a module. For instance, the 64-bit data bus for a DIMM requires eight 8-bit chips, addressed in parallel. Multiple chips with common address lines are called a memory rank; the term was introduced to avoid confusion with chip-internal banks. A memory module may bear more than one rank; the term "sides" would be confusing because it incorrectly suggests the physical placement of chips on the module. All ranks are connected to the same memory bus; the Chip Select signal is used to issue commands to a specific rank. Adding modules to the single memory bus creates additional electrical load on its drivers. To mitigate the resulting bus signaling rate drop and overcome the memory bottleneck, new chipsets employ the multi-channel architecture. Capacity: the number of DRAM devices (chips) is a multiple of 8 for non-ECC modules and a multiple of 9 for ECC modules.
Chips can occupy one side or both sides of the module.
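As a worked example tying organization, ranks, and capacity together, here is a short Python sketch (illustrative; the helper name and the single-/dual-rank module figures are examples, not specifications from the text):

```python
# Module capacity from chip organization (e.g. 64M x 8 means 64 million
# 8-bit storage locations per chip) and the number of ranks.
def module_capacity_bytes(locations: int, width: int, chips_per_rank: int,
                          ranks: int) -> int:
    # Each rank must span the 64-bit data bus: chips_per_rank x width == 64.
    assert chips_per_rank * width == 64, "non-ECC rank must cover 64 data bits"
    return locations * width * chips_per_rank * ranks // 8

# A single-rank module of eight 64M x 8 chips: 64 Mi x 8 x 8 / 8 = 512 MiB.
print(module_capacity_bytes(64 * 2**20, 8, 8, 1) // 2**20, "MiB")
# A dual-rank module of sixteen such chips doubles that to 1024 MiB.
print(module_capacity_bytes(64 * 2**20, 8, 8, 2) // 2**20, "MiB")
```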
The CDC 6600 was the flagship of the 6000 series of mainframe computer systems manufactured by Control Data Corporation. Generally considered to be the first successful supercomputer, it outperformed the industry's prior record holder, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600. The first CDC 6600s were delivered in 1965 to Los Alamos. They quickly became a must-have system in scientific and mathematical computing circles, with systems being delivered to the Courant Institute of Mathematical Sciences, CERN, the Lawrence Radiation Laboratory, and many others. 50 were delivered in total. A CDC 6600 is on display at the Computer History Museum in California; the only running CDC 6000 series machine has been restored by Living Computers: Museum + Labs. CDC's first products were based on the machines designed at ERA, which Seymour Cray had been asked to update after moving to CDC.
After an experimental machine known as the Little Character, in 1960 they delivered the CDC 1604, one of the first commercial transistor-based computers and one of the fastest machines on the market. Management was delighted and made plans for a new series of machines that were more tailored to business use. Cray was not interested in such a project, and instead set himself the goal of producing a new machine that would be 50 times faster than the 1604. When asked to complete a detailed report on plans at one and five years into the future, he wrote back that his five-year goal was "to produce the largest computer in the world" ("largest" at that time being synonymous with "fastest"), and that his one-year plan was "to be one-fifth of the way". Taking his core team to new offices near the original CDC headquarters, they started to experiment with higher-quality versions of the "cheap" transistors Cray had used in the 1604. After much experimentation, they found that there was no way the germanium-based transistors could be run much faster than those used in the 1604.
The "business machine" that management had wanted, now forming as the CDC 3000 series, pushed them about as far as they could go. Cray decided the solution was to work with the then-new silicon-based transistors from Fairchild Semiconductor, which were just coming onto the market and offered improved switching performance. During this period, CDC grew from a startup to a large company and Cray became frustrated with what he saw as ridiculous management requirements. Things became more tense in 1962 when the new CDC 3600 started to near production quality, appeared to be what management wanted, when they wanted it. Cray told CDC's CEO, William Norris that something had to change, or he would leave the company. Norris felt he was too important to lose, gave Cray the green light to set up a new laboratory wherever he wanted. After a short search, Cray decided to return to his home town of Chippewa Falls, where he purchased a block of land and started up a new laboratory. Although this process introduced a lengthy delay in the design of his new machine, once in the new laboratory, without management interference, things started to progress quickly.
By this time, the new transistors were becoming quite reliable, and modules built with them tended to work properly on the first try. The 6600 began to take form, with Cray working alongside Jim Thornton, system architect and "hidden genius" of the 6600. More than 100 CDC 6600s were sold over the machine's lifetime. Many of these went to various nuclear weapon-related laboratories, and quite a few found their way into university computing laboratories. Cray immediately turned his attention to its replacement, this time setting a goal of ten times the performance of the 6600, delivered as the CDC 7600. The later CDC Cyber 70 and 170 computers were similar to the CDC 6600 in overall design and were nearly backwards compatible. The 6600 was three times faster than the IBM 7030 Stretch; IBM's then-CEO Thomas Watson Jr. wrote a memo to his employees: "Last week, Control Data... announced the 6600 system. I understand that in the laboratory developing the system there are only 34 people including the janitor. Of these, 14 are engineers and 4 are programmers...
Contrasting this modest effort with our vast development activities, I fail to understand why we have lost our industry leadership position by letting someone else offer the world's most powerful computer." Cray's reply was sardonic: "It seems like Mr. Watson has answered his own question." Typical machines of the era used a single CPU to drive the entire system. A typical program would first load data into memory, process it, and then write it back out; this required the CPUs to be complex in order to handle the complete set of instructions they would be called on to perform. A complex CPU implied a large CPU, introducing signalling delays while information flowed between the individual modules making it up. These delays set a maximum upper limit on performance, as the machine could only operate at a cycle speed that allowed the signals time to arrive at the next module. Cray took another approach. At the time, CPUs generally ran slower than the main memory to which they were attached. For instance, a processor might take 15 cycles to multiply two numbers, while each memory access took only one or two.
This meant that the memory sat idle for much of the time while the CPU worked through an instruction. It was this idle time that the 6600 exploited: the CDC 6600 used a simplified central processor that was designed to run mathematical and logical operations as rapidly as possible, leaving input/output and housekeeping tasks to a set of simpler peripheral processors.
The AMD Socket C32 is the server processor socket for AMD's current single-CPU and dual-CPU Opteron 4000 series CPUs. It is the successor to Socket AM3 for single-CPU servers and to Socket F for lower-end dual-CPU servers. Socket C32 supports two DDR3 SDRAM channels. It is based on Socket F and uses a similar 1207-land LGA socket, but is not physically or electrically compatible with Socket F due to the use of DDR3 SDRAM instead of the DDR2 SDRAM that Socket F platforms use. Socket C32 was launched June 23, 2010 as part of the San Marino platform with the four- and six-core Opteron 4100 "Lisbon" processors. Socket C32 also supports the Bulldozer-based six- and eight-core "Valencia" Opterons introduced in November 2011. Like Socket G34, it uses the AMD SR5690, SR5670 and SR5650 chipsets. Socket C32 is also being used in the ultra-low-power Adelaide platform with the SR5650 chipset and HT1 interconnects instead of HT3.1.
LGA 2011, also called Socket R, is a CPU socket by Intel. Released on November 14, 2011, it replaces Intel's LGA 1366 and LGA 1567 in the performance and high-end desktop and server platforms. The socket has 2011 protruding pins. The LGA 2011 socket uses QPI (QuickPath Interconnect) to connect the CPU to additional CPUs, while DMI 2.0 is used to connect the processor to the PCH. The memory controller and 40 PCI Express lanes are integrated on the CPU. On a secondary processor, an extra ×4 PCIe interface replaces the DMI interface. As with its predecessor LGA 1366, there is no provision for integrated graphics. This socket supports four DDR3 or DDR4 SDRAM memory channels with up to three unbuffered or registered DIMMs per channel, as well as up to 40 PCI Express 2.0 or 3.0 lanes. LGA 2011 was designed to ensure platform scalability beyond eight cores and 20 MB of cache. The LGA 2011 socket is used by Sandy Bridge-E/EP and Ivy Bridge-E/EP processors with the corresponding X79 and C600-series chipsets. LGA 2011-1, an updated generation of the socket and the successor of LGA 1567, is used for Ivy Bridge-EX and Haswell-EX CPUs, which were released in February 2014 and May 2015, respectively.
LGA 2011-v3 is another updated generation of the socket, used for Haswell-E and Haswell-EP CPUs, which were released in August and September 2014, respectively. Updated socket generations are physically similar to LGA 2011, but different electrical signals and ILM keying prevent backward compatibility with older CPUs. Intel CPU sockets use the so-called Independent Loading Mechanism (ILM) retention device to apply the specific amount of uniform pressure required to hold the CPU against the socket interface. As part of their design, ILMs have differently placed protrusions which are intended to mate with cutouts in CPU packages. These protrusions, known as ILM keying, serve to prevent the installation of incompatible CPUs into otherwise physically compatible sockets, and also to prevent an ILM from being mounted with a 180-degree rotation relative to the CPU socket. Different variants of the LGA 2011 socket and associated CPUs come with different ILM keying, which makes it possible to install CPUs only into generation-matching sockets.
CPUs that are intended to be mounted into LGA 2011-0, LGA 2011-1 or LGA 2011-v3 sockets are all mechanically compatible regarding their dimensions and ball pattern pitches, but the designations of contacts differ between generations of the LGA 2011 socket and CPUs, which makes them electrically and logically incompatible. As noted above, the original LGA 2011 socket is used for Sandy Bridge-E/EP and Ivy Bridge-E/EP processors, LGA 2011-1 for Ivy Bridge-EX and Haswell-EX CPUs, and LGA 2011-v3 for Haswell-E and Haswell-EP CPUs. Two types of ILM exist, with different shapes and heatsink mounting hole patterns, both with M4 × 0.7 threads: square ILM and narrow ILM. Square ILM is the standard type, while the narrow one is alternatively available for space-constrained applications. A matching heatsink is required for each ILM type. Information for the Intel X79 and C600 series chipsets is in the table below.
The Romley platform was delayed one quarter due to a SAS controller bug. The X79 appears to contain the same silicon as the C600 series, with ECS having enabled the SAS controller for one of their boards, though SAS is not supported by Intel for X79. Desktop processors for the LGA 2011 and LGA 2011-v3 sockets are listed in the table below. All models support: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, Enhanced Intel SpeedStep Technology, Intel 64, XD bit, TXT, Intel VT-x, Intel VT-d, Turbo Boost, AES-NI, Smart Cache and Hyper-threading, except the C1 stepping models, which lack VT-d. Sandy Bridge-E, Ivy Bridge-E and Haswell-E processors are not bundled with standard air-cooled CPU coolers. Intel offers a standard CPU cooler and a liquid-cooled CPU cooler, which are both sold separately. Sandy Bridge-E and Ivy Bridge-E processors are compatible with the Intel X79 chipset, while Haswell-E and Broadwell-E processors are compatible with the Intel X99 chipset. The X79 chipset allows for increasing the base clock, which Intel calls the CPU Strap, by 1.00×, 1.25×, 1.66× or 2.50×.
The CPU frequency is derived from the BCLK multiplied by the CPU multiplier. Server processors for the LGA 2011 socket are listed in the table below. All models support: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, Enhanced Intel SpeedStep Technology, Intel 64, XD bit, TXT, Intel VT-x, Intel VT-d, AES-NI and Smart Cache. Not all models support Hyper-threading and Turbo Boost. Server processors for the LGA 2011-v3 socket are listed in the tables below; as one of the significant changes from the previous generation, they support DDR4 memory.
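As a quick illustration of the relationship described above (a minimal sketch; the 32× multiplier in the example is hypothetical, not a figure from the text):

```python
# Effective CPU frequency from the base clock, the X79 "CPU Strap"
# setting, and the CPU multiplier, per the text above.
BASE_BCLK_MHZ = 100.0               # nominal LGA 2011 base clock
STRAPS = (1.00, 1.25, 1.66, 2.50)   # strap ratios the X79 chipset allows

def cpu_freq_mhz(strap: float, multiplier: int, bclk: float = BASE_BCLK_MHZ) -> float:
    # CPU frequency = BCLK (scaled by the strap) x CPU multiplier.
    return bclk * strap * multiplier

# Example (hypothetical settings): a 32x multiplier at the 1.25x strap.
print(cpu_freq_mhz(STRAPS[1], 32))  # 4000.0 MHz
```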
LGA 1156, also known as Socket H or H1, is an Intel desktop CPU socket. LGA stands for land grid array. Its incompatible successor is LGA 1155. The last processors supporting it ceased production in 2011. LGA 1156, along with LGA 1366, was designed to replace LGA 775. Whereas LGA 775 processors connect to a northbridge using the Front Side Bus, LGA 1156 processors integrate the features traditionally located on a northbridge within the processor itself. The LGA 1156 socket allows the following connections to be made from the processor to the rest of the system: PCI Express 2.0 ×16 for communication with a graphics card (some processors allow this connection to be divided into two ×8 links to connect two graphics cards, and some motherboard manufacturers use Nvidia's NF200 chip to allow even more graphics cards to be used); DMI for communication with the Platform Controller Hub (PCH), consisting of a PCI Express 2.0 ×4 connection; FDI for communication with the PCH, consisting of two DisplayPort connections; and two memory channels for communication with DDR3 SDRAM.
The supported memory clock speed depends on the processor. The LGA 1156 and LGA 1366 sockets and processors were discontinued sometime in 2012, having been superseded by LGA 1155 and LGA 2011, respectively. For LGA 1156, the 4 holes for fastening the heatsink to the motherboard are placed in a square with a lateral length of 75 mm; this configuration was retained for the LGA 1155, LGA 1150 and LGA 1151 sockets, meaning that cooling solutions should be interchangeable. All LGA 1156 processors and motherboards made to date are interoperable, making it possible to switch between a Celeron, Core i3 or Core i5 with integrated graphics and a Core i5 or Core i7 without graphics. However, using a chip with integrated graphics on a P55 motherboard will not allow use of the on-chip graphics processor, and using a chip without integrated graphics on an H55, H57 or Q57 motherboard will not allow use of the motherboard's graphics ports. The desktop chipsets that support LGA 1156 are Intel's H55, H57, P55 and Q57.
Server chipsets supporting the socket are Intel's 3400, 3420 and 3450.