A multi-core processor is a single computing component with two or more independent processing units, called cores, which read and execute program instructions. The instructions are ordinary CPU instructions, but the single processor can run multiple instructions on separate cores at the same time, increasing overall speed for programs amenable to parallel computing. Manufacturers integrate the cores onto a single integrated circuit die or onto multiple dies in a single chip package; the microprocessors used in nearly all modern personal computers are multi-core. A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely: for example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies for interconnecting cores include bus, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems combine cores that are not identical. Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, vector processing, or multithreading.
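The shared-memory and message-passing communication styles mentioned above can be sketched in software. Below is a minimal, illustrative message-passing example (the worker roles and payloads are made up for illustration): instead of both workers mutating shared state, one sends values to the other over a queue.

```python
# Message passing between two workers: the producer sends values over a
# queue and the consumer receives them, so no shared state is mutated by
# both sides at once.
import queue
import threading

def producer(out_q):
    for i in range(5):
        out_q.put(i)      # send a message to the other worker
    out_q.put(None)       # sentinel: no more messages

def consumer(in_q, results):
    while True:
        msg = in_q.get()
        if msg is None:   # sentinel received, stop
            break
        results.append(msg * 2)

q = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(q,))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 2, 4, 6, 8]
```

Because the queue is the only channel between the two threads, the same structure maps naturally onto cores that communicate by message passing rather than through a shared cache.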
Multi-core processors are used across many application domains, including general-purpose, network, digital signal processing, and graphics. The improvement in performance gained by the use of a multi-core processor depends greatly on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can run in parallel on multiple cores. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache, avoiding use of much slower main-system memory. Most applications, however, are not accelerated as much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem; the parallelization of software is a significant ongoing topic of research. The terms multi-core and dual-core most often refer to some sort of central processing unit, but are sometimes also applied to digital signal processors and systems on a chip.
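The limit imposed by the non-parallelizable fraction of a program is commonly formalized as Amdahl's law. A small sketch (the function name is illustrative):

```python
# Amdahl's law: the speedup obtainable from n cores is limited by the
# serial (non-parallelizable) fraction of the program.
def amdahl_speedup(parallel_fraction, cores):
    """Theoretical speedup for a program whose parallelizable share is
    parallel_fraction (between 0 and 1) when run on `cores` cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A fully parallel ("embarrassingly parallel") problem scales with core count:
print(amdahl_speedup(1.0, 8))                 # 8.0
# But even 5% serial work caps the gain well below 8x on 8 cores:
print(round(amdahl_speedup(0.95, 8), 2))      # 5.93
```

Note how quickly the serial fraction dominates: with 5% serial work, even infinitely many cores could never exceed a 20x speedup.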
These terms are generally used only to refer to multi-core microprocessors that are manufactured on the same integrated circuit die. This article uses the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit, unless otherwise noted. In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing units; the terms many-core and massively multi-core are sometimes used to describe multi-core architectures with an especially high number of cores. Some systems use many soft microprocessor cores placed on a single FPGA; each "core" can be considered a semiconductor intellectual property core as well as a CPU core. While manufacturing technology improves, reducing the size of individual gates, physical limits of semiconductor-based microelectronics have become a major design concern; these physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance; some instruction-level parallelism (ILP) methods such as superscalar pipelining are suitable for many applications, but are inefficient for others that contain difficult-to-predict code.
Many applications are better suited to thread-level parallelism (TLP) methods, and multiple independent CPUs are commonly used to increase a system's overall TLP. A combination of increased available space and the demand for increased TLP led to the development of multi-core CPUs. Several business motives drive the development of multi-core architectures. For decades, it was possible to improve the performance of a CPU by shrinking the area of the integrated circuit, which reduced the cost per device on the IC. Alternatively, for the same circuit area, more transistors could be used in the design, which increased functionality, especially for complex instruction set computing (CISC) architectures. Clock rates also increased by orders of magnitude in the closing decades of the 20th century, from several megahertz in the 1980s to several gigahertz in the early 2000s; as the rate of clock-speed improvement slowed, increased use of parallel computing in the form of multi-core processors has been pursued to improve overall processing performance.
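Thread-level parallelism of the kind described above can be sketched as follows: split the work into independent chunks, give each chunk to its own worker, then combine the partial results. This is an illustrative toy (the helper name and chunking scheme are assumptions; in CPython, CPU-bound work would normally use processes rather than threads to get real speedup past the global interpreter lock):

```python
# Thread-level parallelism sketch: each worker sums one disjoint slice of
# the data, and the partial sums are combined at the end.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    # Divide the data into roughly equal slices, one per worker.
    chunk = max(1, (len(data) + workers - 1) // workers)
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, slices))  # one slice per worker
    return sum(partials)

print(parallel_sum(list(range(1000))))  # 499500
```

The structure, not the thread library, is the point: the slices are independent, so each could in principle run on a separate core with no communication until the final combine step.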
Multiple cores can be used on the same CPU chip, which could lead to better sales of CPU chips with two or more cores. For example, Intel has produced a 48-core processor for research in cloud computing. Since computer manufacturers have long implemented symmetric multiprocessing (SMP) designs using discrete CPUs, the issues regarding implementing multi-core processor architecture and supporting it with software are well known. Additionally, using a proven processing-core design without architectural changes reduces design risk significantly. For general-purpose processors, much of the motivation for multi-core processors comes from diminished gains in processor performance from increasing the operating frequency; this is due to three primary factors.
The ARM Cortex-A7 MPCore is a 32-bit microprocessor core licensed by ARM Holdings, implementing the ARMv7-A architecture, announced in 2011. It has two target applications: one as a smaller, more power-efficient standalone applications processor; the other is in the big.LITTLE architecture, combining one or more A7 cores with one or more Cortex-A15 cores into a heterogeneous system. To allow this, it is feature-compatible with the A15. Key features of the Cortex-A7 core include:

- Partial dual-issue, in-order microarchitecture with an 8-stage pipeline
- NEON SIMD instruction set extension
- VFPv4 floating-point unit
- Thumb-2 instruction set encoding
- Jazelle RCT
- Hardware virtualization
- Large Physical Address Extension (LPAE)
- Integrated level 2 cache
- 1.9 DMIPS/MHz
- Typical clock speed of 1.5 GHz

Several systems-on-chip have implemented the Cortex-A7 core, including:

- Allwinner A20, A31, A83T and H3
- Broadcom BCM23550, a quad-core HSPA+ multimedia processor
- Broadcom BCM2836, designed for the Raspberry Pi 2
- NXP Semiconductors QorIQ Layerscape LS1
- Freescale i.MX 6 UltraLite
- HiSilicon K3V3, a big.LITTLE design with dual-core Cortex-A7 and dual-core Cortex-A15, using the ARM Mali-T658 GPU
- Marvell PXA1088
- MediaTek MT6572, MT6582, MT6589 and MT6592
- MStar MSB2531A, a 32-bit ARM Cortex-A7 at 800 MHz
- Qualcomm Snapdragon 200 (MSM8212 and MSM8612) and Snapdragon 400 (MSM8226, MSM8626 and MSM8926)
- Samsung Exynos 5 Octa, a big.LITTLE design with quad-core Cortex-A7 and quad-core Cortex-A15, using the Imagination Technologies PowerVR SGX544MP3 GPU
- Samsung Exynos 5 Octa, a big.LITTLE design with quad-core Cortex-A7 and quad-core Cortex-A15, using the ARM Mali-T628MP6 GPU
In telecommunication, Long-Term Evolution (LTE) is a standard for wireless broadband communication for mobile devices and data terminals, based on the GSM/EDGE and UMTS/HSPA technologies. It increases capacity and speed using a different radio interface together with core-network improvements. The standard is developed by the 3GPP and is specified in its Release 8 document series, with minor enhancements described in Release 9. LTE is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks; because different LTE frequencies and bands are used in different countries, only multi-band phones are able to use LTE in all countries where it is supported. LTE is marketed as 4G LTE and Advanced 4G, but it does not meet the technical criteria of a 4G wireless service, as specified in the 3GPP Release 8 and 9 document series for LTE Advanced; LTE is commonly known as 3.95G. The requirements for 4G were set forth by the ITU-R organization in the IMT-Advanced specification. However, due to marketing pressures and the significant advancements that WiMAX, Evolved High Speed Packet Access (HSPA+) and LTE bring to the original 3G technologies, the ITU decided that LTE together with the aforementioned technologies can be called 4G technologies.
The LTE Advanced standard formally satisfies the ITU-R requirements to be considered IMT-Advanced. To differentiate LTE Advanced and WiMAX-Advanced from current 4G technologies, the ITU has defined them as "True 4G". LTE stands for Long Term Evolution and is a registered trademark owned by ETSI (the European Telecommunications Standards Institute) for the wireless data communications technology, a development of the GSM/UMTS standards. Although the standards body is European, other nations and companies play an active role in the LTE project. The goal of LTE was to increase the capacity and speed of wireless data networks using new DSP techniques and modulations that were developed around the turn of the millennium. A further goal was the redesign and simplification of the network architecture to an IP-based system with reduced transfer latency compared to the 3G architecture. The LTE wireless interface is incompatible with 2G and 3G networks, so it must be operated on a separate radio spectrum. LTE was first proposed in 2004 by Japan's NTT Docomo, with studies on the standard commencing in 2005.
In May 2007, the LTE/SAE Trial Initiative alliance was founded as a global collaboration between vendors and operators with the goal of verifying and promoting the new standard in order to ensure the global introduction of the technology as quickly as possible. The LTE standard was finalized in December 2008, and the first publicly available LTE service was launched by TeliaSonera in Oslo and Stockholm on December 14, 2009, as a data connection with a USB modem. LTE services were then launched by major North American carriers as well: the Samsung SCH-r900 became the world's first LTE mobile phone on September 21, 2010, and the Samsung Galaxy Indulge the world's first LTE smartphone on February 10, 2011, both offered by MetroPCS; the HTC ThunderBolt, offered by Verizon starting on March 17, was the second LTE smartphone to be sold commercially. In Canada, Rogers Wireless was the first to launch an LTE network, on July 7, 2011, offering the Sierra Wireless AirCard 313U USB mobile broadband modem, known as the "LTE Rocket stick", followed by mobile devices from both HTC and Samsung.
CDMA operators had planned to upgrade to the rival standards UMB and WiMAX, but the major CDMA operators announced instead that they intended to migrate to LTE. The next version of LTE is LTE Advanced, standardized in March 2011, with services commencing in 2013. A further evolution, known as LTE Advanced Pro, was approved in 2015. The LTE specification provides downlink peak rates of 300 Mbit/s, uplink peak rates of 75 Mbit/s, and QoS provisions permitting a transfer latency of less than 5 ms in the radio access network. LTE supports multicast and broadcast streams, and scalable carrier bandwidths from 1.4 MHz to 20 MHz, with both frequency-division duplexing (FDD) and time-division duplexing (TDD). The IP-based network architecture, called the Evolved Packet Core (EPC) and designed to replace the GPRS Core Network, supports seamless handovers for both voice and data to cell towers with older network technology such as GSM, UMTS and CDMA2000; the simpler architecture results in lower operating costs.
In 2004, NTT Docomo of Japan proposed LTE as the international standard. In September 2006, Siemens Networks, in collaboration with Nomor Research, showed the first live emulation of an LTE network to the media and investors; as live applications, two users streaming an HDTV video in the downlink and playing an interactive game in the uplink were demonstrated. In February 2007, Ericsson demonstrated LTE with bit rates up to 144 Mbit/s for the first time in the world. In September 2007, NTT Docomo demonstrated LTE data rates of 200 Mbit/s with power levels below 100 mW during the test. In November 2007, Infineon presented the world's first RF transceiver, named SMARTi LTE, supporting LTE functionality in a single-chip RF silicon processed in CMOS. In early 2008, LTE test equipment began shipping from several vendors, and at the Mobile World Congress 2008 in Barcelona, Ericsson demonstrated the world's first end-to-end mobile call enabled by LTE on a small handheld device.
Arm Holdings is a British multinational semiconductor and software design company, owned by SoftBank Group and its Vision Fund. With its headquarters in Cambridgeshire in the United Kingdom, its primary business is the design of ARM processors, although it also designs software development tools under the DS-5, RealView and Keil brands, as well as systems and platforms, system-on-a-chip infrastructure and software; as a holding company, it also holds shares of other companies. It is considered market-dominant for processors in mobile phones and tablet computers, and is one of the best-known "Silicon Fen" companies. Processors based on designs licensed from Arm, or designed by licensees of one of the Arm instruction set architectures, are used in all classes of computing devices, ranging from the world's smallest computer to the processors in some supercomputers on the TOP500 list. Processors designed by Arm or by Arm licensees are used as microcontrollers in embedded systems, including real-time safety systems, biometrics systems, smart TVs and all modern smartwatches, and are used as general-purpose processors in smartphones, laptops, desktops and supercomputers/HPC, e.g. as a CPU "option" in Cray's supercomputers.
Arm's Mali line of graphics processing units is used in laptops, in over 50% of Android tablets by market share, and in some versions of Samsung's smartphones and smartwatches; it is the third most popular GPU in mobile devices. Systems including iPhone smartphones contain many chips, from many different providers, that include one or more licensed Arm cores, in addition to those in the main Arm-based processor. Arm's core designs are also used in chips that support many common network-related technologies in smartphones, such as Bluetooth, WiFi and broadband, in addition to corresponding equipment such as Bluetooth headsets, 802.11ac routers and network providers' cellular LTE equipment. Arm's main CPU competitors in servers include Intel and AMD; in mobile applications, Intel's x86 Atom is a competitor, and AMD sells Arm-based chips as well as x86. Arm's main GPU competitors include mobile GPUs from Imagination Technologies, Nvidia and Intel. Despite competing in GPUs, Qualcomm and Nvidia have both combined their GPUs with an Arm-licensed CPU.
Arm was a constituent of the FTSE 100 Index and had a secondary listing on NASDAQ. However, the Japanese telecommunications company SoftBank Group made an agreed offer for Arm on 18 July 2016, subject to approval by Arm's shareholders, valuing the company at £23.4 billion; the transaction was completed on 5 September 2016. The acronym ARM was first used in 1983 and stood for "Acorn RISC Machine". Acorn Computers' first RISC processor was used in the original Acorn Archimedes and was one of the first RISC processors used in small computers. However, when the company was incorporated in 1990, the acronym was changed to "Advanced RISC Machines", in light of the company's name "Advanced RISC Machines Ltd."; according to an interview with Steve Furber, the name change was at the behest of Apple, which did not wish to have the name of a former competitor, namely Acorn, in the name of the company. At the time of the IPO in 1998, the company name was changed to "ARM Holdings", often just called ARM, like the processors.
On 1 August 2017, the logo was changed. The logo is now all lowercase, and other uses of 'Arm' are in sentence case except where the whole sentence is upper case; so, for instance, it is now 'Arm Holdings'. The company was founded in November 1990 as Advanced RISC Machines Ltd and structured as a joint venture between Acorn Computers, Apple Computer and VLSI Technology. The new company intended to further the development of the Acorn RISC Machine processor, which was used in the Acorn Archimedes and had been selected by Apple for its Newton project. Its first profitable year was 1993, and the company's Silicon Valley and Tokyo offices were opened in 1994. Arm invested in Palmchip Corporation in 1997 to provide system-on-chip platforms and to enter the disk drive market. In 1998, the company changed its name from Advanced RISC Machines Ltd to ARM Ltd. The company was first listed on the London Stock Exchange and NASDAQ in 1998, and by February 1999, Apple's shareholding had fallen to 14.8%. In 2010, Arm joined with IBM, Texas Instruments, Samsung, ST-Ericsson and Freescale Semiconductor in forming a non-profit open source engineering company, Linaro.
Companies Arm has acquired include:

- Micrologic Solutions, a software consulting company based in Cambridge
- Allant Software, a developer of debugging software
- Infinite Designs, a design company based in Sheffield
- EuroMIPS, a smart card design house in Sophia Antipolis, France
- The engineering team of Noral Micrologics, a debug hardware and software company based in Blackburn, England
- Adelante Technologies of Belgium, which became its OptimoDE data engines business, a form of lightweight DSP engine
- Axys Design Automation, a developer of ESL design tools
- Artisan Components, a designer of physical IP, the building blocks of integrated circuits
- KEIL Software, a leading developer of software development tools for the microcontroller market, including the 8051 and C16x platforms
- The engineering team of PowerEscape
- Falanx, a developer of 3D graphics accelerators
Secure Digital, abbreviated as SD, is a non-volatile memory card format developed by the SD Card Association (SDA) for use in portable devices. The standard was introduced in August 1999 by joint efforts between SanDisk, Matsushita (Panasonic) and Toshiba as an improvement over MultiMediaCards (MMC), and has become the industry standard. The three companies formed SD-3C, LLC, a company that licenses and enforces intellectual property rights associated with SD memory cards and SD host and ancillary products. The companies also formed the SD Association, a non-profit organization, in January 2000 to promote and create SD Card standards; the SDA today has about 1,000 member companies. The SDA uses several trademarked logos owned and licensed by SD-3C to enforce compliance with its specifications and to assure users of compatibility. In 1999, the three companies agreed to develop and market the Secure Digital Memory Card. The card was derived from the MultiMediaCard and provided digital rights management based on the Secure Digital Music Initiative standard and, for the time, a high memory density.
It was designed to compete with the Memory Stick, a DRM product that Sony had released the year before. The trademarked SD logo was originally developed for the Super Density Disc, the unsuccessful Toshiba entry in the DVD format war; for this reason the D within the logo resembles an optical disc. At the 2000 Consumer Electronics Show (CES) trade show, the three companies announced the creation of the SD Association to promote SD cards. The SD Association, headquartered in San Ramon, California, United States, started with about 30 companies and today consists of about 1,000 product manufacturers that make interoperable memory cards and devices. Early samples of the SD Card became available in the first quarter of 2000, with production quantities of 32 and 64 MB cards available three months later. The miniSD form factor was announced and demonstrated by SanDisk Corporation at CeBIT in March 2003, and the SDA adopted the miniSD card that year as a small form factor extension to the SD card standard.
While the new cards were designed for mobile phones, they are usually packaged with a miniSD adapter that provides compatibility with a standard SD memory card slot. In September 2006, SanDisk announced the 4 GB miniSDHC. Like the SD and SDHC, the miniSDHC card has the same form factor as the older miniSD card, but the HC card requires HC support built into the host device: devices that support miniSDHC work with both miniSD and miniSDHC, but devices without specific support for miniSDHC work only with the older miniSD card. Since 2008, miniSD cards have no longer been produced. The microSD removable miniaturized Secure Digital flash memory cards were originally named T-Flash or TF, abbreviations of TransFlash; TransFlash and microSD cards are functionally identical, allowing either to operate in devices made for the other. SanDisk had conceived microSD when its chief technology officer and the chief technology officer of Motorola concluded that current memory cards were too large for mobile phones. The card was initially called T-Flash, but just before product launch, T-Mobile sent a cease-and-desist letter to SanDisk claiming that T-Mobile owned the trademark on "T-", so the name was changed to TransFlash.
At CTIA Wireless 2005, the SDA announced the small microSD form factor, along with SDHC (Secure Digital High Capacity) formatting in excess of 2 GB with a minimum sustained read and write speed of 17.6 Mbit/s. SanDisk induced the SDA to administer the microSD standard, and the SDA approved the final microSD specification on July 13, 2005. MicroSD cards were initially available in capacities of 32, 64 and 128 MB; the Motorola E398 was the first mobile phone to contain a TransFlash card. A few years later, its competitors began using microSD cards. The SDHC format, announced in January 2006, brought improvements such as 32 GB storage capacity and mandatory support for FAT32 filesystems. In April 2006, the SDA released a detailed specification for the non-security-related parts of the SD memory card standard and for the Secure Digital Input Output (SDIO) cards and the standard SD host controller. In January 2009, the SDA announced the SDXC family, which supports cards up to 2 TB and speeds up to 300 MB/s; it features mandatory support for the exFAT filesystem.
SDXC was announced at the Consumer Electronics Show (CES) 2009. At the same show, SanDisk and Sony announced a comparable Memory Stick XC variant with the same 2 TB maximum as SDXC, and Panasonic announced plans to produce 64 GB SDXC cards. On March 6, 2009, Pretec introduced the first SDXC card, a 32 GB card with a read/write speed of 400 Mbit/s, but only early in 2010 did compatible host devices come onto the market, including Sony's Handycam HDR-CX55V camcorder, Canon's EOS 550D digital SLR camera, a USB card reader from Panasonic, and an integrated SDXC card reader from JMicron. The earliest laptops to integrate SDXC card readers relied on a USB 2.0 bus, which does not have the bandwidth to support SDXC at full speed. In early 2010, commercial SDXC cards appeared from Toshiba and SanDisk. In early 2011, Centon Electronics, Inc. and Lexar began shipping SDXC cards rated at Speed Class 10, and Pretec offered cards from 8 GB to 128 GB rated at Speed Class 16. In September 2011, SanDisk released a 64 GB microSDXC card; Kingmax released a comparable product the same year.
In April 2012, Panasonic introduced the MicroP2 card format for professional video applications. The cards are full-size SDHC or SDXC UHS-II cards, rated at UHS Speed Class U1; an adapter allows MicroP2 cards to be used in existing P2 card equipment.
System on a chip
A system on a chip (SoC) is an integrated circuit that integrates all components of a computer or other electronic system. These components include a central processing unit, memory, input/output ports and secondary storage, all on a single substrate or microchip the size of a coin. It may contain digital, mixed-signal and radio frequency signal-processing functions, depending on the application. As they are integrated on a single substrate, SoCs consume much less power and take up much less area than multi-chip designs with equivalent functionality; because of this, SoCs are common in the mobile computing and edge computing markets. Systems on chip are also used in embedded systems and the Internet of Things. Systems on chip stand in contrast to the common traditional motherboard-based PC architecture, which separates components based on function and connects them through a central interfacing circuit board. Whereas a motherboard houses and connects detachable or replaceable components, SoCs integrate all of these components into a single integrated circuit, as if all these functions were built into the motherboard.
An SoC will typically integrate a CPU, graphics and memory interfaces, hard-disk and USB connectivity, and random-access and read-only memories and secondary storage on a single circuit die, whereas a motherboard would connect these modules as discrete components or expansion cards. More tightly integrated computer system designs improve performance and reduce power consumption, as well as the semiconductor die area needed for an equivalent design composed of discrete modules, at the cost of reduced replaceability of components. By definition, SoC designs are fully or nearly fully integrated across different component modules. For these reasons, there has been a general trend towards tighter integration of components in the computer hardware industry, in part due to the influence of SoCs and lessons learned from the mobile and embedded computing markets. Systems on chip can be viewed as part of a larger trend towards embedded computing and hardware acceleration. An SoC integrates a microcontroller or microprocessor with advanced peripherals such as a graphics processing unit (GPU), a Wi-Fi module, or one or more coprocessors.
Similar to how a microcontroller integrates a microprocessor with peripheral circuits and memory, an SoC can be seen as integrating a microcontroller with still more advanced peripherals. For an overview of integrating system components, see system integration. Distinguishable types of SoCs include those built around a microcontroller and those built around a microprocessor, as found in mobile phones. Systems on chip can be applied to any computing task. However, they are typically used in mobile computing devices such as tablets, smartphones and netbooks, as well as in embedded systems and in applications where microcontrollers would previously have been used. Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market: tighter system integration offers better reliability and mean time between failures, and SoCs offer more advanced functionality and computing power than microcontrollers. Applications include AI acceleration, embedded machine vision, data collection, vector processing and ambient intelligence.
Embedded systems on chip target the internet of things, industrial internet of things and edge computing markets. Mobile computing based SoCs bundle processors, memories, on-chip caches, wireless networking capabilities and digital camera hardware and firmware. With increasing memory sizes, high-end SoCs will often have no on-die memory or flash storage; instead, the memory and flash memory will be placed right next to, or above, the SoC. Some examples of mobile computing SoCs include:

- Apple: Apple-designed processors such as the A12 Bionic and other A-series chips, used in iPhones and iPads; the S series and W series, used in Apple Watches; and the Apple T series, used in the 2016 and 2017 MacBook Pro Touch Bars and fingerprint scanners.
- Samsung Electronics: chips based on the ARM7 and ARM9, and the Exynos line, used in Samsung's Galaxy series of smartphones.
- Qualcomm: Snapdragon, used in many LG, Google Pixel, HTC and Samsung Galaxy smartphones. In 2018, Snapdragon SoCs began being used as the backbone of laptop computers running Windows 10, marketed as "Always Connected PCs".
As long ago as 1992, Acorn Computers produced the A3010, A3020 and A4000 range of personal computers with the ARM250 system on chip, which combined the original Acorn ARM2 processor with a memory controller, a video controller and an I/O controller; in previous Acorn ARM-powered computers, these were four discrete chips. The ARM7500 chip was their second-generation system on chip, based on the ARM700, VIDC20 and IOMD controllers, and was licensed in embedded devices such as set-top boxes, as well as in Acorn personal computers. Systems on chip are being applied to mainstream personal computers as of 2018, in particular to laptops and tablet PCs. Tablet and laptop manufacturers have learned lessons from the embedded systems and smartphone markets about reduced power consumption, and better performance and reliability, from tighter integration of hardware and firmware modules, with LTE and other wireless network communications integrated on chip. ARM based: Qualcomm S