The gigabyte is a multiple of the unit byte for digital information. The prefix giga means 10⁹ in the International System of Units (SI). Therefore, one gigabyte is 1000000000 bytes; the unit symbol for the gigabyte is GB. This definition is used in all contexts of science and engineering and in many areas of computing, including hard drive, solid-state drive and tape capacities, as well as data transmission speeds. However, the term is also used in some fields of computer science and information technology to denote 1073741824 (1024³) bytes, particularly for sizes of RAM; the use of gigabyte may thus be ambiguous. Hard disk capacities are described and marketed by drive manufacturers using the standard metric definition of the gigabyte, but when a 500-GB drive's capacity is displayed by, for example, Microsoft Windows, it is reported as 465 GB, using a binary interpretation. To address this ambiguity, the International System of Quantities standardizes the binary prefixes, which denote a series of integer powers of 1024. With these prefixes, a memory module labeled as having the size 1 GB has one gibibyte (1 GiB) of storage capacity.
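To make the 500-GB example concrete, the sketch below converts a capacity advertised in decimal gigabytes into the binary-prefixed value an operating system such as Windows displays; it is an illustration added here, not text from the source, and the function name is invented.

```python
# Convert a capacity advertised in decimal gigabytes (10^9 bytes)
# into gibibytes (2^30 bytes), the unit that Windows labels "GB".
def advertised_gb_to_gib(gb: float) -> float:
    bytes_total = gb * 1000**3        # decimal gigabytes -> bytes
    return bytes_total / 1024**3      # bytes -> gibibytes

print(f"{advertised_gb_to_gib(500):.2f}")   # 465.66, shown by Windows as "465 GB"
print(f"{advertised_gb_to_gib(1):.3f}")     # 1 decimal GB is about 0.931 GiB
```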
The term gigabyte is used to mean either 1000³ bytes or 1024³ bytes. The latter binary usage originated as compromise technical jargon for byte multiples that needed to be expressed in a power of 2 but lacked a convenient name; as 1024 approximates 1000, roughly corresponding to SI multiples, it was used for binary multiples as well. In 1998 the International Electrotechnical Commission (IEC) published standards for binary prefixes, requiring that the gigabyte denote 1000³ bytes and the gibibyte denote 1024³ bytes. By the end of 2007, the IEC standard had been adopted by the IEEE, EU and NIST, and in 2009 it was incorporated in the International System of Quantities. Nevertheless, the term gigabyte continues to be used with the following two different meanings: 1 GB = 1000000000 bytes (1000³ or 10⁹ bytes). Based on powers of 10, this definition uses the prefix giga- as defined in the International System of Units. This is the definition recommended by the International Electrotechnical Commission; it is used in networking contexts and for most storage media (hard drives, flash-based storage, DVDs), and is consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance.
The file manager of Mac OS X version 10.6 and later versions is a notable example of this usage in software, reporting file sizes in decimal units. 1 GB = 1073741824 bytes (1024³ or 2³⁰ bytes). This binary definition uses powers of base 2, as does the architectural principle of binary computers; the usage is promulgated by some operating systems, such as Microsoft Windows, in reference to computer memory. This definition is synonymous with the unambiguous unit gibibyte. Since the first disk drive, the IBM 350, disk drive manufacturers have expressed hard drive capacities using decimal prefixes. With the advent of gigabyte-range drive capacities, manufacturers based most consumer hard drive capacities on certain size classes expressed in decimal gigabytes, such as "500 GB"; the exact capacity of a given drive model is usually slightly larger than the class designation. All manufacturers of hard disk drives and flash-memory disk devices continue to define one gigabyte as 1000000000 bytes, which is displayed on the packaging; some operating systems such as OS X express hard drive capacity or file size using decimal multipliers, while others such as Microsoft Windows report size using binary multipliers.
This discrepancy causes confusion, as a disk with an advertised capacity of, for example, 400 GB might be reported by the operating system as 372 GB, meaning 372 GiB. The JEDEC memory standards use IEEE 100 nomenclature; the difference between units based on decimal and binary prefixes increases as a semi-logarithmic function—for example, the decimal kilobyte value is nearly 98% of the kibibyte, a megabyte is under 96% of a mebibyte, and a gigabyte is just over 93% of a gibibyte value. This means that a 300 GB (279 GiB) hard disk might be indicated variously as 300 GB, 279 GB or 279 GiB, depending on the operating system; as storage sizes increase and larger units are used, these differences become even more pronounced. Some legal challenges have been waged over this confusion, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled.
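The prefix ratios quoted above can be checked directly; this small sketch, added for illustration only, prints the ratio of each decimal prefix to its binary counterpart.

```python
# Ratio of each decimal prefix (powers of 1000) to its binary
# counterpart (powers of 1024): kB/KiB, MB/MiB, GB/GiB, TB/TiB.
for power, (dec, binp) in enumerate([("kB", "KiB"), ("MB", "MiB"),
                                     ("GB", "GiB"), ("TB", "TiB")], start=1):
    ratio = 1000**power / 1024**power
    print(f"1 {dec} = {ratio:.2%} of 1 {binp}")
# 1 kB = 97.66% of 1 KiB, 1 MB = 95.37% of 1 MiB,
# 1 GB = 93.13% of 1 GiB, 1 TB = 90.95% of 1 TiB
```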
Because of its physical design, the capacity of modern computer random access memory devices, such as DIMM modules, is always a multiple of a power of 1024. It is thus convenient to use prefixes denoting powers of 1024, known as binary prefixes, in describing them. For example, a memory capacity of 1073741824 bytes is conveniently expressed as 1 GiB rather than as 1.074 GB. The former specification is, however, often quoted as "1 GB" when applied to random access memory. Software allocates memory in varying degrees of granularity as needed to fulfill data structure requirements, and binary multiples are not required. Other computer capacities and rates, like storage hardware size, data transfer rates, clock speeds, operations per second, etc., do not depend on an inherent base and are presented in decimal units. For example, the manufacturer of a "300 GB" hard drive is claiming a capacity of 300000000000 bytes, not 300 × 1024³ bytes. One hour of SDTV video at 2.2 Mbit/s is roughly 1 GB. Seven minutes of HDTV video at 19.39 Mbit/s is also about 1 GB.
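The two video figures follow from simple arithmetic (size equals bitrate times time); the sketch below is an illustrative check, not part of the source text.

```python
# How much video fits in one decimal gigabyte at a given bitrate?
def seconds_per_gigabyte(mbit_per_s: float) -> float:
    bits_in_gb = 1000**3 * 8                 # 1 GB = 10^9 bytes = 8e9 bits
    return bits_in_gb / (mbit_per_s * 1e6)   # seconds of video per GB

print(seconds_per_gigabyte(2.2) / 60)    # ~60.6 minutes of SDTV at 2.2 Mbit/s
print(seconds_per_gigabyte(19.39) / 60)  # ~6.9 minutes of HDTV at 19.39 Mbit/s
```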
IEEE 802.11 is part of the IEEE 802 set of LAN protocols and specifies the set of media access control (MAC) and physical layer (PHY) protocols for implementing wireless local area network (WLAN) Wi-Fi computer communication in various frequencies, including but not limited to the 2.4 GHz, 5 GHz and 60 GHz frequency bands. They are the world's most widely used wireless computer networking standards, used in most home and office networks to allow laptops and smartphones to talk to each other and access the Internet without connecting wires; they are created and maintained by the Institute of Electrical and Electronics Engineers (IEEE) LAN/MAN Standards Committee. The base version of the standard was released in 1997 and has had subsequent amendments; the standard and amendments provide the basis for wireless network products using the Wi-Fi brand. While each amendment is officially revoked when it is incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote the capabilities of their products.
As a result, in the marketplace, each revision tends to become its own standard. The protocols are used in conjunction with IEEE 802.2, are designed to interwork seamlessly with Ethernet, and are often used to carry Internet Protocol traffic. Although IEEE 802.11 specifications list channels that might be used, the radio frequency spectrum availability allowed varies by regulatory domain. The 802.11 family consists of a series of half-duplex over-the-air modulation techniques that use the same basic protocol. The 802.11 protocol family employs carrier-sense multiple access with collision avoidance (CSMA/CA), whereby equipment listens to a channel for other users before transmitting each packet. 802.11-1997 was the first wireless networking standard in the family, but 802.11b was the first widely accepted one, followed by 802.11a, 802.11g, 802.11n and 802.11ac. Other standards in the family are service amendments that are used to extend the current scope of the existing standard, which may include corrections to a previous specification. 802.11b and 802.11g use the 2.4 GHz ISM band, operating in the United States under Part 15 of the U.S.
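As a rough illustration of the listen-before-talk behaviour just described, here is a heavily simplified CSMA/CA sketch. It is a toy model rather than the actual 802.11 state machine; the channel object, its methods and the timing constants are all invented for this example.

```python
import random
import time

def csma_ca_send(channel, frame, slot_time=0.00002, max_retries=7):
    """Toy carrier-sense multiple access with collision avoidance:
    wait for an idle channel, back off a random number of slots,
    transmit, and retry with a doubled contention window on failure."""
    cw = 16                                   # initial contention window (slots)
    for attempt in range(max_retries):
        while channel.is_busy():              # carrier sense: defer while busy
            time.sleep(slot_time)
        backoff = random.randint(0, cw - 1)   # random backoff to avoid collisions
        time.sleep(backoff * slot_time)
        if channel.transmit(frame):           # assume transmit() reports ACK success
            return True
        cw = min(cw * 2, 1024)                # exponential backoff after a collision
    return False
```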
Federal Communications Commission Rules and Regulations. Because of this choice of frequency band, 802.11b/g/n equipment may suffer interference in the 2.4 GHz band from microwave ovens, cordless telephones, Bluetooth devices, etc. 802.11b and 802.11g control their interference and susceptibility to interference by using direct-sequence spread spectrum and orthogonal frequency-division multiplexing signaling methods, respectively. 802.11a uses the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping 20 MHz-wide channels, rather than the 2.4 GHz ISM frequency band, which offers only three non-overlapping 20 MHz-wide channels, where other adjacent channels overlap—see list of WLAN channels. Better or worse performance with higher or lower frequencies may be realized, depending on the environment. 802.11n can use either the 2.4 GHz or the 5 GHz band. The segment of the radio frequency spectrum used by 802.11 varies between countries. In the US, 802.11a and 802.11g devices may be operated without a license, as allowed in Part 15 of the FCC Rules and Regulations.
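The non-overlapping channel arithmetic for the 2.4 GHz band can be made concrete: channel centers sit 5 MHz apart, so 20 MHz-wide channels need about 25 MHz of separation. The sketch below is an added illustration based on the standard 2.4 GHz channel plan, not text from the source.

```python
# Center frequency (MHz) of 2.4 GHz Wi-Fi channels 1-13: 2412 + 5*(n-1).
def channel_center_mhz(n: int) -> int:
    return 2412 + 5 * (n - 1)

# Channels 1, 6 and 11 are the usual non-overlapping set: their centers
# are 25 MHz apart, comfortably more than the 20 MHz channel width.
for ch in (1, 6, 11):
    print(ch, channel_center_mhz(ch), "MHz")   # 2412, 2437, 2462 MHz
```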
Frequencies used by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur radio band. Licensed amateur radio operators may operate 802.11b/g devices under Part 97 of the FCC Rules and Regulations, allowing increased power output but not commercial content or encryption. 802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released the ISM band for unlicensed use. In 1991 NCR Corporation/AT&T invented a precursor to 802.11 in the Netherlands. The inventors intended to use the technology for cashier systems; the first wireless products were brought to the market under the name WaveLAN with raw data rates of 1 Mbit/s and 2 Mbit/s. Vic Hayes, who held the chair of IEEE 802.11 for 10 years, has been called the "father of Wi-Fi" and was involved in designing the initial 802.11b and 802.11a standards within the IEEE. In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
The major commercial breakthrough came with Apple Inc. adopting Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, branded by Apple as AirPort. One year later, IBM followed with its ThinkPad 1300 series in 2000. The original version of the standard, IEEE 802.11, was released in 1997 and clarified in 1999, but is now obsolete. It specified two net bit rates of 1 and 2 megabits per second, plus forward error correction code, and it specified three alternative physical layer technologies: diffuse infrared operating at 1 Mbit/s, and frequency-hopping spread spectrum and direct-sequence spread spectrum, each operating at 1 or 2 Mbit/s. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical (ISM) frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band. Legacy 802.11 with direct-sequence spread spectrum was supplanted and popularized by 802.11b. 802.11a, published in 1999, uses the same data link layer protocol and frame format as the original standard, but an OFDM-based air interface.
It operates in the 5 GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which yields realistic net achievable throughput in the mid-20 Mbit/s range.
ARM (previously Advanced RISC Machine, originally Acorn RISC Machine) is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. Arm Holdings develops the architecture and licenses it to other companies, who design their own products that implement one of those architectures—including systems-on-chips and systems-on-modules that incorporate memory, radios, etc. It also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate those core designs into their own products. Processors that have a RISC architecture typically require fewer transistors than those with a complex instruction set computing (CISC) architecture, which reduces cost, power consumption and heat dissipation; these characteristics are desirable for light, battery-powered devices—including smartphones, tablet computers and other embedded systems. For supercomputers, which consume large amounts of electricity, ARM could also be a power-efficient solution.
ARM Holdings periodically releases updates to the architecture. Architecture versions ARMv3 to ARMv7 support 32-bit arithmetic; the Thumb extension supports a variable-length instruction set that provides both 32- and 16-bit instructions for improved code density. Some older cores can also provide hardware execution of Java bytecodes. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set. With over 100 billion ARM processors produced as of 2017, ARM is the most widely used instruction set architecture and the instruction set architecture produced in the largest quantity. The widely used Cortex cores, older "classic" cores, and specialized SecurCore cores are available for each of these, with variants that include or exclude optional capabilities. The British computer manufacturer Acorn Computers first developed the Acorn RISC Machine architecture in the 1980s to use in its personal computers; its first ARM-based products were coprocessor modules for the BBC Micro series of computers.
After the successful BBC Micro computer, Acorn Computers considered how to move on from the relatively simple MOS Technology 6502 processor to address business markets like the one soon dominated by the IBM PC, launched in 1981. The Acorn Business Computer plan required that a number of second processors be made to work with the BBC Micro platform, but processors such as the Motorola 68000 and National Semiconductor 32016 were considered unsuitable, and the 6502 was not powerful enough for a graphics-based user interface. According to Sophie Wilson, all the processors tested at that time performed about the same, with about 4 Mbit/s of bandwidth. After testing all available processors and finding them lacking, Acorn decided it needed a new architecture. Inspired by papers from the Berkeley RISC project, Acorn considered designing its own processor. A visit to the Western Design Center in Phoenix, where the 6502 was being updated by what was effectively a single-person company, showed Acorn engineers Steve Furber and Sophie Wilson that they did not need massive resources and state-of-the-art research and development facilities.
Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a 6502 second processor. This convinced the Acorn engineers that they were on the right track. Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser assembled a small team to implement Wilson's model in hardware; the official Acorn RISC Machine project started in October 1983. They chose VLSI Technology as the silicon partner, as it was already a source of ROMs and custom chips for Acorn. Wilson and Furber led the design, and they implemented it with a similar efficiency ethos to the 6502. A key design goal was achieving low-latency input/output handling like the 6502's; the 6502's memory access architecture had let developers produce fast machines without costly direct memory access hardware. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985. The first ARM application was as a second processor for the BBC Micro, where it helped in developing simulation software to finish development of the support chips and sped up the CAD software used in ARM2 development.
Wilson subsequently rewrote BBC BASIC in ARM assembly language. The in-depth knowledge gained from designing the instruction set enabled the code to be very dense, making ARM BBC BASIC a good test for any ARM emulator. The original aim of a principally ARM-based computer was achieved in 1987 with the release of the Acorn Archimedes. In 1992, Acorn once more won the Queen's Award for Technology for the ARM. The ARM2 featured a 26-bit address space and 27 32-bit registers. Eight bits from the program counter register were available for other purposes; the address bus was extended to 32 bits in the ARM6, but program code still had to lie within the first 64 MB of memory in 26-bit compatibility mode, due to the reserved bits for the status flags. The ARM2 had a transistor count of just 30,000, compared to Motorola's six-year-older 68000 model with around 40,000. Much of this simplicity came from the lack of microcode.
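The 26-bit address space and the flag bits borrowed from the program counter can be illustrated with a little bit arithmetic. The sketch below reflects the commonly documented ARM2 layout of R15 (mode bits in bits 0–1, word-aligned program counter in bits 2–25, status flags in the top bits); treat it as an added illustration rather than a normative register description.

```python
# ARM2-style R15: mode bits [1:0], word-aligned PC [25:2], I/F and NZCV flags [31:26].
def split_r15(r15: int) -> dict:
    return {
        "mode":  r15 & 0b11,                 # processor mode bits
        "pc":    r15 & 0x03FFFFFC,           # 24-bit word-address field = 26-bit byte PC
        "flags": (r15 >> 26) & 0x3F,         # IRQ/FIQ disable plus N, Z, C, V
    }

# A 26-bit program counter can address 2**26 bytes = 64 MB, which is why
# 26-bit compatibility mode limits code to the first 64 MB of memory.
print(2**26 // 2**20, "MB")                  # 64 MB
```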
In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. The origins of multi-touch lie at CERN, MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s. Multi-touch was in use as early as 1985. Apple popularized the term "multi-touch" in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom or activating certain subroutines attached to predefined gestures. The two different uses of the term resulted from the quick developments in this field, with many companies using the term to market older technology, called gesture-enhanced single-touch or several other terms by other companies and researchers. Several other similar or related terms attempt to differentiate between whether a device can determine or only approximate the location of different points of contact, to further differentiate between the various technological capabilities, but they are often used as synonyms in marketing.
The use of touchscreen technology predates both multi-touch and the personal computer. Early synthesizer and electronic instrument builders like Hugh Le Caine and Robert Moog experimented with using touch-sensitive capacitance sensors to control the sounds made by their instruments.
IBM began building the first touch screens in the late 1960s. In 1972, Control Data released the PLATO IV computer, a terminal used for educational purposes, which employed single-touch points in a 16×16 array user interface. These early touchscreens only registered one point of touch at a time. On-screen keyboards were thus awkward to use, because key rollover and holding down a shift key while typing another key were not possible. An exception was a multi-touch reconfigurable touchscreen keyboard/display developed at the Massachusetts Institute of Technology in the early 1970s. In 1977, one of the early implementations of mutual capacitance touchscreen technology was developed at CERN, based on the capacitance touch screens developed in 1972 by Danish electronics engineer Bent Stumpe. This technology was used to develop a new type of human-machine interface for the control room of the Super Proton Synchrotron particle accelerator. In a handwritten note dated 11 March 1972, Stumpe presented his proposed solution – a capacitive touch screen with a fixed number of programmable buttons presented on a display.
The screen was to consist of a set of capacitors etched into a film of copper on a sheet of glass, each capacitor being constructed so that a nearby flat conductor, such as the surface of a finger, would increase the capacitance by a significant amount. The capacitors were to consist of fine lines etched in copper on a sheet of glass – fine enough and sufficiently far apart to be invisible. In the final device, a simple lacquer coating prevented the fingers from actually touching the capacitors. In 1976, MIT described a keyboard with variable graphics capable of multi-touch detection, likely the first multi-touch screen. In the early 1980s, the University of Toronto's Input Research Group were among the earliest to explore the software side of multi-touch input systems. A 1982 system at the University of Toronto used a frosted-glass panel with a camera placed behind the glass; when a finger or several fingers pressed on the glass, the camera would detect the action as one or more black spots on an otherwise white background, allowing it to be registered as an input.
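The detection principle described here, a finger raising the capacitance of nearby electrodes, can be sketched as a simple thresholding scan over a grid of capacitance readings. This is an invented illustration of the general idea, not anyone's actual controller firmware; the data and threshold are made up.

```python
# Report which cells of a capacitance grid are being touched: a touch is a
# reading that exceeds the cell's untouched baseline by more than a threshold.
def detect_touches(readings, baseline, threshold=5.0):
    touches = []
    for row, (r_vals, b_vals) in enumerate(zip(readings, baseline)):
        for col, (r, b) in enumerate(zip(r_vals, b_vals)):
            if r - b > threshold:            # finger nearby -> capacitance rises
                touches.append((row, col))
    return touches                           # several entries => multi-touch

baseline = [[10.0] * 4 for _ in range(4)]
readings = [[10.0] * 4 for _ in range(4)]
readings[1][2] = 18.0                        # two simulated fingers
readings[3][0] = 17.5
print(detect_touches(readings, baseline))    # [(1, 2), (3, 0)]
```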
Since the size of a dot was dependent on pressure, the system was somewhat pressure-sensitive as well. Of note, this system was not able to display graphics. In 1983, Bell Labs at Murray Hill published a comprehensive discussion of touch-screen based interfaces, though it made no mention of multiple fingers. In the same year, the video-based Video Place/Video Desk system of Myron Krueger was influential in the development of multi-touch gestures such as pinch-to-zoom, though this system had no touch interaction itself. By 1984, both Bell Labs and Carnegie Mellon University had working multi-touch-screen prototypes – both input and graphics – that could respond interactively to multiple finger inputs. The Bell Labs system was based on capacitive coupling of fingers, whereas the CMU system was optical. In 1985, the canonical multi-touch pinch-to-zoom gesture was demonstrated, with coordinated graphics, on CMU's system. In October 1985, Steve Jobs signed a non-disclosure agreement to tour CMU's Sensor Frame multi-touch lab.
In 1990, Sears et al. published a review of academic research on single- and multi-touch touchscreen human–computer interaction of the time, describing single-touch gestures such as rotating knobs and swiping the screen to activate a switch.
A multi-core processor is a single computing component with two or more independent processing units called cores, which read and execute program instructions. The instructions are ordinary CPU instructions, but the single processor can run multiple instructions on separate cores at the same time, increasing overall speed for programs amenable to parallel computing. Manufacturers typically integrate the cores onto a single integrated circuit die or onto multiple dies in a single chip package; the microprocessors used in virtually all personal computers are multi-core. A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, two-dimensional mesh and crossbar. Homogeneous multi-core systems include only identical cores. Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, vector, or multithreading.
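As a small illustration of running work on separate cores at the same time, the sketch below uses Python's multiprocessing module to spread a CPU-bound task across one worker process per core; the workload and chunking are invented for this example.

```python
from multiprocessing import Pool, cpu_count

def count_primes(bounds):
    """CPU-bound toy task: count primes in [lo, hi)."""
    lo, hi = bounds
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(is_prime(n) for n in range(lo, hi))

if __name__ == "__main__":
    chunks = [(i, i + 50_000) for i in range(0, 200_000, 50_000)]
    with Pool(processes=cpu_count()) as pool:       # one worker per core
        print(sum(pool.map(count_primes, chunks)))  # total primes below 200,000
```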
Multi-core processors are used across many application domains, including general-purpose, network, digital signal processing and graphics. The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can run in parallel on multiple cores. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache, avoiding use of much slower main-system memory. Most applications, however, are not accelerated as much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem. The parallelization of software is a significant ongoing topic of research. The terms multi-core and dual-core most often refer to some sort of central processing unit, but are sometimes also applied to digital signal processors and systems on a chip.
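The limit imposed by the serial fraction of a program is captured by Amdahl's law, speedup = 1 / ((1 − p) + p/n) for a parallel fraction p on n cores. The short sketch below evaluates it for a few values; it is an added illustration, not text from the source.

```python
# Amdahl's law: upper bound on speedup when a fraction p of the work
# can be parallelized across n cores and the rest stays serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(p, round(amdahl_speedup(p, 8), 2), round(amdahl_speedup(p, 64), 2))
# p=0.50 -> 1.78x on 8 cores, 1.97x on 64; p=0.99 -> 7.48x and 39.26x
```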
The terms are generally used only to refer to multi-core microprocessors that are manufactured on the same integrated circuit die. This article uses the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit, unless otherwise noted. In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing units. The terms many-core and massively multi-core are sometimes used to describe multi-core architectures with a high number of cores. Some systems use many soft microprocessor cores placed on a single FPGA; each "core" can be considered a "semiconductor intellectual property core" as well as a CPU core. While manufacturing technology improves, reducing the size of individual gates, physical limits of semiconductor-based microelectronics have become a major design concern; these physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance. Some instruction-level parallelism (ILP) methods such as superscalar pipelining are suitable for many applications, but are inefficient for others that contain difficult-to-predict code.
Many applications are better suited to thread-level parallelism (TLP) methods, and multiple independent CPUs are commonly used to increase a system's overall TLP. A combination of increased available space and the demand for increased TLP led to the development of multi-core CPUs. Several business motives drive the development of multi-core architectures. For decades, it was possible to improve performance of a CPU by shrinking the area of the integrated circuit, which reduced the cost per device on the IC. Alternatively, for the same circuit area, more transistors could be used in the design, which increased functionality for complex instruction set computing architectures. Clock rates also increased by orders of magnitude in the decades of the late 20th century, from several megahertz in the 1980s to several gigahertz in the early 2000s. As the rate of clock speed improvements slowed, increased use of parallel computing in the form of multi-core processors has been pursued to improve overall processing performance.
Multiple cores were used on the same CPU chip, which could lead to better sales of CPU chips with two or more cores. For example, Intel has produced a 48-core processor for research in cloud computing. Since computer manufacturers have long implemented symmetric multiprocessing (SMP) designs using discrete CPUs, the issues regarding implementing multi-core processor architecture and supporting it with software are well known. Additionally, using a proven processing-core design without architectural changes reduces design risk significantly. For general-purpose processors, much of the motivation for multi-core processors comes from diminished gains in processor performance from increasing the operating frequency; this is due to three primary factors: the memory wall, the ILP wall and the power wall.
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. MacOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, use in 2017 is up to 70% of Google's Android; according to third quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to be running concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like ones, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner; 16-bit versions of Microsoft Windows used cooperative multi-tasking.
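As a toy illustration of the difference, cooperative multitasking can be mimicked with generator tasks that voluntarily yield control, with a scheduler simply rotating through them. This sketch is invented for illustration; a real operating system schedules threads and processes and, in the preemptive case, relies on timer interrupts rather than voluntary yields.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: it must yield to give other tasks a turn."""
    for i in range(steps):
        print(f"{name} step {i}")
        yield                     # voluntarily hand control back to the scheduler

def round_robin(tasks):
    """Rotate through runnable tasks until all have finished."""
    queue = deque(tasks)
    while queue:
        current = queue.popleft()
        try:
            next(current)         # run until the task yields
            queue.append(current) # still runnable: go to the back of the queue
        except StopIteration:
            pass                  # task finished; drop it

round_robin([task("A", 2), task("B", 3)])
```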
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy; they are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards.
Low-Power Double Data Rate Synchronous Dynamic Random Access Memory, abbreviated as Low-Power DDR SDRAM or LPDDR SDRAM, is a type of double data rate synchronous dynamic random-access memory that consumes less power and is targeted at mobile computers. It is also known as Mobile DDR, abbreviated as mDDR. In contrast with standard SDRAM, which is used in stationary devices and laptops and is connected over a 64-bit wide memory bus, LPDDR permits 16- or 32-bit wide channels. As with standard SDRAM, each generation of LPDDR has doubled the internal fetch size and external transfer speed. The original low-power DDR is a modified form of DDR SDRAM, with several changes to reduce overall power consumption. Most significantly, the supply voltage is reduced from 2.5 to 1.8 V. Additional savings come from temperature-compensated refresh, partial array self refresh, and a "deep power down" mode which sacrifices all memory contents. Additionally, chips are smaller. Samsung and Micron are two of the main providers of this technology, which has been used in mobile devices such as the iPhone 3GS, original iPad, Samsung Galaxy Tab 7.0 and Motorola Droid X.
A newer JEDEC standard, JESD209-2E, defines a substantially revised low-power DDR interface, LPDDR2. It is not compatible with either DDR1 or DDR2 SDRAM, but can accommodate any of: LPDDR2-S2 (2n prefetch memory), LPDDR2-S4 (4n prefetch memory), or LPDDR2-N (non-volatile memory). Low-power states are similar to those of basic LPDDR, with some additional partial array refresh options. Timing parameters are specified for LPDDR2-200 to LPDDR2-1066. Working at 1.2 V, LPDDR2 multiplexes the control and address lines onto a 10-bit double data rate CA bus. The commands are similar to those of normal SDRAM, except for the reassignment of the precharge and burst terminate opcodes. Column address bit C0 is never transferred and is assumed to be zero; burst transfers thus always begin at even addresses. LPDDR2 has an active-low chip select and a clock enable (CKE) signal, which operate like those of SDRAM. As with SDRAM, the command sent on the cycle that CKE is first dropped selects the power-down state: if the chip is active, it freezes in place; if the command is a NOP, the chip idles.
If the command is a refresh command, the chip enters the self-refresh state. If the command is a burst terminate, the chip enters the deep power-down state. The mode registers have been expanded compared to conventional SDRAM, with an 8-bit address space and the ability to read them back. Although smaller than a serial presence detect EEPROM, enough information is included to eliminate the need for one. S2 devices smaller than 4 Gbit and S4 devices smaller than 1 Gbit have only four banks; they ignore the BA2 signal and do not support per-bank refresh. Non-volatile memory devices do not use the refresh commands, and reassign the precharge command to transfer address bits A20 and up; the low-order bits are transferred by a following Activate command. This transfers the selected row from the memory array to one of 4 or 8 row data buffers, where it can be read by a Read command. Unlike DRAM, the bank address bits are not part of the memory address. A row data buffer may be from 32 to 4096 bytes long, depending on the type of memory.
Rows larger than 32 bytes ignore some of the low-order address bits in the Activate command. Rows smaller than 4096 bytes ignore some of the high-order address bits in the Read command. Non-volatile memory does not support the Write command to row data buffers. Rather, a series of control registers in a special address region support Read and Write commands, which can be used to erase and program the memory array. In May 2012, JEDEC published the JESD209-3 Low Power Memory Device Standard, LPDDR3. In comparison to LPDDR2, LPDDR3 offers a higher data rate, greater bandwidth and power efficiency, and higher memory density. LPDDR3 achieves a data rate of 1600 MT/s and utilizes key new technologies: write-leveling and command/address training, optional on-die termination, and low I/O capacitance. LPDDR3 supports both package-on-package and discrete packaging types. The command encoding is identical to LPDDR2, using a 10-bit double data rate CA bus; however, the standard only specifies 8n-prefetch DRAM and does not include the flash memory commands.
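One way to read the two "ignore" rules: the Activate and Read address fields are sized for the extreme row-buffer sizes (32 and 4096 bytes), so for an intermediate buffer size some bits of each field are simply unused. The sketch below is only an interpretation of that arithmetic with invented names, not a decoding of the actual LPDDR2-N command format.

```python
import math

# For an LPDDR2-N row data buffer of row_bytes (32..4096 bytes), estimate how
# many address bits each command leaves unused, following the two rules above.
def unused_bits(row_bytes: int) -> dict:
    return {
        "activate_low_bits_ignored": int(math.log2(row_bytes // 32)),
        "read_high_bits_ignored":    int(math.log2(4096 // row_bytes)),
    }

for size in (32, 256, 4096):
    print(size, unused_bits(size))
# 32 -> {0, 7}, 256 -> {3, 4}, 4096 -> {7, 0}
```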
Products using LPDDR3 include the 2013 MacBook Air, iPhone 5S, iPhone 6, Nexus 10, Samsung Galaxy S4 and Microsoft Surface Pro 3. LPDDR3 went mainstream in 2013, running at 800 MHz DDR (1600 MT/s), offering bandwidth comparable to the PC3-12800 notebook memory of 2011. To achieve this bandwidth, the controller must implement dual-channel memory; for example, this is the case for the Exynos 5 Octa. Samsung Electronics introduced the first 4 gigabit 20 nm-class LPDDR3 modules capable of transmitting data at up to 2,133 Mbit/s per pin, more than double the performance of the older LPDDR2, which is only capable of 800 Mbit/s. Various SoCs from various manufacturers natively support 800 MHz LPDDR3 RAM; these include the Snapdragon 600 and 800 from Qualcomm as well as some SoCs from the Exynos and Allwinner series. On 14 March 2012, JEDEC hosted a conference to explore how future mobile device requirements will drive upcoming standards like LPDDR4. On 30 December 2013, Samsung announced that it had developed the first 20 nm-class 8 gigabit LPDDR4 mobile DRAM.
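The bandwidth comparison follows from transfer rate times bus width. The sketch below, added purely as illustrative arithmetic, shows why a single 32-bit LPDDR3-1600 channel needs a second channel to match PC3-12800.

```python
# Peak bandwidth in MB/s = transfers per second (millions) * bus width in bytes.
def peak_bandwidth_mb_s(mt_per_s: float, bus_bits: int) -> float:
    return mt_per_s * (bus_bits / 8)

lpddr3_single = peak_bandwidth_mb_s(1600, 32)   # 800 MHz DDR -> 1600 MT/s, 32-bit channel
print(lpddr3_single)                            # 6400 MB/s per channel
print(lpddr3_single * 2)                        # 12800 MB/s with dual channels
print(peak_bandwidth_mb_s(1600, 64))            # PC3-12800: 64-bit DIMM at 1600 MT/s
```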