1.
Decimal
–
The decimal numeral system has ten as its base, which, in decimal, is written 10, as is the base in every positional numeral system. It is the base most widely used by modern civilizations. Decimal fractions have terminating decimal representations, while other fractions have repeating decimal representations. Decimal notation is the writing of numbers in a base-ten numeral system. Examples are Brahmi numerals, Greek numerals, Hebrew numerals, Roman numerals, and Chinese numerals. Roman numerals have symbols for the decimal powers and secondary symbols for half these values. Brahmi numerals have symbols for the nine numbers 1–9, the nine decades 10–90, plus a symbol for 100. Chinese numerals have symbols for 1–9 and additional symbols for powers of ten, which in modern usage reach 10⁷². Positional decimal systems include a zero and use symbols for the ten values 0–9 to represent any number; positional notation uses positions for each power of ten: units, tens, hundreds, thousands, and so on. Each position has a value ten times that of the position to its right. There were at least two independent sources of positional decimal systems in ancient civilization, one of them the Chinese counting rod system. Ten is the count of fingers and thumbs on both hands; the English word digit, as well as its translation in many languages, is also the anatomical term for fingers and toes. In English, decimal means tenth and decimate means reduce by a tenth. The symbols used in different areas are not identical; for instance, Western Arabic numerals differ from the forms used by other Arab cultures. A decimal fraction is a fraction whose denominator is a power of ten. For example, the decimal fractions 8/10, 1489/100, 24/100000, and 58900/10000 are expressed in decimal notation as 0.8, 14.89, 0.00024, and 5.8900, respectively.
In English-speaking, some Latin American, and many Asian countries, a period or raised period is used as the decimal separator; in many other countries, particularly in Europe, a comma is used. The integer part, or integral part, of a number is the part to the left of the decimal separator. The part from the separator to the right is the fractional part. It is usual for a number that consists only of a fractional part to have a leading zero in its notation. Any rational number with a denominator whose only prime factors are 2 and/or 5 may be expressed as a decimal fraction and has a finite decimal expansion. For example: 1/2 = 0.5, 1/20 = 0.05, 1/5 = 0.2, 1/50 = 0.02, 1/4 = 0.25, 1/40 = 0.025, 1/25 = 0.04, 1/8 = 0.125, 1/125 = 0.008, 1/10 = 0.1.
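The rule above — that a fraction has a terminating decimal expansion exactly when its reduced denominator has no prime factors other than 2 and 5 — can be sketched in a few lines of Python (the function name is illustrative):

```python
from fractions import Fraction

def has_terminating_decimal(numerator: int, denominator: int) -> bool:
    """A fraction terminates in base 10 iff its reduced denominator's
    only prime factors are 2 and/or 5."""
    d = Fraction(numerator, denominator).denominator  # reduce to lowest terms first
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(has_terminating_decimal(1, 8))   # 1/8 = 0.125 -> True
print(has_terminating_decimal(1, 3))   # 1/3 = 0.333... -> False
print(has_terminating_decimal(3, 6))   # reduces to 1/2 -> True
```

Note that reduction matters: 3/6 terminates even though 6 has the factor 3, because the fraction reduces to 1/2.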

2.
Computer data storage
–
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. In the von Neumann architecture, the CPU consists of two parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions; most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. By adding bits to each encoded unit, redundancy allows the computer both to detect errors in coded data and to correct them based on mathematical algorithms.
A random bit flip is typically corrected upon detection. The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow, in many cases, representing a string of bits by a shorter bit string and reconstructing the original string when needed. This utilizes substantially less storage for many types of data at the cost of more computation; an analysis of the trade-off between the storage cost saving and the costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit. In contemporary usage, memory is usually semiconductor read-write random-access storage, typically DRAM or other forms of fast but temporary storage.
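The detect-and-retry pattern described above can be illustrated with a minimal Python sketch using the standard library's CRC-32 (the function names here are illustrative, not part of any particular storage system):

```python
import binascii

def store_with_crc(data: bytes) -> tuple[bytes, int]:
    """Attach a CRC-32 checksum to a block of data before storing it."""
    return data, binascii.crc32(data)

def verify(data: bytes, checksum: int) -> bool:
    """Recompute the checksum on read-back; a mismatch means a detected error."""
    return binascii.crc32(data) == checksum

block, crc = store_with_crc(b"the works of Shakespeare")
print(verify(block, crc))                 # True: data read back intact
corrupted = b"tha works of Shakespeare"   # a single corrupted character
print(verify(corrupted, crc))             # False: error detected, operation retried
```

A CRC detects corruption but cannot correct it; error-correcting codes (such as Hamming or Reed–Solomon codes) add enough redundancy to repair the data as well.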

3.
NCR 315
–
The NCR 315 Data Processing System, released in January 1962 by NCR, was a second-generation computer. All printed circuit boards used resistor-transistor logic to create the various logic elements, and it used a 12-bit slab memory structure built from core memory. The instructions could use a memory slab as either two 6-bit alphanumeric characters or as three 4-bit BCD digits. Basic memory was 5,000 slabs of handmade core memory, which was expandable to a maximum of 40,000 slabs in four refrigerator-size cabinets. Input/output was by direct connections to each type of peripheral through a two-cable bundle of 1-inch-thick cables. Some devices, like tape and the CRAM, were daisy-chained to allow multiple drives to be connected. Later models in this series include the 315-100 and the 315-RMC. The addressable unit of memory on the NCR 315 series is a slab, short for syllable, consisting of 12 data bits; its size falls between a byte and a typical word. A slab may contain three digits or two characters of six bits each. A slab may contain a value from -99 to +999. A numeric value contains up to eight slabs; if the value is negative, the minus sign is the leftmost digit of this row. There are instructions to transform digits to or from alphanumeric characters, and these commands use the accumulator, which has a maximum length of eight slabs. To accelerate processing, the accumulator works with an effective length. The NCR 315-100 was the second version of the original 315. It too had a 6-microsecond clock cycle and from 10,000 to 40,000 slabs of memory. The 315-100 series console I/O incorporated a Teletype printer and keyboard in place of the original 315's IBM typewriter. The primary difference between the older NCR 315 and the 315-100 was the inclusion of the Automatic Recovery Option; one of the problems with early generations of computers was that when a memory or program error occurred, the system would literally turn on a red light and halt.
The normal recovery process was to copy all register and counter settings from the light panel; usually the restart was from the beginning of the program. The upgrade to the 315 required the removal of approximately 1,800 wire-wrapped connections on the backplane. The NCR 315-RMC, released in July 1965, was the first commercially available computer to employ thin-film memory. This reduced the clock time to 800 nanoseconds.
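The slab layout described above — 12 bits holding either three 4-bit BCD digits or two 6-bit characters — can be sketched in Python. The function names are illustrative, not NCR terminology:

```python
def pack_bcd_slab(d2: int, d1: int, d0: int) -> int:
    """Pack three decimal digits (0-9) into one 12-bit slab as 4-bit BCD fields."""
    assert all(0 <= d <= 9 for d in (d2, d1, d0))
    return (d2 << 8) | (d1 << 4) | d0

def unpack_bcd_slab(slab: int) -> tuple[int, int, int]:
    """Split a 12-bit slab back into its three 4-bit BCD digits."""
    return (slab >> 8) & 0xF, (slab >> 4) & 0xF, slab & 0xF

def pack_char_slab(c1: int, c0: int) -> int:
    """Alternatively, pack two 6-bit character codes (0-63) into one slab."""
    assert 0 <= c1 <= 63 and 0 <= c0 <= 63
    return (c1 << 6) | c0

slab = pack_bcd_slab(9, 9, 9)   # the largest three-digit value, 999
print(f"{slab:012b}")           # 100110011001
print(unpack_bcd_slab(slab))    # (9, 9, 9)
```

The same 12 bits are simply interpreted differently depending on the instruction, which is what lets a slab hold either numeric or alphanumeric data.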

4.
Binary prefix
–
A binary prefix is a unit prefix for multiples of units in data processing, data transmission, and digital information, notably the bit and the byte, to indicate multiplication by a power of 2. The computer industry has used the units kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB, in at least two slightly different measurement systems. In citations of main memory capacity, gigabyte customarily means 1073741824 bytes; as this is the third power of 1024, and 1024 is a power of two, this usage is referred to as a binary measurement. In most other contexts, the industry uses the multipliers kilo, mega, giga, etc. in a manner consistent with their meaning in the International System of Units (SI): for example, a 500-gigabyte hard disk holds 500000000000 bytes. In contrast with the binary prefix usage, this use is described as a decimal prefix, as 1000 is a power of 10. The use of the same unit prefixes with two different meanings has caused confusion. In 2008, the IEC prefixes were incorporated into the ISO/IEC 80000 standard. Early computers used one of two addressing methods to access the memory: binary or decimal. For example, the IBM 701 used binary and could address 2048 words of 36 bits each, while the IBM 702 used decimal. By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. Early computer system documentation would specify the size with an exact number such as 4096 or 8192 words. These are all powers of two, and furthermore are small multiples of 2¹⁰, or 1024. As storage capacities increased, several different methods were developed to abbreviate these quantities. The method most commonly used today uses prefixes such as kilo, mega, and giga, and the corresponding symbols K, M, and G. The prefixes kilo- and mega-, meaning 1000 and 1000000 respectively, were commonly used in the electronics industry before World War II.
Along with giga- or G-, meaning 1000000000, they are now known as SI prefixes after the International System of Units, introduced in 1960 to formalize aspects of the metric system. The International System of Units does not define units for digital information, and this usage is not consistent with the SI; compliance with the SI requires that the prefixes take their 1000-based meaning. The use of K in the binary sense, as in a "32K core" meaning 32 × 1024 words, i.e. 32768 words, can be found as early as 1959. Gene Amdahl's seminal 1964 article on IBM System/360 used 1K to mean 1024, and this style was used by other computer vendors; the CDC 7600 System Description made extensive use of K as 1024. Thus the first binary prefix was born. The exact values 32768 words, 65536 words, and 131072 words would then be described as 32K, 65K, and 131K. This style was used from about 1965 to 1975. These two styles were used loosely around the same time, sometimes by the same company; in discussions of binary-addressed memories, the size was evident from context.
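The two measurement systems described above differ only in their base (1000 versus 1024), which a short Python sketch makes concrete (the function name and unit lists are illustrative):

```python
def format_size(n_bytes: int, binary: bool = True) -> str:
    """Render a byte count with binary (1024-based, IEC) or
    decimal (1000-based, SI) prefixes."""
    base = 1024 if binary else 1000
    units = ["", "Ki", "Mi", "Gi", "Ti"] if binary else ["", "k", "M", "G", "T"]
    value = float(n_bytes)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:g} {unit}B"
        value /= base

print(format_size(32768))                          # 32 KiB (the "32K" of 1960s usage)
print(format_size(500_000_000_000, binary=False))  # 500 GB, as marketed
print(format_size(500_000_000_000))                # 465.661 GiB, the same bytes in binary units
```

The last two lines show the same drive capacity under both conventions, which is the source of the confusion the IEC prefixes were designed to resolve.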

5.
International Electrotechnical Commission
–
The IEC also manages three global conformity assessment systems that certify whether equipment, systems, or components conform to its International Standards. The first International Electrical Congress took place in 1881 at the International Exposition of Electricity; at that time the International System of Electrical and Magnetic Units was agreed to. The IEC was instrumental in developing and distributing standards for units of measurement, particularly the gauss and hertz. It also first proposed a system of standards, the Giorgi System, which ultimately became the SI, or Système International d'unités. In 1938, it published a multilingual international vocabulary to unify terminology relating to electrical and electronic technologies; this effort continues, and the International Electrotechnical Vocabulary remains an important work in the electrical and electronic industries. The CISPR – in English, the International Special Committee on Radio Interference – is one of the groups founded by the IEC. Originally located in London, the commission moved to its current headquarters in Geneva in 1948. It has regional centres in Asia-Pacific, Latin America, and North America. Today, the IEC is the world's leading international organization in its field, and its standards are adopted as national standards by its members. The work is done by some 10,000 electrical and electronics experts from industry, government, academia, and test labs. IEC standards have numbers in the range 60000–79999, and their titles take a form such as IEC 60417, Graphical symbols for use on equipment. Following the Dresden Agreement with CENELEC, the numbers of older IEC standards were converted in 1997 by adding 60000; for example, IEC 27 became IEC 60027. Standards of the 60000 series are also preceded by EN to indicate that the IEC standard is also adopted by CENELEC as a European standard.
The IEC cooperates closely with the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). Standards developed jointly with ISO, such as ISO/IEC 26300, ISO/IEC 27001, and the CASCO ISO/IEC 17000 series, carry the acronym of both organizations. The use of the ISO/IEC prefix covers publications from ISO/IEC Joint Technical Committee 1 – Information Technology, as well as conformity assessment standards developed by ISO CASCO. Other standards developed in cooperation between IEC and ISO are assigned numbers in the 80000 series, such as IEC 82045-1. IEC standards are also being adopted by other certifying bodies such as BSI, CSA, UL & ANSI/INCITS, SABS, SAI, and SPC/GB; IEC standards adopted by other certifying bodies may have some noted differences from the original IEC standard. The IEC is made up of members, called national committees (NCs). National committees are constituted in different ways: some NCs are public sector only, and some are a combination of public and private sector. About 90% of those who prepare IEC standards work in industry.

6.
Metric prefix
–
A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or fraction of the unit. While all metric prefixes in use today are decadic, historically there have been a number of binary metric prefixes as well. Each prefix has a symbol that is prepended to the unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand: one kilogram is equal to one thousand grams. The prefix milli-, likewise, may be added to metre to indicate division by one thousand: one millimetre is equal to one thousandth of a metre. Decimal multiplicative prefixes have been a feature of all forms of the metric system, with six of these dating back to the system's introduction in the 1790s. Metric prefixes have even been prepended to non-metric units. The SI prefixes are standardized for use in the International System of Units (SI) by the International Bureau of Weights and Measures (BIPM) in resolutions dating from 1960 to 1991. Since 2009, they have formed part of the International System of Quantities. The BIPM specifies twenty prefixes for the International System of Units. Each prefix name has a symbol which is used in combination with the symbols for units of measure. For example, the symbol for kilo- is k, and is used to produce km, kg, and kW, which are the SI symbols for kilometre, kilogram, and kilowatt. Prefixes corresponding to an integer power of one thousand are generally preferred; hence 100 m is preferred over 1 hm or 10 dam. The prefixes hecto, deca, deci, and centi are commonly used for everyday purposes, and the centimetre is especially common. However, some building codes require that the millimetre be used in preference to the centimetre, because use of centimetres leads to extensive usage of decimal points. Prefixes may not be used in combination; this also applies to mass, for which the SI base unit (the kilogram) already contains a prefix.
For example, milligram is used instead of microkilogram. In the arithmetic of measurements having units, the units are treated as multiplicative factors to values. If they have prefixes, all but one of the prefixes must be expanded to their numeric multiplier: 1 km² means one square kilometre, or the area of a square of 1000 m by 1000 m, and not 1000 square metres. 2 Mm³ means two cubic megametres, or the volume of two cubes of 1000000 m by 1000000 m by 1000000 m, i.e. 2×10¹⁸ m³, and not 2000000 cubic metres. Examples: 5 cm = 5×10⁻² m = 5 × 0.01 m = 0.05 m. The prefixes, including those introduced after 1960, are used with any metric unit; metric prefixes may also be used with non-metric units. The choice of prefixes with a given unit is usually dictated by convenience of use. Unit prefixes for amounts that are larger or smaller than those actually encountered are seldom used.
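The rule that a prefix is expanded before the exponent is applied — 1 km² is (1000 m)², not 1000 m² — can be checked with a short Python sketch (the prefix table is a small illustrative subset):

```python
# A subset of SI prefix multipliers, for illustration
PREFIX = {"k": 1e3, "M": 1e6, "c": 1e-2, "m": 1e-3, "": 1.0}

def to_base_units(value: float, prefix: str, exponent: int = 1) -> float:
    """Expand the prefix before applying the exponent:
    1 km^2 means (1000 m)^2 = 1,000,000 m^2, not 1000 m^2."""
    return value * PREFIX[prefix] ** exponent

print(to_base_units(1, "k", 2))   # 1 km^2 -> 1,000,000 m^2
print(to_base_units(2, "M", 3))   # 2 Mm^3 -> 2e18 m^3
print(to_base_units(5, "c"))      # 5 cm   -> 0.05 m
```

Raising the multiplier to the unit's exponent is exactly the "expand all but one prefix" step described in the text.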

7.
Megabyte
–
The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB, but sometimes MByte is used. The unit prefix mega is a multiplier of 1000000 in the International System of Units. Therefore, one megabyte is one million bytes of information; this definition has been incorporated into the International System of Quantities. However, in the computer and information technology fields, several other definitions are used that arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1048576 bytes (1024²). However, most standards bodies have deprecated this usage in favor of a set of binary prefixes. Less common is a convention that used the megabyte to mean 1000×1024 bytes. The megabyte is commonly used to measure either 1000² bytes or 1024² bytes. The interpretation using base 1024 originated as compromise technical jargon for the byte multiples that needed to be expressed by powers of 2 but lacked a convenient name, as 1024 approximates 1000, roughly corresponding to the SI prefix kilo-. In 1998 the International Electrotechnical Commission (IEC) proposed standards for binary prefixes requiring the use of megabyte to strictly denote 1000² bytes and mebibyte to denote 1024² bytes. By the end of 2009, the IEC Standard had been adopted by the IEEE, EU, ISO, and NIST. The Mac OS X 10.6 file manager is a notable example of this usage in software: since Snow Leopard, file sizes are reported in decimal units. Base 2: 1 MB = 1048576 bytes is the definition used by Microsoft Windows in reference to computer memory, such as RAM; this definition is synonymous with the binary prefix mebibyte. Mixed: 1 MB = 1024000 bytes is the definition used to describe the formatted capacity of the 1.44 MB 3.5-inch HD floppy disk.
Semiconductor memory doubles in size for each address lane added to an integrated circuit package, which favors counts that are powers of two. The capacity of a disk drive, by contrast, is the product of the sector size, the number of sectors per track, the number of tracks per side, and the number of disk platters in the drive. Changes in any of these factors would not usually double the size; sector sizes were set as powers of two for convenience in processing. It was a natural extension to give the capacity of a disk drive in multiples of the sector size, giving a mix of decimal and binary multiples. Depending on compression methods and file format, a megabyte of data can roughly be: a 4-megapixel JPEG image with normal compression; approximately 1 minute of 128 kbit/s MP3-compressed music; 6 seconds of uncompressed CD audio; or a typical English book volume in plain text format. The human genome consists of DNA representing about 800 MB of data.
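The three "megabyte" conventions above can be made concrete in Python. The floppy geometry used here — 2880 sectors of 512 bytes — is the standard 3.5-inch HD format:

```python
# Three historical "megabyte" definitions (exact values)
MB_DECIMAL = 1000 ** 2        # 1,000,000 bytes (SI; disk marketing)
MB_BINARY  = 1024 ** 2        # 1,048,576 bytes (RAM; equals 1 MiB)
MB_MIXED   = 1000 * 1024      # 1,024,000 bytes (the "1.44 MB" floppy)

# The 3.5-inch HD floppy holds 2880 sectors of 512 bytes each:
floppy_bytes = 2880 * 512
print(floppy_bytes)                        # 1474560
print(floppy_bytes / MB_MIXED)             # 1.44 -> hence "1.44 MB"
print(round(floppy_bytes / MB_BINARY, 3))  # 1.406 in binary megabytes (MiB)
print(round(floppy_bytes / MB_DECIMAL, 3)) # 1.475 in decimal megabytes
```

The same physical capacity yields three different "megabyte" figures, which is why the mixed definition only survives in the floppy-disk context.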

8.
Gigabyte
–
The gigabyte is a multiple of the unit byte for digital information. The prefix giga means 10⁹ in the International System of Units, and the unit symbol for the gigabyte is GB. However, the term is used in some fields of computer science and information technology to denote 1073741824 (1024³) bytes. The use of gigabyte may thus be ambiguous. To address this ambiguity, the International System of Quantities standardizes the binary prefixes, which denote a series of integer powers of 1024; with these prefixes, a memory module that is labeled as having the size 1GB has one gibibyte (1GiB) of storage capacity. The term gigabyte is commonly used to mean either 1000³ bytes or 1024³ bytes. The latter binary usage originated as compromise technical jargon for byte multiples that needed to be expressed in a power of 2, but lacked a convenient name, as 1024 is approximately 1000, roughly corresponding to SI multiples. In 1998 the International Electrotechnical Commission (IEC) published standards for binary prefixes, requiring that the gigabyte strictly denote 1000³ bytes and gibibyte denote 1024³ bytes. By the end of 2007, the IEC Standard had been adopted by the IEEE, EU, and NIST, and this is the definition recommended by the International Electrotechnical Commission. The file manager of Mac OS X version 10.6 and later versions is an example of this usage in software. The binary definition uses powers of the base 2, in keeping with the architectural principle of binary computers. This usage is widely promulgated by some operating systems, such as Microsoft Windows in reference to computer memory; this definition is synonymous with the unambiguous unit gibibyte. Since the first disk drive, the IBM 350, disk drive manufacturers have expressed hard drive capacities using decimal prefixes. With the advent of gigabyte-range drive capacities, manufacturers based most consumer hard drive capacities on certain size classes expressed in decimal gigabytes, such as 500 GB.
The exact capacity of a given drive model is usually slightly larger than the class designation. Practically all manufacturers of hard disk drives and flash-memory disk devices continue to define one gigabyte as 1000000000 bytes. Some operating systems, such as OS X, express hard drive capacity or file size using decimal multipliers, while others report size using binary multipliers; this discrepancy causes confusion, as a disk with an advertised capacity of, for example, 400 GB might be reported by the operating system as 372 GB, meaning 372 GiB. The JEDEC memory standards use IEEE 100 nomenclature, which quotes the gigabyte as 1073741824 bytes. This means that a 300 GB hard disk might be indicated variously as 300 GB, 279 GB, or 279 GiB, depending on the operating system. As storage sizes increase and larger units are used, these differences become even more pronounced. Some legal challenges have been waged over this confusion, such as a lawsuit against drive manufacturer Western Digital; Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity.
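The 400 GB versus 372 GiB discrepancy above is pure arithmetic, as a quick Python check shows (truncating to whole units mirrors what file managers typically display; the function name is illustrative):

```python
GIB = 1024 ** 3  # one gibibyte = 1073741824 bytes

def reported_gib(advertised_gb: float) -> float:
    """Convert a drive capacity advertised in decimal GB into the
    binary GiB figure an OS using binary multipliers would report."""
    return advertised_gb * 1_000_000_000 / GIB

print(int(reported_gib(400)))  # 372, matching the example in the text
print(int(reported_gib(300)))  # 279, matching the 300 GB disk example
```

No bytes are "missing" in either case; the two figures are the same capacity expressed in different units.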

9.
Nibble
–
In computing, a nibble is a four-bit aggregation, or half an octet. It is also known as a half-byte or tetrade. In a networking or telecommunication context, the nibble is often called a semi-octet, quadbit, or quartet. A nibble has sixteen possible values. A nibble can be represented by a single hexadecimal digit and called a hex digit. A full byte is represented by two hexadecimal digits; therefore, it is common to display a byte of information as two nibbles. Sometimes the set of all 256 byte values is represented as a 16×16 table. Four-bit computer architectures use groups of four bits as their fundamental unit. Such architectures were used in early microprocessors, pocket calculators, and pocket computers. They continue to be used in some microcontrollers. The term nibble originates from its representing half a byte, with byte a homophone of the English word bite. The alternative spelling nybble reflects the spelling of byte, as noted in editorials in Kilobaud. Another early recorded use of the term nybble was in 1977 within the consumer-banking technology group at Citibank, which created a pre-ISO 8583 standard for transactional messages between cash machines and Citibank's data centers that used the basic informational unit NABBLE. The nibble is used to describe the amount of memory used to store a digit of a number stored in packed decimal format within an IBM mainframe. This technique is used to make computations faster and debugging easier. An 8-bit byte is split in half, and each nibble is used to store one decimal digit. The last nibble of the variable is reserved for the sign; thus a variable which can store up to nine digits would be packed into 5 bytes. Ease of debugging resulted from the numbers being readable in a hex dump, where two hex numbers are used to represent the value of a byte, as 16×16 = 2⁸. For example, the five-byte BCD value 31 41 59 26 5C represents the decimal value +314159265.
Historically, there are cases where nybble was used for a group of bits greater than 4. In the Apple II microcomputer line, much of the disk drive control and group-coded recording was implemented in software. Writing data to a disk was done by converting 256-byte pages into sets of 5-bit nibbles; moreover, 1982 documentation for the Integrated Woz Machine refers consistently to an 8-bit nibble. The term byte once had the same ambiguity and meant a set of bits but not necessarily 8, hence the distinction of bytes and octets, and of nibbles and quartets. Today, the terms byte and nibble almost always refer to 8-bit and 4-bit collections respectively, and are rarely used to express any other sizes. The terms semi-nibble and nibblet have occasionally been used to refer to half a nibble.
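The packed-decimal layout described above — one digit per nibble, sign in the final nibble — can be reproduced in a short Python sketch. Using C for positive and D for negative follows the common IBM convention; the function name is illustrative:

```python
def pack_decimal(value: int, num_bytes: int = 5) -> bytes:
    """Pack a signed integer into IBM-style packed decimal:
    one digit per nibble, with the sign (C=+, D=-) in the last nibble."""
    sign = 0xC if value >= 0 else 0xD
    digits = f"{abs(value):0{num_bytes * 2 - 1}d}"  # zero-pad to fill all nibbles
    nibbles = [int(d) for d in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

packed = pack_decimal(314159265)
print(packed.hex())   # 314159265c -> the "31 41 59 26 5C" example in the text
```

Because each nibble is one decimal digit, the value reads off directly in a hex dump, which is exactly the debugging convenience the text describes.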

10.
Byte
–
The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer. The size of the byte has historically been hardware-dependent, and no definitive standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte; the international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits, and the popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size. The unit symbol for the byte was designated as the upper-case letter B by the IEC and IEEE, in contrast to the bit. Internationally, the unit octet, symbol o, explicitly denotes a sequence of eight bits, eliminating the ambiguity of the byte. The term byte is a respelling of bite, chosen to avoid accidental mutation to bit. Early computers used a variety of four-bit binary-coded decimal representations and six-bit character codes; these representations included alphanumeric characters and special graphical symbols, and were used in the U.S. Government and universities during the 1960s. The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, even though in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony, first on long-distance trunk lines; these used the eight-bit µ-law encoding. This large investment promised to reduce transmission costs for eight-bit data. The development of microprocessors in the 1970s popularized this storage size. A four-bit quantity is called a nibble, also nybble. The term octet is used to unambiguously specify a size of eight bits, and is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well, at least in Western Europe; however, this usage is no longer common.
The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. The unit symbol for the byte is specified in IEC 80000-13 and IEEE 1541 as the upper-case character B. In the International System of Quantities, B is the symbol of the bel, a unit of logarithmic power ratios named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit.

11.
Units of information
–
In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are used to measure the information content or entropy of random variables. The most common units are the bit, the capacity of a system which can exist in only two states, and the byte, which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes or the newer IEC binary prefixes. Information capacity is a dimensionless quantity. In particular, if b is a positive integer, then a system with N possible states can store log_b N units of information in base b. When b is 2, the unit is the shannon, equal to the information content of one bit. A system with 8 possible states, for example, can store up to log₂ 8 = 3 bits of information. Other units that have been named include: base b = 3, where the unit is called the trit and is equal to log₂ 3 bits; base b = 10, where the unit is called the decimal digit, hartley, ban, decit, or dit; and base b = e, the base of natural logarithms, where the unit is called a nat, nit, or nepit, and is worth log₂ e bits. Several conventional names are used for collections or groups of bits. A byte can represent 256 distinct values, such as the integers 0 to 255, or -128 to 127. The IEEE 1541-2002 standard specifies B as the symbol for byte. Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes. A group of four bits, or half a byte, is sometimes called a nibble or nybble; this unit is most often used in the context of hexadecimal number representations. Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is defined by the size of the registers in the computer's CPU.
Some machine instructions and computer number formats use two words (a double word) or four words (a quad word). Computer memory caches usually operate on blocks of memory that consist of several consecutive words; these units are customarily called cache blocks or, in CPU caches, cache lines. Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages. Terms for large quantities of bits can be formed using the range of SI prefixes for powers of 10, e.g. kilo- = 10³ = 1000 and mega- = 10⁶ = 1000000. These prefixes are often used for multiples of bytes, as in kilobyte and megabyte.
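The log-based definitions above can be checked with a few lines of Python (the function name is illustrative):

```python
import math

def bits_of_info(num_states: int) -> float:
    """Information, in bits (shannons), needed to distinguish
    num_states equally likely states: log2(num_states)."""
    return math.log2(num_states)

print(bits_of_info(8))              # 3.0 -> a system with 8 states stores 3 bits
print(bits_of_info(256))            # 8.0 -> one byte distinguishes 256 values
print(round(math.log2(3), 3))       # 1.585 bits per trit
print(round(math.log2(10), 3))      # 3.322 bits per decimal digit (hartley)
print(round(math.log2(math.e), 3))  # 1.443 bits per nat
```

The same quantities measured in a different base just rescale by a constant factor, which is why information capacity is dimensionless.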