1.
Decimal
–
The decimal numeral system has ten as its base, which, in decimal, is written 10, as is the base in every positional numeral system. It is the base most widely used by modern civilizations. Decimal fractions have terminating decimal representations, while other fractions have repeating decimal representations. Decimal notation is the writing of numbers in a base-ten numeral system. Examples are Brahmi numerals, Greek numerals, Hebrew numerals, and Roman numerals. Roman numerals have symbols for the decimal powers and secondary symbols for half these values. Brahmi numerals have symbols for the nine numbers 1–9, the nine decades 10–90, plus a symbol for 100. Chinese numerals have symbols for 1–9 and additional symbols for powers of ten, which in modern usage reach 10^72. Positional decimal systems include a zero and use symbols for the ten values to represent any number; positional notation uses positions for each power of ten: units, tens, hundreds, thousands, etc. Each position has a value ten times that of the position to its right. There were at least two independent sources of positional decimal systems in ancient civilization: the Chinese counting rod system and the Hindu–Arabic numeral system. Ten is the count of fingers and thumbs on both hands; the English word digit, as well as its translation in many languages, is also the anatomical term for fingers and toes. In English, decimal means tenth and decimate means reduce by a tenth. The symbols used in different areas are not identical; for instance, Western Arabic numerals differ from the forms used by other Arab cultures. A decimal fraction is a fraction whose denominator is a power of ten. For example, the decimal fractions 8/10, 1489/100, 24/100000, and 58900/10000 are expressed in decimal notation as 0.8, 14.89, 0.00024, and 5.8900 respectively. In English-speaking, some Latin American, and many Asian countries, a period or raised period is used as the decimal separator; in many other countries, particularly in Europe, a comma is used. The integer part, or integral part, of a number is the part to the left of the decimal separator; the part from the separator to the right is the fractional part. It is usual for a number that consists only of a fractional part to have a leading zero in its notation. Any rational number with a denominator whose only prime factors are 2 and/or 5 may be expressed as a decimal fraction and has a finite decimal expansion: 1/2 = 0.5, 1/20 = 0.05, 1/5 = 0.2, 1/50 = 0.02, 1/4 = 0.25, 1/40 = 0.025, 1/25 = 0.04, 1/8 = 0.125, 1/125 = 0.008, 1/10 = 0.1.
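A minimal Python sketch of the terminating-expansion rule just described (the helper name is illustrative, not a standard API): a fraction in lowest terms has a finite decimal expansion exactly when its reduced denominator has no prime factors other than 2 and 5.

    from math import gcd

    def has_finite_decimal(numerator: int, denominator: int) -> bool:
        """True if numerator/denominator terminates in base ten, i.e. the
        reduced denominator has no prime factors other than 2 and 5."""
        d = denominator // gcd(numerator, denominator)  # reduce the fraction
        for p in (2, 5):
            while d % p == 0:
                d //= p
        return d == 1

    print(has_finite_decimal(1, 8))    # True  (0.125)
    print(has_finite_decimal(1, 40))   # True  (0.025)
    print(has_finite_decimal(1, 3))    # False (0.333... repeats)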
2.
Metric prefix
–
A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or fraction of the unit. While all metric prefixes in common use today are decadic, historically there have been a number of binary metric prefixes as well. Each prefix has a symbol that is prepended to the unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand; the prefix milli-, likewise, may be added to metre to indicate division by one thousand: one millimetre is equal to one thousandth of a metre. Decimal multiplicative prefixes have been a feature of all forms of the metric system, with six of them dating back to the system's introduction in the 1790s. Metric prefixes have even been prepended to non-metric units. The SI prefixes are standardized for use in the International System of Units by the International Bureau of Weights and Measures in resolutions dating from 1960 to 1991. Since 2009, they have formed part of the International System of Quantities. The BIPM specifies twenty prefixes for the International System of Units. Each prefix name has a symbol which is used in combination with the symbols for units of measure. For example, the symbol for kilo- is k, and is used to produce km, kg, and kW, which are the SI symbols for kilometre, kilogram, and kilowatt. Prefixes corresponding to an integer power of one thousand are generally preferred; hence 100 m is preferred over 1 hm or 10 dam. The prefixes hecto, deca, deci, and centi are nonetheless commonly used for everyday purposes, and the centimetre is especially common. However, some building codes require that the millimetre be used in preference to the centimetre, because use of centimetres leads to extensive usage of decimal points. Prefixes may not be used in combination; this also applies to mass, for which the SI base unit already contains a prefix. For example, milligram is used instead of microkilogram. In the arithmetic of measurements having units, the units are treated as multiplicative factors to values. If they have prefixes, all but one of the prefixes must be expanded to their numeric multiplier: 1 km^2 means one square kilometre, the area of a square of 1000 m by 1000 m, and not 1000 square metres; 2 Mm^3 means two cubic megametres, the volume of two cubes of 1000000 m by 1000000 m by 1000000 m, i.e. 2×10^18 m^3, and not 2000000 cubic metres. Example: 5 cm = 5×10^-2 m = 5 × 0.01 m = 0.05 m. The prefixes, including those introduced after 1960, are used with any metric unit, and metric prefixes may also be used with some non-metric units. The choice of prefixes with a given unit is usually dictated by convenience of use. Unit prefixes for amounts that are much larger or smaller than those actually encountered are seldom used.
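A Python sketch of the prefix-expansion arithmetic described above (the table and helper names are illustrative): note that the prefix multiplier is raised to the unit's power, which is why 1 km^2 is 10^6 m^2 and not 1000 m^2.

    SI_PREFIXES = {"k": 1e3, "M": 1e6, "G": 1e9, "m": 1e-3, "c": 1e-2, "u": 1e-6}

    def to_base_units(value: float, prefix: str, power: int = 1) -> float:
        """Convert a prefixed quantity to base units; the prefix multiplier
        is raised to the unit's power, so 1 km^2 = (10^3 m)^2 = 10^6 m^2."""
        return value * SI_PREFIXES[prefix] ** power

    print(to_base_units(1, "k", power=2))   # 1 km^2 -> 1000000.0 m^2
    print(to_base_units(2, "M", power=3))   # 2 Mm^3 -> 2e+18 m^3
    print(to_base_units(5, "c"))            # 5 cm   -> 0.05 m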
3.
Megabyte
–
The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB, but sometimes MByte is used. The unit prefix mega is a multiplier of 1000000 in the International System of Units. Therefore, one megabyte is one million bytes of information; this definition has been incorporated into the International System of Quantities. However, in the computer and information fields, several other definitions are used that arose for historical reasons of convenience. A common usage has been to define one megabyte as 1048576 bytes. However, most standards bodies have deprecated this usage in favor of a set of binary prefixes. Less common is a convention that used the megabyte to mean 1000×1024 bytes. The megabyte is thus commonly used to measure either 1000^2 bytes or 1024^2 bytes. The interpretation using base 1024 originated as compromise technical jargon for byte multiples that needed to be expressed by powers of 2 but lacked a convenient name: as 1024 approximates 1000, it roughly corresponds to the SI prefix kilo-. In 1998 the International Electrotechnical Commission proposed standards for binary prefixes, requiring the use of megabyte to strictly denote 1000^2 bytes and mebibyte to denote 1024^2 bytes. By the end of 2009, the IEC standard had been adopted by the IEEE, EU, and ISO. The Mac OS X 10.6 file manager is a notable example of this usage in software: since Snow Leopard, file sizes are reported in decimal units. Base 2: 1 MB = 1048576 bytes is the definition used by Microsoft Windows in reference to computer memory, such as RAM; this definition is synonymous with the binary prefix mebibyte. Mixed: 1 MB = 1024000 bytes (1000×1024 bytes) is the definition used to describe the formatted capacity of the 1.44 MB 3.5-inch HD floppy disk. Semiconductor memory doubles in size for each address lane added to an integrated circuit package. The capacity of a disk drive is the product of the sector size, the number of sectors per track, the number of tracks per side, and the number of disk platters in the drive; changes in any of these factors would not usually double the size. Sector sizes were set as powers of two for convenience in processing. It was a natural extension to give the capacity of a disk drive in multiples of the sector size, giving a mix of decimal and binary multiples. Depending on compression methods and file format, a megabyte of data can roughly be: a 4-megapixel JPEG image with normal compression; approximately 1 minute of 128 kbit/s MP3 compressed music; 6 seconds of uncompressed CD audio; or a typical English book volume in plain text format. The human genome consists of DNA representing about 800 MB of data.
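The three competing definitions can be made concrete with a small Python sketch; the floppy's formatted capacity of 1474560 bytes comes out as "1.44 MB" only in the mixed convention.

    MB_SI     = 1000 ** 2      # 1_000_000 bytes (SI / IEC "megabyte")
    MB_BINARY = 1024 ** 2      # 1_048_576 bytes (historical usage; IEC "mebibyte")
    MB_MIXED  = 1000 * 1024    # 1_024_000 bytes (the "1.44 MB" floppy convention)

    floppy_bytes = 1_474_560   # formatted capacity of a 3.5-inch HD floppy
    print(floppy_bytes / MB_SI)      # ~1.47
    print(floppy_bytes / MB_BINARY)  # ~1.41
    print(floppy_bytes / MB_MIXED)   # 1.44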
4.
Gigabyte
–
The gigabyte is a multiple of the unit byte for digital information. The prefix giga means 10^9 in the International System of Units; the unit symbol for the gigabyte is GB. However, the term is also used in some fields of computer science and information technology to denote 1073741824 (1024^3) bytes. The use of gigabyte may thus be ambiguous. To address this ambiguity, the International System of Quantities standardizes the binary prefixes, which denote a series of integer powers of 1024; with these prefixes, a memory module that is labeled as having the size 1 GB has one gibibyte of storage capacity. The term gigabyte is commonly used to mean either 1000^3 bytes or 1024^3 bytes. The latter binary usage originated as compromise technical jargon for byte multiples that needed to be expressed in a power of 2 but lacked a convenient name, as 1024 is approximately 1000, roughly corresponding to SI multiples. In 1998 the International Electrotechnical Commission published standards for binary prefixes, requiring that the gigabyte strictly denote 1000^3 bytes and gibibyte denote 1024^3 bytes. By the end of 2007, the IEC standard had been adopted by the IEEE, EU, and NIST, and this is the definition recommended by the International Electrotechnical Commission. The file manager of Mac OS X version 10.6 and later versions is an example of this usage in software. The binary definition uses powers of the base 2, in keeping with the binary design of computers. This usage is widely promulgated by some operating systems, such as Microsoft Windows in reference to computer memory; this definition is synonymous with the unambiguous unit gibibyte. Since the first disk drive, the IBM 350, disk drive manufacturers have expressed hard drive capacities using decimal prefixes. With the advent of gigabyte-range drive capacities, manufacturers based most consumer hard drive capacities in certain size classes expressed in decimal gigabytes, such as 500 GB; the exact capacity of a given model is usually slightly larger than the class designation. Practically all manufacturers of disk drives and flash-memory disk devices continue to define one gigabyte as 1000000000 bytes. Some operating systems, such as OS X, express hard drive capacity or file size using decimal multipliers, while others use binary multipliers; this discrepancy causes confusion, as a disk with an advertised capacity of, for example, 400 GB might be reported by the operating system as 372 GB, meaning 372 GiB. The JEDEC memory standards use IEEE 100 nomenclature, which quotes the gigabyte as 1073741824 bytes. This means that a 300 GB hard disk might be indicated variously as 300 GB, 279 GB, or 279 GiB, depending on the operating system. As storage sizes increase and larger units are used, these differences become even more pronounced. Some legal challenges have been waged over this confusion, such as a lawsuit against drive manufacturer Western Digital; Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity.
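A minimal Python sketch of the decimal-versus-binary discrepancy described above (illustrative helper name):

    def decimal_gb_to_gib(gb: float) -> float:
        """Convert a decimal-gigabyte capacity to gibibytes (1024^3 bytes)."""
        return gb * 1000**3 / 1024**3

    print(round(decimal_gb_to_gib(400), 1))  # 372.5, the "372 GB" a binary-reporting OS shows
    print(round(decimal_gb_to_gib(300), 1))  # 279.4, matching the 279 GiB figure above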
5.
Binary prefix
–
A binary prefix is a unit prefix for multiples of units in data processing, data transmission, and digital information, notably the bit and the byte, to indicate multiplication by a power of 2. The computer industry has historically used the units kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB, in at least two different measurement systems. In citations of main memory capacity, gigabyte customarily means 1073741824 bytes; as this is the third power of 1024, and 1024 is a power of two (2^10), this usage is referred to as a binary measurement. In most other contexts, the industry uses the multipliers kilo, mega, giga, etc. in a manner consistent with their meaning in the International System of Units; for example, a 500 gigabyte hard disk holds 500000000000 bytes. In contrast with the binary prefix usage, this use is described as a decimal prefix, as 1000 is a power of 10. The use of the same unit prefixes with two different meanings has caused confusion. In 2008, the IEC prefixes were incorporated into the ISO/IEC 80000 standard. Early computers used one of two addressing methods to access the system memory: binary or decimal. For example, the IBM 701 used binary addressing and could address 2048 words of 36 bits each, while the IBM 702 used decimal addressing. By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. Early computer system documentation would specify the memory size with an exact number of words such as 4096 or 8192. These are powers of two, and furthermore small multiples of 2^10, or 1024. As storage capacities increased, several different methods were developed to abbreviate these quantities. The method most commonly used today uses prefixes such as kilo, mega, and giga, and the corresponding symbols K, M, and G. The prefixes kilo- and mega-, meaning 1000 and 1000000 respectively, were commonly used in the electronics industry before World War II. Along with giga- or G-, meaning 1000000000, they are now known as SI prefixes after the International System of Units, introduced in 1960 to formalize aspects of the metric system. The International System of Units does not define units for digital information, and the binary usage is not consistent with the SI; compliance with the SI requires that the prefixes take their 1000-based meaning. The use of K in the binary sense, as in a 32K core meaning 32 × 1024 words, i.e. 32768 words, can be found as early as 1959. Gene Amdahl's seminal 1964 article on the IBM System/360 used 1K to mean 1024, and this style was used by other computer vendors; the CDC 7600 System Description made extensive use of K as 1024. Thus the first binary prefix was born. The exact values 32768 words, 65536 words, and 131072 words would then be described as 32K, 65K, and 131K. This style was used from about 1965 to 1975. These two styles were used loosely around the same time, sometimes by the same company, and in discussions of binary-addressed memories the exact size was evident from context.
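A small Python sketch contrasting the two measurement systems; the formatting helper is illustrative, not a standard library API.

    def format_bytes(n: int, binary: bool = False) -> str:
        """Format a byte count with decimal (SI) or binary (IEC) prefixes."""
        base = 1024 if binary else 1000
        units = ["B", "KiB", "MiB", "GiB", "TiB"] if binary else ["B", "kB", "MB", "GB", "TB"]
        value = float(n)
        for unit in units:
            if value < base or unit == units[-1]:
                return f"{value:.2f} {unit}"
            value /= base

    n = 500_000_000_000                  # a "500 gigabyte" hard disk
    print(format_bytes(n))               # 500.00 GB  (decimal prefix, as marketed)
    print(format_bytes(n, binary=True))  # 465.66 GiB (binary prefix, as often reported)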
6.
Byte
–
The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer. The size of the byte has historically been hardware dependent, and no standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte; the international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits, and the popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size. The unit symbol for the byte was designated as the upper-case letter B by the IEC and IEEE, in contrast to the bit. Internationally, the unit octet, symbol o, explicitly denotes a sequence of eight bits, eliminating the ambiguity of the byte. The word byte itself is a respelling of bite, chosen to avoid accidental mutation to bit. Early computers used a variety of four-bit binary coded decimal representations; these representations included alphanumeric characters and special graphical symbols, and such codes were in use by the U.S. government and universities during the 1960s. The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, even though in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony first on long-distance trunk lines; these used the eight-bit µ-law encoding. This large investment promised to reduce costs for eight-bit data. The development of microprocessors in the 1970s popularized this storage size. A four-bit quantity is called a nibble, also nybble. The term octet is used to unambiguously specify a size of eight bits and is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well, at least in Western Europe; however, this usage is no longer common. The exact origin of that term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. The unit symbol for the byte is specified in IEC 80000-13 and IEEE 1541. In the International System of Quantities, B is the symbol of the bel, a unit of logarithmic power ratios named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit.
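A brief Python illustration of the 8-bit byte's value range and one-byte-per-character text encoding:

    BITS_PER_BYTE = 8
    print(2 ** BITS_PER_BYTE)   # 256 distinct values, i.e. 0 through 255

    data = "byte".encode("ascii")  # one byte per character in ASCII
    print(list(data))              # [98, 121, 116, 101]
    print(f"{data[0]:08b}")        # 01100010, the eight bits encoding 'b'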
7.
Computer data storage
–
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. It is a core function and fundamental component of computers. The central processing unit of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU, and slower but larger and cheaper options farther away. In the von Neumann architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions; most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data is encoded by assigning a bit pattern to each character, digit, or multimedia object. By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms. A random bit flip is typically corrected upon detection. The cyclic redundancy check method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow, in many cases, a string of bits to be represented by a shorter bit string, with the original string reconstructed when needed. This utilizes substantially less storage for many types of data, at the cost of more computation. Analysis of the trade-off between storage cost savings and the costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit. In contemporary usage, memory is usually semiconductor storage read-write random-access memory, typically DRAM or other forms of fast but temporary storage.
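A minimal Python sketch of error detection with a cyclic redundancy check, as mentioned above (using the standard-library zlib.crc32; the data and the simulated bit flip are illustrative):

    import zlib

    data = bytearray(b"the works of Shakespeare")
    checksum = zlib.crc32(data)          # computed when the data is written

    data[3] ^= 0x01                      # simulate a random single-bit flip
    if zlib.crc32(data) != checksum:     # re-read and compare
        print("corruption detected")     # a real system would retry the read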
8.
International System of Units
–
The International System of Units is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units. The system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the centimetre–gram–second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution, with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and the metre to the International Bureau of Weights and Measures. In 1921, the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin, and candela. In 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre, and grave for the units of length, area, capacity, and mass, respectively. The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes, such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions based on mass, length, and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the Bureau's prime purpose was a periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the authoritative version of all official documents published by or on behalf of the CGPM is the French-language version.
9.
Orders of magnitude (numbers)
–
This list contains selected positive numbers in increasing order, including counts of things, dimensionless quantities, and probabilities.
Mathematics – Writing: Approximately 10^−183,800 is a rough first estimate of the probability that a monkey typing at random reproduces the text of Hamlet; taking punctuation, capitalization, and spacing into account, the actual probability is far lower, around 10^−360,783.
Computing: The number 1×10^−6176 is equal to the smallest positive non-zero value that can be represented by a quadruple-precision IEEE decimal floating-point value.
Computing: The number 6.5×10^−4966 is approximately equal to the smallest positive non-zero value that can be represented by a quadruple-precision IEEE floating-point value.
Computing: The number 3.6×10^−4951 is approximately equal to the smallest positive non-zero value that can be represented by an 80-bit x86 double-extended IEEE floating-point value.
Computing: The number 1×10^−398 is equal to the smallest positive non-zero value that can be represented by a double-precision IEEE decimal floating-point value.
Computing: The number 4.9×10^−324 is approximately equal to the smallest positive non-zero value that can be represented by a double-precision IEEE floating-point value.
Computing: The number 1×10^−101 is equal to the smallest positive non-zero value that can be represented by a single-precision IEEE decimal floating-point value.
Mathematics: The probability in a game of bridge of all four players getting a complete suit is approximately 4.47×10^−28.
ISO: yocto-
ISO: zepto-
Mathematics: The probability of matching 20 numbers for 20 in a game of keno is approximately 2.83×10^−19.
ISO: atto-
Mathematics: The probability of rolling snake eyes 10 times in a row on a pair of dice is about 2.74×10^−16.
ISO: micro-
Mathematics – Poker: The odds of being dealt a royal flush in poker are 649,739 to 1 against.
Mathematics – Poker: The odds of being dealt a straight flush (other than a royal flush) in poker are 72,192 to 1 against.
Mathematics – Poker: The odds of being dealt four of a kind in poker are 4,164 to 1 against, for a probability of 2.4×10^−4.
ISO: milli-
Mathematics – Poker: The odds of being dealt a full house in poker are 693 to 1 against, for a probability of 1.4×10^−3.
Mathematics – Poker: The odds of being dealt a flush in poker are 507.8 to 1 against.
Mathematics – Poker: The odds of being dealt a straight in poker are 253.8 to 1 against, for a probability of 4×10^−3.
Physics: α ≈ 0.007297352570, the fine-structure constant.
ISO: deci-
Mathematics – Poker: The odds of being dealt only one pair in poker are about 5 to 2 against, for a probability of 0.42.
Demography: The population of Monowi, a village in Nebraska, is one.
Mathematics: √2 ≈ 1.414213562373095049, the ratio of the diagonal of a square to its side length.
Mathematics: φ ≈ 1.618033988749894848, the golden ratio.
Mathematics: The binary number system, understood by most computers, has 2 digits: 0 and 1.
Human scale: There are 10 digits on a pair of human hands, and 10 toes on a pair of human feet.
Mathematics: The number system used in everyday life, the decimal system, has 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
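Two of the values quoted above can be checked with a few lines of Python (standard library only):

    import math

    # Smallest positive double-precision IEEE value (subnormal), about 4.9e-324:
    print(5e-324)             # the nearest representable literal prints as 5e-324

    # Odds of a royal flush: 4 royal flushes among C(52, 5) five-card hands.
    hands = math.comb(52, 5)  # 2,598,960 possible hands
    print(hands // 4 - 1)     # 649739, i.e. 649,739 to 1 against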
10.
Long and short scales
–
Long scale: Every new term greater than million is one million times larger than the previous term; thus, billion means a million millions, trillion means a million billions, and so on. Short scale: Every new term greater than million is one thousand times larger than the previous term; thus, billion means a thousand millions, trillion means a thousand billions, and so on. For whole numbers less than a thousand million the two scales are identical. From a thousand million up, the two scales diverge, using the same words for different numbers, which can cause misunderstanding. Countries where the long scale is currently used include most countries in continental Europe and most French-speaking and Spanish-speaking countries. The short scale is now used in most English-speaking and Arabic-speaking countries, in Brazil, and in the former Soviet Union. Number names are rendered in the language of the country, but are similar everywhere due to shared etymology. Some languages, particularly in East Asia and South Asia, have large number naming systems that are different from both the long and short scales, for example the Indian numbering system. After several decades of increasing informal British usage of the short scale, in 1974 the government of the UK adopted it, and with very few exceptions British usage and American usage are now identical. The first recorded use of the terms short scale and long scale was by the French mathematician Geneviève Guitel in 1975. At and above a million, the same names are used in the two scales to refer to numbers differing by a factor of an integer power of 1,000. Each scale has a logical justification to explain the use of each such differing numerical name: the short-scale logic is based on powers of one thousand, whereas the long-scale logic is based on powers of one million. In both scales, the prefix bi- refers to 2 and tri- refers to 3, etc. However, only in the long scale do the prefixes beyond one million indicate the actual power or exponent (of one million); in the short scale, the prefixes refer to one less than the exponent (of one thousand). The word million derives from the Old French milion, from the earlier Old Italian milione, an intensification of the Latin word mille, a thousand. That is, a million is a big thousand, much as a great gross is a dozen gross or 12×144 = 1728. The word milliard, or its translation, is found in many European languages and is used in those languages for 10^9. However, it is unknown in American English, which uses billion, and not used in British English, which preferred to use thousand million before the current usage of billion. The financial term yard, which derives from milliard, is used on financial markets because, unlike the term billion, it is internationally unambiguous and phonetically distinct from million. Likewise, many long scale countries use the word billiard for one thousand long scale billions.
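A Python sketch of the two naming rules (the helper and the prefix table are illustrative): the short scale assigns the n-th Latin prefix the value 10^(3n+3), the long scale 10^(6n).

    PREFIX_NUMBER = {"m": 1, "b": 2, "tr": 3, "quadr": 4}

    def scale_value(name: str, long_scale: bool) -> int:
        """Value of a number name: 10^(6n) long scale, 10^(3n+3) short scale."""
        n = PREFIX_NUMBER[name.removesuffix("illion")]
        return 10 ** (6 * n) if long_scale else 10 ** (3 * n + 3)

    print(scale_value("billion", long_scale=False))   # 10^9,  a thousand millions
    print(scale_value("billion", long_scale=True))    # 10^12, a million millions
    print(scale_value("trillion", long_scale=True))   # 10^18, a million billions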
11.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third position. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems, e.g. Solaris and Linux, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner (see the sketch after this paragraph); 16-bit versions of Microsoft Windows used cooperative multi-tasking, while 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking it uses specialized scheduling algorithms so that deterministic behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.
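A toy Python sketch of the cooperative model described above (illustrative code, not how any particular operating system is implemented): each task voluntarily yields control, and a naive round-robin scheduler picks who runs next.

    def task(name: str, steps: int):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # voluntarily hand control back to the scheduler

    def run_cooperatively(tasks):
        while tasks:
            current = tasks.pop(0)
            try:
                next(current)          # let the task run until it yields
                tasks.append(current)  # re-queue it behind the others
            except StopIteration:
                pass                   # task finished; drop it

    run_cooperatively([task("A", 2), task("B", 3)])
    # Interleaves: A0, B0, A1, B1, B2. A task that never yields would
    # starve the others, which is exactly the weakness of this model.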
12.
Microsoft Windows
–
Microsoft Windows is a metafamily of graphical operating systems developed, marketed, and sold by Microsoft. It consists of several families of operating systems, each of which caters to a certain sector of the computing industry, with the OS typically associated with IBM PC compatible architecture. Active Windows families include Windows NT, Windows Embedded, and Windows Phone; defunct Windows families include Windows 9x. Windows 10 Mobile is an active product, unrelated to the defunct family Windows Mobile. Microsoft introduced an operating environment named Windows on November 20, 1985. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an encroachment on their innovation in GUI development as implemented on products such as the Lisa. On PCs, Windows is still the most popular operating system; however, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. As of September 2016, the most recent version of Windows for PCs, tablets, and smartphones is Windows 10; the most recent version for server computers is Windows Server 2016. A specialized version of Windows also runs on the Xbox One game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. It now consists of three operating system subfamilies that are released almost at the same time and share the same kernel. Windows: The operating system for personal computers and tablets; the latest version is Windows 10, and the main competitors of this family are macOS by Apple Inc. for personal computers and Android for mobile devices. Windows Server: The operating system for server computers; the latest version is Windows Server 2016, and unlike its client sibling, it has adopted a strong naming scheme. The main competitor of this family is Linux. Windows PE: A lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers; the latest version is Windows PE 10.0.10586.0. Windows Embedded: Initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. The following Windows families are no longer being developed: Windows 9x, whose consumer market Microsoft now caters to with Windows NT, and Windows Mobile, the predecessor to Windows Phone, which was a mobile operating system.
13.
Optical disc
–
The encoding material sits atop a thicker substrate which makes up the bulk of the disc and forms a dust defocusing layer. The encoding pattern follows a continuous, spiral path covering the entire disc surface. Most optical discs exhibit a characteristic iridescence as a result of the diffraction grating formed by their grooves. This side of the disc contains the actual data and is typically coated with a transparent material, usually lacquer. The reverse side of a disc usually has a printed label, sometimes made of paper. Optical discs are usually between 7.6 and 30 cm in diameter, with 12 cm being the most common size. A typical disc is about 1.2 mm thick, while the track pitch ranges from 1.6 µm down to 320 nm. An optical disc is designed to support one of three recording types: read-only, recordable, or re-recordable. Write-once optical discs commonly have an organic dye recording layer between the substrate and the reflective layer. Rewritable discs typically contain an alloy recording layer composed of a phase-change material, most often AgInSbTe, an alloy of silver, indium, antimony, and tellurium. Optical discs are most commonly used for storing music, video, or data and programs for personal computers. The Optical Storage Technology Association promotes standardized optical storage formats. Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to environmental and daily-use damage; libraries and archives enact optical media preservation procedures to ensure continued usability in the optical disc drive or corresponding disc player. For computer data backup and physical data transfer, optical discs such as CDs and DVDs are gradually being replaced with faster, smaller solid-state devices, and this trend is expected to continue as USB flash drives continue to increase in capacity and drop in price. Additionally, music purchased or shared over the Internet has significantly reduced the number of audio CDs sold annually. The first recorded use of an optical disc was in 1884, when Alexander Graham Bell, Chichester Bell, and Charles Sumner Tainter recorded sound on a glass disc using a beam of light. An early optical disc system existed in 1935, named Lichttonorgel. An early analog optical disc used for video recording was invented by David Paul Gregg in 1958 and patented in the US in 1961 and 1969; this form of optical disc was an early forerunner of the DVD. It is of special interest that U.S. patent 4,893,297, filed 1989, issued 1990, generated royalty income for Pioneer Corporation's DVA until 2007, by then encompassing the CD, DVD, and Blu-ray systems. In the early 1960s, the Music Corporation of America bought Gregg's patents and his company, Gauss Electrophysics. American inventor James T. Russell has been credited with inventing the first system to record a digital signal on an optical transparent foil which is lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966, and he was granted a patent in 1970; following litigation, Sony and Philips licensed Russell's patents in the 1980s. Both Gregg's and Russell's discs are floppy media read in transparent mode. In the Netherlands in 1969, Philips Research physicist Pieter Kramer invented an optical videodisc in reflective mode with a protective layer read by a focused laser beam.
14.
Supercomputer
–
A supercomputer is a computer with a high level of computing performance compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of instructions per second. As of 2015, there are supercomputers which can perform up to quadrillions of FLOPS; the fastest of them, the Sunway TaihuLight, tops the rankings in the TOP500 supercomputer list. Sunway TaihuLight's emergence is also notable for its use of indigenous chips. As of June 2016, China, for the first time, had more computers on the TOP500 list than the United States, although U.S.-built computers held ten of the top 20 positions; in November 2016 the U.S. held five of the top 10. Throughout their history, supercomputers have been essential in the field of cryptanalysis. The use of multi-core processors combined with centralization is an emerging trend. The history of supercomputing goes back to the 1960s, with the Atlas at the University of Manchester and a series of computers at Control Data Corporation designed by Seymour Cray. These used innovative designs and parallelism to achieve superior computational peak performance. Cray left CDC in 1972 to form his own company, Cray Research; four years after leaving CDC, he delivered the 80 MHz Cray-1 in 1976. The Cray-2, released in 1985, was an 8-processor liquid-cooled computer, and Fluorinert was pumped through it as it operated. It performed at 1.9 gigaflops and was the second fastest after the M-13 supercomputer in Moscow. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994, with a speed of 1.7 gigaflops per processor. The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network. The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations; the Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface. Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems; supercomputers of the 21st century can use over 100,000 processors connected by fast connections. The Connection Machine CM-5 supercomputer is a parallel processing computer capable of many billions of arithmetic operations per second. Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers; the large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components. There have been diverse approaches to heat management, from pumping Fluorinert through the system to hybrid liquid-air cooling. Systems with a massive number of processors generally take one of two paths.
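A rough Python sketch of what measuring performance in FLOPS means (purely illustrative; real supercomputer benchmarks such as those behind the TOP500 time optimized numerical kernels, not an interpreted loop):

    import time

    n = 1_000_000
    start = time.perf_counter()
    acc = 0.0
    for i in range(n):
        acc += 1.0e-9 * i          # one multiply and one add per iteration
    elapsed = time.perf_counter() - start
    print(f"about {2 * n / elapsed:.2e} FLOPS in pure Python")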
15.
Supercomputer architecture
–
Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism; however, in time the demand for increased computational power ushered in the age of massively parallel systems. Supercomputers of the 21st century can use over 100,000 processors connected by fast connections. Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers; Seymour Cray's "get the heat out" motto was central to his design philosophy and has continued to be a key issue in supercomputer architectures. The large amount of heat generated by a system may also have other effects; the heat from the Aquasar supercomputer, for instance, is used to warm a university campus. There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system, to air cooling with normal air conditioning temperatures. In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster, or could be geographically dispersed, as in grid computing. Since the late 1960s the growth in the power and proliferation of supercomputers has been dramatic. As the number of processors in a supercomputer grows, component failure rate begins to become a serious issue: if a supercomputer uses thousands of nodes, each of which may fail once per year on the average, then the system will experience several node failures each day (see the worked arithmetic after this entry). As the price/performance of general purpose graphic processors has improved, a number of supercomputers such as Tianhe-I have come to rely on them; GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by replacing CPUs with GPUs. As the number of independent processors in a supercomputer increases, the way they access data in the file system and how they share secondary storage become prominent issues. Over the years a number of systems for distributed file management were developed, e.g. the IBM General Parallel File System and BeeGFS; a number of supercomputers on the TOP100 list, such as the Tianhe-I, use Linux's Lustre file system. With the Minnesota FORTRAN compiler, the CDC 6600 could sustain 500 kiloflops on standard mathematical operations. In time, as the number of processors increased, different architectural issues emerged. Two issues that need to be addressed as the number of processors increases are the distribution of memory and of processing. In the distributed memory approach, each processor is physically packaged close with some local memory; the memory associated with other processors is then further away, based on bandwidth and latency parameters, in non-uniform memory access. In the 1960s pipelining was viewed as an innovation, and by the 1970s the use of vector processors had been well established; by the 1980s, many supercomputers used parallel vector processors. The relatively small number of processors in early systems allowed them to use a shared memory architecture.
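The failure-rate remark above can be made concrete with a few lines of arithmetic (the node count is an assumed, illustrative figure):

    # With thousands of nodes each failing about once per year on average,
    # failures become a daily event for the system as a whole.
    nodes = 10_000                      # assumed node count, for illustration
    failures_per_node_per_year = 1.0
    failures_per_day = nodes * failures_per_node_per_year / 365
    print(f"{failures_per_day:.1f} node failures per day")  # ~27.4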
16.
Hard disk drive
–
A hard disk drive (HDD) stores data on rigid rotating platters coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers. More than 200 companies have produced HDDs historically, though after extensive industry consolidation most current units are manufactured by Seagate, Toshiba, and Western Digital. As of 2016, HDD production is growing, although unit shipments and sales revenues are declining. While SSDs have higher cost per bit, SSDs are replacing HDDs where speed, power consumption, small size, and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000. The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA and SATA cables. Hard disk drives were introduced in 1956, as data storage for an IBM real-time transaction processing computer, and were developed for use with general-purpose mainframe and minicomputers. The first IBM drive, the 350 RAMAC in 1956, was approximately the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962 the IBM 350 RAMAC disk storage unit was superseded by the IBM 1301 disk storage unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface. Motion of the head array depended upon a binary system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes; access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which used removable disk packs; users could buy additional packs and interchange them as needed, much like reels of magnetic tape. Later models of removable-pack drives, from IBM and others, became the norm in most computer installations; non-removable HDDs were called fixed disk drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track; known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named Winchester. Its primary distinguishing feature was that the disk heads were not withdrawn completely from the stack of disk platters when the drive was powered down; instead, the heads were allowed to land on an area of the disk surface upon spin-down.
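A Python sketch of the geometry-product capacity arithmetic that drives mixed decimal/binary disk sizes (the numbers below are illustrative, loosely modeled on the classic 16383×16×63 ATA addressing limit, not a specific product):

    # capacity = sector size x sectors per track x tracks per side x sides
    sector_bytes      = 512
    sectors_per_track = 63
    tracks_per_side   = 16_383
    sides             = 16

    capacity = sector_bytes * sectors_per_track * tracks_per_side * sides
    print(capacity)            # 8455200768 bytes
    print(capacity / 1000**3)  # ~8.46 decimal gigabytes (as marketed)
    print(capacity / 1024**3)  # ~7.87 gibibytes (as a binary-reporting OS shows)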
17.
Tape drive
–
A tape drive is a data storage device that reads and writes data on a magnetic tape. Magnetic tape data storage is used for offline, archival data storage. Tape media generally has a favorable unit cost and a long archival stability. A tape drive provides sequential access storage, unlike a hard disk drive, which provides direct access storage: a disk drive can move to any position on the disk in a few milliseconds, but a tape drive must physically wind tape between reels to read any one particular piece of data. As a result, tape drives have very slow average seek times to data. However, tape drives can stream data very quickly off a tape once the required position has been reached; for example, as of 2010 Linear Tape-Open supported continuous data transfer rates of up to 140 MB/s. Magnetic tape drives were first used for data storage on mainframe computers in the 1950s, with capacities less than one megabyte. As technology advanced, capacities increased to 10 terabytes or higher of uncompressed data per cartridge as of 2014. In early computer systems, magnetic tape might be the main storage medium; although the drives were expensive, the tapes were inexpensive. Tape capacity is often quoted assuming data compression; the true storage capacity is known as the native capacity or the raw capacity. IBM and Sony have also used higher compression ratios in their marketing materials, but the compression ratio actually achievable depends on the data being compressed. Some data has little redundancy; large files, for example, often already use compression technology. A sparse database, on the other hand, may allow compression ratios better than 10:1. Tape drives can be connected to a computer with SCSI, Fibre Channel, SATA, USB, FireWire, FICON, or other interfaces. Tape drives are used with autoloaders and tape libraries which automatically load, unload, and store multiple tapes. In the early days of home computing, floppy and hard disk drives were very expensive, and many computers had an interface to store data via a tape recorder. Simple dedicated tape drives, such as the professional DECtape and the home ZX Microdrive, were also designed for inexpensive data storage; however, the drop in disk drive prices made such alternatives obsolete. When the host cannot supply data as fast as the drive writes it, the modern fast-running tape drive is unable to stop the tape instantly. Instead, the drive must decelerate and stop the tape, rewind it a short distance, restart it, position back to the point at which streaming stopped, and then resume the operation. If the condition repeats, the resulting back-and-forth tape motion resembles that of shining shoes with a cloth. Shoe-shining decreases the attainable data transfer rate, drive and tape life, and tape capacity. In early tape drives, non-continuous data transfer was normal and unavoidable; computer processing power and amounts of available memory were usually insufficient to provide a constant stream, so tape drives were typically designed for so-called start-stop operation.
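A small Python sketch of native versus compressed capacity (the native capacity and the ratios are illustrative): the marketed "compressed" figure only materializes if the data actually compresses at the assumed ratio.

    native_tb = 1.5                    # assumed native (raw) capacity in TB
    for ratio in (1.0, 2.0, 10.0):     # pre-compressed data, typical, sparse database
        print(f"{ratio:>4}:1 compression -> {native_tb * ratio:.1f} TB effective")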
18.
Motherboard
–
A motherboard is the main printed circuit board found in general-purpose microcomputers and other expandable systems. It holds, and allows communication between, many of the crucial electronic components of a system, such as the central processing unit and memory. In very old designs, copper wires were the discrete connections between card connector pins, but printed circuit boards soon became the standard practice. The central processing unit, memory, and peripherals were housed on individual printed circuit boards, which were plugged into the backplane. The ubiquitous S-100 bus of the 1970s is an example of this type of backplane system. During the late 1980s and 1990s, it became economical to move a growing number of peripheral functions onto the motherboard. Business PCs, workstations, and servers were more likely to need expansion cards, either for more robust functions or for higher speeds; laptop and notebook computers that were developed in the 1990s integrated the most common peripherals. This even included motherboards with no upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century; memory, processors, network controllers, power source, and storage would be integrated into some systems. A motherboard provides the electrical connections by which the other components of the system communicate. Unlike a backplane, it contains the central processing unit and hosts other subsystems. A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components; this chipset determines, to an extent, the features and capabilities of the motherboard. Modern motherboards include sockets in which one or more microprocessors may be installed; in the case of CPUs in ball grid array packages, such as the VIA C3, the CPU is soldered directly to the motherboard. As of 2007, some graphics cards require more power than the motherboard can provide, and thus dedicated connectors have been introduced to attach them directly to the power supply. Motherboards also provide connectors for hard drives, typically SATA only; disk drives also connect to the power supply. Additionally, nearly all motherboards include logic and connectors to support commonly used devices, such as USB for mouse devices. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard, for example on the Apple II, and rarely on IBM-compatible computers such as the IBM PCjr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards. Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat. Motherboards are produced in a variety of sizes and shapes called computer form factors, some of which are specific to individual computer manufacturers; however, the motherboards used in IBM-compatible systems are designed to fit various case sizes. As of 2007, most desktop computer motherboards use the ATX standard form factor, even those found in Macintosh and Sun computers. A case's motherboard and PSU form factors must all match, though some smaller form factor motherboards of the same family will fit larger cases.
19.
Library of Congress
–
The Library of Congress is the research library that officially serves the United States Congress and is the de facto national library of the United States. It is the oldest federal cultural institution in the United States. The Library is housed in three buildings on Capitol Hill in Washington, D.C.; it also maintains the Packard Campus in Culpeper, Virginia, which houses the National Audio-Visual Conservation Center. The Library of Congress claims to be the largest library in the world. Its collections are universal, not limited by subject, format, or national boundary, and include research materials from all parts of the world and in more than 450 languages; two-thirds of the books it acquires each year are in languages other than English. The Library of Congress moved to Washington in 1800, after sitting for years in the temporary national capitals of New York and Philadelphia. John J. Beckley, who became the first Librarian of Congress, was paid two dollars per day and was required to also serve as the Clerk of the House of Representatives. The small Congressional Library was housed in the United States Capitol for most of the 19th century, until the early 1890s. Most of the original collection had been destroyed by the British in 1814, during the War of 1812. To restore its collection, in 1815 the Library bought from former president Thomas Jefferson his entire personal collection of 6,487 books. After a period of growth, another fire struck the Library in its Capitol chambers in 1851, again destroying a large amount of the collection. The Library later received the right of transference of all copyrighted works, to have two copies deposited of books, maps, illustrations, and diagrams printed in the United States. It also began to build its collections of British and other European works, and its own building, when it came, included several stories, built underground, of steel and cast iron stacks. Although the Library is open to the public, only high-ranking government officials may check out books. The Library promotes literacy and American literature through projects such as the American Folklife Center, American Memory, Center for the Book, and the Poet Laureate. James Madison is credited with the idea for creating a congressional library. Part of the founding legislation appropriated $5,000 for the purchase of "such books as may be necessary for the use of Congress, and for fitting up an apartment for containing them." Books were ordered from London, and the collection, consisting of 740 books and 3 maps, was housed in the new Capitol. As president, Thomas Jefferson played an important role in establishing the structure of the Library of Congress; a new law also extended to the president and vice president the ability to borrow books. When the British burned the Capitol in 1814, the collection, which had been left in the Senate wing of the Capitol, was lost. One of the only congressional volumes to have survived was a government account book of receipts; it was taken as a souvenir by a British commander, whose family later returned it to the United States government in 1940. Within a month, former president Jefferson offered to sell his library as a replacement.
20.
Ancestry.com
–
Ancestry.com LLC is a privately held Internet company based in Lehi, Utah, United States. The largest for-profit genealogy company in the world, it operates a network of genealogical and historical record websites focused on the United States. As of June 2014, the company provided access to approximately 16 billion historical records and had over 2 million paying subscribers. User-generated content tallies to more than 70 million family trees, and subscribers have added more than 200 million photographs, scanned documents, and written stories. Ancestry's brands include Ancestry, AncestryDNA, AncestryHealth, AncestryProGenealogists, Archives.com, Family Tree Maker, Find a Grave, Fold3, Newspapers.com, and Rootsweb. Under its subsidiaries, Ancestry.com operates foreign sites that provide access to its services; these include Australia, Canada, China, Japan, Brazil, New Zealand, the United Kingdom, and several other countries in Europe and Asia. In 1990, Paul B. Allen and Dan Taggart, two Brigham Young University graduates, founded Infobases and began offering Latter-day Saints publications on floppy disks. In 1988, Allen had worked at Folio Corporation, founded by his brother Curt and his brother-in-law Brad Pelo, and Infobases chose to use the Folio infobase technology, which Allen was familiar with. Infobases' first products were floppy disks and compact disks sold from the back seat of the founders' car. In 1994, Infobases was named among Inc. magazine's 500 fastest-growing companies. Their first offering on CD was the LDS Collectors Edition, released in April 1995 and selling for $299.95, which was offered in an online version in August 1995. Ancestry officially went online with the launch of Ancestry.com in 1996. Ancestry had its roots in a genealogy newsletter started in 1983 by John Sittner, which became an established publishing company in 1984; Ancestry was relaunched as a magazine in January 1994, and went online in 1996. On January 1, 1997, Infobases' parent company, Western Standard Publishing, purchased Ancestry, Inc., publisher of Ancestry magazine and genealogy books. Western Standard Publishing's CEO was Joe Cannon, one of the owners of Geneva Steel. In July 1997, Allen and Taggart purchased Western Standard's interest in Ancestry. At the time, Brad Pelo was president and CEO of Infobases, and president of Western Standard; less than six months earlier, he had been president of Folio Corporation. In March 1997, Folio was sold to Open Market for $45 million. The first public evidence of the change in ownership of Ancestry magazine came with the July/August 1997 issue; that issue's masthead also included the first use of the Ancestry.com web address. More growth for Infobases occurred in July 1997, when Ancestry, Inc. purchased Bookcraft; Infobases had published many of Bookcraft's books as part of its LDS Collectors Library. Pelo also announced that Ancestry's product line would be expanded in both CDs and online offerings. Alan Ashton, an investor in Infobases and founder of WordPerfect, was its chairman of the board. Allen and Taggart began running Ancestry, Inc. independently from Infobases in July 1997; included in the sale were the rights to Infobases' LDS Collectors Library on CD.
21.
CERN
–
The European Organization for Nuclear Research, known as CERN, is a European research organization that operates the largest particle physics laboratory in the world. Established in 1954, the organization is based in a northwest suburb of Geneva on the Franco–Swiss border; Israel is the only non-European country granted full membership. The main site at Meyrin hosts a large computing facility, which is used to store and analyse data from experiments. Because researchers need remote access to these facilities, the lab has historically been a major wide area network hub. CERN is also the birthplace of the World Wide Web.

The convention establishing CERN was ratified on 29 September 1954 by 12 countries in Western Europe. The acronym was retained for the new laboratory after the founding council was dissolved. CERN's first president was Sir Benjamin Lockspeiser; Edoardo Amaldi was the general secretary of CERN at its early stages, when operations were still provisional, while the first Director-General was Felix Bloch. The laboratory was originally devoted to the study of atomic nuclei, but its work soon moved to higher-energy particle physics; therefore, the laboratory operated by CERN is commonly referred to as the European laboratory for particle physics, which better describes the research being performed there.

Several important achievements in physics have been made through experiments at CERN. The 1984 Nobel Prize for Physics was awarded to Carlo Rubbia and Simon van der Meer for the developments that led to the discovery of the W and Z bosons. The 1992 Nobel Prize for Physics was awarded to CERN staff researcher Georges Charpak for his invention and development of particle detectors, in particular the multiwire proportional chamber.

The World Wide Web began as a CERN project named ENQUIRE, initiated by Tim Berners-Lee in 1989. Berners-Lee and Robert Cailliau were jointly honoured by the Association for Computing Machinery in 1995 for their contributions to the development of the World Wide Web. Based on the concept of hypertext, the project was intended to facilitate the sharing of information between researchers; the first website was activated in 1991. On 30 April 1993, CERN announced that the World Wide Web would be free to anyone. A copy of the original first webpage, created by Berners-Lee, is still published on the World Wide Web Consortium's website as a historical document. Prior to the Web's development, CERN had pioneered the introduction of Internet technology; a short history of this period can be found at CERN.ch. More recently, CERN has become a facility for the development of grid computing, hosting projects including Enabling Grids for E-sciencE. It also hosts the CERN Internet Exchange Point, one of the two main internet exchange points in Switzerland.

In 2011, the OPERA experiment reported that neutrinos travelling from CERN to the Gran Sasso laboratory appeared to arrive faster than light would, with (v − c)/c measured at 2.48×10−5, a measurement with 6.0-sigma significance. However, on 23 February 2012, CERN stated in a press release that the results were flawed due to an incorrectly connected GPS-synchronization cable. In March 2012, the ICARUS Collaboration reported a neutrino velocity consistent with the speed of light, and further tests, after fixing the GPS connector, showed speed measurements consistent with the speed of light from four experiments at Gran Sasso, including OPERA.
22.
Large Hadron Collider
–
The Large Hadron Collider is the world's largest and most powerful particle collider, the most complex experimental facility ever built, and the largest single machine in the world. It lies in a tunnel 27 kilometres in circumference, as deep as 175 metres beneath the France–Switzerland border near Geneva. On 13 February 2013 the LHC's first run officially ended, and it was shut down for planned upgrades. Test collisions restarted in the collider on 5 April 2015, and its second research run commenced on schedule on 3 June 2015. The collider has four crossing points, around which are positioned seven detectors, each designed for certain kinds of research. The LHC primarily collides proton beams, but it can also use beams of lead nuclei; proton–lead collisions were performed for short periods in 2013 and 2016, and lead–lead collisions took place in 2010, 2011, 2013, and 2015. The LHC's computing grid is a world record holder.

The term hadron refers to composite particles composed of quarks held together by the strong force. A collider is a type of particle accelerator with two directed beams of particles. In particle physics, colliders are used as a research tool: they accelerate particles to very high kinetic energies and bring them into collision. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world. Many of these byproducts are produced only by high-energy collisions, and they decay after very short periods of time; thus many of them are hard or nearly impossible to study in other ways.

Many theorists expect new physics beyond the Standard Model to emerge at the TeV energy level, as the Standard Model appears to be unsatisfactory. Issues that may be explored by LHC collisions include: Are the masses of elementary particles actually generated by the Higgs mechanism via electroweak symmetry breaking? (The experiments have already found a particle that appears to be the Higgs boson.) Is supersymmetry, an extension of the Standard Model and Poincaré symmetry, realized in nature? Are there extra dimensions, as predicted by various models based on string theory, and can we detect them? What is the nature of the dark matter that appears to account for 27% of the mass-energy of the universe? Why is gravity so many orders of magnitude weaker than the other three fundamental forces? Are there additional sources of quark mixing, beyond those already present within the Standard Model? Why are there apparent violations of the symmetry between matter and antimatter? What are the nature and properties of quark–gluon plasma, thought to have existed in the early universe and in certain compact and strange astronomical objects today? The last of these will be investigated by heavy-ion collisions, mainly in ALICE; first observed in 2010, findings published in 2012 confirmed the phenomenon of jet quenching in heavy-ion collisions.
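To get a feel for the energies involved, here is a small back-of-envelope sketch, not taken from the article: it computes how relativistic a proton is at a per-beam energy of 6.5 TeV, a figure supplied here for illustration (the value used in the second run).

# Back-of-envelope relativistic kinematics for an LHC proton.
# The 6.5 TeV per-beam figure is an assumption made for this example.
PROTON_REST_ENERGY_GEV = 0.938272  # proton rest-mass energy in GeV
BEAM_ENERGY_GEV = 6500.0           # 6.5 TeV per beam

gamma = BEAM_ENERGY_GEV / PROTON_REST_ENERGY_GEV  # E = gamma * m * c^2
beta = (1 - 1 / gamma**2) ** 0.5                  # v/c recovered from gamma

print(f"Lorentz factor: {gamma:,.0f}")            # roughly 6,900
print(f"Speed as a fraction of c: {beta:.11f}")   # about 0.99999998958
# Two such beams colliding head-on give a centre-of-mass energy of
# 2 * 6.5 TeV = 13 TeV, which is the point of using a collider rather
# than a fixed target.
print(f"Centre-of-mass energy: {2 * BEAM_ENERGY_GEV / 1000:.0f} TeV")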
23.
Hitachi
–
Hitachi, Ltd. is a Japanese multinational conglomerate headquartered in Chiyoda, Tokyo, Japan. It is the parent company of the Hitachi Group and forms part of the DKB Group of companies. Hitachi is listed on the Tokyo Stock Exchange and is a constituent of the Nikkei 225 and TOPIX indices. It was ranked 38th in the 2012 Fortune Global 500 and 129th in the 2012 Forbes Global 2000.

Hitachi was founded in 1910 by electrical engineer Namihei Odaira in Ibaraki Prefecture. The company's first product was Japan's first 5-horsepower induction motor, initially developed for use in copper mining. Odaira's company soon became the domestic leader in electric motors and electric power industry infrastructure. The company began as a venture of Fusanosuke Kuhara's mining company in Hitachi. Odaira moved headquarters to Tokyo in 1918. Long before that, he had coined the company's toponymic name by superimposing two kanji characters: hi, meaning "sun", and tachi, meaning "rise". The young company's national aspirations were conveyed by its original brand mark.

World War II and its aftermath devastated the company. Many of its factories were destroyed by Allied bombing raids, and after the war founder Odaira was removed from the company. Nevertheless, as a result of three years of negotiations, Hitachi was permitted to keep all but 19 of its manufacturing plants. The cost of such a production shutdown, though, compounded by a labor strike in 1950, severely weakened the company; only the Korean War saved it from complete collapse, as Hitachi and many other struggling Japanese industrial firms benefited from defense contracts offered by the American military. Meanwhile, Hitachi went public in 1949. Hitachi America, Ltd. was established in 1959, and Hitachi Europe, Ltd. was established in 1982.

In March 2011, Hitachi agreed to sell its hard disk drive subsidiary, HGST, to Western Digital for a combination of cash and shares worth US$4.3 billion. After the EU Commission raised concerns about a potential duopoly of WD and Seagate Technology, the transaction was completed in March 2012. Merger talks with Mitsubishi Heavy Industries subsequently broke down and were suspended, though a joint venture between the two companies began operations in February 2014. Hitachi has also developed a high-density information storage medium utilizing laser-etched, laser-readable fused quartz. For 2016, Hitachi is taking an estimated ¥65 billion write-off on the value of a SILEX-technology laser uranium enrichment joint venture with General Electric. Hitachi Consulting is a management and technology consulting firm with headquarters in Dallas, Texas. Hitachi Data Systems is a wholly owned subsidiary of Hitachi which provides hardware, software, and services.
24.
Internet traffic
–
Internet traffic is the flow of data across the Internet. Because of the distributed nature of the Internet, there is no single point of measurement for total Internet traffic. The phrase "Internet traffic" is sometimes used to describe web traffic specifically.

File sharing constitutes a large fraction of Internet traffic. The prevalent technology for file sharing is the BitTorrent protocol, a peer-to-peer system mediated through indexing sites that provide resource directories. The traffic patterns of P2P systems are often described as problematic. According to Sandvine research in 2013, BitTorrent's share of Internet traffic decreased by 20% to 7.4% overall. Streaming media services, such as YouTube and Spotify, provide users with video and audio resources. The Internet does not employ any formally centralized facilities for traffic management.

In 2014, a proposed Hungarian tax on Internet traffic drew wide attention. Hungary had carried 1.15 billion gigabytes of traffic in the previous year, with another 18 million gigabytes accumulated by mobile devices; according to the consultancy firm eNet, this would have resulted in revenue of 175 billion forints under the new tax. Some people argued that the new Internet tax would prove disadvantageous to economic development and limit access to information. Approximately 36,000 people signed up to take part in an event on Facebook, to be held outside the Economy Ministry, to protest against the possible tax.

Traffic classification describes the methods of classifying traffic by passively observing features in the traffic stream. Some classifiers have only a coarse classification goal, for example whether traffic is bulk transfer, peer-to-peer file sharing, or transaction-oriented; others set a finer-grained goal, for instance identifying the exact application represented by the traffic. Traffic features include port numbers, application payload, and temporal and packet-size characteristics. There is a vast range of methods for classifying Internet traffic, including exact matching on, for instance, port number or payload, as well as heuristic and statistical machine-learning methods. Classification schemes are nonetheless difficult to operate accurately, because of the shortage of knowledge available to the network; for example, packet-header information alone is insufficient to allow for a precise methodology. Consequently, the accuracy of traditional methods is between 50% and 70%.

There is also work involving supervised machine learning to classify network traffic. Data are hand-classified into one of a number of categories, and a combination of the data set's categories and descriptions of the classified flows is used to train the classifier. To give insight into the technique itself, initial assumptions are made, and two other techniques are then applied in practice.
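As a concrete illustration of the supervised approach just described, the sketch below trains a classifier on hand-labelled flows. The feature set (server port, mean packet size, flow duration) and the synthetic training data are assumptions made for this example, not taken from the article or from any real capture.

# A minimal sketch of supervised traffic classification, assuming simple
# per-flow features; the synthetic data stand in for hand-classified flows.
import random
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

random.seed(0)

def synthetic_flow(category):
    """Fabricate one flow record: [server_port, mean_packet_bytes, duration_s]."""
    if category == "web":
        return [80, random.gauss(600, 100), random.expovariate(1 / 3)]
    if category == "p2p":
        return [random.randint(1024, 65535), random.gauss(1300, 100),
                random.expovariate(1 / 300)]
    return [443, random.gauss(1100, 150), random.expovariate(1 / 60)]  # streaming

labels = [random.choice(["web", "p2p", "streaming"]) for _ in range(600)]
flows = [synthetic_flow(c) for c in labels]  # the "hand-classified" data set

X_train, X_test, y_train, y_test = train_test_split(flows, labels, random_state=0)
classifier = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {classifier.score(X_test, y_test):.2f}")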
25.
Cisco Systems
–
Through its numerous acquired subsidiaries, such as OpenDNS, WebEx, and Jasper, Cisco specializes in specific tech markets, such as the Internet of Things, domain security, and energy management. Cisco is the largest networking company in the world. The stock was added to the Dow Jones Industrial Average on June 8, 2009, and is also included in the S&P 500 Index, the Russell 1000 Index, the NASDAQ-100 Index, and the Russell 1000 Growth Stock Index. The company went public in 1990, when it was listed on the NASDAQ; by 2000 Cisco was the most valuable company in the world, with a market capitalization of more than $500 billion.

Despite founding Cisco in 1984, Bosack, along with Kirk Lougheed, continued to work at Stanford on Cisco's first product. It consisted of exact replicas of Stanford's Blue Box router and a stolen copy of the university's multiple-protocol router software. The software had been written some years earlier at Stanford's medical school by research engineer William Yeager; Bosack and Lougheed adapted it into what became the foundation for Cisco IOS. In 1987, Stanford licensed the router software and two computer boards to Cisco. In addition to Bosack, Lerner, and Lougheed, Greg Satz, a programmer, and Richard Troiano, who handled sales, completed the early team. The company's first CEO was Bill Graves, who held the position from 1987 to 1988; in 1988, John Morgridge was appointed CEO. The name Cisco was derived from the city name San Francisco, which is why the company's engineers insisted on using the lower-case "cisco" in its early years. The logo is intended to depict the two towers of the Golden Gate Bridge.

On February 16, 1990, Cisco Systems went public and was listed on the NASDAQ stock exchange. On August 28, 1990, Lerner was fired; upon hearing the news, her husband Bosack resigned in protest. The couple walked away from Cisco with $170 million, 70% of which was committed to their own charity.

Although Cisco was not the first company to develop and sell dedicated network nodes, it was one of the first to sell commercially successful routers supporting multiple network protocols. The classical, CPU-based architecture of early Cisco devices, coupled with the flexibility of the IOS operating system, allowed the company to keep up with evolving technology needs by means of frequent software upgrades. Some popular models of that time managed to stay in production for almost a decade virtually unchanged, a rarity in the high-tech industry. This philosophy dominated the company's product lines throughout the 1990s. In 1995, John Morgridge was succeeded by John Chambers.

The phenomenal growth of the Internet in the mid-to-late 1990s quickly changed the telecom landscape. As the Internet Protocol became widely adopted, the importance of multi-protocol routing declined. In late March 2000, at the height of the dot-com bubble, Cisco became the most valuable company in the world; in July 2014 its market capitalization was about US$129 billion. Among newer competitors, Juniper Networks shipped its first product in 1999; Cisco answered the challenge with homegrown ASICs and fast processing cards for GSR routers and Catalyst 6500 switches. In 2004, Cisco also started migration to new high-end hardware, the CRS-1. As part of a massive rebranding campaign in 2006, Cisco Systems adopted the shortened name Cisco and created The Human Network advertising campaign.
26.
Yahoo! Groups
–
Yahoo! Groups is one of the world's largest collections of online discussion boards. Members can choose whether to receive individual e-mails, a daily digest, or Special Delivery e-mails, and Groups can be created with public or member-only access. Some Groups are simply announcement bulletin boards, to which only the Group moderators can post. It is not necessary to register with Yahoo! in order to participate in a Group: the basic mailing list functionality is available to any e-mail address, but a Yahoo! ID is required to access most other features. Groups have been called a goldmine that Yahoo! has never cashed in on. As of 2013, Hari Vasudev oversees Yahoo! Groups.

As well as providing e-mail relaying and archiving facilities for the many Groups it hosts, the Groups service provides additional features for each Group web site, such as a homepage, message archive, polls, calendar announcements, files, photos, database functions, and bookmarks. Each new group created at Yahoo! Groups allows the creator to attach several features to the group. Some of the features can be set to off, moderator-only, or members. The possible group features are:
- Messages: post via web or e-mail to the group.
- Photo album: organized into an album/thumbnail structure.
- File storage: capable of storing any file format.
- Link directory: options for folders and text labels for each link.
- Poll: members can create multiple-choice polls, including various options for ID display.
- Database: up to ten tables, each with up to one thousand rows.
- Member list: a scroll of registered member profiles and the basics of the information they provide.
- Calendar: a scheduling system for clubs with regular events.
- Promote: an HTML box for website display.
Administration features are:
- Invite: invite more members by e-mail.
- Options: edit the group homepage display text, etc.
- Post approval: it is possible to switch to strict moderation if required.
- Web tools management: options are off, public, members, or administrators.

Yahoo! discontinued the chat feature in Yahoo! Groups; on February 1, 2011, Yahoo! Groups launched a beta version of Group Chat. The Yahoo! Groups Directory organizes groups into categories.
27.
Monsters vs. Aliens
–
Monsters vs. Aliens is a 2009 American 3D computer-animated science fiction film produced by DreamWorks Animation and distributed by Paramount Pictures. The film had been scheduled for a May 2009 release, but the date was moved to March 27, 2009. It was released on DVD and Blu-ray on September 29, 2009 in North America. The film features the voices of Reese Witherspoon, Seth Rogen, Hugh Laurie, Will Arnett, Kiefer Sutherland, Rainn Wilson, Paul Rudd, and Stephen Colbert. It grossed over $381 million worldwide on a $175 million budget.

Susan Murphy of Modesto, California is about to be married to news weatherman Derek Dietl. But before the ceremony, she is hit by a meteorite. As a side effect, her hair turns white and she grows to an enormous size. She is tranquilized by the military and awakens in a top-secret government facility that houses monsters without public knowledge. She meets the warden, General W. R. Monger, and her fellow inmates, among them Dr. Cockroach, a mad scientist who became half cockroach after an experiment; the gelatinous B.O.B.; and the giant Insectosaurus. Susan herself is renamed Ginormica.

In deep space, an alien overlord named Gallaxhar is alerted to the presence of quantonium, a powerful energy source, on Earth, and he sends a probe to retrieve it. The probe lands on Earth, where the President of the United States attempts to make first contact with it; however, the attempt fails and the probe goes on a destructive rampage, headed straight for San Francisco. Monger arranges for the monsters' freedom if they can stop the probe, to which the president agrees. The robot detects the quantonium radiating through Susan's body and tries to take it from her, but the monsters work together to save the people and defeat the probe. Unbeknownst to the monsters, Gallaxhar sets course for Earth to obtain the quantonium in person while the now-free Susan returns home with her new friends. However, the monsters alienate themselves from the humans due to their inexperience with social situations. Derek later breaks off his engagement with Susan, believing that he cannot marry someone who would overshadow him. Heartbroken, the monsters reunite, but Susan realizes that her life is better as a monster and promises never to sell herself short again.

Suddenly, Susan is pulled up into a space ship piloted by Gallaxhar. Insectosaurus tries to save her, but he is shot down by the ship's plasma cannons. Gallaxhar extracts the quantonium from Susan, shrinking her back to her normal size, and then begins making clones of himself in order to launch a full-scale invasion of Earth. Monger manages to secretly get the monsters on board the ship, and they rescue Susan and make their way to the main core, where Dr. Cockroach sets the ship to self-destruct to prevent the invasion. All but Susan are trapped as the blast doors close, and she goes to personally confront Gallaxhar on the bridge. With her time running out, she brings the ball of stored quantonium down on herself, restoring her monstrous size and strength.
28.
Google Groups
–
Google Groups is a service from Google that provides discussion groups for people sharing common interests. The Groups service also provides a gateway to Usenet newsgroups via a shared user interface. Google Groups became operational in February 2001, following Google's acquisition of Deja's Usenet archive. Google Groups offers at least two kinds of discussion group; in both cases users can participate in threaded conversations, either through a web interface or by e-mail. The first kind are forums specific to Google Groups, which are inaccessible by NNTP; the second kind are Usenet groups, for which Google Groups acts as a gateway and unofficial archive. Through the Google Groups user interface, users can read and post to Usenet groups. In addition to accessing Google and Usenet groups, registered users can also set up mailing list archives for e-mail lists that are hosted elsewhere.

Prior to the acquisition of its archive by Google, the Deja News Research Service was an archive of messages posted to Usenet discussion groups, started in March 1995 by Steve Madere in Austin, Texas. Its powerful search engine capabilities won the service acclaim but also generated controversy. While archives of Usenet discussions had been kept for as long as the medium existed, Deja News's search facilities transformed Usenet from a loosely organized and ephemeral communication tool into a valued information repository. The service supported the use of an X-No-Archive message header, which, if present, would cause an article to be omitted from the archive; this did not prevent others from quoting the material in a later message and causing it to be stored. Copyright holders were allowed to have material removed from the archive. According to Humphrey Marr of Deja News, copyright actions most frequently came from the Church of Scientology. The capability to "nuke" postings was kept open for many years but was later removed without explanation under Google's tenure. Google also mistakenly resurrected previously nuked messages at one point, angering many users; nukes that were in effect at the time Google removed the capability are still honored, however. Since May 2014, European users can request to have search results for their name removed from Google Groups, including its Usenet archive; Google Groups is one of the ten most delinked sites. If Google does not grant a delinking, Europeans can appeal to their local Data Protection Agencies.

The Deja News service was eventually expanded beyond search. My Deja News offered the ability to read Usenet in the traditional chronological, per-group manner. Deja Communities were private Internet forums offered primarily to businesses. In 1999 the site sharply changed direction and made its primary feature a shopping comparison service; during this transition, which involved relocation of the servers, many messages in the Usenet archive became unavailable. By late 2000 the company, in financial distress, sold the shopping service to eBay.
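Since the X-No-Archive header is the one concrete opt-out mechanism described above, here is a small illustrative sketch of what setting it looks like. The addresses and newsgroup name are invented, and Python's standard email library merely stands in for a real newsreader.

# Illustrative only: building a Usenet-style article that asks archiving
# services (as Deja News honored) to omit it. Addresses are made up.
from email.message import EmailMessage

article = EmailMessage()
article["From"] = "someone@example.invalid"
article["Newsgroups"] = "misc.test"
article["Subject"] = "Please do not archive this post"
article["X-No-Archive"] = "Yes"  # the opt-out header honored by Deja News
article.set_content("This article asks archives to omit it from storage.")

print(article)  # prints the headers followed by the body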
29.
Usenet
–
Usenet is a worldwide distributed discussion system available on computers. It was developed from the general-purpose UUCP dial-up network architecture; Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980. Users read and post messages to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to the Internet forums that are widely used today. Discussions are threaded, as with web forums and BBSs, though posts are stored on the server sequentially. The name comes from the term "users network".

One notable difference between a BBS or web forum and Usenet is the absence of a central server and dedicated administrator. Usenet is distributed among a large, constantly changing conglomeration of servers that store and forward messages to one another. Individual users may read messages from and post messages to a local server operated by a commercial Usenet provider, their Internet service provider, university, employer, or their own server. Usenet has significant cultural importance in the networked world, having given rise to, or popularized, many widely recognized concepts and terms such as "FAQ" and "flame". The name Usenet emphasized its creators' hope that the USENIX organization would take an active role in its operation.

The articles that users post to Usenet are organized into topical categories called newsgroups. For instance, sci.math and sci.physics are within the sci.* hierarchy, for science, and talk.origins and talk.atheism are in the talk.* hierarchy. When a user subscribes to a newsgroup, the news client software keeps track of which articles that user has read. In most newsgroups, the majority of the articles are responses to some other article; the set of articles that can be traced to one single non-reply article is called a thread. Most modern newsreaders display the articles arranged into threads and subthreads.

When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers and exchanges articles with them; in this fashion, the article is copied from server to server. The later peer-to-peer networks operate on a similar principle, but for Usenet it is normally the sender, rather than the receiver, who initiates transfers. Usenet was designed under conditions when networks were much slower and not always available; many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out. This was largely because the POTS network was used for transfers. The format and transmission of Usenet articles is similar to that of Internet e-mail messages. Today, Usenet has diminished in importance with respect to Internet forums, blogs, and mailing lists; the groups in alt.binaries, however, are still used for data transfer.
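To make the threading model concrete, the sketch below groups a handful of invented articles into threads by walking their References headers, roughly as a newsreader might. The header handling is simplified and the sample articles are hypothetical.

# A simplified sketch of thread reconstruction from Usenet headers.
# Each article carries its Message-ID and a References list naming its
# ancestors, oldest first; the sample data here are invented.
from collections import defaultdict

articles = [
    ("<a1@host>", []),                          # a non-reply article: starts a thread
    ("<b1@host>", ["<a1@host>"]),               # reply to a1
    ("<c1@host>", ["<a1@host>", "<b1@host>"]),  # reply to b1, same thread
    ("<a2@host>", []),                          # an unrelated second thread
]

def thread_root(message_id, references):
    # A thread is identified by its oldest ancestor, or by the article
    # itself when it is not a reply.
    return references[0] if references else message_id

threads = defaultdict(list)
for message_id, references in articles:
    threads[thread_root(message_id, references)].append(message_id)

for root, members in threads.items():
    print(root, "->", members)
# <a1@host> -> ['<a1@host>', '<b1@host>', '<c1@host>']
# <a2@host> -> ['<a2@host>']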
30.
Wikipedia
–
Wikipedia is a free online encyclopedia that aims to allow anyone to edit articles. Wikipedia is the largest and most popular reference work on the Internet and is ranked among the ten most popular websites. Wikipedia is owned by the nonprofit Wikimedia Foundation. It was launched on January 15, 2001, by Jimmy Wales and Larry Sanger; Sanger coined its name, a portmanteau of "wiki" and "encyclopedia". There was only the English-language version initially, but it quickly developed similar versions in other languages, which differ in content and in editing practices. With 5,377,348 articles, the English Wikipedia is the largest of the more than 290 Wikipedia encyclopedias. In 2005, Nature published a peer review comparing 42 science articles from Encyclopædia Britannica and Wikipedia, and found that Wikipedia's level of accuracy approached Encyclopædia Britannica's. Other collaborative online encyclopedias were attempted before Wikipedia, but none were so successful.

Wikipedia began as a complementary project for Nupedia, a free online English-language encyclopedia project whose articles were written by experts and reviewed under a formal process. Nupedia was founded on March 9, 2000, under the ownership of Bomis; its main figures were Jimmy Wales, the CEO of Bomis, and Larry Sanger, editor-in-chief for Nupedia and later Wikipedia. Nupedia was licensed initially under its own Nupedia Open Content License. While Wales is credited with defining the goal of making a publicly editable encyclopedia, Sanger is credited with the strategy of using a wiki to reach that goal. On January 10, 2001, Sanger proposed on the Nupedia mailing list to create a wiki as a feeder project for Nupedia.

Wikipedia was launched on January 15, 2001, as a single English-language edition at www.wikipedia.com. Wikipedia's policy of "neutral point of view" was codified in its first months; otherwise, there were few rules initially, and Wikipedia operated independently of Nupedia. Originally, Bomis intended to make Wikipedia a for-profit business. Wikipedia gained early contributors from Nupedia, Slashdot postings, and web search engine indexing. By August 8, 2001, Wikipedia had over 8,000 articles; by September 25, 2001, it had over 13,000. By the end of 2001 it had grown to approximately 20,000 articles and 18 language editions, and it reached 26 language editions by late 2002, 46 by the end of 2003, and 161 by the final days of 2004. Nupedia and Wikipedia coexisted until Nupedia's servers were taken down permanently in 2003. Citing fears of commercial advertising and lack of control in Wikipedia, users of the Spanish Wikipedia forked to create the Enciclopedia Libre in February 2002; these moves encouraged Wales to announce that Wikipedia would not display advertisements, and to change Wikipedia's domain from wikipedia.com to wikipedia.org.

Around 1,800 articles were added daily to the encyclopedia in 2006; by 2013 that average had fallen to roughly 800. A team at the Palo Alto Research Center attributed this slowing of growth to the project's increasing exclusivity and resistance to change. Others suggest that the growth is flattening naturally, because articles that could be called "low-hanging fruit" (topics that clearly merit an article) have already been created. The Wall Street Journal cited the array of rules applied to editing, and disputes related to such content, among the reasons for this trend.
31.
Compact Disc Digital Audio
–
Compact Disc Digital Audio (CD-DA) is the standard format for audio compact discs. The standard is defined in the Red Book, one of a series of Rainbow Books that contain the specifications for all Compact Disc formats. These basic parameters are common to all discs and are used by all logical formats. The standard also specifies the form of audio encoding: 2-channel signed 16-bit linear PCM sampled at 44,100 Hz. Although rarely used, the specification also allows for discs to be mastered with a form of emphasis.

The second edition of IEC 60908 was published in 1999; it cancels and replaces the first edition, amendment 1, and the corrigendum to amendment 1. IEC 60908, however, does not contain all the information for extensions that is available in the Red Book, such as the details for CD-Text and CD+G. The standard is not freely available and must be licensed; it is available from Philips and the IEC. As of 2013, Philips outsources licensing of the standard to Adminius, which charges US$100 for the Red Book, plus US$50 each for the Subcode Channels R-W and CD Text Mode annexes.

The audio contained in a CD-DA consists of two-channel signed 16-bit linear PCM sampled at 44,100 Hz. The sampling rate was adapted from that attained when recording digital audio on a PAL videotape with a PCM adaptor, an earlier way of storing digital audio. An audio CD can represent frequencies up to 22.05 kHz. The selection of the sample rate was based primarily on the need to reproduce the audible frequency range of 20–20,000 Hz: the Nyquist–Shannon sampling theorem states that a sampling rate of more than twice the maximum frequency of the signal to be recorded is needed.

The exact sampling rate of 44,100 Hz was inherited from PCM adaptors. The device that converts an analog audio signal into PCM audio, which in turn is changed into an analog video signal, is called a PCM adaptor. This technology could store six samples (three per stereo channel) in a single horizontal line of video. Storing 44,100 samples per second per stereo channel required 60 field/s black-and-white video, and in NTSC countries that video signal has 245 usable lines per field (60 × 245 × 3 = 44,100); similarly, PAL has 294 usable lines and 50 fields, which gives the same 44,100 samples/s per stereo channel. This system could store 14-bit samples with some error correction, or 16-bit samples with almost no error correction.

There was a debate over the use of 14-bit or 16-bit quantization. When the Sony/Philips task force designed the Compact Disc, Philips had already developed a 14-bit D/A converter; in the end, 16 bits and 44.1 kilosamples per second prevailed. Philips found a way to produce 16-bit quality using its 14-bit DAC by using four-times oversampling. Some CDs are mastered with pre-emphasis, an artificial boost of high audio frequencies; the pre-emphasis improves the apparent signal-to-noise ratio by making better use of the channel's dynamic range.
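The arithmetic behind the 44.1 kHz figure is easy to verify; the short script below recomputes it from the video parameters given above (the script itself is only an illustration of that calculation).

# Recomputing the 44.1 kHz sampling rate from the PCM-adaptor video
# parameters described in the text.
SAMPLES_PER_LINE = 3      # samples per stereo channel stored in one video line
NTSC_FIELDS_PER_S = 60    # black-and-white field rate
NTSC_USABLE_LINES = 245   # usable lines per field (NTSC)
PAL_FIELDS_PER_S = 50
PAL_USABLE_LINES = 294

ntsc_rate = NTSC_FIELDS_PER_S * NTSC_USABLE_LINES * SAMPLES_PER_LINE
pal_rate = PAL_FIELDS_PER_S * PAL_USABLE_LINES * SAMPLES_PER_LINE
print(ntsc_rate, pal_rate)  # 44100 44100 samples/s per stereo channel

# Nyquist-Shannon: a 44,100 Hz rate can represent frequencies up to half
# the sampling rate, matching the 22.05 kHz limit quoted above.
print(ntsc_rate / 2)        # 22050.0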
32.
Hubble Space Telescope
–
The Hubble Space Telescope (HST) is a space telescope that was launched into low Earth orbit in 1990 and remains in operation. Although not the first space telescope, Hubble is one of the largest and most versatile. With a 2.4-meter mirror, Hubble's four main instruments observe in the near-ultraviolet, visible, and near-infrared spectra. Hubble's orbit outside the distortion of Earth's atmosphere allows it to take extremely high-resolution images, and it has recorded some of the most detailed visible-light images ever, allowing a deep view into space and time. Many Hubble observations have led to breakthroughs in astrophysics, such as determining the rate of expansion of the universe. The HST was built by the United States space agency NASA. The Space Telescope Science Institute selects Hubble's targets and processes the resulting data, while the Goddard Space Flight Center controls the spacecraft.

Space telescopes were proposed as early as 1923. Hubble was funded in the 1970s, with a proposed launch in 1983, but the project was beset by technical delays, budget problems, and the Challenger disaster. When finally launched in 1990, Hubble's main mirror was found to have been ground incorrectly; the optics were corrected to their intended quality by a servicing mission in 1993. Hubble is the only telescope designed to be serviced in space by astronauts. After launch by Space Shuttle Discovery in 1990, five subsequent Space Shuttle missions repaired, upgraded, and replaced systems on the telescope. The fifth mission was initially canceled on safety grounds following the Columbia disaster; however, after spirited public discussion, NASA administrator Mike Griffin approved it. The telescope is operating as of 2017 and could last until 2030–2040. Its scientific successor, the James Webb Space Telescope, is scheduled for launch in 2018.

The history of the Hubble Space Telescope can be traced back as far as 1946, to the astronomer Lyman Spitzer's paper "Astronomical advantages of an extraterrestrial observatory". In it, he discussed the two main advantages that a space-based observatory would have over ground-based telescopes. First, the angular resolution would be limited only by diffraction, rather than by the turbulence in the atmosphere. Second, a space-based telescope could observe infrared and ultraviolet light, which are strongly absorbed by the atmosphere. Spitzer devoted much of his career to pushing for the development of a space telescope.

Space-based astronomy had begun on a very small scale following World War II, as scientists made use of developments that had taken place in rocket technology. An orbiting solar telescope was launched in 1962 by the United Kingdom as part of the Ariel space program. OAO-1, the first of NASA's Orbiting Astronomical Observatory missions, suffered a battery failure after three days, terminating the mission; it was followed by OAO-2, which carried out observations of stars and galaxies from its launch in 1968 until 1972. The continuing success of the OAO program encouraged an increasingly strong consensus within the astronomical community that a Large Space Telescope (LST) should be a major goal.