University of Vermont
The University of Vermont and State Agricultural College is a public research university and, since 1862, the sole land-grant university in the U.S. state of Vermont. Founded in 1791, UVM is among the oldest universities in the United States and the fifth institution of higher education established in the New England region of the U.S. northeast. It is listed as one of the original eight "Public Ivy" institutions in the United States. The university is incorporated in the city of Burlington, Vermont's most populous municipality. The campus's Dudley H. Davis Center was the first student center in the United States to receive a Leadership in Energy and Environmental Design (LEED) Gold certification. The largest hospital complex in Vermont, the University of Vermont Medical Center, has its primary facility on the UVM campus and is affiliated with the Robert Larner College of Medicine.
The University of Vermont was founded as a private university in 1791, the same year Vermont became the 14th U.S. state. The university enrolled its first students ten years later. Its first president, the Rev. Daniel C. Sanders, was hired in 1800 and served as the sole faculty member for seven years. Instruction began in 1801, and the first class graduated in 1804. In 1865, the university merged with Vermont Agricultural College, emerging as the University of Vermont and State Agricultural College. The University of Vermont draws 6.8 percent of its annual budget of about $600 million from the State of Vermont; Vermont residents make up 35 percent of enrollment, while 65 percent of students come from elsewhere. Much of the initial funding and planning for the university was undertaken by Ira Allen, who is honored as UVM's founder. Allen donated a 50-acre parcel of land for the establishment of the university. Most of this land has been maintained as the university's main green, where a statue of Allen stands. The citizens of Burlington helped fund the university's first edifice and, when it was destroyed by fire in 1824, paid for its replacement.
This building came to be known as "Old Mill" for its resemblance to New England mills of the time. The Marquis de Lafayette, a French general who became a commander in the American Revolution, toured all 24 U.S. states in 1824–1825 and, while in Vermont, laid the cornerstone of Old Mill, which stands on University Row along with Ira Allen Chapel, Billings Library, Williams Hall, Royall Tyler Theatre and Morrill Hall. A statue of Lafayette stands at the north end of the main green. The University of Vermont was the first American college or university with a charter declaring that the "rules, by-laws shall not tend to give preference to any religious sect or denomination whatsoever." In 1871, UVM defied custom and admitted two women as students. Four years later, it was the first American university to admit women to full membership in the Phi Beta Kappa Society, the country's oldest collegiate academic honor society. In 1877, it initiated the first African American into the society.
Justin Smith Morrill, a U.S. Representative and Senator from Vermont and author of the Morrill Land-Grant Colleges Act, which created federal funding for establishing the U.S. land-grant colleges and universities, served as a trustee of the university from 1865 to 1898. In 1924, the first radio broadcast in Vermont originated from the college station, WCAX, run by students; WCAX is now the call sign of a commercial television station. For 73 years, until 1969, UVM held an annual "Kake Walk". The University of Vermont comprises seven undergraduate schools, an honors college, a graduate college and a college of medicine. The Honors College does not offer its own degrees. Bachelor's and doctoral programs are offered through the College of Agriculture and Life Sciences, the College of Arts and Sciences, the College of Education and Social Services, the College of Engineering and Mathematical Sciences, the College of Medicine, the College of Nursing and Health Sciences, the Graduate College, the School of Business Administration and the Rubenstein School of Environment and Natural Resources.
UVM is tied for 97th in U.S. News & World Report's 2018 national university rankings and tied for 41st among public universities. In 2016, Forbes' "America's Top Colleges" list ranked UVM 138th overall out of 660 private and public colleges and universities in America, 28th in the "Public Colleges" category and 64th among "Research Universities." The University of Vermont is ranked 40th on a list published by BusinessWeek.com of the top 50 U.S. colleges and universities whose bachelor's degree graduates earn the highest salaries. In 2014, an analysis of federal data found the University of Vermont to be among the top ten schools in the United States with the highest total rape reports, with 27 total rape reports on its main campus. The College of Arts and Sciences is the largest of UVM's schools and colleges, with the largest number of students and staff. The college offers the bulk of the foundational courses that help ensure students across campus have the tools to succeed in their academic endeavors.
It offers 45 areas of study in the humanities, fine arts, social sciences, natural sciences and physical sciences. UVM's Grossman School of Business Administration is accredited by AACSB International and offers concentrations in accounting, finance, human resource management, international management and the environment, management information systems and marketing.
The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB. The unit prefix mega is a multiplier of 1,000,000 in the International System of Units (SI); therefore, one megabyte is one million bytes of information. This definition has been incorporated into the International System of Quantities. However, in the computer and information technology fields, several other definitions are used that arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1,048,576 bytes, a measurement that conveniently expresses the binary multiples inherent in digital computer memory architectures. However, most standards bodies have deprecated this usage in favor of a set of binary prefixes, in which this quantity is designated by the unit mebibyte (MiB). Less common is a convention that uses the megabyte to mean 1000×1024 bytes. The megabyte is thus used to mean either 1000² bytes or 1024² bytes. The interpretation in base 1024 originated as technical jargon for the byte multiples that needed to be expressed by powers of 2 but lacked a convenient name.
As 1024 approximates 1000, corresponding to the SI prefix kilo-, it was a convenient term to denote the binary multiple. In 1998 the International Electrotechnical Commission (IEC) proposed standards for binary prefixes, requiring the use of megabyte to denote 1000² bytes and mebibyte to denote 1024² bytes. By the end of 2009, the IEC standard had been adopted by the IEEE, the EU, ISO and NIST. Nevertheless, the term megabyte continues to be used with different meanings. Base 10: 1 MB = 1,000,000 bytes is the definition recommended by the International System of Units and the IEC. This definition is used in networking contexts and for most storage media (hard drives, flash-based storage, DVDs), and is consistent with the other uses of SI prefixes in computing, such as CPU clock speeds or measures of performance. The Mac OS X 10.6 file manager is a notable example of this usage in software: since Snow Leopard, file sizes are reported in decimal units. In this convention, one thousand megabytes is equal to one gigabyte, where 1 GB is one billion bytes.
Base 2: 1 MB = 1,048,576 bytes is the definition used by Microsoft Windows in reference to computer memory, such as RAM. This definition is synonymous with the unambiguous binary prefix mebibyte. In this convention, one thousand and twenty-four megabytes is equal to one gigabyte, where 1 GB is 1024³ bytes. Mixed: 1 MB = 1,024,000 bytes is the definition used to describe the formatted capacity of the "1.44 MB" 3.5-inch HD floppy disk, which has a capacity of 1,474,560 bytes. Semiconductor memory doubles in size for each address line added to an integrated circuit package, which favors counts that are powers of two. The capacity of a disk drive, by contrast, is the product of the sector size, the number of sectors per track, the number of tracks per side and the number of disk platters in the drive; changes in any of these factors would not usually double the size. Sector sizes were set as powers of two for convenience in processing, and it was a natural extension to give the capacity of a disk drive in multiples of the sector size, giving a mix of decimal and binary multiples when expressing total disk capacity.
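The three conventions above are easy to mix up; as a quick numeric sketch (the constant names are illustrative, not standard identifiers):

```python
# The three historical meanings of "megabyte" described above.
MB_DECIMAL = 1000 ** 2      # SI/IEC definition: 1,000,000 bytes
MB_BINARY = 1024 ** 2       # memory convention (= 1 MiB): 1,048,576 bytes
MB_MIXED = 1024 * 1000      # floppy-disk convention: 1,024,000 bytes

# The "1.44 MB" floppy: 1440 mixed-convention kilobytes of 1024 bytes each.
floppy_bytes = 1440 * 1024
print(floppy_bytes)              # 1474560
print(floppy_bytes / MB_MIXED)   # 1.44
```

The "1.44" label is thus itself a mixed unit: the capacity divided by 1,024,000.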
Depending on compression methods and file format, a megabyte of data can be: a 1-megapixel bitmap image with 256 colors (one byte per pixel) stored without any compression; a 4-megapixel JPEG image with normal compression; 1 minute of 128 kbit/s MP3-compressed music; 6 seconds of uncompressed CD audio; or a typical English book volume in plain text format. The human genome consists of DNA representing about 800 MB of data; the parts that differentiate one person from another can be compressed to about 4 MB.
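The audio figures in the list above follow from simple rate arithmetic; a quick check (using the decimal megabyte, and the standard CD parameters of 44,100 16-bit samples per second in stereo):

```python
MB = 10 ** 6   # decimal megabyte

# 1 minute of 128 kbit/s MP3: bits/s -> bytes/s -> bytes per minute.
mp3_bytes = 128_000 // 8 * 60
print(mp3_bytes / MB)                  # 0.96, i.e. about 1 MB

# Uncompressed CD audio: 44,100 samples/s x 2 channels x 2 bytes/sample.
cd_bytes_per_s = 44_100 * 2 * 2
print(round(MB / cd_bytes_per_s, 2))   # 5.67, i.e. about 6 seconds per MB
```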
A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or fraction of the unit. While all metric prefixes in common use today are decadic, historically there have been a number of binary metric prefixes as well. Each prefix has a unique symbol that is prepended to the unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand: one kilogram is equal to one thousand grams. The prefix milli-, likewise, may be added to metre to indicate division by one thousand. Decimal multiplicative prefixes have been a feature of all forms of the metric system, with six of these dating back to the system's introduction in the 1790s. Metric prefixes have also been used with some non-metric units. The SI prefixes are standardized for use in the International System of Units (SI) by the International Bureau of Weights and Measures (BIPM) in resolutions dating from 1960 to 1991. Since 2009, they have formed part of the International System of Quantities. The BIPM specifies twenty prefixes for the International System of Units.
Each prefix name has a symbol that is used in combination with the symbols for units of measure. For example, the symbol for kilo- is 'k', and is used to produce 'km', 'kg' and 'kW', which are the SI symbols for kilometre, kilogram and kilowatt, respectively. Where the Greek letter 'μ' is unavailable, the micro sign 'µ' may be used; where both variants are unavailable, the micro prefix is written as the lowercase Latin letter 'u'. Prefixes corresponding to an integer power of one thousand are preferred. Hence '100 m' is preferred over '1 hm' or '10 dam'. The prefixes hecto, deca and centi are nevertheless used for everyday purposes, and the centimetre is common. However, some modern building codes require that the millimetre be used in preference to the centimetre, because "use of centimetres leads to extensive usage of decimal points and confusion". Prefixes may not be used in combination; this applies even to mass, for which the SI base unit (the kilogram) already contains a prefix. For example, milligram is used instead of microkilogram. In the arithmetic of measurements having units, the units are treated as multiplicative factors to values.
If they have prefixes, all but one of the prefixes must be expanded to their numeric multiplier, except when combining values with identical units. Hence:

5 mV × 5 mA = 5×10⁻³ V × 5×10⁻³ A = 25×10⁻⁶ V⋅A = 25 μW
5.00 mV + 10 μV = 5.00 mV + 0.01 mV = 5.01 mV

When powers of units occur, for example squared or cubed, the multiplication prefix must be considered part of the unit and thus included in the exponentiation. 1 km² means one square kilometre, or the area of a square of 1000 m by 1000 m, and not 1000 square metres. 2 Mm³ means two cubic megametres, or the volume of two cubes of 1,000,000 m by 1,000,000 m by 1,000,000 m, or 2×10¹⁸ m³, and not 2,000,000 cubic metres. Examples:

5 cm = 5×10⁻² m = 5 × 0.01 m = 0.05 m
9 km² = 9 × (10³ m)² = 9×10⁶ m² = 9,000,000 m²
3 MW = 3×10⁶ W = 3 × 1,000,000 W = 3,000,000 W

The use of prefixes can be traced back to the introduction of the metric system in the 1790s, long before the 1960 introduction of the SI. The prefixes, including those introduced after 1960, are used with any metric unit, whether included in the SI or not.
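The worked examples above can be checked mechanically by expanding each prefix to its numeric multiplier first (the PREFIX table below is an illustrative subset, not a complete SI list):

```python
# Expand SI prefixes to numeric multipliers before doing arithmetic.
PREFIX = {"M": 1e6, "k": 1e3, "c": 1e-2, "m": 1e-3, "u": 1e-6}

# 5 mV x 5 mA = 25 uW
p_watts = (5 * PREFIX["m"]) * (5 * PREFIX["m"])
print(f"{p_watts:.1e} W")        # 2.5e-05 W, i.e. 25 uW

# 9 km^2: the prefix is part of the unit, so it is squared as well.
area_m2 = 9 * PREFIX["k"] ** 2
print(area_m2)                   # 9000000.0
```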
Metric prefixes may be used with non-metric units. The choice of prefixes with a given unit is dictated by convenience of use; unit prefixes for amounts that are much larger or smaller than those actually encountered are seldom used. The units kilogram, milligram and smaller are commonly used for measurement of mass, whereas megagram and larger are rarely used. Megagram and teragram are sometimes used to disambiguate the metric tonne from other units with the name 'ton'. The kilogram is the only base unit of the International System of Units that includes a metric prefix. The litre, millilitre and smaller volumes are common. In Europe, the centilitre is used for packaged products such as wine, and the decilitre less frequently; these latter two units include prefixes corresponding to an exponent not divisible by three. Larger volumes are denoted in kilolitres, megalitres or gigalitres, or else in cubic metres or cubic kilometres; for scientific purposes, the cubic metre is used. The kilometre, centimetre and smaller lengths are common. The micrometre is often referred to by the non-SI term micron.
In some fields, such as chemistry, the ångström historically competed with the nanometre. The femtometre, used in particle physics, is sometimes called a fermi. For large scales, megametre and larger are rarely used; instead, non-metric units are used, such as astronomical units, light years and parsecs. The second, millisecond and shorter are common. The kilosecond and megasecond have some use, though for these and longer times one uses either scientific notation or minutes, hours and so on. The SI unit of angle is the radian, but degrees and seconds see some scientific use. Official policy also varies from common practice for the degree Celsius. NIST states: "Prefix symbols may be used with the unit symbol °C and prefix names may be used with the unit name 'degree Celsius'. For example, 12 m°C (12 millidegrees Celsius) is acceptable."
A binary prefix is a unit prefix for multiples of units in data processing, data transmission and digital information, notably the bit and the byte, to indicate multiplication by a power of 2. The computer industry has used the units kilobyte, megabyte and gigabyte, and the corresponding symbols KB, MB and GB, in at least two different measurement systems. In citations of main memory capacity, gigabyte customarily means 1,073,741,824 bytes; as this is the third power of 1024, and 1024 is itself a power of two (2¹⁰), this usage is referred to as a binary measurement. In most other contexts, the industry uses the multipliers kilo, mega, giga, etc. in a manner consistent with their meaning in the International System of Units, namely as powers of 1000. For example, a 500-gigabyte hard disk holds 500,000,000,000 bytes, and a 1 Gbit/s Ethernet connection transfers data at 1,000,000,000 bit/s. In contrast with the binary prefix usage, this use is described as a decimal prefix, as 1000 is a power of 10. The use of the same unit prefixes with two different meanings has caused confusion.
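The size of the discrepancy between the two readings grows with each power of the prefix; a minimal sketch of both interpretations of the same byte count:

```python
n = 500_000_000_000                 # a "500 GB" hard disk, decimal prefix

print(n / 1000 ** 3)                # 500.0 decimal gigabytes
print(round(n / 1024 ** 3, 1))      # 465.7 binary "gigabytes" (gibibytes)
```

This roughly 7% gap at the giga level is the usual source of "missing" disk capacity complaints.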
Starting around 1998, the International Electrotechnical Commission (IEC) and several other standards and trade organizations addressed the ambiguity by publishing standards and recommendations for a set of binary prefixes that refer to powers of 1024. Accordingly, the US National Institute of Standards and Technology (NIST) requires that SI prefixes be used only in the decimal sense: kilobyte and megabyte denote one thousand bytes and one million bytes respectively, while new terms such as kibibyte, mebibyte and gibibyte, with the symbols KiB, MiB and GiB, denote 1024 bytes, 1,048,576 bytes and 1,073,741,824 bytes, respectively. In 2008, the IEC prefixes were incorporated into the international standard system of units used alongside the International System of Quantities. Early computers used one of two addressing methods to access the system memory: binary or decimal. For example, the IBM 701 used binary and could address 2048 words of 36 bits each, while the IBM 702 used decimal and could address ten thousand 7-bit words. By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two.
This is the most natural configuration for memory, as all combinations of the address lines map to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses. Early computer system documentation would specify the memory size with an exact number such as 4096, 8192, or 16384 words of storage; these are all powers of two, and furthermore are small multiples of 2¹⁰, or 1024. As storage capacities increased, several different methods were developed to abbreviate these quantities. The method most used today uses prefixes such as kilo, mega and giga, and the corresponding symbols K, M and G, which the computer industry adopted from the metric system. The prefixes kilo- and mega-, meaning 1000 and 1,000,000, were in use in the electronics industry before World War II. Along with giga- (G-), meaning 1,000,000,000, they are now known as SI prefixes, after the International System of Units, introduced in 1960 to formalize aspects of the metric system. The International System of Units does not define units for digital information, but notes that the SI prefixes may be applied outside the contexts where base units or derived units would be used.
But because computer main memory in a binary-addressed system is manufactured in sizes that are naturally expressed as multiples of 1024, the prefixes, when applied to computer memory, came to be used to mean 1024 bytes instead of 1000. This usage is not consistent with the SI: compliance with the SI requires that the prefixes take their 1000-based meaning and that they not be used as placeholders for other numbers, like 1024. The use of K in the binary sense, as in a "32K core" meaning 32 × 1024 words, i.e. 32,768 words, can be found as early as 1959. Gene Amdahl's seminal 1964 article on the IBM System/360 used "1K" to mean 1024; this style was used by other computer vendors, and the CDC 7600 System Description made extensive use of K as 1024. Thus the first binary prefix was born. Another style was to truncate the last three digits and append K, using K as a decimal prefix similar to SI, but always truncating to the next lower whole number instead of rounding to the nearest. Under this style the exact values 32,768 words, 65,536 words and 131,072 words would be described as "32K", "65K" and "131K".
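The two abbreviation styles described above give different labels for the same memory sizes; a small sketch (the function names are illustrative):

```python
def binary_k(words: int) -> str:
    """'K' meaning exactly 1024, e.g. 32768 -> '32K'."""
    return f"{words // 1024}K"

def truncated_k(words: int) -> str:
    """Decimal K with the last three digits truncated, not rounded."""
    return f"{words // 1000}K"

for n in (32768, 65536, 131072):
    print(binary_k(n), truncated_k(n))
# 32K 32K
# 64K 65K
# 128K 131K
```

The two styles agree at 32K and diverge from 64K/65K upward, matching the examples in the text.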
This style was used from about 1965 to 1975. The two styles were used loosely around the same time, sometimes even by the same company, and in discussions of binary-addressed memories the exact size was evident from context. The HP 21MX real-time computer denoted 196,608 as "196K" and 1,048,576 as "1M", while the HP 3000 business computer could have "64K", "96K", or "128K" bytes of memory. The "truncation" method gradually waned. Capitalization of the letter K became the de facto standard for binary notation, although this could not be extended to higher powers, and use of the lowercase k did persist. The practice of using the SI-inspired "kilo" to indicate 1024 was extended to "megabyte" meaning 1024² bytes and "gigabyte" for 1024³ bytes. For example, a "512 megabyte" RAM module is 512×1024² bytes, rather than 512,000,000. The symbols Kbit, Kbyte and Mbyte started to be used as "binary units"—"bit" or "
Long and short scales
The long and short scales are two of several large-number naming systems for integer powers of ten that use the same words with different meanings. The long scale is based on powers of one million, whereas the short scale is based on powers of one thousand. For whole numbers less than a thousand million the two scales are identical; from a thousand million up, the two scales diverge, using the same words for different numbers, which can cause misunderstanding. In the short scale, every new term greater than million is one thousand times as large as the previous term: billion means a thousand millions, trillion means a thousand billions, and so on; thus a short-scale n-illion equals 10³ⁿ⁺³. In the long scale, every new term greater than million is one million times as large as the previous term: billion means a million millions, trillion means a million billions, and so on; thus a long-scale n-illion equals 10⁶ⁿ. Countries where the long scale is used include most countries in continental Europe and most French-speaking, Spanish-speaking and Portuguese-speaking countries, except Brazil.
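The two defining formulas above can be checked numerically; a short sketch (the function names are illustrative):

```python
# n-illion values: n = 1 is million, n = 2 is billion, n = 3 is trillion, ...
def short_scale(n: int) -> int:
    return 10 ** (3 * n + 3)     # each step multiplies by one thousand

def long_scale(n: int) -> int:
    return 10 ** (6 * n)         # each step multiplies by one million

print(short_scale(1) == long_scale(1))   # True: million (10**6) agrees
print(short_scale(2), long_scale(2))     # 1000000000 1000000000000
```

The scales agree only at million (n = 1); from billion (n = 2) on, the long-scale value is always larger.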
The short scale is now used in most English-speaking and Arabic-speaking countries, in Brazil, in the former Soviet Union and in several other countries. Number names are rendered in the language of the country, but are similar everywhere due to shared etymology. Some languages, particularly in East Asia and South Asia, have large-number naming systems that are different from both the long and short scales, for example the Indian numbering system. For most of the 19th and 20th centuries, the United Kingdom used the long scale, whereas the United States used the short scale, so that the two systems were often referred to as British and American in the English language. After several decades of increasing informal British usage of the short scale, the government of the UK adopted it in 1974, and it is used for all official purposes. With few exceptions, British usage and American usage are now identical. The first recorded use of the terms short scale and long scale was by the French mathematician Geneviève Guitel in 1975.
To avoid confusion resulting from the coexistence of the short and long scales in any language, the SI recommends using the metric prefixes, which keep the same meaning regardless of country and language. Long and short scales nevertheless remain in de facto use for counting money. The correspondence between names and numeric values in the two scales is, in outline:

million: 10⁶ in both scales
billion: 10⁹ (short scale); 10¹² (long scale)
trillion: 10¹² (short scale); 10¹⁸ (long scale)

The root mil in million does not refer to the numeral 1. The word derives from Old French, from the earlier Old Italian milione, an intensification of the Latin word mille, a thousand; that is, a million is a "big thousand", much as a great gross is a dozen gross, or 12×144 = 1728. The word milliard, or its translation, is found in many European languages and is used in those languages for 10⁹. However, it is unknown in American English, which uses billion, and not used in British English, which preferred to use thousand million before the current usage of billion.
The financial term yard, which derives from milliard, is used on financial markets because, unlike the term billion, it is internationally unambiguous and phonetically distinct from million. Many long-scale countries use the word billiard for one thousand long-scale billions, the word trilliard for one thousand long-scale trillions, and so on. The existence of the different scales means that care must be taken when comparing large numbers between languages or countries, or when interpreting old documents in countries where the dominant scale has changed over time. For example, British English, French and Italian historical documents can refer to either the short or long scale, depending on the date of the document, since each of the three countries has used both systems at various times in its history. Today, the United Kingdom uses the short scale, but France and Italy use the long scale. The pre-1974 former British English word billion, the post-1961 current French word billion, the post-1994 current Italian word bilione and the German Billion all refer to 10¹².
Therefore, each of these words translates to the American English or post-1974 British English word trillion, not billion. On the other hand, the pre-1961 former French word billion, the pre-1994 former Italian word bilione, the Brazilian Portuguese word bilhão and the Welsh word biliwn all refer to 10⁹, being short-scale terms; each of these words translates to the post-1974 British English word billion. The term billion meant 10¹² when it was introduced. In long-scale countries, milliard was defined to its current value of 10⁹, leaving billion at its original 10¹² value, and so on for the larger numbers; some of these countries, but not all, introduced the new words billiard, etc. as intermediate terms. In some short-scale countries, milliard was defined to 10⁹ and billion dropped altogether, with trillion redefined down to 10¹² and so on for the larger numbers. In many short-scale countries, milliard was dropped altogether and billion was redefined down to 10⁹, adjusting downwards the value of trillion and all
World Wide Web
The World Wide Web (WWW), commonly known as the Web, is an information space where documents and other web resources are identified by Uniform Resource Locators (URLs), which may be interlinked by hypertext, and are accessible over the Internet. The resources of the WWW may be accessed by users via a software application called a web browser. English scientist Tim Berners-Lee invented the World Wide Web in 1989; he wrote the first web browser in 1990 while employed at CERN near Geneva, Switzerland. The browser was released outside CERN in 1991, first to other research institutions starting in January 1991 and then to the general public in August 1991. The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet. Web resources may be any type of downloadable media, but web pages are hypertext media that have been formatted in Hypertext Markup Language (HTML); such formatting allows for embedded hyperlinks that contain URLs and permit users to navigate to other web resources.
In addition to text, web pages may contain images, video and software components that are rendered in the user's web browser as coherent pages of multimedia content. Multiple web resources with a common theme, a common domain name, or both, make up a website. Websites are stored in computers that are running a program called a web server that responds to requests made over the Internet from web browsers running on a user's computer. Website content can be provided by a publisher, or interactively where users contribute content or the content depends upon the users or their actions. Websites may be provided for a myriad of informative, commercial, governmental, or non-governmental reasons. Tim Berners-Lee's vision of a global hyperlinked information system became a possibility by the second half of the 1980s. By 1985, the global Internet began to proliferate in Europe and the Domain Name System came into being. In 1988 the first direct IP connection between Europe and North America was made and Berners-Lee began to discuss the possibility of a web-like system at CERN.
While working at CERN, Berners-Lee became frustrated with the inefficiencies and difficulties posed by finding information stored on different computers. On March 12, 1989, he submitted a memorandum, titled "Information Management: A Proposal", to the management at CERN for a system called "Mesh" that referenced ENQUIRE, a database and software project he had built in 1980, which used the term "web" and described a more elaborate information management system based on links embedded as text: "Imagine the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document, you could skip to them with a click of the mouse." Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext, a term that he says was coined in the 1950s. There is no reason, the proposal continues, why such hypertext links could not encompass multimedia documents including graphics and video, and so Berners-Lee goes on to use the term hypermedia.
With help from his colleague and fellow hypertext enthusiast Robert Cailliau, he published a more formal proposal on 12 November 1990 to build a "Hypertext project" called "WorldWideWeb" as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture. At this point HTML and HTTP had been in development for about two months, and the first web server was about a month from completing its first successful test. The proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, authorship becomes universal", as well as "the automatic notification of a reader when new material of interest to him/her has become available". While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, WebDAV, Web 2.0 and RSS/Atom. The proposal was modelled after the SGML reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University.
The Dynatext system, licensed by CERN, was a key player in the extension of SGML ISO 8879:1986 to hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high-energy-physics community, namely a fee for each document and each document alteration. A NeXT Computer was used by Berners-Lee as the world's first web server, and also to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first web browser and the first web server. The first web site, which described the project itself, was published on 20 December 1990. The first web page may be lost, but Paul Jones of UNC-Chapel Hill in North Carolina announced in May 2013 that Berners-Lee gave him what he says is the oldest known web page during a 1991 visit to UNC; Jones stored it on his NeXT computer. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext.
This date is sometimes confused with the public availability of the first web servers, which had occurred months earlier. As another example of such confusion, several news media reported that the first photo on the Web was published by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes taken by Silvano de Gennaro.
Computer data storage
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. The fast volatile technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". In the von Neumann architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, pictures and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
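A minimal illustration of text as bytes and bits (the snippet is illustrative; the five-megabyte and 1250-page figures are the article's estimates):

```python
# ASCII/UTF-8 encodes each English character as one byte (8 bits).
text = "To be, or not to be"
data = text.encode("ascii")
print(len(data))             # 19 bytes
print(len(data) * 8)         # 152 bits

# At one byte per character, ~5 MB for ~1250 printed pages implies:
chars = 5 * 10 ** 6
print(chars / 1250)          # 4000.0 characters per page
```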
Many standards exist for encoding. By adding bits to each encoded unit, redundancy allows the computer both to detect errors in coded data and to correct them based on mathematical algorithms. Errors occur with low probability, due to random flipping of bit values, to "physical bit fatigue" (the loss of a physical bit's ability to maintain a distinguishable value), or to errors in inter- or intra-computer communication. A random bit flip is corrected upon detection. A bit or group of malfunctioning physical bits is automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored. The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods make it possible in many cases to represent a string of bits by a shorter bit string and to reconstruct the original string when needed; this uses less storage for many types of data at the cost of more computation.
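Both mechanisms are available in Python's standard zlib module; a sketch (real storage devices use their own CRC polynomials and compression codecs):

```python
import zlib

# CRC-32 for error detection, DEFLATE for lossless compression.
data = bytearray(b"stored payload " * 20)
checksum = zlib.crc32(data)

data[0] ^= 0b00000100                  # a single flipped bit ...
print(zlib.crc32(data) == checksum)    # False -> error detected, retry
data[0] ^= 0b00000100                  # ... restored

packed = zlib.compress(bytes(data))
assert zlib.decompress(packed) == bytes(data)
print(len(packed) < len(data))         # True: repetitive data shrinks
```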
An analysis of the trade-off between the storage cost saving and the costs of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. The lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary and off-line storage is also guided by cost per bit. In contemporary usage, "memory" is usually fast but temporary semiconductor read-write storage, typically DRAM or other forms of fast but volatile storage. "Storage" consists of storage devices and their media not directly accessible by the CPU (hard disk drives, optical disc drives and other devices slower than RAM), but non-volatile. Memory has also been called core memory, main memory, real storage or internal memory, while non-volatile storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
Primary storage, often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required; any data actively operated on is also stored there, in a uniform manner. Early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive; this led to modern random-access memory (RAM).