In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers and which require fast processing times. A number is, in general, represented to a fixed number of significant digits and scaled using an exponent in some fixed base. A number that can be represented is of the following form: significand × base^exponent, where the significand is an integer, the base is an integer greater than or equal to two, and the exponent is also an integer. For example: 1.2345 = 12345 × 10^−4, with significand 12345, base 10, and exponent −4. The term floating point refers to the fact that a number's radix point can "float"; this position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A floating-point system can be used to represent, with a fixed number of digits, numbers of different orders of magnitude: e.g. the distance between galaxies or the diameter of an atomic nucleus can be expressed with the same unit of length.
The result of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s the most commonly encountered representations are those defined by the IEEE. The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system for applications that involve intensive mathematical calculations. A floating-point unit is a part of a computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number as a string of digits. There are several mechanisms. In common mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character there. If the radix point is not specified, the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit.
In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. In scientific notation, the given number is scaled by a power of 10 so that it lies within a certain range—typically between 1 and 10, with the radix point appearing after the first digit; the scaling factor, as a power of ten, is indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047×10^5 seconds. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: A signed digit string of a given length in a given base; this digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision; the radix point position is assumed always to be somewhere within the significand—often just after or just before the most significant digit, or to the right of the rightmost digit.
This article follows the convention that the radix point is set just after the most significant digit. A signed integer exponent. To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative. Using base-10 as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047×10^5, or 152,853.5047. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers and can thus be inferred. Symbolically, this final value is (s / b^(p−1)) × b^e, where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base, and e is the exponent.
In computing, minifloats are floating-point values represented with very few bits. Predictably, they are not well suited for general-purpose numerical calculations; they are used for special purposes, mostly in computer graphics, where iterations are small and precision has aesthetic effects. Additionally, they are encountered as a pedagogical tool in computer-science courses to demonstrate the properties and structures of floating-point arithmetic and IEEE 754 numbers. Minifloats with 16 bits are half-precision numbers. There are also minifloats with 8 bits or even fewer. Minifloats can be designed following the principles of the IEEE 754 standard. In this case they must obey the rules for the frontier between subnormal and normal numbers and must have special patterns for infinity and NaN. Normalized numbers are stored with a biased exponent. The new revision of the standard, IEEE 754-2008, has 16-bit binary minifloats. The Radeon R300 and R420 GPUs used an "fp24" floating-point format with 7 bits of exponent and 16 bits of mantissa.
"Full Precision" in Direct3D 9.0 is a proprietary 24-bit floating-point format. Microsoft's D3D9 graphics API supported both FP24 and FP32 as "Full Precision", as well as FP16 as "Partial Precision", for vertex and pixel shader calculations performed by the graphics hardware. In computer graphics minifloats are sometimes used to represent only integral values. If at the same time subnormal values should exist, the least subnormal number has to be 1; this requirement can be used to calculate the bias value. The following example demonstrates the calculation, as well as the underlying principles. A minifloat in 1 byte with 1 sign bit, 4 exponent bits and 3 mantissa bits is to be used to represent integral values. All IEEE 754 principles should be valid. The only free value is the exponent bias, which will come out as −2. The unknown exponent is for the moment called x. Numbers in a different base are marked with a subscript, for example 101₂ = 5. The bit patterns have spaces to visualize their parts.

0 0000 000 = 0

The mantissa is extended with "0." (subnormal numbers):
0 0000 001 = 0.001₂ × 2^x = 0.125 × 2^x = 1
...
0 0000 111 = 0.111₂ × 2^x = 0.875 × 2^x = 7

The mantissa is extended with "1." (normalized numbers):
0 0001 000 = 1.000₂ × 2^x = 1 × 2^x = 8
0 0001 001 = 1.001₂ × 2^x = 1.125 × 2^x = 9
...
0 0010 000 = 1.000₂ × 2^(x+1) = 1 × 2^(x+1) = 16
0 0010 001 = 1.001₂ × 2^(x+1) = 1.125 × 2^(x+1) = 18
...
0 1110 000 = 1.000₂ × 2^(x+13) = 1.000 × 2^(x+13) = 65536
0 1110 001 = 1.001₂ × 2^(x+13) = 1.125 × 2^(x+13) = 73728
...
0 1110 110 = 1.110₂ × 2^(x+13) = 1.750 × 2^(x+13) = 114688
0 1110 111 = 1.111₂ × 2^(x+13) = 1.875 × 2^(x+13) = 122880

0 1111 000 = +infinity
1 1111 000 = −infinity

If the exponent field were not treated specially, the value would be
0 1111 000 = 1.000₂ × 2^(x+14) = 2^17 = 131072

x 1111 yyy = NaN (for yyy ≠ 000)

Without the IEEE 754 special handling of the largest exponent, the greatest possible value would be
0 1111 111 = 1.111₂ × 2^(x+14) = 1.875 × 2^17 = 245760

If the least subnormal value is to be 1, then 0.125 × 2^x = 1, so x has to be x = 3.
Therefore the bias has to be −2. A chart of all possible values follows from treating the format as an IEEE-style float. Due to the severe lack of precision of 8-bit floats, they are best used only scaled to integral values; other values are not represented exactly but are rounded. Integral minifloats in 1 byte have a greater range (±122,880) than a two's-complement integer, whose range is −128 to +127. The greater range is compensated by poor precision, because there are only 4 significand bits, equivalent to slightly more than one decimal digit. They also have greater range than half-precision minifloats, whose range is ±65,504, compensated by the lack of fractions and by poor precision. There are only 242 different values (counting +0 and −0 separately), because 14 bit patterns represent NaN. The values between 0 and 16 have the same bit pattern as the corresponding two's-complement integers. The first pattern with a different value is 0 0010 001, which is 18 as a minifloat and 17 as a two's-complement integer. This coincidence does not occur at all with negative values, because this minifloat is a signed-magnitude format.
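The derivation above (1 sign bit, 4 exponent bits, 3 mantissa bits, bias −2) can be checked with a short decoder. This is an illustrative sketch, not from the original article; the function name is my own.

```python
import math

def decode_minifloat(byte):
    # decode a (1.4.3) minifloat with exponent bias -2, as derived above
    sign = -1.0 if byte & 0x80 else 1.0
    E = (byte >> 3) & 0xF          # 4 exponent bits
    M = byte & 0x7                 # 3 mantissa bits
    bias = -2
    if E == 0xF:                   # largest exponent: infinity or NaN
        return sign * math.inf if M == 0 else math.nan
    if E == 0:                     # subnormal: 0.M (base 2) * 2**(1 - bias)
        return sign * (M / 8) * 2.0 ** (1 - bias)
    return sign * (1 + M / 8) * 2.0 ** (E - bias)  # normal: 1.M * 2**(E - bias)

# spot-checks against the table: 0 0001 000 -> 8, 0 0010 001 -> 18,
# 0 1110 111 -> 122880, and the least subnormal 0 0000 001 -> 1
```

Decoding all 256 byte values reproduces the chart of possible values, including the 14 NaN patterns.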
The real line on the right shows the varying density of the floating-point values, a property common to any floating-point system. This varying density results in a curve similar to the exponential function. Although the curve may appear smooth, this is not the case: the graph consists of distinct points, and these points lie on line segments with discrete slopes. The value of the exponent bits determines the absolute precision of the mantissa bits, and it is this precision that determines the slope of each linear segment. The graphic demonstrates the addition of even smaller minifloats with 6 bits. This floating-point system follows the rules of IEEE 754 exactly. NaN as an operand always produces NaN results. Inf − Inf and −Inf + Inf result in NaN too. Inf can be decremented by finite values without change. Sums with finite operands can give an infinite result; the range of the fini
String (computer science)
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed. A string is considered a data type and is often implemented as an array data structure of bytes that stores a sequence of elements, typically characters, using some character encoding. String may also denote more general arrays or other sequence data types and structures. Depending on the programming language and the precise data type used, a variable declared to be a string may either cause storage in memory to be statically allocated for a predetermined maximum length or employ dynamic allocation to allow it to hold a variable number of elements. When a string appears in source code, it is known as a string literal or an anonymous string. In formal languages, which are used in mathematical logic and theoretical computer science, a string is a finite sequence of symbols that are chosen from a set called an alphabet. Let Σ be a non-empty finite set of symbols, called the alphabet.
No assumption is made about the nature of the symbols. A string over Σ is any finite sequence of symbols from Σ. For example, if Σ = {0, 1}, then 01011 is a string over Σ. The length of a string s can be any non-negative integer. The empty string is the unique string over Σ of length 0 and is denoted ε or λ. The set of all strings over Σ of length n is denoted Σ^n. For example, if Σ = {0, 1}, then Σ^2 = {00, 01, 10, 11}. Note that Σ^0 = {ε} for any alphabet Σ. The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σ^n,

Σ* = ⋃_{n ≥ 0} Σ^n

For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, …}. Although the set Σ* itself is countably infinite, each element of Σ* is a string of finite length. A set of strings over Σ is called a formal language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros is a formal language over Σ. Concatenation is an important binary operation on Σ*. For any two strings s and t in Σ*, their concatenation is defined as the sequence of symbols in s followed by the sequence of symbols in t, and is denoted st.
For example, if s = bear and t = hug, then st = bearhug and ts = hugbear. String concatenation is a non-commutative operation; the empty string ε serves as the identity element. Therefore, the set Σ* and the concatenation operation form a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers. A string s is said to be a substring or factor of t if there exist strings u and v such that t = usv. The relation "is a substring of" defines a partial order on Σ*, whose least element is the empty string. A string s is said to be a prefix of t if there exists a string u such that t = su. If u is nonempty, s is said to be a proper prefix of t. Symmetrically, a string s is said to be a suffix of t if there exists a string u such that t = us. If u is nonempty, s is said to be a proper suffix of t. Suffixes and prefixes are substrings of t. Both the relations "is a prefix of" and "is a suffix of" are prefix orders. A string s = uv is said to be a rotation of t if t = vu.
For example, if Σ = {0, 1}, the string 0011001 is a rotation of 0100110, where u = 00110 and v = 01. The reverse of a string is a string with the same symbols in reverse order. For example, if s = abc, the reverse of s is cba. A string that is the reverse of itself is called a palindrome; palindromes include the empty string and all strings of length 1. It is often useful to define an ordering on a set of strings. If the alphabet Σ has a total order, one can define a total order on Σ* called lexicographical order. For example, if Σ = {0, 1} and 0 < 1, the lexicographical order on Σ* includes the relationships ε < 0 < 00 < 000 < … < 0001 < 001 < 01 < 010 < 011 < 0110 < 01111 < 1 < 10 < 100 < 101 < 111 < 1111 < 11111 < …. The lexicographical order is total if the alphabetical order is, but it is not well-founded for any nontrivial alphabet, even if the alphabetical order is. See Shortlex for an alternative string ordering that preserves well-foundedness. A number of additional operations on strings occur in the formal theory; these are given in the article on string operations.
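The operations above map directly onto Python's string type. This is a small illustrative sketch (the `is_rotation` helper is my own name, not from the article); it checks the article's own examples.

```python
def is_rotation(s, t):
    # s = uv is a rotation of t = vu exactly when s occurs inside t + t
    return len(s) == len(t) and s in t + t

s, t = "bear", "hug"
assert s + t == "bearhug" and t + s == "hugbear"   # concatenation is non-commutative
assert s.startswith("be") and s.endswith("ar")     # "be" is a prefix, "ar" a suffix
assert "abc"[::-1] == "cba"                        # reverse
assert is_rotation("0011001", "0100110")           # rotation example from the text
# Python's sort on strings is exactly lexicographical order:
assert sorted(["1", "0", "01", "00"]) == ["0", "00", "01", "1"]
```

The doubling trick in `is_rotation` works because every rotation vu of t appears as a substring of tt = vuvu.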
Strings admit the following interpretation as nodes on a graph:
Fixed-length strings can be viewed as nodes on a hypercube.
Variable-length strings can be viewed as nodes on the k-ary tree, where k is the number of symbols in Σ.
Infinite strings can be viewed as i
Interval arithmetic, also called interval mathematics, interval analysis, or interval computation, is a method developed by mathematicians since the 1950s and 1960s as an approach to putting bounds on rounding errors and measurement errors in mathematical computation, and thus developing numerical methods that yield reliable results. Put simply, it represents each value as a range of possibilities. For example, instead of estimating someone's height using standard arithmetic as 2.0 metres, using interval arithmetic we might only be certain that the person is somewhere between 1.97 and 2.03 metres. This concept is suitable for a variety of purposes. The most common use is to keep track of and handle rounding errors directly during the calculation, and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter arise from measurement errors and tolerances for components, or due to limits on computational accuracy. Interval arithmetic also helps find reliable and guaranteed solutions to equations and optimization problems.
Mathematically, instead of working with an uncertain real number x, we work with the two endpoints of an interval [a, b] that contains x. In interval arithmetic, the variable x could be any value in [a, b]. A function f, when applied to an uncertain x, is itself uncertain; in interval arithmetic, f produces an interval containing all the possible values of f(x) for all x ∈ [a, b]. The main focus of interval arithmetic is the simplest way to calculate upper and lower endpoints for the range of values of a function in one or more variables. These endpoints are not necessarily the true supremum or infimum, since the precise calculation of those values can be difficult or impossible. Treatment is here limited to real intervals, so quantities of the form [a, b] = {x ∈ ℝ : a ≤ x ≤ b}, where a = −∞ or b = ∞ are allowed. As with traditional calculations with real numbers, simple arithmetic operations and functions on elementary intervals must first be defined. More complicated functions can then be calculated from these basic elements. Take as an example the calculation of body mass index (BMI); the BMI is the body weight in kilograms divided by the square of height in metres.
A bathroom scale may have a resolution of one kilogram. We then do not know intermediate values – whether the weight is about 79.6 kg or 80.3 kg – but only information rounded to the nearest whole number. It is unlikely that when the scale reads 80 kg, someone weighs exactly 80.0 kg. In normal rounding to the nearest value, a scale showing 80 kg indicates a weight between 79.5 kg and 80.5 kg. The relevant range is that of all real numbers greater than or equal to 79.5 and less than or equal to 80.5, in other words the interval [79.5, 80.5]. For a man who weighs 80 kg and is 1.80 m tall, the BMI is about 24.7. With a weight of 79.5 kg and the same height the value is about 24.5, while 80.5 kg gives about 24.9. So the actual BMI is in the range [24.5, 24.9]; the error in this case does not affect the conclusion. However, weight fluctuates in the course of a day, so that the BMI can vary between 24 and 25. Without detailed analysis it is not always possible to exclude the question of whether an error is large enough to have a significant influence. Interval arithmetic states the range of possible outcomes explicitly.
Put simply, results are no longer stated as single numbers, but as intervals that represent imprecise values. The size of an interval, similar to error bars, expresses the extent of uncertainty. Simple arithmetic operations, as well as functions such as trigonometric functions, can be extended to calculate outer limits of intervals. Returning to the earlier BMI example: in determining the body mass index, height and body weight both affect the result. For height, measurements are in round centimetres: a recorded measurement of 1.80 metres means a height somewhere between 1.795 m and 1.805 m. This uncertainty must be combined with the fluctuation range in weight between 79.5 kg and 80.5 kg. The BMI is defined as the weight in kilograms divided by the square of the height in metres. Using either 79.5 kg and 1.795 m or 80.5 kg and 1.805 m gives approximately 24.7. But the person in question may be only 1.795 m tall, with a weight of 80.5 kg – or 1.805 m and 79.5 kg: all combinations of all possible intermediate values must be considered.
Using the interval arithmetic methods described below, the BMI lies in the interval [79.5, 80.5] / [1.795, 1.805]^2 = [24.4, 25.0].
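The BMI computation can be sketched with a minimal interval class. This is an illustrative sketch, assuming strictly positive intervals (which holds for weights and heights), so no sign-case analysis is needed for division.

```python
class Interval:
    """Closed interval [lo, hi]; division assumes positive endpoints."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __truediv__(self, other):
        # smallest quotient: smallest numerator over largest denominator,
        # largest quotient: largest numerator over smallest denominator
        return Interval(self.lo / other.hi, self.hi / other.lo)

    def square(self):
        # monotone on positive intervals, so just square the endpoints
        return Interval(self.lo ** 2, self.hi ** 2)

weight = Interval(79.5, 80.5)    # scale shows 80 kg, rounded to the nearest kg
height = Interval(1.795, 1.805)  # tape shows 1.80 m, rounded to the nearest cm
bmi = weight / height.square()
print(round(bmi.lo, 1), round(bmi.hi, 1))  # 24.4 25.0
```

A production implementation would also round the lower endpoint down and the upper endpoint up in floating point (outward rounding) to keep the enclosure guarantee.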
In computer science, an associative array, symbol table, or dictionary is an abstract data type composed of a collection of (key, value) pairs, such that each possible key appears at most once in the collection. Operations associated with this data type allow: the addition of a pair to the collection, the removal of a pair from the collection, the modification of an existing pair, and the lookup of a value associated with a particular key. The dictionary problem is a classic computer science problem: the task of designing a data structure that maintains a set of data during 'search', 'delete', and 'insert' operations. The two major solutions to the dictionary problem are a hash table and a search tree. In some cases it is also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures. Many programming languages include associative arrays as primitive data types, and they are available in software libraries for many others. Content-addressable memory is a form of direct hardware-level support for associative arrays.
Associative arrays have many applications, including such fundamental programming patterns as memoization and the decorator pattern. The name does not come from the associative property known in mathematics. Rather, it arises from the fact that values are associated with keys. In an associative array, the association between a key and a value is known as a "binding", and the same word "binding" may be used to refer to the process of creating a new association. The operations that are defined for an associative array are: Add or insert: add a new (key, value) pair to the collection, binding the new key to its new value. The arguments to this operation are the key and the value. Reassign: replace the value in one of the pairs that are in the collection, binding an old key to a new value. As with an insertion, the arguments to this operation are the key and the value. Remove or delete: remove a (key, value) pair from the collection, unbinding a given key from its value. The argument to this operation is the key. Lookup: find the value, if any, that is bound to a given key. The argument to this operation is the key, and the value is returned from the operation.
If no value is found, some associative array implementations raise an exception, while others create a pair with the given key and the default value of the value type. Often, instead of add or reassign, there is a single set operation that adds a new pair if one does not already exist, and otherwise reassigns it. In addition, associative arrays may include other operations such as determining the number of bindings or constructing an iterator to loop over all the bindings. For such an operation, the order in which the bindings are returned may be arbitrary. A multimap generalizes an associative array by allowing multiple values to be associated with a single key. A bidirectional map is a related abstract data type in which the bindings operate in both directions: each value must be associated with a unique key, and a second lookup operation takes a value as argument and looks up the key associated with that value. Suppose that the set of loans made by a library is represented in a data structure. Each book in a library may be checked out only by a single library patron at a time.
However, a single patron may be able to check out multiple books. Therefore, the information about which books are checked out to which patrons may be represented by an associative array, in which the books are the keys and the patrons are the values. Using notation from Python or JSON, the data structure would map titles to borrower names: a lookup operation on the key "Great Expectations" would return "John". If John returns his book, that causes a deletion operation, and if Pat checks out a book, that causes an insertion operation, leading to a different state. For dictionaries with small numbers of bindings, it may make sense to implement the dictionary using an association list, a linked list of bindings. With this implementation, the time to perform the basic dictionary operations is linear in the total number of bindings. Another simple implementation technique, usable when the keys are restricted to a narrow range of integers, is direct addressing into an array: the value for a given key k is stored at the array cell A[k], or if there is no binding for k the cell stores a special sentinel value that indicates the absence of a binding.
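The loan scenario maps directly onto a Python dict. A small sketch follows; note that only the "Great Expectations" → "John" binding comes from the text above, while the other titles and names are made-up placeholders.

```python
# Books are keys, patrons are values; each book has at most one borrower.
loans = {
    "Pride and Prejudice": "Alice",   # placeholder binding
    "Great Expectations": "John",     # binding mentioned in the text
}

assert loans["Great Expectations"] == "John"   # lookup
del loans["Great Expectations"]                # John returns his book (deletion)
loans["Moby-Dick"] = "Pat"                     # Pat checks out a book (insertion)

assert "Great Expectations" not in loans
assert loans["Moby-Dick"] == "Pat"
```

Because each key appears at most once, re-assigning `loans["Moby-Dick"]` would replace the binding rather than add a second one.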
As well as being simple, this technique is fast: each dictionary operation takes constant time. However, the space requirement for this structure is the size of the entire keyspace, making it impractical unless the keyspace is small. The two major approaches to implementing dictionaries are a hash table and a search tree. The most frequently used general-purpose implementation of an associative array is the hash table: an array combined with a hash function that maps each key into a "bucket" of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation. Therefore, the average overhead of an operation for a hash table is only the computation of the key's hash
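The bucket idea can be sketched with a toy separate-chaining hash table. This is an illustrative sketch, not a production structure (no resizing), and the class and method names are my own.

```python
class HashTable:
    """Toy separate-chaining hash table: the hash function selects a bucket;
    each bucket is a short list of (key, value) bindings."""

    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        # constant-time array access via the key's hash, as described above
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # reassign an existing binding
                return
        bucket.append((key, value))        # insert a new binding

    def lookup(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

table = HashTable()
table.set("apple", 1)
table.set("apple", 2)            # set acts as insert-or-reassign
assert table.lookup("apple") == 2
```

With a good hash function and enough buckets, each bucket stays short, so operations average constant time; a real implementation would grow the bucket array as the load factor rises.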
The IEEE Standard for Floating-Point Arithmetic is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers. The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard. The standard defines:

arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers and special "not a number" values (NaNs)
interchange formats: encodings that may be used to exchange floating-point data in an efficient and compact form
rounding rules: properties to be satisfied when rounding numbers during arithmetic and conversions
operations: arithmetic and other operations on arithmetic formats
exception handling: indications of exceptional conditions

The current version, the IEEE 754-2008 revision published in August 2008, includes nearly all of the original IEEE 754-1985 standard plus the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic.
The current version, IEEE 754-2008, published in August 2008, is derived from and replaces IEEE 754-1985, the previous version, following a seven-year revision process chaired by Dan Zuras and edited by Mike Cowlishaw. The international standard ISO/IEC/IEEE 60559:2011 has been approved for adoption through JTC1/SC 25 under the ISO/IEEE PSDO Agreement and published. The binary formats in the original standard are included in the new standard along with three new basic formats, one binary and two decimal. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format. As of September 2015, the standard was being revised to incorporate errata. An IEEE 754 format is a "set of representations of numerical values and symbols". A format may also include how the set is encoded. A floating-point format is specified by a base b (either 2 or 10 in IEEE 754), a precision p, and an exponent range from emin to emax. A format comprises: Finite numbers, which can be described by three integers: s = a sign, c = a significand having no more than p digits when written in base b, and q = an exponent such that emin ≤ q + p − 1 ≤ emax.
The numerical value of such a finite number is (−1)^s × c × b^q. Moreover, there are two zero values, called signed zeros: the sign bit specifies whether a zero is +0 or −0. Two infinities: +∞ and −∞. Two kinds of NaN: a quiet NaN and a signaling NaN. For example, if b = 10, p = 7, emax = 96, and emin = −95, then the significand satisfies 0 ≤ c ≤ 9,999,999 and the exponent satisfies −101 ≤ q ≤ 90. The smallest non-zero positive number that can be represented is 1×10^−101, and the largest is 9999999×10^90, so the full range of numbers is −9.999999×10^96 through 9.999999×10^96. The numbers ±b^(1−emax), here ±1×10^−95, are the smallest normal numbers. Some numbers may have several possible exponential-format representations. For instance, if b = 10 and p = 7, then −12.345 can be represented by −12345×10^−3, −123450×10^−4, and −1234500×10^−5. However, for most operations, such as arithmetic operations, the result does not depend on the representation of the inputs. For the decimal formats, any representation is valid, and the set of these representations is called a cohort.
When a result can have several representations, the standard specifies which member of the cohort is chosen. For the binary formats, the representation is made unique by choosing the smallest representable exponent allowing the value to be represented exactly. Further, the exponent is not represented directly, but a bias is added so that the smallest representable exponent is represented as 1, with 0 used for subnormal numbers. For numbers with an exponent in the normal range, the leading bit of the significand will always be 1. Consequently, a leading 1 can be implied rather than explicitly present in the memory encoding, and under the standard the explicitly represented part of the significand will lie between 0 and 1. This rule is called the leading bit convention, the implicit bit convention, or the hidden bit convention. It allows the binary format to have an extra bit of precision. The leading bit convention cannot be used for the subnormal numbers, as they have an exponent outside the normal exponent range and scale by the smallest represented exponent, as used for the smallest normal numbers.
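The bias and hidden-bit conventions can be observed directly by unpacking a binary64 value into its fields. A short sketch (the function name is my own; the field layout and bias of 1023 are those of the binary64 interchange format):

```python
import struct

def decompose_binary64(x):
    # reinterpret the 8 bytes of a double as a 64-bit unsigned integer
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63                      # 1 sign bit
    biased_exp = (bits >> 52) & 0x7FF      # 11 exponent bits, bias 1023
    frac = bits & ((1 << 52) - 1)          # 52 stored significand bits;
    return sign, biased_exp, frac          # the leading 1 is implicit

assert decompose_binary64(1.0) == (0, 1023, 0)        # exponent 0 stored as 1023
assert decompose_binary64(-2.0) == (1, 1024, 0)
assert decompose_binary64(1.5) == (0, 1023, 1 << 51)  # significand 1.1 in binary
```

A biased-exponent field of 0 marks zeros and subnormals, and a field of 0x7FF marks infinities and NaNs, matching the special cases described above.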
Due to the possibility of multiple encodings, a NaN may carry other information: a sign bit and a payload, intended for diagnostic information indicating the source of the NaN. The standard defines five basic formats that are named for their numeric base and the number of bits used in their interchange encoding. There are three binary floating-point basic formats and two decimal floating-point basic formats. The binary32 and binary64 formats are the single and double formats of IEEE 754-1985, respectively. A conforming implementation must implement at least one of the basic formats. The
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code—a form consisting of instructions that the computer can directly execute. Alternatively, a computer program may be executed with the aid of an interpreter. A collection of computer programs and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software and system software. The underlying method used for some calculation or manipulation is known as an algorithm. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards. Patterns could be repeated by arranging the cards.
In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry, in which yarn was brought from the store to be milled. The device would have had a "store"—memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the "store" would have been transferred to the "mill" for processing, with a "thread" being the execution of programmed instructions by the device. It was programmed using two sets of perforated cards—one to direct the operation and the other for the input variables. However, after more than £17,000 of the British government's money had been spent, the thousands of cogged wheels and gears never worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea, which covered the Analytical Engine. The translation contained Note G, which detailed a method for calculating Bernoulli numbers using the Analytical Engine.
This note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine—a theoretical device that can model every computation that can be performed on a Turing-complete computing machine; it is a finite-state machine with a tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. This machine is considered by some to be the origin of the stored-program computer—used by John von Neumann for the "Electronic Computing Instrument" that now bears the von Neumann architecture name. The Z3 computer, invented by Konrad Zuse in Germany, was a programmable digital computer. A digital computer uses electricity as the calculating component. The Z3 contained 2,400 relays to create the circuits, which provided a floating-point, nine-instruction computer. Programming the Z3 was through a specially designed keyboard and punched tape.
The Electronic Numerical Integrator And Computer (ENIAC) was a Turing-complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied 1,800 square feet, and consumed $650 per hour in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables needed to be rolled to fixed function panels, to which they were connected using heavy black cables. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. The programmers of the ENIAC were women who were known collectively as the "ENIAC girls". The ENIAC featured parallel operations: different sets of accumulators could work on different algorithms. It used punched-card machines for input and output, and it was controlled with a clock signal. It ran for eight years, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.
The Manchester Baby was a stored-program computer. Programming transitioned away from setting dials. Only three bits of memory were available to store each instruction, so it was limited to eight instructions. 32 switches were available for programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed; this process was then repeated. Computer programs were also manually input via paper tape or punched cards. After the medium was loaded, the starting address was set via switches and the execute button was pressed. In 1961, the Burroughs B5000 was built to be programmed in the ALGOL 60 language; the hardware featured circuits to ease the compile phase. In 1964, the IBM System/360 was a line of six computers, each having the same instruction set architecture. The Model 30 was the least expensive. Customers could retain the same application software across models. Each System/360 model featured multiprogramming.
With operating system support, multiple programs could be in memory at once. When one was waiting for input/output, another could compute. Each model could also emulate other computers. Customers could upgrade to the System/360 and ret