1.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation. Human–computer interaction considers the challenges of making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; indeed, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953. The first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
2.
Numerical digit
–
A digit is a numeric symbol used in combinations to represent numbers in positional numeral systems. The name digit comes from the fact that the ten digits of the hands correspond to the ten symbols of the common base-10 numeral system, i.e. the decimal digits. In a given numeral system, if the base is an integer, the number of digits required is equal to the base: the decimal system has ten digits, whereas binary has two. In a basic positional system, a numeral is a sequence of digits. Each position in the sequence has a place value, and each digit has a value. The value of the numeral is computed by multiplying each digit in the sequence by its place value and summing the results; each digit in a number system represents an integer. For example, in decimal the digit 1 represents the integer one, and in the hexadecimal system the letter A represents the integer ten. A positional number system must have a digit representing each of the integers from zero up to, but not including, the base; thus in the positional decimal system, the numbers 0 to 9 can be expressed using their respective numerals in the rightmost units position. The Hindu–Arabic numeral system uses a decimal separator, commonly a period in English or a comma in other European languages, to denote the ones or units place. Each successive place to the left of this has a place value equal to the place value of the previous digit times the base. Similarly, each place to the right of the separator has a place value equal to the place value of the previous digit divided by the base. For example, in the numeral 10.34, the total value of the number is 1 ten, 0 ones, 3 tenths, and 4 hundredths; note the zero, which contributes no value to the number but acts as a placeholder. The place value of any given digit in a numeral can be given by a simple calculation, which in itself is a complement to the logic behind numeral systems: a digit n places to the left of the units position is multiplied by the base raised to the power n, and a digit n places to the right of the separator is multiplied by the base raised to the power −n.
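The place-value rule described above can be sketched in a few lines of Python. This is a minimal illustration, not a production numeral parser; the function names and argument layout are chosen just for this example.

```python
def numeral_value(digits, base=10, frac_digits=()):
    """Value of a positional numeral.

    `digits` are the digits left of the separator (most significant first);
    `frac_digits` are the digits right of the separator.
    """
    value = 0
    for d in digits:            # each step shifts the running value left one place
        value = value * base + d
    place = 1 / base            # first place to the right of the separator
    for d in frac_digits:
        value += d * place
        place /= base           # each further place is divided by the base again
    return value

# The example from the text: the numeral 10.34 in base 10
print(numeral_value([1, 0], 10, [3, 4]))   # ≈ 10.34
# A binary example: 101 in base 2 is five
print(numeral_value([1, 0, 1], base=2))    # 5
```

The same function handles any integer base, which illustrates the text's point that positional systems reuse the same digits at every place.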
This system was established by the 7th century in India, but was not yet in its modern form because the use of the digit zero had not yet been widely accepted. Instead of a zero, a dot was sometimes left in the numeral as a placeholder. The first widely acknowledged use of zero was in 876. The original numerals were very similar to the modern ones, even down to the glyphs used to represent digits.
3.
Computer memory
–
In computing, memory refers to the computer hardware devices used to store information for immediate use in a computer; it is synonymous with the term primary storage. Computer memory operates at a high speed, for example random-access memory, as a distinction from storage that provides slow-to-access program and data storage. If needed, contents of the memory can be transferred to secondary storage. An archaic synonym for memory is store. There are two main kinds of semiconductor memory, volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM and EEPROM memory. Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit; flash memory organization includes both one bit per cell and multiple bits per cell. The memory cells are grouped into words of fixed word length; each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers normally are not considered as memory, since they only store one word. Typical secondary storage devices are hard disk drives and solid-state drives. In the early 1940s, memory technology often permitted a capacity of only a few bytes. The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s. Delay line memory would be limited to a capacity of up to a few hundred thousand bits to remain efficient. Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which proved more capacious than the Selectron tube and less expensive, but also frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A.
Rajchman and An Wang developed magnetic core memory, which allowed for recall of memory after power loss. Magnetic core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s. Developments in technology and economies of scale have made possible so-called Very Large Memory computers. The term memory, when used with reference to computers, generally refers to Random Access Memory or RAM. Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM or dynamic RAM. SRAM retains its contents as long as the power is connected and is easy to interface, but uses six transistors per bit. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but is commonplace in small embedded systems, which might only need tens of kilobytes or less. Forthcoming volatile memory technologies that aim at replacing or competing with SRAM and DRAM include Z-RAM and A-RAM. Non-volatile memory is computer memory that can retain the stored information even when not powered.
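The relation stated above between an N-bit address and a capacity of 2^N words can be sketched as a toy model. This is an illustration of the addressing arithmetic only, not of any real memory device; the class and method names are invented for the example.

```python
# A minimal sketch of word-addressable memory: an N-bit address
# selects one of 2**N fixed-length words.
class WordMemory:
    def __init__(self, address_bits, word_length=8):
        self.capacity = 2 ** address_bits      # number of addressable words
        self.word_length = word_length
        self.words = [0] * self.capacity

    def write(self, address, value):
        # keep only `word_length` bits, as a fixed-width memory word would
        self.words[address] = value & ((1 << self.word_length) - 1)

    def read(self, address):
        return self.words[address]

mem = WordMemory(address_bits=10)    # 10 address bits -> 1024 words
mem.write(5, 0x41)
print(mem.capacity, mem.read(5))     # 1024 65
```

Doubling the address width squares the number of addressable words, which is why address-bus width is such a strong constraint on memory capacity.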
4.
Arithmetic logic unit
–
An arithmetic logic unit (ALU) is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating-point numbers. An ALU is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs); a single CPU, FPU or GPU may contain multiple ALUs. In many designs, the ALU also exchanges additional information with a status register, which relates to the result of the current or previous operations. An ALU has a variety of input and output nets, which are the electrical conductors used to convey digital signals between the ALU and external circuitry. When an ALU is operating, external circuits apply signals to the ALU inputs and, in response, the ALU produces and conveys signals to external circuitry via its outputs. A basic ALU has three parallel data buses consisting of two input operands (A and B) and a result output (Y). Each data bus is a group of signals that conveys one binary integer number; typically, the A, B and Y bus widths are identical and match the native word size of the external circuitry. The opcode size is related to the number of different operations the ALU can perform; for example, a four-bit opcode can specify up to sixteen different ALU operations. Generally, an ALU opcode is not the same as a machine language opcode. The status outputs are various individual signals that convey supplemental information about the result of an ALU operation. These outputs are usually stored in registers so they can be used in future ALU operations or for controlling conditional branching. The collection of bit registers that store the status outputs is often treated as a single, multi-bit register. Typical status outputs include: Zero, which indicates all bits of the output are logic zero. Negative, which indicates the result of an operation is negative. Overflow, which indicates the result of an operation has exceeded the numeric range of the output.
Parity, which indicates whether an even or odd number of bits in the output are logic one. The status input allows additional information to be made available to the ALU when performing an operation; typically, this is a single bit that is the stored carry-out from a previous ALU operation. An ALU is a combinational logic circuit, meaning that its outputs will change asynchronously in response to input changes. In general, external circuitry controls an ALU by applying signals to its inputs; at the same time, the CPU also routes the ALU result output to a destination register that will receive the result. The ALU's input signals, which are held stable until the next clock, are allowed to propagate through the ALU. When the next clock arrives, the destination register stores the ALU result and, since the ALU operation has completed, the inputs can be set up for the next operation. A number of basic arithmetic and bitwise logic functions are commonly supported by ALUs.
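The status outputs described above (zero, negative, carry, overflow) can be modeled with a toy 8-bit ALU. This is a behavioral sketch only; the opcode names and flag layout are illustrative and not taken from any particular instruction set.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def alu(opcode, a, b, carry_in=0):
    """Return (result, flags) for an 8-bit operation on unsigned inputs."""
    if opcode == "ADD":
        operand = b
        raw = a + operand + carry_in
    elif opcode == "SUB":
        operand = (~b) & MASK        # two's-complement negation, step 1
        raw = a + operand + 1        # step 2: add one (a - b == a + ~b + 1)
    elif opcode == "AND":
        operand, raw = b, a & b
    elif opcode == "XOR":
        operand, raw = b, a ^ b
    else:
        raise ValueError(f"unknown opcode {opcode!r}")
    result = raw & MASK
    sign = WIDTH - 1
    flags = {
        "zero": result == 0,
        "negative": (result >> sign) & 1 == 1,   # sign bit of the result
        "carry": raw > MASK,                     # carry out of the top bit
        # signed overflow: the adder's two inputs share a sign bit
        # that differs from the result's sign bit
        "overflow": opcode in ("ADD", "SUB")
                    and (~(a ^ operand) & (a ^ result)) >> sign & 1 == 1,
    }
    return result, flags

result, flags = alu("ADD", 0x7F, 0x01)   # 127 + 1 overflows signed 8-bit
print(hex(result), flags["overflow"], flags["negative"])   # 0x80 True True
```

The example shows why the flags carry supplemental information: the 8-bit result 0x80 is a valid unsigned value, but the overflow flag reveals that, interpreted as signed, 127 + 1 wrapped to −128.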
5.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification such as the declarative form. The description of a language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference. Some languages have both, with the language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term programming language to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Abstractions: Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. Expressive power: The theory of computation classifies languages by the computations they are capable of expressing; all Turing complete languages can implement the same set of algorithms.
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset. The term computer language is sometimes used interchangeably with programming language.
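The imperative/declarative distinction mentioned above can be made concrete with a small side-by-side sketch: Python for the imperative form, and an SQL string standing in for the declarative form. The table and column names in the SQL are made up purely for contrast.

```python
# Imperative form: state, step by step, HOW to compute the result.
def total_over_100(amounts):
    total = 0
    for a in amounts:           # explicit loop and mutable accumulator
        if a > 100:
            total += a
    return total

print(total_over_100([50, 150, 200]))   # 350

# Declarative form: state WHAT result is wanted; the query engine
# decides how to compute it. (Hypothetical table/column names.)
QUERY = "SELECT SUM(amount) FROM payments WHERE amount > 100"
```

The imperative version prescribes an evaluation order; the declarative version leaves the execution strategy to the implementation, which is exactly the freedom the text attributes to declarative specification.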
6.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5 1⁄2, and √2 are not. The set of integers consists of zero, the positive natural numbers, also called whole numbers or counting numbers, and their additive inverses, the negative integers. This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen. ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the (rational) integers are the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication; that is, the sum and product of any two integers are integers. Moreover, with the inclusion of the negative natural numbers, and, importantly, 0, Z is also closed under subtraction. The integers form a ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers (e.g. 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not, since the result can be a fraction when the exponent is negative. The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g. there is no integer x such that 2x = 1, because the left hand side is even.
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions that are true in any unital commutative ring are true in Z for all values of variables. Note that certain non-zero integers map to zero in certain rings. The lack of zero-divisors in the integers means that the commutative ring Z is an integral domain.
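The closure properties described above can be checked mechanically for a small sample of integers. This is only a spot-check in Python (whose `int` type models ℤ with arbitrary precision), not a proof; the observation that `1 / 2` produces a float is a Python-specific way of seeing that division leaves the integers.

```python
from itertools import product

# Sums, products and differences of integers are integers ...
sample = [-3, -1, 0, 2, 5]
for a, b in product(sample, repeat=2):
    assert isinstance(a + b, int)
    assert isinstance(a * b, int)
    assert isinstance(a - b, int)

# ... but quotients need not be: 1 divided by 2 is not an integer.
print(type(1 / 2).__name__)   # float

# And 2x = 1 has no integer solution, since 2x is always even.
print(any(2 * x == 1 for x in range(-1000, 1001)))   # False
```

The last line is the text's example that not every integer has a multiplicative inverse, which is why Z under multiplication is a monoid rather than a group.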
7.
Floating-point arithmetic
–
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base; for example, 1.2345 = 12345 × 10^−4, with significand 12345, base 10, and exponent −4. The term floating point refers to the fact that a number's radix point can "float": that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated as the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A consequence of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s, the most commonly encountered representation is that defined by the IEEE 754 Standard. A floating-point unit (FPU) is a part of a computer system designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits; there are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character there. If the radix point is not specified, then the string implicitly represents an integer. In fixed-point systems, a position in the string is specified for the radix point; so a fixed-point scheme might be to use a string of 8 decimal digits with the decimal point in the middle. In scaled systems, the scaling factor, as a power of ten, is indicated separately at the end of the number. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: A signed digit string of a given length in a given base.
This digit string is referred to as the significand, mantissa, or coefficient; the length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit; this article generally follows the convention that the radix point is set just after the most significant digit. A signed integer exponent, which modifies the magnitude of the number. Using base 10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers. Symbolically, this value is (s ÷ b^(p−1)) × b^e, where s is the significand, p is the precision (the number of digits in the significand), b is the base, and e is the exponent.
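The decoding rule quoted above, value = (s ÷ b^(p−1)) × b^e with the radix point placed just after the most significant digit, can be sketched directly. This is an illustration of the formula, not of any real hardware encoding such as IEEE 754; the function name is invented for the example.

```python
def decode(significand, exponent, base=10):
    """Value of a floating-point pair under the convention that the
    radix point sits just after the most significant digit."""
    p = len(str(abs(significand)))          # precision: digits in the significand
    return significand / base ** (p - 1) * base ** exponent

# The example from the text: significand 1528535047, exponent 5
print(decode(1528535047, 5))    # ≈ 152853.5047

# With exponent 0, the value is just the significand with the point
# after its first digit:
print(decode(12345, 0))         # 1.2345
```

Because the base is fixed by the format, only the significand and exponent need to be stored, which is exactly the economy the text describes.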
8.
Array data structure
–
In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements, each identified by at least one array index or key. An array is stored so that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of 10 32-bit integer variables, with indices 0 through 9, may be stored as 10 words at memory addresses 2000, 2004, 2008, …, 2036, so that the element with index i has the address 2000 + 4 × i. The memory address of the first element of an array is called the first address or foundation address. Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called matrices. In some cases the term vector is used in computing to refer to an array. Arrays are often used to implement tables, especially lookup tables; the word table is sometimes used as a synonym of array. Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers: in most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations. Arrays are useful mostly because the element indices can be computed at run time; among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size, and the set of valid index tuples and the addresses of the elements are usually fixed while the array is in use. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures.
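The addressing formula in the example above can be written out directly. The base address 2000 and the 4-byte element size come from the text; the bounds assertion mirrors the index-bounds checking mentioned later for machines like the Burroughs B5000.

```python
# Address computation for a linear array of 10 four-byte integers
# placed at base address 2000, as in the example above.
BASE, ELEMENT_SIZE = 2000, 4

def element_address(i):
    assert 0 <= i <= 9, "index out of bounds"
    return BASE + ELEMENT_SIZE * i

print([element_address(i) for i in (0, 1, 9)])   # [2000, 2004, 2036]
```

Because the address is a constant-time arithmetic expression in the index, element access costs the same wherever the element sits, which is the property that makes arrays so fundamental.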
The first digital computers used machine-language programming to set up and access array structures for data tables and vector and matrix computations. John von Neumann wrote the first array-sorting program (merge sort) in 1945, during the building of the first stored-program computer. Array indexing was originally done by self-modifying code, and later using index registers. Some mainframes designed in the 1960s, such as the Burroughs B5000 and its successors, used memory segmentation to perform index-bounds checking in hardware. Assembly languages generally have no support for arrays, other than what the machine itself provides. The earliest high-level programming languages, including FORTRAN, Lisp, COBOL, and ALGOL 60, had support for multi-dimensional arrays. In C++, class templates exist for multi-dimensional arrays whose dimension is fixed at runtime as well as for runtime-flexible arrays. Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables; many databases, small and large, consist of one-dimensional arrays whose elements are records. Arrays are used to implement other data structures, such as lists, heaps, hash tables, deques, queues, stacks, and strings. One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly memory pool allocation; historically, this has sometimes been the only way to allocate dynamic memory portably.
9.
Arithmetic
–
Arithmetic is a branch of mathematics that consists of the study of numbers, especially the properties of the traditional operations between them: addition, subtraction, multiplication and division. Arithmetic is an elementary part of number theory, and number theory is considered to be one of the top-level divisions of modern mathematics, along with algebra, geometry, and analysis. The terms arithmetic and higher arithmetic were used until the beginning of the 20th century as synonyms for number theory and are still sometimes used to refer to a wider part of number theory. The earliest written records indicate the Egyptians and Babylonians used all the elementary arithmetic operations as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting; in both cases, this origin resulted in values that used a decimal base but did not include positional notation. Complex calculations with Roman numerals required the assistance of a counting board or the Roman abacus to obtain the results. Early number systems that included positional notation were not decimal, including the sexagesimal (base 60) system for Babylonian numerals. Because of this positional concept, the ability to reuse the same digits for different values contributed to simpler and more efficient methods of calculation. The continuous historical development of modern arithmetic starts with the Hellenistic civilization of ancient Greece; prior to the works of Euclid around 300 BC, Greek studies in mathematics overlapped with philosophical and mystical beliefs. For example, Nicomachus summarized the viewpoint of the earlier Pythagorean approach to numbers. Greek numerals were used by Archimedes, Diophantus and others in a positional notation not very different from ours.
Because the ancient Greeks lacked a symbol for zero, they used three separate sets of symbols: one set for the units place, one for the tens place, and one for the hundreds. Then for the thousands place they would reuse the symbols for the units place, and so on. Their addition algorithm was identical to ours, and their multiplication algorithm was only very slightly different. Their long division algorithm was the same, and the digit-by-digit square root algorithm taught in school was known to Archimedes. He preferred it to Hero's method of successive approximation because, once computed, a digit doesn't change, and the square roots of perfect squares, such as 7485696, terminate immediately as 2736. For numbers with a fractional part, such as 546.934, they used negative powers of 60 rather than negative powers of 10 for the fractional part. The ancient Chinese used a positional notation. Because they also lacked a symbol for zero, they had one set of symbols for the units place and a second set for the tens place.
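Hero's method of successive approximation, mentioned above, can be sketched in a few lines: repeatedly replace a guess x by the average of x and n/x. This is a modern floating-point rendering of the idea, not a reconstruction of the ancient procedure.

```python
def hero_sqrt(n, iterations=25):
    """Approximate the square root of n > 0 by Hero's iteration."""
    assert n > 0
    x = n / 2.0                  # any positive starting guess will do
    for _ in range(iterations):
        x = (x + n / x) / 2      # successive approximation step
    return x

# The perfect-square example from the text:
print(hero_sqrt(7485696))   # ≈ 2736
print(hero_sqrt(2))         # ≈ 1.4142135623730951
```

The iteration converges quadratically once the guess is close, roughly doubling the number of correct digits per step, which is why a couple of dozen iterations suffice even from a crude starting guess.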
10.
Public-key cryptography
–
In a public key encryption system, any person can encrypt a message using the public key of the receiver, but such a message can be decrypted only with the receiver's private key. For this to work it must be computationally easy for a user to generate a public and private key pair to be used for encryption and decryption. The strength of a public key cryptography system relies on the degree of difficulty for a properly generated private key to be determined from its corresponding public key. Security then depends only on keeping the private key private. Public key algorithms, unlike symmetric key algorithms, do not require a secure channel for the initial exchange of one or more secret keys between the parties. Because of the computational complexity of asymmetric encryption, it is usually used only for small blocks of data, typically the transfer of a symmetric encryption key. This symmetric key is then used to encrypt the rest of the potentially long message sequence; the symmetric encryption/decryption is based on simpler algorithms and is much faster. In a public key signature system, a person can combine a message with a private key to create a short digital signature on the message. Thus the authenticity of a message can be demonstrated by the signature. Public key algorithms are fundamental security ingredients in cryptosystems, applications and protocols. They underpin various Internet standards, such as Transport Layer Security, S/MIME, and PGP. Some public key algorithms provide key distribution and secrecy, some provide digital signatures, and some provide both. Public key cryptography finds application in, among others, the information technology security discipline, information security; information security is concerned with all aspects of protecting electronic information assets against security threats. Public key cryptography is used as a method of assuring the confidentiality, authenticity and non-repudiability of electronic communications. Two of the best-known uses of public key cryptography are: Public key encryption, in which a message is encrypted with a recipient's public key.
The message cannot be decrypted by anyone who does not possess the matching private key, who is thus presumed to be the owner of that key; this is used in an attempt to ensure confidentiality. Digital signatures, in which a message is signed with the sender's private key and can be verified by anyone who has access to the sender's public key; this verification proves that the sender had access to the private key. An analogy to public key encryption is that of a locked mail box with a mail slot. The mail slot is exposed and accessible to the public; its location, in essence the street address, is the public key. Anyone knowing the street address can go to the door and drop a written message through the slot; however, only the person who possesses the key can open the mailbox. An analogy for digital signatures is the sealing of an envelope with a personal wax seal.
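The encrypt-with-public / decrypt-with-private idea described above can be illustrated with a toy RSA-style example. The numbers here are tiny and the scheme is deliberately insecure, for illustration only; real systems use vetted cryptographic libraries, padding schemes, and large keys.

```python
# Toy RSA key generation with two small primes (kept secret).
p, q = 61, 53
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)     # anyone can encrypt with the public (e, n)
recovered = pow(ciphertext, d, n)   # only the holder of d can decrypt
print(recovered)                    # 42
```

The asymmetry is visible in the code: encryption uses only the public pair (e, n), while decryption needs d, which is easy to derive from p and q but hard to derive from n alone when the primes are large.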
11.
Gaussian integration
–
The domain of integration for such a rule is conventionally taken as [−1, 1], so the rule is stated as ∫_{−1}^{1} f(x) dx ≈ Σ_{i=1}^{n} w_i f(x_i). Gaussian quadrature as above will produce good results if the function f is well approximated by a polynomial function within the range [−1, 1]. The method is not, for example, suitable for functions with singularities. Common weighting functions include ω(x) = 1/√(1 − x²) and ω(x) = e^{−x²}. It can be shown that the evaluation points x_i are just the roots of a polynomial belonging to a class of orthogonal polynomials. For the simplest integration problem stated above, i.e. with ω(x) = 1, the associated polynomials are Legendre polynomials, P_n(x). With the n-th polynomial normalized to give P_n(1) = 1, the i-th Gauss node, x_i, is the i-th root of P_n. Some low-order rules for solving the integration problem are listed below. An integral over [a, b] must be changed into an integral over [−1, 1] before applying the Gaussian quadrature rule. This change of interval can be done in the following way: ∫_a^b f(x) dx = ((b − a)/2) ∫_{−1}^{1} f(((b − a)/2) ξ + (a + b)/2) dξ. Applying the Gaussian quadrature rule then results in the following approximation: ∫_a^b f(x) dx ≈ ((b − a)/2) Σ_{i=1}^{n} w_i f(((b − a)/2) ξ_i + (a + b)/2). The integration problem can be expressed in a slightly more general way by introducing a positive weight function ω into the integrand, and allowing an interval other than [−1, 1]. That is, the problem is to calculate ∫_a^b ω(x) f(x) dx for some choices of a, b, and ω. For a = −1, b = 1, and ω(x) = 1, the problem is the same as that considered above. Other choices lead to other integration rules, some of which are tabulated below. Equation numbers are given for Abramowitz and Stegun. Let p_n be a nontrivial polynomial of degree n such that ∫_a^b ω(x) x^k p_n(x) dx = 0, for all k = 0, 1, …, n − 1. If we pick the n nodes x_i to be the zeros of p_n, then there exist n weights w_i which make the computed integral exact for all polynomials h(x) of degree 2n − 1 or less; furthermore, all these nodes x_i will lie in the open interval (a, b). The polynomial p_n is said to be an orthogonal polynomial of degree n associated to the weight function ω(x); it is unique up to a constant normalization factor. To see the exactness, divide h by p_n to write h(x) = p_n(x) q(x) + r(x), where q has degree at most n − 1 and r has degree lower than n; by the orthogonality condition above, ∫_a^b ω(x) p_n(x) q(x) dx = 0, thus ∫_a^b ω(x) h(x) dx = ∫_a^b ω(x) r(x) dx.
Because of the choice of nodes x_i (the zeros of p_n), h(x_i) = r(x_i), and so the corresponding relation Σ_{i=1}^{n} w_i h(x_i) = Σ_{i=1}^{n} w_i r(x_i) holds also. The exactness of the computed integral for h then follows from the corresponding exactness for polynomials of degree only n or less, such as r. For the weights, we can write ∏_{1 ≤ j ≤ n, j ≠ i} (x − x_j) = (∏_{1 ≤ j ≤ n} (x − x_j)) / (x − x_i) = p_n(x) / (a_n (x − x_i)), where a_n is the coefficient of x^n in p_n(x).
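The rule and the change-of-interval formula above can be checked numerically with the standard 3-point Gauss–Legendre rule, whose nodes (0 and ±√(3/5), the roots of P_3) and weights (8/9 and 5/9) are well known. A minimal sketch:

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1].
NODES = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]
WEIGHTS = [5 / 9, 8 / 9, 5 / 9]

def gauss3(f, a=-1.0, b=1.0):
    # Change of interval: map [-1, 1] onto [a, b] as in the formula above.
    half, mid = (b - a) / 2, (a + b) / 2
    return half * sum(w * f(half * x + mid) for w, x in zip(WEIGHTS, NODES))

# An n-point rule is exact for polynomials of degree 2n - 1 = 5:
print(gauss3(lambda x: x**4))              # ≈ 0.4, the exact integral of x^4
# A non-polynomial integrand is still approximated very well:
print(gauss3(math.cos, 0.0, math.pi / 2))  # ≈ 1.0
```

With only three function evaluations the rule reproduces the degree-4 integral exactly (up to rounding), illustrating the 2n − 1 exactness result derived in the text.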
12.
Mathematical constant
–
A mathematical constant is a special number, usually a real number, that is significantly interesting in some way. Constants arise in many areas of mathematics, with constants such as e and π occurring in such diverse contexts as geometry, number theory, and calculus. The more popular constants have been studied throughout the ages and computed to many decimal places. All mathematical constants are definable numbers and usually are also computable numbers. Some constants, such as π, one is likely to encounter during pre-college education in many countries; the ubiquity of π, however, is not limited to pure mathematics. It appears in many formulas in physics, and several physical constants are most naturally defined with π or its reciprocal factored out. It is debatable, however, if such appearances are fundamental in any sense. For example, the textbook nonrelativistic ground state wave function of the hydrogen atom is ψ(r) = (1/√π) a₀^{−3/2} e^{−r/a₀}, where a₀ is the Bohr radius. This formula contains a π, but it is unclear if that is fundamental in a physical sense; furthermore, this formula gives only an approximate description of physical reality, as it omits spin, relativity, and the quantal nature of the electromagnetic field itself. The numeric value of π is approximately 3.1415926535; memorizing increasingly precise digits of π is a world record pursuit. The constant e also has applications to probability theory, where it arises in a way not obviously related to exponential growth. Suppose a slot machine with a one in n probability of winning is played n times; then, for large n, the probability that nothing will be won is approximately 1/e. Another application of e, discovered in part by Jacob Bernoulli along with French mathematician Pierre Raymond de Montmort, is in the problem of derangements, also known as the hat check problem.
Here n guests are invited to a party, and at the door each guest checks his hat with the butler, who then places them into labelled boxes. The butler does not know the names of the guests, and so must put the hats into boxes selected at random. The problem of de Montmort is: what is the probability that none of the hats gets put into the right box? The answer is p_n = 1 − 1/1! + 1/2! − 1/3! + ⋯ + (−1)^n/n!, and as n tends to infinity, p_n approaches 1/e. The numeric value of e is approximately 2.7182818284. The square root of 2, often known as root 2, radical 2, or Pythagoras's constant, and written as √2, is the positive algebraic number that, when multiplied by itself, gives the number 2. It is more precisely called the principal square root of 2, to distinguish it from the negative number with the same property. Geometrically the square root of 2 is the length of a diagonal across a square with sides of one unit of length. It was probably the first number known to be irrational, and its numerical value truncated to 65 decimal places is 1.41421356237309504880168872420969807856967187537694807317667973799. The quick approximation 99/70 for the square root of two is frequently used
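The derangement probability above converges to 1/e quickly. A minimal Python sketch (an illustration, not from the article) makes the limit visible:

```python
import math

def derangement_probability(n):
    # p_n = 1 - 1/1! + 1/2! - ... + (-1)^n / n!
    return sum((-1) ** k / math.factorial(k) for k in range(n + 1))

print(derangement_probability(5))   # already close to 1/e ≈ 0.3679
print(derangement_probability(20))  # agrees with 1/e to machine precision
print(1 / math.e)
```

Even n = 5 is within about 0.002 of the limit, since the omitted tail of the alternating series is bounded by its first neglected term, 1/6!.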
13.
Pi
–
The number π is a mathematical constant, the ratio of a circle's circumference to its diameter, commonly approximated as 3.14159. It has been represented by the Greek letter π since the mid-18th century. Being an irrational number, π cannot be expressed exactly as a fraction; still, fractions such as 22/7 and other rational numbers are commonly used to approximate it. The digits appear to be randomly distributed; in particular, the digit sequence of π is conjectured to satisfy a specific kind of statistical randomness, but to date no proof of this has been discovered. Also, π is a transcendental number, i.e. a number that is not the root of any non-zero polynomial having rational coefficients. This transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. Ancient civilizations required fairly accurate computed values for π for practical reasons. It was calculated to seven digits, using geometrical techniques, in Chinese mathematics. The extensive calculations involved have also been used to test supercomputers. Because its definition relates to the circle, π is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses, and spheres. Because of its special role as an eigenvalue, π appears in areas of mathematics having little to do with the geometry of circles. It is also found in cosmology, thermodynamics, and mechanics, and attempts to memorize the value of π with increasing precision have led to records of over 70,000 digits. In English, π is pronounced as pie. In mathematical use, the lowercase letter π is distinguished from its capitalized and enlarged counterpart ∏, which denotes a product of a sequence, analogous to how ∑ denotes summation. The choice of the symbol π is discussed in the section Adoption of the symbol π. π is commonly defined as the ratio of a circle's circumference C to its diameter d: π = C/d. The ratio C/d is constant, regardless of the circle's size. 
For example, if a circle has twice the diameter of another circle it will also have twice the circumference, preserving the ratio C/d. This definition of π implicitly makes use of flat (Euclidean) geometry; although the notion of a circle can be extended to any curved geometry, these new circles will no longer satisfy the formula π = C/d. Here, the circumference of a circle is the arc length around the perimeter of the circle, a quantity which can be defined independently of geometry using limits. An integral of this kind was adopted as the definition of π by Karl Weierstrass. Definitions of π such as these, which rely on a notion of circumference and hence implicitly on concepts of the integral calculus, are no longer common in the literature. One such definition, due to Richard Baltzer and popularized by Edmund Landau, is the following: π is twice the smallest positive number at which the cosine function equals 0. The cosine can be defined independently of geometry as a power series, or as the solution of a differential equation
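The constancy of C/d can be checked numerically. A short Python sketch (an illustration, not from the article) follows Archimedes' classical approach: inscribe a regular polygon in a circle and repeatedly double its number of sides, so the perimeter-to-diameter ratio approaches π:

```python
import math

def archimedes_pi(doublings):
    # Start with a regular hexagon inscribed in a circle of radius 1:
    # side length 1, so perimeter/diameter = 6 * 1 / 2 = 3.
    n, s = 6, 1.0
    for _ in range(doublings):
        # Side length after doubling the number of sides (half-angle formula).
        s = math.sqrt(2 - math.sqrt(4 - s * s))
        n *= 2
    return n * s / 2

print(archimedes_pi(0))   # 3.0, the crude hexagon estimate
print(archimedes_pi(10))  # 3.14159..., using a 6144-sided polygon
```

Each doubling roughly quadruples the accuracy, since the error of an inscribed n-gon shrinks like 1/n².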
14.
Riemann zeta function
–
More general representations of ζ(s) for all s are given below. The Riemann zeta function plays a pivotal role in analytic number theory and has applications in physics and probability theory. As a function of a real variable, Leonhard Euler first introduced and studied it in the first half of the eighteenth century without using complex analysis. The values of the Riemann zeta function at even positive integers were computed by Euler; the first of them, ζ(2), provides a solution to the Basel problem. In 1979 Apéry proved the irrationality of ζ(3). The values at negative integer points, also found by Euler, are rational numbers and play an important role in the theory of modular forms. Many generalizations of the Riemann zeta function are known, such as Dirichlet series and Dirichlet L-functions. The Riemann zeta function ζ(s) is a function of a complex variable s = σ + it. It can also be defined by the integral ζ(s) = (1/Γ(s)) ∫₀^∞ x^(s−1)/(e^x − 1) dx, where Γ(s) is the gamma function. The Riemann zeta function is defined as the analytic continuation of the function defined for σ > 1 by the sum of the preceding series. Leonhard Euler considered the series in 1740 for positive integer values of s. The above series is a prototypical Dirichlet series that converges absolutely to an analytic function for s such that σ > 1. Riemann showed that the function defined by the series on the half-plane of convergence can be continued analytically to all complex values s ≠ 1; for s = 1 the series is the harmonic series which diverges to +∞, and lim_{s→1} (s − 1) ζ(s) = 1. Thus the Riemann zeta function is a meromorphic function on the whole complex s-plane. For any positive even integer 2n, ζ(2n) = (−1)^(n+1) B_{2n} (2π)^(2n) / (2 (2n)!), where B_{2n} is the 2nth Bernoulli number. For odd positive integers, no simple expression is known, although these values are thought to be related to the algebraic K-theory of the integers. For nonpositive integers, one has ζ(−n) = −B_{n+1}/(n + 1) for n ≥ 0. In particular, ζ(−1) = −1/12; similarly to the above, this assigns a finite result to the series 1 + 2 + 3 + 4 + ⋯. 
ζ(1/2) ≈ −1.4603545; this is employed in calculating kinetic boundary layer problems of linear kinetic equations. ζ(1) = 1 + 1/2 + 1/3 + ⋯ = ∞ if we approach from numbers larger than 1: this is the harmonic series, but its Cauchy principal value lim_{ε→0} (ζ(1 + ε) + ζ(1 − ε))/2 exists, and is the Euler–Mascheroni constant γ = 0.5772…. ζ(3/2) ≈ 2.612; this is employed in calculating the critical temperature for a Bose–Einstein condensate in a box with periodic boundary conditions, and for spin wave physics in magnetic systems
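For σ > 1 the defining series can be summed directly. A brief Python sketch (an illustration, not from the article) checks Euler's value ζ(2) = π²/6 and the Bose–Einstein constant ζ(3/2) ≈ 2.612 by partial sums:

```python
import math

def zeta(s, terms=100_000):
    # Partial sum of the Dirichlet series; valid only for Re(s) > 1,
    # and increasingly slow to converge as s approaches 1.
    return sum(n ** -s for n in range(1, terms + 1))

print(zeta(2))              # close to pi^2/6 = 1.6449...
print(math.pi ** 2 / 6)
print(zeta(1.5, 1_000_000))  # slowly approaches 2.612...
```

The truncation error behaves like terms^(1−s)/(s−1), which is why ζ(3/2) needs far more terms than ζ(2) for the same accuracy.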
15.
Fractal
–
A fractal is a mathematical set that exhibits a repeating pattern displayed at every scale. It is also known as expanding symmetry or evolving symmetry. If the replication is exactly the same at every scale, it is called a self-similar pattern; an example of this is the Menger sponge. Fractals can also be nearly the same at different levels; this latter pattern is illustrated in small magnifications of the Mandelbrot set. Fractals also include the idea of a detailed pattern that repeats itself. Fractals are different from other geometric figures because of the way in which they scale. Doubling the edge lengths of a polygon multiplies its area by four, which is two raised to the power of two. Likewise, if the radius of a sphere is doubled, its volume scales by eight, which is two to the power of three. But if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer. This power is called the fractal dimension of the fractal. As mathematical equations, fractals are usually nowhere differentiable. The term fractal was first used by mathematician Benoît Mandelbrot in 1975; Mandelbrot based it on the Latin frāctus, meaning broken or fractured. There is some disagreement amongst authorities about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as beautiful, damn hard, increasingly useful. Fractals are not limited to geometric patterns, but can also describe processes in time. Fractal patterns with various degrees of self-similarity have been rendered or studied in images, structures and sounds, and found in nature, technology and art. Fractals are of particular relevance in the field of chaos theory, since the graphs of most chaotic processes are fractal. The word fractal often has different connotations for laypeople than for mathematicians; the mathematical concept is difficult to define formally even for mathematicians, but key features can be understood with little mathematical background. 
If this is done on fractals, however, no new detail appears; nothing changes, and the same pattern repeats over and over. Self-similarity itself is not necessarily counter-intuitive; the difference for fractals is that the pattern reproduced must be detailed. A regular line, for instance, is conventionally understood to be 1-dimensional; if such a curve is divided into pieces each 1/3 the length of the original, there are always 3 equal pieces. In contrast, consider the Koch snowflake: it is also 1-dimensional for the same reason as the ordinary line, but it has, in addition, a fractal dimension greater than 1 because of how its detail can be measured. This also leads to understanding a third feature, that fractals as mathematical equations are nowhere differentiable; in a concrete sense, this means fractals cannot be measured in traditional ways. The history of fractals traces a path from chiefly theoretical studies to modern applications in computer graphics. According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity. In his writings, Leibniz used the term fractional exponents. Later, in the last part of the 19th century, Felix Klein and Henri Poincaré introduced a category of fractal that has come to be called self-inverse fractals
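The scaling comparison above can be made quantitative. As a hedged illustration (not from the article), the similarity dimension counts how many self-similar pieces N appear when lengths are scaled down by a factor r, giving log N / log r:

```python
import math

def similarity_dimension(pieces, scale_factor):
    # The dimension d satisfies pieces = scale_factor ** d.
    return math.log(pieces) / math.log(scale_factor)

print(similarity_dimension(3, 3))  # ordinary line: 3 pieces at 1/3 scale -> 1.0
print(similarity_dimension(4, 3))  # Koch curve: 4 pieces at 1/3 scale -> 1.2618...
```

The Koch curve replaces each third of a segment with four smaller copies, so its dimension log 4 / log 3 ≈ 1.26 is strictly greater than 1, matching the text's claim.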
16.
Mandelbrot set
–
Its definition and name are due to Adrien Douady, in tribute to the mathematician Benoit Mandelbrot. The set is connected to a Julia set, and related Julia sets produce similarly complex fractal shapes. Mandelbrot set images may be created by sampling the complex numbers and determining, for each sample point c, whether the result of iterating the above function goes to infinity. If c is held constant and the initial value of z₀ is varied instead, one obtains the corresponding Julia set for the point c. Images of the Mandelbrot set exhibit an elaborate and infinitely complicated boundary that reveals progressively ever-finer recursive detail at increasing magnifications; the style of this repeating detail depends on the region of the set being examined. The set's boundary also incorporates smaller versions of the main shape, so the fractal property of self-similarity applies to the entire set. The Mandelbrot set has become popular outside mathematics both for its aesthetic appeal and as an example of a complex structure arising from the application of simple rules. It is one of the best-known examples of mathematical visualization. The Mandelbrot set has its place in complex dynamics, a field first investigated by the French mathematicians Pierre Fatou and Gaston Julia at the beginning of the 20th century. This fractal was first defined and drawn in 1978 by Robert W. Brooks and Peter Matelski as part of a study of Kleinian groups. On 1 March 1980, at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, Benoit Mandelbrot first saw a visualization of the set; Mandelbrot studied the parameter space of quadratic polynomials in an article that appeared in 1980. The mathematicians Heinz-Otto Peitgen and Peter Richter became well known for promoting the set with photographs and books. The cover article of the August 1985 Scientific American introduced a wide audience to the algorithm for computing the Mandelbrot set; the cover featured an image created by Peitgen, et al. The Mandelbrot set became prominent in the mid-1980s as a computer graphics demo, when personal computers became powerful enough to plot and display the set in high resolution. 
The Mandelbrot set is the set of values of c in the complex plane for which the orbit of 0 under iteration of the quadratic map z_{n+1} = z_n² + c remains bounded. That is, a complex number c is part of the Mandelbrot set if, when starting with z₀ = 0 and applying the iteration repeatedly, the absolute value of z_n remains bounded however large n gets. This can also be represented as c ∈ M ⟺ lim sup_{n→∞} |z_{n+1}| ≤ 2. For example, letting c = 1 gives the sequence 0, 1, 2, 5, 26, …; as this sequence is unbounded, 1 is not an element of the Mandelbrot set. On the other hand, c = −1 gives the sequence 0, −1, 0, −1, …, which is bounded, and so −1 belongs to the Mandelbrot set. The Mandelbrot set M is defined by a family of complex quadratic polynomials P_c : ℂ → ℂ given by P_c : z ↦ z² + c. For each c, one considers the behavior of the sequence obtained by iterating P_c starting at the critical point z = 0; the Mandelbrot set is defined as the set of all points c such that the above sequence does not escape to infinity
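The escape criterion |z| ≤ 2 above yields a direct membership test. A minimal Python sketch (an illustration, not from the article; the iteration cap of 100 is an arbitrary practical choice, since true membership requires unbounded iteration):

```python
def in_mandelbrot(c, max_iter=100):
    # Iterate z -> z^2 + c from z = 0; once |z| exceeds 2 the orbit
    # is guaranteed to escape to infinity, so c lies outside the set.
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(1))    # False: orbit 0, 1, 2, 5, 26, ... is unbounded
print(in_mandelbrot(-1))   # True: orbit alternates 0, -1, 0, -1, ...
```

Sampling c over a grid of complex numbers and coloring each point by this test (or by how quickly it escapes) is exactly how the familiar Mandelbrot images are produced.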
17.
Odometer
–
An odometer or odograph is an instrument that indicates distance travelled by a vehicle, such as a bicycle or automobile. The device may be electronic, mechanical, or a combination of the two. The noun derives from the Greek words hodós (path) and métron (measure). Possibly the first evidence for the use of an odometer can be found in the works of the ancient Roman Pliny and the ancient Greek Strabo; both authors list the distances of routes traveled by Alexander the Great as measured by his bematists Diognetus and Baeton. However, the accuracy of the bematists's measurements rather indicates the use of a mechanical device. From the nine surviving bematists' measurements in Pliny's Naturalis Historia, eight show a deviation of less than 5% from the actual distance, three of them being within 1%. An odometer for measuring distance was first described by Vitruvius around 27 and 23 BC; Hero of Alexandria describes a similar device in chapter 34 of his Dioptra. Some researchers have speculated that the device might have included technology similar to that of the Greek Antikythera mechanism. The odometer of Vitruvius was based on chariot wheels of 4 feet diameter turning 400 times in one Roman mile. For each revolution a pin on the axle engaged a 400-tooth cogwheel, thus turning it one complete revolution per mile. This engaged another gear with holes along the circumference, where pebbles were located, that were to drop one by one into a box. The distance traveled would thus be given simply by counting the number of pebbles. Whether this instrument was ever built at the time is disputed. Leonardo da Vinci later tried to build it himself according to the description, but failed. However, in 1981 engineer Andre Sleeswyk built his own replica, replacing the square-toothed gear designs of da Vinci with the triangular, pointed teeth found in the Antikythera mechanism. 
With this modification, the Vitruvius odometer functioned perfectly. The odometer was also independently invented in ancient China, possibly by the prolific inventor and early scientist Zhang Heng of the Han Dynasty. By the 3rd century, the Chinese had termed the device the jì lĭ gŭ chē. There is speculation that some time in the 1st century BC, the beating of drums and gongs was mechanically driven, working automatically off the rotation of the road-wheels; this might have actually been the design of one Loxia Hong. The odometer was used also in subsequent periods of Chinese history. The passage in the Jin Shu, the oldest part of the compiled text, expanded upon this, explaining that it took a similar form to the mechanical device of the south-pointing chariot invented by Ma Jun. As recorded in the Song Shi of the Song Dynasty, the odometer and south-pointing chariot were combined into one wheeled device by engineers of the 9th century, 11th century, and 12th century. The Sun Tzu Suan Ching, dated from the 3rd century to the 5th century, also refers to the odometer, and the historical text of the Song Shi, recording the people and events of the Chinese Song Dynasty, also mentioned the odometer used in that period. At the completion of every li, the figure of a man in the lower storey strikes a drum; at the completion of every ten li, the figure in the upper storey strikes a bell. The carriage-pole ends in a phoenix-head, and the carriage is drawn by four horses. The escort was formerly of 18 men, but in the 4th year of the Yung-Hsi reign-period the emperor Thai Tsung increased it to 30
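The gearing Vitruvius describes can be checked arithmetically. A hedged Python sketch (an illustration, not from the article; the 5,000-Roman-foot mile is the conventional value and is an assumption here):

```python
import math

WHEEL_DIAMETER_FT = 4   # chariot wheel diameter, Roman feet (per Vitruvius)
TURNS_PER_MILE = 400    # wheel revolutions per Roman mile (per Vitruvius)
ROMAN_MILE_FT = 5000    # conventional Roman mile in Roman feet (assumed)

# Distance covered by 400 revolutions of a 4 ft wheel:
distance_ft = TURNS_PER_MILE * math.pi * WHEEL_DIAMETER_FT
print(distance_ft)      # ~5026.5 ft, within about 0.5% of the 5000 ft mile

def distance_from_pebbles(pebbles):
    # Each full turn of the 400-tooth cogwheel drops one pebble,
    # marking one completed Roman mile.
    return pebbles * ROMAN_MILE_FT
```

The figures are self-consistent: 400 turns of a 4-foot wheel cover almost exactly one Roman mile, so counting pebbles counts miles.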
18.
Lisp (programming language)
–
Lisp is a family of computer programming languages with a long history and a distinctive, fully parenthesized prefix notation. Originally specified in 1958, Lisp is the second-oldest high-level programming language in widespread use today; only Fortran is older, by one year. Lisp has changed since its early days, and many dialects have existed over its history. Today, the best known general-purpose Lisp dialects are Common Lisp and Scheme. Lisp was originally created as a practical mathematical notation for computer programs, influenced by the notation of Alonzo Church's lambda calculus. It quickly became the favored programming language for artificial intelligence research. The name LISP derives from LISt Processor: linked lists are one of Lisp's major data structures, and Lisp source code is made of lists. Thus, Lisp programs can manipulate source code as a data structure, and the interchangeability of code and data gives Lisp its instantly recognizable syntax. All program code is written as s-expressions, or parenthesized lists. Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology. McCarthy published its design in a paper in Communications of the ACM in 1960, entitled Recursive Functions of Symbolic Expressions and Their Computation by Machine, and he showed that with a few simple operators and a notation for functions, one can build a Turing-complete language for algorithms. Information Processing Language was the first AI language, from 1955 or 1956, and already included many of the concepts, such as list-processing and recursion, that came to be used in Lisp. McCarthy's original notation used bracketed M-expressions that would be translated into S-expressions; as an example, the M-expression car[cons[A,B]] is equivalent to the S-expression (car (cons A B)). Once Lisp was implemented, programmers rapidly chose to use S-expressions, and M-expressions were abandoned. 
M-expressions surfaced again with short-lived attempts such as the MLISP of Horace Enea. Lisp was first implemented by Steve Russell on an IBM 704 computer. Russell had read McCarthy's paper and realized that the Lisp eval function could be implemented in machine code; the result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, evaluate Lisp expressions. Two assembly language macros for the IBM 704 became the primitive operations for decomposing lists: car (Contents of the Address part of Register) and cdr (Contents of the Decrement part of Register). From the context, it is clear that the term register is used here to mean memory register. Lisp dialects still use car and cdr for the operations that return the first item in a list and the rest of the list, respectively. The first complete Lisp compiler, written in Lisp, was implemented in 1962 by Tim Hart and Mike Levin at MIT. This compiler introduced the Lisp model of incremental compilation, in which compiled and interpreted functions can intermix freely. The language used in Hart and Levin's memo is much closer to modern Lisp style than McCarthy's earlier code. Lisp was a difficult system to implement with the compiler techniques of the time
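The cons-cell structure behind car and cdr is easy to model. As a hedged sketch in Python (an analogue for illustration, not Lisp itself and not from the article), pairs stand in for cons cells and None for Lisp's empty list:

```python
# Model Lisp cons cells as Python 2-tuples.
def cons(a, d):
    return (a, d)

def car(cell):
    return cell[0]   # "Contents of the Address part of Register"

def cdr(cell):
    return cell[1]   # "Contents of the Decrement part of Register"

# The Lisp list (1 2 3) as a chain of cons cells ending in nil (None here).
lst = cons(1, cons(2, cons(3, None)))
print(car(lst))       # 1, the first item
print(car(cdr(lst)))  # 2, the first item of the rest of the list
```

This mirrors the text: car returns the first item, cdr returns the remainder, and an entire list is just nested pairs, which is also how Lisp source code itself is represented.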
19.
Python (programming language)
–
Python is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. The language provides constructs intended to enable writing clear programs on both a small and large scale, and it has a large and comprehensive standard library. Python interpreters are available for many operating systems, allowing Python code to run on a wide variety of systems. CPython, the reference implementation of Python, is open source software and has a community-based development model; it is managed by the non-profit Python Software Foundation. About the origin of Python, Van Rossum wrote in 1996: Over six years ago, in December 1989, I was looking for a hobby programming project that would keep me occupied during the week around Christmas. My office would be closed, but I had a home computer. I decided to write an interpreter for the new scripting language I had been thinking about lately; I chose Python as a working title for the project, being in a slightly irreverent mood. Python 2.0 was released on 16 October 2000 and had major new features, including a cycle-detecting garbage collector. With this release the development process was changed and became more transparent. Python 3.0, a major, backwards-incompatible release, was released on 3 December 2008 after a long period of testing. Many of its features have been backported to the backwards-compatible Python 2.6.x and 2.7.x version series. The End Of Life date for Python 2.7 was initially set at 2015. Many other paradigms are supported via extensions, including design by contract and logic programming. Python uses dynamic typing and a mix of reference counting and a cycle-detecting garbage collector for memory management. An important feature of Python is dynamic name resolution, which binds method and variable names during program execution. The design of Python offers some support for functional programming in the Lisp tradition. 
The language has map, reduce and filter functions; list comprehensions; dictionaries; and sets. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python can also be embedded in existing applications that need a programmable interface. While offering choice in coding methodology, the Python philosophy rejects exuberant syntax, such as in Perl, in favor of a sparser, less-cluttered grammar. As Alex Martelli put it: To describe something as clever is not considered a compliment in the Python culture. Python's philosophy rejects the Perl there is more than one way to do it approach to language design in favor of there should be one—and preferably only one—obvious way to do it. Python's developers strive to avoid premature optimization, and moreover reject patches to non-critical parts of CPython that would offer an increase in speed at the cost of clarity
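The functional tools named above can be shown in a few lines; a brief sketch (an illustration, not from the article) using the standard-library functools and itertools modules:

```python
from functools import reduce
from itertools import islice, count

squares = [x * x for x in range(5)]                  # list comprehension
evens = list(filter(lambda n: n % 2 == 0, range(10)))  # filter
total = reduce(lambda a, b: a + b, range(1, 11))     # fold, from functools
lazy = list(islice(count(1), 5))                     # iterator tools, itertools

print(squares)  # [0, 1, 4, 9, 16]
print(evens)    # [0, 2, 4, 6, 8]
print(total)    # 55
print(lazy)     # [1, 2, 3, 4, 5]
```

In idiomatic Python a comprehension or the built-in sum is usually preferred over reduce; the example simply demonstrates that the functional vocabulary is available.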
20.
Perl
–
Perl is a family of high-level, general-purpose, interpreted, dynamic programming languages. The languages in this family include Perl 5 and Perl 6. Though Perl is not officially an acronym, there are various backronyms in use, the best-known being Practical Extraction and Reporting Language. Perl was originally developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier; since then, it has undergone many changes and revisions. Perl 6, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from one another. The Perl languages borrow features from other programming languages including C, shell script, AWK and sed. They provide powerful text processing facilities without the arbitrary data-length limits of many contemporary Unix command-line tools, facilitating easy manipulation of text files. Perl 5 gained widespread popularity in the late 1990s as a CGI scripting language, in part due to its unsurpassed regular expression and string parsing abilities. In addition to CGI, Perl 5 is used for system administration, network programming, finance, and bioinformatics. It has been nicknamed the Swiss Army chainsaw of scripting languages because of its flexibility and power, and in 1998 it was also referred to as the duct tape that holds the Internet together, in reference to both its ubiquitous use as a glue language and its perceived inelegance. Larry Wall began work on Perl in 1987, while working as a programmer at Unisys, and the language expanded rapidly over the next few years. Perl 2, released in 1988, featured a better regular expression engine. Perl 3, released in 1989, added support for binary data streams. Originally, the only documentation for Perl was a single man page. In 1991, Programming Perl, known to many Perl programmers as the Camel Book because of its cover, was published and became the de facto reference for the language. 
At the same time, the Perl version number was bumped to 4, not to mark a major change in the language but to identify the version that was documented by the book. Perl 4 went through a series of maintenance releases, culminating in Perl 4.036 in 1993. At that point, Wall abandoned Perl 4 to begin work on Perl 5; initial design of Perl 5 continued into 1994. The perl5-porters mailing list was established in May 1994 to coordinate work on porting Perl 5 to different platforms, and it remains the primary forum for development, maintenance, and porting of Perl 5. Perl 5.000 was released on October 17, 1994. It was a nearly complete rewrite of the interpreter, and it added many new features to the language, including objects, references, lexical variables, and modules
21.
Haskell (programming language)
–
Haskell /ˈhæskəl/ is a standardized, general-purpose purely functional programming language, with non-strict semantics and strong static typing. It is named after logician Haskell Curry. The latest standard of Haskell is Haskell 2010; as of May 2016, a group is working on the next version. Haskell features a type system with type inference and lazy evaluation. Type classes first appeared in the Haskell programming language, and its main implementation is the Glasgow Haskell Compiler. Haskell is based on the semantics, but not the syntax, of the language Miranda. Haskell is used widely in academia and also used in industry. Following the release of Miranda by Research Software Ltd in 1985, interest in lazy functional languages grew; by 1987, more than a dozen non-strict, purely functional programming languages existed. Of these, Miranda was used most widely, but it was proprietary software. A committee was formed whose purpose was to consolidate the existing functional languages into a common one that would serve as a basis for future research in functional-language design. The first version of Haskell was defined in 1990, and the committee's efforts resulted in a series of language definitions. The committee expressly welcomed creating extensions and variants of Haskell 98 via adding and incorporating experimental features. In February 1999, the Haskell 98 language standard was originally published as The Haskell 98 Report. In January 2003, a revised version was published as Haskell 98 Language and Libraries. The language continues to evolve rapidly, with the Glasgow Haskell Compiler implementation representing the current de facto standard. In early 2006, the process of defining a successor to the Haskell 98 standard, informally named Haskell Prime, began. This was intended to be an ongoing incremental process to revise the language definition, producing a new revision up to once per year. 
The first revision, named Haskell 2010, was announced in November 2009. It introduces the Language-Pragma-Syntax-Extension which allows for code designating a Haskell source as Haskell 2010 or requiring certain extensions to the Haskell language. Haskell features lazy evaluation, pattern matching, list comprehension, and type classes. It is a purely functional language, which means that in general, functions in Haskell have no side effects. A distinct construct exists to represent side effects, orthogonal to the type of functions: a pure function may return a side effect which is subsequently executed, modeling the impure functions of other languages. Haskell has a strong, static type system based on Hindley–Milner type inference. Haskell's principal innovation in this area is to add type classes, originally conceived as a principled way to add overloading to the language, but since finding many more uses. The construct which represents side effects is an example of a monad. Monads are a general framework which can model different kinds of computation, including error handling, nondeterminism, parsing, and software transactional memory. Monads are defined as ordinary datatypes, but Haskell provides some syntactic sugar for their use. Haskell has an open, published specification, and multiple implementations exist
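Lazy evaluation means an expression is computed only when its result is demanded, which permits conceptually infinite structures. As a hedged analogue in Python (not Haskell, and not from the article), generators provide the same on-demand behaviour that Haskell applies to every expression:

```python
from itertools import count, islice

# An "infinite list" analogous to Haskell's [1..]; nothing is computed yet.
naturals = count(1)

# A lazily mapped stream, analogous to Haskell's map (^2) [1..];
# still no squares have been computed at this point.
squares = (n * n for n in naturals)

# Only demanding the first five values forces any computation at all.
print(list(islice(squares, 5)))  # [1, 4, 9, 16, 25]
```

The key difference is that Haskell makes this the default semantics for all evaluation, whereas in Python laziness is opt-in via generators and iterators.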
22.
Ruby (programming language)
–
Ruby is a dynamic, reflective, object-oriented, general-purpose programming language. It was designed and developed in the mid-1990s by Yukihiro Matz Matsumoto in Japan. According to its creator, Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, and Lisp. It supports multiple programming paradigms, including functional, object-oriented, and imperative. It also has a dynamic type system and automatic memory management. Ruby was conceived on February 24, 1993; Matsumoto later recalled: I knew Perl, but I didn't like it really, because it had the smell of a toy language. The object-oriented language seemed very promising, but I didn't like it, because I didn't think it was a true object-oriented language — OO features appeared to be add-on to the language. As a language maniac and OO fan for 15 years, I really wanted a genuine object-oriented scripting language. I looked for but couldn't find one. The name Ruby originated during a chat session between Matsumoto and Keiju Ishitsuka on February 24, 1993, before any code had been written for the language. Initially two names were proposed, Coral and Ruby; Matsumoto chose the latter in a later e-mail to Ishitsuka. Matsumoto later noted a factor in choosing the name Ruby: it was the birthstone of one of his colleagues. The first public release, Ruby 0.95, was announced on Japanese domestic newsgroups on December 21, 1995. Subsequently, three more versions of Ruby were released in two days. The release coincided with the launch of the Japanese-language ruby-list mailing list. In the same year, Matsumoto was hired by netlab.jp to work on Ruby as a full-time developer. In 1998, the Ruby Application Archive was launched by Matsumoto. In 1999, the first English language mailing list ruby-talk began, which signaled a growing interest in the language outside Japan. In this same year, Matsumoto and Keiju Ishitsuka wrote the first book on Ruby, The Object-oriented Scripting Language Ruby. It would be followed in the early 2000s by around 20 books on Ruby published in Japanese. 
By 2000, Ruby was more popular than Python in Japan. In September 2000, the first English language book Programming Ruby was printed, which was later freely released to the public, further widening the adoption of Ruby amongst English speakers. In early 2002, the English-language ruby-talk mailing list was receiving more messages than the Japanese-language ruby-list. Ruby 1.8 was initially released in August 2003, was stable for a long time, and was retired in June 2013. Although deprecated, there is still code based on it. Ruby 1.8 is only partially compatible with Ruby 1.9. Ruby 1.8 has been the subject of several industry standards
23.
Computer
–
A computer is a device that can be instructed to carry out an arbitrary set of arithmetic or logical operations automatically. The ability of computers to follow a sequence of operations, called a program, makes them applicable to a wide range of tasks; such computers are used as control systems for a very wide variety of industrial and consumer devices. The Internet is run on computers and it connects millions of other computers. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers have increased continuously and dramatically since then. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit, and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions. Peripheral devices allow information to be retrieved from an external source, and they enable the result of operations to be saved and retrieved. Historically, the word computer referred to a person who carried out calculations or computations; the word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning one who calculates, and states that the use of the term to mean calculating machine is from 1897. The Online Etymology Dictionary indicates that the modern use of the term, to mean programmable digital electronic computer, dates from 1945 under this name, theoretical from 1937, as Turing machine.
Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick; later record keeping aids throughout the Fertile Crescent included calculi, which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC
24.
IBM 1620
–
The IBM 1620 was announced by IBM on October 21, 1959, and marketed as an inexpensive scientific computer. After a total production of two thousand machines, it was withdrawn on November 19, 1970. Modified versions of the 1620 were used as the CPU of the IBM 1710. Core memory cycle times were 20 microseconds for the Model I and 10 microseconds for the Model II. For an explanation of all three known interpretations of the machine's code name, see the section on its development history. It was a variable word length decimal computer with a memory that could hold anything from 20,000 to 60,000 decimal digits, increasing in 20,000-decimal-digit increments. Memory was accessed two decimal digits at a time. Each digit carried a flag bit: in data, it was set to mark the most significant digit of a number; in the least significant digit of a 5-digit address it was set for indirect addressing; and flags in the middle 3 digits of a 5-digit address selected one of 7 index registers. Some instructions, such as the B instruction, used only the P address. Fixed-point data words could be any size from two decimal digits up to all of memory not used for other purposes; floating-point data words could be any size from 4 decimal digits up to 102 decimal digits. The machine had no programmer-accessible registers: all operations were memory to memory. The table below lists alphameric mode characters, and the table below that lists numeric mode characters. The Model I used the Cyrillic character Ж on the typewriter as a general-purpose invalid character with correct parity; in some 1620 installations it was called a SMERSH, as used in the James Bond novels that had become popular at the time. The Model II used a new character, ❚, as a general-purpose invalid character with correct parity. The machine's paper tape reading support could not properly read tapes containing record marks, since record marks are used to terminate the characters read into storage. 
Most 1620 installations used the more convenient punched card input/output rather than paper tape. The successor to the 1620, the IBM 1130, was based on a totally different, 16-bit binary architecture. The Monitors provided disk-based versions of 1620 SPS IId and FORTRAN IId, as well as a Disk Utility Program (DUP); both Monitor systems required 20,000 digits or more of memory and one or more 1311 disk drives. A standard preliminary was to clear the computer's memory of any previous user's detritus; memory, being magnetic cores, retained its last state. This was effected by using the console facilities to load a simple computer program (by typing its machine code at the console typewriter), running it, and stopping it. This was not challenging, as only one instruction was needed, such as 160001000000, loaded at address zero. This was the normal machine-code means of copying a constant of up to five digits: the digit string was addressed at its end and extended through lower addresses until a digit with a flag marked its end. But for this instruction, no flag would ever be found, because the source digits had shortly before been overwritten by digits lacking a flag, so the copy swept through all of memory. Each 20,000-digit module of memory took just under one second to clear
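The clearing trick can be illustrated with a small simulation. This is a hypothetical Python sketch, not 1620 machine code; the memory size, cell layout, and function names are invented for illustration only:

```python
# Hypothetical sketch (not 1620 machine code): each memory cell holds a
# decimal digit plus a flag bit. A field copy moves digits toward lower
# addresses until it reads a source digit whose flag bit is set.
MEM = 100  # stand-in for one 20,000-digit module

# leftover "detritus": arbitrary digits, with flags scattered about
memory = [(addr % 10, addr % 7 == 0) for addr in range(MEM)]
memory[11] = (0, False)  # the instruction's own flagless zero constant

def transmit_field(dst, src, max_steps):
    """Copy digits downward from src to dst until a source flag is seen."""
    for step in range(max_steps):
        digit, flag = memory[src % MEM]
        memory[dst % MEM] = (digit, flag)
        if flag:
            return step + 1      # a flag ends a normal field copy
        dst -= 1
        src -= 1
    return None                  # no flag found: the operator presses Stop

# With dst one digit below src, every source digit has just been
# overwritten by a flagless zero, so the copy sweeps all of memory.
transmit_field(dst=10, src=11, max_steps=MEM)
print(all(cell == (0, False) for cell in memory))  # True
```

Because each source cell was overwritten one step earlier by a flagless zero, the terminating condition is never met, which is exactly why the real instruction cleared core until the operator stopped it.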
25.
IBM 1401
–
The IBM 1401 is a variable word-length decimal computer that was announced by IBM on October 5, 1959. Over 12,000 units were produced, and many were leased or resold after they were replaced with newer technology; the 1401 was withdrawn on February 8, 1971. Its features included high-speed card punching and reading, magnetic tape input and output, high-speed printing, a stored program, and arithmetic and logical ability. The 1401 could be operated as an independent system or in conjunction with IBM punched card equipment. Monthly rental for 1401 configurations started at US$2,500. IBM was pleasantly surprised to receive 5,200 orders in just the first five weeks, more than predicted for the entire life of the machine. By late 1961, the 2,000 units installed in the USA were about one quarter of all electronic stored-program computers by all manufacturers, and the number of installed 1401s peaked above 10,000 in the mid-1960s; in all, nearly half of all computer systems in the world were at one point 1401-type systems. The system was marketed until February 1971. Commonly used by small businesses as their primary data processing machines, the 1401 was also frequently used as an off-line peripheral controller for mainframe computers. In such installations, with an IBM 7090 for example, the mainframe computers used only magnetic tape for input-output; it was the 1401 that transferred input data from slow peripherals to tape and transferred output data from tape to the card punch and printer. This allowed the mainframe's throughput not to be limited by the speed of a card reader or printer. During the 1970s, IBM installed many 1401s in India and Pakistan, where they were in use well into the 1980s; some of today's Indian and Pakistani software entrepreneurs started on these 1401s. The first computer in Pakistan, for example, was a 1401 installed at Pakistan International Airlines. Each alphanumeric character in the 1401 was encoded by six bits, called B, A, 8, 4, 2, 1. 
The B and A bits were called zone bits, and the 8, 4, 2, 1 bits were called numeric bits. For digits 1 through 9, the bits B and A were zero and the digit was BCD-encoded in bits 8, 4, 2, 1. Thus the letter A, punches 12 and 1 in the punched card character code, was encoded B, A, 1; encodings of punched card characters with two or more digit punches can be found in the Character and op codes table. IBM called the 1401's character code BCD, even though that term describes only the digit encoding. The 1401's alphanumeric collating sequence was compatible with the punched card collating sequence. Associated with each memory location were two other bits, called C for odd parity check and M for word mark. Each memory location, then, had the following bits: C B A 8 4 2 1 M. The 1401 was available in six memory configurations: 1,400, 2,000, 4,000, 8,000, 12,000, or 16,000 characters. Each character was addressable, with addresses ranging from 0 through 15999. A very small number of 1401s were expanded to 32,000 characters by special request. Some operations used specific memory locations
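The coding just described can be sketched in Python. This is a hypothetical illustration of the bit layout; the function names are mine, not IBM terminology:

```python
# Sketch of the 1401 character coding described above: bits are named
# B, A (zone bits) and 8, 4, 2, 1 (numeric bits), plus a C parity bit.

def bcd_digit(d):
    """Digits 1-9: zone bits B, A are zero; the digit goes in 8-4-2-1."""
    assert 1 <= d <= 9
    return (0, 0, (d >> 3) & 1, (d >> 2) & 1, (d >> 1) & 1, d & 1)

def with_parity(bits):
    """Prepend the C bit so the stored character has odd parity."""
    c = (sum(bits) + 1) % 2
    return (c,) + bits

# The letter A (card punches 12 and 1): zone bits B, A plus numeric 1.
letter_a = (1, 1, 0, 0, 0, 1)

print(with_parity(bcd_digit(5)))  # (1, 0, 0, 0, 1, 0, 1): C B A 8 4 2 1
```

The word-mark bit M, which followed the numeric bits in each location, is omitted here for brevity.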
26.
Fraction (mathematics)
–
A fraction represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar, or simple fraction consists of an integer numerator displayed above a line and a non-zero integer denominator displayed below that line. Numerators and denominators are also used in fractions that are not common, including compound fractions, complex fractions, and mixed numerals. The numerator represents a number of parts, and the denominator indicates how many of those parts make up a unit or a whole. For example, in the fraction 3/4, the numerator, 3, tells us that the fraction represents 3 equal parts, and the denominator, 4, tells us that 4 parts make up a whole; the picture to the right illustrates 3/4 or ¾ of a cake. Fractional numbers can also be written without using explicit numerators or denominators, by using decimals, percent signs, or negative exponents. An integer such as the number 7 can be thought of as having an implicit denominator of one: 7 equals 7/1. Other uses for fractions are to represent ratios and to represent division; thus the fraction ¾ is also used to represent the ratio 3:4 and the division 3 ÷ 4. The test for a number being a rational number is that it can be written in that form. In a fraction, the number of equal parts being described is the numerator, and the type or variety of the parts is the denominator. Informally, they may be distinguished by placement alone, but in formal contexts they are separated by a fraction bar. The fraction bar may be horizontal, oblique, or diagonal; these marks are respectively known as the horizontal bar, the slash or stroke, the division slash, and the fraction slash. In typography, horizontal fractions are known as en or nut fractions and diagonal fractions as em fractions. The denominators of English fractions are expressed as ordinal numbers. When the denominator is 1, it may be expressed in terms of wholes but is commonly ignored. 
When the numerator is one, it may be omitted. A fraction may be expressed as a single composition, in which case it is hyphenated, or as a number of fractions with a numerator of one, in which case they are not. Fractions should always be hyphenated when used as adjectives. Alternatively, a fraction may be described by reading it out as the numerator over the denominator, with the denominator expressed as a cardinal number. The term over is used even in the case of solidus fractions, where the numbers are placed to the left and right of a slash mark. Fractions with large denominators that are not powers of ten are often rendered in this fashion, while those with denominators divisible by ten are typically read in the normal ordinal fashion. A simple fraction is a rational number written as a/b, where a and b are integers and b is not zero
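Several of the points above (implicit denominators, fractions as division, and mixed numbers) can be checked with Python's standard fractions module:

```python
from fractions import Fraction

# 3/4 as numerator over denominator
three_quarters = Fraction(3, 4)
print(three_quarters.numerator, three_quarters.denominator)  # 3 4

# A fraction also represents division: 3/4 equals 3 divided by 4
assert Fraction(3, 4) == Fraction(3) / Fraction(4)

# An integer has an implicit denominator of one: 7 equals 7/1
assert Fraction(7) == Fraction(7, 1)

# A mixed number: 3 and one half equals 3 + 1/2 = 3.5
assert Fraction(3) + Fraction(1, 2) == Fraction(7, 2) == 3.5
```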
27.
Greatest common divisor
–
In mathematics, the greatest common divisor of two or more integers, when at least one of them is not zero, is the largest positive integer that divides each of the numbers. For example, the GCD of 8 and 12 is 4. The greatest common divisor is also known as the greatest common factor, highest common factor, greatest common measure, or highest common divisor. This notion can be extended to polynomials and other commutative rings. In this article we denote the greatest common divisor of two integers a and b as gcd(a, b). What is the greatest common divisor of 54 and 24? The number 54 can be expressed as a product of two integers in several different ways: 54 × 1 = 27 × 2 = 18 × 3 = 9 × 6. Thus the divisors of 54 are 1, 2, 3, 6, 9, 18, 27, 54. Similarly, the divisors of 24 are 1, 2, 3, 4, 6, 8, 12, 24. The numbers that these two lists share are the common divisors of 54 and 24: 1, 2, 3, 6. The greatest of these is 6; that, then, is the greatest common divisor of 54 and 24. The greatest common divisor is useful for reducing fractions to lowest terms. For example, gcd(42, 56) = 14, therefore 42/56 = (3 ⋅ 14)/(4 ⋅ 14) = 3/4. Two numbers are called relatively prime, or coprime, if their greatest common divisor equals 1; for example, 9 and 28 are relatively prime. For example, a 24-by-60 rectangular area can be divided into a grid of 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares, or 12-by-12 squares; therefore, 12 is the greatest common divisor of 24 and 60, and a 24-by-60 rectangular area can be divided into a grid of 12-by-12 squares. In practice, this method is only feasible for small numbers; computing prime factorizations in general takes far too long. Here is another example, illustrated by a Venn diagram. Suppose it is desired to find the greatest common divisor of 48 and 180. First, find the prime factorizations of the two numbers: 48 = 2 × 2 × 2 × 2 × 3, 180 = 2 × 2 × 3 × 3 × 5. What they share in common is two 2s and a 3: Least common multiple = 2 × 2 × 2 × 2 × 3 × 3 × 5 = 720; Greatest common divisor = 2 × 2 × 3 = 12. 
To compute gcd(48, 18), divide 48 by 18 to get a quotient of 2 and a remainder of 12. Then divide 18 by 12 to get a quotient of 1 and a remainder of 6. Then divide 12 by 6 to get a remainder of 0, which means that 6 is the gcd. Note that we ignored the quotient in each step except to notice when the remainder reached 0, signalling that we had arrived at the answer. Formally the algorithm can be described as gcd(a, 0) = a, gcd(a, b) = gcd(b, a mod b). In this sense the GCD problem is analogous to, e.g., the integer factorization problem, which has no known polynomial-time algorithm but is not known to be NP-complete. Shallcross et al. showed that a related problem is NC-equivalent to the problem of integer linear programming with two variables; if either problem is in NC or is P-complete, the other is as well
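The recursion gcd(a, 0) = a, gcd(a, b) = gcd(b, a mod b) translates directly into code; a minimal Python sketch:

```python
def gcd(a, b):
    """Greatest common divisor via the Euclidean algorithm:
    gcd(a, 0) = a and gcd(a, b) = gcd(b, a mod b)."""
    while b != 0:
        a, b = b, a % b  # only the remainder matters at each step
    return a

print(gcd(48, 18))  # 6, as in the worked example above
print(gcd(54, 24))  # 6
print(gcd(42, 56))  # 14, the gcd used to reduce 42/56 to 3/4
```

The iterative loop and the recursive description are equivalent; each pass replaces the pair (a, b) with (b, a mod b) until the remainder reaches zero.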
28.
Big O notation
–
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates. Big O notation is also used in many other fields to provide similar estimates. Let f and g be two functions defined on some subset of the real numbers. One writes f(x) = O(g(x)) if and only if there exist a positive real number M and a real number x0 such that |f(x)| ≤ M |g(x)| for all x ≥ x0. In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated. If f is a product of several factors, any constants can be omitted. For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x⁴, −2x³, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x⁴. Now one may apply the rule that constants can be omitted: 6x⁴ is a product of 6 and x⁴, in which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴. Thus, we say that f(x) is a big O of x⁴; mathematically, we can write f(x) = O(x⁴). One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5 and g(x) = x⁴. 
Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion |f(x)| ≤ M |x⁴| for all x ≥ x0, for some choice of x0 and M. To prove this, let x0 = 1 and M = 13. Big O notation has two main areas of application. In mathematics, it is used to describe how closely a finite series approximates a given function. In computer science, it is useful in the analysis of algorithms. In both applications, the function g appearing within the O is typically chosen to be as simple as possible, omitting constant factors and lower order terms
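The choice x0 = 1 and M = 13 can be spot-checked numerically; a small Python sketch:

```python
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

# |6x^4 - 2x^3 + 5| <= 6x^4 + 2x^4 + 5x^4 = 13x^4 for x >= 1,
# so M = 13 and x0 = 1 witness f(x) = O(x^4).
assert all(abs(f(x)) <= 13 * x**4 for x in range(1, 1001))

# The ratio f(x)/x^4 tends to the leading coefficient 6 as x grows,
# which is the constant factor that O notation omits.
print(f(10**6) / (10**6) ** 4)
```

The printed ratio is very close to 6, illustrating why dropping the constant factor and the lower-order terms leaves x⁴ as the simplified bound.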
29.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time, and in a well-defined formal language, for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that algorithm took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be a set of rules that precisely defines a sequence of operations, which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. But humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability. 
That notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimensions. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. According to Gurevich, Turing's informal argument in favor of his thesis justifies a stronger thesis; according to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source, written to an output device, and stored for further processing. Stored data are regarded as part of the state of the entity performing the algorithm. In practice, the state is stored in one or more data structures. For such a computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case by case
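The idea of giving explicit instructions for determining the nth member of an enumerably infinite set can be made concrete with a short sketch. The set of primes is used here purely as an illustration; the function name is mine:

```python
# A sketch of "explicit instructions for the nth member of a set":
# an algorithm enumerating the (enumerably infinite) set of primes.

def nth_prime(n):
    """Return the nth prime (n >= 1) by trial division: a finite,
    well-defined sequence of steps that halts for every n."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # candidate is prime if no d in [2, sqrt(candidate)] divides it
        if all(candidate % d != 0 for d in range(2, int(candidate**0.5) + 1)):
            count += 1
    return candidate

print([nth_prime(n) for n in range(1, 6)])  # [2, 3, 5, 7, 11]
```

Each call performs a finite number of steps and halts, which is what qualifies the procedure as an algorithm in the sense described above.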
30.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication and the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in computer science are analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is yes if the number is prime and no otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, so instances are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. Keeping the discussion independent of the choice of encoding can be achieved by ensuring that different representations can be transformed into each other efficiently. 
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language; if the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following
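As a sketch of the framing above, primality testing can be written as membership in a language of bitstrings. The encoding and the function names here are illustrative assumptions, not a standard API:

```python
# Sketch: the primality decision problem viewed as a language over {0, 1}.
# An instance is a bitstring encoding a number; the language is the set
# of strings whose number is prime, and a decider accepts or rejects.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n**0.5) + 1))

def decide(instance_bits):
    """Accept iff the bitstring is a member of the primality language."""
    n = int(instance_bits, 2)  # decode the instance from binary
    return "accept" if is_prime(n) else "reject"

print(decide("101"))  # 5 is prime  -> accept
print(decide("100"))  # 4 is not    -> reject
```

Note that the decider answers for any instance, which is what distinguishes solving the problem from solving one particular instance.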
31.
Addition
–
Addition is one of the four basic operations of arithmetic, with the others being subtraction, multiplication, and division. The addition of two numbers is the total amount of those quantities combined. For example, in the picture on the right, there is a combination of three apples and two apples together, making a total of five apples. This observation is equivalent to the mathematical expression 3 + 2 = 5, i.e. 3 add 2 is equal to 5. Besides counting fruits, addition can also represent combining other physical objects. In arithmetic, rules for addition involving fractions and negative numbers have been devised, amongst others. In algebra, addition is studied more abstractly. Addition is commutative, meaning that order does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter. Repeated addition of 1 is the same as counting; addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits. Mechanical aids range from the ancient abacus to the modern computer. Addition is written using the plus sign + between the terms, that is, in infix notation, and the result is expressed with an equals sign. A whole number followed immediately by a fraction indicates the sum of the two, called a mixed number: for example, 3½ = 3 + ½ = 3.5. This notation can cause confusion, since in most other contexts juxtaposition denotes multiplication instead. The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example, ∑_{k=1}^{5} k² = 1² + 2² + 3² + 4² + 5² = 55. The numbers or the objects to be added in addition are collectively referred to as the terms, the addends, or the summands. 
This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend; in fact, during the Renaissance, many authors did not consider the first addend an addend at all. Today, due to the commutative property of addition, augend is rarely used, and both terms are generally called addends. All of the above terminology derives from Latin: using the gerundive suffix -nd results in addend, thing to be added. Likewise from augere, to increase, one gets augend, thing to be increased. Sum and summand derive from the Latin noun summa, the highest, the top, and the associated verb summare
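The capital-sigma example above, and the commutative and associative properties, can be checked directly in Python:

```python
# sum of k^2 for k = 1 through 5, as in the sigma-notation example
total = sum(k**2 for k in range(1, 6))
print(total)  # 55

# commutativity: order does not matter
assert 3 + 2 == 2 + 3

# associativity: grouping does not matter
assert (1 + 2) + 3 == 1 + (2 + 3)

# addition of 0 does not change a number
assert 7 + 0 == 7
```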
32.
Subtraction
–
Subtraction is a mathematical operation that represents the operation of removing objects from a collection. It is signified by the minus sign. For example, in the picture on the right, there are 5 − 2 apples, meaning 5 apples with 2 taken away, for a total of 3 apples. Subtraction is anticommutative, meaning that changing the order changes the sign of the answer, and it is not associative, meaning that when one subtracts more than two numbers, the order in which subtraction is performed matters. Subtraction of 0 does not change a number. Subtraction also obeys predictable rules concerning related operations such as addition and multiplication. All of these rules can be proven, starting with the subtraction of integers and generalizing up through the real numbers; general binary operations that continue these patterns are studied in abstract algebra. Performing subtraction is one of the simplest numerical tasks, and subtraction of very small numbers is accessible to young children. In primary education, students are taught to subtract numbers in the decimal system, starting with single digits. Subtraction is written using the minus sign − between the terms, that is, in infix notation, and the result is expressed with an equals sign. Subtraction is also sometimes understood without a written symbol, a convention most common in accounting. Formally, the number being subtracted is known as the subtrahend, while the number it is subtracted from is the minuend. All of this terminology derives from Latin. Subtraction is an English word derived from the Latin verb subtrahere, which is in turn a compound of sub, from under, and trahere, to pull; thus to subtract is to draw from below, take away. Using the gerundive suffix -nd results in subtrahend, thing to be subtracted; likewise from minuere, to reduce or diminish, one gets minuend, thing to be diminished. Imagine a line segment of length b with the left end labeled a and the right end labeled c. Starting from a, it takes b steps to the right to reach c. 
This movement to the right is modeled mathematically by addition: a + b = c. From c, it takes b steps to the left to get back to a; this movement to the left is modeled by subtraction: c − b = a. Now consider a line segment labeled with the numbers 1, 2, and 3. From position 3, it takes no steps to the left to stay at 3, and it takes 2 steps to the left to get to position 1, so 3 − 2 = 1. This picture is inadequate to describe what would happen after going 3 steps to the left of position 3; to represent such an operation, the line must be extended. To subtract arbitrary natural numbers, one begins with a line containing every natural number. From 3, it takes 3 steps to the left to get to 0, so 3 − 3 = 0. But 3 − 4 is still invalid, since it again leaves the line
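The number-line model above can be sketched in code: subtracting b means taking b steps to the left, and on the natural-number line, stepping left of 0 falls off the line. This is an illustrative sketch; the function name is mine:

```python
# Number-line model of subtraction on the natural numbers: c - b is
# b steps to the left of c, invalid if the walk leaves the line at 0.

def subtract_natural(c, b):
    """Return c - b by stepping left, or None if the walk leaves the line."""
    position = c
    for _ in range(b):
        if position == 0:
            return None        # e.g. 3 - 4 falls off the natural-number line
        position -= 1
    return position

print(subtract_natural(3, 2))  # 1
print(subtract_natural(3, 3))  # 0
print(subtract_natural(3, 4))  # None: invalid without extending the line
```

Extending the line to the negative integers is exactly what makes 3 − 4 meaningful, which is where the passage is heading.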
33.
Multiplication
–
Multiplication is one of the four elementary mathematical operations of arithmetic, with the others being addition, subtraction, and division. Multiplication can also be visualized as counting objects arranged in a rectangle or as finding the area of a rectangle whose sides have given lengths. The area of a rectangle does not depend on which side is measured first, which illustrates the commutative property. The product of two measurements is a new type of measurement: multiplying the lengths of the two sides of a rectangle gives its area; this is the subject of dimensional analysis. The inverse operation of multiplication is division: for example, since 4 multiplied by 3 equals 12, then 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number. Multiplication is also defined for other types of numbers, such as complex numbers, and for more abstract constructs, like matrices. For these more abstract constructs, the order in which the operands are multiplied sometimes does matter. A listing of the many different kinds of products that are used in mathematics is given in the product page. In arithmetic, multiplication is often written using the sign × between the terms, that is, in infix notation, and there are other mathematical notations for multiplication. Multiplication is also denoted by dot signs, usually a middle-position dot (5 ⋅ 2), sometimes by a period (5 . 2). The middle dot notation, encoded in Unicode as U+22C5 ⋅ dot operator, is standard in the United States and the United Kingdom; when the dot operator character is not accessible, the interpunct is used. Other countries that use a comma as a decimal mark use either the period or a middle dot for multiplication. In algebra, multiplication involving variables is often written as a juxtaposition, and the notation can also be used for quantities that are surrounded by parentheses. In matrix multiplication, there is a distinction between the cross and the dot symbols. 
The cross symbol generally denotes taking the cross product of two vectors, yielding a vector as the result, while the dot denotes taking the dot product of two vectors, resulting in a scalar. In computer programming, the asterisk is still the most common notation. This is due to the fact that most computers historically were limited to small character sets that lacked a multiplication sign, while the asterisk appeared on every keyboard; this usage originated in the FORTRAN programming language. The numbers to be multiplied are generally called the factors. The number to be multiplied is called the multiplicand, while the number of times the multiplicand is to be multiplied is called the multiplier. Usually the multiplier is placed first and the multiplicand is placed second; however, sometimes the first factor is the multiplicand. Additionally, there are some sources in which the term multiplicand is regarded as a synonym for factor. In algebra, a number that is the multiplier of a variable or expression is called a coefficient. The result of a multiplication is called a product. A product of integers is a multiple of each factor; for example, 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5
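The multiplier and multiplicand terminology above can be sketched as repeated addition. This is a naive illustration of the definitions, not how machines actually multiply:

```python
# Multiplication as repeated addition: the multiplier (placed first)
# counts how many times the multiplicand is added.

def multiply(multiplier, multiplicand):
    product = 0
    for _ in range(multiplier):
        product += multiplicand
    return product

print(multiply(3, 5))  # 15, the product of 3 and 5

# A product of integers is a multiple of each factor:
assert 15 % 3 == 0 and 15 % 5 == 0

# Division is the inverse operation: 12 / 3 recovers 4
assert multiply(4, 3) == 12 and 12 // 3 == 4
```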