1.
Sequence
–
In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed. Like a set, it contains members, and the number of elements is called the length of the sequence. Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Formally, a sequence can be defined as a function whose domain is either the set of natural numbers (for infinite sequences) or the set of the first n natural numbers (for a sequence of finite length n). The position of an element in a sequence is its rank or index, and whether the first element has index 0 or 1 depends on the context or a specific convention. For example, (M, A, R, Y) is a sequence of letters with the letter 'M' first and 'Y' last. Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence. Sequences can be finite, as in these examples, or infinite; the empty sequence is included in most notions of sequence, but may be excluded depending on the context. A sequence can be thought of as a list of elements with a particular order. Sequences are useful in a number of mathematical disciplines for studying functions, spaces, and other mathematical structures using the convergence properties of sequences. In particular, sequences are the basis for series, which are important in differential equations and analysis. Sequences are also of interest in their own right and can be studied as patterns or puzzles, such as in the study of prime numbers. There are a number of ways to denote a sequence, some of which are useful for specific types of sequences. One way to specify a sequence is to list the elements; for example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation can be used for infinite sequences as well. For instance, the sequence of positive odd integers can be written (1, 3, 5, 7, ...). Listing is most useful for sequences with a pattern that can be easily discerned from the first few elements.
Other ways to denote a sequence are discussed after the examples. The prime numbers are the natural numbers greater than 1 that have no divisors other than 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, 17, ...). The prime numbers are widely used in mathematics, and specifically in number theory. The Fibonacci numbers are the integer sequence whose elements are the sum of the previous two elements. The first two elements are either 0 and 1 or 1 and 1, so that the sequence is (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...). For a large list of examples of integer sequences, see the On-Line Encyclopedia of Integer Sequences.
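The two example sequences above can also be generated programmatically. The sketch below is a minimal Python illustration (the function names are ours, not from the source): it builds the Fibonacci numbers by summing the previous two elements, and the primes by trial division.

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting from 0 and 1."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b  # each new element is the sum of the previous two
    return seq

def primes(n):
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        # a prime has no divisors other than 1 and itself
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
print(primes(7))     # [2, 3, 5, 7, 11, 13, 17]
```

Both functions are sequences in the formal sense described above: functions from the first n natural numbers to the elements.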
2.
0 (number)
–
0 (zero) is both a number and the numerical digit used to represent that number in numerals. The number 0 fulfills a central role in mathematics as the additive identity of the integers, real numbers, and many other algebraic structures. As a digit, 0 is used as a placeholder in place value systems. Names for the number 0 in English include zero, nought or naught, nil, or—in contexts where at least one adjacent digit distinguishes it from the letter O—oh or o. Informal or slang terms for zero include zilch and zip; ought and aught, as well as cipher, have also been used historically. The word zero came into the English language via French zéro from Italian zero. In pre-Islamic times the Arabic word ṣifr had the meaning 'empty'; ṣifr evolved to mean zero when it was used to translate śūnya from India. The first known English use of zero was in 1598. The Italian mathematician Fibonacci, who grew up in North Africa and is credited with introducing the decimal system to Europe, used the term zephyrum. This became zefiro in Italian, and was contracted to zero in Venetian. The Italian word zefiro was already in existence and may have influenced the spelling when transcribing Arabic ṣifr. In modern usage, there are different words used for the number or concept of zero depending on the context. For the simple notion of lacking, the words nothing and none are often used; sometimes the words nought, naught and aught are used. Several sports have specific words for zero, such as nil in football and love in tennis, and it is often called oh in the context of telephone numbers. Slang words for zero include zip, zilch and nada; duck egg and goose egg are also slang for zero. Ancient Egyptian numerals were base 10; they used hieroglyphs for the digits and were not positional. By 1740 BC, the Egyptians had a symbol for zero in accounting texts. The symbol nfr, meaning beautiful, was used to indicate the base level in drawings of tombs and pyramids.
By the middle of the 2nd millennium BC, Babylonian mathematics had a sophisticated sexagesimal positional numeral system; the lack of a positional value was indicated by a space between sexagesimal numerals. By 300 BC, a punctuation symbol (two slanted wedges) was co-opted as a placeholder in the same Babylonian system. In a tablet unearthed at Kish, the scribe Bêl-bân-aplu wrote his zeros with three hooks, rather than two slanted wedges. The Babylonian placeholder was not a true zero because it was not used alone or at the end of a number.
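The role of a positional placeholder can be illustrated with a short sketch (Python; the function name and sample values are our own, not from the source). Without a zero digit, the base-60 expansions of 65 and 3605 would be written with the same two numerals, 1 and 5, and only a gap would distinguish them.

```python
def to_sexagesimal(n):
    """Decompose a non-negative integer into base-60 digits, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 60)
        n //= 60
    return digits[::-1]

print(to_sexagesimal(65))    # [1, 5]      -> 1*60 + 5
print(to_sexagesimal(3605))  # [1, 0, 5]   -> 1*3600 + 0*60 + 5; the 0 is the placeholder
```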
3.
Combinatorics
–
Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, and combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist or a combinatorist. Basic combinatorial concepts and enumerative results appeared throughout the ancient world. The Greek historian Plutarch discusses an argument between Chrysippus and Hipparchus over a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. In the Ostomachion, Archimedes considers a tiling puzzle. In the Middle Ages, combinatorics continued to be studied, largely outside of European civilization. The Indian mathematician Mahāvīra provided formulae for the number of permutations and combinations. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations. During the Renaissance, combinatorics enjoyed a rebirth together with the rest of mathematics and the sciences; works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J. J. Sylvester and Percy MacMahon helped lay the foundation for enumerative and algebraic combinatorics. Graph theory also enjoyed an explosion of interest at the same time, especially in connection with the four color problem. In the second half of the 20th century, combinatorics enjoyed rapid growth. In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, and so on.
These connections shed the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. The Fibonacci numbers are a basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. Partition theory studies various enumeration and asymptotic problems related to integer partitions. Originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory. Graphs are basic objects in combinatorics.
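A few of the basic counts mentioned above can be sketched in Python. The combination and permutation counts use the standard library; the naive partition-counting recurrence is our own illustration, not a standard-library function.

```python
from math import comb, perm

# Ways to choose 3 elements from a 5-element set, order irrelevant (combinations):
print(comb(5, 3))  # 10

# Ordered arrangements of 3 elements drawn from 5 (permutations):
print(perm(5, 3))  # 60

def partitions(n, max_part=None):
    """Count integer partitions of n using parts no larger than max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    if n < 0 or max_part == 0:
        return 0
    # either use a part of size max_part, or forbid parts of that size entirely
    return partitions(n - max_part, max_part) + partitions(n, max_part - 1)

print(partitions(5))  # 7: 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1
```

The recursive `partitions` is exponential-time and serves only to make the enumeration concrete; partition theory studies the asymptotics of exactly these counts.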
4.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and its fields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on challenges in implementing computation. Human–computer interaction considers the challenges in making computers and computations useful, usable, and universally accessible to humans. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his programmable mechanical calculator, the Analytical Engine, in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed, and as it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953. The first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with the IBM computer was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
5.
Array data structure
–
In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements, each identified by at least one array index or key. An array is stored so that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of 10 32-bit (4-byte) integer variables, with indices 0 through 9, may be stored as 10 words at memory addresses 2000, 2004, 2008, ..., 2036, so that the element with index i has the address 2000 + 4 × i. The memory address of the first element of an array is called the first address or foundation address. Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called matrices. In some cases the term vector is used in computing to refer to an array. Arrays are often used to implement tables, especially lookup tables; the word table is sometimes used as a synonym of array. Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers: in most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations. Arrays are useful mostly because the element indices can be computed at run time. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size. The set of valid index tuples and the addresses of the elements are usually, but not always, fixed while the array is in use. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures.
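The address formula from the example above can be made concrete with a small sketch (Python; the function names and the two-dimensional row-major variant are our own illustration).

```python
def element_address(base, element_size, index):
    """Address of element `index` in a contiguous one-dimensional array."""
    return base + element_size * index

# The example from the text: 4-byte integers stored starting at address 2000
print(element_address(2000, 4, 0))  # 2000
print(element_address(2000, 4, 9))  # 2036

def element_address_2d(base, element_size, n_cols, row, col):
    """Row-major address computation for a two-dimensional array."""
    return base + element_size * (row * n_cols + col)
```

The constant-time address computation is exactly what makes array indexing cheap at run time, regardless of the index value.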
The first digital computers used machine-language programming to set up and access array structures for data tables, and for vector and matrix computations. John von Neumann wrote the first array-sorting program (merge sort) in 1945, during the building of the first stored-program computer. Array indexing was originally done by self-modifying code, and later using index registers. Some mainframes designed in the 1960s, such as the Burroughs B5000 and its successors, used memory segmentation to perform index-bounds checking in hardware. Assembly languages generally have no special support for arrays, other than what the machine itself provides. The earliest high-level programming languages, including FORTRAN, Lisp, COBOL, and ALGOL 60, had support for multi-dimensional arrays. In C++, class templates exist for multi-dimensional arrays whose dimension is fixed at runtime as well as for runtime-flexible arrays. Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables. Many databases, small and large, consist of one-dimensional arrays whose elements are records. Arrays are used to implement other data structures, such as lists, heaps, hash tables, deques, queues, stacks and strings. One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly memory pool allocation. Historically, this has sometimes been the only way to allocate dynamic memory portably.
6.
Derivative
–
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables, and it can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation; the reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation is the action of computing a derivative. The derivative of a function y = f(x) of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x. It is called the derivative of f with respect to x. If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point.
The simplest case, apart from the trivial case of a constant function, is when y is a linear function of x, so that y = f(x) = mx + b for real numbers m and b. This formula is true because y + Δy = f(x + Δx) = m(x + Δx) + b = mx + mΔx + b = y + mΔx. Thus, since y + Δy = y + mΔx, it follows that Δy = mΔx and Δy/Δx = m, and this gives an exact value for the slope of a line. If the function f is not linear, however, then the change in y divided by the change in x varies; differentiation is a method to find an exact value for this rate of change at any given value of x. The idea, illustrated by Figures 1 to 3, is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small.
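The behavior of the difference quotient Δy/Δx can be checked numerically with a short sketch (Python; the function names are our own). For a linear function the quotient equals the slope m exactly for any Δx; for a non-linear function it varies with Δx and approaches the derivative as Δx shrinks.

```python
def difference_quotient(f, x, dx):
    """Average rate of change of f between x and x + dx, i.e. dy/dx over that interval."""
    return (f(x + dx) - f(x)) / dx

# Linear function f(x) = 3x + 1: the quotient is the slope 3, independent of dx
linear = lambda x: 3 * x + 1
print(difference_quotient(linear, 2.0, 0.5))   # 3.0

# Non-linear function f(x) = x^2: the quotient varies with dx and
# approaches the derivative 2x (here 6 at x = 3) as dx becomes small
square = lambda x: x * x
print(difference_quotient(square, 3.0, 0.1))     # 6.1
print(difference_quotient(square, 3.0, 0.0001))  # close to 6.0
```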
7.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute since 1989. C is an imperative procedural language, and was useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code. The language has become available on a wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow of control primitives: for, if/else, while, switch. User-defined names are not distinguished from keywords by any kind of sigil. There are a large number of arithmetical and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type, but implicit conversions are possible. C has no define keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no function keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible. Heterogeneous aggregate data types (structs) allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic.
Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no array keyword, in use or definition; instead, square brackets indicate arrays syntactically, for example month[11]. Enumerated types are possible with the enum keyword; they are not tagged, and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers.
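Two of the points above, indexing defined as pointer arithmetic (a[i] meaning *(a + i)) and null-terminated strings, can be mimicked in a short sketch. The Python model below is our own illustration: a dictionary stands in for flat memory, and the helper names are hypothetical.

```python
# A tiny model of flat memory holding the null-terminated string "hi" at address 100
memory = {100: ord('h'), 101: ord('i'), 102: 0}

def deref(addr):
    """Model of C's unary * (dereference) operator."""
    return memory[addr]

def index(base, i):
    """In C, a[i] is defined as *(a + i): indexing is pointer arithmetic."""
    return deref(base + i)

def c_strlen(base):
    """Count characters up to, but not including, the terminating null byte."""
    n = 0
    while deref(base + n) != 0:
        n += 1
    return n

print(chr(index(100, 1)))  # i
print(c_strlen(100))       # 2
```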
8.
Pointer (computer programming)
–
In computer science, a pointer is a programming language object whose value refers to another value stored elsewhere in the computer memory using its memory address. A pointer references a location in memory, and obtaining the value stored at that location is known as dereferencing the pointer. As an analogy, a page number in a book's index could be considered a pointer to the corresponding page. Pointers to data significantly improve performance for repetitive operations such as traversing strings, lookup tables and control tables. In particular, it is often much cheaper in time and space to copy and dereference pointers than it is to copy and access the data to which the pointers point. Pointers are also used to hold the addresses of entry points for called subroutines in procedural programming; in object-oriented programming, pointers to functions are used for binding methods, often using what are called virtual method tables. A pointer is a simple, more concrete implementation of the more abstract reference data type. Several languages support some type of pointer, although some have more restrictions on their use than others. Because pointers allow both protected and unprotected access to memory addresses, there are risks associated with using them, particularly in the latter case. Other measures may also be taken, such as validation and bounds checking. Harold Lawson is credited with the 1964 invention of the pointer. According to the Oxford English Dictionary, the word pointer first appeared in print as a data pointer in a technical memorandum by the System Development Corporation. In computer science, a pointer is a kind of reference. A data primitive is any datum that can be read from or written to computer memory using one memory access. A data aggregate is a group of primitives that are contiguous in memory. In the context of these definitions, a byte is the smallest primitive; the memory address of the initial byte of a datum is considered the memory address of the entire datum.
A memory pointer is a primitive, the value of which is intended to be used as a memory address; it is said that a pointer points to a datum when the pointer's value is the datum's memory address. More generally, a pointer is a kind of reference, and it is said that a pointer references a datum stored somewhere in memory; to obtain that datum is to dereference the pointer. The feature that separates pointers from other kinds of reference is that a pointer's value is meant to be interpreted as a memory address, which is a rather low-level concept. References serve as a level of indirection: a pointer's value determines which memory address is to be used in a calculation. When setting up data structures like lists, queues and trees, it is necessary to have pointers to help manage how the structure is implemented and controlled. Typical examples of pointers are start pointers and end pointers, and these pointers can be either absolute or relative.
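The use of pointers to manage linked structures can be sketched with object references, which play an analogous role in languages without raw memory addresses. The class and function names below are our own illustration, not from the source.

```python
class Node:
    """A list node; the `next` attribute plays the role of the pointer to the next node."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def traverse(start):
    """Follow the chain of links from a start pointer until the end (None)."""
    values = []
    node = start
    while node is not None:
        values.append(node.value)  # "dereference" the current link
        node = node.next           # advance to the next node
    return values

chain = Node(1, Node(2, Node(3)))
print(traverse(chain))  # [1, 2, 3]
```

Inserting or removing an element only requires rewriting one link, which is exactly why such structures are built from pointers rather than contiguous arrays.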
9.
IBM 7090
–
The IBM 7090 was a second-generation transistorized version of the earlier IBM 709 vacuum tube mainframe computer and was designed for large-scale scientific and technological applications. The 7090 was a member of the IBM 700/7000 series of scientific computers. The first 7090 installation was in November 1959; in 1960, a typical system sold for $2.9 million or could be rented for $63,500 a month. The 7090 used a 36-bit word length, with an address space of 32,768 words. It operated with a basic memory cycle of 2.18 μs. With a processing speed of around 100 Kflop/s, the 7090 was six times faster than the 709. Although the 709 was a superior machine to its predecessor, the 704, it was being built and sold at the time that transistor circuitry was supplanting vacuum tube circuits. Hence, IBM redeployed its 709 engineering group to the design of a transistorized successor. That project became called the 709-T, which, because of the sound when spoken, quickly shifted to the nomenclature 7090. Similarly, related machines such as the 7070 and other 7000 series equipment were called by names of the form digit-digit-decade. An upgraded version, the IBM 7094, was first installed in September 1962. It had seven index registers, instead of three on the earlier machines; the 7094 console had a distinctive box on top that displayed lights for the four new index registers. The 7094 introduced double-precision floating point and additional instructions, but was backward compatible with the 7090. Minor changes in instruction formats, particularly the way the additional index registers were addressed, sometimes caused problems. On the earlier models, when more than one bit was set in the tag field, the contents of the selected index registers were combined with a logical OR; on the 7094, if the three-bit tag field was not zero, it selected just one of seven index registers. However, the OR behavior remained available in a multiple tag compatibility mode.
In 1963, IBM introduced two new, lower-cost machines called the IBM 7040 and 7044. A 7094/7044 Direct Coupled System was introduced later, with the 7094 performing computation while the 7044 handled input/output. The 7090 was built with germanium transistors, including germanium diffused-junction drift transistors, on Standard Modular System cards using current-mode logic. The basic instruction format was the same as the IBM 709: a three-bit prefix, 15-bit decrement, three-bit tag, and 15-bit address. The prefix field specified the class of instruction; the decrement field often contained an immediate operand to modify the results of the operation, or was used to further define the instruction type. The three bits of the tag specified three index registers, the contents of which were subtracted from the address to produce an effective address. The address field contained either an address or an immediate operand. Fixed-point numbers were stored in binary sign/magnitude format. A double-precision number was stored in memory in an even-odd pair of consecutive words; the sign and exponent in the second word were ignored when the number was used as an operand.
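The four-field instruction layout described above can be sketched as a bit-field decoder. Only the 3/15/3/15-bit split is taken from the text; the function name, field ordering from high bits to low, and the sample word are our own assumptions for illustration.

```python
def decode_instruction(word):
    """Split a 36-bit word into (prefix, decrement, tag, address) fields.

    Assumed layout, high bits to low: 3-bit prefix, 15-bit decrement,
    3-bit tag, 15-bit address (3 + 15 + 3 + 15 = 36 bits).
    """
    address   = word & 0o77777          # low 15 bits
    tag       = (word >> 15) & 0o7      # next 3 bits: selects index registers
    decrement = (word >> 18) & 0o77777  # next 15 bits
    prefix    = (word >> 33) & 0o7      # top 3 bits: class of instruction
    return prefix, decrement, tag, address

# Pack a sample word and decode it again:
word = (0b101 << 33) | (12345 << 18) | (0b011 << 15) | 999
print(decode_instruction(word))  # (5, 12345, 3, 999)
```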
10.
IBM
–
International Business Machines Corporation (IBM) is an American multinational technology company headquartered in Armonk, New York, United States, with operations in over 170 countries. The company originated in 1911 as the Computing-Tabulating-Recording Company and was renamed International Business Machines in 1924. IBM manufactures and markets computer hardware, middleware and software, and offers hosting and consulting services in areas ranging from mainframe computers to nanotechnology. IBM is also a major research organization, holding the record for most patents generated by a business for 24 consecutive years. IBM has continually shifted its business mix by exiting commoditizing markets and focusing on higher-value, more profitable markets. In 2014, IBM announced that it would go fabless, continuing to design semiconductors but offloading manufacturing to GlobalFoundries. Nicknamed Big Blue, IBM is one of 30 companies included in the Dow Jones Industrial Average and one of the world's largest employers, with nearly 380,000 employees. Known as IBMers, IBM employees have been awarded five Nobel Prizes, six Turing Awards, and ten National Medals of Technology. In the 1880s, technologies emerged that would ultimately form the core of what would become International Business Machines. On June 16, 1911, four companies were amalgamated in New York State by Charles Ranlett Flint, forming a fifth company, the Computing-Tabulating-Recording Company, based in Endicott, New York. The five companies had 1,300 employees and offices and plants in Endicott and Binghamton, New York; Dayton, Ohio; Detroit, Michigan; Washington, D.C.; and Toronto. They manufactured machinery for sale and lease, ranging from commercial scales and industrial time recorders, meat and cheese slicers, to tabulators and punched cards. Thomas J. Watson, Sr.,
fired from the National Cash Register Company by John Henry Patterson, called on Flint and joined CTR as General Manager; 11 months later, he was made President when court cases relating to his time at NCR were resolved. Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies, and his favorite slogan, THINK, became a mantra for each company's employees. During Watson's first four years, revenues more than doubled to $9 million. Watson had never liked the clumsy hyphenated title of the CTR and in 1924 chose to replace it with the more expansive title International Business Machines. By 1933 most of the subsidiaries had been merged into one company. In 1937, IBM's tabulating equipment enabled organizations to process unprecedented amounts of data, its clients including the U.S. Government. During the Second World War the company produced small arms for the American war effort. In 1949, Thomas Watson, Sr. created IBM World Trade Corporation, a subsidiary of IBM focused on foreign operations. In 1952, he stepped down after almost 40 years at the company helm. In 1957, the FORTRAN scientific programming language was developed. In 1961, IBM developed the SABRE reservation system for American Airlines, and in 1963 IBM employees and computers helped NASA track the orbital flight of the Mercury astronauts. A year later it moved its headquarters from New York City to Armonk. The latter half of the 1960s saw IBM continue its support of space exploration. On April 7, 1964, IBM announced the first computer system family, the IBM System/360.
11.
Edsger W. Dijkstra
–
Edsger Wybe Dijkstra was a Dutch computer scientist and an early pioneer in many research areas of computing science. A theoretical physicist by training, he worked as a programmer at the Mathematisch Centrum from 1952 to 1962. He was a professor of mathematics at the Eindhoven University of Technology and a research fellow at the Burroughs Corporation, and he held the Schlumberger Centennial Chair in Computer Sciences at the University of Texas at Austin from 1984 until 1999. One of the most influential members of computing science's founding generation, Dijkstra helped shape the new discipline from both an engineering and a theoretical perspective. Many of his papers are the source of new research areas, and several concepts and problems that are now standard in computer science were first identified by Dijkstra or bear names coined by him. Dijkstra coined the phrase structured programming, and during the 1970s this became the new programming orthodoxy. The academic study of concurrent computing started in the 1960s, with Dijkstra credited with having published the first paper in this field, identifying and solving the mutual exclusion problem. He was also one of the pioneers of research on principles of distributed computing. Shortly before his death in 2002, he received the ACM PODC Influential-Paper Award in distributed computing for his work on self-stabilization of program computation; this annual award was renamed the Dijkstra Prize the following year, in his honor. Edsger W. Dijkstra was born in Rotterdam. His father was a chemist who was president of the Dutch Chemical Society; he taught chemistry at a secondary school and was later its superintendent. His mother was a mathematician, but never had a formal job. Dijkstra had considered a career in law and had hoped to represent the Netherlands in the United Nations. However, after graduating from school in 1948, at his parents' suggestion he studied mathematics and physics.
In the early 1950s, electronic computers were a novelty. Dijkstra stumbled on his career quite by accident, through his supervisor, Professor A. For some time Dijkstra remained committed to physics, working on it in Leiden three days out of each week; with increasing exposure to computing, however, his focus began to shift. As he later recalled: "But was that a respectable profession? For after all, what was programming? Where was the body of knowledge that could support it as an intellectually respectable discipline? Full of misgivings I knocked on van Wijngaarden's office door, asking him whether I could speak to him for a moment; this was a turning point in my life and I completed my study of physics formally as quickly as I could." When Dijkstra married Maria C. Debets in 1957, he was required as a part of the marriage rites to state his profession. He stated that he was a programmer, which was unacceptable to the authorities. His thesis supervisor was van Wijngaarden. From 1952 until 1962 Dijkstra worked at the Mathematisch Centrum in Amsterdam, where he worked closely with Bram Jan Loopstra and Carel S. Scholten, who had been hired to build a computer.
12.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification such as the declarative form. The description of a language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference. Some languages have both, with the language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer. Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The theory of computation classifies languages by the computations they are capable of expressing; all Turing complete languages can implement the same set of algorithms. 
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset. The term "computer language" is sometimes used interchangeably with "programming language"
13.
Computer hardware
–
Computer hardware is the collection of physical components that constitute a computer system. By contrast, software is the set of instructions that can be stored and run by hardware; hardware is directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system. The template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus; this is referred to as the Von Neumann bottleneck and often limits the performance of the system. For the third consecutive year, U.S. business-to-business channel sales increased. The impressive growth was the fastest sales increase since the end of the recession; sales growth accelerated in the second half of the year, peaking in the fourth quarter with a 6.9 percent increase over the fourth quarter of 2012. There are a number of different types of computer system in use today. The personal computer, also known as the PC, is one of the most common types of computer due to its versatility; laptops are generally very similar, although they may use lower-power or reduced-size components, and thus offer lower performance. The computer case is a plastic or metal enclosure that houses most of the components; a case can be either big or small, but the form factor of the motherboard for which it is designed matters more. A power supply unit converts alternating current electric power to low-voltage DC power for the components of the computer. Laptops are capable of running from a battery, normally for a period of hours. The motherboard is the main component of a computer. The CPU is usually cooled by a heatsink and fan, or a water-cooling system; most newer CPUs include an on-die Graphics Processing Unit. 
The clock speed of a CPU governs how fast it executes instructions; many modern computers have the option to overclock the CPU, which enhances performance at the expense of greater thermal output and thus a need for improved cooling. The chipset, which includes the northbridge, mediates communication between the CPU and the other components of the system, including main memory. Random-Access Memory (RAM) stores the code and data that are being actively accessed by the CPU. For example, when a web browser is opened on the computer it takes up memory; RAM usually comes on DIMMs in sizes such as 2 GB, 4 GB, and 8 GB, but can be much larger. Read-Only Memory (ROM) stores the BIOS that runs when the computer is powered on or otherwise begins execution; the BIOS includes boot firmware and power management firmware
14.
Fortran
–
Fortran is a general-purpose, imperative programming language that is especially suited to numeric computation and scientific computing. It is a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers. Fortran encompasses a lineage of versions, each of which evolved to add extensions to the language while usually retaining compatibility with prior versions. The names of earlier versions of the language through FORTRAN 77 were conventionally spelled in all capitals; the capitalization has been dropped in referring to newer versions beginning with Fortran 90, and the official language standards now refer to the language as Fortran rather than all-caps FORTRAN. In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus's historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Lois Haibt, and David Sayre. Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system of 1952. A draft specification for The IBM Mathematical Formula Translating System was completed by mid-1954; the first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. John Backus recalled the project's motivation during a 1979 interview with Think, the IBM employee magazine. The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex data type in the language made Fortran especially suited to technical applications such as electrical engineering. 
By 1960, versions of FORTRAN were available for the IBM 709, 650, and 1620 computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used programming language supported across a variety of computer architectures. The arithmetic IF statement was similar to a three-way branch instruction on the IBM 704. However, the 704 branch instructions all contained only one destination address, and an optimizing FORTRAN compiler would most likely select the more compact and usually faster Transfer instructions instead of the Compare. Also, the Compare considered −0 and +0 to be different values, while the Transfer Zero instruction did not distinguish them. The FREQUENCY statement in FORTRAN was used originally to give branch probabilities for the three branch cases of the arithmetic IF statement; the Monte Carlo technique used is documented in Backus et al. Many years later, the FREQUENCY statement had no effect on the code and was treated as a comment statement, since the compilers no longer did this kind of compile-time simulation. A similar fate has befallen compiler hints in other programming languages. The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and outputting an error code; that code could be looked up by the programmer in an error-messages table in the operator's manual, providing them with a brief description of the problem. Before the development of disk files, text editors and terminals, programs were most often entered via a keypunch keyboard onto 80-column punched cards
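The three-way branch of the arithmetic IF can be sketched as follows; the Python function and its callable "labels" are purely illustrative stand-ins for FORTRAN statement labels, not part of any FORTRAN system:

```python
def arithmetic_if(e, label_neg, label_zero, label_pos):
    """Mimic FORTRAN's arithmetic IF: `IF (e) 10, 20, 30` jumps to
    label 10, 20, or 30 when e is negative, zero, or positive.
    Here the three statement labels are modeled as callables."""
    if e < 0:
        return label_neg()
    if e == 0:
        return label_zero()
    return label_pos()

# A three-way sign classification, as the 704 instruction set encouraged:
sign = arithmetic_if(3 - 5, lambda: "negative", lambda: "zero", lambda: "positive")
print(sign)  # negative
```

Modern Fortran (and most other languages) express the same control flow with an ordinary IF/ELSE chain, which is why the arithmetic IF was eventually deprecated.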
15.
COBOL
–
COBOL is a compiled English-like computer programming language designed for business use. It is imperative, procedural and, since 2002, object-oriented. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still used in legacy applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. Most programming in COBOL is now purely to maintain existing applications. COBOL was designed in 1959 by CODASYL and was partly based on previous programming language design work by Grace Hopper, commonly referred to as "the mother of COBOL". It was created as part of a US Department of Defense effort to create a portable programming language for data processing. Although intended as a stopgap, the Department of Defense promptly pressed computer manufacturers to provide it, and it was standardized in 1968 and has since been revised four times. Expansions include support for structured and object-oriented programming; the current standard is ISO/IEC 1989:2014. COBOL has an English-like syntax, which was designed to be self-documenting; however, it is verbose and uses over 300 reserved words. In contrast with modern, succinct syntax like y = x, COBOL code is split into four divisions containing a rigid hierarchy of sections, paragraphs and sentences. Lacking a large standard library, the standard specifies 43 statements and 87 functions. COBOL has been criticized throughout its life, however, for its verbosity, design process and poor support for structured programming. In the late 1950s, computer users and manufacturers were becoming concerned about the rising cost of programming. A 1959 survey had found that in any data processing installation, the programming cost US$800,000 on average. Representatives included Grace Hopper, inventor of the English-like data processing language FLOW-MATIC, Jean Sammet and Saul Gorn. The group asked the Department of Defense to sponsor an effort to create a common business language. The delegation impressed Charles A. 
Phillips, director of the Data System Research Staff at the DoD. The DoD operated 225 computers, had a further 175 on order, and had spent over $200 million on implementing programs to run on them. Portable programs would save time, reduce costs and ease modernization, so Phillips agreed to sponsor the meeting and tasked the delegation with drafting the agenda. On May 28 and 29 of 1959, a meeting was held at the Pentagon to discuss the creation of a common programming language for business. It was attended by 41 people and was chaired by Phillips. The Department of Defense was concerned about whether it could run the same data processing programs on different computers. FORTRAN, the mainstream language at the time, lacked the features needed to write such programs
16.
Lua (programming language)
–
Lua is a lightweight multi-paradigm programming language designed primarily for embedded systems and clients. Lua is cross-platform, since it is written in ANSI C. Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. As Lua was intended to be a general embeddable extension language, its designers focused on improving its speed, portability, and extensibility. From 1977 until 1992, Brazil had a policy of strong trade barriers for computer hardware and software. In that atmosphere, Tecgraf's clients could not afford, either politically or financially, to buy customized software from abroad, and those reasons led Tecgraf to implement the basic tools it needed from scratch. Lua's historical "father and mother" were the data-description/configuration languages SOL and DEL; they had been independently developed at Tecgraf in 1992–1993 to add some flexibility into two different projects. However, SOL and DEL lacked any flow-control structures. As the language's authors wrote in The Evolution of Lua, in 1993 the only real contender was Tcl. However, Tcl had unfamiliar syntax and did not offer good support for data description. "We did not consider LISP or Scheme because of their unfriendly syntax"; Python was still in its infancy. A new language was therefore developed in the free, do-it-yourself atmosphere that then reigned in Tecgraf. Because many potential users of the language were not professional programmers, the language should avoid cryptic syntax and semantics. The implementation of the new language should be highly portable, because Tecgraf's clients had a very diverse collection of computer platforms. Finally, since the designers expected that other Tecgraf products would also need to embed a scripting language, the new language was provided as a library. Lua 1.0 was designed in such a way that its object constructors, being then slightly different from the current light and flexible style, incorporated the data-description syntax of SOL. 
Lua's syntax for control structures was mostly borrowed from Modula, but it also had influence from CLU, C++, and SNOBOL. In an article published in Dr. Dobb's Journal, Lua's creators noted that Lua's semantics have been increasingly influenced by Scheme over time, especially with the introduction of anonymous functions. Versions of Lua prior to version 5.0 were released under a license similar to the BSD license. From version 5.0 onwards, Lua has been licensed under the MIT License; both are permissive free software licences and are almost identical. Lua, for instance, does not contain explicit support for inheritance; in general, Lua strives to provide simple, flexible meta-features that can be extended as needed, rather than supply a feature-set specific to one programming paradigm. As a result, the language is light: the full reference interpreter is only about 180 kB compiled
17.
Recursion
–
Recursion occurs when a thing is defined in terms of itself or of its type. Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances, it is often done in such a way that no infinite loop or infinite chain of references can occur. The ancestors of one's ancestors are also one's ancestors. The Fibonacci sequence is a classic example of recursion: Fib(0) = 0 as base case 1, Fib(1) = 1 as base case 2, and for all integers n > 1, Fib(n) = Fib(n − 1) + Fib(n − 2). Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers. Recursively defined mathematical objects include functions, sets, and especially fractals. There are various more tongue-in-cheek definitions of recursion; see recursive humor. Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be recursive. To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules; the running of a procedure involves actually following the rules and performing the steps. An analogy: a procedure is like a written recipe, and running a procedure is like actually preparing the meal. Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. 
For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth; for this reason truly recursive definitions are very rare in everyday situations. An example could be the following procedure to find a way through a maze: proceed forward until reaching either an exit or a branching point. If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively; if every trial fails by reaching only dead ends, return on the path that led to this branching point. Whether this actually defines a terminating procedure depends on the nature of the maze; in any case, executing the procedure requires carefully recording all currently explored branching points, and which of their branches have already been exhaustively tried. Recursion can also be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous. So a sentence can be defined recursively as something with a structure that includes a noun phrase, a verb, and optionally another sentence
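Both recursive procedures described above, the Fibonacci definition and the maze search, translate directly into code. This Python sketch uses a hypothetical dict-of-neighbours maze representation chosen only for illustration:

```python
def fib(n):
    """Fibonacci by direct recursion, mirroring the definition:
    two base cases plus a rule that invokes fib within its own definition."""
    if n == 0:                          # base case 1
        return 0
    if n == 1:                          # base case 2
        return 1
    return fib(n - 1) + fib(n - 2)      # recursive rule for all n > 1

def solve(maze, pos, exit_pos, visited=None):
    """Recursive maze search: terminate at an exit, otherwise try each
    branch in turn; the visited set records explored branching points so
    the procedure cannot loop forever. Returns a path or None."""
    if visited is None:
        visited = set()
    if pos == exit_pos:
        return [pos]
    visited.add(pos)
    for branch in maze.get(pos, []):
        if branch not in visited:
            path = solve(maze, branch, exit_pos, visited)
            if path is not None:
                return [pos] + path
    return None                          # every trial failed: back up

print([fib(n) for n in range(8)])        # [0, 1, 1, 2, 3, 5, 8, 13]
maze = {"A": ["B", "C"], "B": [], "C": ["D"], "D": []}
print(solve(maze, "A", "D"))             # ['A', 'C', 'D']
```

Note that the direct fib recursion recomputes the same subproblems many times and takes exponential time; it is written for clarity, not speed.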
18.
The C Programming Language (book)
–
The book was central to the development and popularization of the C programming language and is still widely read and used today. The first edition of the book, published in 1978, was the first widely available book on the C programming language. C was created by Dennis Ritchie, and Brian Kernighan wrote the first C tutorial; the authors came together to write the book in conjunction with the language's early development at AT&T Bell Labs. The version of C described in this book is referred to as "K&R C". The second edition of the book has since been translated into over 20 languages. In 2012 an eBook version of the second edition was published in ePub and Mobi formats. ANSI C, first standardized in 1989, has since undergone several revisions; however, no new edition of The C Programming Language has been issued to cover the more recent standards. BYTE stated in August 1983 that the book "is the definitive work on the C language. Don't read any further until you have this book." Jerry Pournelle wrote in the magazine that year that the book "is still the standard". He continued: you can learn the C language without getting Kernighan and Ritchie, but you're also working too hard if you make it the only book on C that you buy. The C Programming Language has often been cited as a model for technical writing, with reviewers describing it as having clear presentation. Examples generally consist of complete programs of the type one is likely to encounter in daily usage of the language. Its authors said: "We have tried to retain the brevity of the first edition. C is not a big language, and it is not well served by a big book. We have improved the exposition of critical features, such as pointers. We have refined the original examples, and have added new examples in several chapters. For instance, the treatment of complicated declarations is augmented by programs that convert declarations into words. As before, all examples have been tested directly from the text, which is in machine-readable form." 
The book introduced the "hello, world" program, which just prints out the text "hello, world"; numerous texts since then have followed that convention for introducing a programming language. Before the advent of ANSI C, the first edition of the book served as the de facto standard of the language for writers of C compilers. The second edition is meant for easy comprehension by programmers, but not as a definition for compiler writers; that role properly belongs to the standard itself. Appendix B is a summary of the facilities of the standard library
19.
Pure mathematics
–
Broadly speaking, pure mathematics is mathematics that studies entirely abstract concepts. Even though the pure and applied viewpoints are distinct philosophical positions, in practice there is much overlap in the activity of pure and applied mathematicians. To develop accurate models for describing the real world, many applied mathematicians draw on tools and techniques that are often considered pure mathematics. On the other hand, many pure mathematicians draw on natural and social phenomena as inspiration for their abstract research. Ancient Greek mathematicians were among the earliest to make a distinction between pure and applied mathematics. Plato helped to create the gap between "arithmetic", now called number theory, and "logistic", now called arithmetic. Euclid of Alexandria, when asked by one of his students of what use was the study of geometry, is said to have asked his slave to give the student a coin, "since he must make a gain of what he learns". The term itself is enshrined in the full title of the Sadleirian Chair, founded in the mid-nineteenth century. The idea of a separate discipline of pure mathematics may have emerged at that time. The generation of Gauss made no sweeping distinction of the kind; in the following years, specialisation and professionalisation started to make a rift more apparent. At the start of the twentieth century mathematicians took up the axiomatic method; in fact, in an axiomatic setting, "rigorous" adds nothing to the idea of proof. Pure mathematics, according to a view that can be ascribed to the Bourbaki group, is what is proved. "Pure mathematician" became a recognized vocation, achievable through training. One central concept in pure mathematics is the idea of generality. One can use generality to avoid duplication of effort, proving a general result instead of having to prove separate cases independently. Generality can facilitate connections between different branches of mathematics; category theory is one area of mathematics dedicated to exploring this commonality of structure as it plays out in some areas of math. Generality's impact on intuition is both dependent on the subject and a matter of personal preference or learning style. 
Often generality is seen as a hindrance to intuition, although it can certainly function as an aid to it. Each of these branches of abstract mathematics has many sub-specialties. A steep rise in abstraction was seen in the mid-20th century; in practice, however, these developments led to a sharp divergence from physics, particularly from 1950 to 1983. Later this was criticised, for example by Vladimir Arnold, as too much Hilbert, not enough Poincaré. The point does not yet seem to be settled, in that string theory pulls one way, while discrete mathematics pulls back towards proof as central
20.
Modulo operator
–
In computing, the modulo operation finds the remainder after division of one number by another. Given two positive numbers, a and n, a mod n is the remainder of the Euclidean division of a by n. Although typically performed with a and n both being integers, many computing systems allow other types of numeric operands. The range of numbers for an integer modulo of n is 0 to n − 1. See modular arithmetic for an older and related convention applied in number theory. When either a or n is negative, the naive definition breaks down, and programming languages differ in how these values are defined. In mathematics, the result of the modulo operation is the remainder of the Euclidean division. Computers and calculators have various ways of storing and representing numbers; thus their definition of the modulo operation depends on the programming language or the underlying hardware. Usually, in number theory, the positive remainder is always chosen, but programming languages choose depending on the language and the signs of a or n. Standard Pascal and ALGOL 68 give a positive remainder even for negative divisors. a modulo 0 is undefined in most systems, although some do define it as a. Despite its widespread use, truncated division is shown to be inferior to the other definitions: when the result of a modulo operation has the sign of the dividend, it can lead to surprising mistakes. For special cases, on some hardware, faster alternatives exist: optimizing compilers may recognize expressions of the form expression % constant where constant is a power of two and automatically implement them as expression & (constant − 1). This can allow writing clearer code without compromising performance. This optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend, unless the dividend is of an unsigned integer type. This is because, if the dividend is negative, the modulo will be negative, whereas expression & (constant − 1) is always positive. Some modulo operations can be factored or expanded similarly to other mathematical operations. 
This may be useful in cryptography proofs, such as the Diffie–Hellman key exchange. Identity: (a mod n) mod n = a mod n, and nx mod n = 0 for all positive integer values of x. If p is a prime number which is not a divisor of b, then ab^(p−1) mod p = a mod p, due to Fermat's little theorem. b^(−1) mod n denotes the modular multiplicative inverse, which is defined if and only if b and n are relatively prime. Distributive: (a + b) mod n = [(a mod n) + (b mod n)] mod n, and ab mod n = [(a mod n)(b mod n)] mod n. Division: (a/b) mod n = [(a mod n)(b^(−1) mod n)] mod n, when the right-hand side is defined. Inverse multiplication: [(ab mod n)(b^(−1) mod n)] mod n = a mod n. Modulo (jargon): the word "modulo" has many uses, all of which grew out of Carl F. Gauss's introduction of modular arithmetic in 1801. See also modular exponentiation. ^ Perl usually uses an arithmetic modulo operator that is machine-independent; for examples and exceptions, see the Perl documentation on multiplicative operators. ^ Mathematically, these two choices are but two of the infinite number of choices available for the inequality satisfied by a remainder
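The sign behavior and the identities above can be spot-checked numerically. In the Python sketch below, trunc_mod is an illustrative helper mimicking C-style truncated division, the test values are arbitrary, and pow(b, -1, n) for the modular inverse requires Python 3.8+:

```python
def trunc_mod(a, n):
    """Remainder under truncated division, as in C or Java: the result
    takes the sign of the dividend a (int(a / n) is fine for small ints)."""
    return a - n * int(a / n)

# Floored vs truncated results diverge for a negative dividend:
assert -7 % 3 == 2              # Python floors: sign of the divisor
assert trunc_mod(-7, 3) == -1   # truncated: sign of the dividend

# Power-of-two modulo as a bitmask, valid for non-negative dividends:
assert 77 % 8 == 77 & 7

# Spot checks of the identities (p prime, b not a multiple of p):
a, b, n, p = 38, 7, 12, 5
assert (a % n) % n == a % n                      # (a mod n) mod n = a mod n
assert (a + b) % n == ((a % n) + (b % n)) % n    # distributive over addition
assert (a * b) % n == ((a % n) * (b % n)) % n    # distributive over product
assert (a * b**(p - 1)) % p == a % p             # Fermat's little theorem
inv = pow(b, -1, n)                              # b^-1 mod n; needs gcd(b, n) = 1
assert ((a * b) % n) * inv % n == a % n          # inverse multiplication
print("all modulo checks pass")
```

The bitmask line is exactly the compiler optimization described above: for a power of two, the low bits of the dividend are the remainder.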
21.
Off-by-one error
–
An off-by-one error, also commonly known as an OBOB or OB1 error, is a logic error involving the discrete equivalent of a boundary condition. It often occurs in computer programming when an iterative loop iterates one time too many or too few; it can also occur in a mathematical context. Consider an array of items, of which items m through n are to be processed; how many items are there? An intuitive answer may be n − m, but that is off by one, exhibiting a fencepost error: the correct answer is n − m + 1. For this reason, ranges in computing are often represented by half-open intervals. Consider a loop intended to run five times, written for (i = 0; i < 5; i++): i takes the values 0, 1, 2, 3 and 4; at that point, i becomes 5, so i < 5 is false and the loop ends. However, if the comparison used were <=, the loop would be carried out six times: i takes the values 0, 1, 2, 3, 4, and 5. Likewise, if i were initialized to 1 rather than 0, there would only be four iterations: i takes the values 1, 2, 3, and 4. Both of these alternatives can cause off-by-one errors. Another such error can occur if a do-while loop is used in place of a while loop; a do-while loop is guaranteed to run at least once. Array-related confusion may also result from differences in programming languages. Numbering from 0 is most common, but some languages start array numbering with 1; Pascal has arrays with user-defined indices. This makes it possible to model the array indices after the problem domain. A fencepost error is a specific type of off-by-one error. An early description of this error appears in the works of Vitruvius. The following problem illustrates the error: if you build a straight fence 30 meters long with posts spaced 3 meters apart, how many posts do you need? The naive answer 10 is wrong. The fence has 10 sections, but 11 posts. The reverse error occurs when the number of posts is known and the number of sections is assumed to be the same. The actual number of sections is one less than the number of posts. More generally, the problem can be stated as follows: if you have n posts, how many sections are there between them? 
The correct answer may be n − 1 if the line of posts is open-ended, or n if they form a loop. The precise problem definition must be carefully considered, as the setup for one situation may give the wrong answer for other situations. Fencepost errors come from counting things rather than the spaces between them, or vice versa, or by neglecting to consider whether one should count one or both ends of a row. Fencepost errors can also occur in units other than length. A related error involves the difference between the expected and worst-case behaviours of an algorithm. With larger numbers, being off by one is often not a major issue; with smaller numbers, however, and in specific cases where accuracy is paramount, committing an off-by-one error can be disastrous
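The loop counts and the fencepost rule above can be made concrete; in this Python sketch, range(5) plays the role of i < 5, and posts_for_fence is a hypothetical helper name chosen for illustration:

```python
# Iteration counts for the three loop variants discussed above:
assert list(range(5)) == [0, 1, 2, 3, 4]      # i < 5: five iterations
assert list(range(6)) == [0, 1, 2, 3, 4, 5]   # i <= 5: six, one too many
assert list(range(1, 5)) == [1, 2, 3, 4]      # start at 1: four, one too few
# Half-open ranges make the count obvious: len(range(m, n)) == n - m.
assert len(range(3, 10)) == 7

def posts_for_fence(length_m, spacing_m, closed_loop=False):
    """Fencepost counting: sections plus one for an open-ended fence,
    exactly the section count if the fence closes into a loop."""
    sections = length_m // spacing_m
    return sections if closed_loop else sections + 1

print(posts_for_fence(30, 3))                    # 11, not the naive 10
print(posts_for_fence(30, 3, closed_loop=True))  # 10
```

Representing ranges as half-open intervals, as range does, is precisely the convention the article recommends for avoiding these errors.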
22.
Modular arithmetic
–
In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" upon reaching a certain value, the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae. A familiar use of modular arithmetic is in the 12-hour clock, in which the day is divided into two 12-hour periods. If the time is 7:00 now, then 8 hours later it will be 3:00. Usual addition would suggest that the later time should be 7 + 8 = 15, but clock time wraps around every 12 hours, so 15:00 is read as 3:00. Likewise, if the clock starts at 12:00 and 21 hours elapse, then the time will be 9:00 the next day; because the hour number starts over after it reaches 12, this is arithmetic modulo 12. According to the definition below, 12 is congruent not only to 12 itself, but also to 0. Modular arithmetic can be handled mathematically by introducing a congruence relation on the integers that is compatible with the operations on integers: addition, subtraction, and multiplication. For a positive integer n, two integers a and b are said to be congruent modulo n, written a ≡ b (mod n). The number n is called the modulus of the congruence. For example, 38 ≡ 14 (mod 12) because 38 − 14 = 24, which is a multiple of 12. The same rule holds for negative values: −8 ≡ 7 (mod 5), 2 ≡ −3 (mod 5), and −3 ≡ −8 (mod 5). Equivalently, a ≡ b (mod n) can also be thought of as asserting that the remainders of the division of both a and b by n are the same. For instance, 38 ≡ 14 (mod 12) because both 38 and 14 have the same remainder 2 when divided by 12. It is also the case that 38 − 14 = 24 is a multiple of 12. A remark on the notation: because it is common to consider several congruence relations for different moduli at the same time, the modulus is incorporated into the notation. In spite of the ternary notation, the congruence relation for a given modulus is binary. This would have been clearer if the notation a ≡n b had been used. The properties that make this relation a congruence relation are the following: if a1 ≡ b1 (mod n) and a2 ≡ b2 (mod n), then a1 + a2 ≡ b1 + b2 (mod n) and a1 − a2 ≡ b1 − b2 (mod n). 
The above two properties would still hold if the theory were expanded to include all real numbers, that is, if a1, a2, b1 and b2 were not necessarily integers. The next property, however, would fail if these variables were not all integers: a1 a2 ≡ b1 b2 (mod n). The notion of modular arithmetic is related to that of the remainder in Euclidean division. The operation of finding the remainder is referred to as the modulo operation. For example, the remainder of the division of 14 by 12 is denoted by 14 mod 12; as this remainder is 2, we have 14 mod 12 = 2
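The congruence relation and its equal-remainders characterization can be verified numerically; congruent is an illustrative helper name for this Python sketch:

```python
def congruent(a, b, n):
    """a ≡ b (mod n) precisely when n divides a - b."""
    return (a - b) % n == 0

assert congruent(38, 14, 12)         # 38 - 14 = 24, a multiple of 12
assert congruent(-8, 7, 5)           # the rule holds for negative values
assert congruent(-3, -8, 5)
assert 38 % 12 == 14 % 12 == 2       # equivalently: equal remainders
assert (7 + 8) % 12 == 3             # the 12-hour clock example
print("congruences verified")
```

Note the divisibility test works unchanged for negatives because Python's % always returns a result in 0 to n − 1 for positive n.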
23.
Modulo operation
–
In computing, the modulo operation finds the remainder after division of one number by another. Given two positive numbers, a and n, a n is the remainder of the Euclidean division of a by n. Although typically performed with a and n both being integers, many computing systems allow other types of numeric operands, the range of numbers for an integer modulo of n is 0 to n −1. See modular arithmetic for an older and related convention applied in number theory, when either a or n is negative, the naive definition breaks down and programming languages differ in how these values are defined. In mathematics, the result of the operation is the remainder of the Euclidean division. Computers and calculators have various ways of storing and representing numbers, usually, in number theory, the positive remainder is always chosen, but programming languages choose depending on the language and the signs of a or n. Standard Pascal and ALGOL68 give a positive remainder even for negative divisors, a modulo 0 is undefined in most systems, although some do define it as a. Despite its widespread use, truncated division is shown to be inferior to the other definitions, when the result of a modulo operation has the sign of the dividend, it can lead to surprising mistakes. For special cases, on some hardware, faster alternatives exist, optimizing compilers may recognize expressions of the form expression % constant where constant is a power of two and automatically implement them as expression &. This can allow writing clearer code without compromising performance and this optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend, unless the dividend is of an unsigned integer type. This is because, if the dividend is negative, the modulo will be negative, some modulo operations can be factored or expanded similar to other mathematical operations. 
This may be useful in cryptography proofs, such as the Diffie–Hellman key exchange. Identity: (a mod n) mod n = a mod n. nx mod n = 0 for all positive integer values of x. If p is a prime which is not a divisor of b, then ab^(p−1) mod p = a mod p, by Fermat's little theorem. b^(−1) mod n denotes the modular multiplicative inverse, which is defined if and only if b and n are relatively prime. Distributive: (a + b) mod n = [(a mod n) + (b mod n)] mod n; ab mod n = [(a mod n)(b mod n)] mod n. Division: (a/b) mod n = [(a mod n)(b^(−1) mod n)] mod n, when the right-hand side is defined. Inverse multiplication: [(ab mod n)(b^(−1) mod n)] mod n = a mod n. See also modulo – many uses of the word modulo, all of which grew out of Carl F. Gauss's introduction of modular arithmetic in 1801 – and modular exponentiation. ^ Perl usually uses an arithmetic modulo operator that is machine-independent; for examples and exceptions, see the Perl documentation on multiplicative operators. ^ Mathematically, these two choices are but two of the number of choices available for the inequality satisfied by a remainder
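The identities above can be spot-checked directly (a minimal sketch, not from the article; it uses Python's built-in `pow(b, -1, n)`, available in Python 3.8+, for the multiplicative inverse):

```python
# Sketch: verifying the modular identities for small concrete integers.

a, b, n = 38, 7, 15  # gcd(7, 15) = 1, so b has an inverse mod n

# Identity: (a mod n) mod n = a mod n
assert (a % n) % n == a % n

# Distributive over addition and multiplication:
assert (a + b) % n == ((a % n) + (b % n)) % n
assert (a * b) % n == ((a % n) * (b % n)) % n

# Inverse multiplication: ((a*b mod n)(b^-1 mod n)) mod n = a mod n
b_inv = pow(b, -1, n)  # multiplicative inverse of b modulo n
assert ((a * b % n) * b_inv) % n == a % n

print("all identities hold")
```

Trying `pow(b, -1, n)` with b and n not relatively prime raises a `ValueError`, matching the condition stated above for the inverse to be defined.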
24.
Positional notation
–
Positional notation, or place-value notation, is a method of representing or encoding numbers. Positional notation is distinguished from other notations for its use of the same symbols for the different orders of magnitude. This greatly simplified arithmetic, leading to the rapid spread of the notation across the world. With the use of a radix point, the notation can be extended to include fractions. The Hindu–Arabic numeral system, base 10, is the most commonly used system in the world today for most calculations. Today, the base-10 system, which is likely motivated by counting with the ten fingers, is ubiquitous. Other bases have been used in the past, however, and some continue to be used today. For example, the Babylonian numeral system, credited as the first positional numeral system, was base 60, but it lacked a real 0 value. Zero was indicated by a space between sexagesimal numerals; by 300 BC, a punctuation symbol was co-opted as a placeholder in the same Babylonian system. In a tablet unearthed at Kish, the scribe Bêl-bân-aplu wrote his zeros with three hooks, rather than two slanted wedges. The Babylonian placeholder was not a true zero because it was not used alone, nor was it used at the end of a number; thus numbers like 2 and 120, 3 and 180, 4 and 240 looked the same, because the larger numbers lacked a final sexagesimal placeholder. Counting rods and most abacuses have been used to represent numbers in a positional numeral system. This approach required no memorization of tables and could produce practical results quickly. For four centuries there was strong disagreement between those who believed in adopting the positional system in writing numbers and those who wanted to stay with the additive-system-plus-abacus. Although electronic calculators have largely replaced the abacus, the latter continues to be used in Japan. 
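The mechanics of place value can be illustrated with a short sketch (not from the article): the base-b digits of an integer are simply the successive remainders under division by b, which also shows how the same algorithm serves base 10, base 2, or the Babylonian base 60.

```python
# Sketch: extracting positional digits by repeated division. digits[0] is
# the most significant digit; each digit d_i satisfies 0 <= d_i < b, and
# n = sum(d_i * b**(len-1-i)).

def to_base(n, b):
    """Return the base-b digits of a non-negative integer, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % b)
        n //= b
    return digits[::-1]

print(to_base(255, 2))    # [1, 1, 1, 1, 1, 1, 1, 1]
print(to_base(3661, 60))  # [1, 1, 1]  (one hour, one minute, one second)
```

The base-60 example mirrors the sexagesimal system described above, which survives today in our divisions of hours and degrees.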
After the French Revolution, the new French government promoted the extension of the decimal system. Some of those pro-decimal efforts—such as decimal time and the decimal calendar—were unsuccessful. Other French pro-decimal efforts—currency decimalisation and the metrication of weights and measures—spread widely out of France to almost the whole world. According to Joseph Needham and Lam Lay Yong, decimal fractions were first developed and used by the Chinese in the 1st century BC, though the written Chinese decimal fractions were non-positional; counting rod fractions, however, were positional. The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350, anticipating Simon Stevin, but did not develop any notation to represent them. A forerunner of modern European decimal notation was introduced by Simon Stevin in the 16th century. A key argument against the positional system was its susceptibility to easy fraud by simply putting a number at the beginning or end of a quantity, thereby changing 100 into 5100
25.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics, "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. 
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, and painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω, both of which mean "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times
26.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single indeterminate x is x² − 4x + 7; an example in three variables is x³ + 2xyz² − yz + 1. Polynomials appear in a wide variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra. The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. The word polynomial was first used in the 17th century. The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value; it is thus correct to call it an indeterminate. However, when one considers the function defined by the polynomial, then x represents the argument of the function. Many authors use these two words interchangeably. It is a common convention to use uppercase letters for the indeterminates. However, one may evaluate a polynomial over any domain where addition and multiplication are defined. In particular, when the argument a is the indeterminate x itself, the image of x by this function is the polynomial P itself. This equality allows writing "let P be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x". A polynomial is an expression that can be built from constants and indeterminates; the word indeterminate means that x represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a function. This can be expressed concisely by using summation notation: ∑_{k=0}^{n} a_k x^k. That is. 
Each term consists of the product of a number—called the coefficient of the term—and a finite number of indeterminates. Because x = x¹, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial; the degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial, 0, is generally treated as not defined
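Evaluating the expression ∑ a_k x^k at a concrete value can be sketched with Horner's rule (a minimal example, not from the article; the function name is illustrative):

```python
# Sketch: evaluating a polynomial sum_{k=0}^{n} a_k * x**k at a value x.
# Horner's rule rewrites the sum as (...((a_n * x + a_{n-1}) * x + ...) + a_0,
# using n multiplications instead of computing each power separately.

def poly_eval(coeffs, x):
    """Evaluate a polynomial at x; coeffs[k] is the coefficient of x**k."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# The article's example x^2 - 4x + 7 at x = 3: 9 - 12 + 7 = 4
print(poly_eval([7, -4, 1], 3))  # 4
```

The same function works over any domain where addition and multiplication are defined, echoing the remark above: passing a `Fraction` or `complex` value for x works unchanged.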
27.
Bernoulli number
–
In mathematics, the Bernoulli numbers Bn are a sequence of rational numbers with deep connections to number theory. The values of the first few Bernoulli numbers are B0 = 1, B±1 = ±1/2, B2 = 1/6, B3 = 0, B4 = −1/30, B5 = 0, B6 = 1/42, B7 = 0, B8 = −1/30. The superscript ± is used in this article to designate the two conventions for Bernoulli numbers, which differ only in the sign of the n = 1 term: B−n are the first Bernoulli numbers, with B−1 = −1/2; B+n are the second Bernoulli numbers, also called the original Bernoulli numbers, with B+1 = +1/2. Since Bn = 0 for all odd n > 1, and many formulas only involve even-index Bernoulli numbers, some authors write Bn to mean B2n; this article does not follow that notation. The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and independently by the Japanese mathematician Seki Kōwa. Seki's discovery was posthumously published in 1712 in his work Katsuyo Sampo; Bernoulli's, also posthumously. Ada Lovelace's note G on the Analytical Engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage's machine; as a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program. Bernoulli numbers feature prominently in the closed-form expression of the sum of the mth powers of the first n positive integers. For m, n ≥ 0 define S_m(n) = ∑_{k=1}^{n} k^m = 1^m + 2^m + ⋯ + n^m. This expression can always be rewritten as a polynomial in n of degree m + 1. The coefficients of these polynomials are related to the Bernoulli numbers by Bernoulli's formula: S_m(n) = (1/(m+1)) ∑_{k=0}^{m} C(m+1, k) B+_k n^{m+1−k}, where C(m+1, k) denotes the binomial coefficient. For example, taking m to be 1 gives the triangular numbers 0, 1, 3, 6, … (A000217): 1 + 2 + ⋯ + n = (1/2)(n² + n). Taking m to be 2 gives the square pyramidal numbers 0, 1, 5, 14, …: 1² + 2² + ⋯ + n² = (1/3)(n³ + (3/2)n² + (1/2)n). Some authors use the other convention for Bernoulli numbers and state Bernoulli's formula in this way. 
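A minimal sketch (not from the article; function names are illustrative) computes the second Bernoulli numbers B+ from the standard recurrence ∑_{k=0}^{m} C(m+1, k) B−_k = 0 (with B−_0 = 1), then applies Bernoulli's formula for the sums of powers:

```python
# Sketch: Bernoulli numbers via the classical recurrence, using exact
# rational arithmetic, then Bernoulli's formula S_m(n).

from fractions import Fraction
from math import comb

def bernoulli_plus(m):
    """Return [B+_0, ..., B+_m] (the convention with B_1 = +1/2)."""
    B = [Fraction(1)]
    for i in range(1, m + 1):
        # Recurrence for the B- convention: sum_{k<=i} C(i+1, k) B_k = 0.
        B.append(-sum(comb(i + 1, k) * B[k] for k in range(i)) / (i + 1))
    if m >= 1:
        B[1] = Fraction(1, 2)  # the two conventions differ only at n = 1
    return B

def sum_of_powers(m, n):
    """Bernoulli's formula for 1^m + 2^m + ... + n^m."""
    B = bernoulli_plus(m)
    return sum(comb(m + 1, k) * B[k] * Fraction(n) ** (m + 1 - k)
               for k in range(m + 1)) / (m + 1)

print(bernoulli_plus(4))     # [1, 1/2, 1/6, 0, -1/30]
print(sum_of_powers(2, 3))   # 14, matching 1 + 4 + 9
```

Taking m = 1 reproduces the triangular numbers and m = 2 the square pyramidal numbers discussed above.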
Bernoulli's formula is sometimes called Faulhaber's formula after Johann Faulhaber, who also found ways to calculate sums of powers. Faulhaber's formula was generalized by V. Guo and J. Zeng to a q-analog. Many characterizations of the Bernoulli numbers have been found in the last 300 years, and each could be used to introduce these numbers. Here only four of the most useful ones are mentioned: a recursive equation, an explicit formula, a generating function
28.
Bell number
–
In combinatorial mathematics, the Bell numbers count the number of partitions of a set. These numbers have been studied by mathematicians since the 19th century, and their roots go back to medieval Japan, but they are named after Eric Temple Bell, who wrote about them in the 1930s. The nth of these numbers, Bn, counts the number of different ways to partition a set that has n elements, or equivalently, the number of equivalence relations on it. Outside of mathematics, the same number also counts the number of different rhyme schemes for n-line poems. As well as appearing in counting problems, these numbers have a different interpretation: in particular, Bn is the nth moment of a Poisson distribution with mean 1. In general, Bn is the number of partitions of a set of size n. A partition of a set S is defined as a set of nonempty, pairwise disjoint subsets of S whose union is S. For example, B3 = 5 because the 3-element set can be partitioned in 5 distinct ways. B0 is 1 because there is exactly one partition of the empty set: every member of the empty set is a nonempty set (vacuously true), and therefore the empty set is the only partition of itself. As suggested by the set notation above, we consider neither the order of the partitions nor the order of elements within each partition; this means that the following partitionings are all considered identical. If, instead, different orderings of the sets are considered to be different partitions, the count is larger. If a number N is a squarefree positive integer, the product of some number n of distinct primes, then Bn gives the number of different multiplicative partitions of N. These are factorizations of N into numbers greater than one, treating two factorizations as the same if they have the same factors in a different order. A rhyme scheme describes which lines of a poem rhyme with each other. Thus, the 15 possible four-line rhyme schemes are AAAA, AAAB, AABA, AABB, AABC, ABAA, ABAB, ABAC, ABBA, ABBB, ABBC, ABCA, ABCB, ABCC, and ABCD. 
The Bell numbers come up in a card-shuffling problem mentioned in the addendum to Gardner. Of these, the number of shuffles that return the deck to its original sorted order is exactly Bn; thus, the probability that the deck is in its original order after shuffling it in this way is Bn/n^n, which is significantly larger than the 1/n! probability that would describe a uniformly random permutation of the deck. Related to card shuffling are several problems of counting special kinds of permutations that are also answered by the Bell numbers. For instance, the nth Bell number equals the number of permutations on n items in which no three values that are in sorted order have the last two of these three consecutive. The permutations that avoid the generalized patterns 12-3, 32-1, 3-21, 1-32, 3-12, 21-3, and the permutations in which every 321 pattern can be extended to a 3241 pattern, are also counted by the Bell numbers
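The Bell numbers can be computed with the Bell triangle, a standard construction (sketched below, not taken from the article): each row begins with the last entry of the previous row, and each subsequent entry is the sum of the entry to its left and the entry above that one.

```python
# Sketch: Bell numbers via the Bell triangle. The first entry of each row
# is B_n; e.g. B_4 = 15, matching the 15 four-line rhyme schemes above.

def bell_numbers(n):
    """Return [B_0, ..., B_n]."""
    bells = [1]
    row = [1]
    for _ in range(n):
        new_row = [row[-1]]           # row starts with last entry of previous row
        for entry in row:
            new_row.append(new_row[-1] + entry)
        row = new_row
        bells.append(row[0])
    return bells

print(bell_numbers(5))  # [1, 1, 2, 5, 15, 52]
```

The values B3 = 5 and B4 = 15 match the partition count and rhyme-scheme count discussed above.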
29.
Zeroth law of thermodynamics
–
The zeroth law of thermodynamics states that if two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other. Accordingly, thermal equilibrium between systems is a transitive relation. Two systems are said to be in the relation of thermal equilibrium if they are linked by a wall permeable only to heat and they do not change over time. The physical meaning of the law was expressed by Maxwell in the words "All heat is of the same kind"; for this reason, another statement of the law is "All diathermal walls are equivalent". The law is important for the mathematical formulation of thermodynamics, which needs the assertion that the relation of thermal equilibrium is an equivalence relation. This information is needed for a definition of temperature that will agree with the physical existence of valid thermometers. A thermodynamic system is by definition in its own state of internal thermodynamic equilibrium. One precise statement of the law is that the relation of thermal equilibrium is an equivalence relation on pairs of thermodynamic systems. This means that a unique tag can be assigned to every system, and this property is used to justify the use of temperature as a tagging system. This statement asserts that thermal equilibrium is a Euclidean relation between thermodynamic systems. If we also define that every system is in thermal equilibrium with itself, then thermal equilibrium is additionally a reflexive relation. Binary relations that are both reflexive and Euclidean are equivalence relations. One consequence of an equivalence relationship is that the equilibrium relationship is symmetric: if A is in thermal equilibrium with B, then B is in thermal equilibrium with A. Thus we may say that two systems are in equilibrium with each other, or that they are in mutual equilibrium. A reflexive, transitive relationship does not guarantee an equivalence relationship; in order for the above statement to be true, both reflexivity and symmetry must be implicitly assumed. 
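The logical step above, that a reflexive, Euclidean relation is automatically symmetric and transitive, can be checked mechanically on a finite set (a sketch, not from the article; the function name is illustrative):

```python
# Sketch: closing a relation under reflexivity and the Euclidean property
# "if a~c and b~c then a~b" -- the zeroth law's statement that two systems
# each in equilibrium with a third are in equilibrium with each other.

def euclidean_reflexive_closure(pairs, elements):
    """Smallest superset of `pairs` that is reflexive and Euclidean."""
    rel = set(pairs) | {(x, x) for x in elements}
    changed = True
    while changed:
        changed = False
        for (a, c1) in list(rel):
            for (b, c2) in list(rel):
                if c1 == c2 and (a, b) not in rel:
                    rel.add((a, b))
                    changed = True
    return rel

# Systems A and B are each in thermal equilibrium with a third system C:
rel = euclidean_reflexive_closure({("A", "C"), ("B", "C")}, "ABC")
print(("A", "B") in rel, ("B", "A") in rel)  # True True -- mutual equilibrium
```

Starting from only the two observed equilibria with C, the closure forces A and B into mutual (and symmetric) equilibrium, which is exactly the content of the law.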
It is the Euclidean relationship which applies directly to thermometry. An ideal thermometer is a thermometer which does not measurably change the state of the system it is measuring. The zeroth law provides no information regarding this final reading. The zeroth law establishes thermal equilibrium as an equivalence relationship. An equivalence relationship on a set divides that set into a collection of distinct subsets where any member of the set is a member of one and only one such subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of the subset to be tagged with a label identifying the subset to which it belongs
30.
Index case
–
An index case will sometimes achieve the status of a classic case in the literature, as did Phineas Gage. The index case may indicate the source of the disease and the possible means of its spread. The index case is the first patient that indicates the existence of an outbreak; earlier cases may be found and are labeled primary, secondary, tertiary, etc. The term primary case can only apply to infectious diseases that spread from human to human. "Patient Zero" was used to refer to the index case in the spread of HIV in North America. In genetics, the index case is the case of the original patient that stimulates investigation of other members of the family to discover a possible genetic factor. In the early years of the AIDS epidemic, a "patient zero" transmission scenario was compiled by Dr. William Darrow of the Centers for Disease Control and Prevention. This epidemiological study showed how "patient zero" had infected multiple partners with HIV. The CDC identified Gaëtan Dugas as a carrier of the virus from Europe to the United States, spreading it to other men he encountered at gay bathhouses. Journalist Randy Shilts subsequently wrote about Patient Zero, based on Darrow's findings, in his 1987 book And the Band Played On. Dugas was a flight attendant who was sexually promiscuous in several North American cities, according to Shilts's book, and he was vilified for several years as a spreader of HIV. Four years later, Darrow repudiated the study's methodology and how Shilts had represented its conclusions. Index cases are not always the true source of an outbreak, and more often than not, especially in large disease outbreaks, they're not. Mary Mallon was the index case of a typhoid outbreak in the early 1900s. An apparently healthy carrier, she infected 47 people while working as a cook, and she eventually was isolated to prevent her from spreading the disease to others. 
The first recorded victim of the Ebola virus was a 44-year-old schoolteacher named Mabalo Lokela. 64-year-old Liu Jianlun, a Guangdong doctor, transmitted SARS during a stay in the Hong Kong Metropole Hotel in 2003. A baby in the Lewis House at 40 Broad Street is considered the index patient in the 1854 cholera outbreak in the Soho neighborhood of London. Édgar Enrique Hernández may be patient zero of the 2009 swine flu outbreak; he recovered, and a bronze statue has been erected in his honor. Maria Adela Gutierrez contracted the virus at about the same time as Hernández. Two-year-old Emile Ouamouno is believed to be the index patient in the 2014 Ebola epidemic in Guinea
31.
Sample (statistics)
–
In statistics and quantitative research methodology, a data sample is a set of data collected and/or selected from a statistical population by a defined procedure. The elements of a sample are known as sample points, sampling units or observations. Typically, the population is very large, making a census or a complete enumeration of all the values in the population either impractical or impossible. The sample usually represents a subset of manageable size. Samples are collected and statistics are calculated from the samples so that one can make inferences or extrapolations from the sample to the population. The data sample may be drawn from a population without replacement, in which case it is a subset of the population, or with replacement. A complete sample is a set of objects from a parent population that includes ALL such objects that satisfy a set of well-defined selection criteria. For example, a complete sample of Australian men taller than 2 m would consist of a list of every Australian male taller than 2 m, but it wouldn't include German males, or tall Australian females. To compile such a complete sample requires a complete list of the parent population, including data on height, gender, and nationality for each member of that parent population. In the case of human populations, such a complete list is unlikely to exist. An unbiased sample is a set of objects chosen from a complete sample using a selection process that does not depend on the properties of the objects. For example, an unbiased sample of Australian men taller than 2 m might consist of a randomly sampled subset of 1% of Australian males taller than 2 m, but one chosen from the electoral register might not be unbiased since, for example, males aged under 18 will not be on the electoral register. The best way to avoid a biased or unrepresentative sample is to select a random sample, also known as a probability sample. A random sample is defined as a sample where each member of the population has a known, non-zero probability of being selected. 
Several types of random samples are simple random samples, systematic samples, and stratified random samples. A sample that is not random is called a nonrandom sample, or a non-probability sample. Some examples of nonrandom samples are convenience samples, judgment samples, purposive samples, quota samples, and snowball samples. They can be used in many situations. In mathematical terms, given a random variable X with distribution F, a random sample of length n is a set of n independent, identically distributed random variables with distribution F. A sample concretely represents n experiments in which the same quantity is measured. For example, if X represents the height of an individual and n individuals are measured, Xi will be the height of the i-th individual. Note that a sample of random variables must not be confused with the realizations of these variables
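The distinction between a simple random sample and a systematic sample can be sketched with the standard library (a toy example, not from the article; the population and sample size are illustrative):

```python
# Sketch: simple random sampling vs. systematic sampling from a toy
# population of 100 units, using only Python's standard library.

import random

population = list(range(1, 101))  # units labeled 1..100
sample_size = 10

random.seed(42)  # fixed seed so the sketch is reproducible

# Simple random sample without replacement: every size-10 subset is
# equally likely.
srs = random.sample(population, sample_size)

# Systematic sample: a random start, then every k-th unit thereafter.
k = len(population) // sample_size
start = random.randrange(k)
systematic = population[start::k]

print(len(srs), len(systematic))  # 10 10
```

Both procedures yield a probability sample; a convenience sample, by contrast, would pick whichever units happen to be easiest to reach, with no known selection probability.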
32.
Epidemiology
–
Epidemiology is the study and analysis of the patterns, causes, and effects of health and disease conditions in defined populations. It is the cornerstone of public health, and shapes policy decisions and evidence-based practice by identifying risk factors for disease. Epidemiologists help with study design, collection, and statistical analysis of data, and with interpretation and dissemination of results. Epidemiology has helped develop the methodology used in clinical research and public health studies. The term is also used in studies of zoological populations, although the term epizoology is available. The distinction between epidemic and endemic was first drawn by Hippocrates, to distinguish between diseases that are visited upon a population from those that reside within a population. The term epidemiology appears to have first been used to describe the study of epidemics in 1802 by the Spanish physician Villalba in Epidemiología Española. Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic; this branch of epidemiology is based upon how the pattern of the disease causes changes in the function of a population. Hippocrates believed sickness of the human body to be caused by an imbalance of the four humors. The cure for the sickness was to remove or add the humor in question to balance the body; this belief led to the application of bloodletting and dieting in medicine. He coined the terms endemic and epidemic. In the middle of the 16th century, a doctor from Verona named Girolamo Fracastoro was the first to propose a theory that the very small, unseeable particles that cause disease were alive. They were considered to be able to spread by air and multiply by themselves; in this way he refuted Galen's miasma theory. 
In 1543 he wrote a book, De contagione et contagiosis morbis. The development of a sufficiently powerful microscope by Antonie van Leeuwenhoek in 1675 provided visual evidence of living particles consistent with a germ theory of disease. Wu Youke developed the concept that some diseases were caused by transmissible agents, and his book Wenyi Lun can be regarded as the main etiological work that brought forward the concept, ultimately attributed to Westerners, of germs as a cause of epidemic diseases. His concepts are still considered in current scientific research in relation to Traditional Chinese Medicine studies. Another pioneer, Thomas Sydenham, was the first to distinguish the fevers of Londoners in the later 1600s. His theories on cures of fevers met with much resistance from traditional physicians at the time, and he was not able to find the cause of the smallpox fever he researched and treated. John Graunt, a haberdasher and amateur statistician, published Natural and Political Observations upon the Bills of Mortality in 1662. In it, he analysed the mortality rolls in London before the Great Plague and presented one of the first life tables. He provided statistical evidence for many theories on disease, and also refuted some widespread ideas on them. John Snow is famous for his investigations into the causes of the 19th-century cholera epidemics. He began by noticing the significantly higher death rates in two areas supplied by the Southwark Company