1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules; rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. 
Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and the history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω; in Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
2.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation. Human–computer interaction considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; indeed, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist for, among other reasons, documenting the binary number system. Charles Babbage began developing his programmable Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true. 
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with an IBM computer was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
3.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variable is sometimes referred to as the argument of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function: some functions may be defined by a formula or algorithm that tells how to compute the output for a given input; others are given by a picture, called the graph of the function; in science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could also be described implicitly, for example as the inverse of another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word range is used to mean, specifically, the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers. The image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, and multiplication of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape has more than one color. This function will be referred to as the "color-of-the-shape" function. The input to a function is called the argument and the output is called the value. 
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
4.
Real number
–
In mathematics, a real number is a value that represents a quantity along a continuous line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and complex numbers include real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics; several rigorous definitions exist, and all of them satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than that of the reals themselves is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC; the Vedic Sulba Sutras (c. 600 BC) include what may be the first use of irrational numbers. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation; in the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory. 
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann, extending Hermite's work, showed that π is transcendental as well. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and was finally made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the set of real numbers without having defined it cleanly. The first rigorous definition was given by Georg Cantor in 1871; in 1874, he showed that the set of all real numbers is uncountably infinite, but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument, which he published only in 1891. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry; from the structuralist point of view, all these constructions are on equal footing.
5.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5 1⁄2, and √2 are not. The set of integers consists of zero, the positive natural numbers, also called whole numbers or counting numbers, and their additive inverses. This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen. ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the integers are exactly the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication; that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers and, importantly, 0, Z is also closed under subtraction. The integers form a ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into that ring. This universal property, namely being an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers need not be an integer; and although the natural numbers are closed under exponentiation, the integers are not. The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g., there is no integer x such that 2x = 1, because the left-hand side is even. 
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions are true in Z for all values of variables which are true in any unital commutative ring. Note that certain non-zero integers map to zero in certain rings. The lack of zero divisors in the integers means that the commutative ring Z is an integral domain.
6.
Adrien-Marie Legendre
–
Adrien-Marie Legendre was a French mathematician. Legendre made numerous contributions to mathematics; well-known and important concepts such as the Legendre polynomials and the Legendre transformation are named after him. Adrien-Marie Legendre was born in Paris on 18 September 1752 to a wealthy family. He received his education at the Collège Mazarin in Paris, and defended his thesis in physics and mathematics in 1770. He taught at the École Militaire in Paris from 1775 to 1780; at the same time, he was associated with the Bureau des Longitudes. In 1782, the Berlin Academy awarded Legendre a prize for his treatise on projectiles in resistant media; this treatise also brought him to the attention of Lagrange. The Académie des Sciences made Legendre an adjoint member in 1783, and in 1789 he was elected a Fellow of the Royal Society. He assisted with the Anglo-French Survey to calculate the distance between the Paris Observatory and the Royal Greenwich Observatory by means of trigonometry. To this end, in 1787 he visited Dover and London together with Dominique, comte de Cassini; the three also visited William Herschel, the discoverer of the planet Uranus. Legendre lost his fortune in 1793 during the French Revolution. That year, he also married Marguerite-Claudine Couhin, who helped him put his affairs in order. In 1795 Legendre became one of six members of the mathematics section of the reconstituted Académie des Sciences, renamed the Institut National des Sciences et des Arts. Later, in 1803, Napoleon reorganized the Institut National, and Legendre's pension was partially reinstated with the change in government in 1828. In 1831 he was made an officer of the Légion d'Honneur. Legendre died in Paris on 10 January 1833, after a long and painful illness, and Legendre's widow carefully preserved his belongings to memorialize him. 
Upon her death in 1856, she was buried next to her husband in the village of Auteuil, where the couple had lived. Legendre's name is one of the 72 names inscribed on the Eiffel Tower. He developed the method of least squares; today, the term least squares method is used as a direct translation from the French méthode des moindres carrés. Around 1811 he named the gamma function and introduced the symbol Γ, normalizing it to Γ(n+1) = n!. In 1830 he gave a proof of Fermat's Last Theorem for exponent n = 5, which was also proven by Lejeune Dirichlet in 1828. In number theory, he conjectured the quadratic reciprocity law, subsequently proved by Gauss. He also did pioneering work on the distribution of primes and on the application of analysis to number theory. His 1798 conjecture of the prime number theorem was proved by Hadamard. He is known for the Legendre transformation, which is used to go from the Lagrangian to the Hamiltonian formulation of classical mechanics; in thermodynamics it is also used to obtain the enthalpy and the Helmholtz and Gibbs energies from the internal energy.
7.
Carl Friedrich Gauss
–
Johann Carl Friedrich Gauss was born on 30 April 1777 in Brunswick, in the Duchy of Brunswick-Wolfenbüttel, to poor working-class parents. Gauss later solved a puzzle about his birthdate in the context of finding the date of Easter. He was christened and confirmed in a church near the school he attended as a child. A contested story relates that, when he was eight, he figured out how to add up all the numbers from 1 to 100. There are many other anecdotes about his precocity while a toddler, and he made his first ground-breaking mathematical discoveries while still a teenager. He completed Disquisitiones Arithmeticae, his magnum opus, in 1798 at the age of 21. This work was fundamental in consolidating number theory as a discipline and has shaped the field to the present day. While at university, Gauss independently rediscovered several important theorems; his breakthrough came in 1796 when he showed that a regular heptadecagon (17-sided polygon) can be constructed with ruler and compass. Gauss was so pleased by this result that he requested that a regular heptadecagon be inscribed on his tombstone; the stonemason declined, stating that the difficult construction would essentially look like a circle. The year 1796 was most productive for both Gauss and number theory: he discovered the construction of the heptadecagon on 30 March. He further advanced modular arithmetic, greatly simplifying manipulations in number theory, and on 8 April he became the first to prove the quadratic reciprocity law. This remarkably general law allows mathematicians to determine the solvability of any quadratic equation in modular arithmetic. The prime number theorem, conjectured on 31 May, gives a good understanding of how the prime numbers are distributed among the integers. Gauss also discovered, on 10 July, that every positive integer is representable as a sum of at most three triangular numbers, and then jotted down in his diary the note "ΕΥΡΗΚΑ! num = Δ + Δ + Δ". 
On 1 October he published a result on the number of solutions of polynomials with coefficients in finite fields. In 1831 Gauss developed a fruitful collaboration with the physics professor Wilhelm Weber, leading to new knowledge in magnetism and the discovery of Kirchhoff's circuit laws in electricity. It was during this time that he formulated his namesake law, and they constructed the first electromechanical telegraph in 1833, which connected the observatory with the institute for physics in Göttingen. In 1840, Gauss published his influential Dioptrische Untersuchungen, in which he gave the first systematic analysis of the formation of images under a paraxial approximation. Among his results, Gauss showed that under a paraxial approximation an optical system can be characterized by its cardinal points, and he derived the Gaussian lens formula. In 1845, he became an associated member of the Royal Institute of the Netherlands. In 1854, Gauss selected the topic for Bernhard Riemann's Habilitationsvortrag, Über die Hypothesen, welche der Geometrie zu Grunde liegen. On the way home from Riemann's lecture, Weber reported that Gauss was full of praise. Gauss died in Göttingen on 23 February 1855 and is interred in the Albani Cemetery there. Two individuals gave eulogies at his funeral: Gauss's son-in-law Heinrich Ewald and Wolfgang Sartorius von Waltershausen. His brain was preserved and was studied by Rudolf Wagner, who found its mass to be 1,492 grams and the cerebral area equal to 219,588 square millimeters. Highly developed convolutions were also found, which in the early 20th century were suggested as the explanation of his genius. Gauss was a Lutheran Protestant, a member of the St. Albans Evangelical Lutheran church in Göttingen.
8.
Quadratic reciprocity
–
In number theory, the law of quadratic reciprocity is a theorem about modular arithmetic that gives conditions for the solvability of quadratic equations modulo prime numbers. There are a number of equivalent statements of the theorem. One version of the law states that for odd prime numbers p and q, (p/q)(q/p) = (−1)^((p−1)/2 · (q−1)/2), where (p/q) denotes the Legendre symbol. This law, combined with the properties of the Legendre symbol, makes it possible to determine, for any quadratic equation x² ≡ a (mod p), where p is an odd prime, whether it has a solution. However, it does not provide any help at all for actually finding the solution; the solution can be found using the theory of quadratic residues. The theorem was conjectured by Euler and Legendre and first proved by Gauss, who refers to it as the "fundamental theorem" in the Disquisitiones Arithmeticae and his papers, writing: "The fundamental theorem must certainly be regarded as one of the most elegant of its type." Privately he referred to it as the "golden theorem". He published six proofs, and two more were found in his posthumous papers. There are now over 200 published proofs. The first section of this article gives a special case of quadratic reciprocity that is representative of the general case; the second section gives the formulations of quadratic reciprocity found by Legendre. Consider the polynomial f(n) = n² − 5 and its values for n ∈ ℕ. The prime factorizations of these values are given as follows. The prime numbers that appear as factors are 2, 5, and primes ending in 1 or 9; no primes ending in 3 or 7 ever appear. Another way of phrasing this is that the primes p for which there exists an n such that n² ≡ 5 (mod p) are precisely 2, 5, and the primes ending in 1 or 9. In other words, when p is a prime that is neither 2 nor 5, 5 is a quadratic residue modulo p iff p is 1 or 4 modulo 5; that is, 5 is a residue modulo p iff p is a quadratic residue modulo 5. 
The law of quadratic reciprocity gives a similar characterization of prime divisors of f(n) = n² − c for any integer c. A quadratic residue (mod p) is any number congruent to a square (mod p); a quadratic nonresidue is any number that is not congruent to a square. The adjective quadratic can be dropped if the context makes it clear that it is implied. When working modulo primes, it is usual to treat zero as a special case; by doing so, the following statements become true. Modulo a prime, there are an equal number of quadratic residues and nonresidues. Modulo a prime, the product of two quadratic residues is a residue, the product of a residue and a nonresidue is a nonresidue, and the product of two nonresidues is a residue. The table of quadratic residues is complete for odd primes less than 50. To check whether a number m is a quadratic residue mod one of these primes p, find a ≡ m (mod p) with 0 ≤ a < p: if a is in row p, then m is a residue; if a is not in row p of the table, then m is a nonresidue. The quadratic reciprocity law is the statement that certain patterns found in the table are true in general. In this article, p and q always refer to distinct positive odd prime numbers.
9.
APL (programming language)
–
APL is a programming language developed in the 1960s by Kenneth E. Iverson. Its central datatype is the multidimensional array, and it uses a large range of special graphic symbols to represent most functions and operators, leading to very concise code. It has been an important influence on the development of concept modeling, spreadsheets, and functional programming, and it has also inspired several other programming languages. It is still used today for certain applications. The preface of Iverson's book A Programming Language states its premise: "Applied mathematics is largely concerned with the design and analysis of explicit procedures for calculating the exact or approximate values of various functions. Such explicit procedures are called algorithms or programs. Because an effective notation for the description of programs exhibits considerable syntactic structure, it is called a programming language." In 1960, Iverson began work for IBM, working with Adin Falkoff. Students tested their code in Hellerman's lab, and this implementation of a portion of the notation was called PAT. After this was published, the team turned their attention to an implementation of the notation on a computer system. One of the motivations for this focus on implementation was the interest of John L. Lawrence, who had new duties with Science Research Associates; Lawrence asked Iverson and his group to help utilize the language as a tool for the development and use of computers in education. Lawrence M. Breed and Philip S. Abrams produced an implementation of the notation; this work was finished in late 1965 and later known as IVSYS. The basis of this implementation was described in detail by Abrams in a Stanford University Technical Report, work that was formally supervised by Niklaus Wirth. 
Like Hellerman's PAT system earlier, this implementation did not include the APL character set but used special English reserved words for functions. It was used on paper-printing terminal workstations built on the Selectric typewriter and typeball mechanism, such as the IBM 1050 and IBM 2741 terminals. Keycaps could be placed over the keys to show which APL characters would be entered and typed when that key was struck. For the first time, a programmer could actually type in and see real APL characters as used in Iverson's notation. Falkoff and Iverson had the special APL Selectric typeballs, 987 and 988, designed in late 1964, although no APL computer system was available to use them. Iverson cited Falkoff as the inspiration for the idea of using an IBM Selectric typeball for the APL character set. Some APL symbols, even with the APL characters on the typeball, still had to be typed in by over-striking two existing typeball characters. An example would be the grade-up character, which had to be made from a delta overstruck with a vertical bar; this was necessary because the APL character set was larger than the 88 characters allowed on the Selectric typeball. IBM was chiefly responsible for the introduction of APL to the marketplace. APL was first available in 1967 for the IBM 1130 as APL\1130; it would run in as little as 8k 16-bit words of memory. Somewhat later, as suitably performing hardware was finally becoming available in the mid- to late 1980s, many users migrated their applications to the personal computer environment.
10.
BASIC
–
BASIC is a family of general-purpose, high-level programming languages whose design philosophy emphasizes ease of use. In 1964, John G. Kemeny and Thomas E. Kurtz designed the original BASIC language at Dartmouth College in the U.S. state of New Hampshire; they wanted to enable students in fields other than science and mathematics to use computers. At the time, nearly all use of computers required writing custom software. Versions of BASIC became widespread on microcomputers in the mid-1970s and 1980s. Microcomputers usually shipped with BASIC, often in the machine's firmware; having an easy-to-learn language on these early personal computers allowed small business owners, professionals, hobbyists, and consultants to develop custom software on computers they could afford. In the 2010s, BASIC remains popular in many computing dialects and in new languages influenced by BASIC. Before the mid-1960s, the only computers were huge mainframe computers. Users submitted jobs on punched cards or similar media to specialist computer operators; the computer stored these, then used a batch processing system to run this queue of jobs one after another, allowing very high levels of utilization of these expensive machines. As the performance of computing hardware rose through the 1960s, multi-processing was developed; this allowed a mix of batch jobs to be run together, but the real revolution was the development of time-sharing. The original BASIC language was released on May 1, 1964 by John G. Kemeny and Thomas E. Kurtz. The acronym BASIC comes from the name of an unpublished paper by Thomas Kurtz. BASIC was designed to allow students to write computer programs for the Dartmouth Time-Sharing System. It was intended specifically for less technical users who did not have or want the mathematical background previously expected. 
Being able to use a computer to support teaching and research was quite novel at the time. The language was based on FORTRAN II, with some influences from ALGOL 60 and with additions to make it suitable for timesharing. Wanting use of the language to become widespread, its designers made the compiler available free of charge. They also made it available to schools in the Hanover area. In the following years, as other dialects of BASIC appeared, Kemeny and Kurtz's original became known as Dartmouth BASIC. A version was part of the Pick operating system from 1973 onward. During this period a number of computer games were written in BASIC. A number of these were collected by DEC employee David H. Ahl, who later collected a number of them into book form as 101 BASIC Computer Games, published in 1973. During the same period, Ahl was involved in the creation of a computer for education use.
11.
Microsoft Excel
–
Microsoft Excel is a spreadsheet developed by Microsoft for Windows, macOS, Android and iOS. It features calculation, graphing tools, pivot tables, and a macro programming language called Visual Basic for Applications. It has been a very widely applied spreadsheet for these platforms, especially since version 5 in 1993, and Excel forms part of Microsoft Office. Microsoft Excel has the basic features of all spreadsheets, using a grid of cells arranged in numbered rows and letter-named columns to organize data manipulations like arithmetic operations. It has a battery of supplied functions to answer statistical, engineering, and financial needs. In addition, it can display data as line graphs, histograms and charts, and with a very limited three-dimensional graphical display. It allows sectioning of data to view its dependencies on various factors from different perspectives, though Excel was not designed to be used as a database. Microsoft allows for a number of optional command-line switches to control the manner in which Excel starts. The Windows version of Excel supports programming through Microsoft's Visual Basic for Applications (VBA), which is a dialect of Visual Basic. Programming with VBA allows spreadsheet manipulation that is awkward or impossible with standard spreadsheet techniques. Programmers may write code directly using the Visual Basic Editor, which includes a window for writing code, debugging code, and a code-module organization environment. A common and easy way to generate VBA code is by using the Macro Recorder, which records actions of the user and generates VBA code in the form of a macro. These actions can then be repeated automatically by running the macro. The macros can also be linked to different trigger types like keyboard shortcuts, a command button or a graphic; the actions in the macro can be executed from these triggers or from the generic toolbar options. 
The VBA code of the macro can also be edited in the VBE. Advanced users can employ user prompts to create an interactive program, or react to events such as sheets being loaded or changed. Macro Recorded code may not be compatible between Excel versions; some code that is used in Excel 2010 cannot be used in Excel 2003, and a macro that changes the colors of cells or makes changes to other aspects of cells may not be backward compatible. User-created VBA subroutines execute these actions and operate like macros generated using the Macro Recorder. From its first version Excel supported end-user programming of macros and user-defined functions. Beginning with version 5.0 Excel recorded macros in VBA by default; after version 5.0 the earlier recording option was discontinued. All versions of Excel, including Excel 2010, are capable of running an XLM macro. Excel supports charts, graphs, or histograms generated from specified groups of cells. The generated graphic component can either be embedded within the current sheet or added as a separate object, and these displays are dynamically updated if the content of cells changes.
12.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute since 1989. C is an imperative, procedural language; therefore, C was useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code, and the language has become available on a wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives: for, if/else, while and switch. User-defined names are not distinguished from keywords by any kind of sigil. There is a large number of arithmetic and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type. C has no 'define' keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no 'function' keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible. Heterogeneous aggregate data types allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic.
Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no 'array' keyword, in use or definition; instead, square brackets indicate arrays syntactically (for example, month). Enumerated types are possible with the enum keyword; they are not tagged, and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers.
13.
C++
–
C++ is a general-purpose programming language. It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation. It was designed with a bias toward system programming and embedded, resource-constrained and large systems, with performance, efficiency and flexibility of use as its design highlights. C++ is a compiled language, with implementations of it available on many platforms and provided by various organizations, including the Free Software Foundation, LLVM, Microsoft and Intel. C++ is standardized by the International Organization for Standardization, with the latest standard version ratified and published by ISO in December 2014 as ISO/IEC 14882:2014. The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998. The current C++14 standard supersedes these and C++11, with new features; the C++17 standard is due in 2017, with the draft largely implemented by some compilers already, and C++20 is the next planned standard thereafter. Many other programming languages have been influenced by C++, including C#, D and Java. In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on 'C with Classes'; the motivation for creating a new language originated from Stroustrup's experience in programming for his Ph.D. thesis. When Stroustrup started working at AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing. Remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast and portable. As well as C and Simula's influences, other languages also influenced C++, including ALGOL 68, Ada, CLU and ML. Initially, Stroustrup's 'C with Classes' added features to the C compiler, Cpre, including classes, derived classes, strong typing and inlining. Furthermore, it included the development of a standalone compiler for C++, Cfront.
In 1985, the first edition of The C++ Programming Language was released; the first commercial implementation of C++ was released in October of the same year. In 1989, C++ 2.0 was released, followed by the second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions and const member functions. In 1990, The Annotated C++ Reference Manual was published; this work became the basis for the future standard. Later feature additions included templates, exceptions, namespaces and new casts. After a minor C++14 update released in December 2014, various new additions are planned for 2017 and 2020. According to Stroustrup, the name signifies the nature of the changes from C. The name is credited to Rick Mascitti and was first used in December 1983. When Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit.
14.
R (programming language)
–
R is an open-source programming language and software environment for statistical computing and graphics that is supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software. Polls, surveys of data miners, and studies of scholarly literature databases show that R's popularity has increased substantially in recent years. While R has a command-line interface, there are several graphical front-ends available. R is an implementation of the S programming language combined with lexical scoping semantics inspired by Scheme. S was created by John Chambers while at Bell Labs; there are some important differences between the languages, but much of the code written for S runs unaltered. R was created by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, and is named partly after the first names of the first two R authors and partly as a play on the name of S. The project was conceived in 1992, with an initial version released in 1995. R is easily extensible through functions and extensions, and the R community is noted for its contributions in terms of packages. Many of R's standard functions are written in R itself, which makes it easy for users to follow the algorithmic choices made. For computationally intensive tasks, C, C++, and Fortran code can be linked and called at run time. Advanced users can write C, C++, Java, .NET or Python code to manipulate R objects directly. R is highly extensible through the use of user-submitted packages for specific functions or specific areas of study. Due to its S heritage, R has stronger object-oriented programming facilities than most statistical computing languages; extending R is also eased by its lexical scoping rules. Another strength of R is static graphics, which can produce publication-quality graphs; dynamic and interactive graphics are available through additional packages.
R has Rd, its own LaTeX-like documentation format, which is used to supply comprehensive documentation. R is an interpreted language; users typically access it through a command-line interpreter. If a user types 2+2 at the R command prompt and presses enter, the computer replies with 4. R's data structures include vectors, matrices, arrays, data frames and lists. R's extensible object system includes objects for, among others, regression models and time-series. The scalar data type was never a data structure of R; instead, a scalar is represented as a vector with length one. R supports procedural programming with functions and, for some functions, object-oriented programming with generic functions. A generic function acts differently depending on the classes of arguments passed to it; in other words, the generic function dispatches the method specific to that class of object. For example, R has a generic print function that can print almost every class of object in R with a simple print syntax. Arrays are stored in column-major order. The capabilities of R are extended through user-created packages, which allow specialized statistical techniques, graphical devices, import/export capabilities, reporting tools, etc.
15.
Python (programming language)
–
Python is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. The language provides constructs intended to enable writing clear programs on both a small and large scale, and it has a large and comprehensive standard library. Python interpreters are available for many operating systems, allowing Python code to run on a wide variety of systems. CPython, the reference implementation of Python, is open-source software and has a community-based development model; it is managed by the non-profit Python Software Foundation. About the origin of Python, Van Rossum wrote in 1996: 'Over six years ago, in December 1989, I was looking for a hobby programming project that would keep me occupied during the week around Christmas. My office would be closed, but I had a home computer. I decided to write an interpreter for the new scripting language I had been thinking about lately. I chose Python as a working title for the project, being in a slightly irreverent mood.' Python 2.0 was released on 16 October 2000 and had major new features, including a cycle-detecting garbage collector; with this release the development process was changed and became more transparent. Python 3.0, a major, backwards-incompatible release, was released on 3 December 2008 after a long period of testing. Many of its features have been backported to the backwards-compatible Python 2.6.x and 2.7.x version series. The end-of-life date for Python 2.7 was initially set at 2015. Many other paradigms are supported via extensions, including design by contract and logic programming. Python uses dynamic typing and a mix of reference counting and a garbage collector for memory management. An important feature of Python is dynamic name resolution, which binds method and variable names during program execution. The design of Python offers some support for functional programming in the Lisp tradition.
The language has map, reduce and filter functions, list comprehensions, dictionaries, and sets; the standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python can also be embedded in existing applications that need a programmable interface. While offering choice in coding methodology, the Python philosophy rejects exuberant syntax, such as in Perl, in favor of a sparser, less-cluttered grammar. As Alex Martelli put it: 'To describe something as clever is not considered a compliment in the Python culture.' Python's philosophy rejects the Perl 'there is more than one way to do it' approach to language design in favor of 'there should be one—and preferably only one—obvious way to do it'. Python's developers strive to avoid premature optimization, and moreover reject patches to non-critical parts of CPython that would offer an increase in speed at the cost of clarity.
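The functional tools mentioned above can be sketched in a few lines of standard Python; this is a minimal illustration using only the standard library, not an excerpt from any particular codebase.

```python
from functools import reduce
from itertools import accumulate

nums = [1, 2, 3, 4, 5]

# map/filter, and the list-comprehension equivalent of filter
squares = list(map(lambda n: n * n, nums))
evens = [n for n in nums if n % 2 == 0]

# functools.reduce folds a sequence down to a single value
total = reduce(lambda a, b: a + b, nums)

# itertools.accumulate yields running totals, in the Haskell/ML tradition
running = list(accumulate(nums))

print(squares)  # [1, 4, 9, 16, 25]
print(evens)    # [2, 4]
print(total)    # 15
print(running)  # [1, 3, 6, 10, 15]
```

Note that the comprehension and the map/filter forms are interchangeable; the "one obvious way" ethos usually favors the comprehension.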
16.
J (programming language)
–
The J programming language, developed in the early 1990s by Kenneth E. Iverson and Roger Hui, is a synthesis of APL and the FP and FL function-level languages created by John Backus. To avoid repeating the APL special-character problem, J uses only the basic ASCII character set, resorting to the use of the dot and colon: most primary J words serve as mathematical symbols, with the dot or colon extending the meaning of the basic characters available. Also, many characters which in other languages often must be paired are treated by J as stand-alone words. J is a very terse array programming language, and is most suited to mathematical and statistical programming, especially when performing operations on matrices. It has also been used in extreme programming and network performance analysis. Like the original FP/FL languages, J supports function-level programming via its tacit programming features. Since March 2011, J is free and open-source software under the GPLv3 license; one may also purchase source for use under a negotiated license. J permits point-free style and function composition; thus, its programs can be very terse and are considered difficult to read by some programmers. The 'Hello, world' program in J is simply the quoted string itself, and this implementation reflects the traditional use of J: programs are entered into a J interpreter session, and the results of expressions are displayed. It is also possible to arrange for J scripts to be executed as standalone programs, for example on a Unix system. Historically, APL used / to indicate the fold, so +/1 2 3 was equivalent to 1+2+3; meanwhile, division was represented with the mathematical division symbol, which was implemented by overstriking a minus sign. In the average function avg, +/ sums the items of the array and % divides the sum by the number of items. Note that avg is defined using a train of three verbs, known as a fork: (V0 V1 V2) Ny is the same as (V0 Ny) V1 (V2 Ny), which shows some of the power of J. Some examples of using avg follow, for instance on a random vector of twenty numbers.
Its significance in J is similar to the significance of select in SQL. Implementing quicksort from the J Dictionary yields an implementation demonstrating tacit programming: tacit programming involves composing functions together and not referring explicitly to any variables. J's support for forks and hooks dictates rules on how arguments applied to such a function will be applied to its component functions. Sorting in J is usually accomplished using the built-in sorting verbs; user-defined sorts such as quicksort typically are for illustration only. The following expression exhibits pi with n digits and demonstrates the extended-precision abilities of J: with n set to the number of digits required, the expression <.@o. 10x^n (extended precision, the floor of pi times 10 to the nth) yields, for n = 50, 314159265358979323846264338327950288419716939937510. J supports three simple types: numeric, literal and boxed. Of these, numeric has the most variants. One of J's numeric types is the bit; there are two bit values, 0 and 1.
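The fork rule described above, (V0 V1 V2) Ny = (V0 Ny) V1 (V2 Ny), can be mimicked outside J. The sketch below is a rough Python analogue, not J itself: the fork helper is our own name, and avg mirrors J's sum-divided-by-count fork.

```python
from operator import truediv

def fork(left, middle, right):
    """Rough analogue of a J fork (f g h): apply f and h to the argument,
    then combine the two results with g."""
    return lambda y: middle(left(y), right(y))

# J's average fork (sum divided by count) rendered with Python built-ins:
avg = fork(sum, truediv, len)

print(avg([1, 2, 3, 4]))  # 2.5
```

The point-free flavor carries over: avg is defined purely by composing functions, never naming its argument.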
17.
Sawtooth wave
–
The sawtooth wave is a kind of non-sinusoidal waveform. It is so named based on its resemblance to the teeth of a saw with a zero rake angle. The convention is that a sawtooth wave ramps upward and then sharply drops; in a reverse sawtooth wave, the wave ramps downward and then sharply rises. It can also be considered the extreme case of an asymmetric triangle wave. The piecewise linear function x(t) = t − ⌊t⌋, based on the floor function of time t, is an example of a sawtooth wave with period 1. A more general form, in the range −1 to 1, is also used. A sawtooth can be constructed using additive synthesis: the infinite Fourier series x_reversesawtooth(t) = (2A/π) Σ_{k=1..∞} sin(2πkft)/k converges to a reverse sawtooth wave, and a conventional sawtooth can be constructed using x_sawtooth(t) = A/2 − (A/π) Σ_{k=1..∞} sin(2πkft)/k, where A is the amplitude. In digital synthesis, these series are only summed over k such that the highest harmonic remains below the Nyquist frequency; this summation can generally be more efficiently calculated with a fast Fourier transform. An audio demonstration of a sawtooth played at 440 Hz, 880 Hz and 1760 Hz is available below; both bandlimited and aliased tones are presented. Sawtooth waves are known for their use in music: the sawtooth and square waves are among the most common waveforms used to create sounds with subtractive analog and virtual analog music synthesizers. The sawtooth wave is the form of the vertical and horizontal deflection signals used to generate a raster on CRT-based television or monitor screens. Oscilloscopes also use a sawtooth wave for their horizontal deflection, though they typically use electrostatic deflection. On the wave's ramp, the magnetic field produced by the deflection yoke drags the electron beam across the face of the CRT. On the wave's cliff, the magnetic field collapses, causing the electron beam to return to its resting position as quickly as possible.
The frequency is 15.734 kHz on NTSC and 15.625 kHz for PAL. The vertical deflection system operates the same way as the horizontal, though at a much lower frequency. The ramp portion of the wave must appear as a straight line; if otherwise, it indicates that the voltage isn't increasing linearly, and therefore that the magnetic field produced by the deflection yoke is not linear. As a result, the electron beam will accelerate during the non-linear portions. This would result in a television image squished in the direction of the non-linearity; extreme cases will show marked brightness increases, since the electron beam spends more time on that side of the picture.
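The additive-synthesis construction described above is easy to check numerically. The sketch below (the function name is ours) sums the conventional sawtooth series A/2 − (A/π) Σ sin(2πkft)/k for a finite number of harmonics; away from the discontinuities the partial sums approach A·(t − ⌊t⌋).

```python
import math

def sawtooth_partial(t, A=1.0, f=1.0, harmonics=10000):
    """Partial sum of x(t) = A/2 - (A/pi) * sum_{k=1..N} sin(2*pi*k*f*t)/k."""
    s = sum(math.sin(2 * math.pi * k * f * t) / k
            for k in range(1, harmonics + 1))
    return A / 2 - (A / math.pi) * s

# With A = 1 and f = 1, the series converges to the fractional part t - floor(t).
t = 0.25
print(abs(sawtooth_partial(t) - (t - math.floor(t))))  # a small residual
```

In a real synthesizer the harmonic count would be capped at the Nyquist limit rather than a fixed 10000, exactly as the text describes.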
18.
LaTeX
–
LaTeX is a document preparation system. When writing, the writer uses plain text, as opposed to the formatted text found in WYSIWYG word processors like Microsoft Word or LibreOffice Writer. The writer uses markup tagging conventions to define the structure of a document and to stylise text throughout a document. A TeX distribution such as TeX Live or MiKTeX is used to produce an output file suitable for printing or digital distribution. Within the typesetting system, its name is stylised as LaTeX. It also has a prominent role in the preparation and publication of books and articles that contain complex multilingual materials, such as Tamil, Sanskrit and Greek. LaTeX uses the TeX typesetting program for formatting its output, and can itself be used as a standalone document preparation system or as an intermediate format; in the latter role, for example, it is used as part of a pipeline for translating DocBook. LaTeX is intended to provide a language that accesses the power of TeX in an easier way for writers. In short, TeX handles the layout side, while LaTeX handles the content side for document processing. LaTeX comprises a collection of TeX macros and a program to process LaTeX documents. It was originally written in the early 1980s by Leslie Lamport at SRI International. LaTeX is free software and is distributed under the LaTeX Project Public License. It therefore encourages the separation of layout from content while still allowing manual typesetting adjustments where needed; this concept is similar to the mechanism by which many word processors allow styles to be defined globally for an entire document, or to the use of Cascading Style Sheets to style HTML. The LaTeX system is a language that also handles typesetting and rendering, and it can be extended by using the underlying macro language to develop custom formats. Such macros are often collected into packages, which are available to address special formatting needs such as complicated mathematical content or graphics.
Indeed, in the example below, the environment is provided by the amsmath package. In order to create a document in LaTeX, you first write a file, say document.tex. Then you give your document.tex file as input to the TeX program, and TeX writes out a file suitable for viewing onscreen or printing. This write-format-preview cycle is one of the ways in which working with LaTeX differs from what-you-see-is-what-you-get word processing; it is similar to the code-compile-execute cycle familiar to computer programmers. Today, many LaTeX-aware editing programs make this cycle a simple matter of pressing a single key, while showing the output preview on the screen beside the input window.
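A minimal document.tex might look like the sketch below; it is an illustrative example (any standard distribution and the amsmath package are assumed), showing the markup conventions the text describes.

```latex
\documentclass{article}
\usepackage{amsmath}  % supplies the align environment used below

\begin{document}

A sample equation, typeset by \LaTeX:
\begin{align}
  e^{i\pi} + 1 &= 0
\end{align}

\end{document}
```

Running this file through a LaTeX engine (for example pdflatex document.tex) produces the formatted output file, completing one pass of the write-format-preview cycle.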
19.
Unicode
–
Unicode is a computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. As of June 2016, the most recent version is Unicode 9.0; the standard is maintained by the Unicode Consortium. Unicode's success at unifying character sets has led to its widespread use: the standard has been implemented in many recent technologies, including modern operating systems, XML, Java, and the .NET Framework. Unicode can be implemented by different character encodings; the most commonly used encodings are UTF-8, UTF-16 and the now-obsolete UCS-2. UTF-8 uses one byte for any ASCII character, all of which have the same code values in both UTF-8 and ASCII encoding, and up to four bytes for other characters. UCS-2 uses a 16-bit code unit for each character but cannot encode every character in the current Unicode standard. UTF-16 extends UCS-2, using one 16-bit unit for the characters that were representable in UCS-2 and two 16-bit units to handle each of the additional characters. Many traditional character encodings share a common problem in that they allow bilingual, but not multilingual, computer processing. Unicode, in intent, encodes the underlying characters—graphemes and grapheme-like units—rather than the variant glyphs for such characters. In the case of Chinese characters, this leads to controversies over distinguishing the underlying character from its variant glyphs. In text processing, Unicode takes the role of providing a unique code point—a number, not a glyph—for each character. In other words, Unicode represents a character in an abstract way and leaves the visual rendering to other software, such as a web browser or word processor. This simple aim becomes complicated, however, because of concessions made by Unicode's designers in the hope of encouraging a more rapid adoption of Unicode: the first 256 code points were made identical to the content of ISO-8859-1, so as to make it trivial to convert existing western text.
For other examples, see duplicate characters in Unicode. Becker explained that 'the name Unicode is intended to suggest a unique, unified, universal encoding'. In this document, entitled Unicode 88, Becker outlined a 16-bit character model: 'Unicode could be roughly described as wide-body ASCII that has been stretched to 16 bits to encompass the characters of all the world's living languages. In a properly engineered design, 16 bits per character are more than sufficient for this purpose.' Unicode aimed in the first instance at the characters published in modern text, whose number is undoubtedly far below 2^14 = 16,384. By the end of 1990, most of the work on mapping existing character encoding standards had been completed. The Unicode Consortium was incorporated in California on January 3, 1991, and in October 1991 the first volume of the Unicode standard was published. The second volume, covering Han ideographs, was published in June 1992. In 1996, a surrogate character mechanism was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. The Microsoft TrueType specification version 1.0 from 1992 used the name 'Apple Unicode' instead of 'Unicode' for the Platform ID in the naming table. Unicode defines a codespace of 1,114,112 code points in the range 0 to 10FFFF (hexadecimal). Normally a Unicode code point is referred to by writing 'U+' followed by its hexadecimal number: for code points in the Basic Multilingual Plane (BMP), four digits are used; for code points outside the BMP, five or six digits are used, as required. Code points in Planes 1 through 16 are accessed as surrogate pairs in UTF-16. Within each plane, characters are allocated within named blocks of related characters.
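The encoding behavior described above (one byte for ASCII in UTF-8, up to four bytes otherwise, and surrogate pairs in UTF-16 for code points beyond the BMP) can be observed directly; the sketch below uses a supplementary-plane code point as an example.

```python
ch = "\U0001F600"            # code point U+1F600, which lies outside the BMP

utf8 = ch.encode("utf-8")
utf16 = ch.encode("utf-16-be")   # big-endian, so no byte-order mark

print(f"U+{ord(ch):04X}")        # U+1F600
print(len(utf8))                 # 4 bytes in UTF-8
print(len(utf16) // 2)           # 2 sixteen-bit units in UTF-16: a surrogate pair
print(len("A".encode("utf-8")))  # 1 byte: ASCII keeps its values in UTF-8
```

The two 16-bit units of the UTF-16 form are the high and low surrogates that together address the supplementary code point.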
20.
Order theory
–
Order theory is a branch of mathematics which investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as 'this is less than that' or 'this precedes that'. This article introduces the field and provides basic definitions; a list of order-theoretic terms can be found in the order theory glossary. Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers, e.g. '2 is less than 3' or '10 is greater than 5'. This intuitive concept can be extended to orders on other sets of numbers, such as the integers. The idea of being greater than or less than another number is one of the basic intuitions of number systems in general. Other familiar examples of orderings are the alphabetical order of words in a dictionary and the genealogical property of lineal descent within a group of people. The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence. In other contexts orders may capture notions of containment or specialization; abstractly, this type of order amounts to the subset relation, e.g. 'pediatricians are physicians'. Some orders have the property that any two elements are comparable; however, many other orders do not. Those orders, like the subset-of relation, for which there exist incomparable elements are called partial orders; orders for which every pair of elements is comparable are total orders. Order theory captures the intuition of orders that arises from such examples in a general setting. This is achieved by specifying properties that a relation ≤ must have to be a mathematical order. This more abstract approach makes sense, because one can derive numerous theorems in the general setting; these insights can then be transferred to many less abstract applications. Driven by the wide usage of orders, numerous special kinds of ordered sets have been defined. In addition, order theory does not restrict itself to the classes of ordering relations.
A simple example of an order-theoretic property for functions comes from analysis, where monotone functions are frequently found. This section introduces ordered sets by building upon the concepts of set theory, arithmetic, and binary relations. Suppose that P is a set and that ≤ is a relation on P. Then ≤ is a partial order if it is reflexive, antisymmetric and transitive; that is, for all a, b and c in P: a ≤ a (reflexivity); if a ≤ b and b ≤ a then a = b (antisymmetry); and if a ≤ b and b ≤ c then a ≤ c (transitivity). A set with a partial order on it is called a partially ordered set, poset, or just an ordered set if the intended meaning is clear. By checking these properties, one sees that the well-known orders on natural numbers, integers and rational numbers are all orders in the above sense.
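The three defining properties can be checked mechanically on a finite example. The sketch below (our own illustration) verifies that divisibility is a partial order on {1, …, 12}, and that it is only partial, not total, since some pairs are incomparable.

```python
from itertools import product

P = range(1, 13)
leq = lambda a, b: b % a == 0   # the relation "a divides b"

reflexive     = all(leq(a, a) for a in P)
antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b
                    for a, b in product(P, P))
transitive    = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                    for a, b, c in product(P, P, P))

print(reflexive and antisymmetric and transitive)  # True: a partial order
print(leq(2, 3) or leq(3, 2))  # False: 2 and 3 are incomparable, so not total
```

Replacing leq with the ordinary <= on the same set would make every pair comparable, giving a total order.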
21.
Coprime integers
–
In number theory, two integers a and b are said to be relatively prime, mutually prime, or coprime if the only positive integer that divides both of them is 1. That is, the only common positive factor of the two numbers is 1. This is equivalent to their greatest common divisor being 1; the numerator and denominator of a reduced fraction are coprime. In addition to gcd(a, b) = 1 and (a, b) = 1, the notation a ⊥ b is used to indicate that a and b are relatively prime. For example, 14 and 15 are coprime, being divisible in common by only 1. The numbers 1 and −1 are the only integers coprime to every integer. A fast way to determine whether two numbers are coprime is given by the Euclidean algorithm. The number of integers coprime to an integer n, between 1 and n, is given by Euler's totient function φ(n). A set of integers can also be called coprime if its elements share no common positive factor except 1. A set of integers is said to be pairwise coprime if a and b are coprime for every pair of different integers a, b in it. A number of conditions are individually equivalent to a and b being coprime: no prime number divides both a and b; there exist integers x and y such that ax + by = 1; the integer b has a multiplicative inverse modulo a, that is, there exists an integer y such that by ≡ 1 (mod a), in other words, b is a unit in the ring Z/aZ of integers modulo a; the least common multiple of a and b is equal to their product ab, i.e. lcm(a, b) = ab. As a consequence of the third point, if a and b are coprime and br ≡ bs (mod a), then r ≡ s (mod a); that is, we may divide by b when working modulo a. As a consequence of the first point, if a and b are coprime, then so are any powers a^k and b^l. If a and b are coprime and a divides the product bc, then a divides c; this can be viewed as a generalization of Euclid's lemma. In a sense that can be made precise, the probability that two randomly chosen integers are coprime is 6/π², which is about 61%. Two natural numbers a and b are coprime if and only if the numbers 2^a − 1 and 2^b − 1 are coprime.
As a generalization of this, following easily from the Euclidean algorithm in base n > 1: gcd(n^a − 1, n^b − 1) = n^gcd(a, b) − 1. A set of integers can also be called coprime or setwise coprime if the greatest common divisor of all the elements of the set is 1. For example, the integers 6, 10, 15 are coprime because 1 is the only positive integer that divides all of them. If every pair in a set of integers is coprime, then the set is said to be pairwise coprime. Pairwise coprimality is a stronger condition than setwise coprimality: every pairwise coprime finite set is also setwise coprime, but the reverse is not true.
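The definitions above, including the setwise/pairwise distinction and the 2^a − 1 property, can be checked with the standard library; this is an illustrative sketch using the text's own examples.

```python
from math import gcd
from functools import reduce

a, b = 14, 15
print(gcd(a, b) == 1)   # True: 14 and 15 are coprime

S = [6, 10, 15]
setwise = reduce(gcd, S) == 1
pairwise = all(gcd(x, y) == 1
               for i, x in enumerate(S) for y in S[i + 1:])
print(setwise)   # True: no factor other than 1 divides all of 6, 10, 15
print(pairwise)  # False: gcd(6, 10) = 2, so the set is not pairwise coprime

# The 2^a - 1 property: coprime exponents give coprime Mersenne-style numbers.
print(gcd(2**a - 1, 2**b - 1) == 2**gcd(a, b) - 1)  # True
```

The reduce(gcd, S) idiom computes the setwise gcd directly, mirroring the definition.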
22.
Continuous function
–
In mathematics, a continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism. Continuity of functions is one of the core concepts of topology. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers. In addition, this article discusses the definition for the more general case of functions between two metric spaces. In order theory, especially in domain theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist but they are not discussed in this article. As an example, consider the function h(t), which describes the height of a growing flower at time t; this function is continuous. By contrast, if M(t) denotes the amount of money in an account at time t, then the function jumps at each point in time when money is deposited or withdrawn. A form of the definition of continuity was first given by Bernard Bolzano in 1817. Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today. The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872. A function is sometimes said to be continuous if its graph can be drawn without lifting the pen from the paper; this is not a definition of continuity, since the function f(x) = 1/x is continuous on its whole domain of R ∖ {0}, although its graph cannot be so drawn. A function is continuous at a point if it does not have a hole or jump there. A 'hole' or 'jump' occurs in the graph of a function if the value of the function at a point c differs from its limiting value along points that are nearby.
Such a point is called a discontinuity. A function is then continuous if it has no holes or jumps, that is, if it is continuous at every point of its domain; otherwise, the function is discontinuous at the points where its value differs from its limiting value. There are several ways to make this definition mathematically rigorous. These definitions are equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the definitions below, f: I → R is a function defined on a subset I of the set R of real numbers. This subset I is referred to as the domain of f.
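The "small input change, small output change" idea can be illustrated numerically (this is a sampling sketch of our own, an illustration and not a proof: finitely many samples can never establish continuity).

```python
def max_jump_near(f, c, h=1e-8):
    """Largest deviation of f from f(c) over the two sample points c - h and c + h."""
    return max(abs(f(c + h) - f(c)), abs(f(c - h) - f(c)))

square = lambda x: x * x
step = lambda x: 0.0 if x < 0 else 1.0   # jumps at 0, like a deposit to an account

print(max_jump_near(square, 0.0))  # tiny: small input changes give small output changes
print(max_jump_near(step, 0.0))    # 1.0: the jump persists no matter how small h is
```

Shrinking h drives the first value toward 0 but leaves the second fixed at the jump height, which is exactly what distinguishes the continuous function from the discontinuous one at that point.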
23.
Piecewise linear function
–
In mathematics, a piecewise linear function is a real-valued function defined on the real numbers or a segment thereof, whose graph is composed of straight-line sections. It is a piecewise-defined function, each of whose pieces is an affine function. Usually – but not always – the function is assumed to be continuous; in that case, since the graph of a linear function is a line, the graph of a piecewise linear function consists of line segments and rays. Other examples of piecewise linear functions include the absolute value function, the square wave and the sawtooth function. An approximation to a known curve can be found by sampling the curve and interpolating linearly between the points. An algorithm for computing the most significant points subject to a given error tolerance has been published. If partitions are already known, linear regression can be performed independently on these partitions; however, continuity is not preserved in that case, and a stable algorithm for this case has been derived. If partitions are not known, the sum of squares can be used to choose optimal separation points. A variant of decision tree learning called model trees learns piecewise linear functions. The notion of a piecewise linear function makes sense in several different contexts: in each case, the function may be real-valued, or it may take values from a vector space or an affine space. In dimensions higher than one, it is common to require the domain of each piece to be a polygon or polytope; this guarantees that the graph of the function will be composed of polygonal or polytopal pieces. Important sub-classes of piecewise linear functions include the continuous piecewise linear functions. In general, for every n-dimensional continuous piecewise linear function f: R^n → R, there is a family Π of finite sets of affine functions such that f(x) = min_{Σ ∈ Π} max_{(a, b) ∈ Σ} (a · x + b). If f is convex as well as continuous, then a single such set Σ suffices, with f(x) = max_{(a, b) ∈ Σ} (a · x + b). Splines generalize piecewise linear functions to higher-order polynomials, which are in turn contained in the category of piecewise-differentiable functions, PDIFF.
Piecewise constant function · Linear interpolation · Spline interpolation · Tropical geometry · Polygonal chain. Apps, P., Long, N. & Rees, Journal of Public Economic Theory, 16, 523–545.
24.
Semi-continuity
–
In mathematical analysis, semi-continuity is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function f is upper (respectively, lower) semi-continuous at a point x₀ if, roughly speaking, the function values for arguments near x₀ are not much higher (respectively, lower) than f(x₀). For example, consider the function f, piecewise defined by f(x) = −1 for x < 0 and f(x) = 1 for x ≥ 0. This function is upper semi-continuous at x₀ = 0, but not lower semi-continuous. The indicator function of an open set is lower semi-continuous, whereas the indicator function of a closed set is upper semi-continuous. The floor function f(x) = ⌊x⌋, which returns the greatest integer less than or equal to a real number x, is everywhere upper semi-continuous. Similarly, the ceiling function f(x) = ⌈x⌉ is lower semi-continuous. A function may be upper or lower semi-continuous without being either left or right continuous. For example, the function f(x) = 1 if x < 1, 2 if x = 1, 1/2 if x > 1 is upper semi-continuous at x = 1 although not left or right continuous there. The limit from the left is equal to 1 and the limit from the right is equal to 1/2, both of which are different from the function value of 2. Similarly, the function f(x) = sin(1/x) if x ≠ 0, 1 if x = 0 is upper semi-continuous at x = 0 even though the function limits from the left and right at zero do not exist. Let (X, μ) be a measure space and let L⁺ denote the set of positive measurable functions endowed with the topology of convergence in measure with respect to μ. Then the integral, seen as an operator from L⁺ to [0, +∞], is lower semi-continuous. Suppose X is a topological space, x₀ is a point in X and f : X → R ∪ {−∞, +∞} is an extended real-valued function. We say that f is upper semi-continuous at x₀ if for every ε > 0 there exists a neighborhood U of x₀ such that f(x) ≤ f(x₀) + ε for all x in U. For the particular case of a metric space, this can be expressed as lim sup_{x → x₀} f(x) ≤ f(x₀), where lim sup is the limit superior. The function f is called upper semi-continuous if it is upper semi-continuous at every point of its domain; the function f is called lower semi-continuous if it is lower semi-continuous at every point of its domain.
A function is lower semi-continuous if and only if {x ∈ X : f(x) > α} is an open set for every α ∈ R; alternatively, if and only if the lower level set {x ∈ X : f(x) ≤ α} is closed for every α. Lower level sets are also called sublevel sets or trenches. A function is continuous at x₀ if and only if it is both upper and lower semi-continuous there; therefore, semi-continuity can be used to prove continuity. If f and g are two real-valued functions which are both upper semi-continuous at x₀, then so is f + g. If both functions are non-negative, then the product fg will also be upper semi-continuous at x₀. The composition f∘g of upper semi-continuous functions f and g is not necessarily upper semi-continuous. Multiplying a positive upper semi-continuous function by a negative number turns it into a lower semi-continuous function.
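The lim sup characterization above can be illustrated numerically. The sketch below (the helper `approx_limsup` is an assumption of this example, and only a crude stand-in for a true limit superior) checks that the floor function is upper semi-continuous at 1 while the ceiling function is not:

```python
import math

def approx_limsup(f, x0, eps=1e-6, n=1000):
    """Crude numerical stand-in for lim sup_{x -> x0} f(x): the largest
    value of f over a small punctured neighbourhood of x0 (illustrative)."""
    pts = [x0 + eps * (k / n) for k in range(-n, n + 1) if k != 0]
    return max(f(x) for x in pts)

x0 = 1.0
# Upper semi-continuity at x0 requires lim sup_{x -> x0} f(x) <= f(x0):
print(approx_limsup(math.floor, x0), math.floor(x0))  # 1 1: condition holds
print(approx_limsup(math.ceil, x0), math.ceil(x0))    # 2 1: condition fails
```

The ceiling function jumps up to 2 just to the right of 1, so its lim sup there exceeds the function value, matching the claim that ⌈x⌉ is lower (not upper) semi-continuous.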
25.
Fourier series
–
In mathematics, a Fourier series is a way to represent a function as the sum of simple sine waves. More formally, it decomposes any periodic function or periodic signal into the sum of a (possibly infinite) set of simple oscillating functions. The discrete-time Fourier transform is a periodic function, often defined in terms of a Fourier series. The Z-transform, another example of application, reduces to a Fourier series for the important case |z| = 1. Fourier series are also central to the original proof of the Nyquist–Shannon sampling theorem. The study of Fourier series is a branch of Fourier analysis. Fourier's Mémoire introduced Fourier analysis, specifically Fourier series. Through Fourier's research the fact was established that an arbitrary function can be represented by a trigonometric series. The first announcement of this discovery was made by Fourier in 1807. The heat equation is a partial differential equation; particular solutions were known when the heat source behaved in a simple way, and these simple solutions are now sometimes called eigensolutions. Fourier's idea was to model a complicated heat source as a superposition of simple sine and cosine waves, and to write the solution as a superposition of the corresponding eigensolutions. This superposition or linear combination is called the Fourier series. From a modern point of view, Fourier's results are somewhat informal, due to the lack of a precise notion of function and integral in the early nineteenth century. Later, Peter Gustav Lejeune Dirichlet and Bernhard Riemann expressed Fourier's results with greater precision. In this section, s(x) denotes a function of the real variable x, and s is integrable on an interval [x₀, x₀ + P], for real numbers x₀ and P. We will attempt to represent s in that interval as a sum, or series, of sinusoidal functions. Outside the interval, the series is periodic with period P; it follows that if s also has that property, the approximation is valid on the entire real line. We can begin with a finite summation s_N(x) = A₀/2 + Σ_{n=1}^{N} A_n · sin(2πnx/P + φ_n). s_N(x) is a periodic function with period P.
The inverse relationships between the coefficients are A_n = √(a_n² + b_n²) and φ_n = atan2(a_n, b_n). When the coefficients are computed as Fourier coefficients, s_N approximates s on [x₀, x₀ + P], and the approximation improves as N → ∞. The infinite sum, s_∞, is called the Fourier series representation of s. Both components of a complex-valued function are real-valued functions that can be represented by a Fourier series; this gives the same formula as before except that cₙ and c₋ₙ are no longer complex conjugates. In particular, the Fourier series converges absolutely and uniformly to s whenever the derivative of s is square integrable. If a function is square-integrable on the interval, then the Fourier series converges to the function at almost every point.
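The improvement of the partial sums s_N as N grows can be seen concretely for a square wave, whose Fourier series contains only odd sine harmonics with coefficients 4/(πn). This is a small sketch under that standard expansion (the function name is an assumption of this example):

```python
import math

def square_wave_partial_sum(x, N, P=2 * math.pi):
    """Partial Fourier sum s_N(x) for the odd square wave of period P
    (value 1 on (0, P/2) and -1 on (P/2, P)); only odd sine terms survive."""
    total = 0.0
    for n in range(1, N + 1, 2):        # odd harmonics only
        b_n = 4 / (math.pi * n)         # Fourier sine coefficient
        total += b_n * math.sin(2 * math.pi * n * x / P)
    return total

# Away from the jump discontinuities, s_N(x) approaches the wave's value 1:
for N in (1, 9, 99):
    print(N, square_wave_partial_sum(math.pi / 2, N))  # tends to 1 as N grows
```

Near the jumps the convergence is pointwise but not uniform (the Gibbs phenomenon), consistent with the convergence statements above requiring extra smoothness for uniform convergence.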
26.
Modulo operation
–
In computing, the modulo operation finds the remainder after division of one number by another. Given two positive numbers, a and n, a mod n is the remainder of the Euclidean division of a by n. Although typically performed with a and n both being integers, many computing systems allow other types of numeric operands. The range of values for an integer modulo operation of n is 0 to n − 1. See modular arithmetic for an older and related convention applied in number theory. When either a or n is negative, the naive definition breaks down, and programming languages differ in how these values are defined. In mathematics, the result of the modulo operation is the remainder of the Euclidean division. Computers and calculators have various ways of storing and representing numbers. Usually, in number theory, the positive remainder is always chosen, but programming languages choose depending on the language and the signs of a or n. Standard Pascal and ALGOL 68 give a positive remainder even for negative divisors. a modulo 0 is undefined in most systems, although some do define it as a. Despite its widespread use, truncated division has been shown to be inferior to the other definitions: when the result of a modulo operation has the sign of the dividend, it can lead to surprising mistakes. For special cases, on some hardware, faster alternatives exist: optimizing compilers may recognize expressions of the form expression % constant, where constant is a power of two, and automatically implement them as expression & (constant − 1). This can allow writing clearer code without compromising performance. This optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend, unless the dividend is of an unsigned integer type; this is because, if the dividend is negative, the modulo will be negative. Some modulo operations can be factored or expanded similarly to other mathematical operations.
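The sign-convention differences described above are easy to demonstrate. Python's % is a floored modulo (the result takes the sign of the divisor); a C-style truncated modulo (result takes the sign of the dividend) can be emulated for comparison (the helper `truncated_mod` is an assumption of this sketch):

```python
def truncated_mod(a, n):
    """Remainder of C-style truncated division: sign follows the dividend."""
    return a - n * int(a / n)   # int() truncates toward zero

print(-7 % 3)                # 2  (floored: sign of the divisor)
print(truncated_mod(-7, 3))  # -1 (truncated: sign of the dividend)
print(7 % -3)                # -2
print(truncated_mod(7, -3))  # 1
print(13 % 8, 13 & 7)        # 5 5: power-of-two modulo as a bit mask
```

The last line shows the compiler optimization mentioned above: for a non-negative dividend, a % 2ᵏ equals a & (2ᵏ − 1).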
This may be useful in cryptography proofs, such as for the Diffie–Hellman key exchange. Identities: (a mod n) mod n = a mod n; nx mod n = 0 for all positive integer values of x; if p is a prime which is not a divisor of b, then ab^(p−1) mod p = a mod p, by Fermat's little theorem. b⁻¹ mod n denotes the modular multiplicative inverse, which is defined if and only if b and n are relatively prime. Distributive: (a + b) mod n = [(a mod n) + (b mod n)] mod n; ab mod n = [(a mod n)(b mod n)] mod n. Division: a/b mod n = [(a mod n)(b⁻¹ mod n)] mod n, when the right-hand side is defined. Inverse multiplication: [(ab mod n)(b⁻¹ mod n)] mod n = a mod n. See also modulo (jargon) – the many uses of the word modulo, all of which grew out of Carl F. Gauss's introduction of modular arithmetic in 1801 – and modular exponentiation. Note: Perl usually uses an arithmetic modulo operator that is machine-independent; for examples and exceptions, see the Perl documentation on multiplicative operators. Mathematically, these two choices are but two of the infinite number of choices available for the inequality satisfied by a remainder.
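The identities above can be spot-checked for sample values (a check, not a proof). Python's three-argument `pow` computes the modular multiplicative inverse directly:

```python
# Spot-checking the modulo identities for sample values.
a, b, n, p = 17, 5, 6, 7     # note gcd(b, n) = 1 and p prime, p not dividing b

assert (a % n) % n == a % n                      # (a mod n) mod n = a mod n
assert (n * 4) % n == 0                          # n·x mod n = 0
assert (a * b ** (p - 1)) % p == a % p           # Fermat's little theorem
assert (a + b) % n == ((a % n) + (b % n)) % n    # distributive over +
assert (a * b) % n == ((a % n) * (b % n)) % n    # distributive over ×

b_inv = pow(b, -1, n)        # multiplicative inverse of b modulo n
assert (b * b_inv) % n == 1
assert ((a * b % n) * b_inv) % n == a % n        # inverse multiplication
print("all identities hold for the sample values")
```

`pow(b, -1, n)` requires Python 3.8+; it raises `ValueError` when b and n are not relatively prime, matching the condition stated above for the inverse to exist.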
27.
Indicator function
–
In mathematics, an indicator function (or characteristic function) is a function defined on a set X that indicates membership of an element in a subset A of X. It is usually denoted by a symbol 1 or I, sometimes in boldface or blackboard boldface, with a subscript describing the set. The indicator function of a subset A of a set X is a function 1_A : X → {0, 1} defined as 1_A(x) = 1 if x ∈ A and 1_A(x) = 0 if x ∉ A. The Iverson bracket allows the equivalent notation [x ∈ A] to be used instead of 1_A(x). The function 1_A is sometimes denoted I_A, χ_A, K_A or even just A. The set of all indicator functions on X can be identified with P(X), the power set of X; this is a special case of the notation Y^X for the set of all functions f : X → Y. The notation 1_A is also used to denote the identity function of A, and the notation χ_A is also used to denote the characteristic function in convex analysis. A related concept in statistics is that of a dummy variable. (The term "characteristic function" has an unrelated meaning in probability theory.) The indicator or characteristic function of a subset A of some set X maps X to {0, 1}; this mapping is surjective only when A is a non-empty proper subset of X. If A ≡ X, then 1_A = 1; by a similar argument, if A ≡ Ø then 1_A = 0. In the following, the dot represents multiplication (1·1 = 1, 1·0 = 0, etc.), + and − represent addition and subtraction, and ∩ and ∪ are intersection and union, respectively. More generally, suppose A₁, …, Aₙ is a collection of subsets of X. For any x ∈ X, ∏_{k ∈ I} (1 − 1_{A_k}(x)) is clearly a product of 0s and 1s. This product has the value 1 at precisely those x ∈ X that belong to none of the sets A_k and is 0 otherwise; that is, ∏_{k ∈ I} (1 − 1_{A_k}) = 1_{X − ∪_k A_k} = 1 − 1_{∪_k A_k}. Expanding the product gives one form of the principle of inclusion-exclusion. As suggested by the previous example, the indicator function is a useful notational device in combinatorics. This identity is used in a proof of Markov's inequality. In many cases, such as order theory, an inverse of the indicator function may be defined; this is commonly called the generalized Möbius function, as a generalization of the inverse of the indicator function in elementary number theory, the Möbius function. Given a probability space (Ω, F, P) with A ∈ F, the random variable 1_A : Ω → R is defined by 1_A(ω) = 1 if ω ∈ A and 1_A(ω) = 0 otherwise.
Mean: E(1_A) = P(A). Variance: Var(1_A) = P(A)(1 − P(A)). Covariance: Cov(1_A, 1_B) = P(A ∩ B) − P(A)P(B). Kurt Gödel described the representing function in his 1934 paper "On Undecidable Propositions of Formal Mathematical Systems": there shall correspond to each class or relation R a representing function φ = 0 if R and φ = 1 if ~R. For example, because the product of characteristic functions φ₁·φ₂·…·φₙ = 0 whenever any one of the functions equals 0, it plays the role of logical OR under this reversed convention.
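The product identity ∏ₖ(1 − 1_{A_k}) = 1 − 1_{∪ A_k} can be verified pointwise over a small finite universe. This is an illustrative sketch (the helper `indicator` is an assumption of this example):

```python
# Checking the indicator identity (1 - 1_A1)(1 - 1_A2) = 1 - 1_{A1 ∪ A2}
# pointwise over a small universe X.
def indicator(A):
    """Return the indicator function 1_A as a Python callable."""
    return lambda x: 1 if x in A else 0

X = range(10)
A1, A2 = {1, 2, 3}, {3, 4, 5}
f1, f2 = indicator(A1), indicator(A2)
f_union = indicator(A1 | A2)

for x in X:
    assert (1 - f1(x)) * (1 - f2(x)) == 1 - f_union(x)
print("identity holds on X")
```

Expanding the left-hand side as 1 − 1_{A₁} − 1_{A₂} + 1_{A₁}·1_{A₂} and using 1_{A₁}·1_{A₂} = 1_{A₁ ∩ A₂} recovers the two-set inclusion-exclusion formula.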
28.
Sign function
–
In mathematics, the sign function or signum function is an odd mathematical function that extracts the sign of a real number. In mathematical expressions the sign function is represented as sgn. The signum function of a real number x is defined as follows: sgn(x) = −1 if x < 0, 0 if x = 0, and 1 if x > 0. Any real number can be expressed as the product of its absolute value and its sign function: x = |x| · sgn(x). For x ≠ 0 the absolute values cancel, and all we are left with is the sign of x: sgn(x) = x/|x|. Moreover, d|x|/dx = sgn(x) for x ≠ 0. The signum function is differentiable with derivative 0 everywhere except at 0. Using the identity sgn(x) = 2H(x) − 1, where H is the Heaviside step function, it is easy to derive the distributional derivative d sgn(x)/dx = 2 dH(x)/dx = 2δ(x). The signum can also be written using the Iverson bracket notation: sgn(x) = −[x < 0] + [x > 0]. The signum can also be written using the floor and absolute value functions. For k ≫ 1, a smooth approximation of the sign function is sgn(x) ≈ tanh(kx). Another approximation is sgn(x) ≈ x/√(x² + ε²), which gets sharper as ε → 0; note that this is the derivative of √(x² + ε²). This is inspired by the fact that the above is exactly equal to sgn(x) for all nonzero x if ε = 0. See Heaviside step function – Analytic approximations. The signum function can be generalized to complex numbers as sgn(z) = z/|z| for any complex number z except z = 0. The signum of a complex number z is the point on the unit circle of the complex plane that is nearest to z. Then, for z ≠ 0, sgn(z) = e^{i arg z}. Another generalization, csgn, satisfies csgn(z) = z/√(z²) = √(z²)/z. At real values of x, it is possible to define a generalized function version of the signum function, ε(x), such that ε(x)² = 1 everywhere, including at x = 0. This generalized signum allows construction of the algebra of generalized functions. See also: absolute value · Heaviside function · negative number · rectangular function · sigmoid function · step function · three-way comparison · zero crossing · modulus function.
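The two smooth approximations mentioned above are easy to compare against the exact signum. A minimal sketch (the function names are assumptions of this example):

```python
import math

def sgn(x):
    """The signum function: -1, 0 or 1."""
    return (x > 0) - (x < 0)

def sgn_tanh(x, k=1000.0):
    """Approximation sgn(x) ~ tanh(kx); sharper for larger k."""
    return math.tanh(k * x)

def sgn_smooth(x, eps=1e-6):
    """Approximation sgn(x) ~ x / sqrt(x^2 + eps^2); sharper as eps -> 0."""
    return x / math.sqrt(x * x + eps * eps)

for x in (-2.0, -0.5, 0.5, 2.0):
    print(x, sgn(x), round(sgn_tanh(x), 6), round(sgn_smooth(x), 6))
```

Both approximations are smooth at 0, where the exact signum jumps; away from 0 they agree with sgn(x) to within the chosen k and ε.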
29.
Base (exponentiation)
–
In exponentiation, the base is the number b in an expression of the form bⁿ. The number n is called the exponent, and the expression is known formally as exponentiation of b by n or the exponential of n with base b. It is more commonly expressed as "the nth power of b", "b to the nth power" or "b to the power n". For example, the fourth power of 10 is 10,000 because 10⁴ = 10 × 10 × 10 × 10 = 10,000. The term power strictly refers to the entire expression, but is sometimes used to refer to the exponent. When the nth power of b equals a number a, that is, a = bⁿ, then b is called an nth root of a; for example, 10 is a fourth root of 10,000. The inverse function to exponentiation with base b is called the logarithm to base b; for example, log₁₀ 10,000 = 4.
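The relationships between power, root and logarithm can be sketched directly (a small illustration, not from the article):

```python
import math

b, n = 10, 4
power = b ** n                       # the nth power of b
print(power)                         # 10000

root = power ** (1 / n)              # an nth root recovers the base
print(round(root, 9))                # 10.0 (up to floating-point error)

log = math.log(power, b)             # the logarithm to base b inverts b**n
print(round(log, 9))                 # 4.0
```

Rounding is used because fractional powers and logarithms are computed in floating point, so the inverses recover the base and exponent only approximately.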
30.
Irrational number
–
In mathematics, the irrational numbers are all the real numbers which are not rational numbers, the latter being the numbers constructed from ratios of integers. Irrational numbers may also be dealt with via non-terminating continued fractions. For example, the decimal representation of the number π starts with 3.14159265358979, but no finite number of digits can represent π exactly, nor does it repeat. Mathematicians do not generally take "terminating or repeating" to be the definition of the concept of rational number. As a consequence of Cantor's proof that the real numbers are uncountable and the rationals countable, it follows that almost all real numbers are irrational. The first proof of the existence of irrational numbers is usually attributed to a Pythagorean. The then-current Pythagorean method would have claimed that there must be some sufficiently small, indivisible unit of measure. However, Hippasus, in the 5th century BC, was able to deduce that there was in fact no common unit of measure. His reasoning is as follows. Start with an isosceles right triangle with side lengths of integers a, b, and c, where a and b are the equal legs. The ratio of the hypotenuse to a leg is represented by c : b. Assume a, b, and c are in the smallest possible terms. By the Pythagorean theorem, c² = a² + b² = b² + b² = 2b². Since c² = 2b², c² is divisible by 2, and therefore even. Since c² is even, c must be even. Since c is even, dividing c by 2 yields an integer y, with c = 2y. Squaring both sides of c = 2y yields c² = (2y)², or c² = 4y². Substituting 4y² for c² in the first equation gives us 4y² = 2b². Dividing by 2 yields 2y² = b². Since y is an integer, and 2y² = b², b² is divisible by 2, and therefore even. Since b² is even, b must be even. We have just shown that both b and c must be even; hence they have a common factor of 2. However, this contradicts the assumption that they have no common factors. This contradiction proves that c and b cannot both be integers, and thus proves the existence of a number that cannot be expressed as a ratio of two integers.
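The contradiction argument above can be restated compactly (a restatement of the same proof, not additional material from the source):

```latex
\begin{align*}
&\text{Assume } \sqrt{2} = \tfrac{c}{b}, \text{ with integers } c, b \text{ in lowest terms.} \\
&c^2 = 2b^2 \implies c^2 \text{ even} \implies c \text{ even, say } c = 2y. \\
&(2y)^2 = 2b^2 \implies 4y^2 = 2b^2 \implies b^2 = 2y^2 \implies b \text{ even.} \\
&c \text{ and } b \text{ both even contradicts lowest terms.} \qquad \blacksquare
\end{align*}
```

Each line mirrors a step of the prose proof: the Pythagorean relation, the parity of c, the parity of b, and the contradiction with the lowest-terms assumption.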
Greek mathematicians termed this ratio of incommensurable magnitudes alogos, or inexpressible. One legend states that Hippasus was drowned at sea for his discovery; another states that he was merely exiled for this revelation. The discovery of incommensurable ratios was indicative of another problem facing the Greeks: the relation of the discrete to the continuous. This was brought into light by Zeno of Elea, who questioned the conception that quantities are discrete and composed of a finite number of units of a given size. However, Zeno found that quantities in general are not discrete collections of units; rather, these divisions of quantity must necessarily be infinite. For example, consider a line segment: this segment can be split in half, that half split in half, the half of the half in half, and so on.
31.
Natural number
–
In mathematics, the natural numbers are those used for counting and ordering. In common language, words used for counting are cardinal numbers and words used for ordering are ordinal numbers. Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, but in other writings, that term is used instead for the integers. These chains of extensions make the natural numbers canonically embedded in the other number systems. Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics. The most primitive method of representing a natural number is to put down a mark for each object. Later, a set of objects could be tested for equality, excess or shortage by striking out a mark. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones, and similarly for the number 4,622. A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation dates back as early as 700 BC by the Babylonians. The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628. The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes.
Some Greek mathematicians treated the number 1 differently than larger numbers. Independent studies also occurred at around the same time in India, China, and Mesoamerica. In 19th-century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. A school of Naturalism stated that the natural numbers were a direct consequence of the human psyche. Henri Poincaré was one of its advocates, as was Leopold Kronecker, who summarized his belief as "God made the integers, all else is the work of man." In opposition to the Naturalists, the constructivists saw a need to improve the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural but a consequence of definitions. Later, two classes of such formal definitions were constructed; still later, they were shown to be equivalent in most practical applications. The second class of definitions was introduced by Giuseppe Peano and is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several weak systems of set theory.
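The recursive, successor-based flavor of the Peano-style definitions can be illustrated with a toy encoding. This is an illustrative sketch only (the encoding and function names are assumptions of this example, not Peano's or Grassmann's formalism):

```python
# Toy Peano-style naturals: 0 is a constant, every other number is a tower
# of successor applications, and addition is defined by recursion on the
# second argument: m + 0 = m and m + succ(k) = succ(m + k).
ZERO = ()

def succ(n):
    """The successor of n."""
    return (n,)

def add(m, n):
    """Addition defined by recursion on the second argument."""
    return m if n == ZERO else succ(add(m, n[0]))

def to_int(n):
    """Convert the successor tower back to a Python int for display."""
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # 5
```

Every non-zero value in this encoding has a unique predecessor (`n[0]`), mirroring the axiom quoted above, and the recursion in `add` follows Grassmann's recursive style of definition.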