1.
Software developer
–
A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles used with similar meanings are programmer and software analyst. According to developer Eric Sink, the differences between system design, software development, and programming are more apparent; developers may become systems architects, those who design the multi-leveled architecture or component interactions of a large software system. In a large company, there may be employees whose sole responsibility consists of only one of the phases above. In smaller development environments, a few people or even a single individual might handle the complete process. The word "software" was coined as a prank as early as 1953. Before this time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities, as universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers; some were distributed freely between users of a particular machine for no charge. Others were produced on a commercial basis, and firms such as Computer Sciences Corporation started to grow. The computer and hardware makers started bundling operating systems, systems software, and programming environments with their machines. New software was built for microcomputers, and other manufacturers, including IBM, quickly followed DEC's example, resulting in the IBM AS/400 amongst others. The industry expanded greatly with the rise of the personal computer in the mid-1970s. In the following years, it created a growing market for games and applications. 
DOS, Microsoft's first operating system product, was the dominant operating system at the time. By 2014 the role of cloud developer had been defined, and in this context one definition of a developer in general was published: "Developers make software for the world to use. The job of a developer is to crank out code -- fresh code for new products, code fixes for maintenance, code for business logic."
2.
ANSI C
–
Historically, the names ANSI C and ISO C referred specifically to the original and best-supported version of the standard. Software developers writing in C are encouraged to conform to the standards. The first standard for C was published by ANSI. While some software developers use the term ISO C, others are standards-body neutral. In 1983, the American National Standards Institute formed a committee, X3J11, to establish a standard specification of C. The standard was completed in 1989 and ratified as ANSI X3.159-1989 "Programming Language C"; this version of the language is often referred to as ANSI C. Later, the label C89 is sometimes used to distinguish it from C99. The same standard as C89 was ratified by the International Organization for Standardization as ISO/IEC 9899:1990, with only formatting changes, and is sometimes referred to as C90. Therefore, the terms C89 and C90 refer to essentially the same language; this standard has since been withdrawn by both ANSI/INCITS and ISO/IEC. In 1995, the ISO published an extension, called Amendment 1; its full name was ISO/IEC 9899/AMD1:1995, nicknamed C95. C11 is the current standard for the C programming language. ANSI C is now supported by almost all the widely used compilers. Most of the C code being written nowadays is based on ANSI C; any program written only in standard C and without any hardware-dependent assumptions is virtually guaranteed to compile correctly on any platform with a conforming C implementation. To mitigate the differences between K&R C and the ANSI C standard, the __STDC__ macro can be used to split code into an ANSI section and a K&R section, so that a prototype is used in the declaration for ANSI-compliant implementations. Such code remains ANSI-compliant as of C99. Note that such code should check both the definition and the evaluation of __STDC__, because some implementations may set __STDC__ to zero to indicate non-ANSI compliance.
3.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third position. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems such as Solaris and Linux support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. 
They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is a system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.
4.
Linux
–
Linux is a Unix-like computer operating system assembled under the model of free and open-source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. The Free Software Foundation uses the name GNU/Linux to describe the operating system, which has led to some controversy. Linux was originally developed for personal computers based on the Intel x86 architecture. Because of the dominance of Android on smartphones, Linux has the largest installed base of all operating systems. Linux is also the leading operating system on servers and other big iron systems such as mainframe computers. It is used by around 2.3% of desktop computers; the Chromebook, which runs on Chrome OS, dominates the US K–12 education market and represents nearly 20% of the sub-$300 notebook sales in the US. Linux also runs on embedded systems, devices whose operating system is built into the firmware and is highly tailored to the system. This includes TiVo and similar DVR devices, network routers, facility automation controls, and televisions; many smartphones and tablet computers run Android and other Linux derivatives. The development of Linux is one of the most prominent examples of free and open-source software collaboration. The underlying source code may be used, modified and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License. Typically, Linux is packaged in a form known as a Linux distribution for both desktop and server use. Distributions intended to run on servers may omit all graphical environments from the standard install. Because Linux is freely redistributable, anyone may create a distribution for any intended use. 
The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and others. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. Later, in a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie; the availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked. As a result, Unix grew quickly and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; freed of the legal obligation requiring free licensing, Bell Labs began selling Unix as a proprietary product. The GNU Project, started in 1983 by Richard Stallman, has the goal of creating a complete Unix-compatible software system composed entirely of free software. Later, in 1985, Stallman started the Free Software Foundation. By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers, daemons, and the kernel were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he probably would not have decided to write his own. Although not released until 1992 due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he probably would not have created Linux. Although the complete source code of MINIX was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000.
5.
Microsoft Windows
–
Microsoft Windows is a metafamily of graphical operating systems developed, marketed, and sold by Microsoft. It consists of several families of operating systems, each of which caters to a certain sector of the computing industry, with the OS typically associated with IBM PC compatible architecture. Active Windows families include Windows NT, Windows Embedded and Windows Phone; defunct Windows families include Windows 9x and Windows Mobile. Windows 10 Mobile is an active product, unrelated to the defunct family Windows Mobile. Microsoft introduced an operating environment named Windows on November 20, 1985. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an encroachment on their innovation in GUI development as implemented on products such as the Lisa. On PCs, Windows is still the most popular operating system. However, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. As of September 2016, the most recent version of Windows for PCs, tablets and smartphones is Windows 10; the most recent version for server computers is Windows Server 2016. A specialized version of Windows runs on the Xbox One game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. It now consists of three operating system subfamilies that are released almost at the same time and share the same kernel. Windows: The operating system for mainstream personal computers and tablets. 
The latest version is Windows 10; the main competitors of this family are macOS by Apple Inc. for personal computers and Android for mobile devices. Windows Server: The operating system for server computers. The latest version is Windows Server 2016; unlike its client sibling, it has adopted a strong naming scheme. The main competitor of this family is Linux. Windows PE: A lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers. The latest version is Windows PE 10.0.10586.0. Windows Embedded: Initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. The following Windows families are no longer being developed. Windows 9x: Microsoft now caters to the consumer market with Windows NT. Windows Mobile: The predecessor to Windows Phone, it was a mobile operating system.
6.
Mac OS
–
Mac OS is the family of Macintosh operating systems developed by Apple Inc. In 1984, Apple debuted the operating system that is now known as the Classic Mac OS with its release of the original Macintosh System Software. The system, rebranded Mac OS in 1996, was preinstalled on every Macintosh until 2002. Noted for its ease of use, it was also criticized for its lack of modern technologies compared to its competitors. The current Mac operating system is macOS, originally named Mac OS X until 2012 and then OS X until 2016. The current macOS is preinstalled on every Mac and is updated annually. It is the basis of Apple's current system software for its other devices: iOS, watchOS, and tvOS. Apple's effort to expand upon and develop a replacement for its classic Mac OS in the 1990s led to a few cancelled projects, code-named Star Trek and Taligent, among others. The Macintosh is credited with having popularized the graphical user interface. The classic Mac OS is the original Macintosh operating system that was introduced in 1984 alongside the first Macintosh and remained in primary use on Macs through 2001. Apple released the original Macintosh on January 24, 1984; its system software was partially based on the Lisa OS and the Xerox PARC Alto computer. It was originally named System Software, or simply System. Apple rebranded it as Mac OS in 1996, due in part to its Macintosh clone program, which ended a year later. Classic Mac OS is characterized by its monolithic design. Nine major versions of the classic Mac OS were released; the name "Classic" that now signifies the system as a whole is a reference to a compatibility layer that helped ease the transition to Mac OS X. Although the system was marketed as simply version 10 of Mac OS, it has a history largely independent of the classic Mac OS. Precursors to the release of Mac OS X include OpenStep and Apple's Rhapsody project. macOS makes use of the BSD codebase and the XNU kernel; the first desktop version of the system was released on March 24, 2001, supporting the Aqua user interface. 
Since then, several more versions adding newer features and technologies have been released; since 2011, new releases have been offered on an annual basis. The original Mac OS X Server was followed by several more official server-based releases; since 2011, server functionality has instead been offered as an add-on for the desktop system. In 1988, Apple released its first Unix-based OS, A/UX, which was a Unix operating system with the Mac OS look and feel; the first version of the system was ready for use in February 1988. It was not very competitive for its time, due in part to the crowded Unix market, and A/UX had most of its success in sales to the U.S. government, where POSIX compliance was a requirement that Mac OS could not meet. The Macintosh Application Environment (MAE) was a software package introduced by Apple in 1994 that allowed users of certain Unix-based computer workstations to run Apple Macintosh application software. MAE used the X Window System to emulate a Macintosh Finder-style graphical user interface.
7.
Software license
–
A software license is a legal instrument governing the use or redistribution of software. Under United States copyright law, all software is copyright protected, in source code as well as object code form. The only exception is software in the public domain. Most distributed software can be categorized according to its license type. Two common categories for software under copyright law, and therefore with licenses which grant the licensee specific rights, are proprietary software and free and open-source software. Unlicensed software outside the copyright protection is either public domain software or software which is non-distributed, non-licensed and handled as an internal business trade secret. Contrary to popular belief, distributed unlicensed software is copyright protected. Examples of this are unauthorized software leaks or software projects which are placed on public software repositories like GitHub without a specified license. As voluntarily handing software into the public domain is problematic in some international law domains, there are also licenses granting PD-like rights. Under 17 U.S.C. § 117, the owner of a copy of software is legally entitled to use that copy of software. Hence, if the end-user of software is the owner of the respective copy, many proprietary licenses only enumerate the rights that the user already has under 17 U.S.C. § 117, and yet proclaim to take rights away from the user. Proprietary software licenses often proclaim to give software publishers more control over the way their software is used by keeping ownership of each copy of software with the software publisher. The form of the relationship determines if it is a lease or a purchase; see, for example, UMG v. Augusto or Vernor v. Autodesk. The ownership of digital goods, like software applications and video games, is challenged by "licensed, not sold" models. The Swiss-based company UsedSoft innovated the resale of business software. This feature of proprietary software licenses means that certain rights regarding the software are reserved by the software publisher. 
Therefore, it is typical of EULAs to include terms which define the uses of the software. The most significant effect of this form of licensing is that, if ownership of the software remains with the software publisher, then the end-user must accept the software license. In other words, without acceptance of the license, the end-user may not use the software at all. One example of such a proprietary software license is the license for Microsoft Windows. The most common licensing models are per single user or per user in the appropriate volume discount level. Licensing per concurrent/floating user also occurs, where all users in a network have access to the program, but only a specific number can use it at the same time. Another license model is licensing per dongle, which allows the owner of the dongle to use the program on any computer. Licensing per server, CPU or points, regardless of the number of users, is common practice, as are site or company licenses.
8.
GNU General Public License
–
The GNU General Public License (GPL) is a widely used free software license, which guarantees end users the freedom to run, study, share and modify the software. The license was written by Richard Stallman of the Free Software Foundation for the GNU Project. The GPL is a copyleft license, which means that derivative work can only be distributed under the same license terms. This is in distinction to permissive free software licenses, of which the BSD licenses are a widely used example. GPL was the first copyleft license for general use. Historically, the GPL license family has been one of the most popular software licenses in the free and open-source software domain. Prominent free software programs licensed under the GPL include the Linux kernel and the GNU Compiler Collection. In 2007, the third version of the license was released to address some perceived problems with the second version that were discovered during its long-time usage. To keep the license up to date, the GPL includes an optional "any later version" clause; developers can omit it when licensing their software. For instance, the Linux kernel is licensed under GPLv2 without the "any later version" clause. The GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU Project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger and the GNU C Compiler. These licenses contained similar provisions to the modern GPL, but were specific to each program, rendering them incompatible; Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. The second version of the license, version 2, was released in 1991. Version 3 was developed to attempt to address concerns that had arisen with version 2 and was officially released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, prevented two means by which software distributors restricted the freedoms that define free software. The first problem was that distributors may publish binary files only: executable, but not readable or modifiable by humans. 
To prevent this, GPLv1 stated that any vendor distributing binaries must also make the human-readable source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license or by combining the software with other software that had other restrictions on distribution; the union of two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms in GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, as this would not change the terms under which the whole could be distributed. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it: Section 7. The section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations. In other words, the obligations of the license may not be severed due to conflicting obligations. This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license.
9.
Creative Commons
–
Creative Commons is an American non-profit organization devoted to expanding the range of creative works available for others to build upon legally and to share. The organization has released several copyright licenses, known as Creative Commons licenses, free of charge to the public. These licenses allow creators to communicate which rights they reserve and which rights they waive for the benefit of recipients or other creators. An easy-to-understand one-page explanation of rights, with associated visual symbols, explains the specifics of each Creative Commons license. Creative Commons licenses do not replace copyright, but are based upon it. The result is an agile, low-overhead and low-cost copyright-management regime; Wikipedia uses one of these licenses. The organization was founded in 2001 by Lawrence Lessig, Hal Abelson, and Eric Eldred. The first article in a general-interest publication about Creative Commons, written by Hal Plotkin, was published in February 2002. The first set of licenses was released in December 2002. In 2003 the Open Content Project, a 1998 precursor project by David A. Wiley, announced Creative Commons as its successor project. Matthew Haughey and Aaron Swartz also played a role in the early stages of the project. As of January 2016 there were an estimated 1.1 billion works licensed under the various Creative Commons licenses; as of March 2015, Flickr alone hosted over 306 million Creative Commons licensed photos. Creative Commons is governed by a board of directors, and their licenses have been embraced by many as a way for creators to take control of how they choose to share their copyrighted works. Beyond that, Creative Commons has provided institutional, practical and legal support for individuals and groups wishing to experiment and communicate with culture more freely. Creative Commons attempts to counter what Lawrence Lessig, founder of Creative Commons, considers to be a dominant and increasingly restrictive permission culture. 
Lessig describes this as a culture in which creators get to create only with the permission of the powerful, or of creators from the past. As of 2017, the board members include Paul Keller, Jonathan Nightingale, and Chris Thorne. As of 2015, there were more than 100 affiliates working in over 75 jurisdictions to support and promote CC activities around the world. Creative Commons Korea (CC Korea) is the affiliated network of Creative Commons in South Korea. In March 2005, CC Korea was initiated by Jongsoo Yoon; the major Korean portal sites, including Daum and Naver, have been participating in the use of Creative Commons licences. In January 2009, the Creative Commons Korea Association was consequently founded as an incorporated association. Since then, CC Korea has been actively promoting the liberal sharing of creative works. Creative Commons community member Bassel Khartabil has been detained by the Syrian government in Damascus at Adra Prison since March 15, 2012; on October 17, 2015 the Creative Commons Board of Directors approved a resolution calling for Bassel Khartabil's release. All current CC licenses require attribution, which can be inconvenient for works based on other works. Critics also worried that the lack of rewards for content producers will dissuade artists from publishing their work. Creative Commons founder Lawrence Lessig countered that copyright laws have not always offered the strong and seemingly indefinite protection that today's law provides.
10.
Public Domain
–
The term public domain has two senses of meaning. Anything published is out in the public domain in the sense that it is available to the public; once published, news and information in books is in the public domain. In the sense of intellectual property, works in the public domain are those whose exclusive intellectual property rights have expired, have been forfeited, or are inapplicable. Examples of works not covered by copyright, which are therefore in the public domain, are the formulae of Newtonian physics and cooking recipes. Examples of works actively dedicated into the public domain by their authors are reference implementations of algorithms and NIH's ImageJ. The term is not normally applied to situations where the creator of a work retains residual rights. As rights are country-based and vary, a work may be subject to rights in one country and be in the public domain in another. Some rights depend on registrations on a country-by-country basis, and the absence of registration in a particular country, if required, gives rise to public-domain status in that country. Although the term public domain did not come into use until the mid-18th century, the Romans had a large proprietary rights system where they defined many things that cannot be privately owned as res nullius, res communes, res publicae and res universitatis. The term res nullius was defined as things not yet appropriated. The term res communes was defined as things that could be enjoyed by mankind, such as air and sunlight. The term res publicae referred to things that were shared by all citizens. When the first early copyright law was established in Britain with the Statute of Anne in 1710, the term public domain did not appear. However, similar concepts were developed by British and French jurists in the eighteenth century; instead of public domain they used terms such as publici juris or propriété publique to describe works that were not covered by copyright law. The phrase "fall in the public domain" can be traced to mid-nineteenth century France, where it described the end of copyright term. 
In this historical context Paul Torremans describes copyright as a "coral reef of private right jutting up from the ocean of the public domain". Because copyright law is different from country to country, Pamela Samuelson has described the public domain as being "different sizes at different times in different countries". According to James Boyle this definition underlines common usage of the term public domain and equates the public domain to public property. However, the usage of the term public domain can be more granular. Such a definition regards work in copyright as private property subject to fair-use rights; the materials that compose our cultural heritage must be free for all living to use no less than matter necessary for biological survival.
11.
Logic
–
Logic, originally meaning "the word" or "what is spoken", is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics; more recently, logic has been studied in computer science, linguistics, psychology, and other fields. The concept of logical form is central to logic. The validity of an argument is determined by its logical form; traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural-language arguments; the study of fallacies is an important branch of informal logic. Since much informal argument is not strictly speaking deductive, on some conceptions of logic, informal logic is not logic at all. On such conceptions, formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as an application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known study of logic. Modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference. Symbolic logic is divided into two main branches: propositional logic and predicate logic. Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference. 
Simply put, formalising means translating English sentences into the language of logic; this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form. Among other steps, certain parts of the sentence must be replaced with schematic letters. Thus, for example, the expression "all Ps are Qs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on. The schema can further be condensed into the formula A(P,Q), where the letter A indicates the judgement "all - are -". The importance of form was recognised from ancient times.
12.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics; the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right. Contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor: "On a Property of the Collection of All Real Algebraic Numbers". Since the 5th century BC, beginning with Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East, mathematicians had struggled with the concept of infinity. Especially notable is the work of Bernard Bolzano in the first half of the 19th century. Modern understanding of infinity began in 1867–71, with Cantor's work on number theory; an 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day. While Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. The utility of set theory led to the article "Mengenlehre", contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia. In 1899 Cantor had himself posed the question "What is the cardinal number of the set of all sets?", and obtained a related paradox. 
Russell used his paradox as a theme in his 1903 review of continental mathematics in his The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment; the work of Zermelo in 1908 and Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory. The work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member of A, the notation o ∈ A is used; since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B; for example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself; for cases where this possibility is unsuitable or should be excluded, the term proper subset is defined.
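The membership and subset relations just described can be illustrated concretely. As a small illustrative sketch (not part of the original article), Python's built-in set type implements exactly these relations:

```python
# Membership: "o in A" plays the role of o ∈ A.
A = {1, 2, 3}
assert 2 in A

# Subset (set inclusion): every member of B is also a member of A.
B = {1, 2}
assert B <= A          # B is a subset of A
assert B < A           # B is a proper subset (subset, but not equal)

# Every set is a subset of itself, but never a proper subset of itself.
assert A <= A
assert not (A < A)
```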
13.
Number theory
–
Number theory or, in older usage, arithmetic is a branch of pure mathematics devoted primarily to the study of the integers. It is sometimes called The Queen of Mathematics because of its foundational place in the discipline. Number theorists study prime numbers as well as the properties of objects made out of integers or defined as generalizations of the integers. Integers can be considered either in themselves or as solutions to equations; questions in number theory are often best understood through the study of analytical objects that encode properties of the integers, primes or other number-theoretic objects in some fashion. One may also study real numbers in relation to rational numbers. The older term for number theory is arithmetic; by the early twentieth century, it had been superseded by number theory. The use of the term arithmetic for number theory regained some ground in the second half of the 20th century. In particular, arithmetical is preferred as an adjective to number-theoretic. The first historical find of an arithmetical nature is a fragment of a table of Pythagorean triples. The triples are too many and too large to have been obtained by brute force. The heading over the first column reads: The takiltum of the diagonal which has been subtracted such that the width... The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity ((x − 1/x)/2)² + 1 = ((x + 1/x)/2)². If some other method was used, the triples were first constructed and then reordered by c/a, presumably for use as a table. It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own only later. It has been suggested instead that the table was a source of examples for school problems. While Babylonian number theory—or what survives of Babylonian mathematics that can be called thus—consists of this single, striking fragment, late Neoplatonic sources state that Pythagoras learned mathematics from the Babylonians.
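The identity used to construct the Babylonian triples, in modern notation ((x − 1/x)/2)² + 1 = ((x + 1/x)/2)², can be checked mechanically; clearing denominators for rational x = p/q yields the familiar Pythagorean triples. A small illustrative Python sketch (function names are my own):

```python
from fractions import Fraction

def check_identity(x):
    """Verify ((x - 1/x)/2)**2 + 1 == ((x + 1/x)/2)**2 exactly."""
    return ((x - 1 / x) / 2) ** 2 + 1 == ((x + 1 / x) / 2) ** 2

# Exact rational arithmetic over a grid of values x = p/q.
assert all(check_identity(Fraction(p, q))
           for p in range(1, 10) for q in range(1, 10))

def triple(p, q):
    """Clearing denominators for x = p/q gives the Pythagorean triple
    (p^2 - q^2, 2pq, p^2 + q^2)."""
    return (p * p - q * q, 2 * p * q, p * p + q * q)

a, b, c = triple(2, 1)          # the (3, 4, 5) triangle
assert a * a + b * b == c * c
```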
Much earlier sources state that Thales and Pythagoras traveled and studied in Egypt. Euclid IX 21–34 is very probably Pythagorean; it is very simple material, but it is all that is needed to prove that √2 is irrational. Pythagorean mystics gave great importance to the odd and the even; the discovery that √2 is irrational is credited to the early Pythagoreans. This forced a distinction between numbers, on the one hand, and lengths and proportions, on the other hand. The Pythagorean tradition spoke also of so-called polygonal or figurate numbers.
14.
Group theory
–
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra; linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups; thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography. The first class of groups to undergo a systematic study was permutation groups: given any set X and a collection G of bijections of X into itself that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn. In general, an early construction due to Cayley exhibited any group as a permutation group, acting on itself by means of the left regular representation. In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5 the alternating group An is simple, and this fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals. The next important class of groups is given by matrix groups: here G is a set consisting of invertible matrices of given order n over a field K that is closed under products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a group is closely related to the concept of a symmetry group. The theory of Lie groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds; the groups themselves may be discrete or continuous.
Most groups considered in the first stage of the development of group theory were concrete, having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group as a set with operations satisfying a certain system of axioms began to take hold. A typical way of specifying an abstract group is through a presentation by generators and relations. A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory.
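The defining property of a permutation group (closure under composition and inverses) can be sketched concretely. The following illustrative Python code (the representation and helper names are my own) builds the symmetric group S3 and verifies that property:

```python
from itertools import permutations

# A permutation of X = {0, 1, 2} is represented as a tuple p,
# where p[i] is the image of i under the bijection.
def compose(p, q):
    """Composition (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    """Inverse bijection: if p sends i to p[i], the inverse sends p[i] to i."""
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

# The symmetric group S3: all 3! = 6 bijections of a 3-element set.
S3 = list(permutations(range(3)))
identity = (0, 1, 2)
assert len(S3) == 6

# Closure under composition and inverses -- the defining property of a
# group of bijections acting on X.
assert all(compose(p, q) in S3 for p in S3 for q in S3)
assert all(compose(p, inverse(p)) == identity for p in S3)
```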
15.
Algebra
–
Algebra is one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating those symbols; as such, it includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics; abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians. Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are unknown or allowed to take on many values. For example, in x + 2 = 5 the letter x is unknown; in E = mc², the letters E and m are variables, and the letter c is a constant, the speed of light in a vacuum. Algebra gives methods for solving equations and expressing formulas that are much easier than the older method of writing everything out in words. The word algebra is also used in certain specialized ways: a special kind of mathematical object in abstract algebra is called an algebra, and a mathematician who does research in algebra is called an algebraist. The word algebra comes from the Arabic الجبر, from the title of the book Ilm al-jabr wal-muḳābala by the Persian mathematician and astronomer al-Khwarizmi. The word entered the English language during the fifteenth century, from either Spanish, Italian, or Medieval Latin. It originally referred to the surgical procedure of setting broken or dislocated bones. The mathematical meaning was first recorded in the sixteenth century. The word algebra has several related meanings in mathematics, as a single word or with qualifiers.
As a single word without an article, algebra names a broad part of mathematics. As a single word with an article or in the plural, an algebra or algebras denotes a specific mathematical structure, whose precise definition depends on the author. Usually the structure has an addition, a multiplication, and a scalar multiplication; when some authors use the term algebra, they make a subset of the following additional assumptions: associative, commutative, unital, and/or finite-dimensional. In universal algebra, the word algebra refers to a generalization of the above concept. With a qualifier, there is the same distinction: without an article, it means a part of algebra, such as linear algebra or elementary algebra; with an article, it means an instance of some abstract structure, like a Lie algebra. Sometimes both meanings exist for the same qualifier, as in the sentence: Commutative algebra is the study of commutative rings, which are commutative algebras over the integers.
16.
Mathematical analysis
–
Mathematical analysis is the branch of mathematics dealing with limits and related theories, such as differentiation, integration, measure, infinite series, and analytic functions. These theories are usually studied in the context of real and complex numbers. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space). Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems. In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century AD to find the area of a circle. Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century, and the Indian mathematician Bhāskara II gave examples of the derivative and used what is now known as Rolle's theorem in the 12th century. In the 14th century, Madhava of Sangamagrama developed infinite series expansions, like the power series and the Taylor series, and his followers at the Kerala school of astronomy and mathematics further expanded his works, up to the 16th century. The modern foundations of mathematical analysis were established in 17th century Europe. During this period, calculus techniques were applied to approximate discrete problems by continuous ones. In the 18th century, Euler introduced the notion of mathematical function. Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816.
In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work; instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit approach, thus founding the modern field of mathematical analysis. In the middle of the 19th century Riemann introduced his theory of integration. The last third of the century saw the arithmetization of analysis by Weierstrass, who thought that geometric reasoning was inherently misleading, and introduced the epsilon-delta definition of limit. Then, mathematicians started worrying that they were assuming the existence of a continuum of real numbers without proof. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the size of the set of discontinuities of real functions; also, mathematical monsters such as nowhere continuous functions and continuous but nowhere differentiable functions began to be investigated.
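The epsilon-delta definition of limit attributed above to Weierstrass can be stated formally as:

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x:\;
0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon
```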
17.
Topology
–
In mathematics, topology is concerned with the properties of space that are preserved under continuous deformations, such as stretching, crumpling and bending, but not tearing or gluing. This can be studied by considering a collection of subsets, called open sets; important topological properties include connectedness and compactness. Topology developed as a field of study out of geometry and set theory, through analysis of concepts such as space, dimension, and transformation. Such ideas go back to Gottfried Leibniz, who in the 17th century envisioned the geometria situs. Leonhard Euler's Seven Bridges of Königsberg Problem and Polyhedron Formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century; by the middle of the 20th century, topology had become a major branch of mathematics. General topology defines the basic notions used in all branches of topology. Algebraic topology tries to measure degrees of connectivity using algebraic constructs such as homology. Differential topology is the field dealing with differentiable functions on differentiable manifolds; it is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds. Geometric topology primarily studies manifolds and their embeddings in other manifolds; a particularly active area is low-dimensional topology, which studies manifolds of four or fewer dimensions. This includes knot theory, the study of mathematical knots. Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries. Among these are certain questions in geometry investigated by Leonhard Euler; his 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology.
On 14 November 1750 Euler wrote to a friend that he had realised the importance of the edges of a polyhedron, and this led to his polyhedron formula, V − E + F = 2. Some authorities regard this analysis as the first theorem, signalling the birth of topology. Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti. Listing introduced the term Topologie in Vorstudien zur Topologie, written in his native German, in 1847; the term topologist, in the sense of a specialist in topology, was used in 1905 in the magazine Spectator. Their work was corrected, consolidated and greatly extended by Henri Poincaré; in 1895 he published his ground-breaking paper on Analysis Situs, which introduced the concepts now known as homotopy and homology, now considered part of algebraic topology. Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906. A metric space is now considered a special case of a general topological space. In 1914, Felix Hausdorff coined the term topological space and gave the definition for what is now called a Hausdorff space; currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski.
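Euler's polyhedron formula V − E + F = 2 is easy to verify on the five Platonic solids; a small illustrative Python check (the vertex, edge and face counts used are the standard ones):

```python
# Euler's polyhedron formula: V - E + F = 2 for convex polyhedra.
platonic_solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (V, E, F) in platonic_solids.items():
    assert V - E + F == 2, name
```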
18.
Hilbert space
–
The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is a vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces; the earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis and ergodic theory. John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Geometric intuition plays an important role in many aspects of Hilbert space theory: exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space, and at a deeper level, perpendicular projection onto a subspace plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be specified by its coordinates with respect to a set of coordinate axes. When that set of axes is countably infinite, this means that the Hilbert space can also usefully be thought of in terms of the space of infinite sequences that are square-summable. The latter space is often in the literature referred to as the Hilbert space. One of the most familiar examples of a Hilbert space is the Euclidean space consisting of three-dimensional vectors, denoted by ℝ3, and equipped with the dot product.
The dot product takes two vectors x and y and produces a real number x · y. If x and y are represented in Cartesian coordinates, then the dot product is defined by x · y = x1y1 + x2y2 + x3y3. The dot product satisfies the following properties: It is symmetric in x and y: x · y = y · x. It is linear in its first argument: (ax1 + bx2) · y = a(x1 · y) + b(x2 · y) for any scalars a, b, and vectors x1, x2, and y. It is positive definite: for all x, x · x ≥ 0, with equality if and only if x = 0. An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as an inner product. A vector space equipped with such an inner product is known as an inner product space. Every finite-dimensional inner product space is also a Hilbert space. Multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist.
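The three properties of the dot product listed above can be verified numerically. An illustrative Python sketch (the vectors and all names are my own choices):

```python
import random

def dot(x, y):
    """Dot product: x . y = x1*y1 + x2*y2 + x3*y3."""
    return sum(xi * yi for xi, yi in zip(x, y))

random.seed(0)  # reproducible illustrative vectors
x  = [random.uniform(-1, 1) for _ in range(3)]
x2 = [random.uniform(-1, 1) for _ in range(3)]
y  = [random.uniform(-1, 1) for _ in range(3)]
a, b = 2.0, -3.0

# Symmetry: x . y = y . x
assert abs(dot(x, y) - dot(y, x)) < 1e-12

# Linearity in the first argument: (a*x1 + b*x2) . y = a(x1 . y) + b(x2 . y)
combo = [a * u + b * v for u, v in zip(x, x2)]
assert abs(dot(combo, y) - (a * dot(x, y) + b * dot(x2, y))) < 1e-12

# Positive definiteness: x . x >= 0, with equality only for the zero vector
assert dot(x, x) > 0
assert dot([0.0, 0.0, 0.0], [0.0, 0.0, 0.0]) == 0.0
```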
19.
ZFC
–
Zermelo–Fraenkel set theory, with the historically controversial axiom of choice included, is commonly abbreviated ZFC, where C stands for choice. Many authors use ZF to refer to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded. Today ZFC is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. ZFC is intended to formalize a single primitive notion, that of a hereditary well-founded set; thus the axioms of ZFC refer only to pure sets and prevent its models from containing urelements. Furthermore, proper classes can only be treated indirectly: specifically, ZFC does not allow for the existence of a universal set nor for unrestricted comprehension, thereby avoiding Russell's paradox. Von Neumann–Bernays–Gödel set theory is a commonly used conservative extension of ZFC that does allow explicit treatment of proper classes. Formally, ZFC is a one-sorted theory in first-order logic. The signature has equality and a single primitive binary relation, set membership. The formula a ∈ b means that the set a is a member of the set b. There are many equivalent formulations of the ZFC axioms. Most of the ZFC axioms state the existence of particular sets defined from other sets; for example, the axiom of pairing says that given any two sets a and b there is a new set containing exactly a and b. Other axioms describe properties of set membership. A goal of the ZFC axioms is that each axiom should be true if interpreted as a statement about the collection of all sets in the von Neumann universe. The metamathematics of ZFC has been extensively studied; landmark results in this area established the independence of the continuum hypothesis from ZFC, and of the axiom of choice from the remaining ZFC axioms. The consistency of a theory such as ZFC cannot be proved within the theory itself.
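As an example of an axiom asserting the existence of a set defined from other sets, the axiom of pairing mentioned above can be written in the first-order language of ZFC as:

```latex
% Axiom of pairing: for any sets a and b there exists a set c
% whose members are exactly a and b.
\forall a \,\forall b \,\exists c \,\forall x \,
\bigl( x \in c \leftrightarrow (x = a \lor x = b) \bigr)
```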
In 1908, Ernst Zermelo proposed the first axiomatic set theory, Zermelo set theory. However, one of Zermelo's axioms invoked a concept, that of a definite property, whose operational meaning was not clear. In 1922, Abraham Fraenkel and Thoralf Skolem independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Appending this schema, as well as the axiom of regularity, to Zermelo set theory yields ZF; adding to ZF either the axiom of choice or a statement that is equivalent to it yields ZFC. There are many equivalent formulations of the ZFC axioms; for a discussion of this see Fraenkel, Bar-Hillel & Lévy 1973. The following particular axiom set is from Kunen. The axioms per se are expressed in the symbolism of first order logic; the associated English prose is only intended to aid the intuition. All formulations of ZFC imply that at least one set exists. Kunen includes an axiom that directly asserts the existence of a set, in addition to the axioms given below. Its omission here can be justified in two ways: first, in the standard semantics of first-order logic in which ZFC is typically formalized, the domain of discourse must be nonempty.
20.
TeX
–
TeX is a typesetting system designed and mostly written by Donald Knuth and released in 1978. TeX is free software, which has made it accessible to a wide range of users. TeX is a popular means of typesetting complex mathematical formulae. It is popular in academia, especially in mathematics, computer science, economics, engineering, physics and statistics, and it has largely displaced Unix troff, the other favored formatting system, in many Unix installations, which use both for different purposes. It is also used for many other typesetting tasks, especially in the form of LaTeX, ConTeXt and other macro packages. The widely used MIME type for TeX is application/x-tex. Within the typesetting system, its name is stylized as TeX. When the first paper volume of Donald Knuth's The Art of Computer Programming was published in 1968, it was typeset using hot metal typesetting set by a Monotype Corporation typecaster. This method, dating back to the 19th century, produced a classic style appreciated by Knuth. When Knuth received the galley proofs of the new book on 30 March 1977, he found them inferior. Around that time, Knuth saw for the first time the output of a high-quality digital typesetting system, and the disappointing galley proofs gave him the final motivation to solve the problem at hand once and for all by designing his own typesetting system. On 13 May 1977, he wrote a memo to himself describing the features of TeX. He planned to finish it on his sabbatical in 1978, but as it happened the language was not frozen until 1989. Guy Steele happened to be at Stanford during the summer of 1978, when Knuth was developing his first version of TeX. When Steele returned to the Massachusetts Institute of Technology that autumn, he rewrote TeX's input/output to run under the Incompatible Timesharing System operating system. The first version of TeX was written in the SAIL programming language to run on a PDP-10 under Stanford's WAITS operating system.
For later versions of TeX, Knuth invented the concept of literate programming, a way of producing compilable source code and typeset documentation from the same original file. The language used is called WEB and produces programs in DEC PDP-10 Pascal. A new version of TeX, rewritten from scratch and called TeX82, was published in 1982; among other changes, the original hyphenation algorithm was replaced by a new algorithm written by Frank Liang. In 1989, Donald Knuth released new versions of TeX and METAFONT. Since then the design has been frozen: after version 3.0, no new feature or fundamental change will be added, so all newer versions contain only bug fixes, a reflection of the fact that TeX is now very stable and only minor updates are anticipated. The current version of TeX is 3.14159265; it was last updated on 12 January 2014. Knuth has stated that the final change will be to change the version number to π. Likewise, versions of METAFONT after 2.0 asymptotically approach e. Derivatives such as the Omega project, developed after 1991, aimed primarily to enhance TeX's multilingual typesetting abilities.
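As an illustration of the mathematical typesetting the article describes, a minimal plain-TeX document might look like the following sketch (the formula chosen is arbitrary, not from the article):

```latex
% Minimal plain TeX source; process with `tex file.tex` to produce a DVI file.
The quadratic formula:
$$x = {-b \pm \sqrt{b^2 - 4ac} \over 2a}$$
\bye
```

Here `$$ ... $$` delimits display math, `{ ... \over ... }` builds a fraction in plain TeX, and `\bye` ends the document.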
21.
Predicate calculus
–
First-order logic – also known as first-order predicate calculus and predicate logic – is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects; this distinguishes it from propositional logic, which does not use quantifiers. Sometimes theory is understood in a more formal sense, as just a set of sentences in first-order logic. In first-order theories, predicates are associated with sets; in interpreted higher-order theories, predicates may be interpreted as sets of sets. There are many deductive systems for first-order logic which are both sound and complete. Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem. First-order logic is the standard for the formalization of mathematics into axioms and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures can be obtained in stronger logics such as second-order logic. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós. While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates. A predicate takes an entity or entities in the domain of discourse as input and outputs either True or False. Consider the two sentences Socrates is a philosopher and Plato is a philosopher. In propositional logic, these sentences are viewed as being unrelated and might be denoted, for example, by variables such as p and q.
The predicate is a philosopher occurs in both sentences, which have a common structure of a is a philosopher. The variable a is instantiated as Socrates in the first sentence, and as Plato in the second. While first-order logic allows for the use of predicates, such as is a philosopher in this example, propositional logic does not. Relationships between predicates can be stated using logical connectives. Consider, for example, the first-order formula if a is a philosopher, then a is a scholar. This formula is a conditional statement with a is a philosopher as its hypothesis and a is a scholar as its conclusion. The truth of this formula depends on which object is denoted by a. Quantifiers can be applied to variables in a formula: the variable a in the previous formula can be universally quantified, for instance, with the first-order sentence For every a, if a is a philosopher, then a is a scholar. The universal quantifier for every in this sentence expresses the idea that the claim if a is a philosopher, then a is a scholar holds for all choices of a. The negation of the sentence For every a, if a is a philosopher, then a is a scholar is logically equivalent to the sentence There exists a such that a is a philosopher and a is not a scholar.
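The quantifier equivalence in this passage can be written symbolically; with Phil and Schol as illustrative abbreviations of my own for the two predicates:

```latex
% "Not every philosopher is a scholar" is equivalent to
% "some philosopher is not a scholar".
\neg\,\forall a\,\bigl(\mathrm{Phil}(a) \rightarrow \mathrm{Schol}(a)\bigr)
\;\equiv\;
\exists a\,\bigl(\mathrm{Phil}(a) \land \neg\,\mathrm{Schol}(a)\bigr)
```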
22.
Gerhard Gentzen
–
Gerhard Karl Erich Gentzen was a German mathematician and logician. He made major contributions to the foundations of mathematics and proof theory, especially natural deduction and the sequent calculus. He died in 1945, after the end of the Second World War, because he was deprived of food after being arrested in Prague. Gentzen was a student of Paul Bernays at the University of Göttingen. Bernays was fired as non-Aryan in April 1933, and therefore Hermann Weyl formally acted as Gentzen's supervisor. Gentzen joined the Sturmabteilung in November 1933, although he was by no means compelled to do so; nevertheless he kept in contact with Bernays until the beginning of the Second World War. In 1935, he corresponded with Abraham Fraenkel in Jerusalem and was implicated by the Nazi teachers' union as one who keeps contacts to the Chosen People. Between November 1935 and 1939 he was an assistant of David Hilbert in Göttingen. Gentzen joined the Nazi Party in 1937; in April 1939 he swore the oath of loyalty to Adolf Hitler as part of his academic appointment. From 1943 he was a teacher at the University of Prague; under a contract from the SS, Gentzen evidently worked for the V-2 project. Gentzen was arrested during the uprising against the occupying German forces on May 5, 1945. He, along with the rest of the staff of the German University in Prague, was subsequently handed over to Russian forces. Because of his past association with the SA, NSDAP and NSD Dozentenbund, Gentzen was detained in a prison camp. Gentzen's main work was on the foundations of mathematics, in proof theory, specifically natural deduction and the sequent calculus. One of Gentzen's papers had a second publication in the ideological Deutsche Mathematik, which was founded by Ludwig Bieberbach, who promoted Aryan mathematics. Gentzen proved the consistency of the Peano axioms in a paper published in 1936. In his Habilitationsschrift, finished in 1939, he determined the proof-theoretical strength of Peano arithmetic.
This was done by a proof of the unprovability, within Peano arithmetic, of the principle of transfinite induction used in his 1936 proof of consistency. The principle can, however, be expressed in arithmetic, so that a direct proof of Gödel's incompleteness theorem followed; Gödel had used a coding procedure to construct an unprovable formula of arithmetic. Gentzen's proof was published in 1943 and marked the beginning of ordinal proof theory. His publications include: Über die Existenz unabhängiger Axiomensysteme zu unendlichen Satzsystemen; Vortrag, gehalten in Münster am 27. Juni 1936 am Institut von Heinrich Scholz; Die gegenwärtige Lage in der mathematischen Grundlagenforschung; Neue Fassung des Widerspruchsfreiheitsbeweises für die reine Zahlentheorie; Forschungen zur Logik und zur Grundlegung der exakten Wissenschaften.
23.
Stack (data structure)
–
A stack is an abstract data type that serves as a collection of elements, with two principal operations: push, which adds an element to the collection, and pop, which removes the most recently added element that was not yet removed. The order in which elements come off a stack gives rise to its alternative name, LIFO (last in, first out). Additionally, a peek operation may give access to the top without modifying the stack. Considered as a linear data structure, or more abstractly a sequential collection, the push and pop operations occur only at one end of the structure, referred to as the top of the stack. This makes it possible to implement a stack as a singly linked list with a pointer to the top element. A stack may be implemented to have a bounded capacity; if the stack is full and does not contain enough space to accept an entity to be pushed, the stack is then considered to be in an overflow state. The pop operation removes an item from the top of the stack. Stacks entered the computer science literature in 1946, in the computer design of Alan M. Turing, as a means of calling and returning from subroutines. Subroutines had already been implemented in Konrad Zuse's Z4 in 1945. Klaus Samelson and Friedrich L. Bauer of Technical University Munich proposed the idea in 1955; the same concept was developed, independently, by the Australian Charles Leonard Hamblin in the first half of 1957. Stacks are often described by analogy to a stack of plates in a cafeteria: clean plates are placed on top of the stack, pushing down any already there, and when a plate is removed from the stack, the one below it pops up to become the new top. In many implementations, a stack has more operations than push and pop. An example is top of stack, or peek, which observes the top-most element without removing it from the stack; since this can be done with a pop followed by a push of the same data, it is not essential. An underflow condition can occur in the stack top operation if the stack is empty, the same as pop. Also, implementations often have a function which just returns whether the stack is empty. A stack can be easily implemented either through an array or a linked list; the following will demonstrate both implementations. An array can be used to implement a stack, as follows.
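The two implementations described (array-based and linked-list-based) can be sketched in Python; the class and method names here are my own choices, not taken from the article:

```python
class ArrayStack:
    """Stack backed by a dynamic array (a Python list); the top is the end."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")  # underflow
        return self._items.pop()
    def peek(self):
        if not self._items:
            raise IndexError("peek at empty stack")   # underflow
        return self._items[-1]
    def is_empty(self):
        return not self._items

class _Node:
    def __init__(self, value, next_node):
        self.value, self.next = value, next_node

class LinkedStack:
    """Stack backed by a singly linked list; the top is the head pointer."""
    def __init__(self):
        self._head = None
    def push(self, item):
        self._head = _Node(item, self._head)  # new node becomes the top
    def pop(self):
        if self._head is None:
            raise IndexError("pop from empty stack")  # underflow
        value, self._head = self._head.value, self._head.next
        return value
    def is_empty(self):
        return self._head is None

# LIFO behaviour: the last element pushed is the first popped.
s = ArrayStack()
s.push(1); s.push(2)
assert s.peek() == 2 and s.pop() == 2 and s.pop() == 1 and s.is_empty()
```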
The first element, usually at the zero offset, is the bottom, resulting in array[0] being the first element pushed onto the stack and the last element popped off. Some languages, notably those in the Forth family, are designed around language-defined stacks that are directly visible to and manipulated by the programmer. Java's library contains a Stack class that is a specialization of Vector; following is an example program in the Java language, using that class. A common use of stacks at the architecture level is as a means of allocating and accessing memory. A typical stack is an area of computer memory with a fixed origin and a variable size
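The push, pop, peek, and overflow/underflow behaviour described above can be sketched in a few lines. A minimal illustration in Python rather than the Java Stack class the text mentions; the class name BoundedStack is our own:

```python
class BoundedStack:
    """Array-backed stack with a fixed capacity.

    Pushing onto a full stack raises an overflow error; popping or
    peeking an empty stack raises an underflow error.
    """

    def __init__(self, capacity):
        self._items = []          # index 0 is the bottom of the stack
        self._capacity = capacity

    def is_empty(self):
        return not self._items

    def push(self, item):
        if len(self._items) == self._capacity:
            raise OverflowError("stack overflow")
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("stack underflow")
        return self._items.pop()

    def peek(self):
        # "top of stack": observe the top-most element without removing it
        if not self._items:
            raise IndexError("stack underflow")
        return self._items[-1]
```

Pushing 1 and then 2 and popping twice yields 2 and then 1, the last-in-first-out order that gives the structure its name.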
24.
Robert Solovay
–
Robert Martin Solovay is an American mathematician specializing in set theory. Solovay earned his Ph.D. from the University of Chicago in 1964 under the direction of Saunders Mac Lane. Solovay has spent his career at the University of California at Berkeley, where his Ph.D. students include W. Hugh Woodin and Matthew Foreman. His accomplishments outside of set theory include developing the Solovay–Strassen primality test, used to test whether large natural numbers are prime with high probability, a method which has had implications for cryptography; proving that the modal logic GL completely axiomatizes the logic of the provability predicate of Peano arithmetic; and, with Alexei Kitaev, proving that a finite set of quantum gates can efficiently approximate an arbitrary unitary operator on one qubit. Selected publications: "A model of set-theory in which every set of reals is Lebesgue measurable"; "A nonconstructible Δ¹₃ set of integers", Transactions of the American Mathematical Society; Solovay, Robert M. and Volker Strassen, "A fast Monte-Carlo test for primality". See also: Provability logic; Robert M. Solovay at the Mathematics Genealogy Project; Robert Solovay at the DBLP Bibliography Server.
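The Solovay–Strassen test rests on Euler's criterion: for an odd prime n and any base a, a^((n−1)/2) ≡ (a/n) (mod n), where (a/n) is the Jacobi symbol. A sketch in Python, with function names of our own choosing:

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed by the
    standard reciprocity-based algorithm."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:       # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a             # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """Probabilistic primality test: False means n is certainly
    composite; True means 'probably prime', with error
    probability at most 2**-rounds for composite n."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        j = jacobi(a, n) % n            # map -1 to n - 1
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False                # a witnesses compositeness
    return True
```

For composite n, at least half of the candidate bases a are witnesses, so k rounds leave an error probability of at most 2^(−k); a prime always passes, since Euler's criterion holds for every base.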
25.
Peano arithmetic
–
These axioms have been used nearly unchanged in a number of metamathematical investigations, including research into fundamental questions of whether number theory is consistent and complete. In 1881, Charles Sanders Peirce provided an axiomatization of natural-number arithmetic. The Peano axioms contain three types of statements. The first axiom asserts the existence of at least one member of the set of natural numbers. The next four are general statements about equality; in modern treatments these are often not taken as part of the Peano axioms, but rather as axioms of the underlying logic. The next three axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth, final axiom is a second-order statement of the principle of mathematical induction over the natural numbers. When Peano formulated his axioms, the language of mathematical logic was in its infancy. Peano was unaware of Frege's work and independently recreated his logical apparatus based on the work of Boole. The Peano axioms define the arithmetical properties of natural numbers, usually represented as a set N or ℕ. The non-logical symbols for the axioms consist of a constant symbol 0 and a unary function symbol S. The first axiom states that the constant 0 is a natural number. The next four axioms describe the equality relation; since they are logically valid in first-order logic with equality, they are not considered to be part of the Peano axioms in modern treatments. The remaining axioms define the arithmetical properties of the natural numbers. The naturals are assumed to be closed under a successor function S. Peano's original formulation of the axioms used 1 instead of 0 as the first natural number. This choice is arbitrary, as axiom 1 does not endow the constant 0 with any additional properties; however, because 0 is the additive identity in arithmetic, most modern formulations of the Peano axioms start from 0.
Axioms 1, 6, 7, 8 define a representation of the intuitive notion of natural numbers. However, considering the notion of natural numbers as being defined by these axioms, axioms 1, 6, 7, 8 do not imply that the successor function generates all the natural numbers different from 0. Put differently, they do not guarantee that every natural number other than zero is the successor of some other natural number. The intuitive notion that the natural numbers contain nothing besides 0 and its iterated successors requires an additional axiom, which is sometimes called the axiom of induction. The induction axiom is stated in second-order form in Peano's original formulation. It is now common to replace this second-order principle with a weaker first-order induction scheme; there are important differences between the second-order and first-order formulations, as discussed in the section § Models below. The Peano axioms can be augmented with the operations of addition and multiplication. The respective functions and relations are constructed in set theory or second-order logic, and can be shown to be unique using the Peano axioms. Addition is a function that maps two natural numbers to another one; it is defined recursively as a + 0 = a and a + S(b) = S(a + b).
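The recursive definition of addition, a + 0 = a and a + S(b) = S(a + b), can be transcribed directly, with multiplication defined analogously by a · 0 = 0 and a · S(b) = a · b + a. A sketch in Python, using ordinary integers as the underlying representation of numerals (the names S, add, and mul are our own):

```python
def S(n):
    """Successor function: maps a natural number to the next one."""
    return n + 1

def add(a, b):
    """Peano addition, by recursion on the second argument:
       a + 0    = a
       a + S(b) = S(a + b)
    """
    if b == 0:
        return a                 # base case: a + 0 = a
    return S(add(a, b - 1))      # recursive case: a + S(b') = S(a + b')

def mul(a, b):
    """Peano multiplication, defined the same way:
       a * 0    = 0
       a * S(b) = a * b + a
    """
    if b == 0:
        return 0
    return add(mul(a, b - 1), a)
```

Every use of addition and multiplication here bottoms out in repeated applications of S, mirroring how the operations are built up from the axioms.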
26.
Natural deduction
–
In logic and proof theory, natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the natural way of reasoning. This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning. Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert, Frege, and Russell; such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise Principia Mathematica. Stanisław Jaśkowski's proposals led to different notations such as Fitch-style calculus or Suppes' method, of which Lemmon gave a variant called system L. The term natural deduction was coined by Gentzen: "First I wished to construct a formalism; in this way a calculus of natural deduction (Kalkül des natürlichen Schließens) arose." Gentzen was motivated by a desire to establish the consistency of number theory, but he was unable to prove the main result required for the consistency result, the cut elimination theorem (the Hauptsatz), directly for natural deduction. For this reason he introduced his alternative system, the sequent calculus. Dag Prawitz's 1965 monograph Natural deduction: a proof-theoretical study was to become a reference work on natural deduction. In natural deduction, a proposition is deduced from a collection of premises by applying inference rules repeatedly. The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives. A judgment is something that is knowable, that is, an object of knowledge; it is evident if one in fact knows it. In mathematical logic, however, evidence is not as directly observable. The process of deduction is what constitutes a proof; in other words, the most important judgments in logic are of the form "A is true".
The letter A stands for any expression representing a proposition; truth judgments thus require a more primitive judgment, "A is a proposition". To start with, we shall concern ourselves with the simplest two judgments, "A is a proposition" and "A is true", abbreviated as "A prop" and "A true" respectively. The judgment "A prop" defines the structure of valid proofs of A. For this reason, the inference rules for this judgment are sometimes known as formation rules. To illustrate, if we have two propositions A and B, then we can form the compound proposition A and B, written symbolically as "A ∧ B". The general form of an inference rule is a list of premise judgments J1, …, Jn written above a horizontal line, with a single conclusion judgment J written below it, where each Ji is a judgment. The judgments above the line are known as premises, and those below the line are conclusions. Other common logical propositions are disjunction, negation, implication, and the logical constants truth and falsehood.
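In the premises-above-a-line notation just described, the formation rule for conjunction and its introduction and elimination rules can be written as follows (a standard presentation; the rule labels are conventional, not taken from the text):

```latex
% Formation rule: if A and B are propositions, so is their conjunction.
\[
\frac{A~\mathsf{prop} \qquad B~\mathsf{prop}}{A \wedge B~\mathsf{prop}}\;\wedge_F
\]
% Introduction and elimination rules for the judgment "... true".
\[
\frac{A~\mathsf{true} \qquad B~\mathsf{true}}{A \wedge B~\mathsf{true}}\;\wedge_I
\qquad
\frac{A \wedge B~\mathsf{true}}{A~\mathsf{true}}\;\wedge_{E_1}
\qquad
\frac{A \wedge B~\mathsf{true}}{B~\mathsf{true}}\;\wedge_{E_2}
\]
```

Reading each rule top-down: whenever the premises above the line are evident, the conclusion below the line may be judged evident as well.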
27.
Alexa Internet
–
Alexa Internet, Inc. is a California-based company that provides commercial web traffic data and analytics. It is a wholly owned subsidiary of Amazon.com. Founded as an independent company in 1996, Alexa was acquired by Amazon in 1999. Its toolbar collects data on browsing behavior and transmits them to the Alexa website, where they are stored and analyzed; this is the basis for the company's web traffic reporting. According to its website, Alexa provides traffic data and global rankings; as of 2015, its website had been visited by over 6.5 million people monthly. Alexa Internet was founded in April 1996 by American web entrepreneurs Brewster Kahle and Bruce Gilliat. Alexa initially offered a toolbar that gave Internet users suggestions on where to go next, based on the traffic patterns of its user community. The company also offered context for each site visited: to whom it was registered, how many pages it had, and how many other sites pointed to it. Alexa's operations grew to include archiving of web pages as they are crawled, and this database served as the basis for the creation of the Internet Archive, accessible through the Wayback Machine. In 1998, the company donated a copy of the archive to the Library of Congress; Alexa continues to supply the Internet Archive with Web crawls. In 1999, Alexa was acquired by Amazon as the company moved away from its original vision of providing an intelligent search engine. Alexa began a partnership with Google in early 2002, and with the web directory DMOZ in January 2003. In May 2006, Amazon replaced Google with Windows Live Search as the provider of search results. In December 2006, Amazon released Alexa Image Search; built in-house, it was the first major application built on the company's Web services platform. In December 2005, Alexa had opened its extensive search index and Web-crawling facilities to third-party programs through a set of Web services. These could be used, for instance, to construct vertical search engines that could run on Alexa's own servers or elsewhere.
In May 2007, Alexa changed their API to limit comparisons to three websites, reduce the size of embedded graphs in Flash, and add mandatory embedded BritePic advertisements. In April 2007, the company filed a lawsuit, Alexa v. Hornbaker, in which Alexa alleged that Ron Hornbaker was stealing traffic graphs for profit, and that the primary purpose of his site was to display graphs that were generated by Alexa's servers. Hornbaker removed the term Alexa from his site's name on March 19, 2007. Thereafter, Alexa became a purely analytics-focused company. On March 31, 2009, Alexa launched a major website redesign.
28.
ZFC set theory
–
Zermelo–Fraenkel set theory with the historically controversial axiom of choice included is commonly abbreviated ZFC, where C stands for choice. Many authors use ZF to refer to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded. Today ZFC is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. ZFC is intended to formalize a single primitive notion, that of a hereditary well-founded set; thus the axioms of ZFC refer only to pure sets and prevent its models from containing urelements. Furthermore, proper classes can only be treated indirectly: specifically, ZFC does not allow for the existence of a universal set nor for unrestricted comprehension, thereby avoiding Russell's paradox. Von Neumann–Bernays–Gödel set theory is a commonly used conservative extension of ZFC that does allow explicit treatment of proper classes. Formally, ZFC is a one-sorted theory in first-order logic. The signature has equality and a single primitive binary relation, set membership, usually denoted ∈. The formula a ∈ b means that the set a is a member of the set b. There are many equivalent formulations of the ZFC axioms. Most of the ZFC axioms state the existence of particular sets defined from other sets; for example, the axiom of pairing says that given any two sets a and b there is a new set containing exactly a and b. Other axioms describe properties of set membership. A goal of the ZFC axioms is that each axiom should be true if interpreted as a statement about the collection of all sets in the von Neumann universe. The metamathematics of ZFC has been extensively studied; landmark results in this area established the independence of the continuum hypothesis from ZFC, and of the axiom of choice from the remaining ZFC axioms. The consistency of a theory such as ZFC cannot be proved within the theory itself.
In 1908, Ernst Zermelo proposed the first axiomatic set theory; however, one of Zermelo's axioms invoked a concept, that of a "definite" property, whose operational meaning was not clear. Abraham Fraenkel and Thoralf Skolem independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Appending this schema, as well as the axiom of regularity, to Zermelo set theory yields the theory denoted by ZF. Adding to ZF either the axiom of choice or a statement that is equivalent to it yields ZFC. There are many equivalent formulations of the ZFC axioms; for a discussion of this see Fraenkel, Bar-Hillel & Lévy 1973. The following particular axiom set is from Kunen. The axioms per se are expressed in the symbolism of first-order logic; the associated English prose is only intended to aid the intuition. All formulations of ZFC imply that at least one set exists. Kunen includes an axiom that directly asserts the existence of a set, in addition to the axioms given below. Its omission here can be justified in two ways: first, in the standard semantics of first-order logic in which ZFC is typically formalized, the domain of discourse must be nonempty
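As an illustration of that first-order symbolism, the axiom of pairing mentioned above can be written with ∈ as the only non-logical symbol:

```latex
% Pairing: for any sets a and b there exists a set c whose members
% are exactly a and b.
\[
\forall a\,\forall b\,\exists c\,\forall x\,
\bigl( x \in c \leftrightarrow ( x = a \vee x = b ) \bigr)
\]
```

The associated English prose, "given any two sets a and b there is a new set containing exactly a and b", is just a reading of this formula.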
29.
Category (mathematics)
–
In mathematics, a category is an algebraic structure that comprises objects that are linked by arrows. A category has two basic properties: the ability to compose the arrows associatively, and the existence of an identity arrow for each object. A simple example is the category of sets, whose objects are sets and whose arrows are functions. On the other hand, any monoid can be understood as a special sort of category, and so can any preorder. In general, the objects and arrows may be abstract entities of any kind, and this is the central idea of category theory, a branch of mathematics which seeks to generalize all of mathematics in terms of objects and arrows, independent of what the objects and arrows represent. Virtually every branch of mathematics can be described in terms of categories. For more extensive motivational background and historical notes, see category theory. Two categories are the same if they have the same collection of objects, the same collection of arrows, and the same associative method of composing any pair of arrows. Two different categories may also be considered equivalent for purposes of category theory, even without being identical. All of the preceding categories have the identity map as identity arrow and composition as the associative operation on arrows. The classic and still much used text on category theory is Categories for the Working Mathematician by Saunders Mac Lane. Other references are given in the References below; the basic definitions in this article are contained within the first few chapters of any of these books. Category theory first appeared in a paper entitled "General Theory of Natural Equivalences", written by Samuel Eilenberg and Saunders Mac Lane in 1945. There are many equivalent definitions of a category. One commonly used definition is as follows: a category C consists of a class ob(C) of objects and a class hom(C) of morphisms, or arrows, or maps, between the objects. Each morphism f has a source object a and a target object b, where a and b are in ob(C). We write f : a → b, and we say "f is a morphism from a to b".
From these axioms, one can prove that there is exactly one identity morphism for every object. Some authors use a slight variation of the definition in which each object is identified with the corresponding identity morphism. A category C is called small if both ob(C) and hom(C) are actually sets and not proper classes, and large otherwise. A locally small category is a category such that for all objects a and b, the hom-class hom(a, b) is a set. Many important categories in mathematics, although not small, are at least locally small. The class of all sets together with all functions between sets, where composition is the usual function composition, forms a large category, Set. It is the most basic and the most commonly used category in mathematics. The category Rel consists of all sets, with binary relations as morphisms.
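The two requirements on a category, associative composition and identity arrows, can be stated as equations between morphisms. For composable morphisms f : a → b, g : b → c, h : c → d:

```latex
% Associativity of composition:
\[
h \circ (g \circ f) = (h \circ g) \circ f
\]
% The identity arrows act as units for composition:
\[
f \circ 1_a = f = 1_b \circ f
\]
```

In the category Set these reduce to familiar facts about composing functions, which is why Set is the motivating example.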
30.
Propositional calculus
–
Logical connectives are found in natural languages; in English, for example, some examples are "and" (conjunction), "or" (disjunction), and "not" (negation). The following is an example of a very simple inference within the scope of propositional logic. Premise 1: If it's raining then it's cloudy. Premise 2: It's raining. Conclusion: It's cloudy. Both premises and the conclusion are propositions. The premises are taken for granted, and then with the application of modus ponens (an inference rule) the conclusion follows. Not only that, but they will also correspond with any other inference of this form, which will be valid on the same basis as this inference. Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted to represent propositions. A system of inference rules and axioms allows certain formulas to be derived. These derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such formulas is known as a derivation or proof, and the last formula of the sequence is the theorem. The derivation may be interpreted as proof of the proposition represented by the theorem. When a formal system is used to represent formal logic, only statement letters are represented directly. Usually, in truth-functional propositional logic, formulas are interpreted as having either a truth value of true or a truth value of false. Truth-functional propositional logic, and systems isomorphic to it, are considered to be zeroth-order logic. Although propositional logic had been hinted at by earlier philosophers, it was developed into a formal logic by Chrysippus in the 3rd century BC and expanded by his successor Stoics. The logic was focused on propositions; this advancement was different from the traditional syllogistic logic, which was focused on terms. However, later in antiquity, the propositional logic developed by the Stoics was no longer understood; consequently, the system was essentially reinvented by Peter Abelard in the 12th century.
Propositional logic was eventually refined using symbolic logic. The 17th/18th-century mathematician Gottfried Leibniz has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his work was the first of its kind, it was unknown to the larger logical community; consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independently of Leibniz. Just as propositional logic can be considered an advancement from the earlier syllogistic logic, one author describes predicate logic as combining the distinctive features of syllogistic logic and propositional logic. Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction and truth trees. Natural deduction was invented by Gerhard Gentzen and Jan Łukasiewicz; truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of controversial attribution. Within works by Frege and Bertrand Russell are ideas influential to the invention of truth tables; the actual tabular structure, itself, is credited to either Ludwig Wittgenstein or Emil Post.
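The raining/cloudy inference given earlier can be checked truth-functionally by enumerating all truth assignments, which is the brute-force counterpart of a truth table. A sketch in Python; the helper names are our own:

```python
from itertools import product

def implies(p, q):
    """Material implication: 'if p then q' is false only when
    p is true and q is false."""
    return (not p) or q

def entails(premises, conclusion, n_vars):
    """Semantic entailment by brute force: the conclusion follows
    if no truth assignment makes all premises true and the
    conclusion false."""
    for vals in product([False, True], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # found a counterexample valuation
    return True

# Premise 1: if it's raining then it's cloudy. Premise 2: it's raining.
# Conclusion (by modus ponens): it's cloudy.
raining_implies_cloudy = lambda r, c: implies(r, c)
raining = lambda r, c: r
cloudy = lambda r, c: c
```

The same checker rejects invalid forms: from "if it's raining then it's cloudy" and "it's cloudy", the conclusion "it's raining" does not follow (affirming the consequent).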
31.
Operation (mathematics)
–
In mathematics, an operation is a calculation from zero or more input values (called operands) to an output value. The number of operands is the arity of the operation. The most commonly studied operations are binary operations of arity 2, such as addition and multiplication, and unary operations of arity 1, such as additive inverse and multiplicative inverse. An operation of arity zero, or 0-ary operation, is a constant; the mixed product is an example of an operation of arity 3, or ternary operation. Generally, the arity is supposed to be finite, but infinitary operations are sometimes considered; in this context, the usual operations of finite arity are called finitary operations. There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and the trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation. Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution. Operations may not be defined for every possible value; for example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its range. For example, in the real numbers, the squaring operation only produces non-negative numbers; the codomain is the set of real numbers, but the range is the non-negative numbers.
A vector can be multiplied by a scalar to form another vector (an operation known as scalar multiplication), and the inner product operation on two vectors produces a scalar. An operation may or may not have certain properties; for example, it may be associative, commutative, anticommutative, or idempotent. The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs. An operation is like an operator, but the point of view is different. An operation ω is a function of the form ω : V → Y, where V ⊂ X1 × … × Xk. The sets X1, …, Xk are called the domains of the operation, the set Y is called the codomain of the operation, and the fixed non-negative integer k is called the arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two.
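The view of an operation as a function with a fixed arity can be made concrete. A small Python sketch; the use of inspect.signature to read off the arity is our own device:

```python
from inspect import signature

def constant_zero():      # arity 0: a nullary operation is a constant
    return 0

def negate(a):            # arity 1: a unary operation (additive inverse)
    return -a

def add(a, b):            # arity 2: a binary operation
    return a + b

def arity(op):
    """Number of operands the operation expects."""
    return len(signature(op).parameters)
```

Operations compose like any functions: add(negate(3), 5) evaluates to 2, feeding the output of a unary operation into one operand of a binary one.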
32.
Recursion
–
Recursion occurs when a thing is defined in terms of itself or of its type. Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances, it is often done in such a way that no infinite loop or infinite chain of references can occur. The ancestors of one's ancestors are also one's ancestors. The Fibonacci sequence is a classic example of recursion: Fib(0) = 0 as base case 1, Fib(1) = 1 as base case 2, and for all integers n > 1, Fib(n) = Fib(n − 1) + Fib(n − 2). Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers. Recursively defined mathematical objects include functions, sets, and especially fractals. There are various more tongue-in-cheek definitions of recursion; see recursive humor. Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be recursive. To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules, while the running of a procedure involves actually following the rules and performing the steps. An analogy: a procedure is like a written recipe; running a procedure is like actually preparing the meal. Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure.
For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth; a chain of references to other procedures, however, is not by itself recursion, and for this reason recursive definitions are very rare in everyday situations. An example could be the following procedure to find a way through a maze: proceed forward until reaching either an exit or a branching point. If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively; if every trial fails by reaching only dead ends, return on the path that led to this branching point. Whether this actually defines a terminating procedure depends on the nature of the maze; in any case, executing the procedure requires carefully recording all currently explored branching points, and which of their branches have already been exhaustively tried. Recursion also appears in natural language; this can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence ("Dorothy thinks witches are dangerous"), so a sentence can be defined recursively as something with a structure that includes a noun phrase, a verb, and optionally another sentence.
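The Fibonacci definition given above, two base cases plus a rule that refers to the function being defined, translates directly into code. A minimal sketch in Python:

```python
def fib(n):
    """Fibonacci numbers by the recursive definition:
    two base cases, then each value defined in terms of
    the function itself."""
    if n == 0:
        return 0                        # base case 1
    if n == 1:
        return 1                        # base case 2
    return fib(n - 1) + fib(n - 2)      # recursive case, for n > 1
```

The recursion terminates because each call strictly decreases n toward a base case, so no infinite chain of references occurs.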
33.
Mathematical induction
–
Mathematical induction is a mathematical proof technique used to prove a given statement about any well-ordered set. Most commonly, it is used to establish statements for the set of all natural numbers. Mathematical induction is a form of direct proof, usually done in two steps when trying to prove a statement for a set of natural numbers. The first step, known as the base case, is to prove the statement for the first natural number. The second step, known as the inductive step, is to prove that, if the statement is assumed to be true for any one natural number, then it must also be true for the next natural number. Having proved these two steps, the rule of inference establishes the statement to be true for all natural numbers. In common terminology, using the stated approach is referred to as using the principle of mathematical induction. Mathematical induction in this sense is closely related to recursion. Mathematical induction, in some form, is the foundation of all correctness proofs for computer programs. Although its name may suggest otherwise, mathematical induction should not be misconstrued as a form of inductive reasoning; mathematical induction is an inference rule used in deductive proofs. In mathematics, proofs, including those using mathematical induction, are examples of deductive reasoning. In 370 BC, Plato's Parmenides may have contained an early example of an implicit inductive proof, and the earliest implicit traces of mathematical induction may be found in Euclid's proof that the number of primes is infinite. None of these ancient mathematicians, however, explicitly stated the inductive hypothesis. Another similar case was that of Francesco Maurolico in his Arithmeticorum libri duo. The first explicit formulation of the principle of induction was given by Pascal in his Traité du triangle arithmétique. Another Frenchman, Fermat, made use of a related principle: indirect proof by infinite descent. The inductive hypothesis was also employed by the Swiss Jakob Bernoulli. The modern rigorous and systematic treatment of the principle came only in the 19th century, with George Boole, Augustus De Morgan, Charles Sanders Peirce, Giuseppe Peano, and Richard Dedekind.
The simplest and most common form of mathematical induction infers that a statement involving a natural number n holds for all values of n. The proof consists of two steps. The basis: prove that the statement holds for the first natural number n; usually n = 0 or n = 1, rarely n = −1. The inductive step: prove that, if the statement holds for some natural number n, then it holds for n + 1. The hypothesis in the inductive step, that the statement holds for some n, is called the induction hypothesis. To perform the inductive step, one assumes the induction hypothesis and then uses this assumption to prove the statement for n + 1.
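As a worked instance of the two steps, here is the classic inductive proof that the sum of the first n natural numbers is n(n+1)/2 (a standard example, not taken from the text above):

```latex
% Claim P(n): 0 + 1 + \cdots + n = \frac{n(n+1)}{2}.
%
% Basis (n = 0): both sides equal 0, so P(0) holds.
%
% Inductive step: assume the induction hypothesis P(n); then
\[
(0 + 1 + \cdots + n) + (n+1)
  = \frac{n(n+1)}{2} + (n+1)
  = \frac{(n+1)(n+2)}{2},
\]
% which is exactly P(n+1). By the principle of mathematical
% induction, P(n) holds for every natural number n.
```

Note how the inductive step does not prove the statement outright; it only transfers truth from n to n + 1, with the basis anchoring the chain.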
34.
Real number
–
In mathematics, a real number is a value that represents a quantity along a continuous line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and complex numbers include real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics; there are several rigorous definitions, and all of them satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than that of the continuum is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC, and the Vedic Sulba Sutras, c. 600 BC, include what may be the first use of irrational numbers. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation, and in the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory.
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and has finally been made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the entire set of real numbers without having defined them cleanly. The first rigorous definition was given by Georg Cantor in 1871, and in 1874 he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry and define the real numbers geometrically. From the structuralist point of view, all these constructions are on equal footing.
35.
Complex number
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i² = −1. In this expression, a is the real part and b is the imaginary part of the complex number. If z = a + bi, then ℜ(z) = a and ℑ(z) = b. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have practical applications in many fields, including physics, chemistry, biology, economics, and electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them "fictitious" during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation (x + 1)² = −9 has no real solution, since the square of a real number cannot be negative. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i² = −1. According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi; for example, −3.5 + 2i is a complex number. The real number a is called the real part of the complex number a + bi, and the real number b is called the imaginary part. By this convention the imaginary part does not include the imaginary unit: hence b, not bi, is the imaginary part. The real part of a complex number z is denoted by Re(z) or ℜ(z), and the imaginary part by Im(z) or ℑ(z). For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z)⋅i.
This expression is known as the Cartesian form of z. A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0.
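The facts above can be checked directly in any language with a built-in complex type; the following short Python sketch (variable names are illustrative) verifies the defining property of i, the real and imaginary parts, and the Cartesian form:

```python
# Python's built-in complex type writes the imaginary unit i as j.
z = -3.5 + 2j                      # the example complex number -3.5 + 2i

assert z.real == -3.5              # Re(z) = -3.5
assert z.imag == 2.0               # Im(z) = 2; the imaginary part excludes i
assert 1j * 1j == -1               # the defining property i^2 = -1
assert z == z.real + z.imag * 1j   # Cartesian form: z = Re(z) + Im(z)*i
assert complex(4, 0) == 4          # a real number is a complex number a + 0i
```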
36.
Dedekind cut
–
Dedekind cuts are one method of construction of the real numbers. A cut partitions the rational numbers into two non-empty sets A and B such that every element of A is less than every element of B. The set B may or may not have a smallest element among the rationals; if B has a smallest element, the cut corresponds to that rational. Otherwise, the cut defines a unique irrational number which, loosely speaking, fills the gap between A and B. In other words, A contains every rational number less than the cut, and an irrational cut is equated to an irrational number which is in neither set. Every real number, rational or not, is equated to one and only one such cut. As Dedekind put it: whenever, then, we have to do with a cut produced by no rational number, we create a new irrational number, which we regard as completely defined by this cut; from now on, therefore, to every definite cut there corresponds a definite rational or irrational number. More generally, a Dedekind cut is a partition of an ordered set into two non-empty parts A and B, such that A is closed downwards and B is closed upwards. It is straightforward to show that a Dedekind cut among the real numbers is uniquely defined by the corresponding cut among the rational numbers. Similarly, every cut of reals is identical to the cut produced by a specific real number. In other words, the number line where every real number is defined as a Dedekind cut of rationals is a complete continuum without any further gaps. Dedekind used the German word Schnitt (cut) in a visual sense rooted in Euclidean geometry, yet his theorem asserting the completeness of the real number system is nevertheless a theorem about numbers and not geometry. In David Hilbert's axiom system, continuity is provided by the Axiom of Archimedes; in mathematical logic, the identification of the real numbers with the real number line is provided by the Cantor–Dedekind axiom. It is more symmetrical to use the (A, B) notation for Dedekind cuts, but it can be a simplification, in terms of notation if nothing more, to concentrate on one half (say, the lower one) and call any downward closed set A without greatest element a Dedekind cut.
If the ordered set S is complete, then, for every Dedekind cut (A, B) of S, the set B must have a smallest element b; hence A must be the interval of all elements of S less than b. In this case, we say that b is represented by the cut. The important purpose of the Dedekind cut is to work with number sets that are not complete: the cut itself can represent a number not in the original collection of numbers. The cut can represent a number b even though the numbers contained in the two sets A and B do not actually include the number b that their cut represents. For example, even though there is no rational value for √2, the rational numbers can be partitioned this way: A contains every negative rational and every rational whose square is less than 2, while B contains the positive rationals whose square is at least 2. This cut represents √2.
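The cut for √2 described above can be sketched in Python with exact rational arithmetic; this is only an illustration of the definition (the function names are made up here), modeling the lower set A by a membership predicate:

```python
from fractions import Fraction

# Model a Dedekind cut by a membership predicate for the lower set A.
# For the cut representing sqrt(2): A holds every negative rational and
# every rational whose square is less than 2.
def in_lower_set(q: Fraction) -> bool:
    return q < 0 or q * q < 2

# A is closed downwards: anything below a member of A is also in A.
samples = [Fraction(n, 10) for n in range(-30, 31)]
for q in samples:
    if in_lower_set(q):
        assert all(in_lower_set(p) for p in samples if p < q)

# A has no greatest element: for a positive member q of A, the rational
# (2q + 2)/(q + 2) is strictly larger yet still below sqrt(2).
q = Fraction(14, 10)             # 1.4 is in A, since 1.96 < 2
bigger = (2 * q + 2) / (q + 2)   # 24/17, roughly 1.41176
assert in_lower_set(bigger) and bigger > q
```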
37.
Sequence
–
In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed. Like a set, it contains members, and the number of elements is called the length of the sequence. Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Formally, a sequence can be defined as a function whose domain is either the set of the natural numbers (for an infinite sequence) or the set of the first n natural numbers (for a sequence of finite length n). The position of an element in a sequence is its rank or index; whether the first element has index 0 or 1 depends on the context or on a specific convention. For example, a finite list of letters, with a specific letter such as M first, is a sequence; also, any sequence which contains the number 1 at two different positions is a valid sequence. Sequences can be finite, as in these examples, or infinite; the empty sequence is included in most notions of sequence, but may be excluded depending on the context. A sequence can be thought of as a list of elements with a particular order. Sequences are useful in a number of mathematical disciplines for studying functions, spaces, and other mathematical structures using the convergence properties of sequences. In particular, sequences are the basis for series, which are important in differential equations and analysis. Sequences are also of interest in their own right and can be studied as patterns or puzzles, such as in the study of prime numbers. There are a number of ways to denote a sequence, some of which are useful for specific types of sequences. One way to specify a sequence is to list the elements; for example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation can be used for infinite sequences as well. For instance, the sequence of positive odd integers can be written (1, 3, 5, 7, …). Listing is most useful for sequences with a pattern that can be easily discerned from the first few elements.
Other ways to denote a sequence are discussed after the examples. The prime numbers are the natural numbers greater than 1 that have no divisors but 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, …). The prime numbers are widely used in mathematics and specifically in number theory. The Fibonacci numbers form the integer sequence whose elements are the sum of the previous two elements. The first two elements are either 0 and 1 or 1 and 1, so that the sequence begins (0, 1, 1, 2, 3, 5, 8, 13, …). For a large list of examples of integer sequences, see the On-Line Encyclopedia of Integer Sequences.
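The two views of a sequence given above, as a function on the natural numbers and as a (possibly infinite) enumerated list, can be sketched in Python; the generator below is one natural way to model an infinite sequence:

```python
from itertools import islice

# A finite sequence as a function on the first n natural numbers:
# the n-th positive odd number (indexing starts at 1 here).
def odd(n: int) -> int:
    return 2 * n - 1

assert [odd(n) for n in range(1, 5)] == [1, 3, 5, 7]

# An infinite sequence modeled as a generator: each Fibonacci element is
# the sum of the previous two, starting from 0 and 1.
def fibonacci():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

first_eight = list(islice(fibonacci(), 8))
assert first_eight == [0, 1, 1, 2, 3, 5, 8, 13]

# Repetition is allowed: 1 appears at two different positions.
assert first_eight[1] == first_eight[2] == 1
```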
38.
Limit (mathematics)
–
In mathematics, a limit is the value that a function or sequence approaches as the input or index approaches some value. Limits are essential to calculus and are used to define continuity, derivatives, and integrals. The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory. In formulas, a limit is usually written as lim n→c f(n) = L and is read as "the limit of f of n as n approaches c equals L". Here lim indicates limit, and the fact that the function f approaches the limit L as n approaches c is represented by the right arrow. Suppose f is a real-valued function and c is a real number. Intuitively speaking, lim x→c f(x) = L means that f(x) can be made as close to L as desired by making x sufficiently close to c. In the formal definition, the condition 0 < |x − c| < δ is imposed: the first inequality means that the distance between x and c is greater than 0, so that x ≠ c, while the second indicates that x is within distance δ of c. Note that the definition of a limit holds even if f(c) ≠ L; indeed, the function need not even be defined at c. For example, (x² − 1)/(x − 1) is not defined at x = 1, but for x ≠ 1 it simplifies to x + 1; since x + 1 is continuous in x at 1, we can plug in 1 for x, giving the limit 2. In addition to limits at finite values, functions can also have limits at infinity. Consider f(x) = (2x − 1)/x; as x becomes extremely large, the value of f(x) approaches 2. In this case, the limit of f as x approaches infinity is 2; in mathematical notation, lim x→∞ (2x − 1)/x = 2. Now consider the sequence 1.79, 1.799, 1.7999, …; it can be observed that the numbers are approaching 1.8, the limit of the sequence. Formally, suppose a1, a2, … is a sequence of real numbers. The real number L is the limit of the sequence if for every ε > 0 there exists a natural number N such that |an − L| < ε for all n > N. Intuitively, this means that eventually all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L. Not every sequence has a limit; if it does, it is called convergent, and one can show that a convergent sequence has only one limit.
The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n goes to infinity of a sequence (an) is simply the limit at infinity of a function defined on the natural numbers n. On the other hand, a limit L of a function f as x goes to infinity, if it exists, is the same as the limit of any sequence that approaches L; note that one such sequence would be L + 1/n. In non-standard analysis, the limit of a sequence can be expressed as the standard part of the value aH of the natural extension of the sequence at an infinite hypernatural index n = H. Thus, lim n→∞ an = st(aH), where the standard part function st rounds off each finite hyperreal number to the nearest real number. This formalizes the intuition that for very large values of the index, the terms of the sequence are infinitely close to the limit. Conversely, the standard part of a hyperreal represented in the ultrapower construction by a Cauchy sequence is simply the limit of that sequence.
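The two worked examples above, the function limit lim x→∞ (2x − 1)/x = 2 and the sequence 1.79, 1.799, 1.7999, … approaching 1.8, can be illustrated numerically with a short Python sketch (the closed form chosen for the sequence's n-th term is an assumption that matches the listed values):

```python
# For f(x) = (2x - 1)/x we have |f(x) - 2| = 1/x, so the function stays
# within any tolerance epsilon of the limit 2 once x exceeds 1/epsilon.
def f(x: float) -> float:
    return (2 * x - 1) / x

for epsilon in (0.1, 0.01, 0.001):
    x = 2 / epsilon                   # safely beyond the threshold 1/epsilon
    assert abs(f(x) - 2) < epsilon

# The sequence 1.79, 1.799, 1.7999, ... has n-th term 1.8 - 10**-(n + 1),
# so its distance to the limit 1.8 shrinks below any tolerance.
def a(n: int) -> float:
    return 1.8 - 10.0 ** -(n + 1)

assert abs(a(1) - 1.79) < 1e-12       # first listed term
assert abs(a(10) - 1.8) < 1e-9        # already very close to the limit
```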
39.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The area above the x-axis adds to the total, and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative; in this case, it is called an indefinite integral and is written ∫ f(x) dx. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The first documented systematic technique for determining integrals was the method of exhaustion of the ancient Greek astronomer Eudoxus. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate the area under the arc of a parabola and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui. This method was later used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
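Riemann's thin-vertical-slab procedure and the shortcut provided by the fundamental theorem can both be sketched in a few lines of Python; the function and interval here are arbitrary illustrative choices (f(x) = x², whose definite integral over [0, 1] is exactly 1/3):

```python
# Riemann's procedure: approximate the area under a curve by summing the
# areas of thin vertical slabs, sampling each slab at its midpoint.
def riemann_sum(f, a: float, b: float, n: int) -> float:
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
assert abs(approx - 1 / 3) < 1e-6     # exact value of the integral is 1/3

# The fundamental theorem of calculus: an antiderivative of x^2 is x^3/3,
# and evaluating it at the endpoints gives the same area directly.
F = lambda x: x ** 3 / 3
assert abs((F(1.0) - F(0.0)) - 1 / 3) < 1e-12
```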
40.
Exponential function
–
In mathematics, an exponential function is a function of the form f(x) = b^x, in which the input variable x occurs as an exponent. A function of the form f(x) = b^(x + c), where c is a constant, is also an exponential function. As functions of a real variable, exponential functions are uniquely characterized by the fact that the growth rate of such a function is directly proportional to the value of the function. The constant of proportionality of this relationship is the natural logarithm of the base b. The argument of the exponential function can be any real or complex number, or even an entirely different kind of mathematical object. Its ubiquitous occurrence in pure and applied mathematics has led the mathematician W. Rudin to opine that the exponential function is the most important function in mathematics. In applied settings, exponential functions model a relationship in which a constant change in the independent variable gives the same proportional change in the dependent variable. The graph of y = e^x is upward-sloping, and increases faster as x increases. The graph always lies above the x-axis but can get arbitrarily close to it for negative x; thus, the x-axis is a horizontal asymptote. The slope of the tangent to the graph at each point is equal to its y-coordinate at that point, as implied by its derivative function. Its inverse function is the natural logarithm, denoted log, ln, or log e. The exponential function exp: C → C can be characterized in a variety of equivalent ways; most commonly it is defined by the power series exp(x) = ∑_{k=0}^∞ x^k/k!, and the constant e is then defined as e = exp(1) = ∑_{k=0}^∞ 1/k!. The exponential function arises whenever a quantity grows or decays at a rate proportional to its current value. One such situation is continuously compounded interest, and in fact it was this observation that led Jacob Bernoulli in 1683 to the number lim n→∞ (1 + 1/n)^n, now known as e. Later, in 1697, Johann Bernoulli studied the calculus of the exponential function.
If instead interest is compounded daily, this becomes (1 + x/365)^365. Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function, exp(x) = lim n→∞ (1 + x/n)^n, first given by Euler. This is one of a number of characterizations of the exponential function; from any of these definitions it can be shown that the exponential function obeys the basic exponentiation identity exp(x + y) = exp(x) · exp(y), which is why it can be written as e^x. The derivative (rate of change) of the exponential function is the exponential function itself. More generally, a function with a rate of change proportional to the function itself is expressible in terms of the exponential function, and this function property leads to exponential growth and exponential decay. The exponential function extends to an entire function on the complex plane. Euler's formula relates its values at purely imaginary arguments to trigonometric functions. The exponential function also has analogues for which the argument is a matrix, or even an element of a Banach algebra or a Lie algebra.
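The limit definition, the series definition, and the exponentiation identity stated above can all be checked numerically; a brief Python sketch:

```python
import math

# Bernoulli's compound-interest limit: (1 + x/n)^n approaches exp(x) as n grows.
def compounded(x: float, n: int) -> float:
    return (1 + x / n) ** n

assert abs(compounded(1.0, 10 ** 6) - math.e) < 1e-5   # e = lim (1 + 1/n)^n

# The series definition: e = exp(1) = sum of 1/k! converges very quickly.
series_e = sum(1 / math.factorial(k) for k in range(20))
assert abs(series_e - math.e) < 1e-12

# The basic exponentiation identity exp(x + y) = exp(x) * exp(y).
assert math.isclose(math.exp(2 + 3), math.exp(2) * math.exp(3))
```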
41.
Trigonometric function
–
In mathematics, the trigonometric functions are functions of an angle. They relate the angles of a triangle to the lengths of its sides. Trigonometric functions are important in the study of triangles and in modeling periodic phenomena, among many other applications. The most familiar trigonometric functions are the sine, cosine, and tangent; more precise definitions are detailed below. Trigonometric functions have a wide range of uses, including computing unknown lengths and angles in triangles. In this use, trigonometric functions appear, for instance, in navigation and engineering; a common use in elementary physics is resolving a vector into Cartesian coordinates. In modern usage, there are six basic trigonometric functions, tabulated here with equations that relate them to one another. The key observation is that for any similar triangle, the ratio of the hypotenuse and another of the sides remains the same: if the hypotenuse is twice as long, so are the sides. It is these ratios that the trigonometric functions express. To define the functions for the angle A, start with any right triangle that contains the angle A. The three sides of the triangle are named as follows: The hypotenuse is the side opposite the right angle; it is always the longest side of a right-angled triangle. The opposite side is the side opposite to the angle we are interested in, in this case side a. The adjacent side is the side touching both the angle of interest and the right angle, in this case side b. In ordinary Euclidean geometry, according to the triangle postulate, the inside angles of every triangle total 180°. Therefore, in a right triangle, the two non-right angles total 90°, so each of these angles must be in the range (0°, 90°), as expressed in interval notation. The following definitions apply to angles in this 0°–90° range; they can be extended to the full set of real arguments by using the unit circle, or by requiring certain symmetries and that they be periodic functions.
For example, the figure shows sin(θ) for angles θ, π − θ, π + θ, and 2π − θ depicted on the unit circle and as a graph. The value of the sine repeats itself apart from sign in all four quadrants, and this behavior continues if the range of θ is extended to additional rotations. The trigonometric functions are summarized in the following table and described in more detail below. The angle θ is the angle between the hypotenuse and the adjacent line, the angle at A in the accompanying diagram. The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse; in our case sin A = opposite/hypotenuse = a/h. This ratio does not depend on the size of the particular right triangle chosen, as long as it contains the angle A, since all such triangles are similar. The cosine of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse; in our case cos A = adjacent/hypotenuse = b/h.
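The scale-independence of the ratios and the sign pattern of the sine across the four quadrants can be checked with a short Python sketch (the 3-4-5 triangle used here is an arbitrary illustrative choice):

```python
import math

# The ratios sin = opposite/hypotenuse and cos = adjacent/hypotenuse do not
# depend on the triangle's size: scaling a 3-4-5 right triangle leaves them fixed.
for scale in (1, 2, 10):
    a, b = 3 * scale, 4 * scale          # opposite and adjacent sides of angle A
    h = math.hypot(a, b)                 # hypotenuse, equal to 5 * scale
    assert math.isclose(a / h, 3 / 5)    # sin A
    assert math.isclose(b / h, 4 / 5)    # cos A

# On the unit circle, the sine repeats apart from sign in the four quadrants
# and is periodic with period 2*pi.
theta = 0.7
assert math.isclose(math.sin(math.pi - theta), math.sin(theta))
assert math.isclose(math.sin(math.pi + theta), -math.sin(theta))
assert math.isclose(math.sin(theta + 2 * math.pi), math.sin(theta))
```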