1.
Relational database
–
A relational database is a digital database whose organization is based on the relational model of data, as proposed by E. F. Codd in 1970. The various software systems used to maintain relational databases are known as relational database management systems (RDBMS). Virtually all relational database systems use SQL as the language for querying and maintaining the database. This model organizes data into one or more tables of columns and rows, with a unique key identifying each row. Rows are also called records or tuples. Generally, each table/relation represents one entity type; the rows represent instances of that type of entity, and the columns represent values attributed to that instance. Each row in a table has its own unique key. Rows in a table can be linked to rows in other tables by adding a column for the unique key of the linked row. Codd showed that data relationships of arbitrary complexity can be represented by a simple set of concepts. Part of this processing involves consistently being able to select or modify one and only one row; therefore, most physical implementations have a unique primary key (PK) for each table. When a new row is written to the table, a new unique value for the primary key is generated. System performance is optimized for PKs; other, more natural keys may also be identified and defined as alternate keys (AK). Often several columns are needed to form an AK, and both PKs and AKs can uniquely identify a row within a table. Additional technology may be applied to ensure a unique ID across the world, a globally unique identifier. The primary keys within a database are used to define the relationships among the tables; when a PK migrates to another table, it becomes a foreign key in the other table. Relationships are a logical connection between different tables, established on the basis of interaction among these tables, and a management system relies on these connections to operate efficiently and accurately. 
Most of the programming within an RDBMS is accomplished using stored procedures. Often procedures can be used to greatly reduce the amount of information transferred within and outside of a system. For increased security, the design may grant access to only the stored procedures and not directly to the tables. Fundamental stored procedures contain the logic needed to insert new and update existing data; more complex procedures may be written to implement additional rules and logic related to processing or selecting the data.
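The key relationships described above can be sketched concretely with Python's built-in sqlite3 module. This is a minimal illustration, not a full RDBMS design; the table and column names (author, book, and so on) are invented for the example.

```python
import sqlite3

# In-memory database; table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # ask SQLite to enforce FK constraints

# Each table has a unique primary key (PK).
conn.execute("""
    CREATE TABLE author (
        author_id INTEGER PRIMARY KEY,   -- PK: uniquely identifies each row
        name      TEXT NOT NULL
    )""")

# The author PK migrates into the book table, where it is a foreign key (FK).
conn.execute("""
    CREATE TABLE book (
        book_id   INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        author_id INTEGER NOT NULL REFERENCES author(author_id)
    )""")

conn.execute("INSERT INTO author (author_id, name) VALUES (1, 'E. F. Codd')")
conn.execute("INSERT INTO book (book_id, title, author_id) VALUES (1, 'A Relational Model', 1)")

# Rows in book link to rows in author through the FK column.
row = conn.execute("""
    SELECT book.title, author.name
    FROM book JOIN author ON book.author_id = author.author_id
""").fetchone()
print(row)  # ('A Relational Model', 'E. F. Codd')
```

With the foreign-key pragma on, an insert into book that references a nonexistent author_id would raise an integrity error, which is the enforcement of the PK/FK relationship the text describes.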
2.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope and definition. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation and measurement, and practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. 
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from the Greek μανθάνω, while the modern Greek equivalent is μαθαίνω. In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
3.
Abstract algebra
–
In algebra, which is a broad division of mathematics, abstract algebra is the study of algebraic structures. Algebraic structures include groups, rings, fields, modules, vector spaces and lattices. The term abstract algebra was coined in the early 20th century to distinguish this area of study from the other parts of algebra. Algebraic structures, with their associated homomorphisms, form mathematical categories. Category theory is a formalism that allows a unified way of expressing properties and constructions that are similar for various structures. Universal algebra is a related subject that studies types of algebraic structures as single objects. For example, the structure of groups is a single object in universal algebra. As in other parts of mathematics, concrete problems and examples have played important roles in the development of abstract algebra. Through the end of the nineteenth century, many, perhaps most, of these problems were in some way related to the theory of algebraic equations. Numerous textbooks in abstract algebra start with axiomatic definitions of various algebraic structures. This creates an impression that in algebra axioms had come first and then served as a motivation; the true order of development was almost exactly the opposite. For example, the hypercomplex numbers of the nineteenth century had kinematic and physical motivations. An archetypical example of this progressive synthesis can be seen in the history of group theory. There were several threads in the early development of group theory, in modern language loosely corresponding to number theory, theory of equations, and geometry. Leonhard Euler considered algebraic operations on numbers modulo an integer (modular arithmetic). Lagrange's goal was to understand why equations of third and fourth degree admit formulae for solutions, and he identified as key objects permutations of the roots. An important novel step taken by Lagrange in this paper was the abstract view of the roots, i.e. as symbols rather than as numbers. 
However, he did not consider composition of permutations. Serendipitously, the first edition of Edward Waring's Meditationes Algebraicae appeared in the same year, with an expanded version published in 1782. Waring proved the theorem on symmetric functions, and specially considered the relation between the roots of a quartic equation and its resolvent cubic. Kronecker claimed in 1888 that the study of modern algebra began with this first paper of Vandermonde; Cauchy states quite clearly that Vandermonde had priority over Lagrange for this remarkable idea, which eventually led to the study of group theory. Paolo Ruffini was the first person to develop the theory of permutation groups; his goal was to establish the impossibility of an algebraic solution to a general algebraic equation of degree greater than four.
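Euler's algebraic operations on numbers modulo an integer can be made concrete. The following is a small illustrative sketch in Python (the choice of modulus 7 is arbitrary):

```python
# Arithmetic modulo n: the residues {0, 1, ..., n-1} with addition mod n.
n = 7
residues = list(range(n))

def add_mod(a, b):
    """Addition modulo n."""
    return (a + b) % n

# Closure: the sum of any two residues is again a residue.
assert all(add_mod(a, b) in residues for a in residues for b in residues)

# Every residue a has an additive inverse, (n - a) % n.
for a in residues:
    inv = (n - a) % n
    assert add_mod(a, inv) == 0

# Fermat's little theorem, which Euler generalized:
# a^(p-1) is congruent to 1 (mod p) for prime p and a not divisible by p.
assert all(pow(a, n - 1, n) == 1 for a in range(1, n))
```

The residues under addition mod n form one of the earliest examples of what would later be axiomatized as a group.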
4.
Involution (mathematics)
–
In mathematics, an involution, or an involutory function, is a function f that is its own inverse: f(f(x)) = x for all x in the domain of f. The identity map is a trivial example of an involution. Common examples in mathematics of nontrivial involutions include multiplication by −1 in arithmetic; other examples include circle inversion, rotation by a half-turn, and reciprocal ciphers such as the ROT13 transformation and the Beaufort polyalphabetic cipher. The number of involutions, including the identity involution, on a set with n = 0, 1, 2, … elements is given by a recurrence relation found by Heinrich August Rothe in 1800: a0 = a1 = 1 and an = an−1 + (n − 1)·an−2 for n > 1. The first few terms of this sequence are 1, 1, 2, 4, 10, 26, 76, 232; these numbers are called the telephone numbers, and they also count the number of Young tableaux with a given number of cells. The composition g ∘ f of two involutions f and g is an involution if and only if they commute: g ∘ f = f ∘ g. Every involution on an odd number of elements has at least one fixed point; more generally, for an involution on a finite set of elements, the number of elements and the number of fixed points have the same parity. Basic examples of involutions are the functions f1(x) = −x and f2(x) = 1/x, and these are not the only pre-calculus involutions. Another, on the positive reals, is f(x) = ln((e^x + 1)/(e^x − 1)) for x > 0. The graph of an involution is line-symmetric over the line y = x. This is due to the fact that the inverse of any general function is its reflection over the 45° line y = x, which can be seen by swapping x with y. If, in particular, the function is an involution, then it serves as its own reflection. Other elementary involutions are useful in solving functional equations. A simple example of an involution of three-dimensional Euclidean space is reflection through a plane; performing a reflection twice brings a point back to its original coordinates. Another is reflection through the origin; this is an abuse of language, as it is not a reflection in the usual sense. 
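Rothe's recurrence for the telephone numbers can be checked directly against a brute-force count of involutions. A short sketch in Python:

```python
from itertools import permutations

def telephone_numbers(count):
    """First `count` telephone numbers via Rothe's recurrence:
    a0 = a1 = 1, a_n = a_{n-1} + (n-1)*a_{n-2}."""
    a = [1, 1]
    for n in range(2, count):
        a.append(a[n - 1] + (n - 1) * a[n - 2])
    return a[:count]

print(telephone_numbers(8))  # [1, 1, 2, 4, 10, 26, 76, 232]

def count_involutions(n):
    """Count permutations p of {0, ..., n-1} that are their own inverse."""
    elems = range(n)
    return sum(
        1
        for p in permutations(elems)
        if all(p[p[i]] == i for i in elems)  # p composed with itself is the identity
    )

# The recurrence agrees with the brute-force count.
assert [count_involutions(n) for n in range(6)] == telephone_numbers(6)
```

The brute-force check also illustrates the definition: a permutation p is an involution exactly when applying it twice returns every element to its starting position.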
These transformations are examples of affine involutions. In projective geometry, an involution is a projectivity of period 2, that is, a projectivity that interchanges pairs of points. Coxeter relates theorems on involutions: any projectivity that interchanges two points is an involution, and the three pairs of opposite sides of a complete quadrangle meet any line in three pairs of an involution. If an involution has one fixed point, it has another; in this instance the involution is termed hyperbolic, while if there are no fixed points it is elliptic. Another type of involution occurring in geometry is a polarity, which is a correlation of period 2.
5.
Cartesian product
–
In set theory, a Cartesian product is a mathematical operation that returns a set from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B. Products can be specified using set-builder notation, e.g. A × B = {(a, b) : a ∈ A and b ∈ B}. A table can be created by taking the Cartesian product of a set of rows and a set of columns; if the Cartesian product rows × columns is taken, the cells of the table contain ordered pairs of the form (row value, column value). More generally, a Cartesian product of n sets, also known as an n-fold Cartesian product, can be represented by an array of n dimensions; an ordered pair is a 2-tuple or couple. The Cartesian product is named after René Descartes, whose formulation of analytic geometry gave rise to the concept. An illustrative example is the standard 52-card deck. The standard playing card ranks form a 13-element set; the card suits form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52 ordered pairs. Ranks × Suits returns a set of pairs of the form (rank, suit), while Suits × Ranks returns a set of pairs of the form (suit, rank); the two sets are distinct, even disjoint. The main historical example is the Cartesian plane in analytic geometry. Usually, such a pair's first and second components are called its x and y coordinates, respectively (cf. picture). The set of all such pairs is thus assigned to the set of all points in the plane. A formal definition of the Cartesian product from set-theoretical principles follows from a definition of ordered pair. The most common definition of ordered pairs, the Kuratowski definition, is (a, b) = {{a}, {a, b}}. Note that, under this definition, X × Y ⊆ P(P(X ∪ Y)), where P represents the power set. Therefore, the existence of the Cartesian product of any two sets in ZFC follows from the axioms of pairing, union and power set. Let A, B, C, and D be sets. The Cartesian product is not associative: (A × B) × C ≠ A × (B × C). If, for example, A = {1}, then (A × A) × A = {((1, 1), 1)} ≠ {(1, (1, 1))} = A × (A × A). The Cartesian product behaves nicely with respect to intersections (cf. left picture): (A ∩ B) × (C ∩ D) = (A × C) ∩ (B × D). 
In most cases the above statement is not true if we replace intersection with union (cf. middle picture): in general, (A ∪ B) × (C ∪ D) ≠ (A × C) ∪ (B × D). Other properties are related to subsets: if A ⊆ B, then A × C ⊆ B × C. The cardinality of a set is the number of elements of the set. For example, taking two illustrative sets A = {a, b} and B = {1, 2}, both set A and set B consist of two elements each. Their Cartesian product, written as A × B, results in a new set with the elements {(a, 1), (a, 2), (b, 1), (b, 2)}: each element of A is paired with each element of B, and each pair makes up one element of the output set. The number of values in each element of the resulting set is equal to the number of sets whose Cartesian product is being taken, 2 in this case.
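The deck and cardinality examples above can be reproduced with Python's itertools.product. This is a sketch; the string encodings of ranks and suits are illustrative.

```python
from itertools import product

ranks = ["A", "K", "Q", "J", "10", "9", "8", "7", "6", "5", "4", "3", "2"]
suits = ["spades", "hearts", "diamonds", "clubs"]

# Ranks × Suits: 13 * 4 = 52 ordered pairs.
deck = list(product(ranks, suits))
assert len(deck) == 52
assert ("A", "spades") in deck

# Suits × Ranks is a different (here disjoint) set: the pairs are reversed.
reversed_deck = set(product(suits, ranks))
assert set(deck).isdisjoint(reversed_deck)

# Cardinality multiplies: |A × B| = |A| * |B|.
A, B = {"a", "b"}, {1, 2}
assert len(set(product(A, B))) == len(A) * len(B)

# n-fold product: each element is an n-tuple (here n = 3).
triple = list(product([0, 1], repeat=3))
assert len(triple) == 2 ** 3 and triple[0] == (0, 0, 0)
```

Note that product returns tuples in a fixed iteration order, which matches the "table of rows × columns" picture: the first factor varies slowest.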
6.
Augustus De Morgan
–
Augustus De Morgan was a British mathematician and logician. He formulated De Morgan's laws and introduced the term mathematical induction. Augustus De Morgan was born in Madurai, India in 1806. His father was Lieut.-Colonel John De Morgan, who held various appointments in the service of the East India Company. His mother, Elizabeth Dodson, descended from James Dodson, who computed a table of anti-logarithms. Augustus De Morgan became blind in one eye a month or two after he was born. The family moved to England when Augustus was seven months old. When De Morgan was ten years old, his father died, and Mrs. De Morgan resided at various places in the southwest of England. His mathematical talents went unnoticed until he was fourteen, when a family friend discovered him making an elaborate drawing of a figure in Euclid with ruler and compasses. She explained the aim of Euclid to Augustus and gave him an initiation into demonstration. He received his secondary education from Mr. Parsons, a fellow of Oriel College, Oxford, who appreciated classics better than mathematics. His mother was an active and ardent member of the Church of England and desired that her son should become a clergyman; De Morgan, however, held quite different views, writing: "I shall use the word Anti-Deism to signify the opinion that there does not exist a Creator who made and sustains the Universe." His college tutor was John Philips Higman, FRS; at college he played the flute for recreation and was prominent in the musical clubs. His love of knowledge for its own sake interfered with training for the great mathematical race, and as a consequence he came out fourth wrangler. This entitled him to the degree of Bachelor of Arts, but to take the degree of Master of Arts it was then necessary to pass a theological test. To the signing of any such test De Morgan felt a strong objection. In about 1875 theological tests for academic degrees were abolished in the Universities of Oxford and Cambridge. 
As no career was open to him at his own university, he decided to go to the Bar and took up residence in London. About this time the movement for founding London University took shape: a body of liberal-minded men resolved to meet the difficulty by establishing in London a University on the principle of religious neutrality. De Morgan, then 22 years of age, was appointed professor of mathematics. His introductory lecture, On the study of mathematics, is a discourse upon mental education of permanent value. The London University was a new institution, and the relations of the Council of management, the Senate of professors and the body of students were not well defined. A dispute arose between the professor of anatomy and his students, and in consequence of the action taken by the Council several professors resigned, headed by De Morgan; another professor of mathematics was appointed, who then drowned a few years later. De Morgan had shown himself a prince of teachers, and he was invited to return to his chair. About this time the Society for the Diffusion of Useful Knowledge was founded; its object was to spread scientific and other knowledge by means of cheap and clearly written treatises by the best writers of the time, and one of its most voluminous and effective writers was De Morgan. When De Morgan came to reside in London he found a congenial friend in William Frend, notwithstanding the latter's mathematical heresy about negative quantities.
7.
Charles Sanders Peirce
–
Charles Sanders Peirce was an American philosopher, logician, mathematician, and scientist who is sometimes known as "the father of pragmatism". He was educated as a chemist and employed as a scientist for 30 years. Today he is appreciated largely for his contributions to logic, mathematics, philosophy, scientific methodology, and semiotics, and for his founding of pragmatism. An innovator in mathematics, statistics, philosophy, research methodology, and various sciences, Peirce considered himself, first and foremost, a logician. He made major contributions to logic, but logic for him encompassed much of that which is now called epistemology and philosophy of science. As early as 1886 he saw that logical operations could be carried out by electrical switching circuits. In 1934, the philosopher Paul Weiss called Peirce "the most original and versatile of American philosophers and America's greatest logician". Webster's Biographical Dictionary said in 1943 that Peirce was then regarded as the most original thinker of his time, and Keith Devlin similarly referred to Peirce as one of the greatest philosophers ever. Peirce was born at 3 Phillips Place in Cambridge, Massachusetts. He was the son of Sarah Hunt Mills and Benjamin Peirce, himself a professor of astronomy and mathematics at Harvard University and perhaps the first serious research mathematician in America. At age 12, Charles read his older brother's copy of Richard Whately's Elements of Logic; so began his lifelong fascination with logic and reasoning. At Harvard, he began lifelong friendships with Francis Ellingwood Abbot and Chauncey Wright. One of his Harvard instructors, Charles William Eliot, formed an unfavorable opinion of Peirce. This opinion proved fateful, because Eliot, while President of Harvard 1869–1909 (a period encompassing nearly all of Peirce's working life) repeatedly vetoed Harvard's employing Peirce in any capacity. Peirce suffered from his late teens onward from a condition then known as facial neuralgia. 
Its consequences may have led to the isolation which made his life's later years so tragic. His employment with the Coast Survey exempted Peirce from having to take part in the Civil War; it would have been very awkward for him to do so. At the Survey, he worked mainly in geodesy and gravimetry. He was elected a resident fellow of the American Academy of Arts and Sciences in January 1867. From 1869 to 1872, he was employed as an Assistant in Harvard's astronomical observatory, doing important work on determining the brightness of stars. On April 20, 1877, he was elected a member of the National Academy of Sciences. Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain frequency. During the 1880s, Peirce's indifference to bureaucratic detail waxed while his Survey work's quality and timeliness waned. Peirce took years to write reports that he should have completed in months. Meanwhile, he wrote entries, ultimately thousands during 1883–1909, on philosophy, logic, science, and other subjects for the encyclopedic Century Dictionary. In 1885, an investigation by the Allison Commission exonerated Peirce. In 1891, Peirce resigned from the Coast Survey at Superintendent Thomas Corwin Mendenhall's request; he never again held regular employment. In 1879, Peirce was appointed Lecturer in logic at Johns Hopkins University, which had strong departments in a number of areas that interested him, such as philosophy, psychology, and mathematics.
8.
Alfred Tarski
–
Alfred Tarski was a renowned Polish logician, mathematician and philosopher. Tarski taught and carried out research in mathematics at the University of California, Berkeley. Alfred Tarski was born Alfred Teitelbaum, to parents who were Polish Jews in comfortable circumstances relative to other Jews in the overall region. He first manifested his mathematical abilities while in school, at Warsaw's Szkoła Mazowiecka. Nevertheless, he entered the University of Warsaw in 1918 intending to study biology; Leśniewski recognized Tarski's potential as a mathematician and encouraged him to abandon biology. Tarski and Leśniewski soon grew cool to each other, however; in later life, Tarski reserved his warmest praise for Kotarbiński, and the feeling was mutual. In 1923, Alfred Teitelbaum and his brother Wacław changed their surname to Tarski. The Tarski brothers also converted to Roman Catholicism, Poland's dominant religion; Alfred did so even though he was an avowed atheist. Tarski was a Polish nationalist who saw himself as a Pole and wished to be fully accepted as such; later, in America, he spoke Polish at home. In 1929 Tarski married fellow teacher Maria Witkowska, a Pole of Catholic background. She had worked as a courier for the army in the Polish–Soviet War. They had two children: a son, Jan, who became a physicist, and a daughter, Ina, who married the mathematician Andrzej Ehrenfeucht. Tarski applied for a chair of philosophy at Lwów University, but on Bertrand Russell's recommendation it was awarded to Leon Chwistek. In 1930, Tarski visited the University of Vienna and lectured to Karl Menger's colloquium; thanks to a fellowship, he was able to return to Vienna during the first half of 1935 to work with Menger's research group. From Vienna he traveled to Paris to present his ideas on truth at the first meeting of the Unity of Science movement. In 1937, Tarski applied for a chair at Poznań University, but the chair was abolished. 
Tarski's ties to the Unity of Science movement likely saved his life: thanks to them he left Poland in August 1939, on the last ship to sail from Poland for the United States before the German and Soviet invasion of Poland and the outbreak of World War II. Tarski left reluctantly, because Leśniewski had died a few months before. Oblivious to the Nazi threat, he left his wife and children in Warsaw; he did not see them again until 1946. During the war, nearly all his Jewish extended family were murdered at the hands of the German occupying authorities. In 1942, Tarski joined the Mathematics Department at the University of California, Berkeley, where he spent the rest of his career. Tarski became an American citizen in 1945. Although emeritus from 1968, he taught until 1973 and supervised Ph.D. candidates until his death. At Berkeley, Tarski acquired a reputation as an awesome and demanding teacher. Tarski was extroverted, quick-witted, strong-willed, energetic, and sharp-tongued. He preferred his research to be collaborative, sometimes working all night with a colleague, and was very fastidious about priority. Some students were frightened away, but a circle of disciples remained, many of whom became world-renowned leaders in the field.
9.
Boolean algebra
–
In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. It is thus a formalism for describing logical relations in the same way that ordinary algebra describes numeric relations. Boolean algebra was introduced by George Boole in his first book, The Mathematical Analysis of Logic; according to Huntington, the term Boolean algebra was first suggested by Sheffer in 1913. Boolean algebra has been fundamental in the development of digital electronics, and it is also used in set theory and statistics. Boole's algebra predated the modern developments in algebra and mathematical logic. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder and Huntington; in fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. Shannon already had at his disposal the abstract mathematical apparatus, and thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today there is little need to consider other Boolean algebras, so switching algebra and Boolean algebra are often used interchangeably. Efficient implementation of Boolean functions is a problem in the design of combinational logic circuits. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra; thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers. The closely related model of computation known as a Boolean circuit relates time complexity to circuit complexity. Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true. These values are represented with the bits 0 and 1. 
Addition and multiplication then play the Boolean roles of XOR and AND, respectively. Boolean algebra also deals with functions which have their values in the set {0, 1}; a sequence of bits is a commonly used such function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. The basic operations of Boolean calculus are as follows: AND, denoted x∧y, satisfies x∧y = 1 if x = y = 1 and x∧y = 0 otherwise; OR, denoted x∨y, satisfies x∨y = 0 if x = y = 0 and x∨y = 1 otherwise; NOT, denoted ¬x, satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0. Alternatively, the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables. Among the derived operations, x → y, or Cxy, is called material implication: if x is true, then the value of x → y is taken to be that of y.
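The basic operations and the XOR/AND arithmetic correspondence described above can be sketched in Python over the bits 0 and 1:

```python
# Boolean operations on the bits 0 and 1.
def AND(x, y): return x & y
def OR(x, y):  return x | y
def NOT(x):    return 1 - x
def XOR(x, y): return x ^ y

bits = (0, 1)

# Truth tables, row order (0,0), (0,1), (1,0), (1,1).
assert [AND(x, y) for x in bits for y in bits] == [0, 0, 0, 1]
assert [OR(x, y)  for x in bits for y in bits] == [0, 1, 1, 1]

# Mod-2 arithmetic: addition plays the role of XOR, multiplication of AND.
for x in bits:
    for y in bits:
        assert (x + y) % 2 == XOR(x, y)
        assert x * y == AND(x, y)

# Material implication: if x is true, x -> y takes the value of y.
def IMPLIES(x, y): return OR(NOT(x), y)
assert [IMPLIES(x, y) for x in bits for y in bits] == [1, 1, 0, 1]
```

Defining implication as ¬x ∨ y, as here, is the standard reduction of the derived operation to the three basic ones.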
10.
Group (mathematics)
–
In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element. The operation satisfies four conditions called the group axioms, namely closure, associativity, identity and invertibility. Groups allow entities with highly diverse mathematical origins, in abstract algebra and beyond, to be handled in a flexible way while retaining their essential structural aspects. The ubiquity of groups in areas within and outside mathematics makes them a central organizing principle of contemporary mathematics. Groups share a kinship with the notion of symmetry. The concept of a group arose from the study of polynomial equations; after contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory, an active mathematical discipline, studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. A theory has developed for finite groups, which culminated with the classification of finite simple groups. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory. One of the most familiar groups is the set of integers Z, which consists of the numbers …, −4, −3, −2, −1, 0, 1, 2, 3, 4, …. The following properties of integer addition serve as a model for the group axioms given in the definition below. For any two integers a and b, the sum a + b is also an integer; that is, addition of integers always yields an integer. This property is known as closure under addition. For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words, adding a to b first, and then adding the result to c, gives the same final result as adding a to the sum of b and c. 
If a is any integer, then 0 + a = a + 0 = a; zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer a, there is an integer b such that a + b = b + a = 0. The integer b is called the inverse element of the integer a and is denoted −a. The integers, together with the operation +, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the abstract definition is developed.
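The four axioms, as instantiated by integer addition, can be spot-checked in Python. Since the integers are infinite, this is only a sketch over a random sample, not a proof:

```python
import random

sample = [random.randint(-100, 100) for _ in range(50)]

for a in sample:
    # Identity: 0 + a = a + 0 = a.
    assert 0 + a == a + 0 == a
    # Invertibility: a + (-a) = 0, so -a is the inverse element of a.
    assert a + (-a) == 0
    for b in sample:
        # Closure: the sum of two integers is again an integer.
        assert isinstance(a + b, int)
        for c in sample[:5]:
            # Associativity: (a + b) + c = a + (b + c).
            assert (a + b) + c == a + (b + c)
```

For Python's arbitrary-precision integers these assertions hold for any sample, which mirrors the fact that (Z, +) satisfies all four group axioms.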
11.
Permutation
–
In mathematics, a permutation of a set is, loosely speaking, an arrangement of its members into a sequence or linear order. Permutations differ from combinations, which are selections of some members of a set where order is disregarded. For example, written as tuples, there are six permutations of the set {1, 2, 3}, namely (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2) and (3, 2, 1); these are all the possible orderings of this three-element set. As another example, an anagram of a word, all of whose letters are different, is a permutation of its letters; in this example, the letters are already ordered in the original word, and the anagram is a reordering of the letters. The study of permutations of finite sets is a topic in the field of combinatorics. Permutations occur, in more or less prominent ways, in almost every area of mathematics. For similar reasons, permutations arise in the study of sorting algorithms in computer science. The number of permutations of n distinct objects is n factorial, usually written as n!, which means the product of all positive integers less than or equal to n. In algebra, and particularly in group theory, a permutation of a set S is defined as a bijection from S to itself; that is, it is a function from S to S for which every element occurs exactly once as an image value. This is related to the rearrangement of the elements of S in which each element s is replaced by the corresponding f(s). The collection of such permutations forms a group called the symmetric group of S. The key to this structure is the fact that the composition of two permutations results in another rearrangement. Permutations may act on structured objects by rearranging their components, or by certain replacements of symbols. In elementary combinatorics, the k-permutations, or partial permutations, are the ordered arrangements of k distinct elements selected from a set. When k is equal to the size of the set, these are the permutations of the set. Fabian Stedman in 1677 described factorials when explaining the number of permutations of bells in change ringing. 
Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 12 and 21. He then explains that with three bells there are "three times two figures to be produced out of three", which again is illustrated. His explanation involves: "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain". He then moves on to four bells and repeats the casting-away argument, showing that there will be four different sets of three; effectively this is a recursive process. He continues with five bells using the casting-away method and tabulates the resulting 120 combinations. At this point he gives up and remarks, "Now the nature of these methods is such …". In modern mathematics there are many similar situations in which understanding a problem requires studying certain permutations related to it. There are two equivalent common ways of regarding permutations, sometimes called the active and passive forms, or in older terminology substitutions and permutations; which form is preferable depends on the type of questions being asked in a given discipline. The active way to regard permutations of a set S is to view them as the bijections from S to itself. Thus, the permutations are thought of as functions which can be composed with each other, forming groups of permutations.
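The composition structure described above can be sketched in Python, representing a permutation of {0, …, n−1} as a tuple p where p[i] is the image of i:

```python
from itertools import permutations
from math import factorial

def compose(g, f):
    """Composition g after f: apply f first, then g."""
    return tuple(g[f[i]] for i in range(len(f)))

def inverse(p):
    """The inverse permutation of p."""
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

n = 4
S_n = list(permutations(range(n)))
assert len(S_n) == factorial(n)  # n! permutations of n objects

identity = tuple(range(n))
for p in S_n:
    # Each permutation composed with its inverse gives the identity.
    assert compose(p, inverse(p)) == identity
    # Closure: composing two permutations yields another permutation.
    assert compose(p, S_n[1]) in S_n
```

These two checks, together with associativity of function composition, are exactly what makes the collection of bijections of S a group, the symmetric group.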
12.
First-order logic
–
First-order logic – also known as first-order predicate calculus and predicate logic – is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects; this distinguishes it from propositional logic, which does not use quantifiers. Sometimes "theory" is understood in a more formal sense: just a set of sentences in first-order logic. In first-order theories, predicates are associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets. There are many deductive systems for first-order logic which are both sound and complete. Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem. First-order logic is the standard for the formalization of mathematics into axioms and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures can be obtained in stronger logics such as second-order logic. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós. While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates. A predicate takes an entity or entities in the domain of discourse as input and outputs either True or False. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences are viewed as being unrelated and might be denoted, for example, by variables such as p and q.
The predicate "is a philosopher" occurs in both sentences, which have a common structure of "a is a philosopher". The variable a is instantiated as Socrates in the first sentence and as Plato in the second. While first-order logic allows for the use of predicates, such as "is a philosopher" in this example, propositional logic does not. Relationships between predicates can be stated using logical connectives. Consider, for example, the first-order formula "if a is a philosopher, then a is a scholar". This formula is a conditional statement with "a is a philosopher" as its hypothesis and "a is a scholar" as its conclusion. The truth of this formula depends on which object is denoted by a. Quantifiers can be applied to variables in a formula. The variable a in the previous formula can be universally quantified, for instance, with the first-order sentence "For every a, if a is a philosopher, then a is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if a is a philosopher, then a is a scholar" holds for all choices of a. The negation of the sentence "For every a, if a is a philosopher, then a is a scholar" is logically equivalent to the sentence "There exists a such that a is a philosopher and a is not a scholar".
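Over a finite domain of discourse, quantifiers reduce to iteration, which makes the equivalence at the end of the passage easy to demonstrate. The following sketch uses a hypothetical domain and predicate definitions (the names and membership sets are illustrative assumptions, not from the text):

```python
# Hypothetical finite domain of discourse; predicates modeled as functions.
domain = ["Socrates", "Plato", "Aristotle"]
philosophers = {"Socrates", "Plato"}
scholars = {"Socrates", "Plato"}

def is_philosopher(a): return a in philosophers
def is_scholar(a): return a in scholars

# "For every a, if a is a philosopher, then a is a scholar."
forall = all(not is_philosopher(a) or is_scholar(a) for a in domain)

# Its negation: "There exists a such that a is a philosopher and not a scholar."
exists_counterexample = any(is_philosopher(a) and not is_scholar(a) for a in domain)

# The two sentences are logical opposites, whatever the domain.
assert forall == (not exists_counterexample)
```

Note that `all(...)` over an implication and `any(...)` over its negated body mirror the ∀/∃ duality ¬∀a.P(a) ≡ ∃a.¬P(a).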
13.
Theory (logic)
–
In mathematical logic, a theory is a set of sentences in a formal language. Usually a deductive system is understood from context. An element ϕ ∈ T of a theory T is then called an axiom of the theory, and any sentence that follows from the axioms is called a theorem of the theory. Every axiom is also a theorem. A first-order theory is a set of first-order sentences. When defining theories for foundational purposes, additional care must be taken. The construction of a theory begins by specifying a definite non-empty conceptual class E, the elements of which are called statements. These initial statements are often called the elementary statements of the theory. A theory T is a class consisting of certain of these elementary statements. The elementary statements which belong to T are called the elementary theorems of T; in this way, a theory is a way of designating a subset of E which consists entirely of true statements. This general way of designating a theory stipulates that the truth of any of its statements is not known without reference to T. Thus the same statement may be true with respect to one theory and false with respect to another. This is as in ordinary language, where statements such as "He is a terrible person" cannot be judged to be true or false without reference to some interpretation of who "He" is. A theory S is a subtheory of a theory T if S is a subset of T. If T is a subset of S, then S is an extension or supertheory of T. A theory is said to be a deductive theory if T is an inductive class; that is, its content is based on some formal deductive system. In a deductive theory, any sentence which is a logical consequence of one or more of the axioms is also a sentence of that theory. A syntactically consistent theory is a theory from which not every sentence in the underlying language can be proven.
In a deductive system that satisfies the principle of explosion, this is equivalent to requiring that there is no sentence φ such that both φ and its negation can be proven from the theory. A satisfiable theory is a theory that has a model; this means there is a structure M that satisfies every sentence in the theory. Any satisfiable theory is syntactically consistent, because the structure satisfying the theory will satisfy exactly one of φ and the negation of φ, for each sentence φ. A consistent theory is sometimes defined to be a syntactically consistent theory, and sometimes defined to be a satisfiable theory. For first-order logic, the most important case, it follows from the completeness theorem that the two meanings coincide.
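The subtheory and extension relations above are just set inclusion. A minimal sketch, with theories modeled as plain sets of sentence strings (the sentences themselves are illustrative):

```python
# Theories as sets of sentences (strings); subtheory = subset.
T = {"p", "p -> q", "q"}
S = {"p", "q"}

assert S <= T          # S is a subtheory of T
assert T >= S          # equivalently, T is an extension (supertheory) of S
assert not (T <= S)    # T is not a subtheory of S
```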
14.
Multiplicative inverse
–
In mathematics, a multiplicative inverse or reciprocal for a number x, denoted by 1/x or x−1, is a number which, when multiplied by x, yields the multiplicative identity, 1. The multiplicative inverse of a fraction a/b is b/a. For the multiplicative inverse of a real number, divide 1 by the number; for example, the reciprocal of 5 is one fifth. The reciprocal function, the function f that maps x to 1/x, is one of the simplest examples of a function which is its own inverse. In the phrase "multiplicative inverse", the qualifier "multiplicative" is often omitted. Multiplicative inverses can be defined over many mathematical domains as well as numbers. In these cases it can happen that ab ≠ ba; then "inverse" typically implies that an element is both a left and right inverse. The notation f −1 is sometimes used for the inverse function of the function f, which is in general not the same as the multiplicative inverse. For example, the multiplicative inverse 1/(sin x) = (sin x)−1 is the cosecant of x, not the inverse sine; only for linear maps are the two notions strongly related. The terminology difference "reciprocal" versus "inverse" is not sufficient to make this distinction, since many authors prefer the opposite naming convention. In the real numbers, zero does not have a reciprocal because no real number multiplied by 0 produces 1. With the exception of zero, the reciprocal of every real number is real, and the reciprocal of every rational number is rational. The property that every element other than zero has a multiplicative inverse is part of the definition of a field, of which these are all examples. On the other hand, no integer other than 1 and −1 has an integer reciprocal. In modular arithmetic, the modular multiplicative inverse of a modulo n is also defined; this multiplicative inverse exists if and only if a and n are coprime. For example, the inverse of 3 modulo 11 is 4 because 4 · 3 = 12 ≡ 1 (mod 11). The extended Euclidean algorithm may be used to compute it. The sedenions are an algebra in which every nonzero element has a multiplicative inverse, but which nonetheless has divisors of zero, i.e.
nonzero elements x, y such that xy = 0. A square matrix has an inverse if and only if its determinant has an inverse in the coefficient ring. The linear map that has the matrix A−1 with respect to some base is then the reciprocal function of the map having A as matrix in the same base. Thus, the two notions of the inverse of a function are strongly related in this case, while they must be carefully distinguished in the general case. A ring in which every nonzero element has a multiplicative inverse is a division ring. As mentioned above, the reciprocal of every nonzero complex number z = a + bi is complex. In particular, if ||z|| = 1, then 1/z = z̄. Consequently, the imaginary units, ±i, have additive inverse equal to multiplicative inverse, and are the only complex numbers with this property.
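The extended Euclidean algorithm mentioned above can be sketched directly. This is a standard textbook implementation, not tied to any particular library:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    """Multiplicative inverse of a modulo n; exists iff gcd(a, n) == 1."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError(f"{a} has no inverse modulo {n}")
    return x % n

print(mod_inverse(3, 11))   # 4, since 4 * 3 = 12 ≡ 1 (mod 11)
```

Trying `mod_inverse(4, 8)` raises `ValueError`, matching the coprimality condition: 4 and 8 share the factor 2.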
15.
Boolean algebra (structure)
–
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets; it is also a special case of a De Morgan algebra and a Kleene algebra. The term "Boolean algebra" honors George Boole, a self-educated English mathematician. Boole's formulation differs from that described above in some important respects; for example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models. A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. It follows from the last three pairs of axioms above, or from the absorption axiom, that a = b ∧ a if and only if a ∨ b = b. The relation ≤ defined by a ≤ b if these equivalent conditions hold is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra, one obtains another Boolean algebra with the same elements. In applications to logic circuits, furthermore, every possible input-output behavior can be modeled by a suitable Boolean expression. The power set of any nonempty set S forms a Boolean algebra in which the smallest element 0 is the empty set and the largest element 1 is the set S itself. Starting from the propositional calculus with κ sentence symbols, form the Lindenbaum algebra. This construction yields a Boolean algebra; it is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice; this lattice is a Boolean algebra if and only if n is square-free.
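The power-set example can be checked exhaustively for a small set. The sketch below verifies, on the power set of {1, 2, 3}, the order characterization a = b ∧ a ⇔ a ∨ b = b from the passage, plus one De Morgan law (join = union, meet = intersection, complement = difference from S):

```python
from itertools import chain, combinations

S = {1, 2, 3}

def powerset(s):
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

elements = powerset(S)          # 0 is the empty set, 1 is S itself
for a in elements:
    for b in elements:
        # a = b ∧ a  if and only if  a ∨ b = b  (this defines the order a ≤ b)
        assert (a == (b & a)) == ((a | b) == b)
        # De Morgan: the complement of a join is the meet of the complements.
        assert S - (a | b) == (S - a) & (S - b)
print("checked", len(elements), "elements")   # 8 elements = 2**3
```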
16.
Logical disjunction
–
In logic and mathematics, or is the truth-functional operator of disjunction, also known as alternation; the or of a set of operands is true if and only if one or more of its operands is true. The logical connective that represents this operator is written as ∨ or +. "A or B" is true if A is true, or if B is true, or if both A and B are true. In logic, "or" by itself means the inclusive or, distinguished from the exclusive or. An operand of a disjunction is called a disjunct. Related concepts in other fields are: in natural language, the coordinating conjunction "or"; in programming languages, the short-circuit or control structure. Or is usually expressed with an infix operator: in mathematics and logic, ∨; in electronics, +; and in most programming languages, |, ||, or or. In Jan Łukasiewicz's prefix notation for logic, the operator is A. Logical disjunction is an operation on two logical values, typically the values of two propositions, that has a value of false if and only if both of its operands are false. More generally, a disjunction is a formula that can have one or more literals separated only by ors. A single literal is often considered to be a degenerate disjunction. The disjunctive identity is false, which is to say that the or of an expression with false has the same value as the original expression. In keeping with the concept of vacuous truth, when disjunction is defined as an operator or function of arbitrary arity, the empty disjunction is defined as false. Disjunction is falsehood-preserving: the interpretation under which all variables are assigned a value of false produces a truth value of false as a result of disjunction. The mathematical symbol for logical disjunction varies in the literature; in addition to the word "or" and the formula Apq, the symbol ∨, deriving from the Latin word vel, is commonly used for disjunction. For example, A ∨ B is read as "A or B". Such a disjunction is false if both A and B are false; in all other cases it is true. All of the following are disjunctions: A ∨ B, ¬A ∨ B, A ∨ ¬B ∨ ¬C ∨ D ∨ ¬E.
The corresponding operation in set theory is the set-theoretic union. Operators corresponding to logical disjunction exist in most programming languages. Disjunction is often used for bitwise operations; for example, x = x | 0b00000001 will force the final bit to 1 while leaving other bits unchanged. Logical disjunction is usually short-circuited: if the first operand evaluates to true, then the second operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point. In a parallel language, it is possible to evaluate both sides: they are evaluated in parallel, and if one terminates with value true, the other is interrupted.
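The short-circuit and bitwise behaviors described above can both be shown in a few lines of Python (`or` is the short-circuit form, `|` the bitwise form):

```python
# Truth table of inclusive or: false only when both operands are false.
table = [(a, b, a or b) for a in (False, True) for b in (False, True)]
assert [row[2] for row in table] == [False, True, True, True]

# Short-circuit: if the left operand is true, the right is never evaluated.
def right_side():
    raise RuntimeError("evaluated")
assert (True or right_side()) is True   # right_side() is skipped entirely

# Bitwise or: force the final bit to 1, leaving the other bits unchanged.
x = 0b10101010
x = x | 0b00000001
assert x == 0b10101011
```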
17.
Lattice (order)
–
A lattice is an abstract structure studied in the mathematical subdisciplines of order theory and abstract algebra. It consists of a partially ordered set in which every two elements have a unique supremum and a unique infimum. An example is given by the natural numbers, partially ordered by divisibility, for which the unique supremum is the least common multiple and the unique infimum is the greatest common divisor. Lattices can also be characterized as algebraic structures satisfying certain axiomatic identities; since the two definitions are equivalent, lattice theory draws on both order theory and universal algebra. Semilattices include lattices, which in turn include Heyting and Boolean algebras; these lattice-like structures all admit order-theoretic as well as algebraic descriptions. If (L, ≤) is a partially ordered set and S ⊆ L is an arbitrary subset, then an element u ∈ L is said to be an upper bound of S if s ≤ u for each s ∈ S. A set may have many upper bounds, or none at all. An upper bound u of S is said to be its least upper bound, or join, or supremum, if u ≤ x for each upper bound x of S. A set need not have a least upper bound, but it cannot have more than one. Dually, l ∈ L is said to be a lower bound of S if l ≤ s for each s ∈ S. A lower bound l of S is said to be its greatest lower bound, or meet, or infimum, if x ≤ l for each lower bound x of S; a set may have many lower bounds, or none at all, but can have at most one greatest lower bound. A partially ordered set (L, ≤) is called a join-semilattice if each two-element subset {a, b} ⊆ L has a join (denoted a ∨ b), and a meet-semilattice if each such subset has a meet (denoted a ∧ b); it is called a lattice if it is both a join- and a meet-semilattice. This definition makes ∨ and ∧ binary operations. Both operations are monotone with respect to the order: a1 ≤ a2 and b1 ≤ b2 implies that a1 ∨ b1 ≤ a2 ∨ b2 and a1 ∧ b1 ≤ a2 ∧ b2. It follows by an induction argument that every non-empty finite subset of a lattice has a least upper bound and a greatest lower bound.
With additional assumptions, further conclusions may be possible; see Completeness (order theory) for more discussion of this subject. A bounded lattice is a lattice that additionally has a greatest element 1 and a least element 0, which satisfy 0 ≤ x ≤ 1 for every x in L. The greatest and least elements are also called the maximum and minimum, or the top and bottom elements. A partially ordered set is a bounded lattice if and only if every finite set of elements (including the empty set) has a join and a meet. Taking B to be the empty set, ⋁(A ∪ ∅) = (⋁A) ∨ (⋁∅) = (⋁A) ∨ 0 = ⋁A and ⋀(A ∪ ∅) = (⋀A) ∧ (⋀∅) = (⋀A) ∧ 1 = ⋀A, which is consistent with the fact that A ∪ ∅ = A. A lattice element y is said to cover another element x if y > x but there does not exist a z such that y > z > x. Here, y > x means x ≤ y and x ≠ y.
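The divisibility example is concrete enough to compute with: under "a ≤ b iff a divides b", join is the least common multiple and meet is the greatest common divisor. A small sketch, including the monotonicity property stated above:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# The natural numbers ordered by divisibility form a lattice:
# join = least common multiple, meet = greatest common divisor.
assert lcm(12, 18) == 36 and gcd(12, 18) == 6

# Monotonicity: 4 divides 12 and 6 divides 18 (a1 ≤ a2, b1 ≤ b2),
# so lcm(4, 6) divides lcm(12, 18) and gcd(4, 6) divides gcd(12, 18).
assert lcm(12, 18) % lcm(4, 6) == 0
assert gcd(12, 18) % gcd(4, 6) == 0
```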
18.
Monoid
–
In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single associative binary operation and an identity element. Monoids are studied in semigroup theory, as they are semigroups with identity. Monoids occur in several branches of mathematics; for instance, they can be regarded as categories with a single object. Thus, they capture the idea of function composition within a set. In fact, all functions from a set into itself naturally form a monoid with respect to function composition. Monoids are also commonly used in computer science, both in its foundational aspects and in practical programming. The set of strings built from a given set of characters is a free monoid. The transition monoid and syntactic monoid are used in describing finite state machines, whereas trace monoids and history monoids provide a foundation for process calculi. Some of the more important results in the study of monoids include the Krohn–Rhodes theorem. The history of monoids, as well as a discussion of additional general properties, is found in the article on semigroups. Identity element: there exists an element e in S such that for every element a in S, the equations e • a = a • e = a hold. In other words, a monoid is a semigroup with an identity element. It can also be thought of as a magma with associativity and identity. The identity element of a monoid is unique. A monoid in which each element has an inverse is a group. Depending on the context, the symbol for the operation may be omitted, so that the operation is denoted by juxtaposition; this notation does not imply that it is numbers being multiplied. A subset N of M that is closed under the operation and contains the identity is thus a monoid under the binary operation inherited from M. If there is a generating set of M that has finite cardinality, then M is said to be finitely generated. Not every set S will generate a monoid, as the generated structure may lack an identity element. A monoid whose operation is commutative is called a commutative monoid; commutative monoids are often written additively.
Any commutative monoid is endowed with its algebraic preordering ≤, defined by x ≤ y if there exists z such that x + z = y. An order-unit of a commutative monoid M is an element u of M such that for any element x of M, there exists a positive integer n such that x ≤ nu. This is often used in case M is the positive cone of a partially ordered abelian group G. A monoid for which the operation is commutative for some, but not all, elements is a trace monoid; trace monoids commonly occur in the theory of concurrent computation.
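Monoids map directly onto folds in programming: an associative operation plus an identity lets you combine any list of values, including the empty list. A sketch (the helper name `mconcat` is borrowed loosely from Haskell's convention and is illustrative):

```python
from functools import reduce

def mconcat(op, identity, xs):
    """Fold a list with a monoid operation; the empty list yields the identity."""
    return reduce(op, xs, identity)

# Strings under concatenation form a (free) monoid with identity "".
assert mconcat(lambda a, b: a + b, "", ["ab", "c", "d"]) == "abcd"
assert mconcat(lambda a, b: a + b, "", []) == ""   # identity element

# Natural numbers under * (identity 1) form a commutative monoid.
assert mconcat(lambda a, b: a * b, 1, [2, 3, 4]) == 24
```

Associativity is what makes such folds safe to regroup or parallelize; the identity is what makes the empty case well-defined.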
19.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variables are sometimes referred to as the arguments of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could also be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word "range" is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers. The image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, and multiplication of functions; another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape that has more than one color. This function will be referred to as the "color-of-the-shape" function. The input to a function is called the argument and the output is called the value.
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any natural number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
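Both examples above — the squaring function and n ↦ 4 − n — are easy to express directly; the shape-to-color function can be modeled as a dictionary, since a dict pairs each key with exactly one value (the shape and color names here are illustrative):

```python
# Each input is related to exactly one output; two inputs may share an output.
def f(x):
    return x ** 2

assert f(-3) == 9 and f(3) == 9

# The second example from the text: g(n) = 4 - n on the natural numbers.
def g(n):
    return 4 - n

assert g(1) == 3 and g(10) == -6

# A finite function as a dict: every shape (key) has exactly one color (value).
color_of = {"circle": "red", "square": "blue", "triangle": "red"}
assert all(shape in color_of for shape in ("circle", "square", "triangle"))
```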
20.
Injective
–
In mathematics, an injective function (or injection, or one-to-one function) is a function that preserves distinctness: it never maps distinct elements of its domain to the same element of its codomain. In other words, every element of the codomain is the image of at most one element of its domain. The term one-to-one function must not be confused with one-to-one correspondence (bijection). Occasionally, an injective function from X to Y is denoted f: X ↣ Y, using an arrow with a barbed tail. A function f that is not injective is sometimes called many-to-one. However, the injective terminology is also sometimes used to mean "single-valued", i.e. each argument is mapped to at most one value. A monomorphism is a generalization of an injective function in category theory. Let f be a function whose domain is a set X. The function f is said to be injective provided that for all a and b in X, whenever f(a) = f(b), then a = b; that is, f(a) = f(b) implies a = b. Equivalently, if a ≠ b, then f(a) ≠ f(b). In particular, the identity function X → X is always injective. If the domain X = ∅ or X has only one element, then the function f is always injective. The function f: R → R defined by f(x) = 2x + 1 is injective. The function g: R → R defined by g(x) = x² is not injective; however, if g is redefined so that its domain is the non-negative real numbers [0, +∞), then g is injective. The exponential function exp: R → R defined by exp(x) = eˣ is injective. The natural logarithm function ln: (0, +∞) → R defined by x ↦ ln x is injective. The function g: R → R defined by g(x) = xⁿ − x is not injective, since, for example, g(0) = g(1) = 0. More generally, when X and Y are both the real line R, an injective function f: R → R is one whose graph is never intersected by any horizontal line more than once. This principle is referred to as the horizontal line test. Functions with left inverses are always injections; that is, given f: X → Y, if there is a function g: Y → X such that for every x ∈ X, g(f(x)) = x, then f is injective. In this case, g is called a retraction of f; conversely, f is called a section of g.
Conversely, every injection f with non-empty domain has a left inverse g. Note that g may not be a full inverse of f, because the composition in the other order, f ∘ g, may not be the identity on Y. In other words, an injective function can be "undone" by a left inverse; injections are reversible but not always invertible.
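Over a finite domain, injectivity can be checked by watching for a repeated output. The sketch below mirrors the examples in the passage, including the domain restriction that makes x² injective:

```python
def is_injective(f, domain):
    """Injective: no two distinct inputs share an output."""
    seen = {}
    for x in domain:
        y = f(x)
        if y in seen and seen[y] != x:
            return False
        seen[y] = x
    return True

xs = range(-5, 6)
assert is_injective(lambda x: 2 * x + 1, xs)        # f(x) = 2x + 1 is injective
assert not is_injective(lambda x: x * x, xs)        # (-1)**2 == 1**2
assert is_injective(lambda x: x * x, range(0, 6))   # injective on a restricted domain
```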
21.
Surjective function
–
In mathematics, a function f from a set X to a set Y is surjective (or onto) if for every element y in Y there is at least one element x in X such that f(x) = y. It is not required that x be unique; the function f may map one or more elements of X to the same element of Y. The French prefix sur means "over" or "above" and relates to the fact that the image of the domain of a surjective function completely covers the function's codomain. Any function induces a surjection by restricting its codomain to its range. Every surjective function has a right inverse, and every function with a right inverse is necessarily a surjection. The composite of surjective functions is always surjective, and any function can be decomposed into a surjection and an injection. A surjective function is a function whose image is equal to its codomain; equivalently, a function f with domain X and codomain Y is surjective if for every y in Y there exists at least one x in X with f(x) = y. Surjections are sometimes denoted by a two-headed rightwards arrow, as in f: X ↠ Y. Symbolically: if f: X → Y, then f is said to be surjective if ∀y ∈ Y, ∃x ∈ X, f(x) = y. For any set X, the identity function idX on X is surjective. The function f: Z → {0, 1} defined by f(n) = n mod 2 is surjective. The function f: R → R defined by f(x) = 2x + 1 is surjective, because for every real number y we have an x such that f(x) = y; an appropriate x is (y − 1)/2. The function f: R → R defined by f(x) = x³ − 3x is surjective, but this function is not injective, since, for example, the pre-image of y = 2 is {−1, 2}. The function g: R → R defined by g(x) = x² is not surjective, because there is no real number x such that x² = −1. However, the function g: R → [0, +∞) defined by g(x) = x² is surjective, because for every y in the nonnegative real codomain there is at least one x in the real domain X such that x² = y. The natural logarithm ln: (0, +∞) → R is a surjective function. Its inverse, the exponential function, is not surjective onto R, as its range is the set of positive real numbers. The matrix exponential is not surjective when seen as a map from the space of all n×n matrices to itself. It is, however, usually defined as a map from the space of all n×n matrices to the general linear group of degree n, i.e. the group of all n×n invertible matrices.
Under this definition the matrix exponential is surjective for complex matrices. The projection from a cartesian product A × B to one of its factors is surjective, unless the other factor is empty. In a 3D video game, vectors are projected onto a 2D flat screen by means of a surjective function. A function is bijective if and only if it is both surjective and injective. If a function is identified with its graph, then surjectivity is not a property of the function itself; unlike injectivity, surjectivity cannot be read off of the graph of the function alone. The function g: Y → X is said to be a right inverse of the function f: X → Y if f(g(y)) = y for every y in Y.
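With finite sets, surjectivity is simply the statement that the set of outputs covers the codomain, which the following sketch checks for the examples from the passage:

```python
def is_surjective(f, domain, codomain):
    """Surjective: every element of the codomain is the image of some input."""
    return {f(x) for x in domain} >= set(codomain)

# n mod 2 maps the naturals onto {0, 1}.
assert is_surjective(lambda n: n % 2, range(10), {0, 1})

# x**2 covers the squares that actually occur on this domain...
assert is_surjective(lambda x: x * x, range(-2, 3), {0, 1, 4})
# ...but nothing squares to -1, so a codomain containing -1 is not covered.
assert not is_surjective(lambda x: x * x, range(-2, 3), {-1, 0, 1, 4})
```

Restricting the codomain to the actual range — as the text notes — always turns a function into a surjection.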
22.
Bijection
–
In mathematical terms, a bijective function f: X → Y is a one-to-one and onto mapping of a set X to a set Y. A bijection from the set X to the set Y has an inverse function from Y to X. If X and Y are finite sets, then the existence of a bijection means they have the same number of elements. For infinite sets the picture is more complicated, leading to the concept of cardinal number. A bijective function from a set to itself is called a permutation. Bijective functions are essential to many areas of mathematics, including the definitions of isomorphism, homeomorphism, diffeomorphism, and permutation group. Satisfying the first two properties means that a bijection is a function with domain X; it is more common to see them written as a single statement: every element of X is paired with exactly one element of Y. Functions which pair each element of Y with at least one element of X are said to be onto Y and are called surjections; functions for which no element of Y is paired with more than one element of X are said to be one-to-one functions and are called injections. With this terminology, a bijection is a function which is both a surjection and an injection, or, using other words, a bijection is a function which is both one-to-one and onto. Consider the batting line-up of a baseball or cricket team: the set X will be the players on the team and the set Y will be the positions in the batting order, and the pairing is given by which player is in what position in this order. One property is satisfied since each player is somewhere in the list, and another since no player bats in two positions in the order. A further property says that for each position in the order, there is some player batting in that position. In a classroom there are a certain number of seats. A bunch of students enter the room and the instructor asks them all to be seated. After a quick look around the room, the instructor declares that there is a bijection between the set of students and the set of seats, where each student is paired with the seat they are sitting in.
The instructor was able to conclude that there were just as many seats as there were students. For any set X, the identity function 1X: X → X is a bijection. The function f: R → R, f(x) = 2x + 1 is bijective, since for each y there is a unique x = (y − 1)/2 such that f(x) = y. In more generality, any linear function over the reals, f: R → R, f(x) = ax + b with a ≠ 0, is a bijection; each real number y is obtained from the real number x = (y − b)/a. The function f: R → (−π/2, π/2) given by f(x) = arctan(x) is bijective, since each real number x is paired with exactly one angle y in the interval (−π/2, π/2) so that tan(y) = x.
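For a finite domain, a bijection onto its image can be inverted simply by swapping each input-output pair, and injectivity is exactly the condition that makes the swap well-defined. A minimal sketch:

```python
def inverse(f, domain):
    """Invert f on a finite domain; fails if two inputs share an output."""
    inv = {}
    for x in domain:
        y = f(x)
        if y in inv:
            raise ValueError("f is not injective, so it has no inverse")
        inv[y] = x
    return inv

f = lambda x: 2 * x + 1
inv = inverse(f, range(5))
assert all(inv[f(x)] == x for x in range(5))   # the inverse undoes f
assert len(inv) == 5                           # same number of elements both ways
```

The last assertion is the finite-set fact from the passage: a bijection between finite sets forces them to have the same number of elements.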
23.
Reflexive relation
–
In mathematics, a binary relation R over a set X is reflexive if every element of X is related to itself. In mathematical notation, this is: ∀a ∈ X, a R a. An example of a reflexive relation is the relation "is equal to" on the set of real numbers. A reflexive relation is said to have the reflexive property or is said to possess reflexivity. A relation that is irreflexive, or anti-reflexive, is a relation on a set where no element is related to itself; an example is the "greater than" relation on the real numbers. Note that not every relation which is not reflexive is irreflexive; it is possible to define relations where some elements are related to themselves but others are not. A relation ~ on a set S is called quasi-reflexive if every element that is related to some element is also related to itself; formally: ∀x, y ∈ S: x ~ y ⇒ (x ~ x ∧ y ~ y). The reflexive closure ≃ of a binary relation ~ on a set S is the smallest reflexive relation on S that is a superset of ~. Equivalently, it is the union of ~ and the identity relation on S; formally, ≃ = ~ ∪ {(x, x) | x ∈ S}. For example, the reflexive closure of x < y is x ≤ y. The reflexive reduction, or irreflexive kernel, of a binary relation ~ on a set S is the smallest relation ≆ such that ≆ shares the same reflexive closure as ~; it can be seen in a way as the opposite of the reflexive closure. It is the complement of the identity relation on S with regard to ~; formally, ≆ = ~ \ {(x, x) | x ∈ S}. That is, it is equivalent to ~ except for where x ~ x is true. For example, the reflexive reduction of x ≤ y is x < y. Authors in philosophical logic often use deviating designations: a reflexive and a quasi-reflexive relation in the mathematical sense are called a totally reflexive and a reflexive relation in the philosophical logic sense, respectively. See also: Binary relation, Symmetric relation, Antisymmetric relation, Transitive relation.
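The closure and reduction definitions translate directly into set operations on pairs, as this sketch shows using the < versus ≤ example from the passage:

```python
def reflexive_closure(rel, universe):
    """Smallest reflexive superset of rel: add every pair (x, x)."""
    return set(rel) | {(x, x) for x in universe}

def reflexive_reduction(rel):
    """Irreflexive kernel: drop every pair (x, x)."""
    return {(a, b) for (a, b) in rel if a != b}

S = {1, 2, 3}
less = {(a, b) for a in S for b in S if a < b}
leq = reflexive_closure(less, S)          # the closure of < is <=
assert (1, 1) in leq and (1, 2) in leq
assert reflexive_reduction(leq) == less   # the reduction of <= is <
```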
24.
Symmetric relation
–
In mathematics and other areas, a binary relation R over a set X is symmetric if it holds for all a and b in X that a is related to b if and only if b is related to a. In mathematical notation, this is: ∀a, b ∈ X, a R b ⇔ b R a. Examples include "is equal to", "is comparable to", "... and ... are both odd", "is married to", "is a fully biological sibling of", and "is a homophone of". By definition, a relation cannot be both symmetric and asymmetric. However, a relation can be neither symmetric nor asymmetric, which is the case for "is less than or equal to". Symmetric and antisymmetric are actually independent of each other, as these examples show. A symmetric relation that is also transitive and reflexive is an equivalence relation. One way to conceptualize a symmetric relation in graph theory is that a symmetric relation is an edge, with the edge's two vertices being the two entities so related. Thus, symmetric relations and undirected graphs are combinatorially equivalent objects. See also: Asymmetric relation, Antisymmetric relation, Commutative property, Symmetry in mathematics, Symmetry.
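Representing a relation as a set of ordered pairs makes the symmetry condition a one-line check, and it also shows the correspondence with undirected edges mentioned above:

```python
def is_symmetric(rel):
    """a R b implies b R a, for every pair in the relation."""
    return all((b, a) in rel for (a, b) in rel)

assert is_symmetric({(1, 2), (2, 1), (3, 3)})
assert not is_symmetric({(1, 2)})   # 2 is not related back to 1

# A symmetric relation is combinatorially an undirected graph:
# each unordered pair {a, b} stands for both (a, b) and (b, a).
edges = {frozenset(p) for p in {(1, 2), (2, 1), (3, 3)}}
assert edges == {frozenset({1, 2}), frozenset({3})}
```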
25.
Antisymmetric relation
–
In mathematics, a binary relation R on a set X is antisymmetric if there is no pair of distinct elements of X each of which is related by R to the other. More formally, R is antisymmetric precisely if for all a and b in X, if a R b and b R a, then a = b; or, equivalently, if a R b with a ≠ b, then b R a must not hold. As a simple example, the divisibility order on the natural numbers is an antisymmetric relation. In mathematical notation, this is: ∀a, b ∈ X, (a R b ∧ b R a) ⇒ a = b or, equivalently, ∀a, b ∈ X, (a R b ∧ a ≠ b) ⇒ ¬(b R a). The usual order relation ≤ on the real numbers is antisymmetric. A relation can be both symmetric and antisymmetric, and there are relations which are neither symmetric nor antisymmetric. Antisymmetry is different from asymmetry, which requires both antisymmetry and irreflexivity. The relation "x is even, y is odd" between a pair of integers is antisymmetric. Every asymmetric relation is also an antisymmetric relation. See also: Symmetric relation, Asymmetric relation, Symmetry in mathematics.
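The divisibility example can be verified exhaustively on a small range of naturals, alongside a relation that fails the condition:

```python
def is_antisymmetric(rel):
    """No two distinct elements are related to each other in both directions."""
    return all(a == b for (a, b) in rel if (b, a) in rel)

# Divisibility on {1, ..., 12}: a R b iff a divides b.
divides = {(a, b) for a in range(1, 13) for b in range(1, 13) if b % a == 0}
assert is_antisymmetric(divides)

# Equality is both symmetric and antisymmetric.
equal = {(a, a) for a in range(1, 13)}
assert is_antisymmetric(equal)

assert not is_antisymmetric({(1, 2), (2, 1)})   # distinct, related both ways
```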