1.
Addition
–
Addition is one of the four basic operations of arithmetic, the others being subtraction, multiplication and division. The addition of two numbers is the total amount of those quantities combined. For example, in the picture on the right, there is a combination of three apples and two apples, making a total of five apples. This observation is equivalent to the mathematical expression 3 + 2 = 5, i.e. "3 add 2 is equal to 5". Besides counting fruits, addition can also represent combining other physical objects. In arithmetic, rules for addition involving fractions and negative numbers, among others, have been devised; in algebra, addition is studied more abstractly. Addition is commutative, meaning that order does not matter, and it is associative, meaning that when more than two numbers are added, the order in which the additions are performed does not matter. Repeated addition of 1 is the same as counting, and addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months, and even by some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits. Mechanical aids range from the ancient abacus to the modern computer. Addition is written using the plus sign + between the terms, that is, in infix notation. The result is expressed with an equals sign; for example, 3½ = 3 + ½ = 3.5. This notation can cause confusion, since in most other contexts juxtaposition denotes multiplication instead. The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example, ∑ₖ₌₁⁵ k² = 1² + 2² + 3² + 4² + 5² = 55. The numbers or the objects to be added are collectively referred to as the terms, the addends or the summands.
This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend; in fact, during the Renaissance, many authors did not consider the first addend an addend at all. Today, due to the commutative property of addition, augend is rarely used, and both terms are generally called addends. All of the above terminology derives from Latin: using the gerundive suffix -nd results in addend, "thing to be added". Likewise, from augere, "to increase", one gets augend, "thing to be increased". Sum and summand derive from the Latin noun summa, "the highest, the top", and the associated verb summare.
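The commutativity, associativity, and identity properties described above can be checked directly in a few lines; this is a minimal Python sketch, with the sample values chosen arbitrarily.

```python
# Checking the stated properties of addition on sample values.

a, b, c = 3, 2, 7

assert a + b == b + a              # commutative: order does not matter
assert (a + b) + c == a + (b + c)  # associative: grouping does not matter
assert a + 0 == a                  # adding 0 changes nothing

# Repeated addition of 1 is the same as counting up from a.
total = a
for _ in range(b):
    total += 1
assert total == a + b == 5         # 3 + 2 = 5, as in the apples example
```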
2.
Analytic philosophy
–
Analytic philosophy is a style of philosophy that became dominant in English-speaking countries at the beginning of the 20th century: in the United Kingdom, United States, Canada, Australia, New Zealand, and Scandinavia. As a historical development, analytic philosophy refers to certain developments in early 20th-century philosophy that were the historical antecedents of the current practice; central figures in that development are Bertrand Russell, Ludwig Wittgenstein, G. E. Moore, and Gottlob Frege. This outlook may be contrasted with traditional foundationalism, which considers philosophy to be a special science that investigates fundamental reasons. Consequently, many analytic philosophers have considered their inquiries as continuous with, or subordinate to, those of the natural sciences. This is an attitude that begins with John Locke, who described his work as that of an underlabourer to the achievements of scientists such as Newton. During the twentieth century, the most influential advocate of the continuity of philosophy with science was Willard Van Orman Quine. A further tenet is the principle that the logical clarification of thoughts can be achieved only by analysis of the logical form of philosophical propositions. The logical form of a proposition is a way of representing it that reduces it to simpler components where necessary; however, analytic philosophers disagree widely about the correct logical form of ordinary language. Also characteristic is the neglect of generalized philosophical systems in favour of more restricted inquiries stated rigorously; analytic philosophy is thus able, in regard to certain problems, to achieve definite answers which have the quality of science rather than of philosophy, and its methods, in this respect, resemble those of science. Analytic philosophy is often understood in contrast to other traditions, most notably continental philosophies such as existentialism and phenomenology. It began in reaction against British idealism, as taught by philosophers such as F. H. Bradley and Thomas Hill Green, which its initiators, G. E. Moore and Bertrand Russell, rejected. Inspired by developments in logic, the early Russell claimed that the problems of philosophy can be solved by showing the simple constituents of complex notions. An important aspect of British idealism was logical holism: the opinion that aspects of the world cannot be known wholly without also knowing the whole world. This is closely related to the opinion that relations between items are actually internal relations, that is, properties internal to the nature of those items. Russell, along with Wittgenstein, in response promulgated logical atomism. Frege was also influential as a philosopher of mathematics in Germany at the beginning of the 20th century. Like Frege, Russell attempted to show that mathematics is reducible to logical fundamentals, in The Principles of Mathematics; later, his book written with Whitehead, Principia Mathematica, encouraged many philosophers to renew their interest in the development of symbolic logic.
3.
Venn diagram
–
A Venn diagram is a diagram that shows all possible logical relations between a finite collection of different sets. These diagrams depict elements as points in the plane, and sets as regions inside closed curves. A Venn diagram consists of multiple overlapping closed curves, usually circles, each representing a set. In Venn diagrams the curves are overlapped in every possible way; they are thus a special case of Euler diagrams, which do not necessarily show all relations. Venn diagrams were conceived around 1880 by John Venn. They are used to teach elementary set theory, as well as to illustrate simple set relationships in probability, logic, statistics, linguistics and computer science. A Venn diagram in which, in addition, the area of each shape is proportional to the number of elements it contains is called an area-proportional or scaled Venn diagram. This example involves two sets, A and B, represented here as coloured circles. The orange circle, set A, represents all living creatures that are two-legged; the blue circle, set B, represents the living creatures that can fly. Each separate type of creature can be imagined as a point somewhere in the diagram. Living creatures that both can fly and have two legs, for example parrots, are then in both sets, so they correspond to points in the region where the blue and orange circles overlap. That region contains all, and only, such living creatures. Humans and penguins are bipedal, and so are in the orange circle, but since they cannot fly they appear in the left part of the orange circle, outside the overlap. Mosquitoes have six legs, and fly, so the point for mosquitoes is in the part of the blue circle that does not overlap with the orange one. Creatures that are not two-legged and cannot fly are all represented by points outside both circles. The combined region of sets A and B is called the union of A and B, denoted by A ∪ B.
The union in this case contains all living creatures that are either two-legged or can fly, or both. The region in both A and B, where the two sets overlap, is called the intersection of A and B, denoted by A ∩ B. In this example the intersection of the two sets is not empty, because there are points that represent creatures that are in both the orange and blue circles. Such diagrams predate Venn, but they are rightly associated with him because he comprehensively surveyed and formalized their usage. Venn himself did not use the term Venn diagram and referred to his invention as Eulerian Circles: "Of these schemes one only, viz. that commonly called 'Eulerian circles,' has met with any general acceptance". The first to use the term Venn diagram was Clarence Irving Lewis in 1918, in his book A Survey of Symbolic Logic. Venn diagrams are similar to Euler diagrams, which were invented by Leonhard Euler in the 18th century. Baron has noted that Leibniz in the 17th century produced similar diagrams before Euler, and she also observes even earlier Euler-like diagrams by Ramon Llull in the 13th century. In the 20th century, Venn diagrams were further developed; D. W. Henderson showed in 1963 that the existence of an n-Venn diagram with n-fold rotational symmetry implied that n was a prime number.
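The union and intersection just described map directly onto Python's set operations. This is an illustrative sketch; the creature names are example members standing in for the regions of the diagram.

```python
# The two-set example above, with Python sets. The members are
# illustrative stand-ins for the regions of the diagram.

A = {"parrot", "human", "penguin"}   # set A: two-legged creatures
B = {"parrot", "mosquito"}           # set B: creatures that can fly

union = A | B         # A ∪ B: two-legged or can fly (or both)
intersection = A & B  # A ∩ B: two-legged and can fly

assert intersection == {"parrot"}          # the overlap region
assert "human" in A - B                    # orange circle only
assert "mosquito" in B - A                 # blue circle only
print(sorted(union))   # everything inside either circle
```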
4.
Group (mathematics)
–
In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element. The operation satisfies four conditions called the group axioms, namely closure, associativity, identity and invertibility. Groups allow entities with highly diverse mathematical origins, in abstract algebra and beyond, to be handled in a flexible way while retaining their essential structural aspects. The ubiquity of groups in areas within and outside mathematics makes them a central organizing principle of contemporary mathematics. Groups share a kinship with the notion of symmetry. The concept of a group arose from the study of polynomial equations; after contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory, an active mathematical discipline, studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. A rich theory has developed for finite groups, which culminated with the classification of finite simple groups. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory. One of the most familiar groups is the set of integers Z, which consists of the numbers …, −4, −3, −2, −1, 0, 1, 2, 3, 4, …. The following properties of integer addition serve as a model for the group axioms given in the definition below. For any two integers a and b, the sum a + b is also an integer; that is, addition of integers always yields an integer. This property is known as closure under addition. For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words: adding a to b first, and then adding the result to c, gives the same final result as adding a to the sum of b and c.
If a is any integer, then 0 + a = a + 0 = a; zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer a, there is an integer b such that a + b = b + a = 0. The integer b is called the inverse element of the integer a and is denoted −a. The integers, together with the operation +, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following abstract definition is developed.
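The closure, associativity, identity, and inverse properties of integer addition listed above can be spot-checked on a handful of sample integers; a short Python sketch (a finite check, not a proof):

```python
# A finite spot-check (not a proof) of the four group axioms for
# integer addition, on arbitrary sample values.

import itertools

samples = [-4, -1, 0, 1, 3]

for a, b, c in itertools.product(samples, repeat=3):
    assert isinstance(a + b, int)          # closure: sums are integers
    assert (a + b) + c == a + (b + c)      # associativity

for a in samples:
    assert a + 0 == 0 + a == a             # 0 is the identity element
    assert a + (-a) == (-a) + a == 0       # -a is the inverse of a
print("group axioms hold on all samples")
```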
5.
Logic gate
–
Depending on the context, the term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device. In modern practice, most gates are made from field-effect transistors. Compound logic gates AND-OR-Invert and OR-AND-Invert are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates. In reversible logic, Toffoli gates are used. To build a functionally complete logic system, relays, valves, or transistors can be used. The simplest family of logic gates using bipolar transistors is called resistor-transistor logic (RTL). Unlike simple diode logic gates, RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes, resulting in diode-transistor logic (DTL). As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors; to reduce power consumption still further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary MOSFET devices to achieve high speed with low power dissipation. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack a large number of mixed logic gates into a single integrated circuit. Other types of logic gates also exist. Electronic logic gates differ significantly from their relay-and-switch equivalents: they are much faster, consume much less power, and are much smaller. Also, there is a structural difference. The switch circuit creates a continuous path for current to flow between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier.
It is not possible for current to flow between the output and the input of a semiconductor logic gate. Another important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded: the output of one gate can be wired to the inputs of one or several other gates, and so on. The output of one gate can drive only a finite number of inputs to other gates. Also, there is always a delay, called the propagation delay, between a change at a gate's input and the corresponding change at its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed circuits. The binary number system was refined by Gottfried Wilhelm Leibniz, who established that by using the binary system, the principles of arithmetic and logic could be joined. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations.
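The cascading and propagation-delay behaviour described above can be sketched by modelling ideal gates as pure functions; the 10 ns per-stage delay below is an assumed illustrative figure, not a property of any real logic family.

```python
# A behavioural model of cascaded gates: each gate is a pure function,
# and other gates are built by wiring NAND outputs into NAND inputs.
# The 10 ns stage delay below is an assumed, illustrative value.

def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):          # two cascaded NAND stages
    return not_(nand(a, b))

def or_(a, b):           # three NANDs arranged in two levels
    return nand(not_(a), not_(b))

assert and_(True, False) is False
assert or_(True, False) is True

# Total propagation delay of a cascade is roughly the sum of the
# individual gate delays: an AND built as two NAND stages takes ~2x.
NAND_DELAY_NS = 10
and_delay_ns = 2 * NAND_DELAY_NS
print(and_delay_ns)
```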
6.
XOR gate
–
The XOR gate is a digital logic gate that gives a true output when the number of true inputs is odd. An XOR gate implements an exclusive or; that is, a true output results if one, and only one, of the inputs is true. If both inputs are false or both are true, a false output results. XOR represents the inequality function, i.e. the output is true if the inputs are not alike, otherwise the output is false. A way to remember XOR is "one or the other but not both". XOR can also be viewed as addition modulo 2; as a result, XOR gates are used to implement binary addition in computers. A half adder consists of an XOR gate and an AND gate. Other uses include subtractors, comparators, and controlled inverters. The algebraic expressions A ⋅ B̄ + Ā ⋅ B and (A + B) ⋅ (Ā + B̄) both represent the XOR gate with inputs A and B. The behavior of XOR is summarized in the truth table shown on the right. There are two symbols for XOR gates: the traditional symbol and the IEEE symbol. For more information see Logic Gate Symbols. The logic symbols ⊕ and ⊻ can be used to denote XOR in algebraic expressions. C-like languages use the symbol ^ to denote bitwise XOR. An XOR gate can be constructed using MOSFETs; here is a diagram of a pass transistor logic implementation of an XOR gate. Note: the Rss resistor prevents shunting current directly from A and B to the output. Without it, if the circuit that provides inputs A and B does not have the proper driving capability, the output might not swing rail to rail or might be severely slew-rate limited. The Rss resistor also limits the current from Vdd to ground, which protects the transistors. If a specific type of gate is not available, a circuit that implements the same function can be constructed from other available gates. A circuit implementing an XOR function can be constructed from an XNOR gate followed by a NOT gate. If we consider the expression A ⋅ B̄ + Ā ⋅ B, we can construct an XOR gate circuit directly using AND, OR and NOT gates; however, this approach requires five gates of three different kinds.
An XOR gate circuit can be made from four NAND gates in the configuration shown below. In fact, both NAND and NOR gates are so-called universal gates, and any logical function can be constructed from either NAND logic or NOR logic alone. If the four NAND gates below are replaced by NOR gates, the result is an XNOR gate. A strict reading of the definition of exclusive or, or observation of the IEC rectangular symbol, raises the question of correct behaviour with additional inputs: a gate could accept three or more inputs and produce a true output if exactly one of those inputs were true. However, it is rarely implemented this way in practice; instead, the usual result is a circuit that outputs a 1 when the number of 1s at its inputs is odd, and a 0 when the number of incoming 1s is even.
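The four-NAND construction and the half adder mentioned above can be sketched in Python, treating each gate as a function on bits (a behavioural model, not a circuit):

```python
# XOR built from four NAND gates, and the half adder that uses it:
# sum = A XOR B (addition modulo 2), carry = A AND B.

def nand(a, b):
    return 1 - (a & b)

def xor_from_nands(a, b):
    m = nand(a, b)                       # first NAND
    return nand(nand(a, m), nand(b, m))  # three more NANDs

def half_adder(a, b):
    return xor_from_nands(a, b), a & b   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == a ^ b == (a + b) % 2
assert half_adder(1, 1) == (0, 1)        # 1 + 1 = 10 in binary
```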
7.
Ring (mathematics)
–
In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series and matrices. The conceptualization of rings started in the 1870s and was completed in the 1920s; key contributors include Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. Afterward, they proved to be useful in other branches of mathematics such as geometry. A ring is an abelian group with a second binary operation that is associative and is distributive over the abelian group operation. By extension from the integers, the abelian group operation is called addition. Whether a ring is commutative or not has profound implications on its behavior as an abstract object; as a result, commutative ring theory, commonly known as commutative algebra, is a key topic in ring theory. Its development has been greatly influenced by problems and ideas occurring naturally in algebraic number theory. The most familiar example of a ring is the set of all integers, Z: …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, …. The familiar properties of addition and multiplication of integers serve as a model for the axioms for rings. A ring is a set R equipped with two binary operations + and · satisfying the following three sets of axioms, called the ring axioms. 1. R is an abelian group under addition, meaning that: (a + b) + c = a + (b + c) for all a, b, c in R; a + b = b + a for all a, b in R; there is an element 0 in R such that a + 0 = a for all a in R; and for each a in R there exists −a in R such that a + (−a) = 0. 2. R is a monoid under multiplication, meaning that: (a · b) · c = a · (b · c) for all a, b, c in R.
There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R. 3. Multiplication is distributive with respect to addition: a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c) for all a, b, c in R, and (b + c) ⋅ a = (b ⋅ a) + (c ⋅ a) for all a, b, c in R. As explained in § History below, many authors follow an alternative convention in which a ring is not defined to have a multiplicative identity. This article adopts the convention that, unless otherwise stated, a ring is assumed to have such an identity.
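The ring axioms above can be spot-checked exhaustively on a small finite ring; this sketch uses the integers modulo 6 as an example.

```python
# Spot-checking the ring axioms on Z/6Z, the integers modulo 6.

import itertools

n = 6
R = range(n)

def add(a, b): return (a + b) % n
def mul(a, b): return (a * b) % n

for a, b, c in itertools.product(R, repeat=3):
    assert add(add(a, b), c) == add(a, add(b, c))          # + associative
    assert add(a, b) == add(b, a)                          # + commutative
    assert mul(mul(a, b), c) == mul(a, mul(b, c))          # · associative
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # left distrib.
    assert mul(add(b, c), a) == add(mul(b, a), mul(c, a))  # right distrib.

for a in R:
    assert add(a, 0) == a                 # additive identity
    assert add(a, (-a) % n) == 0          # additive inverse
    assert mul(a, 1) == mul(1, a) == a    # multiplicative identity
```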
8.
Linguistics
–
Linguistics is the scientific study of language, and involves an analysis of language form, language meaning, and language in context. Linguists traditionally analyse human language by observing an interplay between sound and meaning. Phonetics is the study of speech and non-speech sounds, and delves into their acoustic and articulatory properties, while the study of semantics typically concerns itself with truth conditions. Grammar is a system of rules which governs the production and use of utterances in a given language. These rules apply to sound as well as meaning, and include componential sub-sets of rules, such as those pertaining to phonology and morphology. Modern theories that deal with the principles of grammar are largely based within Noam Chomsky's ideological school of generative grammar. In the early 20th century, Ferdinand de Saussure distinguished between the notions of langue and parole in his formulation of structural linguistics. According to him, parole is the specific utterance of speech, whereas langue refers to an abstract phenomenon that theoretically defines the principles and system of rules that govern a language. This distinction resembles the one made by Noam Chomsky between competence and performance in his theory of transformative or generative grammar. According to Chomsky, competence is an innate capacity and potential for language, while performance is the specific way in which it is used by individuals and groups. The study of parole is the domain of sociolinguistics, the sub-discipline that comprises the study of a system of linguistic facets within a certain speech community. Discourse analysis further examines the structure of texts and conversations emerging out of a speech community's usage of language. Stylistics also involves the study of written, signed, or spoken discourse through varying speech communities, genres, and editorial or narrative formats in the mass media.
In the 1960s, Jacques Derrida, for instance, further distinguished between speech and writing, by proposing that written language be studied as a linguistic medium of communication in itself. Palaeography is accordingly the discipline that studies the evolution of written scripts in language. Linguistics also deals with the social, cultural, historical and political factors that influence language, through which linguistic and language-based context is often determined. Research on language through the sub-branches of historical and evolutionary linguistics also focuses on how languages change and grow, particularly over an extended period of time. Language documentation combines anthropological inquiry with linguistic inquiry in order to describe languages. Lexicography involves the documentation of words that form a vocabulary; such a documentation of a vocabulary from a particular language is usually compiled in a dictionary. Computational linguistics is concerned with the statistical or rule-based modeling of natural language from a computational perspective. Specific knowledge of language is applied by speakers during the act of translation and interpretation, as well as in language education, the teaching of a second or foreign language. Policy makers work with governments to implement new plans in education. Related areas of study include the disciplines of semiotics, literary criticism, translation, and speech-language pathology. Before the 20th century, the term philology, first attested in 1716, was commonly used to refer to the science of language.
9.
Field (mathematics)
–
In mathematics, a field is a set on which are defined addition, subtraction, multiplication, and division, which behave as they do when applied to rational and real numbers. A field is thus an algebraic structure which is widely used in algebra, number theory and many other areas of mathematics. The best known fields are the field of rational numbers and the field of real numbers. The field of complex numbers is also widely used, not only in mathematics, but also in many areas of science and engineering. Finite fields are used in most cryptographic protocols used for computer security. Any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Formally, a field is a set together with two operations, the addition and the multiplication, which have the following properties, called the axioms of fields. An operation is a mapping that associates an element of the set to every pair of its elements. The result of the addition of a and b is called the sum of a and b and is denoted a + b; similarly, the result of the multiplication of a and b is called the product of a and b and is denoted a · b. Associativity of addition and multiplication: for all a, b and c in F, one has a + (b + c) = (a + b) + c and a · (b · c) = (a · b) · c. Commutativity of addition and multiplication: for all a and b in F, one has a + b = b + a and a · b = b · a. Existence of additive and multiplicative identity elements: there exists an element 0 in F, called the additive identity, such that a + 0 = a for all a in F, and there is an element 1, different from 0 and called the multiplicative identity, such that a · 1 = a for all a in F. Existence of additive inverses and multiplicative inverses: for every a in F, there exists an element in F, denoted −a, such that a + (−a) = 0, and for every a ≠ 0 in F, there exists an element in F, denoted a⁻¹ or 1/a, such that a · a⁻¹ = 1. Distributivity of multiplication over addition: for all a, b and c in F, one has a · (b + c) = (a · b) + (a · c). The elements 0 and 1 being required to be distinct, a field has at least two elements. For every a in F, one has −a = (−1) ⋅ a; thus, the additive inverse of every element is known as soon as one knows the additive inverse of 1.
A subtraction and a division are defined in every field by a − b = a + (−b) and, for b ≠ 0, a/b = a · b⁻¹. A subfield E of a field F is a subset of F that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. It is straightforward to verify that a subfield is indeed a field. Two groups are associated to every field: the field itself is a group under addition, and when considering this group structure rather than the field structure, one talks of the additive group of the field.
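The definitions a − b = a + (−b) and a/b = a · b⁻¹ can be made concrete in a small finite field; this sketch works in GF(7) and computes multiplicative inverses via Fermat's little theorem (valid because 7 is prime).

```python
# Arithmetic in the finite field GF(7): subtraction and division are
# derived from additive and multiplicative inverses, exactly as above.

p = 7  # any prime p yields a field with p elements

def neg(a):
    return (-a) % p

def inv(a):
    # Fermat's little theorem: a**(p-2) is the inverse of a mod p, a != 0.
    return pow(a, p - 2, p)

def sub(a, b):
    return (a + neg(b)) % p       # a - b = a + (-b)

def div(a, b):
    return (a * inv(b)) % p       # a / b = a * b^(-1)

assert sub(2, 5) == 4             # 2 - 5 = -3 = 4 (mod 7)
assert div(3, 5) == 2             # because 2 * 5 = 10 = 3 (mod 7)
for a in range(1, p):
    assert (a * inv(a)) % p == 1  # every nonzero element is invertible
```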
10.
Logical connective
–
The most common logical connectives are binary connectives, which join two sentences that can be thought of as the function's operands. Also commonly, negation is considered to be a unary connective. Logical connectives, along with quantifiers, are the two main types of logical constants used in formal systems such as propositional logic and predicate logic. The semantics of a logical connective is often, but not always, truth-functional; a logical connective is similar to, but not equivalent to, a conditional operator. In the grammar of natural languages two sentences may be joined by a grammatical conjunction to form a compound sentence. Some, but not all, such grammatical conjunctions are truth functions. For example, consider the following sentences: (A) Jack went up the hill. (B) Jill went up the hill. (C) Jack went up the hill and Jill went up the hill. (D) Jack went up the hill so Jill went up the hill. The words "and" and "so" are grammatical conjunctions joining the sentences (A) and (B) to form the compound sentences (C) and (D). The "and" in (C) is a truth-functional connective, since the truth of (C) is completely determined by (A) and (B): it would make no sense to affirm (A) and (B) while denying (C). Various English words and word pairs express logical connectives, and some of them are synonymous. In formal languages, truth functions are represented by unambiguous symbols. These symbols are called logical connectives, logical operators, propositional operators, or, in classical logic, truth-functional connectives. See well-formed formula for the rules which allow new well-formed formulas to be constructed by joining other well-formed formulas using truth-functional connectives. Logical connectives can be used to join more than two statements, so one can speak of n-ary logical connectives; for example, the statement "it is raining" can be conjoined with any number of further statements. As for notation: the symbol 1, for true, comes from Boole's interpretation of logic as an elementary algebra over the two-element Boolean algebra.
The symbol 0, for false, comes also from Boole's interpretation of logic as a ring. Some authors used letters for connectives at some point in history: u. for conjunction (German "und") and o. for disjunction (German "oder"). A logical connective such as converse implication ← is actually the same as the material conditional with swapped arguments; thus, in some logical calculi, certain essentially different compound statements are logically equivalent. A less trivial example of such a redundancy is the classical equivalence between ¬P ∨ Q and P → Q. There are sixteen Boolean functions associating the input truth values P and Q with four-digit binary outputs, and these correspond to the possible choices of binary logical connectives for classical logic. Different implementations of classical logic can choose different functionally complete subsets of connectives. One approach is to choose a minimal set and define other connectives by some logical form, as in the example with the material conditional above.
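The count of sixteen binary connectives and the ¬P ∨ Q ≡ P → Q equivalence can both be verified mechanically; a short Python sketch:

```python
# Enumerating the sixteen binary truth functions and checking the
# equivalence between not-P or Q and the material conditional P -> Q.

from itertools import product

pairs = list(product([False, True], repeat=2))   # the four (P, Q) rows

# Each binary connective is one choice of output column over the 4 rows,
# so there are 2**4 = 16 of them.
tables = {tuple(bool((n >> i) & 1) for i in range(4)) for n in range(16)}
print(len(tables))   # 16

# Material conditional P -> Q: false only in the row (P=True, Q=False).
implies    = {(p, q): (p, q) != (True, False) for p, q in pairs}
not_p_or_q = {(p, q): (not p) or q            for p, q in pairs}
assert implies == not_p_or_q   # same truth table, so logically equivalent
```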
11.
Logical biconditional
–
In logic and mathematics, the logical biconditional is the logical connective of two statements asserting "p if and only if q", where p is an antecedent and q is a consequent. This is often abbreviated "p iff q". The operator is denoted using a double-headed arrow (↔ or ⇔), a prefixed E, an equality sign (=), an equivalence sign (≡), or EQV. It is logically equivalent to (p → q) ∧ (q → p), or to the XNOR boolean operator; it is also logically equivalent to (p ∧ q) ∨ (¬p ∧ ¬q), meaning "both or neither". The only difference from the material conditional is the case when the hypothesis is false but the conclusion is true: in that case, in the conditional the result is true, yet in the biconditional the result is false. In the conceptual interpretation, a = b means "all a's are b's and all b's are a's"; in other words, the two concepts apply to the same things. This does not mean that the concepts have the same meaning. Examples: "triangle" and "trilateral", "equiangular trilateral" and "equilateral triangle". The antecedent is the subject and the consequent is the predicate of a universal affirmative proposition. In the propositional interpretation, a ⇔ b means that a implies b and b implies a; in other words, that the propositions are equivalent. This does not mean that they have the same meaning. Example: "The triangle ABC has two equal sides" and "The triangle ABC has two equal angles". The antecedent is the premise or the cause and the consequent is the consequence. When an implication is translated by a hypothetical judgment, the antecedent is called the hypothesis and the consequent is called the thesis. A common way of demonstrating a biconditional is to use its equivalence to the conjunction of two converse conditionals, demonstrating these separately. When both members of the biconditional are propositions, it can be separated into two conditionals, of which one is called a theorem and the other its reciprocal.
Thus whenever a theorem and its reciprocal are both true, we have a biconditional. A simple theorem gives rise to an implication whose antecedent is the hypothesis and whose consequent is the thesis of the theorem. When a theorem and its reciprocal are true, we say that its hypothesis is the necessary and sufficient condition of the thesis; that is to say, the hypothesis is at the same time both cause and consequence. Logical equality is an operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both operands are false or both operands are true. The truth table for A ↔ B is as follows. More than two statements combined by ↔ are ambiguous: x₁ ↔ x₂ ↔ x₃ ↔ … ↔ xₙ may be meant as ((…(x₁ ↔ x₂) ↔ x₃) ↔ …) ↔ xₙ, or may be used to say that all xᵢ are together true or together false. Properties: commutativity, yes; associativity, yes; distributivity, the biconditional doesn't distribute over any binary function, but logical disjunction distributes over the biconditional; idempotency, no; monotonicity, no; truth-preserving, yes (when all inputs are true, the output is true); falsehood-preserving, no (when all inputs are false, the output is not false); Walsh spectrum, (2, 0, 0, 2); nonlinearity, 0 (the function is linear). Like all connectives in first-order logic, the biconditional has associated rules of inference; biconditional introduction allows one to infer that, if B follows from A, and A follows from B, then A if and only if B.
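The truth-table behaviour and the equivalences stated above, together with the ambiguity of chained ↔, can be checked in a few lines of Python:

```python
# The biconditional p <-> q: true exactly when p and q agree.

from itertools import product

def iff(p, q):     return p == q
def implies(p, q): return (not p) or q

for p, q in product([False, True], repeat=2):
    assert iff(p, q) == (implies(p, q) and implies(q, p))  # (p->q) and (q->p)
    assert iff(p, q) == ((p and q) or (not p and not q))   # both or neither

# The ambiguity noted above: read left-associatively, a chain of <-> is
# true when the number of false inputs is even, NOT when all are equal.
p, q, r = False, False, True
assert iff(iff(p, q), r) is True   # yet p, q, r are not all equal
```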
12.
Walsh matrix
–
In mathematics, a Walsh matrix is a specific square matrix with dimensions equal to some power of 2, entries of ±1, and the property that the dot product of any two distinct rows is zero. The Walsh matrix was proposed by Joseph L. Walsh in 1923. Each row of a Walsh matrix corresponds to a Walsh function. Confusingly, different sources refer to either the naturally ordered or the sequency-ordered matrix as the Walsh matrix. Walsh matrices are used in computing the Walsh transform and have applications in the efficient implementation of certain signal processing operations. The Hadamard matrices of dimension 2ᵏ for k ∈ N are given by the recursive formula H(2¹) = [[1, 1], [1, −1]] and, in general, H(2ᵏ) = H(2) ⊗ H(2ᵏ⁻¹) for 2 ≤ k ∈ N, where ⊗ denotes the Kronecker product. To obtain the sequency ordering, rearrange the rows of the matrix according to the number of sign changes in each row. For example, in the naturally ordered W(4) the successive rows have 0, 3, 1, and 2 sign changes; if we rearrange the rows in sequency ordering, the successive rows have 0, 1, 2, and 3 sign changes. For W(8), one ordering has rows with 0, 1, 3, 2, 7, 6, 4, and 5 sign changes, while the natural ordering has rows with 0, 7, 3, 4, 1, 6, 2, and 5 sign changes.
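The Kronecker-product construction and the sequency reordering described above can be sketched with NumPy (assuming NumPy is available):

```python
# Building H(2^k) via the Kronecker product and reordering rows by
# sign changes to obtain the sequency-ordered Walsh matrix.

import numpy as np

def hadamard(k):
    H = np.array([[1]])
    H2 = np.array([[1, 1], [1, -1]])
    for _ in range(k):
        H = np.kron(H2, H)    # H(2^k) = H(2) kron H(2^(k-1))
    return H

def sign_changes(row):
    return int(np.sum(row[:-1] != row[1:]))

H = hadamard(2)
print([sign_changes(r) for r in H])   # natural ordering: [0, 3, 1, 2]

order = np.argsort([sign_changes(r) for r in H])
W = H[order]
print([sign_changes(r) for r in W])   # sequency ordering: [0, 1, 2, 3]

# Distinct rows have dot product zero, so W @ W.T = 4*I in the 4x4 case.
assert np.array_equal(W @ W.T, 4 * np.eye(4, dtype=int))
```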