1.
Bioinformatics
–
Bioinformatics /ˌbaɪ.oʊˌɪnfərˈmætɪks/ is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data. Bioinformatics has been used for in silico analyses of biological queries using mathematical and statistical techniques. Common uses of bioinformatics include the identification of genes and nucleotides. Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, or desirable properties. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences. Bioinformatics has become an important part of many areas of biology. In experimental molecular biology, bioinformatics techniques such as image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics and genomics, it aids in sequencing and annotating genomes and their observed mutations. It plays a role in the text mining of biological literature, and in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in the comparison of genetic and genomic data and, more generally, in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways. In structural biology, it aids in the simulation and modeling of DNA, RNA, and proteins, as well as their interactions.

Historically, the term bioinformatics did not mean what it means today. Paulien Hogeweg and Ben Hesper coined it in 1970 to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biophysics or biochemistry. Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s; comparing multiple sequences manually turned out to be impractical.
A pioneer in the field was Margaret Oakley Dayhoff, who has been hailed by David Lipman, director of the National Center for Biotechnology Information, as "the mother and father of bioinformatics". Dayhoff compiled one of the first protein sequence databases, initially published as books, and pioneered methods of sequence alignment and molecular evolution. To study how normal cellular activities are altered in different disease states, biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data. This includes nucleotide and amino acid sequences, protein domains, and protein structures. The actual process of analyzing and interpreting data is referred to as computational biology. For example, there are methods to locate a gene within a sequence and to predict protein structure and/or function. The primary goal of bioinformatics is to increase the understanding of biological processes.
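The sequence alignment methods mentioned above can be sketched with a minimal dynamic-programming global aligner in the style of Needleman-Wunsch. This is only an illustrative sketch, not a description of any specific tool; the scoring values (match +1, mismatch -1, gap -1) are assumptions chosen for the example.

```python
def global_alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score via dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    # dp[i][j] = best score aligning the prefixes a[:i] and b[:j]
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap          # align a prefix of a against all gaps
    for j in range(1, cols):
        dp[0][j] = j * gap          # align a prefix of b against all gaps
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(global_alignment_score("GATTACA", "GCATGCU"))
```

Real aligners add affine gap penalties, substitution matrices, and traceback of the alignment itself, but the table-filling recurrence is the core idea.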
2.
DNA
–
Deoxyribonucleic acid (DNA) is a molecule that carries the genetic instructions used in the growth, development, functioning and reproduction of all known living organisms and many viruses. DNA and RNA are nucleic acids; alongside proteins, lipids and complex carbohydrates, they are one of the major types of macromolecules essential for life. Most DNA molecules consist of two biopolymer strands coiled around each other to form a double helix. The two DNA strands are termed polynucleotides, since they are composed of simpler units called nucleotides. Each nucleotide is composed of one of four nitrogen-containing nucleobases (cytosine, guanine, adenine, or thymine), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two polynucleotide strands are bound together, according to base pairing rules, with hydrogen bonds to make double-stranded DNA. The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10^37; in comparison, the total mass of the biosphere has been estimated to be as much as 4 trillion tons of carbon. The DNA backbone is resistant to cleavage, and both strands of the double-stranded structure store the same biological information, which is replicated when the two strands separate. A large part of DNA is non-coding, meaning that these sections do not serve as patterns for protein sequences. The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases, and it is the sequence of these four nucleobases along the backbone that encodes biological information. RNA strands are created using DNA strands as a template in a process called transcription. Under the genetic code, these RNA strands specify the sequence of amino acids within proteins in a process called translation.
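The base pairing and transcription rules described above are mechanical enough to sketch in a few lines of code. This is a toy illustration, assuming uppercase single-letter base symbols and ignoring real-world complications such as ambiguous bases:

```python
# Base pairing rules: A pairs with T, and C pairs with G. The paired
# strand runs antiparallel, hence the reversal.
DNA_COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the antiparallel partner strand of a DNA sequence."""
    return "".join(DNA_COMPLEMENT[base] for base in reversed(strand))

def transcribe(coding_strand):
    """RNA carries the coding strand's sequence with uracil (U) for thymine (T)."""
    return coding_strand.replace("T", "U")

print(complement("ATGC"))    # GCAT
print(transcribe("ATGC"))    # AUGC
```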
Within eukaryotic cells, DNA is organized into structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication. Eukaryotic organisms store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts. In contrast, prokaryotes store their DNA only in the cytoplasm. Within the eukaryotic chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed. DNA was first isolated by Friedrich Miescher in 1869. DNA is used by researchers as a molecular tool to explore physical laws and theories, such as the ergodic theorem. The unique material properties of DNA have made it an attractive molecule for material scientists and engineers interested in micro- and nano-fabrication; among notable advances in this field are DNA origami and DNA-based hybrid materials. DNA is a polymer made from repeating units called nucleotides.
3.
Nitrogenous base
–
A nitrogenous base, or nitrogen-containing base, is an organic molecule with a nitrogen atom that has the chemical properties of a base. The main biological function of a nitrogenous base is to bond nucleic acids together. A nitrogenous base owes its basic properties to the lone pair of electrons of a nitrogen atom. Nitrogenous bases are classified as derivatives of two parent compounds, pyrimidine and purine. They are non-polar and, due to their aromaticity, planar. Both pyrimidines and purines resemble pyridine and are thus weak bases and relatively unreactive towards electrophilic aromatic substitution. A set of five nitrogenous bases is used in the construction of nucleotides; these nitrogenous bases are adenine, uracil, guanine, thymine, and cytosine. These nitrogenous bases hydrogen bond between opposing DNA strands to form the rungs of the "ladder" of the DNA double helix. Adenine is always paired with thymine, and guanine is always paired with cytosine; these are known as base pairs. Uracil is only present in RNA, replacing thymine. Pyrimidines include thymine, cytosine, and uracil; they have a single ring structure. Purines include adenine and guanine; they have a double ring structure.
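The classification and pairing described above amount to two small lookup tables. A minimal sketch (the set and dictionary names are illustrative, not standard identifiers):

```python
# The five bases split into single-ring pyrimidines and double-ring purines;
# Watson-Crick pairing always joins one purine to one pyrimidine.
PYRIMIDINES = {"cytosine", "thymine", "uracil"}
PURINES = {"adenine", "guanine"}

# DNA pairing rules: adenine-thymine and guanine-cytosine.
DNA_PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def pairs_with(base):
    """Return the DNA base that hydrogen-bonds with the given base letter."""
    return DNA_PAIRS[base]

print(pairs_with("A"))  # T
```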
4.
Computer programming
–
Computer programming is a process that leads from an original formulation of a computing problem to executable computer programs. Source code is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solving a given problem. The process of programming thus often requires expertise in many different subjects, including knowledge of the application domain and specialized algorithms. Related tasks include testing, debugging, maintaining the source code, and implementation of the build system. Software engineering combines engineering techniques with software development practices; within software engineering, programming is regarded as one phase in a software development process. There is a debate on the extent to which the writing of programs is an art form, a craft, or an engineering discipline. In general, good programming is considered to be the application of all three, with the goal of producing an efficient and evolvable software solution. Because the discipline covers many areas, which may or may not include critical applications, it is debatable whether licensing is required for the profession as a whole. In most cases, the discipline is self-governed by the entities which require the programming, and sometimes very strict environments are defined. Another ongoing debate is the extent to which the language used in writing computer programs affects the form that the final program takes. Different language patterns yield different patterns of thought, and this idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community. In the 1880s Herman Hollerith invented the concept of storing data in machine-readable form. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine code was the language of early programs, written in the instruction set of the particular machine.
Assembly languages were developed that let the programmer specify instructions in a text format, with abbreviations for each operation code. However, an assembly language is little more than a different notation for a machine language, so any two machines with different instruction sets also have different assembly languages. High-level languages allow the programmer to write programs in terms that are more abstract, and they harness the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula directly. Programs were mostly still entered using punched cards or paper tape; see computer programming in the punch card era. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. Whatever the approach to development may be, the final program must satisfy some fundamental properties.
5.
Sequence
–
In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed. Like a set, it contains members, and the number of elements is called the length of the sequence. Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Formally, a sequence can be defined as a function whose domain is either the set of the natural numbers (for infinite sequences) or the set of the first n natural numbers (for a sequence of finite length n). The position of an element in a sequence is its rank or index; it depends on the context or on a specific convention whether the first element has index 0 or 1. For example, (M, A, R, Y) is a sequence of letters with the letter "M" first. Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence. Sequences can be finite, as in these examples, or infinite. The empty sequence ( ) is included in most notions of sequence, but may be excluded depending on the context. A sequence can be thought of as a list of elements with a particular order. Sequences are useful in a number of mathematical disciplines for studying functions, spaces, and other mathematical structures using the convergence properties of sequences. In particular, sequences are the basis for series, which are important in differential equations and analysis. Sequences are also of interest in their own right and can be studied as patterns or puzzles, such as in the study of prime numbers. There are a number of ways to denote a sequence, some of which are more useful for specific types of sequences. One way to specify a sequence is to list the elements. For example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation can be used for infinite sequences as well. For instance, the sequence of positive odd integers can be written (1, 3, 5, 7, ...). Listing is most useful for sequences with a pattern that can be easily discerned from the first few elements.
Other ways to denote a sequence are discussed after the examples. The prime numbers are the natural numbers bigger than 1 that have no divisors but 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, 17, ...). The prime numbers are widely used in mathematics and specifically in number theory. The Fibonacci numbers are the integer sequence whose elements are the sum of the previous two elements. The first two elements are either 0 and 1 or 1 and 1, so that the sequence is (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...). For a large list of examples of integer sequences, see the On-Line Encyclopedia of Integer Sequences.
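Both example sequences above are generated by simple rules, which a short sketch can make concrete. The function names and the trial-division primality test are illustrative choices:

```python
def primes(n):
    """First n prime numbers, in their natural order (trial division)."""
    found = []
    candidate = 2
    while len(found) < n:
        # candidate is prime if no earlier prime divides it
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def fibonacci(n, first=0, second=1):
    """First n Fibonacci numbers; each element is the sum of the previous two."""
    seq = []
    a, b = first, second
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(primes(7))      # [2, 3, 5, 7, 11, 13, 17]
print(fibonacci(8))   # [0, 1, 1, 2, 3, 5, 8, 13]
```

Passing `first=1, second=1` yields the alternative starting convention mentioned in the text.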
6.
Array data structure
–
In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements, each identified by at least one array index or key. An array is stored so that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of 10 32-bit integer variables, with indices 0 through 9, may be stored as 10 words at memory addresses 2000, 2004, 2008, ..., 2036, so that the element with index i has the address 2000 + 4 × i. The memory address of the first element of an array is called the first address or foundation address. Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called matrices. In some cases the term vector is used in computing to refer to an array. Arrays are often used to implement tables, especially lookup tables; the word table is sometimes used as a synonym of array. Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers. In most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations. Arrays are useful mostly because the element indices can be computed at run time. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size. The set of valid index tuples and the addresses of the elements are usually, but not always, fixed while the array is in use. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures.
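The addressing formula above generalizes directly to more dimensions. A small sketch of both the one-dimensional formula from the example and the common row-major two-dimensional variant (the function names are illustrative):

```python
def address_1d(base, size, i):
    """Address of element i in a one-dimensional array (base + size * i)."""
    return base + size * i

def address_2d(base, size, ncols, i, j):
    """Row-major address of element (i, j): rows are laid out one after another."""
    return base + size * (i * ncols + j)

# The example from the text: 10 32-bit (4-byte) integers at base address 2000.
print(address_1d(2000, 4, 0))  # 2000
print(address_1d(2000, 4, 9))  # 2036
```

Because these are closed-form formulas, fetching any element costs the same regardless of its index, which is the key performance property of arrays.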
The first digital computers used machine-language programming to set up and access array structures for data tables, and for vector and matrix computations. John von Neumann wrote the first array-sorting program in 1945, during the building of the first stored-program computer. Array indexing was originally done by self-modifying code, and later using index registers. Some mainframes designed in the 1960s, such as the Burroughs B5000 and its successors, used memory segmentation to perform index-bounds checking in hardware. Assembly languages generally have no support for arrays, other than what the machine itself provides. The earliest high-level programming languages, including FORTRAN, Lisp, COBOL, and ALGOL 60, had support for multi-dimensional arrays. In C++, class templates exist for multi-dimensional arrays whose dimension is fixed at runtime as well as for runtime-flexible arrays. Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables. Many databases, small and large, consist of one-dimensional arrays whose elements are records. Arrays are used to implement other data structures, such as lists, heaps, hash tables, deques, queues, stacks, and strings. One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly memory pool allocation. Historically, this has sometimes been the only way to allocate dynamic memory portably.
7.
Byte
–
The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer. The size of the byte has historically been hardware-dependent, and no standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte; the international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits, and the popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size. The unit symbol for the byte was designated as the upper-case letter B by the IEC and IEEE, in contrast to the bit. Internationally, the unit octet, symbol o, explicitly denotes a sequence of eight bits, eliminating the ambiguity of the byte. The term byte is a respelling of bite, chosen to avoid accidental mutation to bit. Early computers used a variety of four-bit binary-coded decimal representations and six-bit codes; these representations included alphanumeric characters and special graphical symbols, and were in wide use in the U.S. Government and universities during the 1960s. The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, although in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony first on long-distance trunk lines, and these used the eight-bit µ-law encoding. This large investment promised to reduce transmission costs for eight-bit data. The development of eight-bit microprocessors in the 1970s popularized this storage size. A four-bit quantity is called a nibble, also nybble. The term octet is used to unambiguously specify a size of eight bits, and is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well, at least in Western Europe; however, this usage is no longer common.
The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. The unit symbol for the byte is specified in IEC 80000-13 and IEEE 1541 as the upper-case character B. In the International System of Quantities, B is also the symbol of the bel, a unit of logarithmic power ratios named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit.
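The relationships among bits, nibbles, and bytes described above are easy to demonstrate with a few bit operations. A small sketch (the constant and function names are invented for the example):

```python
# One byte = 8 bits, so it holds 2**8 = 256 distinct values: 0 through 255.
BITS_PER_BYTE = 8

def nibbles(byte):
    """Split a byte into its high and low four-bit nibbles."""
    assert 0 <= byte <= 255, "a byte holds values 0 through 255"
    return byte >> 4, byte & 0x0F   # shift off the low nibble; mask it out

print(2 ** BITS_PER_BYTE - 1)  # 255, the largest one-byte value
print(nibbles(0xAB))           # (10, 11)
```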
8.
Array data type
–
In computer science, an array type is a data type that is meant to describe a collection of elements, each selected by one or more indices that can be computed at run time by the program. Such a collection is usually called an array variable or array value. By analogy with the mathematical concepts of vector and matrix, array types with one and two indices are often called vector type and matrix type, respectively. For example, in the Pascal programming language, the declaration type MyTable = array [1..4,1..2] of integer defines a new array data type called MyTable. The declaration var A: MyTable then defines a variable A of that type, which is an aggregate of eight elements, each being an integer variable identified by two indices. In the Pascal program, those elements are denoted A[1,1], A[1,2], A[2,1], ..., A[4,2]. Special array types are often defined by the language's standard libraries. Dynamic lists are also more common and easier to implement than dynamic arrays. Array types are distinguished from record types mainly because they allow the element indices to be computed at run time. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array variable. Depending on the language, array types may overlap other data types that describe aggregates of values, such as lists. Array types are often implemented by array data structures, but sometimes by other means, such as hash tables, linked lists, or search trees. Heinz Rutishauser's programming language Superplan included multi-dimensional arrays; Rutishauser, however, although describing how a compiler for his language should be built, did not implement one. Assembly languages and low-level languages like BCPL generally have no support for arrays. An abstract array can be described by two operations: get(A, I), which obtains the value at index tuple I in array state A, and set(A, I, V), which yields a new array state with value V at I. These operations are required to satisfy the axioms get(set(A, I, V), I) = V and get(set(A, I, V), J) = get(A, J) if I ≠ J, for any array state A, any value V, and any index tuples I and J. The first axiom means that each element behaves like a variable.
The second axiom means that elements with distinct indices behave as disjoint variables. These axioms do not place any constraints on the set of valid index tuples I; therefore this abstract model can be used for triangular matrices and other oddly-shaped arrays. Most languages restrict each index to an interval of integers. In some compiled languages, in fact, the index ranges may have to be known at compile time. On the other hand, some programming languages provide more liberal array types that allow indexing by arbitrary values, such as floating-point numbers, strings, objects, or references. Such index values cannot be restricted to an interval, much less a fixed interval, so these languages usually allow arbitrary new elements to be created at any time. This choice precludes the implementation of array types as array data structures. That is, those languages use array-like syntax to implement a more general associative array semantics. The number of indices needed to specify an element is called the dimension or dimensionality of the array type. Many languages support only one-dimensional arrays.
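The two axioms can be checked directly against a tiny persistent model of an abstract array, here sketched with Python dictionaries (the function names are illustrative; a real implementation would not copy the whole state on every update):

```python
# A persistent "abstract array": set returns a new state, get reads a value.
def array_set(state, index, value):
    new_state = dict(state)      # copy so the old state is untouched
    new_state[index] = value
    return new_state

def array_get(state, index):
    return state[index]

A = {}                           # an initial array state
A1 = array_set(A, (1, 1), 42)
# First axiom: get(set(A, I, V), I) = V
print(array_get(A1, (1, 1)))     # 42
# Second axiom: get(set(A, I, V), J) = get(A, J) for J != I
A2 = array_set(A1, (4, 2), 7)
print(array_get(A2, (1, 1)))     # still 42
```

Note that the index tuples here are unconstrained, matching the point that the abstract model also covers triangular matrices and other oddly-shaped arrays.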
9.
List (abstract data type)
–
In computer science, a list or sequence is an abstract data type that represents a countable number of ordered values, where the same value may occur more than once. An instance of a list is a computer representation of the mathematical concept of a finite sequence. Lists are a basic example of containers, as they contain other values. If the same value occurs multiple times, each occurrence is considered a distinct item. The name list is also used for several concrete data structures that can be used to implement abstract lists, especially linked lists. Many programming languages provide support for list data types, and have special syntax and semantics for lists and list operations. A list can often be constructed by writing the items in sequence, separated by commas, semicolons, or spaces, within a pair of delimiters such as parentheses, brackets, or braces. Some languages may allow list types to be indexed or sliced like array types. In object-oriented programming languages, lists are usually provided as instances of subclasses of a generic list class, and traversed via separate iterators. List data types are often implemented using array data structures or linked lists of some sort. In some contexts, such as in Lisp programming, the term list may refer specifically to a linked list rather than an array. In type theory and functional programming, abstract lists are usually defined inductively by two operations: nil, which yields the empty list, and cons, which adds an item at the beginning of a list. This results in either a linked list or a tree, depending on whether the list has nested sublists. Some older Lisp implementations also supported "compressed lists", which had a special internal representation. Lists can be manipulated using iteration or recursion; the former is often preferred in imperative programming languages, while the latter is the norm in functional languages.
Some languages do not offer a list data structure, but offer the use of associative arrays or some kind of table to emulate lists. Although Lua stores lists that have numerical indices as arrays internally, they still appear as dictionaries. In Lisp, lists are the fundamental data type and can represent both program code and data. In several dialects of Lisp, including Scheme, a list is a collection of pairs, each consisting of a value and a pointer to the next pair, making a singly linked list. As the name implies, lists can be used to store a list of elements. However, unlike in traditional arrays, lists can expand and shrink, and are stored dynamically in memory. In computing, lists are easier to implement than sets. A finite set in the mathematical sense can be realized as a list with additional restrictions: duplicate elements are disallowed and order is irrelevant. Sorting the list speeds up determining if a given item is already in the set.
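The inductive nil/cons definition mentioned above can be sketched in a few lines, modeling each cons cell as a (value, next) pair in the style of a Lisp singly linked list (the helper names are invented for the example):

```python
# nil is the empty list; cons(head, tail) adds an item at the front.
nil = None

def cons(head, tail):
    """A cons cell: a value paired with a pointer to the rest of the list."""
    return (head, tail)

def to_python_list(lst):
    """Walk the chain of pairs and collect the values."""
    out = []
    while lst is not nil:
        head, lst = lst
        out.append(head)
    return out

first_primes = cons(2, cons(3, cons(5, nil)))
print(to_python_list(first_primes))  # [2, 3, 5]
```

Every list is built by finitely many cons applications ending in nil, which is exactly the inductive definition used in type theory and functional programming.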
10.
Memory management
–
Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free memory for reuse when it is no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time. Several methods have been devised that increase the effectiveness of memory management, and the quality of the virtual memory manager can have an extensive effect on overall system performance. Modern general-purpose computer systems manage memory at two levels: operating system level and application level. Application-level memory management is generally categorized as either automatic memory management, usually involving garbage collection, or manual memory management. The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are free and thus available for future allocations. The allocator's metadata can also inflate the size of small allocations; this is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever lost as a memory leak. The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators; the lowest average instruction path length required to allocate a single memory slot was 52. Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference.
Fixed-size blocks allocation uses a free list of equal-sized blocks of memory. This works well for simple embedded systems where no large objects need to be allocated; due to the significantly reduced overhead, this method can substantially improve performance for objects that need frequent allocation and de-allocation, and so it is often used in video games. In the buddy system, memory is allocated from pools in which block sizes are powers of two. All blocks of a particular size are kept in a sorted linked list or tree. If a smaller size is requested than is available, the smallest available size is selected and split; one of the resulting parts is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy; if both are free, they are combined and placed in the correspondingly larger-sized buddy-block list. Virtual memory is a method of decoupling the memory organization from the physical hardware; the applications operate on memory via virtual addresses. Each time an attempt to access stored data is made, the virtual memory system translates the virtual address to a physical address. In this way the addition of virtual memory enables granular control over memory systems and methods of access. In virtual memory systems the operating system limits how a process can access the memory. Even though the memory allocated for specific processes is normally isolated, processes sometimes need to be able to share information; shared memory is one of the fastest techniques for inter-process communication.
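The splitting and buddy-merging behavior described above can be sketched in a compact toy allocator. This is a teaching sketch, not production code: blocks are tracked as integer offsets, the class and method names are invented, and real allocators use far more efficient bookkeeping.

```python
class BuddyAllocator:
    """Toy buddy allocator over a heap of 2**total_order units."""

    def __init__(self, total_order):
        self.total_order = total_order
        # free_lists[k] holds start offsets of free blocks of size 2**k
        self.free_lists = {k: [] for k in range(total_order + 1)}
        self.free_lists[total_order].append(0)   # initially one big free block

    def alloc(self, order):
        k = order
        while k <= self.total_order and not self.free_lists[k]:
            k += 1                               # smallest sufficiently large block
        if k > self.total_order:
            return None                          # out of memory
        block = self.free_lists[k].pop()
        while k > order:                         # split until the requested size
            k -= 1
            self.free_lists[k].append(block + 2 ** k)  # upper half stays free
        return block

    def free(self, block, order):
        # Merge with the buddy as long as the buddy is also free.
        while order < self.total_order:
            buddy = block ^ (2 ** order)         # buddies differ in one address bit
            if buddy not in self.free_lists[order]:
                break
            self.free_lists[order].remove(buddy)
            block = min(block, buddy)
            order += 1
        self.free_lists[order].append(block)

heap = BuddyAllocator(4)        # manages 2**4 = 16 units
first = heap.alloc(2)           # a block of 2**2 = 4 units, at offset 0
second = heap.alloc(2)          # its buddy, at offset 4
heap.free(first, 2)
heap.free(second, 2)            # buddies re-merge back into one 16-unit block
```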
11.
Source code
–
In computing, source code is any collection of computer instructions, possibly with comments, written using a human-readable programming language, usually as ordinary text. The source code of a program is designed to facilitate the work of computer programmers. The source code is often transformed by an assembler or compiler into binary machine code understood by the computer; the machine code might then be stored for execution at a later time. Alternatively, source code may be interpreted and thus immediately executed. Most application software is distributed in a form that includes only executable files; if the source code were included, it would be useful to a user, programmer or system administrator. The Linux Information Project defines source code as the version of software as it is originally written by a human in plain text. The notion of source code may also be taken more broadly, to include machine code and notations in graphical languages. It is therefore sometimes construed as to include machine code and very high level languages. Often there are several steps of program translation or minification between the source code typed by a human and an executable program. The earliest programs for stored-program computers were entered in binary through the front panel switches of the computer; this first-generation programming language had no distinction between source code and machine code. When IBM first offered software to work with its machine, the source code was provided at no additional charge. At that time, the cost of developing and supporting software was included in the price of the hardware; for decades, IBM distributed source code with its software product licenses, until 1983. Most early computer magazines published source code as type-in programs. Source code can also be stored in a database or elsewhere. The source code for a piece of software may be contained in a single file or many files.
Though the practice is uncommon, a program's source code can be written in different programming languages. For example, a program written primarily in the C programming language might have portions written in assembly language. In some languages, such as Java, this can be done at run time. The code base of a programming project is the larger collection of all the source code of all the computer programs which make up the project. It has become common practice to maintain code bases in version control systems. Moderately complex software customarily requires the compilation or assembly of several, sometimes dozens or even hundreds, of different source code files. In these cases, instructions for compilation, such as a Makefile, are included with the source code.
12.
Formal language
–
In mathematics, computer science, and linguistics, a formal language is a set of strings of symbols together with a set of rules that are specific to it. The alphabet of a formal language is the set of symbols, letters, or tokens from which the strings of the language may be formed. The strings formed from this alphabet are called words, and the words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar, such as a regular grammar or context-free grammar. The field of formal language theory studies primarily the purely syntactical aspects of such languages, that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages. The first formal language is thought to be the one used by Gottlob Frege in his Begriffsschrift, literally meaning "concept writing". Axel Thue's early semi-Thue system, which can be used for rewriting strings, was influential on formal grammars. The elements of an alphabet are called its letters. Alphabets may be infinite; however, most definitions in formal language theory specify finite alphabets, and most results only apply to them. A word over an alphabet can be any finite sequence of letters. The set of all words over an alphabet Σ is usually denoted by Σ*. The length of a word is the number of letters it is composed of; for any alphabet there is exactly one word of length 0, the empty word. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words; the result of concatenating a word with the empty word is the original word. A formal language L over an alphabet Σ is a subset of Σ*, that is, a set of words over that alphabet. Sometimes the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of well-formed expressions.
In computer science and mathematics, which do not usually deal with natural languages, the adjective formal is often omitted as redundant. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages; for these, the notion of a formal grammar may be closer to the intuitive concept of a language. By an abuse of the definition, a formal language is often thought of as being equipped with a formal grammar that describes it. For example, the following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}: every nonempty string that does not contain + or = and does not start with 0 is in L; a string containing = is in L if and only if there is exactly one = and it separates two valid strings of L; a string containing + but not = is in L if and only if every + in the string separates two valid strings of L; no string is in L other than those implied by the previous rules.
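The notions of Σ*, the empty word, and membership in a language L can be illustrated with a short sketch (the particular language chosen here, words with equally many a's and b's, is an illustrative assumption, not the example from the text):

```python
# Sketch: enumerating words over a finite alphabet and testing
# membership in a simple formal language.
from itertools import product

def words_up_to(alphabet, n):
    """All words over `alphabet` of length 0..n; length 0 gives the empty word."""
    result = [""]
    for length in range(1, n + 1):
        result.extend("".join(p) for p in product(alphabet, repeat=length))
    return result

def in_language(w):
    """Hypothetical language L over {a, b}: equal counts of a's and b's."""
    return w.count("a") == w.count("b")

sigma_star_2 = words_up_to("ab", 2)               # a finite prefix of Σ*
L = [w for w in sigma_star_2 if in_language(w)]   # L is a subset of Σ*
```

Note that the empty word "" is in Σ* for every alphabet, and concatenating it with any word returns that word unchanged.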
13.
Theoretical computer science
–
It is not easy to circumscribe the theoretical areas of computer science precisely. Work in this field is often distinguished by its emphasis on mathematical technique and rigor; despite this broad scope, the theory people in computer science self-identify as different from the applied people. Some characterize themselves as doing the science underlying the field of computing; other theory-applied people suggest that it is impossible to separate theory and application. This means that the theory people regularly use experimental science done in less-theoretical areas such as software system research. It also means that there is more cooperation than mutually exclusive competition between theory and application. These developments have led to the study of logic and computability. Information theory was added to the field with Claude Shannon's 1948 paper A Mathematical Theory of Communication; in the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete, a landmark result in computational complexity theory. With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. Modern theoretical computer science research builds on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed. An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning; an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function.
The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications; for example, databases use B-tree indexes for retrieving small percentages of data, and compilers use hash tables for looking up identifiers. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms; some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage.
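The idea of an algorithm as a finite list of well-defined instructions that computes a function and halts can be made concrete with a classic example (Euclid's algorithm, chosen here for illustration, not drawn from the text above):

```python
# Sketch: Euclid's algorithm as an effective method -- a finite list of
# well-defined instructions that computes a function (the gcd) and halts.
def gcd(a, b):
    while b != 0:          # each state transition here is deterministic
        a, b = b, a % b    # replace the state (a, b) by (b, a mod b)
    return a
```

Because the second component strictly decreases, the procedure always terminates, which is exactly what makes it an effective method rather than an open-ended process.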
14.
Symbol (formal)
–
A logical symbol is a fundamental concept in logic, tokens of which may be marks or a configuration of marks which form a particular pattern. In logic, symbols are used as literal tools to illustrate ideas; symbols of a formal language need not be symbols of anything. For instance, there are logical constants which do not refer to any idea. Symbols of a formal language must be capable of being specified without any reference to any interpretation of them. A symbol or string of symbols may comprise a well-formed formula if it is consistent with the formation rules of the language. In a formal system a symbol may be used as a token in formal operations. The set of symbols in a formal language is referred to as an alphabet. A formal symbol as used in first-order logic may be a variable, a constant, a function symbol, or a predicate symbol. Formal symbols are thought of as purely syntactic structures, composed into larger structures using a formal grammar. The move to view units in natural language as formal symbols was initiated by Noam Chomsky; the generative grammar model looked upon syntax as autonomous from semantics. "On this point I differ from a number of philosophers, but agree, I believe, with Chomsky", and this is the philosophical premise underlying Montague grammar. See also: List of mathematical symbols; List of logic symbols.
15.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as set in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as an undefined primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally; in the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times, for example, {11, 6, 6}. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, the set {11, 6, 6} is identical to the set {11, 6}.
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}.
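The two points above, that duplicate listings and element order do not matter, are mirrored directly by Python's built-in set type (a sketch for illustration; the particular values are arbitrary):

```python
# Sketch: duplicate listings and element order are irrelevant for sets.
A = {4, 2, 1, 3}              # extensional definition
C = {1, 2, 3, 4}              # same members, listed in a different order
repeated = {11, 6, 6, 11}     # members listed more than once

assert A == C                 # order is irrelevant
assert repeated == {6, 11}    # duplicates collapse: the sets are identical
assert len(repeated) == 2     # the set still has exactly two elements
```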
16.
Empty set
–
In mathematics, and more specifically set theory, the empty set is the unique set having no elements; its size or cardinality is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set; in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set. Null set was once a synonym for empty set, but is now a technical term in measure theory. The empty set may also be called the void set. Common notations for the empty set include { }, ∅, and ∅. The latter two symbols were introduced by the Bourbaki group in 1939, inspired by the letter Ø in the Danish and Norwegian alphabets. Although now considered an improper use of notation, in the past 0 was occasionally used as a symbol for the empty set. The empty-set symbol ∅ is found at Unicode point U+2205; in LaTeX, it is coded as \emptyset for ∅ or \varnothing for ∅. In standard axiomatic set theory, by the principle of extensionality, two sets are equal if they have the same elements; hence there is but one empty set, and we speak of the empty set rather than an empty set. The mathematical symbols employed below are explained here. In this context, zero is modelled by the empty set. For any property: for every element of ∅ the property holds (vacuously); there is no element of ∅ for which the property holds. Conversely, if for some property and some set V the two statements hold (for every element of V the property holds; there is no element of V for which the property holds), then V must be empty. By the definition of subset, the empty set is a subset of any set A; that is, every element x of ∅ belongs to A. Indeed, since there are no elements of ∅ at all, there is no element of ∅ that is not in A. Any statement that begins "for every element of ∅" is not making any substantive claim, and this is often paraphrased as "everything is true of the elements of the empty set." When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set is zero; the reason for this is that zero is the identity element for addition.
Similarly, the product of the elements of the empty set should be considered to be one, since one is the identity element for multiplication. A derangement of a set is a permutation of the set that leaves no element in the same position; the empty set is a derangement of itself, as no element can be found that retains its original position. Since the empty set has no members, when it is considered as a subset of any ordered set, every member of that set will be an upper bound and a lower bound for the empty set. For example, when the empty set is considered as a subset of the real numbers, with their usual ordering, represented by the real number line, every real number is both an upper and a lower bound for it.
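The vacuous-truth and empty-sum/empty-product conventions above can be checked directly in a short sketch (standard-library behavior, used here purely for illustration):

```python
# Sketch: vacuous truth and empty sums/products for the empty set.
import math

empty = set()
assert all(x > 0 for x in empty)       # "for every element ..." holds vacuously
assert not any(x > 0 for x in empty)   # "there is an element ..." fails
assert sum(empty) == 0                 # empty sum: the additive identity
assert math.prod(empty) == 1           # empty product: the multiplicative identity
assert empty <= {1, 2, 3}              # the empty set is a subset of any set
```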
17.
Length
–
In geometric measurements, length is the most extended dimension of an object. In the International System of Quantities, length is any quantity with the dimension of distance. In other contexts, length is the measured dimension of an object; for example, it is possible to cut a length of wire which is shorter than the wire's thickness. Length may be distinguished from height, which is vertical extent, and width or breadth, which is the distance from side to side. Length is a measure of one dimension, whereas area is a measure of two dimensions and volume is a measure of three dimensions. In most systems of measurement, the unit of length is a base unit. Measurement has been important ever since humans settled from nomadic lifestyles and started using building materials, occupying land, and trading with neighbours. As society has become more technologically oriented, much higher accuracies of measurement are required in an increasingly diverse set of fields. One of the oldest units of measurement used in the ancient world was the cubit, which was the length of the arm from the tip of the finger to the elbow. This could then be subdivided into shorter units like the foot, hand, or finger; the cubit could vary considerably due to the different sizes of people. After Albert Einstein's special relativity, length can no longer be thought of as being constant in all reference frames. Thus a ruler that is one metre long in one frame of reference will not be one metre long in a frame that is travelling at a velocity relative to the first frame; this means the length of an object is variable depending on the observer. In the physical sciences and engineering, when one speaks of units of length, the word length is synonymous with distance. There are several units that are used to measure length. In the International System of Units, the basic unit of length is the metre, now defined in terms of the speed of light. The centimetre and the kilometre, derived from the metre, are commonly used units. In U.S. customary units, the English or Imperial system of units, commonly used units of length are the inch, the foot, the yard, and the mile. Units used to denote distances in the vastness of space, as in astronomy, are much longer than those typically used on Earth and include the astronomical unit and the light-year. See also: Dimension; Distance; Orders of magnitude (length); Reciprocal length; Smoot; Unit of length.
18.
Natural number
–
In mathematics, the natural numbers are those used for counting and ordering. In common language, words used for counting are cardinal numbers and words used for ordering are ordinal numbers. Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, but in other writings that term is used instead for the integers. These chains of extensions make the natural numbers canonically embedded in the other number systems. Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics. The most primitive method of representing a natural number is to put down a mark for each object. Later, a set of objects could be tested for equality, excess, or shortage by striking out a mark for each object in the set. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones, and similarly for the number 4,622. A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation dates back as early as 700 BC by the Babylonians. The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628. The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes.
Some Greek mathematicians treated the number 1 differently than larger numbers. Independent studies also occurred at around the same time in India, China, and Mesoamerica. In 19th-century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. A school of Naturalism stated that the natural numbers were a direct consequence of the human psyche. Henri Poincaré was one of its advocates, as was Leopold Kronecker, who summarized "God made the integers, all else is the work of man." In opposition to the Naturalists, the constructivists saw a need to improve the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural but a consequence of definitions. Later, two classes of such formal definitions were constructed; still later, they were shown to be equivalent in most practical applications. The second class of definitions was introduced by Giuseppe Peano and is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several weak systems of set theory.
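Grassmann's recursive style of definition can be sketched in a few lines, modelling natural numbers as Python integers for illustration (the function names are my own, not standard notation):

```python
# Sketch: Grassmann-style recursive definition of addition built only
# from the successor operation, with naturals modelled as Python ints.
def succ(n):
    """The successor of a natural number."""
    return n + 1

def add(a, b):
    # Defining equations:  a + 0 = a;   a + succ(b) = succ(a + b)
    if b == 0:
        return a
    return succ(add(a, b - 1))
```

Multiplication can be defined the same way on top of add, which is essentially how arithmetic is built up from the successor operation in Peano arithmetic.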
19.
Countable set
–
In mathematics, a countable set is a set with the same cardinality as some subset of the set of natural numbers. A countable set is either a finite set or a countably infinite set. Some authors use countable set to mean countably infinite alone; to avoid this ambiguity, the term at most countable may be used when finite sets are included, and countably infinite, enumerable, or denumerable otherwise. Georg Cantor introduced the term countable set, contrasting sets that are countable with those that are uncountable. Today, countable sets form the foundation of a branch of mathematics called discrete mathematics. A set S is countable if there exists an injective function f from S to the natural numbers N = {0, 1, 2, 3, ...}. If such an f can be found that is also surjective (and therefore bijective), then S is called countably infinite; in other words, a set is countably infinite if it has a one-to-one correspondence with the natural number set N. As noted above, this terminology is not universal: some authors use countable to mean what is here called countably infinite, and do not include finite sets. Alternative formulations of the definition in terms of a bijective function or a surjective function can also be given. In 1874, in his first set theory article, Cantor proved that the set of real numbers is uncountable. In 1878, he used one-to-one correspondences to define and compare cardinalities; in 1883, he extended the natural numbers with his infinite ordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities. A set is a collection of elements, and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted {3, 4, 5}. This is only effective for small sets, however; for larger sets, listing becomes time-consuming and error-prone. Even in this case, however, it is still possible to list all the elements, because the set is finite.
Some sets are infinite; these sets have more than n elements for any integer n. For example, the set of natural numbers, denotable by {0, 1, 2, 3, ...}, has infinitely many elements, and we cannot use any normal number to give its size. Nonetheless, it turns out that infinite sets do have a well-defined notion of size. To understand what this means, we first examine what it does not mean. For example, there are infinitely many odd integers, infinitely many even integers, and infinitely many integers overall. However, it turns out that the number of even integers is the same as the number of integers overall. This is because we can arrange things such that for every integer there is a distinct even integer; more generally, n → 2n. However, not all sets have the same cardinality.
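The pairing n → 2n can be sketched on a finite prefix of the natural numbers (the prefix length is arbitrary; the point is only that distinct inputs give distinct outputs):

```python
# Sketch: the pairing n -> 2n matches each natural number with a distinct
# even number, which is why the two infinite sets have the same cardinality.
def f(n):
    return 2 * n

naturals = list(range(10))            # a finite prefix, for illustration only
evens = [f(n) for n in naturals]      # the corresponding even numbers

# f is injective on this prefix: no two inputs share an output.
assert len(set(evens)) == len(naturals)
```

Extending the prefix never creates a collision, which is the intuition behind the one-to-one correspondence between the naturals and the even naturals.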
20.
Subset
–
In mathematics, especially in set theory, a set A is a subset of a set B, or equivalently B is a superset of A, if A is contained inside B, that is, all elements of A are also elements of B. The relationship of one set being a subset of another is called inclusion or sometimes containment; the subset relation defines a partial order on sets. The algebra of subsets forms a Boolean algebra in which the subset relation is called inclusion. For any set S, the inclusion relation ⊆ is a partial order on the set P(S) of all subsets of S, defined by A ≤ B ⟺ A ⊆ B. We may also partially order P(S) by reverse set inclusion by defining A ≤ B ⟺ B ⊆ A. When quantified, A ⊆ B is represented as ∀x (x ∈ A → x ∈ B). Some authors use the symbols ⊂ and ⊃ to mean the same as ⊆ and ⊇; for such authors, it is true of every set A that A ⊂ A. Other authors prefer to use the symbols ⊂ and ⊃ to indicate proper subset and superset, respectively; this usage makes ⊆ and ⊂ analogous to the inequality symbols ≤ and <. For example, if x ≤ y then x may or may not equal y, but if x < y, then x definitely does not equal y, and is less than y. Similarly, using the convention that ⊂ means proper subset, if A ⊆ B, then A may or may not equal B. The set A = {1, 2} is a proper subset of B = {1, 2, 3}; thus both expressions A ⊆ B and A ⊊ B are true. The set D = {1, 2, 3} is a subset of E = {1, 2, 3}; thus D ⊆ E is true, but D is not a proper subset of E. Any set is a subset of itself, but not a proper subset. The empty set, denoted by ∅, is also a subset of any given set X; it is also always a proper subset of any set except itself. The set of natural numbers is a proper subset of the set of rational numbers; this is an example in which both the subset and the whole set are infinite, and the subset has the same cardinality as the whole. The set of rational numbers is a proper subset of the set of real numbers; in this example, both sets are infinite, but the latter set has a larger cardinality than the former set. Another example can be shown in an Euler diagram. Inclusion is the canonical partial order in the sense that every partially ordered set is isomorphic to some collection of sets ordered by inclusion.
The ordinal numbers are a simple example: if each ordinal n is identified with the set [n] of all ordinals less than or equal to n, then a ≤ b if and only if [a] ⊆ [b]. For the power set P(S) of a set S, the inclusion partial order is the Cartesian product of k = |S| copies of the partial order on {0, 1} for which 0 < 1. This can be illustrated by enumerating S = {s1, s2, ..., sk} and associating with each subset T ⊆ S the k-tuple from {0, 1}^k of which the ith coordinate is 1 if and only if si is a member of T.
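The subset and proper-subset relations, and the 2^|S| size of the power set, can be sketched with Python's set operators (a minimal illustration; the helper name powerset is my own):

```python
# Sketch: subset vs. proper subset, and the power set P(S) with |P(S)| = 2^|S|.
from itertools import combinations

A, B = {1, 2}, {1, 2, 3}
assert A <= B and A < B          # A is a subset and a proper subset of B
assert B <= B and not (B < B)    # every set is a subset of itself, never proper

def powerset(s):
    items = sorted(s)
    return [set(c) for r in range(len(items) + 1)
                   for c in combinations(items, r)]

P = powerset({1, 2, 3})
assert len(P) == 2 ** 3          # 8 subsets, matching the bit-vector view above
```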
21.
Associative property
–
In mathematics, the associative property is a property of some binary operations. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed; that is, rearranging the parentheses in such an expression will not change its value. Consider the following equations: (2 + 3) + 4 = 2 + (3 + 4) = 9 and 2 × (3 × 4) = (2 × 3) × 4 = 24. Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that addition and multiplication of real numbers are associative operations. Associativity is not to be confused with commutativity, which addresses whether or not the order of two operands changes the result. For example, the order doesn't matter in the multiplication of real numbers. Associative operations are abundant in mathematics; in fact, many algebraic structures explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation, and the vector cross product. A binary operation on a set S is associative if (x y) z = x (y z) = xyz for all x, y, z in S. The associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)). If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law; thus the product can be written unambiguously as abcd. As the number of elements increases, the number of possible ways to insert parentheses grows quickly. Some examples of associative operations include the following. String concatenation is associative, as either grouping produces the same result. In arithmetic, addition and multiplication of real numbers are associative, i.e., (x + y) + z = x + (y + z) = x + y + z and (x y) z = x (y z) = x y z for all x, y, z ∈ R. Because of associativity, the grouping parentheses can be omitted without ambiguity. Addition and multiplication of complex numbers and quaternions are associative.
Addition of octonions is also associative, but multiplication of octonions is non-associative. The greatest common divisor and least common multiple functions act associatively: gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) = gcd(x, y, z) and lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) = lcm(x, y, z) for all x, y, z ∈ Z. Taking the intersection or the union of sets is associative: (A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C and (A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C for all sets A, B, C. Slightly more generally, given four sets M, N, P, and Q, with maps h: M → N, g: N → P, and f: P → Q, composition satisfies f ∘ (g ∘ h) = (f ∘ g) ∘ h; in short, composition of maps is always associative. On the other hand, consider a set with three elements, A, B, and C: a binary operation on it can fail to be associative, so that, for example, (A · B) · C need not equal A · (B · C).
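A brute-force check of the associative law on a small carrier set makes the contrast concrete (a sketch; the helper name associative and the test set are my own choices):

```python
# Sketch: checking the associative law (x op y) op z == x op (y op z)
# exhaustively on a small set of values.
from math import gcd

def associative(op, xs):
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a in xs for b in xs for c in xs)

nums = [1, 2, 3, 4, 6, 12]
assert associative(lambda a, b: a + b, nums)       # addition is associative
assert associative(gcd, nums)                      # gcd acts associatively
assert not associative(lambda a, b: a - b, nums)   # subtraction is not
```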
22.
Commutative property
–
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says 3 + 4 = 4 + 3 or 2 × 5 = 5 × 2, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it. The commutative property is a property associated with binary operations and functions. If the commutative property holds for a pair of elements under a binary operation, then the two elements are said to commute under that operation. The term commutative is used in several related senses. Putting on socks resembles a commutative operation, since which sock is put on first is unimportant; either way, the result, having both socks on, is the same. In contrast, putting on underwear and trousers is not commutative. The commutativity of addition is observed when paying for an item with cash: regardless of the order the bills are handed over in, they always give the same total. The multiplication of real numbers is commutative, since y z = z y for all y, z ∈ R; for example, 3 × 5 = 5 × 3. Some binary truth functions are also commutative, since the truth tables for the functions are the same when one changes the order of the operands. For example, the logical biconditional function p ↔ q is equivalent to q ↔ p; this function is also written as p IFF q, or as p ≡ q, or as Epq. Further examples of commutative binary operations include addition and multiplication of complex numbers, and addition and scalar multiplication of vectors. Concatenation, the act of joining character strings together, is a noncommutative operation. Rotating a book 90° around a vertical axis then 90° around a horizontal axis produces a different orientation than when the rotations are performed in the opposite order. The twists of the Rubik's Cube are noncommutative; this can be studied using group theory.
Other binary operations, such as matrix multiplication and function composition, are likewise non-commutative. Records of the use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have assumed the commutative property of multiplication in his book Elements.
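The contrast between commutative and noncommutative operations can be sketched with an exhaustive check on small inputs (the helper name commutes and the sample values are my own):

```python
# Sketch: checking whether op(a, b) == op(b, a) for all sampled pairs.
def commutes(op, xs):
    return all(op(a, b) == op(b, a) for a in xs for b in xs)

# Multiplication of numbers commutes; string concatenation does not,
# since "ab" + "c" gives "abc" while "c" + "ab" gives "cab".
assert commutes(lambda a, b: a * b, [2, 3, 5])
assert not commutes(lambda a, b: a + b, ["ab", "c"])
```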
23.
Monoid
–
In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single associative binary operation and an identity element. Monoids are studied in semigroup theory because they are semigroups with identity. Monoids occur in several branches of mathematics; for instance, they can be regarded as categories with a single object. Thus, they capture the idea of function composition within a set. In fact, all functions from a set into itself form naturally a monoid with respect to function composition. Monoids are also commonly used in computer science, both in its foundational aspects and in practical programming. The set of strings built from a given set of characters is a free monoid. The transition monoid and syntactic monoid are used in describing finite state machines, whereas trace monoids and history monoids provide a foundation for process calculi. One of the more important results in the study of monoids is the Krohn–Rhodes theorem. The history of monoids, as well as a discussion of additional general properties, is found in the article on semigroups. A monoid is a set S with an associative binary operation and an identity element: there exists an element e in S such that for every element a in S, e a = a e = a. In other words, a monoid is a semigroup with an identity element. It can also be thought of as a magma with associativity and identity. The identity element of a monoid is unique. A monoid in which each element has an inverse is a group. Depending on the context, the symbol for the operation may be omitted, so that the operation is denoted by juxtaposition; this notation does not imply that it is numbers being multiplied. A subset N of a monoid M that is closed under the operation and contains the identity is itself a monoid, a submonoid; N is thus a monoid under the binary operation inherited from M. If there is a generator set of M that has finite cardinality, then M is said to be finitely generated. Not every set S will generate a monoid, as the generated structure may lack an identity element. A monoid whose operation is commutative is called a commutative monoid; commutative monoids are often written additively.
Any commutative monoid is endowed with its algebraic preordering ≤, defined by x ≤ y if there exists z such that x + z = y. An order-unit of a commutative monoid M is an element u of M such that for any element x of M, there exists a positive integer n such that x ≤ nu. This is often used in case M is the positive cone of a partially ordered abelian group G. A monoid for which the operation is commutative for some, but not all, elements is a trace monoid; trace monoids commonly occur in the theory of concurrent computation.
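The free monoid of strings mentioned above makes a compact illustration: concatenation is associative and the empty string is the identity (the fold helper mconcat is my own name, not standard terminology):

```python
# Sketch: the free monoid of strings -- concatenation is associative
# and the empty string "" is the identity element.
from functools import reduce

def mconcat(ms, op, identity):
    """Fold a list of monoid elements; an empty list yields the identity."""
    return reduce(op, ms, identity)

concat = lambda a, b: a + b
assert ("a" + "b") + "c" == "a" + ("b" + "c")        # associativity
assert "" + "xyz" == "xyz" + "" == "xyz"             # "" is the identity
assert mconcat(["ab", "c", "d"], concat, "") == "abcd"
assert mconcat([], concat, "") == ""                 # empty fold gives identity
```

This fold pattern is one reason monoids appear in practical programming: any list of monoid elements can be combined in a single well-defined way.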
24.
Partially ordered set
–
In mathematics, especially order theory, a partially ordered set (or poset) formalizes and generalizes the intuitive concept of an ordering, sequencing, or arrangement of the elements of a set. A poset consists of a set together with a binary relation indicating that, for certain pairs of elements in the set, one of the elements precedes the other in the ordering. The word partial in the names partial order and partially ordered set is used as an indication that not every pair of elements needs to be comparable. That is, there may be pairs of elements for which neither element precedes the other in the poset. Partial orders thus generalize total orders, in which every pair is comparable. To be a partial order, a binary relation must be reflexive, antisymmetric, and transitive. One familiar example of a partially ordered set is a collection of people ordered by genealogical descendancy. Some pairs of people bear the descendant-ancestor relationship, but other pairs of people are incomparable. A poset can be visualized through its Hasse diagram, which depicts the ordering relation. A partial order is a binary relation ≤ over a set P satisfying particular axioms which are discussed below. When a ≤ b, we say that a is related to b. The axioms for a partial order state that the relation ≤ is reflexive, antisymmetric, and transitive. That is, for all a, b, and c in P, it must satisfy: a ≤ a (reflexivity); if a ≤ b and b ≤ a, then a = b (antisymmetry); if a ≤ b and b ≤ c, then a ≤ c (transitivity). In other words, a partial order is an antisymmetric preorder. A set with a partial order is called a partially ordered set. The term ordered set is also used, as long as it is clear from the context that no other kind of order is meant. In particular, totally ordered sets can also be referred to as ordered sets. For elements a, b of a partially ordered set P, if a ≤ b or b ≤ a, then a and b are comparable; otherwise they are incomparable. In the Hasse diagram of the subsets of {x, y, z} ordered by inclusion, for example, {x} and {x, y, z} are comparable, while {x} and {y} are not. A partial order under which every pair of elements is comparable is called a total order or linear order; a totally ordered set is also called a chain. A subset of a poset in which no two distinct elements are comparable is called an antichain.
A more concise definition of the covering relation can be given using the strict order < corresponding to ≤: a is covered by b if a < b and there is no element c with a < c < b. For example, in the poset of subsets of {x, y, z} ordered by inclusion, {x} is covered by {x, z}, but not by {x, y, z}. Standard examples of posets arising in mathematics include the real numbers ordered by the standard less-than-or-equal relation ≤, and the set of subsets of a given set ordered by inclusion.
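The three partial-order axioms can be verified exhaustively on a small example; divisibility on a set of integers is used here as an illustrative choice (not an example drawn from the text above):

```python
# Sketch: divisibility as a partial order on a small set -- it is
# reflexive, antisymmetric, and transitive, but not total.
P = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0   # a <= b  iff  a divides b

reflexive = all(leq(a, a) for a in P)
antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b
                    for a in P for b in P)
transitive = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                 for a in P for b in P for c in P)
assert reflexive and antisymmetric and transitive

# 2 and 3 are incomparable under divisibility, so the order is partial,
# not total: neither divides the other.
assert not leq(2, 3) and not leq(3, 2)
```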
25.
Palindrome
–
A palindrome is a word, phrase, number, or other sequence of characters which reads the same backward as forward, such as madam or racecar. Sentence-length palindromes may be written when allowances are made for adjustments to capital letters, punctuation, and word dividers, such as "A man, a plan, a canal - Panama" or "Was it a car or a cat I saw?". Composing literature in palindromes is an example of constrained writing. The word palindrome was coined by the English playwright Ben Jonson in the 17th century from the Greek roots palin (again) and dromos (way, direction). Palindromes date back at least to 79 AD, as a palindrome was found as a graffito at Herculaneum. This palindrome, called the Sator Square, consists of a sentence written in Latin: Sator Arepo Tenet Opera Rotas. It is remarkable for the fact that the first letters of each word form the first word, the second letters form the second word, and so forth. Hence, it can be arranged into a word square that reads in four different ways. As such, they can be referred to as palindromatic. The palindromic Latin riddle "In girum imus nocte et consumimur igni" describes the behavior of moths. It is likely that this palindrome is from medieval rather than ancient times. Byzantine Greeks often inscribed the palindrome "Wash sins, not only face" (ΝΙΨΟΝ ΑΝΟΜΗΜΑΤΑ ΜΗ ΜΟΝΑΝ ΟΨΙΝ) on baptismal fonts; this practice was continued in many English churches. Some well-known English palindromes are "Able was I ere I saw Elba", "A man, a plan, a canal - Panama", "Madam, I'm Adam", and "Never odd or even". English palindromes of notable length include mathematician Peter Hilton's "Doc, note: I dissent. A fast never prevents a fatness. I diet on cod" and Scottish poet Alastair Reid's "T. Eliot, top bard, notes putrid tang emanating, is sad; I'd assign it a name: gnat dirt upset on drab pot toilet." The most familiar palindromes in English are character-unit palindromes, where the characters read the same backward as forward.
Some examples of palindromic words are redivider, noon, civic, radar, level, rotor, kayak, reviver, racecar, redder, and madam. There are also word-unit palindromes, in which the unit of reversal is the word. Word-unit palindromes were made popular in the recreational linguistics community by J. A. Lindon in the 1960s; occasional examples in English were created in the 19th century, and several in French and Latin date to the Middle Ages. Palindromes often consist of a sentence or phrase, e.g. "Mr. Owl ate my metal worm", "Was it a cat I saw?", or "Go hang a salami, I'm a lasagna hog". Punctuation, capitalization, and spaces are usually ignored; some, such as "Rats live on no evil star" and "Live on time, emit no evil", include the spaces. Semordnilap is a name coined for words that spell a different word in reverse (for example, desserts spelled backward is stressed).
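The two kinds of palindrome described above can be checked mechanically. The sketch below, a minimal illustration in Python (the function names are hypothetical, chosen for this example), implements the usual conventions: the character-unit check ignores punctuation, capitalization, and spaces, while the word-unit check reverses the sequence of words.

```python
import string

def is_char_palindrome(text: str) -> bool:
    """Character-unit check: ignore punctuation, capitalization, and spaces."""
    letters = [c.lower() for c in text if c.isalnum()]
    return letters == letters[::-1]

def is_word_palindrome(text: str) -> bool:
    """Word-unit check: the sequence of words reads the same in reverse."""
    words = [w.strip(string.punctuation).lower() for w in text.split()]
    return words == words[::-1]

print(is_char_palindrome("A man, a plan, a canal - Panama"))  # True
print(is_word_palindrome("Fall leaves after leaves fall"))    # True
```

Keeping spaces in the character-unit check would reject "A man, a plan, a canal - Panama", which is exactly why the usual convention strips them first.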
26.
Order theory
–
Order theory is a branch of mathematics which investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions; a list of order-theoretic terms can be found in the order theory glossary. Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers, e.g. "2 is less than 3" or "10 is greater than 5". This intuitive concept can be extended to orders on other sets of numbers, such as the integers. The idea of being greater than or less than another number is one of the basic intuitions of number systems in general. Other familiar examples of orderings are the alphabetical order of words in a dictionary and the genealogical property of lineal descent within a group of people. The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence or relative quantity; in other contexts orders may capture notions of containment or specialization. Abstractly, this type of order amounts to the subset relation, e.g. "pediatricians are physicians". However, in many other orders not every pair of elements can be compared: orders like the subset-of relation, for which there exist incomparable elements, are called partial orders, while orders for which every pair of elements is comparable are total orders. Order theory captures the intuition of orders that arises from such examples in a general setting. This is achieved by specifying properties that a relation ≤ must have to be a mathematical order. These insights can then be transferred to many less abstract applications, and this more abstract approach makes sense because one can derive numerous theorems in the general setting. Driven by the wide usage of orders, numerous special kinds of ordered sets have been defined. In addition, order theory does not restrict itself to the classes of ordering relations, but also considers appropriate functions between them.
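The distinction between partial and total orders comes down to comparability, and it can be seen concretely with Python's built-in set and integer comparisons (a minimal sketch; the two sets chosen here are arbitrary examples):

```python
# Subset-of is a partial order: some pairs of elements are incomparable.
a, b = {1, 2}, {2, 3}
print(a <= b, b <= a)  # False False: neither set contains the other

# Integers under <= form a total order: any two elements are comparable.
print(2 <= 3 or 3 <= 2)  # True
```

For sets, Python's <= means "is a subset of", so two sets that overlap without containment, like {1, 2} and {2, 3}, exhibit exactly the incomparability that makes inclusion a partial rather than total order.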
A simple example of an order-theoretic property for functions comes from analysis, where monotone functions are frequently found. This section introduces ordered sets by building upon the concepts of set theory, arithmetic, and binary relations. Suppose that P is a set and that ≤ is a relation on P; ≤ is a partial order if it is reflexive, antisymmetric, and transitive. A set with a partial order on it is called a partially ordered set, poset, or just an ordered set if the intended meaning is clear. By checking these properties, one sees that the well-known orders on natural numbers, integers, and rational numbers are all orders in the above sense.
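On a finite set, the three defining properties of a partial order can be checked by brute force. The sketch below, a hypothetical helper assuming a relation given as a two-argument predicate, verifies that divisibility on {1, ..., 6} is a partial order while strict less-than is not (it fails reflexivity):

```python
from itertools import product

def is_partial_order(elements, leq) -> bool:
    """Check reflexivity, antisymmetry, and transitivity of leq on a finite set."""
    elements = list(elements)
    reflexive = all(leq(x, x) for x in elements)
    antisymmetric = all(x == y or not (leq(x, y) and leq(y, x))
                        for x, y in product(elements, repeat=2))
    transitive = all(leq(x, z) or not (leq(x, y) and leq(y, z))
                     for x, y, z in product(elements, repeat=3))
    return reflexive and antisymmetric and transitive

nums = range(1, 7)
print(is_partial_order(nums, lambda a, b: b % a == 0))  # True: divisibility
print(is_partial_order(nums, lambda a, b: a < b))       # False: < is not reflexive
```

Divisibility is also a natural example of a partial order that is not total: 2 and 3 divide neither each other, so they are incomparable.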