1.
Semantic unification
–
Semantic unification, in philosophy, linguistics, and computer science, is the process of unifying lexically different concept representations that are judged to have the same semantic content. In business processes, semantic unification is defined as “the mapping of two expressions onto an expression in a format which is equivalent to the given expression”. Semantic unification has a history in fields like philosophy and linguistics, where it has been used in research areas such as grammar unification, and it has since been applied to the fields of business processes and workflow management. Petri introduced the term “pragmatic semantic unification” to refer to approaches in which the results are tested against an application using the semantic mappings; in this pragmatic approach, the accuracy of a mapping is less important than its usability. In general, semantic unification in business processes is the process of finding a common unified concept that maps two lexicalized expressions onto the same interpretation.

2.
Text Encoding Initiative
–
The Text Encoding Initiative (TEI) is a text-centric community of practice in the academic field of digital humanities, operating continuously since the 1980s. The community currently runs a mailing list, meetings and a conference series, and maintains a technical standard, a journal, a wiki, and a SourceForge repository. The TEI Guidelines, which define an XML format, are the defining output of the community of practice. The format differs from other open formats for text in that it is primarily semantic rather than presentational. It covers some 500 different textual components and concepts, each grounded in one or more academic disciplines. The standard is split into two parts: a discursive textual description with extended examples and discussion, and a set of tag-by-tag definitions. Schemata in most of the supported formats are generated automatically from the tag-by-tag definitions. A number of tools support the production of the guidelines and the application of the guidelines to specific projects. Most users of the format do not use the full range of tags but produce a customisation using a project-specific subset of the tags. The TEI defines a sophisticated customization mechanism known as ODD for this purpose. In addition to documenting and describing each TEI tag, an ODD specification specifies its content model and other usage constraints, which may be expressed using Schematron. TEI Lite is an example of such a customization: it defines an XML-based file format for exchanging texts and is a selection from the extensive set of elements available in the full TEI Guidelines. The text of the TEI Guidelines is rich in examples, and there is also a samples page on the TEI wiki which gives examples of real-world projects that expose their underlying TEI. TEI allows texts to be marked up syntactically at any level of granularity, for example into sentences and clauses.
TEI also has tags for marking up verse, which the guidelines illustrate with a sonnet. The choice tag is used to represent sections of text which might be encoded or tagged in more than one possible way. In an example based on one in the standard, choice is used twice, once to indicate an original reading and once a corrected or regularised one. One Document Does it all (ODD) is a literate programming language for XML schemas. In literate-programming style, ODD documents combine human-readable documentation and machine-readable models using the Documentation Elements module of the Text Encoding Initiative. Tools generate localised and internationalised HTML, ePub, or PDF human-readable output, as well as DTDs, W3C XML Schema, and Relax NG Compact Syntax. ODD is the format used internally by the Text Encoding Initiative for their eponymous technical standard. Although ODD files generally describe the difference between a customized XML format and the full TEI model, ODD can also be used to describe XML formats that are separate from the TEI.
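As a minimal sketch of the choice mechanism (an illustrative fragment, not taken verbatim from the guidelines), orig/reg can pair an original spelling with a regularised one, and sic/corr an apparent error with its correction:

```xml
<p>
  <s>The <choice>
      <orig>colour</orig>
      <reg>color</reg>
    </choice> plates were approved by the
    <choice>
      <sic>commitee</sic>
      <corr>committee</corr>
    </choice>.</s>
</p>
```

A processor or stylesheet can then select whichever member of each choice suits its audience, e.g. original spellings for a diplomatic edition and regularised ones for a reading edition.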

3.
Head-driven phrase structure grammar
–
Head-driven phrase structure grammar (HPSG) is a highly lexicalized, constraint-based grammar developed by Carl Pollard and Ivan Sag. It is a type of phrase structure grammar, as opposed to a dependency grammar. HPSG draws from other fields such as computer science and uses Ferdinand de Saussure's notion of the sign. It uses a uniform formalism and is organized in a way which makes it attractive for natural language processing. An HPSG grammar includes principles and grammar rules, as well as lexicon entries, which are normally not considered to belong to a grammar. The formalism is based on lexicalism: the lexicon is more than just a list of entries; it is in itself richly structured. Individual entries are marked with types. Early versions of the grammar were very lexicalized, with few grammatical rules; more recent research has tended to add more and richer rules. The basic type HPSG deals with is the sign. Words and phrases are two different subtypes of sign. A word has two features, PHON (the phonological form) and SYNSEM (the syntactic and semantic information), both of which are split into subfeatures. Signs and rules are formalized as typed feature structures. HPSG generates strings by combining signs, which are defined by their location within a type hierarchy and by their internal feature structure, represented by attribute value matrices (AVMs). Features take types or lists of types as their values, and grammatical rules are largely expressed through the constraints signs place on one another. A sign's feature structure describes its phonological, syntactic, and semantic properties. In common notation, AVMs are written with features in upper case and types in italicized lower case. Numbered indices in an AVM represent token-identical values. In the simplified AVM for the word walks below, the verb's categorical information is divided into features that describe it and features that describe its arguments.
Walks is a sign of type word with a head of type verb. As an intransitive verb, walks has no complement but requires a subject that is a third person singular noun. The semantic value of the subject is co-indexed with the verb's only argument. The following AVM for she represents a sign with a SYNSEM value that could fulfill those requirements. Signs of type phrase unify with one or more children and propagate information upward. The following AVM encodes the immediate dominance rule for a head-subj-phrase, which requires two children: the head child and a non-head child that fulfills the verb's SUBJ constraints. The end result is a sign with a verb head and empty subcategorization features. Although the actual grammar of HPSG is composed entirely of feature structures, various parsers based on the HPSG formalism have been written and optimizations are currently being investigated. An example of a system analyzing German sentences is provided by the Freie Universität Berlin; in addition, the CoreGram project of the Grammar Group of the Freie Universität Berlin provides open-source grammars that were implemented in the TRALE system.
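The unification of feature structures that drives HPSG can be sketched with plain dictionaries. The feature names below (HEAD, PER, NUM) echo the walks example above, but this is an illustrative toy, not the HPSG formalism itself: real HPSG uses typed feature structures with a type hierarchy, structure sharing, and list-valued features.

```python
def unify(a, b):
    """Recursively unify two feature structures represented as dicts.

    Atomic values must match exactly; a feature present in only one
    structure is compatible with the other.  Returns the merged
    structure, or None on a feature clash."""
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, value in b.items():
            if key in result:
                merged = unify(result[key], value)
                if merged is None:      # clash somewhere below this feature
                    return None
                result[key] = merged
            else:
                result[key] = value
        return result
    return a if a == b else None

# 'walks' constrains its subject to be a third person singular noun ...
walks_subj = {"HEAD": "noun", "PER": "3rd", "NUM": "sg"}
# ... and 'she' supplies exactly that, so unification succeeds:
she = {"HEAD": "noun", "PER": "3rd", "NUM": "sg", "GEND": "fem"}
print(unify(walks_subj, she))
# A plural pronoun clashes on NUM, so unification fails:
they = {"HEAD": "noun", "PER": "3rd", "NUM": "pl"}
print(unify(walks_subj, they))  # None
```

Failure of unification is how such a grammar rules out *she walk: the constraint the verb places on its subject simply cannot be merged with the subject's own feature structure.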

4.
Directed acyclic graph
–
In mathematics and computer science, a directed acyclic graph (DAG) is a finite directed graph with no directed cycles. Equivalently, a DAG is a directed graph that has a topological ordering, a sequence of the vertices such that every edge is directed from earlier to later in the sequence. DAGs can model many different kinds of information. For example, topological orderings of DAGs can be used to order the compilation operations in a makefile, and the program evaluation and review technique (PERT) uses DAGs to model the milestones and activities of large human projects and to schedule these projects to use as little total time as possible. DAGs also model the combinational logic blocks in electronic circuit design and the operations in dataflow programming languages. More abstractly, the reachability relation in a DAG forms a partial order. The corresponding concept for undirected graphs is a forest, an undirected graph without cycles. Choosing an orientation for a forest produces a kind of directed acyclic graph called a polytree. However, there are other kinds of directed acyclic graph that are not formed by orienting the edges of an undirected acyclic graph. Moreover, every undirected graph has an acyclic orientation, an assignment of a direction for its edges that makes it into a directed acyclic graph. To emphasize that DAGs are not the same thing as directed versions of undirected acyclic graphs, some authors call them acyclic directed graphs or acyclic digraphs. A graph is formed by a collection of vertices and edges; in the case of a directed graph, each edge has an orientation, from one vertex to another vertex. A directed acyclic graph is a directed graph that has no directed cycles. A vertex v of a graph is said to be reachable from another vertex u when there exists a path that starts at u and ends at v. As a special case, every vertex is considered to be reachable from itself. A graph that has a topological ordering cannot have any cycles, because the edge into the earliest vertex of a cycle would have to be oriented the wrong way. Therefore, every graph with a topological ordering is acyclic.
Conversely, every directed acyclic graph has a topological ordering. Therefore, this property can be used as an alternative definition of the directed acyclic graphs: they are exactly the graphs that have topological orderings. The reachability relationship in any directed acyclic graph can be formalized as a partial order ≤ on the vertices of the DAG. Different DAGs may give rise to the same reachability relation: for example, the DAG with two edges a → b and b → c has the same reachability relation as the graph with three edges a → b, b → c, and a → c.

5.
Phrase structure grammar
–
The term phrase structure grammar applies, in a broad sense, to any grammar based on the constituency relation. Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In the broader sense, phrase structure grammars are also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars. The fundamental trait that these frameworks all share is that they view sentence structure in terms of the constituency relation. The constituency relation derives from the subject-predicate division of Latin and Greek grammars, which is based on term logic and reaches back to Aristotle in antiquity. Basic clause structure is understood in terms of a binary division of the clause into subject and predicate. This binary division results in a one-to-one-or-more correspondence: for each element in a sentence, there are one or more nodes in the structure that one assumes for that sentence. A two-word sentence such as Luke laughed necessarily implies three nodes in the structure: one for the noun Luke, one for the verb laughed, and one for the whole sentence Luke laughed. Constituency grammars all view sentence structure in terms of this one-to-one-or-more correspondence. By the time of Gottlob Frege, a competing understanding of the logic of sentences had arisen. Frege rejected the binary division of the sentence and replaced it with an understanding of sentence logic in terms of predicates and their arguments. On this alternative conception of logic, the binary division of the clause into subject and predicate was not possible, and it therefore opened the door to the dependency relation. The dependency relation is a one-to-one correspondence: for every element in a sentence, there is just one node in the syntactic structure. The distinction is thus a graph-theoretical distinction: the dependency relation restricts the number of nodes in the syntactic structure of a sentence to the exact number of syntactic units that that sentence contains.
Thus the two-word sentence Luke laughed implies just two syntactic nodes, one for Luke and one for laughed. Other grammars generally avoid attempts to group syntactic units into clusters in a manner that would allow classification in terms of the constituency vs. dependency distinction. In this respect, the following grammar frameworks do not come down solidly on either side of the dividing line: Construction grammar and Cognitive grammar.
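The graph-theoretical difference can be made concrete by counting nodes. The tuple encoding below is purely illustrative: each tree is written as (label, child, child, ...), so a constituency analysis of Luke laughed adds a sentence node above the two words, while a dependency analysis attaches Luke directly under laughed.

```python
def count_nodes(tree):
    """Count the nodes of a tree encoded as (label, child, child, ...)."""
    label, *children = tree
    return 1 + sum(count_nodes(child) for child in children)

# Constituency: one-to-one-or-more -- a node per word plus the sentence node.
constituency = ("S", ("Luke",), ("laughed",))
# Dependency: one-to-one -- exactly one node per word.
dependency = ("laughed", ("Luke",))

print(count_nodes(constituency))  # 3
print(count_nodes(dependency))    # 2
```

The counts reproduce the claim in the text: two words yield three nodes under the constituency relation but only two under the dependency relation.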

6.
S-expression
–
In computing, s-expressions, sexprs or sexps (for "symbolic expression") are a notation for nested list data, invented for and popularized by the programming language Lisp, which uses them for source code as well as data. In the usual parenthesized syntax of Lisp, an s-expression is classically defined as either an atom or an expression of the form (x . y), where x and y are s-expressions. The second, recursive part of the definition represents an ordered pair, so that s-exprs are effectively binary trees. Most modern sexpr notations in use add an abbreviated notation to represent lists in s-expressions. In the Lisp family of programming languages, s-expressions are used to represent both source code and data. Other uses of s-expressions are in Lisp-derived languages such as DSSSL, and as mark-up in communications protocols like IMAP and John McCarthy's CBCL. The details of the syntax and supported data types vary in the different languages; there are many variants of the S-expression format, supporting a variety of different syntaxes for different datatypes. The most widely supported are lists and pairs; symbols, such as with-hyphen and a\ symbol\ with\ spaces; strings, such as "Hello, world!"; integers, such as -9876543210; and floating-point numbers, such as -0.0, 6.28318, and 6.023e23. The character # is often used to prefix extensions to the syntax, e.g. #x10 for hexadecimal integers, or #\C for characters. When representing source code in Lisp, the first element of an S-expression is commonly an operator or function name; this is called prefix notation or Polish notation. As an example, the Boolean expression written 4 == (2 + 2) in C is represented as (= 4 (+ 2 2)) in Lisp's s-expr-based prefix notation. As noted above, the precise definition of atom varies across Lisp-like languages. In either case, a character can typically be included in an atom by escaping it with a preceding backslash. The recursive case of the definition is traditionally implemented using cons cells. This means that Lisp is homoiconic: the representation of programs is also a data structure in a primitive type of the language itself.
Nested lists can be written as S-expressions: ((milk juice) (honey marmalade)) is a two-element S-expression whose elements are also two-element S-expressions. The whitespace-separated notation used in Lisp is typical, and line breaks usually qualify as separators. Program code can be written in S-expressions, usually using prefix notation. S-expressions can be read in Lisp using the function READ, which reads the textual representation of an s-expression and returns Lisp data. The function PRINT can be used to output an s-expression; the output can then be read back with READ, when all printed data objects have a readable representation. Lisp has readable representations for numbers, strings, symbols, and lists. Program code can be formatted as pretty-printed S-expressions using the function PPRINT. Lisp programs are valid s-expressions, but not all s-expressions are valid Lisp programs: (1.0 + 3.1) is a valid s-expression, but not a valid Lisp program, since Lisp uses prefix notation and a floating point number is not valid as an operation.
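The classical reader can be sketched in a few lines. This toy handles only symbols, integers, and nested lists (no strings, dotted pairs, or # extensions), and the function names are illustrative rather than part of any Lisp standard.

```python
def tokenize(text):
    """Split an s-expression string into parentheses and atoms."""
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Read one expression from a token list, consuming the tokens used."""
    token = tokens.pop(0)
    if token == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)                 # discard the closing ')'
        return lst
    try:
        return int(token)             # numeric atom
    except ValueError:
        return token                  # symbolic atom

expr = parse(tokenize("(= 4 (+ 2 2))"))
print(expr)  # ['=', 4, ['+', 2, 2]]
```

Reading the prefix-notation example from the text yields an ordinary nested list, which is exactly the homoiconicity point: the parsed program is itself a plain data structure.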

7.
Matrix (mathematics)
–
In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, a matrix with two rows and three columns has dimensions 2 × 3. The individual items in an m × n matrix A, often denoted a_{i,j} with 1 ≤ i ≤ m and 1 ≤ j ≤ n, are called its elements or entries. Provided that they have the same size, two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. The product of two matrices is a matrix that represents the composition of the two linear transformations. Another application of matrices is in the solution of systems of linear equations. If a matrix is square, it is possible to deduce some of its properties by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable from the matrix's eigenvalues. Applications of matrices are found in most scientific fields. In computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions, and matrices are used in economics to describe systems of economic relationships. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. Matrix decomposition methods simplify computations, both theoretically and practically, and algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other applications.
Infinite matrices occur in planetary theory and in atomic theory; a simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function. A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars, each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix:

    A = [ -1.3   0.6 ]
        [ 20.4   5.5 ]
        [  9.7  -6.2 ]
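The row-by-column multiplication rule and its size constraint can be sketched without any library; the matrices below are lists of rows and the helper name is illustrative.

```python
def mat_mul(A, B):
    """Multiply an m x n matrix by an n x p matrix (each a list of rows).

    The entry (i, j) of the product is the dot product of row i of A
    with column j of B, so the number of columns of A must equal the
    number of rows of B."""
    if len(A[0]) != len(B):
        raise ValueError("inner dimensions must agree")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# A 2 x 3 matrix times a 3 x 2 matrix yields a 2 x 2 matrix:
A = [[1, 0, 2],
     [0, 1, 1]]
B = [[1, 2],
     [3, 4],
     [5, 6]]
print(mat_mul(A, B))  # [[11, 14], [8, 10]]
```

Swapping the operands changes the result (and the shape), reflecting that matrix multiplication, like the composition of linear transformations it represents, is not commutative.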

8.
Tree (data structure)
–
Alternatively, a tree can be defined abstractly as a whole, as an ordered tree with a value assigned to each node. Both these perspectives are useful: a tree can be analyzed mathematically as a whole, or represented and manipulated node by node as a data structure made up of nodes (or vertices) and edges without any cycle. The tree with no nodes is called the null or empty tree; a tree that is not empty consists of a root node and potentially many levels of additional nodes that form a hierarchy.

Root: the top node in a tree.
Child: a node directly connected to another node when moving away from the root.
Parent: the converse notion of a child.
Siblings: a group of nodes with the same parent.
Descendant: a node reachable by repeated proceeding from parent to child.
Ancestor: a node reachable by repeated proceeding from child to parent.
Leaf: a node with no children.
Branch (internal) node: a node with at least one child.
Degree: the number of subtrees of a node.
Edge: the connection between one node and another.
Path: a sequence of nodes and edges connecting a node with a descendant.
Level: the level of a node is 1 + the number of connections between the node and the root.
Height of a node: the number of edges on the longest path between that node and a leaf.
Height of a tree: the height of its root node.
Depth: the number of edges from the root node to the node.
Forest: a set of n ≥ 0 disjoint trees.

There is a distinction between a tree as an abstract data type and as a concrete data structure, analogous to the distinction between a list and a linked list. To allow finite trees, one must either allow the list of children to be empty, or allow trees to be empty, in case the list of children is of fixed size. As a data structure, a tree is a group of nodes, where each node has a value and a list of references to other nodes. This data structure actually defines a directed graph, because it may have loops or several references to the same node.
Thus there is also the requirement that no two references point to the same node, and a tree that violates this is corrupt. For example, rather than an empty tree, one may have a null reference: a tree is always non-empty, but a reference to a tree may be null. In fact, every node other than the root must have exactly one parent, and the root must have no parent.
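The glossary terms above map directly onto a small node class; this is an illustrative sketch in which each node holds a value and a (possibly empty) list of children, so a leaf is simply a node whose child list is empty.

```python
class Node:
    """A tree node: a value plus a (possibly empty) list of children."""
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

    def height(self):
        """Edges on the longest downward path to a leaf (0 for a leaf)."""
        if not self.children:
            return 0
        return 1 + max(child.height() for child in self.children)

def depth(root, target, d=0):
    """Edges from the root down to the target node, or None if absent."""
    if root is target:
        return d
    for child in root.children:
        found = depth(child, target, d + 1)
        if found is not None:
            return found
    return None

leaf = Node("leaf")
tree = Node("root", [Node("branch", [leaf]), Node("other-leaf")])
print(tree.height())      # 2: the height of a tree is the height of its root
print(depth(tree, leaf))  # 2: edges from the root down to that leaf
print(leaf.height())      # 0: a leaf has no children
```

Note how height is a property of a node (computed downward toward the leaves) while depth is computed from the root, matching the two glossary definitions.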