1.
Region connection calculus
–
The region connection calculus (RCC) is intended to serve for qualitative spatial representation and reasoning. RCC abstractly describes regions by their possible relations to each other. For example, proper part (PP) is the union of TPP and NTPP. The composition table of RCC8 specifies which relations may hold between x and z given the relations between x and y and between y and z; * denotes the universal relation.

The RCC8 calculus is intended for reasoning about spatial configurations. Consider the following example: two houses are connected via a road. Each house is located on its own property. The first house possibly touches the boundary of its property; the second one surely does not. What can we infer about the relation of the properties to the road?

Other versions of the region connection calculus include RCC5 and RCC23. RCC8 has been partially implemented in GeoSPARQL.

References:
Randell, D. A.; Cui, Z.; Cohn, A. G. "A spatial logic based on regions and connection". Proc. Conf. on Knowledge Representation and Reasoning, Morgan Kaufmann, San Mateo, pp. 165–176, 1992.
Cohn, A. G.; Bennett, B.; Gooday, J.; Gotts, N. M. "Qualitative Spatial Representation and Reasoning with the Region Connection Calculus".
Renz, J. Qualitative Spatial Reasoning with Topological Information. Lecture Notes in Computer Science 2293, Springer-Verlag, 2002.
Dong, T. "A comment on RCC: From RCC to RCC⁺⁺". Journal of Philosophical Logic, Vol. 34, No. 2, pp. 319–352.

See also: Spatial relation, DE-9IM.
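As a sketch of how composition-based reasoning works, the following toy Python fragment encodes a small hand-picked subset of the RCC8 composition table (names such as RCC8_COMP and compose_sets are illustrative, not taken from any RCC implementation):

```python
# A small, hand-picked subset of the RCC8 composition table; the full
# table has 8 x 8 entries.  The universal relation * is spelled out.
UNIVERSAL = {'DC', 'EC', 'PO', 'TPP', 'NTPP', 'TPPi', 'NTPPi', 'EQ'}
RCC8_COMP = {
    ('TPP',  'TPP'):  {'TPP', 'NTPP'},
    ('TPP',  'NTPP'): {'NTPP'},
    ('NTPP', 'TPP'):  {'NTPP'},
    ('NTPP', 'NTPP'): {'NTPP'},
    ('DC',   'DC'):   UNIVERSAL,   # composing DC with DC constrains nothing
}

def compose_sets(rels_xy, rels_yz):
    """Possible relations R(x, z) given the relation sets R(x, y) and R(y, z)."""
    out = set()
    for r in rels_xy:
        for s in rels_yz:
            out |= RCC8_COMP[(r, s)]
    return out
```

For instance, with proper part PP = {TPP, NTPP}, composing PP with PP again yields a subset of PP, mirroring the transitivity of proper parthood.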

2.
DPLL algorithm
–
Especially in older publications, the Davis–Putnam–Logemann–Loveland algorithm is often referred to as the "Davis–Putnam method" or the "DP algorithm". Other common names that maintain the distinction are DLL and DPLL. After more than 50 years, the DPLL procedure still forms the basis for most efficient complete SAT solvers. It has recently been extended to automated theorem proving for fragments of first-order logic. The SAT problem is important from both theoretical and practical points of view; it has been a hot research topic for many years, and competitions between SAT solvers regularly take place. Modern DPLL-based implementations such as Chaff, zChaff, GRASP and MiniSat have placed at the top of these competitions in recent years.

The basic backtracking algorithm runs by choosing a literal, assigning a truth value to it, simplifying the formula, and then recursively checking whether the simplified formula is satisfiable; if not, the same recursive check is done assuming the opposite truth value. This is known as the splitting rule, as it splits the problem into two simpler sub-problems. The simplification step essentially removes all clauses that become true under the assignment from the formula, and all literals that become false from the remaining clauses.

DPLL improves on this backtracking with two rules. Unit propagation: if a clause contains only a single unassigned literal, it can only be satisfied by assigning the value that makes this literal true, so no choice is necessary; in practice, this often leads to deterministic cascades of units, thus avoiding a large part of the naive search space. Pure literal elimination: if a propositional variable occurs with only one polarity in the formula, its literals are pure and can always be assigned in a way that makes all clauses containing them true. Thus, these clauses do not constrain the search anymore and can be deleted.

Unsatisfiability of a given partial assignment is detected if one clause becomes empty, i.e. if all its variables have been assigned in a way that makes the corresponding literals false. Satisfiability of the formula is detected either when all variables are assigned without generating the empty clause or, in modern implementations, when all clauses are satisfied; unsatisfiability of the complete formula can only be detected after exhaustive search. In the pseudocode, the simplification functions replace every occurrence of l with "true" and every occurrence of not l with "false" in the formula Φ, and simplify the result; the "or" in the return statement is a short-circuiting operator.
Φ ∧ l denotes the simplified result of substituting "true" for l in Φ. The pseudocode DPLL function only returns whether the final assignment satisfies the formula or not; in a real implementation, the partial satisfying assignment is typically also returned on success. The algorithm depends on the choice of the branching literal, which is the literal considered in the backtracking step. As a result, this is not exactly an algorithm, but rather a family of algorithms, one for each way of choosing the branching literal. Efficiency is strongly affected by this choice: there exist instances for which the running time is constant or exponential depending on the choice of the branching literals. Such choice functions are also called heuristic functions or branching heuristics.

The algorithm was developed by Davis, Logemann and Loveland. Some of its properties: it is based on search; it is the basis for almost all modern SAT solvers; it does not use clause learning, and it backtracks chronologically. More recent work has improved the algorithm in two directions: defining new data structures to make the algorithm faster, especially the part on unit propagation, and defining variants of the backtracking algorithm. The latter direction includes non-chronological backtracking (backjumping) and clause learning; these refinements describe a method of backtracking after reaching a conflict clause which "learns" the root causes of the conflict in order to avoid reaching the same conflict again.
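The recursion described above can be sketched as follows. This is a minimal illustrative Python version with unit propagation, pure literal elimination and the splitting rule, not the original pseudocode:

```python
def dpll(clauses, assignment=frozenset()):
    """Minimal DPLL sketch.  `clauses` is a list of frozensets of integer
    literals, DIMACS style: variable v appears as v, its negation as -v.
    Returns a satisfying set of literals, or None if unsatisfiable."""
    if not clauses:
        return assignment                     # every clause satisfied
    if any(not c for c in clauses):
        return None                           # empty clause: conflict

    def condition(lit):
        # Simplify: drop satisfied clauses, strip the falsified literal.
        return [c - {-lit} for c in clauses if lit not in c]

    def assign(lit):
        return dpll(condition(lit), assignment | {lit})

    # Unit propagation: a one-literal clause forces its literal.
    for c in clauses:
        if len(c) == 1:
            return assign(next(iter(c)))

    # Pure literal elimination: a literal whose negation never occurs.
    literals = {l for c in clauses for l in c}
    for lit in literals:
        if -lit not in literals:
            return assign(lit)

    # Splitting rule: branch on some literal; "or" short-circuits.
    lit = next(iter(literals))
    return assign(lit) or assign(-lit)
```

The branching choice here (an arbitrary literal) is the simplest possible heuristic; real solvers replace it with far more sophisticated branching heuristics.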

3.
Test functions for optimization
–
In applied mathematics, test functions, also known as artificial landscapes, are useful to evaluate characteristics of optimization algorithms, such as convergence rate, precision, robustness and general performance. Here some test functions are presented with the aim of giving an idea of the different situations that optimization algorithms have to face when coping with these kinds of problems. In the first part, some objective functions for single-objective optimization cases are presented; in the second part, test functions with their respective Pareto fronts for multi-objective optimization problems (MOP) are given.

The artificial landscapes presented herein for single-objective optimization problems are taken from Bäck, Haupt et al. and from Rody Oldenhuis's software. Given the number of problems, just a few are presented here; the complete list of test functions can be found on the Mathworks website. The test functions used to evaluate the algorithms for MOP were taken from Deb, Binh et al.; one can download the software developed by Deb, which implements the NSGA-II procedure with GAs, or the program posted on the Internet, which implements the NSGA-II procedure with ES. For each function, just a general form of the equation, a plot and the boundaries of the object variables are given.

See also: Himmelblau's function, Rosenbrock function, Rastrigin function, Shekel function. External link: Virtual Library of Simulation Experiments: Test Functions and Datasets.
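As a concrete illustration, here are two of the classic single-objective landscapes mentioned above in a short Python sketch; the forms and minima are the standard ones:

```python
import math

def rastrigin(x, A=10.0):
    """Rastrigin function: highly multimodal; global minimum f(0, ..., 0) = 0."""
    return A * len(x) + sum(xi * xi - A * math.cos(2 * math.pi * xi) for xi in x)

def rosenbrock(x):
    """Rosenbrock function: a narrow curved valley; global minimum f(1, ..., 1) = 0."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```

Rastrigin's many regularly spaced local minima probe an algorithm's robustness against premature convergence, while Rosenbrock's flat curved valley probes its convergence rate once the valley is found.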

4.
Difference-map algorithm
–
The difference-map algorithm is a search algorithm for general constraint satisfaction problems. It is a meta-algorithm in the sense that it is built from more basic algorithms that perform projections onto constraint sets. From a mathematical perspective, the difference-map algorithm is a dynamical system based on a mapping of Euclidean space; solutions are encoded as fixed points of the mapping. Since its applications include NP-complete problems, the scope of the difference map is that of an incomplete algorithm: whereas incomplete algorithms can efficiently verify solutions, they cannot prove that a solution does not exist. The difference-map algorithm is a generalization of two methods, Fienup's hybrid input–output algorithm for phase retrieval and the Douglas–Rachford algorithm for convex optimization. Iterative methods in general have a long history in phase retrieval; the use of this style of algorithm for hard, non-convex problems is a more recent development.

The problem to be solved must first be formulated as a set intersection problem in Euclidean space: find a point common to two constraint sets A and B. Another prerequisite is an implementation of the projections P_A and P_B that, given an arbitrary point x, return the nearest point of A or B, respectively. In its simplest form (β = 1), one iteration of the difference map updates x ↦ x + P_A(2 P_B(x) − x) − P_B(x). At a fixed point, P_A(2 P_B(x) − x) = P_B(x); since the left-hand side is an element of A and the right-hand side is an element of B, this common value is a solution. Note that the fixed point x itself need not belong to either A or B, and the set of fixed points will typically have much higher dimension than the set of solutions. The progress of the algorithm can be monitored by inspecting the norm of the difference of the two projections, Δ = |P_A(2 P_B(x) − x) − P_B(x)|. When this vanishes, a point common to both constraint sets has been found and the algorithm can be terminated.

Incomplete algorithms, such as stochastic local search, are widely used for finding satisfying truth assignments to Boolean formulas. In the 2-SAT example that follows, the real variables x11, x21 and x41 correspond to the same Boolean variable or its negation, and it is convenient to associate the values 1 and −1 with TRUE and FALSE rather than the traditional 1 and 0.
In a satisfying assignment, the two variables in each row must avoid the combination (−1, −1), in which both literals are false; the corresponding constraint set, B, is thus a set of 3^4 = 81 points. In projecting to this constraint, the following operation is applied to each row: first, the two real values are rounded to 1 or −1; then, if the outcome is (−1, −1), the larger of the two original values is replaced by 1. It is an exercise to check that both of the projection operations described minimize the Euclidean distance between input and output values. At a fixed point the iterate is unchanged because the two projections agree, and from P_B we can read off the satisfying truth assignment: q1 = TRUE, q2 = FALSE, q3 = TRUE. In this simple 2-SAT example, the norm of the difference-map increment Δ decreased monotonically to zero in three iterations.
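The 2-SAT projections above are specific to that encoding. As a self-contained illustration of the β = 1 iteration itself, the following Python sketch runs the difference map on a toy two-set problem, the intersection of a circle (set A) and a line (set B); all names and parameters here are illustrative:

```python
import math

def project_circle(p, r=2.0):
    """Nearest point to p on the circle of radius r about the origin (set A)."""
    n = math.hypot(p[0], p[1])
    return (r, 0.0) if n == 0 else (p[0] * r / n, p[1] * r / n)

def project_line(p, c=1.0):
    """Nearest point to p on the horizontal line y = c (set B)."""
    return (p[0], c)

def difference_map(p, max_iter=10000, tol=1e-9):
    """beta = 1 difference map: x <- x + P_A(2 P_B(x) - x) - P_B(x)."""
    for _ in range(max_iter):
        bx, by = project_line(p)
        ax, ay = project_circle((2 * bx - p[0], 2 * by - p[1]))
        dx, dy = ax - bx, ay - by            # increment Delta
        p = (p[0] + dx, p[1] + dy)
        if math.hypot(dx, dy) < tol:         # projections agree: solution found
            return (bx, by)
    return None
```

Started from a generic point, the iteration drives Δ to zero and returns a point that lies (to within the tolerance) on both the circle and the line.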

5.
Backmarking
–
In constraint satisfaction, backmarking is a variant of the backtracking algorithm. Backmarking works like backtracking, iteratively evaluating variables in a fixed order. It improves over backtracking by maintaining two kinds of information: for each variable x_i and candidate value, how deep the consistency checks went the last time that value was tried; and, for each variable x_i, the index of the shallowest variable that has changed value since x_i was last evaluated. The second piece of information is updated every time another variable changes value: whenever a variable x_j changes, all variables x_i with i > j are considered in turn, and if k was their previously associated index, this value is changed to min(k, j).

The data collected this way is used to avoid some consistency checks. In particular, whenever backtracking would set x_i = a, two conditions allow the algorithm to determine partial consistency or inconsistency without checking the constraints: if the last attempt of x_i = a failed against a variable that has not changed since, the value can be rejected with no checks at all; and checks against variables that passed last time and have not changed since can be skipped. Contrary to other variants of backtracking, backmarking does not reduce the search space but only possibly reduces the number of constraint checks performed along a partial solution.
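A minimal Python sketch of backmarking under the description above; the table names mcl (maximum checking level) and mbl (minimum backup level) follow common textbook usage, and this is an illustration under those assumptions, not a reference implementation:

```python
def backmark(n, domains, conflict):
    """Backmarking search for a binary CSP (teaching sketch).

    n        -- number of variables
    domains  -- domains[i] is the list of candidate values for variable i
    conflict -- conflict(i, a, j, b) is True when x_i=a and x_j=b clash
    Returns one solution as a list of values, or None."""
    mcl = [{v: 0 for v in domains[i]} for i in range(n)]  # deepest level checked
    mbl = [0] * n            # shallowest level changed since last visit
    assignment = [None] * n

    def consistent(i, v):
        # Saving (a): v failed before at level mcl[i][v], and no variable
        # at or below that level has changed since, so it still fails.
        if mcl[i][v] < mbl[i]:
            return False
        # Saving (b): checks below mbl[i] succeeded before and those
        # variables are unchanged, so re-check only from mbl[i] upward.
        for k in range(mbl[i], i):
            mcl[i][v] = k
            if conflict(i, v, k, assignment[k]):
                return False
        mcl[i][v] = i - 1 if i > 0 else 0
        return True

    def solve(i):
        if i == n:
            return True
        for v in domains[i]:
            if consistent(i, v):
                assignment[i] = v
                for j in range(i + 1, n):   # x_i changed: record min(k, i)
                    mbl[j] = min(mbl[j], i)
                if solve(i + 1):
                    return True
        mbl[i] = i   # nothing shallower changed during this visit
        return False

    return assignment[:] if solve(0) else None
```

The search order and solutions are exactly those of plain backtracking; only the number of conflict(...) evaluations is reduced.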

6.
Interchangeability algorithm
–
In computer science, an interchangeability algorithm is a technique used to more efficiently solve constraint satisfaction problems (CSPs). If two variables A and B in a CSP may be swapped for each other without changing the nature of the problem or its solutions, then A and B are interchangeable variables. Interchangeable variables represent a symmetry of the CSP, and by exploiting that symmetry the search space can be reduced: for example, if solutions with A=1 and B=2 have been tried, then by interchange symmetry, solutions with B=1 and A=2 need not be investigated. The concept of interchangeability and the interchangeability algorithm in constraint satisfaction problems were first introduced by Eugene Freuder in 1991. The interchangeability algorithm reduces the search space of backtracking search algorithms, thereby improving the efficiency of solving NP-complete CSP problems.

Fully interchangeable: a value a for variable v is fully interchangeable with value b if and only if every solution in which v = a remains a solution when b is substituted for a, and vice versa. Fully substitutable: a value a for variable v is fully substitutable with value b if every solution in which v = a remains a solution when b is substituted for a (but not necessarily vice versa).

The neighborhood interchangeability algorithm finds neighborhood interchangeable values in a CSP; run for k steps as a preprocessor to simplify the subsequent backtrack search, it finds k-interchangeable values. The figure shows a graph coloring example with the available colors listed at each vertex; some of the colors available at a vertex Y are fully interchangeable by definition, so that, for example, substituting maroon for green in a solution that colors X orange yields another solution. In computer science, the interchangeability algorithm has been extensively used in the fields of artificial intelligence, graph coloring problems, abstraction frameworks and solution adaptation.
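A short Python sketch of computing neighborhood-interchangeable values for one variable of a binary CSP; the function name and the small coloring example are illustrative, not from Freuder's original presentation:

```python
def ni_classes(var, domains, neighbors, compatible):
    """Partition the values of `var` into neighborhood-interchangeable classes.

    domains    -- dict mapping each variable to its list of values
    neighbors  -- dict mapping each variable to the variables it constrains
    compatible -- compatible(x, a, y, b): True when x=a is consistent with y=b
    Two values are neighborhood interchangeable when, for every neighboring
    variable, they admit exactly the same set of consistent values."""
    def signature(v):
        return tuple(
            frozenset(b for b in domains[w] if compatible(var, v, w, b))
            for w in sorted(neighbors[var])
        )
    classes = {}
    for v in domains[var]:
        classes.setdefault(signature(v), []).append(v)
    return list(classes.values())
```

In a graph-coloring CSP where Y may take green, maroon or red and its only neighbor X may take red or blue, green and maroon leave X exactly the same options and so fall into one interchangeable class, while red does not.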