1.
Backjumping
–
In backtracking algorithms, backjumping is a technique that reduces the search space, thereby increasing efficiency. While backtracking always goes up one level in the search tree when all values for a variable have been tested, backjumping may go up more levels. In this article, a fixed order of evaluation of the variables x1, …, xn is used. When all values for xk+1 have failed, backtracking would go back to xk, changing its value if possible; however, the partial assignment x1, …, xk is not always necessary in full to prove that no value of xk+1 leads to a solution: often a prefix ending at some xj with j < k suffices. If the algorithm can prove this fact, it can directly reconsider a different value for xj instead of reconsidering xk as it would normally do; in this case, j is called a safe jump. The efficiency of a backjumping algorithm depends on how high it is able to backjump. Establishing whether a jump is safe is not always feasible, as safe jumps are defined in terms of the set of solutions; in practice, backjumping algorithms use the lowest index they can efficiently prove to be a safe jump. Different algorithms use different methods for determining whether a jump is safe, and these methods have different costs; a higher cost of finding a higher safe jump may be traded off against the reduced amount of search due to skipping parts of the search tree. The simplest condition in which backjumping is possible is when all values of a variable have been proved inconsistent without further branching. In constraint satisfaction, a partial evaluation is consistent if and only if it satisfies all constraints involving the assigned variables, and inconsistent otherwise. The condition in which all values of a variable xk+1 are inconsistent with the current partial solution x1 = a1, …, xk = ak happens exactly when xk+1 is a leaf dead end of the search tree. The backjumping algorithm by Gaschnig does a backjump only at leaf dead ends. In other words, it works differently from backtracking only when every possible value of xk+1 has been tested and found inconsistent without the need of branching over another variable.
For each value ak+1, the algorithm determines the smallest prefix of the current assignment that is inconsistent with xk+1 = ak+1. Since every variable can take more than one value, the maximal index that comes out of this check over all values is a safe jump. In practice, the algorithm can collect these indices at the time it checks the consistency of xk+1 = ak+1. This algorithm only backjumps when all values of a variable are inconsistent with the current partial solution without further branching; in other words, it allows a backjump only at leaf dead ends of the search tree. An internal node of the search tree represents an assignment of a variable that is consistent with the previous ones; if no solution extends this assignment, the algorithm always backtracks. Backjumping at internal nodes cannot be done as for leaf nodes: indeed, if some evaluations of xk+1 required branching, it is because they are consistent with the current assignment, so a different justification is needed for jumping there.
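The leaf-dead-end scheme above can be sketched in code. This is a minimal, illustrative reading of Gaschnig-style backjumping for a binary CSP, not Gaschnig's original pseudocode; `domains` and the `conflict` predicate are interfaces assumed here for the example:

```python
def gaschnig_solve(domains, conflict):
    """Backtracking with Gaschnig-style backjumping at leaf dead ends.

    domains: list of candidate-value lists, one per variable x_0..x_{n-1}.
    conflict(j, a, i, b): True if x_j=a, x_i=b violates a constraint (j < i).
    Returns a satisfying assignment (list) or None.
    """
    n = len(domains)

    def solve(assignment):
        k = len(assignment)
        if k == n:
            return list(assignment), None
        deepest = -1        # max over values of the minimal conflicting index
        branched = False    # did any value survive the prefix check?
        for v in domains[k]:
            bad = next((j for j, u in enumerate(assignment)
                        if conflict(j, u, k, v)), None)
            if bad is not None:
                deepest = max(deepest, bad)
                continue
            branched = True
            assignment.append(v)
            sol, target = solve(assignment)
            assignment.pop()
            if sol is not None:
                return sol, None
            if target < k:  # a deeper safe jump skips this variable entirely
                return None, target
        # leaf dead end: jump to `deepest`; internal dead end: plain backtrack
        return None, deepest if not branched else k - 1

    sol, _ = solve([])
    return sol
```

Run on 4-queens (with `conflict(j, a, i, b)` true when two queens attack), it finds a valid placement, while 3-queens correctly comes back unsatisfiable.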

2.
Backmarking
–
In constraint satisfaction, backmarking is a variant of the backtracking algorithm. Backmarking works like backtracking, iteratively evaluating variables in a fixed order. It improves over backtracking by maintaining two kinds of information: for every variable xi and value a, the outcome of the consistency checks done the last time xi was instantiated to a; and, for every variable xi, the index of the lowest variable that has changed value since the last time xi was evaluated. The second piece of information is changed every time another variable changes value: whenever a variable xj changes, all variables xi with i > j are considered in turn, and if k was their previously associated index, this value is changed to min(k, j). The data collected this way is used to avoid some consistency checks: whenever backtracking would set xi = a, two conditions allow determining partial consistency or inconsistency without rechecking the constraints. Contrary to other variants of backtracking, backmarking does not reduce the search space, but only possibly reduces the number of constraint checks performed.
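The bookkeeping above can be sketched as follows: a minimal backmarking solver for a binary CSP with a static variable order, where `mcl` records how far the last consistency checks for each value got and `mbl` records the lowest level changed since a variable was last tried. The interfaces are illustrative assumptions, not a published implementation:

```python
def backmark_solve(domains, conflict):
    """Backmarking sketch. domains: list of value lists for x_0..x_{n-1};
    conflict(j, a, i, b): True if x_j=a, x_i=b violates a constraint (j < i)."""
    n = len(domains)
    # mcl[i][vi]: level where checking x_i = domains[i][vi] last stopped
    # (i means "passed all levels"); initialized so the first visit checks all.
    mcl = [[i] * len(domains[i]) for i in range(n)]
    mbl = [0] * n     # lowest level changed since x_i was last tried
    assignment = [None] * n

    def solve(i):
        if i == n:
            return True
        for vi, v in enumerate(domains[i]):
            if mcl[i][vi] < mbl[i]:
                continue                # failed before against an unchanged level
            ok = True
            for j in range(mbl[i], i):  # levels below mbl[i] already verified
                if conflict(j, assignment[j], i, v):
                    mcl[i][vi] = j
                    ok = False
                    break
            if ok:
                mcl[i][vi] = i
                assignment[i] = v
                if solve(i + 1):
                    return True
        # backtracking from level i: x_{i-1} is about to change
        mbl[i] = i - 1
        for j in range(i + 1, n):
            mbl[j] = min(mbl[j], i - 1)
        return False

    return assignment if solve(0) else None
```

The `continue` branch is the saving: a value known to fail against an unchanged prefix is skipped with no constraint check at all, and rechecks start only at `mbl[i]`.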

3.
Barrier function
–
In constrained optimization, a barrier function is a continuous function whose value on a feasible point increases to infinity as the point approaches the boundary of the feasible region. Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle. The two most common types are inverse barrier functions and logarithmic barrier functions. Resumption of interest in logarithmic barrier functions was motivated by their connection with primal-dual interior point methods. Consider the following constrained optimization problem: minimize f(x) subject to x ≥ b, where b is some constant. If one wishes to remove the inequality constraint, the problem can be re-formulated as: minimize f(x) + c(x), where c(x) = ∞ if x < b and 0 otherwise. This problem is equivalent to the first; it gets rid of the inequality, but introduces the issue that the penalty function c is discontinuous, preventing the use of calculus-based methods. A barrier function, now, is a continuous approximation g to c that tends to infinity as x approaches b from above. Using such a function, a new problem is formulated: minimize f(x) + μ g(x). This problem is not equivalent to the original, but as μ approaches zero it becomes an ever-better approximation of it. For logarithmic barrier functions, g(x, b) is defined as −log(x − b) when x > b and ∞ otherwise; this essentially relies on the fact that log(t) tends to −∞ as t tends to 0 from above, so −log(t) tends to infinity. This introduces a gradient to the function being optimized which favors less extreme values of x, while having relatively low impact away from the boundary. Logarithmic barrier functions may be favored over less computationally expensive inverse barrier functions depending on the function being optimized. Extending to higher dimensions is simple, provided each dimension is independent: for each variable xi which should be limited to be strictly lower than bi, add −log(bi − xi).
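As a small numeric sketch (an illustration added here, not part of the original text): minimize f(x) = x² subject to x ≥ 1 with a logarithmic barrier, applying damped Newton steps to f(x) − μ log(x − b) for a decreasing sequence of μ values. The subproblem minimizers approach the constrained optimum x = 1 as μ shrinks:

```python
def barrier_newton(grad_f, hess_f, b, x0, mus=(1.0, 0.1, 0.01, 1e-3, 1e-4)):
    """Minimize f(x) subject to x > b via the log barrier f(x) - mu*log(x - b),
    solving each barrier subproblem with damped Newton steps."""
    x = x0
    for mu in mus:
        for _ in range(50):
            g = grad_f(x) - mu / (x - b)       # gradient of barrier objective
            h = hess_f(x) + mu / (x - b) ** 2  # its (positive) second derivative
            step = g / h
            while x - step <= b:               # damp to stay strictly feasible
                step *= 0.5
            x -= step
    return x
```

For f(x) = x², each subproblem has the closed-form minimizer (1 + √(1 + 2μ))/2, which tends to 1 from above as μ → 0; the iterate tracks this path from the feasible interior.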

4.
Constraint learning
–
In constraint satisfaction backtracking algorithms, constraint learning is a technique for improving efficiency. It works by recording new constraints whenever an inconsistency is found; such a new constraint may reduce the search space, as future partial evaluations may be found inconsistent without further search. Clause learning is the name of this technique when applied to propositional satisfiability. Backtracking algorithms work by choosing an unassigned variable and recursively solving the problems obtained by assigning a value to this variable; whenever the current partial solution is found inconsistent, the algorithm goes back to the previously assigned variable. A constraint learning algorithm differs because it tries to record some information, before backtracking, in the form of a new constraint. This can reduce the search because the subsequent search may encounter another partial solution that is inconsistent with this new constraint: if the algorithm has learned the new constraint, it will backtrack from this solution, while the original backtracking algorithm would do a subsequent search. If the partial solution x1 = a1, …, xk = ak is inconsistent, recording it as a constraint is not useful, as this exact partial solution will not be encountered again due to the way backtracking proceeds. What is useful is recording a smaller inconsistent subset of it, as such a subset may be encountered again: for example, the algorithm may encounter an evaluation extending the subset x2 = a2, x5 = a5, …, xk−1 = ak−1 of the previous partial evaluation. The efficiency of a constraint learning algorithm is balanced between two factors. On one hand, the more often a recorded constraint is violated, the more often learning pays off; small inconsistent subsets of the current partial solution are usually better than large ones. On the other hand, finding a small inconsistent subset of the current partial evaluation may require time, and the benefit may not be balanced by the subsequent reduction of the search time. Size is however not the only feature of learned constraints to take into account.
Indeed, a constraint may be useless in a particular state of the search because the values that violate it will not be encountered again; a larger constraint whose violating values are more similar to the current partial assignment may be preferred in such cases. Various constraint learning techniques exist, differing in the strictness of the recorded constraints and in the cost of finding them. Learning constraints representing these inconsistent partial evaluations is called graph-based learning. It uses the same rationale as graph-based backjumping; these methods are called graph-based because they are based on which pairs of variables are in the same constraint, which can be found out from the graph associated with the constraint satisfaction problem. Jumpback learning is based on storing as constraints the inconsistent assignments that would be found by conflict-based backjumping: whenever a partial assignment is found inconsistent, this algorithm selects the violated constraint that is minimal according to an ordering based on the order of instantiation of variables. The evaluation restricted to the variables that are in this constraint is inconsistent and is usually shorter than the complete evaluation; jumpback learning stores this fact as a new constraint. The ordering on constraints is based on the order of assignment of variables.
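One simple learning scheme of the kind described above can be sketched as follows (an illustration, not a specific published algorithm): at a leaf dead end, record as a nogood one culprit assignment per failed value, and prune any later partial solution that contains the whole nogood:

```python
def solve_with_learning(domains, conflict):
    """Backtracking with a simple form of nogood learning for a binary CSP.
    domains: list of value lists; conflict(j, a, i, b): True if x_j=a, x_i=b
    violates a constraint (j < i). Returns a dict solution or None."""
    n = len(domains)
    nogoods = []                        # each: frozenset of (var, value) pairs

    def solve(assignment):
        items = frozenset(assignment.items())
        if any(ng <= items for ng in nogoods):
            return None                 # a learned constraint is violated
        k = len(assignment)
        if k == n:
            return dict(assignment)
        leaf = True                     # stays True if every value conflicts
        culprits = set()
        for v in domains[k]:
            bad = next(((j, assignment[j]) for j in assignment
                        if conflict(j, assignment[j], k, v)), None)
            if bad is not None:
                culprits.add(bad)       # one assignment that rules out v
                continue
            leaf = False
            assignment[k] = v
            sol = solve(assignment)
            del assignment[k]
            if sol is not None:
                return sol
        if leaf and len(culprits) < k:  # a proper subset: worth recording
            nogoods.append(frozenset(culprits))
        return None

    return solve({})
```

The recorded set is sound because every value of x_k conflicts with one of its members, so no assignment containing all of them extends to x_k; recording it only when it is a proper subset of the current prefix reflects the remark above that the full prefix is never met again.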

5.
Constraint satisfaction dual problem
–
The dual problem is a reformulation of a constraint satisfaction problem expressing each constraint of the original problem as a variable. Dual problems only contain binary constraints, and are therefore solvable by algorithms tailored for such problems. The join graphs and join trees of a constraint satisfaction problem are graphs representing its dual problem or a problem obtained from the dual problem by removing some redundant constraints. The dual problem of a constraint satisfaction problem contains a variable for each constraint of the original problem; its domains and constraints are built so as to enforce a sort of equivalence to the original problem. In particular, the domain of a dual variable contains one element for each tuple satisfying the corresponding original constraint; this way, a dual variable can take a value if and only if the corresponding original constraint is satisfied by the corresponding tuple. The constraints of the dual problem forbid two dual variables to take values that correspond to two incompatible tuples. More generally, the constraints of the dual problem enforce the same values for all original variables shared by two constraints: if two dual variables correspond to constraints sharing some variables, the dual problem contains a constraint between them, enforcing equality of all shared variables. In the dual problem, all constraints are binary, and they all enforce two values, which are tuples, to agree on one or more original variables. The dual graph is a representation of how variables are constrained in the dual problem: more precisely, the dual graph contains a node for each dual variable and an edge for every constraint between them. In addition, the edge between two variables is labeled by the original variables that are enforced equal between these two dual variables. In the dual graph, some constraints may be unnecessary: indeed, dual constraints enforce equality of original variables, and some constraints may be redundant because of transitivity of equality.
For example, if C1 and C2 are joined by an edge whose label contains y, and so are C1 and C3, equality of y between C2 and C3 already follows by transitivity; as a result, a dual constraint between C2 and C3 enforcing equality of y is not necessary, and could be removed if present. A graph obtained from the dual graph by removing some redundant edges is called a join graph; if it is a tree, it is called a join tree. The dual problem can be solved from a join graph, since all removed edges are redundant; in turn, the problem can be solved efficiently if that join graph is a tree. An algorithm for finding a join tree, if any exists, proceeds as follows. In the first step, edges are assigned weights: if two nodes represent constraints that share n variables, the edge joining them is assigned weight n. In the second step, a maximal-weight spanning tree is searched for. Once one is found, it is checked whether it enforces the required equality of variables; if this is the case, this spanning tree is a join tree. Another method for finding out whether a constraint satisfaction problem has a join tree uses the primal graph of the problem.

6.
Decomposition method (constraint satisfaction)
–
In constraint satisfaction, a decomposition method translates a constraint satisfaction problem into another constraint satisfaction problem that is binary and acyclic. Decomposition methods work by grouping variables into sets and solving a subproblem for each set; these translations are done because solving binary acyclic problems is tractable. Each structural restriction defines a measure of complexity of solving the problem after conversion, called width; fixing a maximal allowed width is a way of identifying a subclass of constraint satisfaction problems. The new problem produced by a decomposition method only contains binary constraints, and their scopes form a directed acyclic graph. The variables of the new problem each represent a set of variables of the original one; these sets are not necessarily disjoint, but they cover the set of the original variables. The translation finds all partial solutions relative to each set of variables; the problem that results from the translation represents the interactions between these local solutions. By definition, a decomposition method produces a binary acyclic problem, and such problems can be solved in time polynomial in their size. The width of a method is a measure of the size of the problem it produces. Originally, the width was defined as the maximal cardinality of the sets of original variables; later methods use other measures. Either way, the width of a decomposition is defined so that decompositions of width bounded by a constant do not produce excessively large problems: instances having a decomposition of fixed width can be translated by decomposition into instances of size bounded by a polynomial in the size of the original instance. The width of a problem is the width of its minimal-width decomposition. While decompositions of fixed width can in principle be used to solve a problem efficiently, finding one is a problem in itself.
Indeed, a fixed-width problem has a decomposition of fixed width, but in order for a problem of fixed width to be efficiently solved by decomposition, one of its low-width decompositions has to be found efficiently. Decomposition methods create a problem that is easy to solve from an arbitrary one: the constraints of the new problem bind the values of two new variables to be tuples that agree on the shared original variables. Three further conditions ensure that the new problem is equivalent to the old one and efficiently solvable. In order for the new problem to be solvable efficiently, the primal graph of the new problem is required to be acyclic; in other words, viewing the new variables as vertices and the constraints as edges, the resulting graph must be a forest. In order for the new problem to be equivalent to the old one, each original constraint must be enforced as part of the definition of the domain of at least one new variable. A further condition that is necessary to ensure equivalence is that the constraints are sufficient to enforce all copies of each original variable to assume the same value, since the same original variable can be associated with several of the new variables.
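The payoff of the translation is that binary acyclic problems are easy: on a tree-structured binary CSP, one bottom-up pass of directional arc consistency followed by a top-down assignment pass either produces a solution or reports unsatisfiability. A sketch (the input format is an illustrative assumption):

```python
def solve_tree_csp(domains, edges, allowed):
    """Solve a binary CSP whose constraint graph is a tree.
    domains: dict node -> list of values (pruned in place);
    edges: (parent, child) pairs, listed root-first;
    allowed(p, a, c, b): True if parent p = a, child c = b is permitted."""
    # Bottom-up: remove parent values that no child value supports
    for p, c in reversed(edges):
        domains[p] = [a for a in domains[p]
                      if any(allowed(p, a, c, b) for b in domains[c])]
        if not domains[p]:
            return None            # wipe-out: the problem is unsatisfiable
    # Top-down: any remaining root value extends to a full solution
    sol = {edges[0][0]: domains[edges[0][0]][0]}
    for p, c in edges:
        sol[c] = next(b for b in domains[c] if allowed(p, sol[p], c, b))
    return sol
```

The bottom-up pruning guarantees that every surviving parent value has a compatible child value, which is why the top-down pass never needs to backtrack.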

7.
Difference-map algorithm
–
The difference-map algorithm is a search algorithm for general constraint satisfaction problems. It is a meta-algorithm in the sense that it is built from more basic algorithms that perform projections onto constraint sets. From a mathematical perspective, the difference-map algorithm is a dynamical system based on a mapping of Euclidean space; solutions are encoded as fixed points of the mapping. Since its applications include NP-complete problems, the scope of the difference map is that of an incomplete algorithm: whereas incomplete algorithms can efficiently verify solutions, they cannot prove that a solution does not exist. The difference-map algorithm is a generalization of two methods: Fienup's hybrid input-output algorithm for phase retrieval and the Douglas–Rachford algorithm for convex optimization. Iterative methods, in general, have a long history in phase retrieval; the use of this style of algorithm for hard, non-convex problems is a more recent development. The problem to be solved must first be formulated as a set intersection problem in Euclidean space: find a point common to two constraint sets A and B. Another prerequisite is an implementation of the projections PA and PB that, given an arbitrary point x, return the nearest points in A and B, respectively. At a fixed point, the two projections produce the same point, which is an element of A and of B at once; note that the fixed point x itself need not belong to either A or B. The set of fixed points will typically have much higher dimension than the set of solutions. The progress of the algorithm can be monitored by inspecting the norm of the difference of the two projections, Δ = |PA − PB|; when this vanishes, a point common to both constraint sets has been found and the algorithm can be terminated. Incomplete algorithms, such as stochastic local search, are widely used for finding satisfying truth assignments to boolean formulas. In the encoding of such a problem, several real variables, for example x11, x21 and x41, correspond to the same boolean variable or its negation, and it is convenient to associate the values 1 and -1 with TRUE and FALSE rather than the traditional 1 and 0.
In a satisfying assignment, the two real values in each row must take one of the allowed sign combinations; the corresponding constraint set, B, is thus a set of 3⁴ = 81 points. In projecting to this constraint, the following operation is applied to each row: first, the two real values are rounded to 1 or -1; then, if the outcome is disallowed, the larger of the two original values is replaced by 1. It is an exercise to check that both of the projection operations described minimize the Euclidean distance between input and output values. When the two projections agree, the iterate is unchanged and a fixed point has been reached; from PB we can read off the satisfying truth assignment: q1 = TRUE, q2 = FALSE, q3 = TRUE. In the simple 2-SAT example above, the norm of the difference-map increment Δ decreased monotonically to zero in three iterations.
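For a geometric feel of the iteration, here is an illustrative convex example (not the 2-SAT instance above): with A the horizontal axis and B a disk overlapping it, the β = 1 update x ← x + PA(2PB(x) − x) − PB(x) drives Δ to zero and PB(x) toward a point in A ∩ B:

```python
import math

def proj_line(p):                       # A: the horizontal axis y = 0
    return (p[0], 0.0)

def proj_disk(p, c=(2.0, 0.5), r=1.0):  # B: closed disk, center c, radius r
    dx, dy = p[0] - c[0], p[1] - c[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return p
    return (c[0] + r * dx / d, c[1] + r * dy / d)

def difference_map(x, iters=1000):
    """beta = 1 difference map: x <- x + P_A(2 P_B(x) - x) - P_B(x)."""
    for _ in range(iters):
        pb = proj_disk(x)
        reflect = (2 * pb[0] - x[0], 2 * pb[1] - x[1])
        pa = proj_line(reflect)
        x = (x[0] + pa[0] - pb[0], x[1] + pa[1] - pb[1])
    pb = proj_disk(x)
    pa = proj_line((2 * pb[0] - x[0], 2 * pb[1] - x[1]))
    delta = math.hypot(pa[0] - pb[0], pa[1] - pb[1])
    return pb, delta
```

With convex sets this is exactly Douglas–Rachford iteration, which converges; on non-convex sets, as in phase retrieval or SAT encodings, convergence is no longer guaranteed but the same update is used.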

8.
DPLL algorithm
–
Especially in older publications, the Davis–Logemann–Loveland algorithm is often referred to as the “Davis–Putnam method” or the “DP algorithm”; other common names that maintain the distinction are DLL and DPLL. After more than 50 years, the DPLL procedure still forms the basis for most efficient complete SAT solvers. It has recently been extended for automated theorem proving for fragments of first-order logic. The SAT problem is important both from theoretical and practical points of view; as such, it was and still is a hot research topic, and competitions between SAT solvers regularly take place. Modern DPLL-based implementations like Chaff, zChaff, GRASP, and MiniSat have placed at the top of these competitions in recent years. The basic backtracking step of DPLL chooses a literal, assigns a truth value to it, simplifies the formula accordingly, and recursively checks satisfiability of the simplified formula; if this fails, the opposite truth value is tried. This is known as the splitting rule, as it splits the problem into two simpler sub-problems. The simplification step essentially removes all clauses that become true under the assignment from the formula, and all literals that become false from the remaining clauses. DPLL improves over plain splitting with two rules. Unit propagation: if a clause contains a single unassigned literal, it can only be satisfied by making that literal true; in practice, this often leads to deterministic cascades of units, thus avoiding a large part of the naive search space. Pure literal elimination: if a propositional variable occurs with only one polarity in the formula, its literals are pure and can always be assigned in a way that makes all clauses containing them true; thus, these clauses do not constrain the search anymore and can be deleted. Unsatisfiability of a given partial assignment is detected if one clause becomes empty, i.e. if all its variables have been assigned in a way that makes the corresponding literals false. Satisfiability of the formula is detected either when all variables are assigned without generating the empty clause or, in modern implementations, when all clauses are satisfied; unsatisfiability of the complete formula can only be detected after exhaustive search. The simplification amounts to replacing every occurrence of l with true and every occurrence of not l with false in the formula Φ; the "or" in the return statement of the usual pseudocode is a short-circuiting operator.
Φ ∧ l denotes the simplified result of substituting true for l in Φ. The pseudocode DPLL function only returns whether the final assignment satisfies the formula or not; in a real implementation, the partial satisfying assignment typically is also returned on success. The Davis–Logemann–Loveland algorithm depends on the choice of the branching literal, which is the literal considered in the backtracking step. As a result, this is not exactly an algorithm, but rather a family of algorithms, one for each possible way of choosing the branching literal. Efficiency is strongly affected by this choice: there exist instances for which the running time is constant or exponential depending on the choice of the branching literals. Such choice functions are also called heuristic functions or branching heuristics. The algorithm Davis, Logemann and Loveland developed has these properties: it is based on systematic search; it is the basis for almost all modern SAT solvers; and it does not use learning, relying instead on chronological backtracking. Research directions since then include defining new data structures to make the algorithm faster, especially the part on unit propagation, and defining variants of the backtracking algorithm. The latter direction includes non-chronological backtracking and clause learning; these refinements describe a method of backtracking after reaching a conflict clause which learns the root causes of the conflict in order to avoid reaching the same conflict again.
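A minimal Python sketch of the procedure (DIMACS-style integer literals; the helper names are ours, not standard API): it applies unit propagation and pure literal elimination, then splits on the first literal of the first clause:

```python
def dpll(clauses):
    """Minimal DPLL sketch. clauses: list of lists of nonzero ints
    (3 means variable 3 true, -3 its negation).
    Returns a satisfying assignment (dict var -> bool) or None."""
    def simplify(clauses, lit):
        out = []
        for c in clauses:
            if lit in c:
                continue                       # clause satisfied: drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                    # empty clause: conflict
            out.append(reduced)
        return out

    def solve(clauses, assignment):
        while True:                            # unit propagation
            unit = next((c[0] for c in clauses if len(c) == 1), None)
            if unit is None:
                break
            assignment[abs(unit)] = unit > 0
            clauses = simplify(clauses, unit)
            if clauses is None:
                return None
        lits = {l for c in clauses for l in c}
        for p in [l for l in lits if -l not in lits]:  # pure literals
            assignment[abs(p)] = p > 0
            clauses = simplify(clauses, p)     # never conflicts on a pure literal
        if not clauses:
            return assignment
        lit = clauses[0][0]                    # branching literal (naive choice)
        for choice in (lit, -lit):             # the splitting rule
            reduced = simplify(clauses, choice)
            if reduced is not None:
                branch = dict(assignment)
                branch[abs(choice)] = choice > 0
                result = solve(reduced, branch)
                if result is not None:
                    return result
        return None

    return solve(clauses, {})
```

The naive "first literal of the first clause" branching stands in for the heuristic functions discussed above; swapping in a better heuristic changes only that one line.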

9.
Hidden transformation
–
The hidden transformation reformulates a constraint satisfaction problem in such a way that all constraints have at most two variables. The new problem is satisfiable if and only if the original problem was. There are a number of algorithms for constraint satisfaction that work only on constraints that have at most two variables; if a problem has constraints with a larger arity, conversion into a problem made of binary constraints allows for execution of these solving algorithms. Constraints with one, two, or more than two variables are called unary, binary, or higher-order constraints; the number of variables in a constraint is called its arity. The hidden transformation converts an arbitrary constraint satisfaction problem into a binary one; the transformation is similar to that generating the dual problem. The problem is given new variables, one for each constraint of the original problem; the domain of each such variable is the set of satisfying tuples of the corresponding constraint. The constraints of the new problem enforce the values of the original variables to be consistent with the values of the new variables: for a new variable corresponding to a constraint over original variables x and y, a first binary constraint enforces that the value of x coincides with the value the chosen tuple assigns to x, and a second condition enforces a similar condition for variable y. The graph representing the result of this transformation is bipartite, as all constraints are between a new and an old variable. Moreover, the constraints are functional: for any value of a new variable, the values of the linked original variables are uniquely determined.

10.
Interchangeability algorithm
–
In computer science, an interchangeability algorithm is a technique used to more efficiently solve constraint satisfaction problems (CSPs). If two variables A and B in a CSP may be swapped for each other without changing the nature of the problem or its solutions, then A and B are interchangeable variables. Interchangeable variables represent a symmetry of the CSP, and by exploiting that symmetry the search space can be reduced: for example, if solutions with A=1 and B=2 have been tried, then by interchange symmetry, solutions with B=1 and A=2 need not be investigated. The concept of interchangeability and the interchangeability algorithm in constraint satisfaction problems were first introduced by Eugene Freuder in 1991. The interchangeability algorithm reduces the search space of backtracking search algorithms, thereby improving the efficiency of solving NP-complete CSPs. Fully interchangeable: a value a for variable v is fully interchangeable with value b if and only if every solution in which v = a remains a solution when b is substituted for a, and vice versa. Fully substitutable: a value a for variable v is fully substitutable with value b if every solution in which v = a remains a solution when b is substituted for a (the one-directional variant of full interchangeability). The neighborhood interchangeability algorithm finds neighborhood interchangeable values in a CSP; the k-interchangeability algorithm finds k-interchangeable values, and can also be run for k steps as a preprocessor to simplify the subsequent backtrack search. The figure shows a graph coloring example; the available colors at each vertex are shown. The colors yellow, green, brown, red, blue, and pink at vertex Y are fully interchangeable by definition. For example, substituting maroon for green in the solution orange|X yields another solution. In computer science, the interchangeability algorithm has been extensively used in the fields of artificial intelligence, graph coloring problems, abstraction frameworks, and solution adaptation.
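Neighborhood interchangeability, the locally computable approximation the algorithm relies on, can be sketched as follows (a direct-comparison version of the idea behind Freuder's discrimination tree; the interfaces are illustrative): two values of v are grouped when they are compatible with exactly the same values of every neighboring variable:

```python
def neighborhood_interchangeable(v, domains, neighbors, conflict):
    """Partition domains[v] into groups of neighborhood interchangeable values.
    neighbors: dict var -> list of adjacent vars;
    conflict(x, a, y, b): True if x=a, y=b violates the constraint on (x, y)."""
    def signature(a):
        # which values of each neighbor are compatible with v = a
        return tuple(frozenset(b for b in domains[w]
                               if not conflict(v, a, w, b))
                     for w in neighbors[v])
    groups = {}
    for a in domains[v]:
        groups.setdefault(signature(a), []).append(a)
    return list(groups.values())
```

In a graph coloring instance where v's only neighbor is restricted to red, every non-red color of v is compatible with exactly the same neighbor values, so all non-red colors fall into one interchangeable group; search then need only try one representative per group.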

11.
Local consistency
–
In constraint satisfaction, local consistency conditions are properties of constraint satisfaction problems related to the consistency of subsets of variables or constraints. They can be used to reduce the search space and make the problem easier to solve. Various kinds of local consistency conditions are leveraged, including node consistency, arc consistency, and path consistency. Every local consistency condition can be enforced by a transformation that changes the problem without changing its solutions; such a transformation is called constraint propagation. Constraint propagation works by reducing domains of variables and strengthening constraints; this leads to a reduction of the search space, making the problem easier to solve by some algorithms. Constraint propagation can also be used as an unsatisfiability checker, incomplete in general but complete in some particular cases. Local consistency conditions can be grouped into various classes. The original local consistency conditions require that every consistent assignment can be consistently extended to another variable. Directional consistency only requires this condition to be satisfied when the other variable is higher than the ones in the assignment, according to a given order. Relational consistency includes extensions to more than one variable, but this extension is only required to satisfy a given constraint or set of constraints. In this article, a constraint satisfaction problem is defined as a set of variables, a set of domains, and a set of constraints. Variables and domains are associated: the domain of a variable contains all the values the variable can take. A constraint is composed of a sequence of variables, called its scope, and a set of their evaluations, namely the evaluations satisfying the constraint. The constraint satisfaction problems referred to in this article are assumed to be in a special form. A problem is in normalized form, respectively regular form, if every sequence of variables is the scope of at most one constraint, respectively exactly one constraint. The assumption of regularity made only for binary constraints leads to the standardized form.
In the figures used in this article, the lack of a link between two variables indicates that either no constraint or a constraint satisfied by all values exists between them. The standard local consistency conditions all require that all consistent partial evaluations can be extended to another variable in such a way that the resulting assignment is consistent; a partial evaluation is consistent if it satisfies all constraints whose scope is a subset of the assigned variables. Node consistency requires that every unary constraint on a variable is satisfied by all values in the domain of the variable. This condition can be trivially enforced by reducing the domain of each variable to the values that satisfy all unary constraints on that variable; as a result, unary constraints can be neglected and assumed incorporated into the domains. For example, given a variable V and a constraint V ≤ 3, node consistency would restrict the domain of V to its values at most 3, and the constraint could then be discarded. This pre-processing step simplifies later stages. A variable of a constraint satisfaction problem is arc-consistent with another one if each of its admissible values is consistent with some admissible value of the second variable. A problem is arc consistent if every variable is arc consistent with every other one.
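Arc consistency can be enforced by the classic AC-3 propagation loop; a sketch, with an illustrative encoding of binary constraints as directed-arc predicates:

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency (AC-3 sketch). domains: dict var -> set of
    values (pruned in place); constraints: dict (x, y) -> allowed(a, b),
    with both directions of every constraint present.
    Returns False if some domain becomes empty (problem unsatisfiable)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        revised = False
        for a in list(domains[x]):
            if not any(allowed(a, b) for b in domains[y]):
                domains[x].discard(a)   # a has no support in y: remove it
                revised = True
        if revised:
            if not domains[x]:
                return False
            # x changed: recheck every arc pointing at x
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True
```

On domains {1, 2, 3} for X and Y with the constraint X < Y, propagation removes 3 from X (no larger Y value supports it) and 1 from Y, exactly the domain reduction described above.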

12.
Local search (constraint satisfaction)
–
In constraint satisfaction, local search is an incomplete method for finding a solution to a problem. It is based on iteratively improving an assignment of the variables until all constraints are satisfied. In particular, local search algorithms typically modify the value of one variable in the assignment at each step; the new assignment is close to the previous one in the space of assignments, hence the name local search. All local search algorithms use a function that evaluates the quality of an assignment, typically the number of violated constraints; this amount is called the cost of the assignment. The aim of local search is to find an assignment of minimal cost. Two classes of local search algorithms exist. The first one is that of greedy or non-randomized algorithms; these proceed by changing the current assignment while always trying to decrease its cost. The main problem of these algorithms is the possible presence of plateaus, regions of the assignment space where no local move decreases cost. The second class of local search algorithms has been invented to solve this problem: they escape these plateaus by doing random moves, and are called randomized local search algorithms. The most basic form of local search is based on choosing the change that maximally decreases the cost of the solution. This method, called hill climbing, proceeds as follows: first, a random assignment is chosen; then, at each step, the value of a variable is changed so as to maximally decrease cost; if no solution has been found after a given number of changes, the search is typically restarted from a new random assignment. Hill climbing algorithms can only escape a plateau by doing changes that do not change the quality of the assignment; as a result, they can be stuck in a plateau where the quality of assignment has a local maximum. GSAT was the first local search algorithm for satisfiability, and is a form of hill climbing. A method for escaping from a local minimum is that of using a weighted sum of violated constraints as the measure of cost: if no change reduces the cost of the assignment, the weights of the currently violated constraints are increased. This way, every move that would not otherwise change the cost of the solution decreases it.
Moreover, the weight of constraints that remain violated for a number of moves keeps increasing; therefore, during a sequence of moves not satisfying a constraint, the incentive to satisfy that constraint keeps growing. A drawback of hill climbing with moves that do not decrease cost is that it may cycle over assignments of the same cost. Tabu search overcomes this problem by maintaining a list of forbidden assignments; in particular, the tabu list typically contains only the most recent changes, which may not be undone for a while.
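A randomized local search of the kind described above can be sketched with the classic min-conflicts heuristic (one common member of this family, not the constraint-weighting scheme itself): repeatedly pick a conflicted variable at random and move it to a value minimizing its conflict count, breaking ties randomly:

```python
import random

def min_conflicts(domains, conflict, max_steps=20000, seed=0):
    """Incomplete local search for a binary CSP.
    domains: list of value lists; conflict(i, a, j, b): True if x_i=a, x_j=b
    violates a constraint (i < j). Returns an assignment or None on timeout."""
    rng = random.Random(seed)
    n = len(domains)
    assign = [rng.choice(domains[i]) for i in range(n)]

    def conflicts(i, v):
        total = 0
        for j in range(n):
            if j == i:
                continue
            lo, a, hi, b = ((i, v, j, assign[j]) if i < j
                            else (j, assign[j], i, v))
            total += conflict(lo, a, hi, b)
        return total

    for _ in range(max_steps):
        conflicted = [i for i in range(n) if conflicts(i, assign[i]) > 0]
        if not conflicted:
            return assign               # cost zero: all constraints satisfied
        i = rng.choice(conflicted)
        best = min(conflicts(i, v) for v in domains[i])
        assign[i] = rng.choice([v for v in domains[i]
                                if conflicts(i, v) == best])
    return None
```

Being incomplete, it may return None on a satisfiable instance (hence the step bound and, in practice, restarts); on n-queens, a standard benchmark for this heuristic, it usually succeeds in few steps.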

13.
Look-ahead (backtracking)
–
In backtracking algorithms, look ahead is the generic term for a subprocedure that attempts to foresee the effects of choosing a branching variable to evaluate or one of its values. The two main aims of look-ahead are to choose the variable to evaluate next and the order of values to assign to it. In a general constraint satisfaction problem, every variable can take a value in a domain; a backtracking algorithm therefore iteratively chooses a variable and tests each of its possible values. Look ahead is used to check the effects of choosing a given variable to evaluate or to decide the order of values to give to it. The simplest technique for evaluating the effect of a specific assignment to a variable is called forward checking: given the current partial solution and a candidate assignment to evaluate, it checks whether every unassigned variable can still take a consistent value. More generally, forward checking determines, for each unassigned variable xk, the values that are consistent with the extended assignment. A look-ahead technique that may be more time-consuming but may produce better results is based on arc consistency: given the partial solution extended with a value for a new variable, it enforces arc consistency among the unassigned variables; in other words, for any unassigned variable, the values that cannot consistently be extended to another variable are removed. Two other methods involving arc consistency are full and partial look ahead; they enforce arc consistency, but not for every pair of variables. In particular, full look ahead considers every pair of unassigned variables xi, xj and enforces arc consistency between them once. This is different from enforcing global arc consistency, which may require a pair of variables to be reconsidered more than once; instead, once full look ahead has enforced arc consistency between a pair of variables, the pair is not considered any more. Partial look ahead is similar, but a fixed order of variables is considered, and arc consistency is enforced only following that order.
Look ahead based on arc consistency can also be extended to work with path consistency. The results of look ahead are used to decide the next variable to evaluate and the order of values to give to this variable: for any unassigned variable and value, look-ahead estimates the effects of setting that variable to that value. The choice of the next variable to evaluate is particularly important, as it may produce exponential differences in running time. In order to prove unsatisfiability as quickly as possible, variables leaving few alternatives after being assigned are the preferred ones. This idea can be implemented by checking only satisfiability or unsatisfiability of variable/value pairs: the variable that is chosen is the one having a minimal number of values that are consistent with the current partial solution. In turn, consistency can be evaluated by simply checking partial consistency. Experiments proved that these techniques are useful for large problems, especially the min-conflicts one. Randomization is also sometimes used for choosing a variable or value; for example, if two variables are equally preferred according to some measure, the choice can be made randomly.
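Forward checking itself is short; this sketch prunes each unassigned variable's domain after a tentative assignment and reports a domain wipe-out (the interfaces are illustrative, matching the integer-variable convention used here):

```python
def forward_check(var, value, assignment, domains, conflict):
    """After tentatively setting var = value, prune the domains of the
    unassigned variables. conflict(i, a, j, b): True if x_i=a, x_j=b
    violates a constraint (i < j). Returns the pruned domains, or None
    if some domain is wiped out (so the value cannot lead to a solution)."""
    new_domains = {}
    for x, dom in domains.items():
        if x in assignment or x == var:
            new_domains[x] = list(dom)
            continue
        if var < x:
            keep = [b for b in dom if not conflict(var, value, x, b)]
        else:
            keep = [b for b in dom if not conflict(x, b, var, value)]
        if not keep:
            return None            # domain wipe-out: reject this value
        new_domains[x] = keep
    return new_domains
```

The sizes of the pruned domains are exactly the quantities the variable- and value-ordering heuristics above inspect: a value causing a wipe-out is discarded, and a variable left with few consistent values is a good next branching choice.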

14.
Ordered graph
–
An ordered graph is a graph with a total order over its nodes. In an ordered graph, the parents of a node are the nodes that are joined to it and precede it in the ordering; more precisely, n is a parent of m in the ordered graph ⟨N, E, <⟩ if (n, m) ∈ E and n < m. The width of a node is the number of its parents, and the width of an ordered graph is the maximal width of its nodes. The induced graph of an ordered graph is obtained by adding some edges to the ordered graph, using the method outlined below; the induced width of an ordered graph is the width of its induced graph. Given an ordered graph, its induced graph is another ordered graph obtained by joining some pairs of nodes that are both parents of another node. In particular, nodes are considered in turn according to the ordering, from last to first; for each node, if two of its parents are not joined by an edge, that edge is added. In other words, when considering node n, if m and l are parents of it and are not joined by an edge, the edge between m and l is added to the graph. As a result, the parents of every node end up connected with each other. As an example, the induced graph of an ordered graph is calculated; the ordering is represented by the position of the nodes in the figures. The first node considered is a: its parents are b and c, as they are both joined to a and both precede a in the ordering. Since they are not joined by an edge, one is added. The second node considered is b: while this node only has d as a parent in the original graph, it also has c as a parent in the partially built induced graph. Indeed, c is joined to b and also precedes b in the ordering. As a result, an edge joining c and d is added. Considering d does not produce any change, as this node has no parents. Processing nodes in order matters, as the introduced edges may create new parents, which are then relevant to the introduction of new edges. The following example shows that a different ordering produces a different induced graph of the original graph. The ordering is the same as above, but b and c are swapped. As in the previous case, both b and c are parents of a.
Therefore, an edge between them is added. According to the new order, the second node considered is c. This node has only one parent, so no new edge is added. The third node considered is b.
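The induced-graph construction is mechanical enough to sketch directly; here edges are frozensets of endpoints, nodes are listed in increasing order, and processing runs from last to first as described above:

```python
def induced_graph(nodes, edges):
    """Return the edge set of the induced graph of an ordered graph.
    nodes: list in increasing order; edges: iterable of frozensets.
    Processes nodes last to first, joining every pair of current parents."""
    edges = set(edges)
    for pos in range(len(nodes) - 1, -1, -1):
        node = nodes[pos]
        parents = [m for m in nodes[:pos] if frozenset((m, node)) in edges]
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                edges.add(frozenset((parents[i], parents[j])))
    return edges

def induced_width(nodes, edges):
    """Width of the induced graph: the maximal number of parents of a node."""
    ind = induced_graph(nodes, edges)
    return max((sum(1 for m in nodes[:i] if frozenset((m, n)) in ind)
                for i, n in enumerate(nodes)), default=0)
```

On an ordering consistent with the first example in the text (d, c, b, a with edges a–b, a–c, b–d), processing a adds the edge b–c and processing b then adds c–d, for an induced width of 2.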