1. Backjumping
In backtracking algorithms, backjumping is a technique that reduces the search space, therefore increasing efficiency. While backtracking always goes up exactly one level in the search tree when all values for a variable have been tested, backjumping may go up several levels at once. In this article, a fixed order of evaluation of the variables x1, …, xn is used. When backtracking has exhausted the values of xk+1, it goes up to xk, changing xk's value if possible. However, the partial assignment x1, …, xk is not always necessary in full to prove that no value of xk+1 leads to a solution; often a shorter prefix x1, …, xj with j < k already suffices. If the algorithm can prove this fact, it can directly reconsider a different value for xj instead of reconsidering xk as it would normally do. If this is the case, j is called a safe jump, and the efficiency of a backjumping algorithm depends on how high it is able to backjump, that is, on how low a safe index j it can find. Establishing whether a jump is safe is not always feasible, as safe jumps are defined in terms of the set of solutions; in practice, backjumping algorithms use the lowest index they can efficiently prove to be a safe jump. Different algorithms use different methods for determining whether a jump is safe, and these methods have different costs; however, a higher cost of finding a farther safe jump may be traded off against the reduced amount of search due to skipping parts of the search tree.

The simplest condition in which backjumping is possible is when all values of a variable have been proved inconsistent without further branching. In constraint satisfaction, a partial evaluation is consistent if and only if it satisfies all constraints involving only the assigned variables, and inconsistent otherwise. The condition in which all values of a variable xk+1 are inconsistent with the current partial solution x1 = a1, …, xk = ak is called a leaf dead end; this happens exactly when xk+1 is a leaf of the search tree. The backjumping algorithm by Gaschnig does a backjump only at leaf dead ends. In other words, it works differently from backtracking only when every possible value of xk+1 has been tested and resulted inconsistent without the need of branching over another variable.
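The notion of consistency of a partial evaluation can be made concrete with a small sketch. The representation below (constraints as scope/predicate pairs) is a hypothetical choice for illustration, not taken from any particular solver: a constraint is checked only when all variables in its scope are assigned.

```python
def is_consistent(partial, constraints):
    """A partial evaluation is consistent iff it satisfies every constraint
    whose variables are all assigned; constraints whose scope mentions an
    unassigned variable are skipped (minimal sketch, hypothetical layout)."""
    return all(pred(*(partial[v] for v in scope))
               for scope, pred in constraints
               if all(v in partial for v in scope))

# Two toy binary constraints: x1 != x2 and x2 < x3.
constraints = [(("x1", "x2"), lambda a, b: a != b),
               (("x2", "x3"), lambda a, b: a < b)]

# x1 = 1, x2 = 1 violates x1 != x2; the x2 < x3 constraint is skipped
# because x3 is still unassigned.
print(is_consistent({"x1": 1, "x2": 1}, constraints))  # False
print(is_consistent({"x1": 1, "x2": 2}, constraints))  # True
```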
For each value of xk+1, the algorithm can record the smallest prefix of the current assignment that is inconsistent with it. Since every variable can take more than one value, the maximal index that comes out of this check, over all values, is a safe jump. In practice, the algorithm can collect these indices at the same time it is checking the consistency of xk+1 = ak+1. The previous algorithm only backjumps when all values of a variable are inconsistent with the current partial solution without further branching. In other words, it allows for a backjump only at leaf nodes of the search tree. An internal node of the search tree represents an assignment of a variable that is consistent with the previous ones; if no solution extends this assignment, the algorithm always backtracks. Backjumping at internal nodes cannot be done as for leaf nodes: indeed, if some evaluations of xk+1 required branching, it is because they are consistent with the current assignment, so the inconsistency only arises deeper in the tree.
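The leaf-dead-end rule of Gaschnig's algorithm described above can be sketched as follows. This is a minimal illustration, assuming an all-different constraint between every pair of variables (a hypothetical toy CSP); the function names are not from any standard library.

```python
def consistent(assignment, i, k):
    """Hypothetical binary constraint between variables i and k:
    here, an all-different constraint."""
    return assignment[i] != assignment[k]

def gaschnig_jump_target(assignment, k, domain):
    """If every value of x_k is inconsistent with the current prefix
    (a leaf dead end), return the safe-jump index: the maximum, over
    values, of the earliest prefix index that rules each value out.
    Return None if some value is consistent (no dead end)."""
    jump = -1
    for value in domain:
        assignment[k] = value
        # find the earliest assigned variable inconsistent with this value
        culprit = next((i for i in range(k)
                        if not consistent(assignment, i, k)), None)
        if culprit is None:
            return None          # value is consistent: no leaf dead end
        jump = max(jump, culprit)
    return jump

# x0 = 1 and x1 = 2 are assigned; every value of x2 in {1, 2} conflicts:
# value 1 with x0 (index 0), value 2 with x1 (index 1), so the jump is to 1.
print(gaschnig_jump_target([1, 2, None], 2, [1, 2]))  # 1
```

Jumping any lower than the returned index would skip a variable whose value was needed to rule out at least one value of xk+1, so the maximum of the per-value indices is the lowest index that is provably safe this way.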

2. Backmarking
In constraint satisfaction, backmarking is a variant of the backtracking algorithm. Backmarking works like backtracking, iteratively evaluating variables in a fixed order. It improves over backtracking by maintaining two pieces of information: the last time each variable xi was instantiated to a value, and, for each variable, the index of the lowest variable that has changed since then. The second piece of information is updated every time another variable is evaluated. In particular, the index of the maximal unchanged variable since the last evaluation of xi is updated every time another variable xj changes value: all variables xi with i > j are considered in turn, and if k was their previously associated index, this value is changed to min(k, j). The data collected this way is used to avoid some consistency checks: whenever backtracking would set xi = a, two conditions allow determining partial consistency or inconsistency without rechecking the constraints. Contrary to other variants of backtracking, backmarking does not reduce the search space but only possibly reduces the number of constraint checks performed on partial solutions.
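The bookkeeping and the two skip conditions can be sketched as below. The table names (`mcl`, `mbl`) and their layout are hypothetical choices for illustration; the text above only fixes what they record, not how they are stored.

```python
def needs_check(i, a, mcl, mbl):
    """Decide which consistency checks are needed before trying x_i = a
    (minimal sketch of backmarking's two conditions):
      mcl[i][a] : deepest variable index that value a of x_i was checked
                  against the last time it was tried
      mbl[i]    : lowest index of a variable changed since x_i was tried
    Returns None if a is still inconsistent (skip entirely), otherwise
    the index from which constraints must be rechecked."""
    if mcl[i][a] < mbl[i]:
        # a previously failed against a variable that has not changed:
        # the assignment is still inconsistent, no checks needed at all
        return None
    # checks against x_0 .. x_{mbl[i]-1} succeeded before and those
    # variables are unchanged, so only recheck from index mbl[i] upward
    return mbl[i]

def on_change(j, mbl):
    """Update rule from the text: when x_j changes value, every deeper
    variable x_i with i > j gets mbl[i] = min(mbl[i], j)."""
    for i in mbl:
        if i > j:
            mbl[i] = min(mbl[i], j)

# x2's value 1 last failed against x0; value 2 was checked up to x1.
mcl = {2: {1: 0, 2: 1}}
mbl = {2: 1}                        # nothing below x1 changed since
print(needs_check(2, 1, mcl, mbl))  # None: still inconsistent, skip
print(needs_check(2, 2, mcl, mbl))  # 1: recheck only from x1 upward
```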

3. Barrier function
In constrained optimization, a barrier function is a continuous function whose value increases to infinity as its argument approaches the boundary of the feasible region. Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle. The two most common types of barrier functions are inverse barrier functions and logarithmic barrier functions. Resumption of interest in logarithmic barrier functions was motivated by their connection with primal-dual interior point methods.

Consider the following constrained optimization problem: minimize f(x) subject to x ≥ b, where b is some constant. If one wishes to remove the inequality constraint, the problem can be reformulated as: minimize f(x) + c(x), where c(x) = ∞ if x < b and c(x) = 0 otherwise. This problem is equivalent to the first. It gets rid of the inequality, but introduces the issue that the penalty function c is discontinuous. A barrier function, now, is a continuous approximation g to c that tends to infinity as x approaches b from above. Using such a function, a new problem is formulated: minimize f(x) + μ g(x). This problem is not equivalent to the original, but as μ approaches zero it becomes an ever-better approximation of it. For logarithmic barrier functions, g(x) is defined as −log(x − b) when x > b and ∞ otherwise. This essentially relies on the fact that log(t) tends to −∞ as t tends to 0. The barrier introduces a gradient to the function being optimized which favors less extreme values of x (here, values farther from b), while having relatively low impact away from these extremes. Logarithmic barrier functions may be favored over less computationally expensive inverse barrier functions depending on the function being optimized. Extending to higher dimensions is simple, provided each dimension is independent: for each variable xi which should be limited to be strictly lower than bi, add −log(bi − xi).
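A toy instance of the logarithmic-barrier reformulation makes the role of μ concrete. The specific objective below (f(x) = x², b = 1) is an assumption chosen so the barrier problem has a closed-form minimizer: setting the derivative of x² − μ log(x − 1), namely 2x − μ/(x − 1), to zero gives x = (1 + √(1 + 2μ))/2.

```python
import math

def barrier_minimizer(mu):
    """Minimizer of the barrier problem  x**2 - mu*log(x - 1)  for the
    toy instance f(x) = x**2, b = 1 (closed form from the first-order
    condition 2x - mu/(x - 1) = 0)."""
    return (1 + math.sqrt(1 + 2 * mu)) / 2

# As mu shrinks, the barrier minimizer approaches the constrained
# optimum x = 1 of "minimize x**2 subject to x >= 1", from above.
for mu in (1.0, 0.1, 0.01, 0.001):
    print(mu, barrier_minimizer(mu))
```

Each iterate stays strictly inside the feasible region x > 1, which is exactly the interior-point behavior the barrier is meant to enforce.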