Move-to-front transform

The move-to-front (MTF) transform is an encoding of data (typically a stream of bytes) designed to improve the performance of entropy encoding techniques of compression. When efficiently implemented, it is fast enough that its benefits usually justify including it as an extra step in data compression algorithms.
The transform
The main idea is that each symbol in the data is replaced by its index in a stack of "recently used symbols". For example, a long sequence of identical symbols is replaced by as many zeroes, whereas a symbol that has not appeared for a long time is replaced by a large number. The data is thus transformed into a sequence of integers; if the data exhibits many local correlations, these integers tend to be small.
This algorithm was published in a paper by Ryabko.^{[1]} The original name of this code is “book stack”.^{[2]}
Let us give a precise description. Assume for simplicity that the symbols in the data are bytes. Each byte value is encoded by its index in a list of bytes, which changes over the course of the algorithm. The list is initially in order by byte value (0, 1, 2, 3, ..., 255). Therefore, the first byte is always encoded by its own value. However, after encoding a byte, that value is moved to the front of the list before continuing to the next byte.
An example will shed some light on how the transform works. Imagine instead of bytes, we are encoding values in a–z. We wish to transform the following sequence:
bananaaa
By convention, the list is initially (abcdefghijklmnopqrstuvwxyz). The first letter in the sequence is b, which appears at index 1 (the list is indexed from 0 to 25). We emit a 1 to the output stream:
1
The b moves to the front of the list, producing (bacdefghijklmnopqrstuvwxyz). The next letter is a, which now appears at index 1. So we add a 1 to the output stream. We have:
1,1
and we move the letter a back to the front of the list. Continuing this way, we find that the sequence is encoded by:
1,1,13,1,1,1,0,0
Iteration  Sequence          List
bananaaa   1                 (abcdefghijklmnopqrstuvwxyz)
bananaaa   1,1               (bacdefghijklmnopqrstuvwxyz)
bananaaa   1,1,13            (abcdefghijklmnopqrstuvwxyz)
bananaaa   1,1,13,1          (nabcdefghijklmopqrstuvwxyz)
bananaaa   1,1,13,1,1        (anbcdefghijklmopqrstuvwxyz)
bananaaa   1,1,13,1,1,1      (nabcdefghijklmopqrstuvwxyz)
bananaaa   1,1,13,1,1,1,0    (anbcdefghijklmopqrstuvwxyz)
bananaaa   1,1,13,1,1,1,0,0  (anbcdefghijklmopqrstuvwxyz)
Final      1,1,13,1,1,1,0,0  (anbcdefghijklmopqrstuvwxyz)
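The steps in the table can be reproduced with a short, self-contained sketch (the function name and `alphabet` parameter are illustrative, not part of any standard library):

```python
import string

def mtf_encode(text, alphabet):
    # Work on a copy so the caller's alphabet is not modified.
    symbols = list(alphabet)
    output = []
    for c in text:
        rank = symbols.index(c)  # position of c in the current list
        output.append(rank)
        symbols.pop(rank)        # move c to the front of the list
        symbols.insert(0, c)
    return output

print(mtf_encode("bananaaa", string.ascii_lowercase))
# [1, 1, 13, 1, 1, 1, 0, 0]
```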
It is easy to see that the transform is reversible: maintain the same list and decode by replacing each index in the encoded stream with the letter at that index in the list. Note the difference from the encoding method: the index is used to look up the letter directly, rather than searching the list for a letter's index.
That is, start again with (abcdefghijklmnopqrstuvwxyz). Take the "1" of the encoded stream and look it up in the list, which yields "b"; move "b" to the front, giving (bacdef...). Then take the next "1", which yields "a"; move "a" to the front, and so on.
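The decoding procedure just described can be sketched as follows (a self-contained illustration; the names are my own):

```python
import string

def mtf_decode(ranks, alphabet):
    symbols = list(alphabet)
    output = []
    for rank in ranks:
        c = symbols[rank]    # the index selects the symbol directly
        output.append(c)
        symbols.pop(rank)    # same move-to-front update as the encoder
        symbols.insert(0, c)
    return "".join(output)

print(mtf_decode([1, 1, 13, 1, 1, 1, 0, 0], string.ascii_lowercase))
# bananaaa
```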
Implementation
Details of implementation are important for performance, particularly for decoding. For encoding, no clear advantage is gained by using a linked list, so using an array to store the list is acceptable, with worst-case performance O(nk), where n is the length of the data to be encoded and k is the number of values (generally a constant for a given implementation).
However, for decoding, specialized data structures can be used to greatly improve performance.
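For byte data (k = 256), one simple constant-factor improvement is to keep the list in a contiguous `bytearray` and perform the move-to-front with a slice shift. This is a sketch under that assumption, not an asymptotically faster data structure:

```python
def mtf_decode_bytes(ranks):
    # Decoder specialized to byte alphabets: the list is a bytearray of
    # all 256 values, and the move-to-front is a contiguous slice shift.
    symbols = bytearray(range(256))
    out = bytearray()
    for r in ranks:
        b = symbols[r]
        out.append(b)
        # shift symbols[0:r] right by one and put b at the front
        symbols[1:r + 1] = symbols[0:r]
        symbols[0] = b
    return bytes(out)
```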
Python
This is a possible implementation of the move-to-front algorithm in Python. Note that it builds its dictionary from the distinct characters of the input rather than a fixed alphabet, so the ranks it produces can differ from those in the example above.
def MtF(plain_text):
    # Initialise the list of characters (i.e. the dictionary)
    dictionary = sorted(set(plain_text))
    # Keep a copy of the initial dictionary for the decoder
    initial_dictionary = list(dictionary)
    # Transformation
    compressed_text = []
    # read in each character
    for c in plain_text:
        rank = dictionary.index(c)     # find the rank of the character in the dictionary
        compressed_text.append(rank)   # update the encoded text
        # update the dictionary: move the character to the front
        dictionary.pop(rank)
        dictionary.insert(0, c)
    return compressed_text, initial_dictionary  # the encoded text and the initial dictionary
The inverse will recover the original text:
def iMtF(compressed_data):
    compressed_text = compressed_data[0]
    # Copy the dictionary so the caller's list is not modified
    dictionary = list(compressed_data[1])
    plain_text = ""
    # read in each rank of the encoded text
    for i in compressed_text:
        rank = int(i)
        # the rank indexes the dictionary directly
        plain_text += dictionary[rank]
        # update the dictionary: move the decoded character to the front
        e = dictionary.pop(rank)
        dictionary.insert(0, e)
    return plain_text  # Return the original string
Use in practical data compression algorithms
The MTF transform takes advantage of local correlation of frequencies to reduce the entropy of a message: recently used letters stay towards the front of the list, so if the use of letters exhibits local correlation, the output will contain a large number of small integers such as 0s and 1s.
However, not all data exhibits this type of local correlation, and for some messages, the MTF transform may actually increase the entropy.
An important use of the MTF transform is in Burrows–Wheeler transform based compression. The Burrows–Wheeler transform is very good at producing a sequence that exhibits local frequency correlation from text and certain other special classes of data. Compression benefits greatly from following up the Burrows–Wheeler transform with an MTF transform before the final entropy-encoding step.
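The pipeline can be illustrated with a naive sketch (a rotation-sorting BWT, not a production suffix-array implementation; the sentinel handling and function names are my own, and the input is assumed not to contain the sentinel byte):

```python
def bwt(s):
    # Naive Burrows-Wheeler transform: sort all rotations of s plus a
    # sentinel and keep the last column.  O(n^2 log n); real
    # implementations use suffix arrays.  Assumes "\0" is not in s.
    s = s + "\0"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def mtf(text):
    # Plain move-to-front over the distinct characters of the input.
    symbols = sorted(set(text))
    out = []
    for c in text:
        rank = symbols.index(c)
        out.append(rank)
        symbols.pop(rank)
        symbols.insert(0, c)
    return out

data = "abracadabraabracadabra"
print(mtf(data))       # MTF alone
print(mtf(bwt(data)))  # BWT first tends to group like characters, so
                       # the MTF output contains more small values
```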
Example
As an example, imagine we wish to compress Hamlet's soliloquy (To be, or not to be...). We can calculate the entropy of this message to be 7033 bits. Naively, we might try to apply the MTF transform directly. The result is a message with 7807 bits of entropy (higher than the original). The reason is that English text does not in general exhibit a high level of local frequency correlation. However, if we first apply the Burrows–Wheeler transform, and then the MTF transform, we get a message with 6187 bits of entropy. Note that the Burrows–Wheeler transform does not decrease the entropy of the message; it only reorders the bytes in a way that makes the MTF transform more effective.
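The bit counts quoted here are order-0 (frequency-based) entropies of the whole message. A small helper along these lines (an illustrative sketch, not code from any cited reference) computes such a figure:

```python
import math
from collections import Counter

def message_entropy_bits(text):
    # Order-0 entropy of the message: sum over symbols of
    # count * -log2(count / n), the ideal coded size in bits for an
    # entropy coder driven by these symbol frequencies.
    n = len(text)
    return sum(c * -math.log2(c / n) for c in Counter(text).values())

# "bananaaa" has counts a=5, n=2, b=1 over n=8 symbols.
print(message_entropy_bits("bananaaa"))
```

Note that a permutation of the message, such as the Burrows–Wheeler transform, leaves this figure unchanged, which is why BWT "does not decrease the entropy" on its own.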
One problem with the basic MTF transform is that it makes the same changes for any character, regardless of frequency, which can diminish compression because rarely occurring characters may push frequent characters to higher values. Various alterations and alternatives have been developed for this reason. One common change is to restrict how far a character can move: characters beyond a certain position are moved only up to a threshold position rather than all the way to the front. Another is to maintain a running count of each character's local frequency and use these counts to choose the characters' order at any point. Many of these transforms still reserve zero for repeated characters, since these are often the most common in data after the Burrows–Wheeler transform.
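One way such a threshold rule might look (a hypothetical sketch; the function name, `threshold` parameter, and its default are my own, and a matching decoder must apply the identical rule):

```python
import string

def mtf_threshold_encode(text, alphabet, threshold=4):
    # Variant sketch: a symbol found at or beyond `threshold` is moved
    # only to position `threshold`, not all the way to the front, so a
    # rare symbol cannot immediately evict the most frequent ones.
    symbols = list(alphabet)
    out = []
    for c in text:
        rank = symbols.index(c)
        out.append(rank)
        symbols.pop(rank)
        symbols.insert(0 if rank < threshold else threshold, c)
    return out

# A rare symbol settles at the threshold instead of the front:
print(mtf_threshold_encode("zz", string.ascii_lowercase))  # [25, 4]
```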
Move-to-front linked list
The term move to front (MTF) is also used in a slightly different context, as a type of dynamic linked list. In an MTF list, each element is moved to the front when it is accessed.^{[3]} This ensures that, over time, the more frequently accessed elements are easier to access. It can be proved that the time it takes to serve a sequence of accesses with an MTF list is within a constant factor of the time taken by any other list ordering, including one chosen with advance knowledge of the access sequence.^{[4]}
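A minimal sketch of such a self-organizing list (the class and method names are illustrative):

```python
class MTFList:
    """Self-organizing list sketch: accessed elements move to the front."""

    def __init__(self, items):
        self._items = list(items)

    def access(self, value):
        # Linear scan from the head; the cost is the element's current
        # position, so frequently accessed elements become cheap.
        i = self._items.index(value)
        self._items.pop(i)
        self._items.insert(0, value)
        return i  # position at which the element was found

lst = MTFList("abcde")
print(lst.access("d"))  # 3 -- first access walks most of the list
print(lst.access("d"))  # 0 -- now at the front
```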
References
 ^ Ryabko, B. Ya. (1980). "Data compression by means of a 'book stack'". Problems of Information Transmission. 16 (4): 265–269.
 ^ Ryabko, B. Ya.; Horspool, R. Nigel; Cormack, Gordon V. (1987). "Comments to: "A locally adaptive data compression scheme" by J. L. Bentley, D. D. Sleator, R. E. Tarjan and V. K. Wei.". Comm. ACM. 30 (9): 792–794. doi:10.1145/30401.315747.
 ^ Rivest, R. (1976). "On selforganizing sequential search heuristics". Communications of the ACM. 19 (2): 63. doi:10.1145/359997.360000.
 ^ Lecture notes in advanced data structures, by Prof. Erik Demaine, Scribe: Ray C. He, 2007.
 J. L. Bentley; D. D. Sleator; R. E. Tarjan; V. K. Wei (1986). "A Locally Adaptive Data Compression Scheme". Communications of the ACM. 29 (4). doi:10.1145/5684.5688.