1.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternative, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on challenges in implementing computation; human–computer interaction, for instance, considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623, and in 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953; the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with the IBM machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.

2.
Search tree
–
In computer science, a search tree is a tree data structure used for locating specific keys from within a set. In order for a tree to function as a search tree, the key for each node must be greater than any keys in subtrees on the left and less than any keys in subtrees on the right. The advantage of search trees is their efficient search time, provided the tree is reasonably balanced, which is to say the leaves at either end are of comparable depths. Various search-tree data structures exist, several of which also allow efficient insertion and deletion of elements. Search trees are often used to implement an associative array; the search tree algorithm uses the key from the key–value pair to find a location. A binary search tree is a data structure where each node contains a key. For all nodes, the keys in the left subtree must be less than the node's key, and the keys in the right subtree must be greater; these subtrees must all themselves qualify as binary search trees. The time complexity for searching a binary search tree in the average case is O(log n). B-trees are generalizations of binary search trees in that they can have a variable number of subtrees at each node. While child nodes have a range, they will not necessarily be filled with data. The advantage is that B-trees do not need to be re-balanced as frequently as other self-balancing trees. Due to the variable range of their node length, B-trees are optimized for systems that read large blocks of data; they are also used in databases. The time complexity for searching a B-tree is O(log n). An (a,b)-tree is a search tree where all of its leaves are at the same depth. Each node has at least a children and at most b children, while the root has at least 2 children; a and b can be decided with the following formula: 2 ≤ a ≤ (b+1)/2. The time complexity for searching an (a,b)-tree is O(log n). A ternary search tree is a type of trie where each node can have 3 children: a lo kid, an equal kid, and a hi kid. Each node stores a character, and the tree itself is ordered the same way a binary search tree is. Searching a ternary search tree involves passing in a string to test whether any path contains it; the time complexity for searching a ternary search tree is O(log n).
Assuming the tree is ordered, we can take a key and attempt to locate it within the tree. The following algorithms are generalized for binary search trees, but the same idea can be applied to trees of other formats.
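The generalized search over an ordered binary tree can be sketched as follows. This is a minimal illustration, not taken from the text; the `Node` class and its field names are assumptions.

```python
class Node:
    """Minimal binary-search-tree node: a key plus left/right children."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def search(node, key):
    """Walk down from `node`, going left or right by key comparison.
    Returns the matching node, or None if the key is absent."""
    while node is not None:
        if key == node.key:
            return node
        node = node.left if key < node.key else node.right
    return None

# A small ordered tree:   4
#                        / \
#                       2   6
root = Node(4, Node(2), Node(6))
```

Because each comparison discards one subtree, the number of comparisons is bounded by the tree's height, which is O(log n) when the tree is balanced.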

3.
Leaf node
–
Alternatively, a tree can be defined abstractly as a whole, as an ordered tree with a value assigned to each node. Both these perspectives are useful: a tree can be analyzed mathematically as a whole, and it can also be worked with node by node as a data structure. As a data structure, a tree is made up of nodes (or vertices) and edges without any cycle. The tree with no nodes is called the null or empty tree; a tree that is not empty consists of a root node and potentially many levels of additional nodes that form a hierarchy. The common terminology is as follows. Root: the top node in a tree. Child: a node directly connected to another node when moving away from the root. Parent: the converse notion of a child. Siblings: a group of nodes with the same parent. Descendant: a node reachable by repeatedly proceeding from parent to child. Ancestor: a node reachable by repeatedly proceeding from child to parent. Leaf: a node with no children. Branch (internal node): a node with at least one child. Degree: the number of subtrees of a node. Edge: the connection between one node and another. Path: a sequence of nodes and edges connecting a node with a descendant. Level: the level of a node is defined by 1 + the number of edges between the node and the root. Height of node: the number of edges on the longest path between that node and a leaf. Height of tree: the height of its root node. Depth: the number of edges from the root node to the node. Forest: a set of n ≥ 0 disjoint trees. There is a distinction between a tree as an abstract data type and as a concrete data structure, analogous to the distinction between a list and a linked list. To allow finite trees, one must either allow the list of children to be empty or allow trees to be empty, in case the list of children can be of fixed size. As a data structure, a tree is a group of nodes, where each node has a value and a list of references to other nodes. Taken literally, this definition actually describes a graph, because it may have loops or several references to the same node.
Thus there is also the requirement that no two references point to the same node, and a tree that violates this is corrupt. For example, rather than allowing an empty tree, some definitions require that a reference point to an actual node, so that a tree is always non-empty. In fact, every node other than the root must have exactly one parent, and the root must have no parent.
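The glossary terms above (leaf, height, children) can be made concrete with a small sketch. The `TreeNode` class and its method names are assumptions introduced here for illustration, not part of the original text.

```python
class TreeNode:
    """A node holding a value and a list of child nodes (a general tree)."""
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

    def is_leaf(self):
        # A leaf is a node with no children.
        return not self.children

    def height(self):
        """Number of edges on the longest downward path to a leaf."""
        if self.is_leaf():
            return 0
        return 1 + max(c.height() for c in self.children)

# r has children a and b; a has one child x, so the tree's height is 2.
root = TreeNode('r', [TreeNode('a', [TreeNode('x')]), TreeNode('b')])
```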

4.
Node (computer science)
–
A node is a basic unit used in computer science. Nodes are devices or data points on a larger network; devices such as a personal computer, cell phone, or printer are nodes. When defining nodes on the internet, a node is anything that has an IP address. In data structures, nodes are individual parts of a larger structure, such as linked lists and tree data structures. Nodes contain data and may also link to other nodes; links between nodes are often implemented by pointers. Nodes are often arranged into tree structures, where a node represents the information contained in a single structure. These nodes may contain a value or condition, or may serve as another independent data structure. Each node is linked to by a single parent node; the highest point on a structure is called the root node, which does not have a parent node. The height of a node is determined by the longest path from that node to the furthest leaf node, and node depth is determined by the distance between that node and the root node; the root node is said to have a depth of zero. Data can be discovered along these network paths; an IP address uses this kind of system of nodes to define its location in a network. Child: a child node is a node extending from another node. For example, a computer with internet access could be considered a child node of a node representing the internet. The inverse relationship is that of a parent node: if node C is a child of node A, then A is the parent node of C. Degree: the degree of a node is the number of children of the node. Depth: the depth of node A is the length of the path from A to the root node; the root node is said to have depth 0. Height: the height of node A is the length of the longest path through children to a leaf node. Internal node: a node with at least one child. Leaf node: a node with no children. Root node: a node distinguished from the rest of the tree nodes, usually depicted as the highest node of the tree. Sibling nodes: nodes connected to the same parent node.
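The parent/child relationships above can be sketched with parent pointers, from which depth falls out directly. The `Node` class here is a hypothetical minimal representation, not from the text.

```python
class Node:
    """Node with a parent pointer; the root has parent None."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)   # register with the parent

def depth(node):
    """Number of edges from the node up to the root (root has depth 0)."""
    d = 0
    while node.parent is not None:
        node = node.parent
        d += 1
    return d

root = Node()
child = Node(root)
grandchild = Node(child)
```

The degree of a node is then simply `len(node.children)`.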

5.
Child node
–
Alternatively, a tree can be defined abstractly as a whole, as an ordered tree with a value assigned to each node. Both these perspectives are useful: a tree can be analyzed mathematically as a whole, and it can also be worked with node by node as a data structure. As a data structure, a tree is made up of nodes (or vertices) and edges without any cycle. The tree with no nodes is called the null or empty tree; a tree that is not empty consists of a root node and potentially many levels of additional nodes that form a hierarchy. The common terminology is as follows. Root: the top node in a tree. Child: a node directly connected to another node when moving away from the root. Parent: the converse notion of a child. Siblings: a group of nodes with the same parent. Descendant: a node reachable by repeatedly proceeding from parent to child. Ancestor: a node reachable by repeatedly proceeding from child to parent. Leaf: a node with no children. Branch (internal node): a node with at least one child. Degree: the number of subtrees of a node. Edge: the connection between one node and another. Path: a sequence of nodes and edges connecting a node with a descendant. Level: the level of a node is defined by 1 + the number of edges between the node and the root. Height of node: the number of edges on the longest path between that node and a leaf. Height of tree: the height of its root node. Depth: the number of edges from the root node to the node. Forest: a set of n ≥ 0 disjoint trees. There is a distinction between a tree as an abstract data type and as a concrete data structure, analogous to the distinction between a list and a linked list. To allow finite trees, one must either allow the list of children to be empty or allow trees to be empty, in case the list of children can be of fixed size. As a data structure, a tree is a group of nodes, where each node has a value and a list of references to other nodes. Taken literally, this definition actually describes a graph, because it may have loops or several references to the same node.
Thus there is also the requirement that no two references point to the same node, and a tree that violates this is corrupt. For example, rather than allowing an empty tree, some definitions require that a reference point to an actual node, so that a tree is always non-empty. In fact, every node other than the root must have exactly one parent, and the root must have no parent.
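The "no two references point to the same node" requirement can be checked mechanically. The sketch below assumes a hypothetical adjacency representation (a dict from node id to child ids), which is not from the text; a node reached twice during traversal indicates either a shared child or a cycle, so the structure is not a valid tree.

```python
def is_valid_tree(root, children):
    """Return True if the structure rooted at `root` is a proper tree:
    no node is referenced as a child more than once, and no node links
    back to an ancestor. `children` maps node id -> list of child ids."""
    seen = set()
    stack = [root]
    while stack:
        node = stack.pop()
        if node in seen:          # reached twice: corrupt structure
            return False
        seen.add(node)
        stack.extend(children.get(node, []))
    return True

ok = {'r': ['a', 'b'], 'a': ['c']}
bad = {'r': ['a', 'b'], 'a': ['b']}   # 'b' has two parents
```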

6.
Rooted tree
–
In mathematics, and more specifically in graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one path. In other words, any acyclic connected graph is a tree. A forest is a disjoint union of trees. The various kinds of data structures referred to as trees in computer science have underlying graphs that are trees in graph theory, although such data structures are generally rooted trees; a rooted tree itself has been defined by some authors as a directed graph. The term tree was coined in 1857 by the British mathematician Arthur Cayley. A tree is an undirected graph G that satisfies any of the following equivalent conditions: G is connected and has no cycles; any two vertices in G can be connected by a unique simple path. If G has finitely many vertices, say n of them, then these statements are also equivalent to any of the following conditions: G is connected and has n − 1 edges; G has no simple cycles and has n − 1 edges. An internal vertex is a vertex of degree at least 2; similarly, an external vertex (or leaf) is a vertex of degree 1. An irreducible tree is a tree in which there is no vertex of degree 2. A forest is an undirected graph, all of whose connected components are trees; in other words, the graph consists of a disjoint union of trees. Equivalently, a forest is an undirected acyclic graph. As special cases, an empty graph and a single tree are examples of forests. A polytree is a directed acyclic graph whose underlying undirected graph is a tree; in other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is both connected and acyclic. A directed tree is a directed graph which would be a tree if the directions on the edges were ignored, i.e. a polytree; some authors restrict the phrase to the case where the edges are all directed towards a particular vertex, or all directed away from a particular vertex. A rooted tree is a tree in which one vertex has been designated the root. The edges of a rooted tree can be assigned a natural orientation, either away from or towards the root. The tree-order is the partial ordering on the vertices of a tree with u < v if and only if the unique path from the root to v passes through u.
A rooted tree which is a subgraph of some graph G is a normal tree if the ends of every edge in G are comparable in this tree-order whenever those ends are vertices of the tree.
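The equivalence above (a graph on n vertices is a tree iff it is connected and has n − 1 edges) gives a direct test. This is an illustrative sketch; the vertex labeling 0..n-1 and edge-list format are assumptions.

```python
from collections import defaultdict

def is_tree(n, edges):
    """Return True iff the undirected graph on vertices 0..n-1 with the
    given edge list is a tree: exactly n - 1 edges and connected."""
    if len(edges) != n - 1:
        return False
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Depth-first traversal from vertex 0 to test connectivity.
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n
```

Either condition alone is insufficient: a triangle plus an isolated vertex has n − 1 edges but is disconnected, and checking the edge count lets the connectivity test double as the acyclicity test.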

7.
B-tree
–
In computer science, a B-tree is a self-balancing tree data structure that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree is a generalization of a binary search tree in that a node can have more than two children. Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data, and B-trees are a good example of a data structure for external memory. They are commonly used in databases and filesystems. In B-trees, internal nodes can have a variable number of child nodes within some pre-defined range. When data is inserted or removed from a node, its number of child nodes changes; in order to maintain the pre-defined range, internal nodes may be joined or split. Because a range of child nodes is permitted, B-trees do not need re-balancing as frequently as other self-balancing search trees. The lower and upper bounds on the number of child nodes are typically fixed for a particular implementation. For example, in a 2-3 B-tree, each internal node may have only 2 or 3 child nodes. Each internal node of a B-tree will contain a number of keys; the keys act as separation values which divide its subtrees. For example, if a node has 3 child nodes then it must have 2 keys, a1 and a2: all values in the leftmost subtree will be less than a1, all values in the middle subtree will be between a1 and a2, and all values in the rightmost subtree will be greater than a2. Usually, the number of keys is chosen to vary between d and 2d, where d is the minimum number of keys and d + 1 is the minimum degree or branching factor of the tree. In practice, the keys take up the most space in a node. The factor of 2 guarantees that nodes can be split or combined: a full node of 2d keys can be split into two nodes, each with the required minimum number of keys. Similarly, if a node and its neighbor each have d keys, a key can be deleted and the two combined; deleting the key would make the internal node have d − 1 keys, and joining the neighbor produces an entirely full node of 2d keys. The number of branches from a node will be one more than the number of keys stored in the node. In a 2-3 B-tree, the internal nodes will store either one key (with two child nodes) or two keys (with three child nodes).
A B-tree is sometimes described by the pair of its minimum and maximum branching order, or simply by the highest branching order. A B-tree is kept balanced by requiring that all leaf nodes be at the same depth.
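The key-count invariant described above (between d and 2d keys per node, root exempt from the lower bound, and one more child than keys) can be sketched as a checker. The dict-based node representation here is an assumption for illustration, not a real B-tree implementation.

```python
def valid_key_counts(node, d, is_root=True):
    """Recursively check the B-tree key-count invariant on a node given
    as {'keys': [...], 'children': [...]} (hypothetical representation):
    non-root nodes hold between d and 2d keys, and an internal node has
    exactly len(keys) + 1 children."""
    keys, children = node['keys'], node['children']
    if not is_root and not d <= len(keys) <= 2 * d:
        return False
    if children and len(children) != len(keys) + 1:
        return False
    return all(valid_key_counts(c, d, is_root=False) for c in children)

leaf = lambda *ks: {'keys': list(ks), 'children': []}
tree = {'keys': [10], 'children': [leaf(3, 7), leaf(12, 15)]}
```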

8.
2-3 tree
–
In computer science, a 2–3 tree is a tree data structure where every node with children (internal node) has either two children and one data element or three children and two data elements. According to Knuth, a B-tree of order 3 is a 2-3 tree. Nodes on the outside of the tree (leaf nodes) have no children and one or two data elements. 2–3 trees were invented by John Hopcroft in 1970. 2–3 trees are balanced, meaning that each right, center, and left subtree contains the same or close to the same amount of data. We say that a node is a 2-node if it has one data element and two children, and that a node is a 3-node if it has two data elements and three children. We say that T is a 2–3 tree if and only if one of the following statements holds: T is empty (in other words, T does not have any nodes); T is a 2-node with data element a; or T is a 3-node. In a 2–3 tree, every internal node is a 2-node or a 3-node, all leaves are at the same level, and all data is kept in sorted order. Searching for an item in a 2–3 tree is similar to searching for an item in a binary search tree. Since the data elements in each node are ordered, a search function will be directed to the correct subtree. Let T be a 2–3 tree and d be the data element we want to find. If T is empty, then d is not in T and we're done. Let r be the root of T. Suppose r is a leaf. If d is not in r, then d is not in T; otherwise, d is in T. In particular, d can be found at a leaf node; we need no further steps and we're done. Suppose r is a 2-node with left child L and right child R, and let e be the element in r. There are three cases: if d is equal to e, then we've found d in T and we're done; if d < e, then set T to L, which by definition is a 2–3 tree, and go back to step 2; if d > e, then set T to R and go back to step 2. Suppose r is a 3-node with left child L, middle child M, and right child R, and let a and b be the two elements of r, where a < b. There are four cases: if d is equal to a or b, then d is in T and we're done; if d < a, then set T to L and go back to step 2; if a < d < b, then set T to M and go back to step 2; if d > b, then set T to R and go back to step 2.
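The search procedure above can be sketched compactly by noting that in both the 2-node and 3-node cases, the subtree to descend into is chosen by counting how many of the node's elements are smaller than the target. The tuple encoding of nodes below is a hypothetical representation, not from the text.

```python
def search_23(node, d):
    """Search a 2-3 tree for element d. A node is a hypothetical pair
    (elements, children): a 2-node has 1 element, a 3-node has 2;
    leaves have an empty children list."""
    while node is not None:
        elements, children = node
        if d in elements:
            return True
        if not children:            # reached a leaf without finding d
            return False
        # The number of elements smaller than d picks the subtree:
        # 0 -> left, 1 -> middle/right, 2 -> right (for a 3-node).
        i = sum(1 for e in elements if e < d)
        node = children[i]
    return False

leaf = lambda *es: (list(es), [])
t = ([5, 9], [leaf(2), leaf(7), leaf(11, 12)])   # a 3-node root
```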

9.
2-3-4 tree
–
In computer science, a 2–3–4 tree is a self-balancing data structure that is commonly used to implement dictionaries. 2–3–4 trees are B-trees of order 4; like B-trees in general, they can search, insert, and delete in logarithmic time. One property of a 2–3–4 tree is that all external nodes are at the same depth. 2–3–4 trees are an isometry of red–black trees, meaning that they are equivalent data structures; in other words, for every 2–3–4 tree, there exists at least one red–black tree with data elements in the same order. Moreover, insertion and deletion operations on 2–3–4 trees that cause node expansions, splits, and merges are equivalent to the color-flipping and rotation operations in red–black trees. Introductions to red–black trees usually introduce 2–3–4 trees first, because they are conceptually simpler. 2–3–4 trees, however, can be difficult to implement in most programming languages because of the number of special cases involved in operations on the tree; red–black trees are simpler to implement, so they tend to be used instead. Every node is a 2-node, 3-node, or 4-node, and holds one, two, or three data elements, respectively. All leaves are at the same depth, and all data is kept in sorted order. To insert a value, we start at the root of the 2–3–4 tree: 1. If the current node is a 4-node, remove and save the middle value, and split the remaining 3-node up into a pair of 2-nodes. If this is the root node, the middle value becomes the new root 2-node; otherwise, push the middle value up into the parent node. 2. Find the child whose interval contains the value to be inserted. 3. If that child is a leaf, insert the value into the child node; otherwise, descend into the child and repeat from step 1. For example, suppose we insert the value 25 and, beginning at the root, reach a 4-node whose middle element is 24: 24 is pushed up into the parent node, the remaining 3-node is split into a pair of 2-nodes, and we descend into the appropriate new node; we stop at the leaf and insert the value 25 into it. For deletion, one option is to consider just leaving the element there, marking it "deleted", possibly to be re-used for a future insertion.
To remove a value from the 2–3–4 tree: find the element to be deleted; if the element is not in a leaf node, remember its location and continue searching until a leaf, which will contain the element's successor, is reached. The successor can be either the largest key that is smaller than the one to be removed or the smallest key that is larger than it. It is simplest to make adjustments to the tree from the top down such that the leaf node found is not a 2-node; that way, after the swap, there will not be an empty leaf node. If the element to be removed is in a 2-node leaf, just make the adjustments below, under which the parent key moves down to this node to form a 3-node.
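The 4-node split in step 1 of the insertion procedure can be sketched in isolation. The pair-based node encoding is a hypothetical representation introduced for illustration; a real implementation would also wire the middle key into the parent node.

```python
def split_4node(keys, children):
    """Split step from the insertion procedure above: a 4-node holds
    three keys [a, b, c] and four children. The middle key b is pushed
    up to the parent, and the node splits into a pair of 2-nodes.
    Nodes are (keys, children) pairs; a leaf passes children=[None]*4."""
    assert len(keys) == 3 and len(children) == 4
    a, b, c = keys
    left = ([a], children[:2])      # 2-node keeping the two left subtrees
    right = ([c], children[2:])     # 2-node keeping the two right subtrees
    return b, left, right           # b goes up into the parent

mid, left, right = split_4node([22, 24, 29], [None] * 4)
```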

10.
National Institute of Standards and Technology
–
The National Institute of Standards and Technology (NIST) is a measurement standards laboratory and a non-regulatory agency of the United States Department of Commerce. Its mission is to promote innovation and industrial competitiveness. In 1821, John Quincy Adams had declared that "weights and measures may be ranked among the necessities of life to every individual of human society." From 1830 until 1901, the role of overseeing weights and measures was carried out by the Office of Standard Weights and Measures. President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000; a laboratory site was constructed in Washington, DC, and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures, the Bureau developed instruments for electrical units. In 1905 a meeting was called that would be the first National Conference on Weights and Measures. Quality standards were developed for products including some types of clothing, automobile brake systems and headlamps, and antifreeze. During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off. Between the wars, Harry Diamond of the Bureau developed a blind-approach radio aircraft landing system. In 1948, financed by the Air Force, the Bureau began design and construction of SEAC, the Standards Eastern Automatic Computer. The computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time the Standards Western Automatic Computer (SWAC) was built at the Los Angeles office of the NBS and used for research there. A mobile version, DYSEAC, was built for the Signal Corps in 1954. Due to a changing mission, the National Bureau of Standards became the National Institute of Standards and Technology in 1988.
Following 9/11, NIST conducted the official investigation into the collapse of the World Trade Center buildings. NIST had a budget for fiscal year 2007 of about $843.3 million; its 2009 budget was $992 million, and it also received $610 million as part of the American Recovery and Reinvestment Act. NIST employs about 2,900 scientists, engineers, technicians, and support and administrative personnel, and about 1,800 NIST associates complement the staff. In addition, NIST partners with 1,400 manufacturing specialists and staff at nearly 350 affiliated centers around the country. NIST publishes Handbook 44, which provides the specifications, tolerances, and other technical requirements for weighing and measuring devices. The Congress of 1866 made use of the metric system in commerce a legally protected activity through the passage of the Metric Act of 1866. NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado. NIST's activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six. NIST's Boulder laboratories are best known for NIST-F1, an atomic clock that serves as the source of the United States' official time. NIST also operates a neutron science user facility, the NIST Center for Neutron Research (NCNR); the NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields.

11.
Tree (data structure)
–
Alternatively, a tree can be defined abstractly as a whole, as an ordered tree with a value assigned to each node. Both these perspectives are useful: a tree can be analyzed mathematically as a whole, and it can also be worked with node by node as a data structure. As a data structure, a tree is made up of nodes (or vertices) and edges without any cycle. The tree with no nodes is called the null or empty tree; a tree that is not empty consists of a root node and potentially many levels of additional nodes that form a hierarchy. The common terminology is as follows. Root: the top node in a tree. Child: a node directly connected to another node when moving away from the root. Parent: the converse notion of a child. Siblings: a group of nodes with the same parent. Descendant: a node reachable by repeatedly proceeding from parent to child. Ancestor: a node reachable by repeatedly proceeding from child to parent. Leaf: a node with no children. Branch (internal node): a node with at least one child. Degree: the number of subtrees of a node. Edge: the connection between one node and another. Path: a sequence of nodes and edges connecting a node with a descendant. Level: the level of a node is defined by 1 + the number of edges between the node and the root. Height of node: the number of edges on the longest path between that node and a leaf. Height of tree: the height of its root node. Depth: the number of edges from the root node to the node. Forest: a set of n ≥ 0 disjoint trees. There is a distinction between a tree as an abstract data type and as a concrete data structure, analogous to the distinction between a list and a linked list. To allow finite trees, one must either allow the list of children to be empty or allow trees to be empty, in case the list of children can be of fixed size. As a data structure, a tree is a group of nodes, where each node has a value and a list of references to other nodes. Taken literally, this definition actually describes a graph, because it may have loops or several references to the same node.
Thus there is also the requirement that no two references point to the same node, and a tree that violates this is corrupt. For example, rather than allowing an empty tree, some definitions require that a reference point to an actual node, so that a tree is always non-empty. In fact, every node other than the root must have exactly one parent, and the root must have no parent.
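The recursive nature of the definition above (a node has a value and a list of child subtrees) lends itself to a short sketch that collects the leaves of a tree. The nested-pair encoding is a hypothetical representation, not from the text.

```python
def leaves(tree):
    """Collect the leaf values of a tree given as (value, [subtrees]):
    a node with an empty subtree list is a leaf."""
    value, subtrees = tree
    if not subtrees:
        return [value]
    out = []
    for sub in subtrees:
        out.extend(leaves(sub))
    return out

# r -> a -> {x, y}, r -> b; the leaves in left-to-right order: x, y, b.
t = ('r', [('a', [('x', []), ('y', [])]), ('b', [])])
```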

12.
AA tree
–
An AA tree in computer science is a form of balanced tree used for storing and retrieving ordered data efficiently. AA trees are named for Arne Andersson, their inventor. AA trees are a variation of the red-black tree, a form of binary search tree which supports efficient addition and deletion of entries. Unlike in red-black trees, red nodes on an AA tree can only be added as a right sub-child; in other words, no red node can be a left sub-child. This results in the simulation of a 2-3 tree instead of a 2-3-4 tree. The following invariants hold for AA trees: the level of every leaf node is one; the level of every left child is exactly one less than that of its parent; the level of every right child is equal to or one less than that of its parent; the level of every right grandchild is strictly less than that of its grandparent; and every node of level greater than one has two children. A link where the child's level is equal to that of its parent is called a horizontal link. Individual right horizontal links are allowed, but consecutive ones are forbidden. These are more restrictive constraints than the analogous ones on red-black trees, with the result that re-balancing an AA tree is procedurally much simpler than re-balancing a red-black tree. Insertions and deletions may cause an AA tree to become unbalanced. Only two distinct operations are needed for restoring balance: skew and split. Skew is a right rotation to replace a subtree containing a left horizontal link with one containing a right horizontal link instead. Split is a left rotation and level increase to replace a subtree containing two or more consecutive right horizontal links with one containing two fewer consecutive right horizontal links.
function skew is
    input: T, a node representing an AA tree that needs to be rebalanced.
    output: Another node representing the rebalanced AA tree.
    if nil(T) then return Nil
    else if nil(left(T)) then return T
    else if level(left(T)) == level(T) then
        Swap the pointers of horizontal left links:
        L = left(T)
        left(T) := right(L)
        right(L) := T
        return L
    else return T
    end if
end function
function split is
    input: T, a node representing an AA tree that needs to be rebalanced.
    output: Another node representing the rebalanced AA tree.
    if nil(T) then return Nil
    else if nil(right(T)) or nil(right(right(T))) then return T
    else if level(T) == level(right(right(T))) then
        We have two horizontal right links; take the middle node, elevate it, and return it:
        R = right(T)
        right(T) := left(R)
        left(R) := T
        level(R) := level(R) + 1
        return R
    else return T
    end if
end function
Insertion begins with the normal binary tree search and insertion procedure. Then, as the call stack unwinds, it is easy to check the validity of the tree and perform any rotations as necessary. (Note: in the code as given above, the increment of the level happens inside split.)
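The skew and split pseudocode above translates directly into Python. The `AANode` class is a hypothetical minimal node type introduced for this sketch; a full AA tree would also include the insert and delete routines that call these helpers.

```python
class AANode:
    """Minimal AA-tree node: key, level, and left/right children."""
    def __init__(self, key, level=1, left=None, right=None):
        self.key, self.level = key, level
        self.left, self.right = left, right

def skew(t):
    """Right rotation removing a left horizontal link (equal levels)."""
    if t is None or t.left is None:
        return t
    if t.left.level == t.level:
        l = t.left
        t.left, l.right = l.right, t    # rotate: l becomes the new root
        return l
    return t

def split(t):
    """Left rotation and level increase removing two consecutive
    right horizontal links."""
    if t is None or t.right is None or t.right.right is None:
        return t
    if t.level == t.right.right.level:
        r = t.right
        t.right, r.left = r.left, t     # rotate: r becomes the new root
        r.level += 1                    # elevate the middle node
        return r
    return t

# skew fixes a left horizontal link; split fixes two right ones.
skewed = skew(AANode(5, 1, left=AANode(3, 1)))
splitted = split(AANode(1, 1, right=AANode(2, 1, right=AANode(3, 1))))
```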

13.
AVL tree
–
In computer science, an AVL tree is a self-balancing binary search tree. It was the first such structure to be invented. In an AVL tree, the heights of the two child subtrees of any node differ by at most one; if at any time they differ by more than one, rebalancing is done to restore this property. Lookup, insertion, and deletion all take O(log n) time in both the average and worst cases, where n is the number of nodes in the tree prior to the operation. Insertions and deletions may require the tree to be rebalanced by one or more tree rotations. The AVL tree is named after its two Soviet inventors, Georgy Adelson-Velsky and Evgenii Landis, who published it in their 1962 paper "An algorithm for the organization of information". AVL trees are often compared with red–black trees because both support the same set of operations and take O(log n) time for the basic operations. For lookup-intensive applications, AVL trees are faster than red–black trees because they are more strictly balanced. Similar to red–black trees, AVL trees are height-balanced; both are, in general, neither weight-balanced nor μ-balanced for any μ ≤ 1/2. In a binary tree, the balance factor of a node N is defined to be the height difference BalanceFactor(N) = Height(RightSubtree(N)) − Height(LeftSubtree(N)) of its two child subtrees. A binary tree is defined to be an AVL tree if the invariant BalanceFactor(N) ∈ {−1, 0, +1} holds for every node N in the tree. A node N with BalanceFactor(N) < 0 is called left-heavy, and one with BalanceFactor(N) > 0 is called right-heavy. Balance factors can be kept up-to-date by knowing the previous balance factors and the change in height; it is not necessary to know the absolute height. For holding the AVL balance information, two bits per node are sufficient. This is because an AVL tree of height h contains at least F(h+2) − 1 nodes, where F is the Fibonacci sequence with the seed values F1 = 1, F2 = 1. Searching for a key in an AVL tree can be done the same way as in a normal unbalanced binary search tree.
In order for search to work effectively it has to employ a comparison function which establishes a total order on the set of keys. The number of comparisons required for a successful search is limited by the height h, and for an unsuccessful search is very close to h. Once a node has been found in an AVL tree, the next or previous node can be accessed in amortized constant time; some instances of exploring these nearby nodes require traversing up to h ∝ log n links, but since there are n − 1 links in any tree, the amortized cost per step is approximately constant. When inserting an element into an AVL tree, you initially follow the same process as inserting into a binary search tree. After inserting a node, it is necessary to check each of the node's ancestors for consistency with the invariants of AVL trees; this is achieved by considering the balance factor of each node.
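The AVL invariant above (balance factor in {−1, 0, +1} at every node) can be checked with a short sketch. The nested-pair encoding of trees is an assumption introduced for illustration; recomputing heights at every node is O(n²) in the worst case, so it is only a validity check, not how a real AVL tree maintains balance.

```python
def height(node):
    """Height of a subtree given as (left, right) pairs; None is the
    empty tree, with height -1 by convention."""
    if node is None:
        return -1
    left, right = node
    return 1 + max(height(left), height(right))

def is_avl(node):
    """Check BalanceFactor = Height(right) - Height(left) in {-1, 0, +1}
    at every node of the tree."""
    if node is None:
        return True
    left, right = node
    bf = height(right) - height(left)
    return abs(bf) <= 1 and is_avl(left) and is_avl(right)

balanced = ((None, None), (None, None))       # perfect tree of height 1
degenerate = (((None, None), None), None)     # left-leaning chain
```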

14.
B*-tree
–
In computer science, a B-tree is a self-balancing tree data structure that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree is a generalization of a binary search tree in that a node can have more than two children. Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data, and B-trees are a good example of a data structure for external memory. They are commonly used in databases and filesystems. In B-trees, internal nodes can have a variable number of child nodes within some pre-defined range. When data is inserted or removed from a node, its number of child nodes changes; in order to maintain the pre-defined range, internal nodes may be joined or split. Because a range of child nodes is permitted, B-trees do not need re-balancing as frequently as other self-balancing search trees. The lower and upper bounds on the number of child nodes are typically fixed for a particular implementation. For example, in a 2-3 B-tree, each internal node may have only 2 or 3 child nodes. Each internal node of a B-tree will contain a number of keys; the keys act as separation values which divide its subtrees. For example, if a node has 3 child nodes then it must have 2 keys, a1 and a2: all values in the leftmost subtree will be less than a1, all values in the middle subtree will be between a1 and a2, and all values in the rightmost subtree will be greater than a2. Usually, the number of keys is chosen to vary between d and 2d, where d is the minimum number of keys and d + 1 is the minimum degree or branching factor of the tree. In practice, the keys take up the most space in a node. The factor of 2 guarantees that nodes can be split or combined: a full node of 2d keys can be split into two nodes, each with the required minimum number of keys. Similarly, if a node and its neighbor each have d keys, a key can be deleted and the two combined; deleting the key would make the internal node have d − 1 keys, and joining the neighbor produces an entirely full node of 2d keys. The number of branches from a node will be one more than the number of keys stored in the node. In a 2-3 B-tree, the internal nodes will store either one key (with two child nodes) or two keys (with three child nodes).
A B-tree is sometimes described with the parameters — or simply with the highest branching order. A B-tree is kept balanced by requiring that all leaf nodes be at the same depth.
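The key-directed descent described above can be sketched in Python. This is an illustrative model only: nodes are represented as (keys, children) tuples, which is an assumption for compactness, not a realistic external-memory layout.

```python
def btree_search(node, key):
    # node = (keys, children); leaf nodes have children == [].
    keys, children = node
    i = 0
    # Find the first separation key that is >= the search key.
    while i < len(keys) and key > keys[i]:
        i += 1
    if i < len(keys) and keys[i] == key:
        return True
    if not children:
        return False          # reached a leaf without finding the key
    # keys[i-1] < key < keys[i]: descend into the i-th subtree.
    return btree_search(children[i], key)

# A small 2-3 tree: internal nodes hold 1 or 2 keys, hence 2 or 3 children.
leaf = lambda *ks: (list(ks), [])
root = ([10, 20], [leaf(3, 7), leaf(12, 15), leaf(25)])
print(btree_search(root, 15), btree_search(root, 8))  # True False
```

Note how the two keys of the root partition the key space into three subtrees, exactly as the separation-value description above requires.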

15.
Bx-tree
–
In computer science, the Bx tree is a query- and update-efficient B+ tree-based index structure for moving objects. The base structure of the Bx-tree is a B+ tree in which the internal nodes serve as a directory. In the earlier version of the Bx-tree, the leaf nodes contained the moving-object locations being indexed and the corresponding index time; in the optimized version, each leaf node contains the id, velocity, and single-dimensional mapping value. The fanout is increased by not storing the locations of moving objects. The B+ tree is a structure for indexing single-dimensional data. In order to adopt the B+ tree as a moving-object index, objects are first partitioned according to their update time. For objects within the same partition, the Bx-tree stores their locations at a given time, which are estimated by linear interpolation; by doing so, the Bx-tree keeps a consistent view of all objects within the same partition without storing the update time of each object. Secondly, the space is partitioned by a grid, and the location of an object is linearized within the partitions according to a space-filling curve. Given an object O with tmu = 120, the Bx-value for O can be computed as follows: O is indexed in partition 0 as mentioned, and O's position at the label timestamp of partition 0 is estimated. Using a Z-curve with order = 3, the Z-value of O, i.e. xrep, is 2; concatenating indexpartition and xrep gives the Bx-value 19. Given a new object, its index key is computed and then the object is inserted into the Bx-tree as in the B+ tree. An update consists of a deletion followed by an insertion; an auxiliary structure is employed to keep the latest key of each index so that an object can be deleted by searching for the key. The indexing key is computed before affecting the tree. In this way, the Bx-tree directly inherits the good properties of the B+ tree, and achieves efficient update performance.
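As a rough illustration of the linearization step described above, here is a hedged Python sketch of a Z-order (Morton) value and of concatenating it with a partition number. The bit widths and the exact key layout are assumptions for illustration only, not the Bx-tree's actual key format.

```python
def z_value(x, y, order=3):
    # Interleave the low `order` bits of x and y (Z-order / Morton code).
    z = 0
    for i in range(order):
        z |= ((x >> i) & 1) << (2 * i)        # x bit goes to even position
        z |= ((y >> i) & 1) << (2 * i + 1)    # y bit goes to odd position
    return z

def bx_key(partition, x, y, order=3):
    # Hypothetical key layout: partition number in the high bits,
    # Z-value of the grid cell in the low 2*order bits.
    return (partition << (2 * order)) | z_value(x, y, order)

print(z_value(7, 7))  # 63: all six interleaved bits set
```

Because the partition occupies the high bits, all keys of one partition form a contiguous range in the B+ tree, which is what makes the partition-by-partition traversal of queries possible.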
A range query retrieves all objects whose location falls within the rectangular range q at a time tq not prior to the current time. The Bx-tree uses a query-window enlargement technique to answer such queries. The main idea is to enlarge the query window so that it encloses all objects whose positions at its label timestamp are not within the query window but which may fall within it at the query time; after the enlargement, the partitions of the Bx-tree need to be traversed to find objects falling in the enlarged query window. In each partition, the use of a space-filling curve means that a query in the native, two-dimensional space becomes a set of range queries in the transformed, one-dimensional space. A k-nearest-neighbor query is computed by iteratively performing range queries with an incrementally enlarged search region until k answers are obtained; another possibility is to employ querying ideas similar to those in the iDistance technique. The range query and k-nearest-neighbor query algorithms can be extended to support interval queries and continuous queries.

16.
Binary search tree
–
In computer science, binary search trees, sometimes called ordered or sorted binary trees, are a particular type of container: data structures that store items in memory. They allow fast lookup, addition and removal of items, and can be used to implement either dynamic sets of items or lookup tables. This is much better than the linear time required to find items by key in an unsorted array, but slower than the corresponding operations on hash tables. Several variants of the binary search tree have been studied in computer science. A binary search tree is a binary tree whose internal nodes each store a key. Generally, the information represented by each node is a record rather than a single data element; however, for sequencing purposes, nodes are compared according to their keys rather than any part of their associated records. Binary search trees are a fundamental data structure used to construct more abstract data structures such as sets, multisets, and associative arrays. Some of their disadvantages are as follows: the shape of the search tree depends entirely on the order of insertions and deletions; when inserting or searching for an element in a search tree, the key of the element has to be compared with the keys of visited nodes, and the keys in the search tree may be long, so the run time may increase. After a long intermixed sequence of random insertions and deletions, the expected height of the tree approaches the square root of the number of keys, √n. Binary search requires an order relation by which every element can be compared with every other element in the sense of a total preorder. The part of the element which effectively takes place in the comparison is called its key. Whether duplicates, i.e. different elements with the same key, shall be allowed in the tree or not does not depend on the order relation. In the context of binary search trees, a total preorder is realized most flexibly by means of a three-way comparison subroutine.
Binary search trees support three main operations: insertion of elements, deletion of elements, and lookup. Searching a binary search tree for a specific key can be programmed recursively or iteratively. We begin by examining the root node: if the tree is null, the key we are searching for does not exist in the tree. Otherwise, if the key equals that of the root, the search is successful. If the key is less than that of the root, we search the left subtree; similarly, if the key is greater than that of the root, we search the right subtree. This process is repeated until the key is found or the remaining subtree is null. If the searched key is not found after a null subtree is reached, then the key is not present in the tree. This is easily expressed as a recursive algorithm, and the same algorithm can be implemented iteratively. These two approaches rely on the order relation being a total order.
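The recursive and iterative formulations described above can be sketched side by side. The node layout is an illustrative assumption.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def search_recursive(node, key):
    if node is None:
        return None                    # null subtree reached: key absent
    if key == node.key:
        return node                    # successful search
    if key < node.key:
        return search_recursive(node.left, key)
    return search_recursive(node.right, key)

def search_iterative(node, key):
    # Same comparisons as above, expressed as a loop instead of recursion.
    while node is not None and node.key != key:
        node = node.left if key < node.key else node.right
    return node

root = Node(8, Node(3, Node(1), Node(6)), Node(10))
print(search_recursive(root, 6).key)   # 6
print(search_iterative(root, 7))       # None
```

Both versions visit the same nodes; the iterative one merely avoids the call stack.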

17.
Interval tree
–
In computer science, an interval tree is a tree data structure to hold intervals. Specifically, it allows one to efficiently find all intervals that overlap with any given interval or point. It is often used for windowing queries, for instance, to find all roads on a computerized map inside a rectangular viewport. A similar data structure is the segment tree. The trivial solution is to visit each interval and test whether it intersects the given point or interval, which requires O(n) time, where n is the number of intervals in the collection. Interval trees have a query time of O(log n + m), where m is the number of reported results, and an initial creation time of O(n log n). After creation, interval trees may be dynamic, allowing efficient insertion and deletion of an interval in O(log n) time. If the endpoints of intervals are within a small integer range, faster data structures exist with linear preprocessing time. In a simple case, the intervals do not overlap and they can be inserted into a binary search tree. A naive approach might be to build two parallel trees, one ordered by the beginning point, and one ordered by the ending point of each interval. This allows discarding half of each tree in O(log n) time, but the results must be merged, which gives queries in O(n + log n) = O(n) time, no better than brute force. This article describes two alternative designs for an interval tree, dubbed the centered interval tree and the augmented tree. Queries require O(log n + m) time, with n being the number of intervals and m the number of reported results; construction requires O(n log n) time, and storage requires O(n) space. Given a set of n intervals on the number line, we want to construct a data structure so that we can efficiently retrieve all intervals overlapping another interval or point. We start by taking the entire range of all the intervals and dividing it at a center point, which splits the intervals into those entirely to the left (S_left), those entirely to the right (S_right), and those overlapping the center (S_center). The intervals in S_left and S_right are recursively divided in the same manner until there are no intervals left. The intervals in S_center that overlap the center point are stored in a separate data structure linked to the node in the interval tree.
This data structure consists of two lists, one containing all the intervals sorted by their beginning points, and another containing all the intervals sorted by their ending points. The task is to find all intervals in the tree that overlap a given point x. The tree is walked with a recursive algorithm similar to the one that would be used to traverse a traditional binary tree, but with extra affordance for the intervals overlapping the center point at each node. For each tree node, x is compared to x_center, the midpoint used in node construction above; if x is less than x_center, the leftmost set of intervals, S_left, is considered.
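The point query over a centered interval tree can be sketched as follows. The node layout is an illustrative assumption, and the linear scans would be binary searches over the sorted lists in a real implementation.

```python
class IntervalNode:
    # A node of a centered interval tree. `intervals` are the (start, end)
    # pairs that overlap this node's center point (its S_center).
    def __init__(self, center, intervals, left=None, right=None):
        self.center = center
        self.left, self.right = left, right
        self.by_start = sorted(intervals)                      # by beginning point
        self.by_end = sorted(intervals, key=lambda iv: iv[1])  # by ending point

def query_point(node, x):
    # All stored intervals containing the point x.
    if node is None:
        return []
    if x < node.center:
        # Every interval here ends at or after center > x, so it contains x
        # exactly when it starts at or before x.
        hits = [iv for iv in node.by_start if iv[0] <= x]
        return hits + query_point(node.left, x)
    # x >= center: an interval here contains x exactly when it ends at or after x.
    hits = [iv for iv in node.by_end if iv[1] >= x]
    return hits + query_point(node.right, x)

left = IntervalNode(2, [(1, 3)])            # intervals entirely left of center 5
tree = IntervalNode(5, [(1, 6), (4, 10)])   # intervals overlapping center 5
tree.left = left
print(sorted(query_point(tree, 2)))  # [(1, 3), (1, 6)]
```

Because each list is sorted, a real implementation stops scanning at the first non-matching entry, which is what yields the O(log n + m) bound.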

18.
Splay tree
–
A splay tree is a self-adjusting binary search tree with the additional property that recently accessed elements are quick to access again. It performs basic operations such as insertion, look-up and removal in O(log n) amortized time. For many sequences of non-random operations, splay trees perform better than other search trees, even when the specific pattern of the sequence is unknown. The splay tree was invented by Daniel Sleator and Robert Tarjan in 1985. All normal operations on a binary search tree are combined with one basic operation, called splaying. Splaying the tree for a certain element rearranges the tree so that the element is placed at the root of the tree. One way to do this is to first perform a standard binary tree search for the element in question, and then use tree rotations to bring the element to the root. Alternatively, an algorithm can combine the search and the tree reorganization into a single phase. Good performance for a splay tree depends on the fact that it is self-optimizing. The worst-case height, though unlikely, is O(n), with the average being O(log n). Having frequently used nodes near the root is an advantage for many practical applications, and is particularly useful for implementing caches and garbage collection algorithms. Advantages include comparable performance (average-case performance is as efficient as that of other trees) and a small memory footprint (splay trees do not need to store any bookkeeping data). The most significant disadvantage of splay trees is that the height of a splay tree can be linear; for example, this will be the case after accessing all n elements in non-decreasing order. Since the height of a tree corresponds to the worst-case access time, this means that the actual cost of a single operation can be high; however, the amortized access cost of this worst case is logarithmic, O(log n). Also, the representation of splay trees can change even when they are accessed in a read-only manner. This complicates the use of such trees in a multi-threaded environment.
Specifically, extra management is needed if multiple threads are allowed to perform find operations concurrently. This also makes splay trees unsuitable for general use in purely functional programming, although even there they can be used in limited ways to implement priority queues. When a node x is accessed, a splay operation is performed on x to move it to the root. To perform a splay operation we carry out a sequence of splay steps, each of which moves x closer to the root. It is important to remember to set gg to point to x after any splay operation; if gg is null, then x obviously is now the root. There are three types of splay steps, each of which has a left- and right-handed case; for the sake of brevity, only one of the two is shown for each type.
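The simplest of the splay steps, the single rotation applied when x's parent is the root, can be sketched as follows. This is an illustrative fragment, not a full splay tree: the zig-zig and zig-zag steps, which compose two such rotations, are omitted.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    # The right-handed single-rotation case: x is the left child of y.
    x = y.left
    y.left = x.right     # x's right subtree becomes y's left subtree
    x.right = y          # y becomes x's right child
    return x             # x is the new subtree root

def rotate_left(x):
    # The mirror-image (left-handed) case.
    y = x.right
    x.right = y.left
    y.left = x
    return y

# Accessing 1 in the tree 2(1, 3) splays it to the root with one rotation.
root = Node(2, Node(1), Node(3))
root = rotate_right(root)
print(root.key)  # 1
```

A full splay repeats such rotations, choosing the step type by whether x and its parent are same-side or opposite-side children, until x reaches the root.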

19.
T-tree
–
In computer science a T-tree is a type of binary tree data structure that is used by main-memory databases, such as DataBlitz, eXtremeDB, MySQL Cluster, Oracle TimesTen and MobileLite. T-trees seek to gain the performance benefits of in-memory tree structures such as AVL trees while avoiding the large storage space overhead which is common to them. T-trees do not keep copies of the indexed data fields within the index tree nodes themselves; instead, they take advantage of the fact that the data is always in main memory together with the index, so that they just contain pointers to the actual data fields. The "T" in T-tree refers to the shape of the node data structures in the original paper that first described this type of index. Although T-trees seem to be widely used for main-memory databases, recent research indicates that they do not actually perform better than B-trees on modern hardware (Rao, Jun, "Cache conscious indexing for decision-support in main memory", Proceedings of the 25th International Conference on Very Large Databases; Kim, Kyungwha, Junho Shim and Ig-hoon Lee, "Cache conscious trees: how do they perform on contemporary commodity microprocessors?", Proceedings of the 5th International Conference on Computational Science and Its Applications). The main reason seems to be that the traditional assumption of memory references having uniform cost is no longer valid given the current speed gap between cache access and main memory access. A T-tree node usually consists of pointers to the parent node, the left and right child nodes, and an ordered array of data pointers. Nodes with two subtrees are called internal nodes, nodes without subtrees are called leaf nodes, and nodes with only one subtree are named half-leaf nodes. A node is called the bounding node for a value if the value is between the node's current minimum and maximum value, inclusively.
For each internal node, leaf or half-leaf nodes exist that contain the predecessor of its smallest data value and the successor of its largest data value. Leaf and half-leaf nodes can contain any number of data elements from one to the maximum size of the data array. Search starts at the root node: if the search value is less than the minimum value of the current node, then search continues in its left subtree, and fails if there is no left subtree; if the search value is greater than the maximum value of the current node, then search continues in its right subtree, and fails if there is no right subtree; otherwise the current node is the bounding node, and search fails if the value is not found in its data array. To insert a new value, search for a bounding node for it. If one is found but its data array is full, remove the minimum value from the data array, insert the new value, and proceed to the node holding the greatest lower bound for the node that the new value was inserted to; if the removed minimum value still fits in there, then add it as the new maximum value of that node. If no bounding node was found, then insert the value into the last node searched if it still fits into it.
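The bounding-node search described above can be sketched in Python. The node layout (a sorted data array plus child pointers) is an illustrative assumption; a real T-tree stores pointers to records rather than the values themselves.

```python
class TNode:
    def __init__(self, data, left=None, right=None):
        self.data = sorted(data)          # the node's data array
        self.left, self.right = left, right

def search(node, value):
    # Descend to the bounding node, then scan its data array.
    while node is not None:
        if value < node.data[0]:          # below this node's minimum
            node = node.left
        elif value > node.data[-1]:       # above this node's maximum
            node = node.right
        else:
            return value in node.data     # bounding node found
    return False                          # fell off the tree: search fails

root = TNode([10, 20], left=TNode([1, 5]), right=TNode([30, 40]))
print(search(root, 5), search(root, 7))   # True False
```

Note that 7 lies between the left node's maximum (5) and the root's minimum (10), so its bounding node does not exist and the search fails after reaching a missing subtree.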

20.
Treap
–
The treap was first described by Cecilia R. Aragon and Raimund Seidel in 1989; its name is a portmanteau of tree and heap. It is a Cartesian tree in which each key is given a numeric priority. As with any binary search tree, the inorder traversal order of the nodes is the same as the sorted order of the keys. The structure of the tree is determined by the requirement that it be heap-ordered: each node's priority is at least as large as the priorities of its children. An equivalent way of describing the treap is that it could be formed by inserting the nodes highest-priority-first into a binary search tree without doing any rebalancing. Because random binary search trees are known to have logarithmic height with high probability, the same is true for treaps. One could also update priorities on access; this modification would cause the tree to lose its random shape, and instead frequently accessed nodes would be more likely to be near the root of the tree. Naor and Nissim describe an application in maintaining authorization certificates in public-key cryptosystems. Treaps support the following basic operations. To search for a given key value, apply a standard binary search algorithm in a binary search tree, ignoring the priorities. To insert a new key x into the treap, generate a random priority y for x, binary search for x in the tree, and create a new node at the leaf position where the binary search determines a node for x should exist; then, as long as x is not the root of the tree and has a larger priority number than its parent z, perform a tree rotation that reverses the parent-child relation between x and z. To delete a node x from the treap: if x is a leaf of the tree, simply remove it; if x has a single child z, remove x from the tree and make z be the child of the parent of x; finally, if x has two children, swap its position in the tree with the position of its immediate successor z in the sorted order, resulting in one of the previous cases.
In this final case, the swap may violate the heap-ordering property for z, so further rotations may be needed to restore it. In addition to the single-element insert, delete and lookup operations, several fast bulk operations have been defined on treaps: union, intersection and set difference. These rely on two operations, split and merge. To split a treap into two smaller treaps, those smaller than key x and those larger than key x, insert x into the treap with maximum priority, larger than the priority of any node in the treap. After this insertion, x will be the root node of the treap: all values less than x will be found in the left subtreap, and all values greater than x in the right subtreap. This costs as much as a single insertion into the treap. When merging two treaps that are the product of a former split, one can safely assume that the greatest value in the first treap is less than the smallest value in the second treap. Join them under a temporary root, rotate it down as necessary to fix the heap order, and after that it will be a leaf node and can easily be deleted. The result is one treap merged from the two original treaps. This is effectively undoing a split, and costs the same.

21.
Weight-balanced tree
–
In computer science, weight-balanced binary trees are a type of self-balancing binary search tree that can be used to implement dynamic sets, dictionaries and sequences. These trees were introduced by Nievergelt and Reingold in the 1970s as trees of bounded balance; their more common name is due to Knuth. Specifically, each node stores the size of the subtree rooted at the node, and the stored sizes support operations such as getting the nth largest element in a set or determining an element's index in sorted order. Weight-balanced trees are popular in the functional programming community and are used to implement sets and maps in MIT Scheme and SLIB. A weight-balanced tree is a binary search tree that stores the sizes of subtrees in the nodes. That is, a node has fields: key, of any ordered type; value; left and right, pointers to nodes; and size, of type integer. By definition, the size of a leaf is zero. The size of an internal node is the sum of the sizes of its two children, plus one. Based on the size, one defines the weight as weight = size + 1. Formally, node balance is defined as follows: a node is α-weight-balanced if weight(left) ≥ α·weight(node) and weight(right) ≥ α·weight(node). Here, α is a parameter to be determined when implementing weight-balanced trees. Larger values of α produce more balanced trees, but not all values of α are appropriate; later work showed a lower bound of 2⁄11 for α, although it can be made arbitrarily small if a custom rebalancing algorithm is used. Balancing takes a constant amount of overhead in an amortized sense.
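The size field, the α-balance condition, and the order-statistic lookup it enables can be sketched as follows. This is a static illustration under assumed names; a real weight-balanced tree would also rebalance on insertion and deletion.

```python
def size(node):
    return 0 if node is None else node.size

class WBNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.size = 1 + size(left) + size(right)   # stored subtree size

def weight(node):
    return size(node) + 1

def is_alpha_balanced(node, alpha):
    # Checks weight(left) >= alpha * weight(node), and symmetrically.
    if node is None:
        return True
    return (weight(node.left) >= alpha * weight(node) and
            weight(node.right) >= alpha * weight(node) and
            is_alpha_balanced(node.left, alpha) and
            is_alpha_balanced(node.right, alpha))

def select(node, n):
    # n-th smallest key (0-based), guided by the stored sizes.
    left = size(node.left)
    if n < left:
        return select(node.left, n)
    if n == left:
        return node.key
    return select(node.right, n - left - 1)

root = WBNode(4, WBNode(2, WBNode(1), WBNode(3)), WBNode(6, WBNode(5), WBNode(7)))
print(select(root, 2))  # 3
```

Because `select` descends one level per comparison, finding the nth element costs time proportional to the tree's height, which the balance condition keeps logarithmic.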

22.
Heap (data structure)
–
A heap can be classified further as either a max-heap or a min-heap. In a max-heap, the keys of parent nodes are always greater than or equal to those of the children and the highest key is in the root node; in a min-heap, the keys of parent nodes are less than or equal to those of the children and the lowest key is in the root node. A common implementation of a heap is the binary heap, in which the tree is a binary tree. The heap data structure, specifically the binary heap, was introduced by J. W. J. Williams in 1964. Heaps are also crucial in several efficient graph algorithms such as Dijkstra's algorithm. In a heap, the highest-priority element is stored at the root. A heap is not a sorted structure and can be regarded as partially ordered. As visible from the heap diagram, there is no relationship among nodes on any given level. When a heap is a complete binary tree, it has the smallest possible height: a heap with N nodes always has O(log N) height. A heap is a useful data structure when you need to repeatedly remove the object with the highest priority. Note that, as shown in the graphic, there is no implied ordering between siblings or cousins and no implied sequence for an in-order traversal; the heap relation mentioned above applies only between nodes and their parents, grandparents, etc. The maximum number of children each node can have depends on the type of heap. Typical operations include: replace, which pops the root and pushes a new key, and is more efficient than a pop followed by a push since it only needs to balance once, not twice; meld, joining two heaps to form a new heap containing all the elements of both, destroying the original heaps; inspection operations such as size, returning the number of items in the heap, and is-empty, returning true if the heap is empty and false otherwise; sift-up, moving a node up in the tree, so called because the node moves up the tree until it reaches the correct level, used to restore the heap condition after insertion; and sift-down, moving a node down in the tree, similar to sift-up, used to restore the heap condition after deletion or replacement.
Heaps are usually implemented in an array, and do not require pointers between elements. After an element is inserted into or deleted from a heap, the heap property may be violated, and the heap must be re-balanced by internal operations. Full and almost-full binary heaps may be represented in a very space-efficient way using an array alone: the first element contains the root, the next two elements of the array contain its children, the next four contain the four children of the two child nodes, and so on.
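The array layout just described leads to simple index arithmetic. A minimal sketch, using 0-based indexing (the text's description of "the first element" corresponds to index 0 here):

```python
# 0-based layout: the children of index i sit at 2i+1 and 2i+2,
# and the parent of index i sits at (i - 1) // 2.
def parent(i): return (i - 1) // 2
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2

heap = [11, 5, 8, 3, 4]   # a valid max-heap stored as a flat array
# The heap property can be checked purely by index arithmetic:
assert all(heap[parent(i)] >= heap[i] for i in range(1, len(heap)))
```

This is why no pointers are needed: parent and child positions are recomputed from the index on demand.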

23.
Binary heap
–
A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The binary heap was introduced by J. W. J. Williams in 1964. Heap property: the key stored in each node is either greater than or equal to, or less than or equal to, the keys in the node's children, according to some total order. Heaps where the parent key is greater than or equal to the child keys are called max-heaps; those where it is less than or equal to the child keys are called min-heaps. Efficient algorithms are known for the two operations needed to implement a priority queue on a binary heap: inserting an element, and removing the smallest or largest element. Both the insert and remove operations modify the heap to conform to the shape property first; then the heap property is restored by traversing up or down the heap. To add an element to a heap we must perform an up-heap operation, by following this algorithm: add the element to the bottom level of the heap; compare the added element with its parent, and if they are in the correct order, stop; if not, swap the element with its parent and return to the previous step. As an example of binary heap insertion, say we have a max-heap and we first place the new element 15 in the position marked by the X. However, the heap property is violated since 15 > 8, so we need to swap the 15 and the 8. The heap then looks as follows after the first swap; however, the heap property is still violated since 15 > 11, so we need to swap again. There is no need to check the left child after this step: at the start, the max-heap was valid, meaning 11 > 5; so if 15 > 11, then 15 > 5 as well. The procedure for deleting the root from the heap and restoring the properties is called down-heap: replace the root of the heap with the last element on the last level; compare the new root with its children, and if they are in the correct order, stop; if not, swap the element with one of its children and return to the previous step. So, if we have the same max-heap as before, we remove the 11. Now the heap property is violated, since 8 is greater than 4.
This functionality is achieved by the Max-Heapify function, as defined below in pseudocode for an array-backed heap A of length heap_length; note that A is indexed starting at 1, not 0, as is common in many real programming languages. If the element and its children are already in the correct order, the algorithm will fall through with no change to the array. The down-heap operation can also be used to modify the value of the root, even when an element is not being deleted. In the pseudocode, what starts with // is a comment, and A is an array indexed from 1 up to length. Building a heap from an array of n elements can be done by starting with an empty heap and successively inserting each element.
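Since the pseudocode itself is not reproduced here, the down-heap step can be sketched in Python. This translation uses 0-based indexing, unlike the 1-based convention the text mentions, and also shows the faster bottom-up heap construction rather than repeated insertion.

```python
def max_heapify(A, i, heap_length):
    # Sift A[i] down until the max-heap property holds below it.
    while True:
        l, r = 2 * i + 1, 2 * i + 2        # children in 0-based indexing
        largest = i
        if l < heap_length and A[l] > A[largest]:
            largest = l
        if r < heap_length and A[r] > A[largest]:
            largest = r
        if largest == i:
            return                          # in order: fall through, no change
        A[i], A[largest] = A[largest], A[i]
        i = largest

def build_max_heap(A):
    # Heapify every non-leaf node, bottom-up.
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))

A = [4, 1, 3, 2, 16, 9, 10]
build_max_heap(A)
print(A[0])  # 16: the maximum has reached the root
```

After `build_max_heap`, every parent dominates its children, which is exactly the invariant Max-Heapify restores locally.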

24.
Binomial heap
–
In computer science, a binomial heap is a heap similar to a binary heap but which also supports quick merging of two heaps. This is achieved by using a special tree structure. It is important as an implementation of the mergeable heap abstract data type. A binomial tree of order k has 2^k nodes and height k. Because of its structure, a binomial tree of order k can be constructed trivially from two trees of order k−1 by attaching one of them as the leftmost child of the root of the other tree. This feature is central to the merge operation of a binomial heap. The name comes from the shape: a binomial tree of order n has C(n, d) nodes at depth d, a binomial coefficient. Each binomial tree in a heap obeys the minimum-heap property, and there can be only one or zero binomial trees for each order. The first property ensures that the root of each binomial tree contains the smallest key in the tree; the second property implies that a binomial heap with n nodes consists of at most log₂ n + 1 binomial trees. In fact, the number and orders of these trees are determined by the number of nodes n. For example, the number 13 is 1101 in binary, 2^3 + 2^2 + 2^0, so an example binomial heap containing 13 nodes with distinct keys consists of three binomial trees with orders 0, 2, and 3. No operation requires random access to the nodes of the binomial trees. As mentioned above, the simplest and most important operation is the merging of two trees of the same order within a binomial heap. Due to the structure of binomial trees, they can be merged trivially: as the root node of one tree is the smallest element within that tree, the tree with the larger root becomes a subtree of the other. This operation is basic to the complete merging of two binomial heaps. The lists of roots of both heaps are traversed simultaneously in a manner similar to that of the merge algorithm. If only one of the heaps contains a tree of order j, this tree is moved to the merged heap; if both heaps contain a tree of order j, the two trees are merged to one tree of order j+1 so that the minimum-heap property is satisfied.
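The pairwise linking of two equal-order trees described above can be sketched as follows. The node layout (children kept highest-order-first) is an illustrative convention, not prescribed by the text.

```python
class BinomialTree:
    def __init__(self, key):
        self.key = key
        self.order = 0
        self.children = []   # child subtrees, highest order first

def link(a, b):
    # Merge two binomial trees of the same order k into one of order k+1,
    # keeping the minimum-heap property: the smaller root becomes the parent.
    assert a.order == b.order
    if b.key < a.key:
        a, b = b, a
    a.children.insert(0, b)  # b becomes the leftmost child of a
    a.order += 1
    return a

t = link(link(BinomialTree(3), BinomialTree(1)),
         link(BinomialTree(2), BinomialTree(4)))
print(t.key, t.order)  # 1 2
```

Linking doubles the node count each time, which is why a tree of order k has exactly 2^k nodes.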

25.
Fibonacci heap
–
In computer science, a Fibonacci heap is a data structure for priority queue operations, consisting of a collection of heap-ordered trees. It has a better amortized running time than many other priority queue data structures, including the binary heap and the binomial heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. They named Fibonacci heaps after the Fibonacci numbers, which are used in their running time analysis. For the Fibonacci heap, the find-minimum operation takes constant amortized time. The insert and decrease-key operations also work in constant amortized time; deleting an element works in O(log n) amortized time, where n is the size of the heap. This means that starting from an empty data structure, any sequence of a insert and decrease-key operations and b delete operations would take O(a + b log n) worst-case time; in a binary or binomial heap such a sequence of operations would take O((a + b) log n) time. A Fibonacci heap is thus better than a binary or binomial heap when b is smaller than a by a non-constant factor. A Fibonacci heap is a collection of trees satisfying the minimum-heap property: the key of a child is always greater than or equal to the key of the parent. This implies that the minimum key is always at the root of one of the trees. Compared with binomial heaps, the structure of a Fibonacci heap is more flexible: the trees do not have a prescribed shape, and in the extreme case the heap can have every element in a separate tree. This flexibility allows some operations to be executed in a lazy manner; for example, merging heaps is done simply by concatenating the two lists of trees, and the operation decrease key sometimes cuts a node from its parent and forms a new tree. However, at some point order needs to be introduced to the heap to achieve the desired running time. In particular, degrees of nodes are kept low: every node has degree at most O(log n), and the size of a subtree rooted in a node of degree k is at least F(k+2), where F(k) is the kth Fibonacci number.
This is achieved by the rule that we can cut at most one child of each non-root node; when a second child is cut, the node itself needs to be cut from its parent and becomes the root of a new tree. The number of trees is decreased in the operation delete minimum. As a result of this relaxed structure, some operations can take a long time while others are done very quickly. For the amortized running time analysis we use the potential method: fast operations are charged a little extra time, and this additional time is then later combined with and subtracted from the actual running time of slow operations. The amount of time saved for later use is measured at any given moment by a potential function. The potential of a Fibonacci heap is given by Potential = t + 2m, where t is the number of trees in the Fibonacci heap and m is the number of marked nodes. A node is marked if at least one of its children was cut since this node was made a child of another node. The amortized time for an operation is given by the sum of the actual time and c times the difference in potential, where c is a constant.

26.
Leftist tree
–
In computer science, a leftist tree or leftist heap is a priority queue implemented with a variant of a binary heap. Every node has an s-value which is the distance to the nearest leaf. In contrast to a binary heap, a leftist tree attempts to be very unbalanced: in addition to the heap property, leftist trees are maintained so that the right descendant of each node has the lower s-value. The height-biased leftist tree was invented by Clark Allan Crane; the name comes from the fact that the left subtree is usually taller than the right subtree. When inserting a new node into a tree, a new one-node tree is created and merged into the existing tree. To delete a minimum item, we remove the root and merge its left and right subtrees. Both these operations take O(log n) time. For insertions, this is slower than binomial heaps, which support insertion in amortized constant time. Leftist trees are advantageous because of their ability to merge quickly, compared to binary heaps which take Θ(n). In almost all cases, the merging of skew heaps has better performance; however, merging leftist heaps has worst-case O(log n) complexity while merging skew heaps has only amortized O(log n) complexity. The usual leftist tree is a height-biased leftist tree; however, other biases can exist, such as in the weight-biased leftist tree. The s-value of a node is the distance from that node to the nearest leaf of the extended binary representation of the tree. The extended representation fills out the tree so that each node has 2 children; the minimum distances to these leaves are marked in the diagram. Thus the s-value of 4 is 2, since the closest leaf is that of 8, if 8 were extended; the s-value of 5 is 1, since its extended representation would have one leaf itself. Merging two nodes depends on whether the tree is a min or max height-biased leftist tree. For a min height-biased leftist tree, set the higher valued node as the child of the lower valued node.
If the lower valued node already has a right child, then merge the higher valued node with the sub-tree rooted by the right child of the lower valued node. After merging, the s-value of the lower valued node must be updated. Now check if the lower valued node has a left child; if it does not, then move the right child to the left. If it does have a left child, then the child with the highest s-value should go on the left. Initializing a height-biased leftist tree is primarily done in one of two ways; the first is to merge each node one at a time into one HBLT.
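The min height-biased merge described above can be sketched as follows. This is an illustrative sketch: it uses the common convention that a missing subtree has s-value 0, so every real node has s-value at least 1.

```python
class LNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None
        self.s = 1   # s-value of a node with no children

def merge(a, b):
    # Min height-biased leftist merge.
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a                        # a now holds the smaller root
    a.right = merge(a.right, b)            # always merge down the right spine
    if a.left is None or a.left.s < a.right.s:
        a.left, a.right = a.right, a.left  # higher s-value goes on the left
    a.s = 1 + (a.right.s if a.right else 0)
    return a

def insert(root, key):
    return merge(root, LNode(key))         # insertion is a one-node merge

def delete_min(root):
    return root.key, merge(root.left, root.right)

root = None
for k in [5, 3, 8, 1]:
    root = insert(root, k)
print(delete_min(root)[0])  # 1
```

Because only the right spine is ever traversed, and its length is bounded by the s-value, merging is logarithmic rather than linear.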

27.
Skew heap
–
A skew heap is a heap data structure implemented as a binary tree. Skew heaps are advantageous because of their ability to merge more quickly than binary heaps. In contrast with binary heaps, there are no structural constraints; only two conditions must be satisfied: the general heap order must be enforced, and every operation on two skew heaps must be done using a special skew heap merge. A skew heap is thus a form of leftist heap which attempts to maintain balance by unconditionally swapping all nodes in the merge path when merging two heaps. With no structural constraints, it may seem that a skew heap would be horribly inefficient; however, amortized analysis shows its operations run in logarithmic amortized time, and the result of skew merging two skew heaps sh1 and sh2 is also a skew heap. When two skew heaps are to be merged, we can use a process similar to the merge of two leftist heaps. Compare the roots of the two heaps; let p be the heap with the smaller root, and q be the other heap. Let r be the name of the resulting new heap; let the root of r be the root of p, and let r's right subtree be p's left subtree. Now compute r's left subtree by recursively merging p's right subtree with q. Alternatively, there is a non-recursive approach which is more wordy and does require some sorting at the outset. Split each heap into subtrees by cutting every rightmost path; this will result in a set of trees in which the root either has only a left child or no children at all. Sort the subtrees in ascending order based on the value of the root node of each subtree. While there are still multiple subtrees, iteratively recombine the last two: if the root of the second-to-last subtree has a left child, swap it to be the right child, then link the root of the last subtree as the left child of the second-to-last subtree. Adding a value to a skew heap is like merging a tree with one node together with the original tree. Removing the first value in a heap can be accomplished by removing the root and merging its child subtrees. In many functional languages, skew heaps become extremely simple to implement.
Skew heaps were introduced by Sleator and Tarjan.

28.
Van Emde Boas tree
–
A Van Emde Boas tree, also known as a vEB tree, is a tree data structure which implements an associative array with m-bit integer keys. It performs all operations in O(log m) time, or equivalently in O(log log M) time, where M = 2^m is the size of the key universe. M is not to be confused with the actual number of elements stored in the tree, by which the performance of other tree data structures is often measured. The vEB tree has good space efficiency when it contains a large number of elements. It was invented by a team led by Dutch computer scientist Peter van Emde Boas in 1975. The operations FindMin and FindMax, which return the smallest and largest value in the tree, both run in O(1) time, since the minimum and maximum element are stored as attributes in each tree. For the sake of simplicity, let log2 m = k for some integer k. A vEB tree T over the universe {0, ..., M−1} has a root node that stores an array T.children of length √M. T.children[i] is a pointer to a vEB tree that is responsible for the values {i√M, ..., (i+1)√M − 1}. Additionally, T stores two values T.min and T.max as well as an auxiliary vEB tree T.aux. Data is stored in a vEB tree as follows: the smallest value currently in the tree is stored in T.min and the largest value in T.max. Note that T.min is not stored anywhere else in the vEB tree, while T.max is. If T is empty then we use the convention that T.max = −1 and T.min = M. Any other value x is stored in the subtree T.children[i], where i = ⌊x/√M⌋. The auxiliary tree T.aux keeps track of which children are non-empty, so T.aux contains the value j if and only if T.children[j] is non-empty. The operation FindNext(T, x) that searches for the successor of an element x in a vEB tree proceeds as follows. If x < T.min then the search is complete, and the answer is T.min. If x ≥ T.max then the next element does not exist; return M. Otherwise, let i = ⌊x/√M⌋. If x < T.children[i].max then the value being searched for is contained in T.children[i], so the search proceeds recursively in T.children[i]. Otherwise, we search for the value i in T.aux. This gives us the index j of the first subtree that contains an element larger than x, and the algorithm returns T.children[j].min. The element found on the children level needs to be composed with the high bits to form a complete next element. 
This gives a recurrence for the running time of T(m) = T(m/2) + O(1), which resolves to O(log m) = O(log log M). The call Insert(T, x) that inserts a value x into a vEB tree T operates as follows. If T is empty then we set T.min = T.max = x and we are done. Otherwise, if x < T.min then we insert T.min into the subtree i responsible for T.min and then set T.min = x. If T.children[i] was previously empty, then we also insert i into T.aux. Otherwise, if x > T.max then we insert x into the subtree i responsible for x and then set T.max = x
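The Insert and FindNext logic above can be sketched as a simplified Python class. This is a sketch under simplifying assumptions: `cluster` and `summary` play the roles of T.children and T.aux, an empty tree uses min = max = None instead of the (M, −1) convention, and successor returns None rather than M when no next element exists:

```python
class VEB:
    """Simplified van Emde Boas tree over {0, ..., u-1}, u a power of two."""
    def __init__(self, u):
        self.u = u
        self.min = None
        self.max = None
        if u > 2:
            k = u.bit_length() - 1             # u = 2^k
            self.lo_bits = k // 2
            self.lo_size = 1 << self.lo_bits   # universe of each cluster
            hi = 1 << (k - self.lo_bits)       # number of clusters
            self.summary = VEB(hi)             # tracks non-empty clusters
            self.cluster = [VEB(self.lo_size) for _ in range(hi)]

    def _high(self, x):
        return x >> self.lo_bits

    def _low(self, x):
        return x & (self.lo_size - 1)

    def insert(self, x):
        if self.min is None:
            self.min = self.max = x            # min is stored nowhere else
            return
        if x < self.min:
            self.min, x = x, self.min          # push the old min down
        if self.u > 2:
            h, l = self._high(x), self._low(x)
            if self.cluster[h].min is None:    # cluster was empty, so
                self.summary.insert(h)         # record it in the summary
            self.cluster[h].insert(l)
        if x > self.max:
            self.max = x

    def successor(self, x):
        """Smallest stored value greater than x, or None."""
        if self.u == 2:
            return 1 if x == 0 and self.max == 1 else None
        if self.min is not None and x < self.min:
            return self.min
        h, l = self._high(x), self._low(x)
        if self.cluster[h].max is not None and l < self.cluster[h].max:
            return (h << self.lo_bits) | self.cluster[h].successor(l)
        nh = self.summary.successor(h)         # next non-empty cluster
        if nh is None:
            return None
        return (nh << self.lo_bits) | self.cluster[nh].min
```

Note how successor recurses either into one cluster or into the summary, never both, which is what yields the T(m) = T(m/2) + O(1) recurrence.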

29.
Trie
–
Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated. All the descendants of a node have a common prefix of the string associated with that node. Values are not necessarily associated with every node; rather, values tend only to be associated with leaves, and with some inner nodes that correspond to keys of interest. For the space-optimized presentation of prefix tree, see compact prefix tree. In the example shown, keys are listed in the nodes and values below them; each complete English word has an integer value associated with it. A trie can be seen as a deterministic finite automaton. Each finite language is generated by a trie automaton, and each trie can be compressed into a deterministic acyclic finite state automaton. Though tries are usually keyed by character strings, they need not be; the same algorithms can be adapted to serve similar functions on ordered lists of any construct, e.g. permutations on a list of digits or shapes. In particular, a bitwise trie is keyed on the individual bits making up any fixed-length binary datum. Tries were first described by René de la Briandais in 1959. The term trie was coined two years later by Edward Fredkin, who pronounces it /ˈtriː/, after the middle syllable of retrieval; however, other authors pronounce it /ˈtraɪ/, in an attempt to distinguish it verbally from tree. As discussed below, a trie has a number of advantages over binary search trees. A trie can also be used to replace a hash table, over which it has the following advantages. Looking up data in a trie is faster in the worst case, O(m) time, where m is the length of the search string. An imperfect hash table can have key collisions, a key collision being the hash function mapping of different keys to the same position in the hash table; the worst-case lookup speed in an imperfect hash table is O(N) time. 
There are no collisions of different keys in a trie. Buckets in a trie, which are analogous to hash table buckets that store key collisions, are necessary only if a single key is associated with more than one value. There is no need to provide a hash function or to change hash functions as more keys are added to a trie. A trie can provide an alphabetical ordering of the entries by key. Some keys, such as floating point numbers, can lead to long chains and prefixes that are not particularly meaningful; nevertheless, a bitwise trie can handle standard IEEE single and double format floating point numbers. A common application of a trie is storing a predictive text or autocomplete dictionary, such as found on a mobile telephone
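The structure described above, with positions defining keys and values attached only to nodes of interest, can be sketched in a few lines of Python; the names are illustrative, and values are integers as in the article's example:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # one child per character
        self.value = None    # set only on nodes that end a key of interest

def trie_insert(root, key, value):
    node = root
    for ch in key:
        node = node.children.setdefault(ch, TrieNode())
    node.value = value       # the position in the tree defines the key

def trie_lookup(root, key):
    node = root
    for ch in key:           # worst case O(m) for a key of length m
        node = node.children.get(ch)
        if node is None:
            return None
    return node.value
```

A lookup of a prefix that is not itself a stored word (such as "t" below) reaches a node with no value, illustrating that not every node carries one.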

30.
Ctrie
–
A concurrent hash-trie or Ctrie is a concurrent thread-safe lock-free implementation of a hash array mapped trie. It is used to implement the concurrent map abstraction, has particularly scalable concurrent insert and remove operations, and is memory-efficient. It is the first known concurrent data structure that supports O(1), atomic, lock-free snapshots. The Ctrie data structure is a non-blocking concurrent hash array mapped trie based on single-word compare-and-swap instructions in a shared-memory system. It supports concurrent lookup, insert and remove operations. Just like the hash array mapped trie, it uses the entire 32-bit space for hash values, thus having low risk of hashcode collisions. Each node may branch to up to 32 sub-tries; to conserve memory, each node contains a 32-bit bitmap where each bit indicates the presence of a branch, followed by an array of length equal to the Hamming weight of the bitmap. Keys are inserted by doing an atomic compare-and-swap operation on the node that needs to be modified, to ensure that updates are done independently and in a proper order. The figure above illustrates the Ctrie insert operation. Trie A is empty; an atomic CAS instruction is used to swap the old node C1 with the new version of C1 which has the new key k1. If the CAS is not successful, the operation is restarted. If the CAS is successful, we obtain the trie B, and this procedure is repeated when a new key k2 is added. The Ctrie is defined by the pointer to the root indirection node. The following types of nodes are defined for the Ctrie:

    structure INode
    structure CNode
    Branch: INode | SNode
    structure SNode

A C-node is a branching node. It typically contains up to 32 branches, so W above is 5. Each branch may either be a key-value pair (an S-node) or another I-node. To avoid wasting 32 entries in the branching array when some branches may be empty, a bitmap is used to denote which entries are full. 
If the relevant bit is set in the bitmap, the branch is present, and its position in the branch array is also computed. Note that the insert operation above is tail-recursive, so it can be rewritten as a while loop. Other operations are described in detail in the original paper on Ctries. The data structure has been proven to be correct: Ctrie operations have been shown to have the atomicity, linearizability and lock-freedom properties. The lookup operation can be modified to guarantee wait-freedom. In lookup performance Ctries are comparable with concurrent hash tables; however, they are far more scalable than most concurrent hash tables where insertions are concerned. Most concurrent hash tables are bad at conserving memory: when the keys are removed from the hash table, the underlying array is not shrunk. Ctries have the property that the allocated memory is always a function of only the current number of keys in the data structure. Ctries have logarithmic complexity bounds on the operations, albeit with a low constant factor due to the high branching level
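The bitmap-and-Hamming-weight indexing scheme described above can be sketched in Python; this helper is a hypothetical illustration, not code from the Ctrie paper:

```python
def branch_index(bitmap, chunk):
    """Locate a branch for a 5-bit hash chunk (0..31) in a compressed
    branch array, or return None if the branch is absent."""
    flag = 1 << chunk
    if not (bitmap & flag):
        return None                        # bit not set: no such branch
    # the Hamming weight of the bits below the flag gives the position
    # in the compressed branch array
    return bin(bitmap & (flag - 1)).count("1")
```

With branches present at chunks 1, 4 and 9, the array has length 3 (the Hamming weight of the bitmap) and the three branches occupy positions 0, 1 and 2.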

31.
Radix tree
–
In computer science, a radix tree is a data structure that represents a space-optimized trie in which each node that is the only child is merged with its parent. The result is that the number of children of every internal node is at most the radix r of the radix tree, where r is a positive integer and a power of 2. Unlike in regular tries, edges can be labeled with sequences of elements as well as single elements; this makes radix trees much more efficient for small sets and for sets of strings that share long prefixes. Unlike regular trees, the key at each node is compared chunk-of-bits by chunk-of-bits. When r is 2, the radix trie is binary, which minimizes sparseness at the expense of maximizing trie depth, i.e. maximizing up to conflation of nondiverging bit-strings in the key. When r is an integer power of 2 greater than or equal to 4, the radix trie is an r-ary trie, which lessens the depth of the radix trie at the expense of potential sparseness. As an optimization, edge labels can be stored in constant size by using two pointers into a string. Radix trees are useful for constructing associative arrays with keys that can be expressed as strings. They are also used for inverted indexes of text documents in information retrieval. Radix trees support insertion, deletion, and searching operations. Insertion adds a new string to the trie while trying to minimize the amount of data stored; deletion removes a string from the trie. Searching operations include exact lookup, find predecessor, and find successor. All of these operations are O(k), where k is the maximum length of all strings in the set, and length is measured in the quantity of bits equal to the radix of the radix trie. The lookup operation determines if a string exists in a trie; most operations modify this approach in some way to handle their specific tasks. For instance, the node where a string terminates may be of importance. This operation is similar to tries except that some edges consume multiple elements. 
The following pseudocode assumes that these classes exist:

    Edge
        Node targetNode
        string label
    Node
        Array of Edges edges
        function isLeaf()
    function lookup(string x)

To insert a string, we search the tree until we can make no further progress, then either add a new outgoing edge labeled with the remaining elements of the input string, or split an existing edge that shares a prefix with them. This splitting step ensures that no node has more children than there are possible string elements. Several cases of insertion are shown below, though more may exist. Note that r simply represents the root. It is assumed that edges can be labelled with empty strings to terminate strings where necessary. To delete a string x from a tree, we first locate the leaf representing x. Then, assuming x exists, we remove the leaf node. If the parent of our leaf node has only one other child, then that child's incoming label is appended to the parent's incoming label. Find all strings with common prefix: returns an array of strings which begin with the same prefix. Find predecessor: locates the largest string less than a given string, by lexicographic order
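The lookup and edge-splitting insertion described above can be sketched in Python; the dictionary-of-edges layout below is an illustrative choice, not a direct transcription of the pseudocode's classes:

```python
class RadixNode:
    def __init__(self):
        # first element of each edge label -> (label, child node)
        self.edges = {}
        self.is_leaf = False    # True if a stored string terminates here

def radix_insert(node, word):
    while True:
        if not word:
            node.is_leaf = True
            return
        if word[0] not in node.edges:
            leaf = RadixNode()
            leaf.is_leaf = True
            node.edges[word[0]] = (word, leaf)   # new edge, rest of word
            return
        label, child = node.edges[word[0]]
        # length of the common prefix of the edge label and the word
        i = 0
        while i < len(label) and i < len(word) and label[i] == word[i]:
            i += 1
        if i == len(label):                      # edge fully consumed
            node, word = child, word[i:]
            continue
        mid = RadixNode()                        # split the edge at i
        mid.edges[label[i]] = (label[i:], child)
        node.edges[word[0]] = (label[:i], mid)
        node, word = mid, word[i:]

def radix_lookup(node, word):
    while word:
        if word[0] not in node.edges:
            return False
        label, child = node.edges[word[0]]
        if not word.startswith(label):    # edges consume multiple elements
            return False
        node, word = child, word[len(label):]
    return node.is_leaf
```

Inserting "test", "team" and "toast" exercises the splitting case twice: first "test" splits at "te", then the root edge splits again at "t".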

32.
Suffix tree
–
In computer science, a suffix tree is a compressed trie containing all the suffixes of the given text as their keys and positions in the text as their values. Suffix trees allow particularly fast implementations of many important string operations. These speedups come at a cost: storing a string's suffix tree typically requires significantly more space than storing the string itself. The concept was first introduced by Weiner, which Donald Knuth subsequently characterized as "Algorithm of the Year 1973". The construction was greatly simplified by McCreight, and also by Ukkonen. Ukkonen provided the first online construction of suffix trees, now known as Ukkonen's algorithm. These algorithms are all linear-time for a constant-size alphabet, and have worst-case running time of O(n log n) in general. Farach gave the first suffix tree construction algorithm that is optimal for all alphabets; in particular, this is the first linear-time algorithm for strings drawn from an alphabet of integers in a polynomial range. Farach's algorithm has become the basis for new algorithms for constructing suffix trees and suffix arrays, for example, in external memory, compressed, or succinct. The suffix tree for the string S of length n is defined as a tree such that: except for the root, every internal node has at least two children; each edge is labeled with a non-empty substring of S; and no two edges starting out of a node can have string-labels beginning with the same character. The string obtained by concatenating all the string-labels found on the path from the root to leaf i spells out suffix S[i..n]. Since such a tree does not exist for all strings, S is padded with a terminal symbol not seen in the string. This ensures that no suffix is a prefix of another. Since all internal non-root nodes are branching, there can be at most n − 1 such nodes, and n + (n − 1) + 1 = 2n nodes in total. 
Suffix links are a key feature for older linear-time construction algorithms, although most newer algorithms, which are based on Farach's algorithm, dispense with them. In a complete suffix tree, all internal non-root nodes have a suffix link to another internal node. If the path from the root to a node spells the string χα, where χ is a single character and α is a string (possibly empty), the node has a suffix link to the internal node representing α. See for example the suffix link from the node for ANA to the node for NA in the figure above. Suffix links are also used in some algorithms running on the tree. A generalized suffix tree is a suffix tree made for a set of words instead of a single word. It represents all suffixes from this set of words; each word must be terminated by a different termination symbol or word. A suffix tree for a string S of length n can be built in Θ(n) time if the letters come from an alphabet of integers in a polynomial range. For larger alphabets, the running time is dominated by first sorting the letters to bring them into a range of size O(n); in general, this takes O(n log n) time. The costs below are given under the assumption that the alphabet is constant. Assuming such a suffix tree has been built, one can search for strings: check if a string P of length m is a substring in O(m) time
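The O(m) substring check can be illustrated with an uncompressed suffix trie in Python. This is a deliberate simplification: the naive structure below takes quadratic space and build time, whereas a true suffix tree compresses single-child chains and is built in linear time by the algorithms above; the query logic, following edges character by character, is the same:

```python
def build_suffix_trie(s):
    """Naive suffix trie: one root-to-leaf path per suffix of s."""
    s += "$"                      # terminal symbol not seen in the string
    root = {}
    for i in range(len(s)):       # insert suffix s[i..n]
        node = root
        for ch in s[i:]:
            node = node.setdefault(ch, {})
    return root

def is_substring(root, p):
    """Check if p occurs in the indexed text, in O(m) time."""
    node = root
    for ch in p:
        if ch not in node:
            return False
        node = node[ch]
    return True
```

Every substring of the text is a prefix of some suffix, which is why following p from the root suffices.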

33.
X-fast trie
–
In computer science, an x-fast trie is a data structure for storing integers from a bounded domain. It supports exact and predecessor or successor queries in time O(log log M), using O(n log M) space, where n is the number of stored values and M is the maximum value in the domain. The structure was proposed by Dan Willard in 1982, along with the more complicated y-fast trie, as a way to improve the space usage of van Emde Boas trees. An x-fast trie is a bitwise trie, a binary tree where each subtree stores values whose binary representations start with a common prefix. Each internal node is labeled with the common prefix of the values in its subtree; typically, the left child adds a 0 to the end of the prefix, while the right child adds a 1. The binary representation of an integer between 0 and M − 1 uses ⌈log2 M⌉ bits, so the height of the trie is O(log M). All values in the x-fast trie are stored at the leaves. Internal nodes are stored only if they have leaves in their subtree. If an internal node would have no left child, it stores a pointer to the smallest leaf in its right subtree instead, called a descendant pointer. Likewise, if it would have no right child, it stores a pointer to the largest leaf in its left subtree. Each leaf stores a pointer to its predecessor and successor, thereby forming a doubly linked list. Finally, there is a hash table for each level that contains all the nodes on that level. Together, these hash tables form the level-search structure. To guarantee the worst-case query times, these hash tables should use dynamic perfect hashing or cuckoo hashing. The total space usage is O(n log M), since each element has a root-to-leaf path of length O(log M). Like van Emde Boas trees, x-fast tries support the operations of an associative array. To find the successor or predecessor of a key k, we first find Ak, the node in the trie that has the longest common prefix with k. To find Ak, we perform a binary search on the levels. 
We start at level h/2, where h is the height of the trie. On each level, we query the corresponding hash table in the level-search structure with the prefix of k of the right length. If a node with that prefix does not exist, we know that Ak must be at a higher level, and we restrict our search to those; if a node with that prefix does exist, Ak cannot be at a higher level, so we restrict our search to the current and lower levels. Once we find the lowest ancestor of k, we know that it has leaves in only one of its subtrees, therefore its descendant pointer points to the successor or the predecessor of k. Depending on which one we are looking for, we might have to take one step in the linked list to the next or previous leaf
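The binary search over levels can be sketched in Python. As a simplification, the level-search structure is modeled as plain sets of integer prefixes rather than hash tables of nodes, and the function names are illustrative:

```python
def build_levels(keys, w):
    """levels[i] holds every length-i prefix (as an integer) of the keys
    of width w; this stands in for the per-level hash tables."""
    levels = [set() for _ in range(w + 1)]
    for k in keys:
        for i in range(w + 1):
            levels[i].add(k >> (w - i))
    return levels

def lowest_ancestor_level(levels, k, w):
    """Binary search on the levels for the length of the longest common
    prefix of k with any stored key."""
    lo, hi = 0, w
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if (k >> (w - mid)) in levels[mid]:
            lo = mid          # prefix exists: Ak is at this level or lower
        else:
            hi = mid - 1      # prefix missing: Ak must be at a higher level
    return lo
```

Each probe halves the range of candidate levels, giving the O(log log M) query time, since there are O(log M) levels.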