1.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while others focus on the challenges of implementing computation; human–computer interaction, for example, considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity, and algorithms for performing computations have likewise existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his programmable Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953. The first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has since seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.

2.
National Institute of Standards and Technology
–
The National Institute of Standards and Technology (NIST) is a measurement standards laboratory and a non-regulatory agency of the United States Department of Commerce. Its mission is to promote innovation and industrial competitiveness. In 1821, John Quincy Adams declared, "Weights and measures may be ranked among the necessities of life to every individual of human society." From 1830 until 1901, the role of overseeing weights and measures was carried out by the Office of Standard Weights and Measures. President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000; a laboratory site was constructed in Washington, DC, and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures, the Bureau developed instruments for electrical units, and in 1905 a meeting was called that would be the first National Conference on Weights and Measures. Quality standards were developed for products including some types of clothing, automobile brake systems and headlamps, and antifreeze. During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off. Between the wars, Harry Diamond of the Bureau developed a blind-approach radio aircraft landing system. In 1948, financed by the Air Force, the Bureau began design and construction of SEAC, the Standards Eastern Automatic Computer. The computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time, the Standards Western Automatic Computer (SWAC) was built at the Los Angeles office of the NBS and used for research there. A mobile version, DYSEAC, was built for the Signal Corps in 1954. Due to a changing mission, the National Bureau of Standards became the National Institute of Standards and Technology in 1988.
Following 9/11, NIST conducted the official investigation into the collapse of the World Trade Center buildings. NIST had a budget for fiscal year 2007 of about $843.3 million. NIST's 2009 budget was $992 million, and it also received $610 million as part of the American Recovery and Reinvestment Act. NIST employs about 2,900 scientists, engineers, technicians, and support and administrative personnel. About 1,800 NIST associates complement the staff; in addition, NIST partners with 1,400 manufacturing specialists and staff at nearly 350 affiliated centers around the country. NIST publishes Handbook 44, which provides the specifications and tolerances for weighing and measuring devices. The Congress of 1866 made use of the metric system in commerce a legally protected activity through the passage of the Metric Act of 1866. NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado. NIST's activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six. NIST's Boulder laboratories are best known for housing NIST-F1, the atomic clock that serves as the source of the nation's official time. NIST also operates a neutron science user facility, the NIST Center for Neutron Research (NCNR); the NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields.

3.
Binary heap
–
A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues. The binary heap was introduced by J. W. J. Williams in 1964. The heap property states that the key stored in each node is either greater than or equal to, or less than or equal to, the keys in the node's children, according to some total order. Heaps where each parent key is greater than or equal to the child keys are called max-heaps; those where it is less than or equal to the child keys are called min-heaps. Efficient algorithms are known for the two operations needed to implement a priority queue on a heap: inserting an element, and removing the smallest or largest element. Both the insert and remove operations first modify the heap to conform to the shape property; then the heap property is restored by traversing up or down the heap. To add an element to a heap, we perform an up-heap operation by following this algorithm: add the element to the bottom level of the heap; compare the added element with its parent; if they are in the correct order, stop; if not, swap the element with its parent and return to the previous step. As an example of binary heap insertion, say we have a max-heap and we first place the new element 15 in the position marked by the X. The heap property is violated since 15 > 8, so we need to swap the 15 and the 8. After the first swap, the heap property is still violated since 15 > 11, so we need to swap again. There is no need to check the left child after this step: at the start the max-heap was valid, meaning 11 > 5; since 15 > 11 and 11 > 5, transitivity gives 15 > 5. The procedure for deleting the root from the heap and restoring the properties is called down-heap: replace the root of the heap with the last element on the last level; compare the new root with its children; if they are in the correct order, stop; if not, swap the element with one of its children and return to the previous step. So, if we have the same max-heap as before and we remove the 11, the heap property is violated since 8 is greater than 4.
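The up-heap insertion described above can be sketched for an array-backed max-heap in Python. This is a minimal sketch: the helper names and the sample array are our own, chosen so the trace mirrors the 15/8/11 example in the text.

```python
def sift_up(heap, i):
    # Move heap[i] up until its parent's key is >= its own (max-heap property).
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] >= heap[i]:
            break                       # correct order: stop
        heap[parent], heap[i] = heap[i], heap[parent]
        i = parent                      # return to the previous step

def insert(heap, value):
    heap.append(value)                  # add at the bottom level
    sift_up(heap, len(heap) - 1)

heap = [11, 5, 8, 3, 4]                 # a valid max-heap, stored as an array
insert(heap, 15)
print(heap[0])                          # 15 has bubbled up to the root
```

Inserting 15 triggers exactly two swaps (with 8, then with 11), matching the worked example.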
This down-heap functionality is achieved by the Max-Heapify function, defined in pseudocode for an array-backed heap A of length heap_length; note that A is indexed starting at 1, not 0 as is common in many real programming languages. If the children are already in the correct order, the algorithm falls through with no change to the array. The down-heap operation can also be used to modify the value of the root, even when an element is not being deleted. In the pseudocode, text following // is a comment, and A is an array indexed from 1 up to heap_length. Building a heap from an array of n elements can be done by starting with an empty heap and inserting the elements one at a time, which takes O(n log n) time in total; a faster method, due to Floyd, instead sifts down every internal node of the unordered array and builds the heap in O(n) time.
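A sketch of Max-Heapify and Floyd's bottom-up heap construction in Python follows. It uses 0-based indexing rather than the 1-based indexing of the article's pseudocode, and the sample array is our own.

```python
def max_heapify(A, i, heap_length):
    # Sift A[i] down until both children are smaller (0-indexed arrays:
    # the children of i are at 2i+1 and 2i+2).
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < heap_length and A[left] > A[largest]:
        largest = left
    if right < heap_length and A[right] > A[largest]:
        largest = right
    if largest != i:                    # otherwise fall through, no change
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_length)

def build_max_heap(A):
    # Floyd's method: sift down every internal node, O(n) overall.
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))

A = [4, 1, 3, 2, 16, 9, 10]
build_max_heap(A)
print(A[0])                             # the maximum, 16, is at the root
```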

4.
Fibonacci heap
–
In computer science, a Fibonacci heap is a data structure for priority queue operations, consisting of a collection of heap-ordered trees. It has a better amortized running time than many other priority queue data structures, including the binary heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. They named Fibonacci heaps after the Fibonacci numbers, which are used in their running time analysis. For the Fibonacci heap, the find-minimum operation takes constant amortized time. The insert and decrease-key operations also work in constant amortized time, while deleting an element works in O(log n) amortized time, where n is the size of the heap. This means that, starting from an empty data structure, any sequence of a insert and decrease-key operations and b delete operations takes O(a + b log n) worst-case time; in a binary or binomial heap such a sequence of operations would take O((a + b) log n) time. A Fibonacci heap is thus better than a binary or binomial heap when b is smaller than a by a non-constant factor. A Fibonacci heap is a collection of trees satisfying the minimum-heap property: the key of a child is always greater than or equal to the key of its parent. This implies that the minimum key is always at the root of one of the trees. Compared with binomial heaps, the structure of a Fibonacci heap is more flexible: the trees do not have a prescribed shape, and in the extreme case the heap can have every element in a separate tree. This flexibility allows some operations to be executed in a lazy manner; for example, merging heaps is done simply by concatenating the two lists of trees, and the decrease-key operation sometimes cuts a node from its parent and forms a new tree. However, at some point order needs to be introduced to the heap to achieve the desired running time. In particular, degrees of nodes are kept low: every node has degree at most O(log n), and the size of a subtree rooted in a node of degree k is at least Fk+2, the (k+2)-nd Fibonacci number.
This is achieved by the rule that we can cut at most one child of each non-root node; when a second child is cut, the node itself needs to be cut from its parent and becomes the root of a new tree. The number of trees is decreased in the delete-minimum operation. As a result of the relaxed structure, some operations can take a long time while others are done very quickly. For the amortized running time analysis we use the potential method: the time saved by fast operations is later combined and subtracted from the actual running time of slow operations. The amount of saved time available for use is measured at any given moment by a potential function. The potential of a Fibonacci heap is given by Potential = t + 2m, where t is the number of trees in the Fibonacci heap and m is the number of marked nodes. A node is marked if at least one of its children was cut since this node was made a child of another node. The amortized time for an operation is given by the sum of the actual time and c times the difference in potential, where c is some constant.
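The potential-method bookkeeping above can be illustrated with a tiny numeric sketch. The functions and the sample operation trace are our own (unit actual costs are assumed for illustration only):

```python
def potential(trees, marked):
    # Phi = t + 2m for a Fibonacci heap with t trees and m marked nodes.
    return trees + 2 * marked

def amortized(actual, before, after, c=1):
    # Amortized cost = actual cost + c * (Phi_after - Phi_before).
    return actual + c * (potential(*after) - potential(*before))

# Inserting a node adds one single-node tree: say the actual cost is 1 unit
# and the heap had 5 trees and 2 marked nodes. The potential rises by 1,
# so the amortized cost is a constant 2, consistent with O(1) insert.
print(amortized(1, before=(5, 2), after=(6, 2)))
```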

5.
Radix tree
–
In computer science, a radix tree is a data structure that represents a space-optimized trie in which each node that is the only child is merged with its parent. The result is that the number of children of every internal node is at most the radix r of the radix tree, where r is a positive integer. Unlike in regular tries, edges can be labeled with sequences of elements as well as single elements; this makes radix trees much more efficient for small sets and for sets of strings that share long prefixes. Unlike in regular trees, the key at each node is compared chunk-of-bits by chunk-of-bits. When r is 2, the radix trie is binary, which minimizes sparseness at the expense of maximizing trie depth; when r is an integer power of 2 greater than or equal to 4, the radix trie is an r-ary trie, which lessens the depth of the radix trie at the expense of potential sparseness. As an optimization, edge labels can be stored in constant size by using two pointers to a string. Radix trees are useful for constructing associative arrays with keys that can be expressed as strings. They are also used for inverted indexes of text documents in information retrieval. Radix trees support insertion, deletion, and searching operations. Insertion adds a new string to the trie while trying to minimize the amount of data stored; deletion removes a string from the trie. Searching operations include exact lookup, find predecessor, and find successor. All of these operations are O(k), where k is the maximum length of all strings in the set and length is measured in chunks of bits equal to the radix of the radix trie. The lookup operation determines if a string exists in a trie; most operations modify this approach in some way to handle their specific tasks. For instance, the node where a string terminates may be of importance. This operation is similar to lookup in tries, except that some edges consume multiple elements.
The pseudocode for these operations assumes that two classes exist: an Edge, which holds a target Node and a string label, and a Node, which holds an array of outgoing Edges and provides the functions isLeaf() and lookup(). To insert a string, we search the tree until we can make no further progress, then either add a new outgoing edge or split an existing edge. This splitting step ensures that no node has more children than there are possible string elements. Several cases of insertion exist, though more may arise in practice; in the diagrams, r simply represents the root, and it is assumed that edges can be labelled with empty strings to terminate strings where necessary. To delete a string x from a tree, we first locate the leaf representing x. Then, assuming x exists, we remove the leaf node. If the parent of our leaf node has one other child, then that child's incoming label is appended to the parent's incoming label. The remaining operations follow the same pattern: find all strings with common prefix returns an array of strings which begin with the same prefix, and find predecessor locates the largest string less than a given string, by lexicographic order.
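The lookup operation, with edges that may consume several characters at once, can be sketched in Python. The class layout (a dict of label-to-child edges) and the two stored sample words are our own illustrative choices, not the article's pseudocode classes.

```python
class Node:
    def __init__(self, is_leaf=False):
        self.edges = {}           # maps an edge label (a string) to a child Node
        self.is_leaf = is_leaf    # True if a stored string terminates here

def lookup(node, key):
    # Follow the edge whose label is a prefix of the remaining key.
    # Unlike a plain trie, one edge may consume multiple characters.
    if not key:
        return node.is_leaf
    for label, child in node.edges.items():
        if key.startswith(label):
            return lookup(child, key[len(label):])
    return False

# Hypothetical tree storing "test" and "team": the shared prefix "te"
# is a single merged edge, as in a space-optimized trie.
root = Node()
te = Node()
root.edges["te"] = te
te.edges["st"] = Node(is_leaf=True)
te.edges["am"] = Node(is_leaf=True)
print(lookup(root, "test"), lookup(root, "ten"))   # True False
```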

6.
Leftist tree
–
In computer science, a leftist tree or leftist heap is a priority queue implemented with a variant of a binary heap. Every node has an s-value, which is its distance to the nearest leaf. In contrast to a binary heap, a leftist tree attempts to be very unbalanced. In addition to the heap property, leftist trees are maintained so that the right descendant of each node has the lower s-value. The height-biased leftist tree was invented by Clark Allan Crane; the name comes from the fact that the left subtree is usually taller than the right subtree. When inserting a new node into a tree, a new one-node tree is created and merged into the existing tree. To delete the minimum item, we remove the root and merge its left and right subtrees. Both these operations take O(log n) time. For insertions, this is slower than binomial heaps, which support insertion in amortized constant time. Leftist trees are advantageous because of their ability to merge quickly, compared to binary heaps, whose merge takes Θ(n). In almost all cases, the merging of skew heaps has better performance; however, merging leftist heaps has worst-case O(log n) complexity while merging skew heaps has only amortized O(log n) complexity. The usual leftist tree is a height-biased leftist tree; however, other biases can exist, such as in the weight-biased leftist tree. The s-value of a node is the distance from that node to the nearest leaf of the extended binary representation of the tree. The extended representation fills out the tree so that each node has 2 children; the minimum distance to these leaves is marked in the diagram. Thus the s-value of 4 is 2, since the closest leaf is that of 8, if 8 were extended; the s-value of 5 is 1, since its extended representation would have one leaf itself. Merging two nodes depends on whether the tree is a min or max height-biased leftist tree. For a min height-biased leftist tree, set the higher-valued node as the right child of the lower-valued node.
If the lower-valued node already has a right child, then merge the higher-valued node with the sub-tree rooted at the right child of the lower-valued node. After merging, the s-value of the lower-valued node must be updated. Now check if the lower-valued node has a left child; if it does not, move the right child to the left. If it does have a left child, then the child with the higher s-value should go on the left. Initializing a height-biased leftist tree is primarily done in one of two ways. The first is to merge each node one at a time into one HBLT; the second is to place all nodes in a queue and repeatedly dequeue two trees, merge them, and enqueue the result.
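The min-HBLT merge steps above can be sketched in Python. This is a minimal sketch; the node class, helper names, and sample keys are our own.

```python
class LNode:
    def __init__(self, key):
        self.key, self.left, self.right, self.s = key, None, None, 1

def s(node):
    # s-value of an empty subtree is 0 (the extended-representation leaf).
    return node.s if node else 0

def merge(a, b):
    # Merge two min height-biased leftist heaps, returning the new root.
    if a is None: return b
    if b is None: return a
    if b.key < a.key:               # keep the lower-valued node as the root
        a, b = b, a
    a.right = merge(a.right, b)     # recurse down the right spine
    if s(a.left) < s(a.right):      # child with the higher s-value goes left
        a.left, a.right = a.right, a.left
    a.s = s(a.right) + 1            # update the s-value after merging
    return a

def insert(root, key):
    return merge(root, LNode(key))  # insertion is a merge with a one-node tree

root = None
for k in [5, 1, 7, 3]:
    root = insert(root, k)
print(root.key)                     # the minimum, 1, sits at the root
root = merge(root.left, root.right) # delete-min: drop the root, merge subtrees
print(root.key)                     # the next minimum, 3
```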

7.
T-tree
–
In computer science, a T-tree is a type of binary tree data structure that is used by main-memory databases, such as Datablitz, eXtremeDB, MySQL Cluster, Oracle TimesTen and MobileLite. T-trees seek to gain the performance benefits of in-memory tree structures such as AVL trees while avoiding the large storage space overhead that is common to them. T-trees do not keep copies of the indexed data fields within the index tree nodes themselves; instead, they take advantage of the fact that the data is always in main memory together with the index, so the nodes merely contain pointers to the actual data fields. The "T" in T-tree refers to the shape of the node data structures in the original paper that first described this type of index. Although T-trees seem to be widely used for main-memory databases, recent research indicates that they do not actually perform better than B-trees on modern hardware (Rao and Ross, "Cache conscious indexing for decision-support in main memory", Proceedings of the 25th International Conference on Very Large Databases; Kim, Shim and Lee, "Cache conscious trees: How do they perform on contemporary commodity microprocessors?", Proceedings of the 5th International Conference on Computational Science and Its Applications). The main reason seems to be that the traditional assumption of memory references having uniform cost is no longer valid given the current speed gap between cache access and main memory access. A T-tree node usually consists of pointers to the parent node, the left and right child nodes, an ordered array of data pointers, and some extra control data. Nodes with two subtrees are called internal nodes, nodes without subtrees are called leaf nodes, and nodes with only one subtree are named half-leaf nodes. A node is called the bounding node for a value if the value is between the node's current minimum and maximum values, inclusively.
For each internal node, leaf or half-leaf nodes exist that contain the predecessor of its smallest data value (its greatest lower bound) and the successor of its largest data value. Leaf and half-leaf nodes can contain any number of data elements, from one to the maximum size of the data array. To search for a value: if the search value is less than the minimum value of the current node, continue the search in its left subtree (the search fails if there is no left subtree); if the search value is greater than the maximum value of the current node, continue the search in its right subtree (the search fails if there is no right subtree); otherwise, look for the value in the current node's data array (the search fails if it is not found there). To insert a new value, first search for a bounding node for it. If a bounding node is found and its data array is full, remove the minimum value from the array and insert the new value; then proceed to the node holding the greatest lower bound for the node that the new value was inserted to, and if the removed minimum value still fits in there, add it as that node's new value. If no bounding node was found, insert the value into the last node searched if it fits into it.
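The search procedure above can be sketched in Python. This is a minimal sketch: the node class and the sample values are our own, and each node's data array is kept sorted.

```python
class TNode:
    def __init__(self, data):
        self.data = sorted(data)        # ordered array of values in this node
        self.left = self.right = None

def search(node, value):
    # Descend to the bounding node for the value, then scan its data array.
    while node is not None:
        if value < node.data[0]:
            node = node.left            # fails (returns False) if no left subtree
        elif value > node.data[-1]:
            node = node.right           # fails if no right subtree
        else:
            return value in node.data   # bounding node found: check the array
    return False

root = TNode([10, 20, 30])
root.left = TNode([1, 5])
root.right = TNode([40, 50])
print(search(root, 5), search(root, 25))   # True False
```

Note how the search for 25 stops at the root: the root is its bounding node (10 ≤ 25 ≤ 30), so no subtree needs to be visited.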

8.
Interval tree
–
In computer science, an interval tree is a tree data structure to hold intervals. Specifically, it allows one to efficiently find all intervals that overlap with any given interval or point. It is often used for windowing queries, for instance, to find all roads on a computerized map inside a rectangular viewport. A similar data structure is the segment tree. The trivial solution is to visit each interval and test whether it intersects the given point or interval, which requires O(n) time, where n is the number of intervals in the collection. Interval trees have a query time of O(log n + m), where m is the number of intervals reported, and an initial creation time of O(n log n). After creation, interval trees may be dynamic, allowing efficient insertion and deletion of an interval in O(log n) time. If the endpoints of intervals are within a small integer range, faster data structures exist with preprocessing time O(n). In a simple case, the intervals do not overlap and they can be inserted into a simple binary search tree. A naive approach might be to build two parallel trees, one ordered by the beginning point, and one ordered by the ending point of each interval. This allows discarding half of each tree in O(log n) time, but the results must be merged, requiring O(n) time. This gives us queries in O(log n + n) = O(n), which is no better than brute force. This article describes two alternative designs for an interval tree, dubbed the centered interval tree and the augmented tree. For the centered interval tree, queries require O(log n + m) time, construction requires O(n log n) time, and storage requires O(n) space. Given a set of n intervals on the number line, we want to construct a data structure so that we can efficiently retrieve all intervals overlapping another interval or point. We start by taking the entire range of all the intervals and dividing it in half at a center point; intervals entirely to the left of the center point form S_left, those entirely to the right form S_right, and those overlapping it form S_center. The intervals in S_left and S_right are recursively divided in the same manner until there are no intervals left. The intervals in S_center that overlap the center point are stored in a data structure linked to the node in the interval tree.
This data structure consists of two lists, one containing all the intervals sorted by their beginning points, and another containing all the intervals sorted by their ending points. The first query task is to find all intervals in the tree that overlap a given point x. The tree is walked with a recursive algorithm similar to the one used to traverse a traditional binary tree, but with extra affordance for the intervals overlapping the center point at each node. For each tree node, x is compared to x_center, the midpoint used in that node's construction above; if x is less than x_center, the leftmost set of intervals, S_left, is considered.
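A compact Python sketch of the centered interval tree and its point (stabbing) query follows. The class layout, the choice of the median endpoint as center, and the sample intervals are our own illustrative assumptions; intervals are closed and represented as (start, end) tuples.

```python
class IntervalNode:
    def __init__(self, intervals):
        points = sorted(p for iv in intervals for p in iv)
        self.center = points[len(points) // 2]   # x_center for this node
        s_center = [iv for iv in intervals if iv[0] <= self.center <= iv[1]]
        left = [iv for iv in intervals if iv[1] < self.center]
        right = [iv for iv in intervals if iv[0] > self.center]
        # The two lists attached to the node: sorted by beginning
        # point and by ending point, as described in the text.
        self.by_start = sorted(s_center)
        self.by_end = sorted(s_center, key=lambda iv: iv[1])
        self.left = IntervalNode(left) if left else None
        self.right = IntervalNode(right) if right else None

def stab(node, x):
    # Return all intervals containing the point x.
    if node is None:
        return []
    if x < node.center:
        # Every interval in S_center ends at or after center > x, so it
        # overlaps x exactly when it begins at or before x.
        hits = [iv for iv in node.by_start if iv[0] <= x]
        return hits + stab(node.left, x)
    else:
        hits = [iv for iv in node.by_end if iv[1] >= x]
        return hits + stab(node.right, x)

tree = IntervalNode([(1, 5), (4, 8), (7, 10)])
hits = stab(tree, 4)        # contains (4, 8) and (1, 5)
```

A production version would scan the sorted lists only until the first non-overlapping interval, giving the O(log n + m) query bound; the list comprehensions here trade that refinement for brevity.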

9.
K-d tree
–
In computer science, a k-d tree is a space-partitioning data structure for organizing points in a k-dimensional space. K-d trees are a useful data structure for several applications, such as searches involving a multidimensional search key. K-d trees are a special case of binary space partitioning trees. The k-d tree is a binary tree in which every node is a k-dimensional point. Every non-leaf node can be thought of as implicitly generating a splitting hyperplane that divides the space into two parts, known as half-spaces. Points to the left of this hyperplane are represented by the left subtree of that node, and points to the right by the right subtree. The hyperplane direction is chosen in the following way: every node in the tree is associated with one of the k dimensions, with the hyperplane perpendicular to that dimension's axis. If, for example, the x axis is chosen for a particular split, the hyperplane would be set by the x-value of the point. Since there are many possible ways to choose axis-aligned splitting planes, there are many different ways to construct k-d trees. The canonical method of k-d tree construction has the following constraints: as one moves down the tree, one cycles through the axes used to select the splitting planes, and points are inserted by selecting the median of the points being put into the subtree, with respect to their coordinates in the axis being used to create the splitting plane. This method leads to a balanced k-d tree, in which each leaf node is approximately the same distance from the root. However, balanced trees are not necessarily optimal for all applications. Note that it is not required to select the median point; in the case where median points are not selected, there is no guarantee that the tree will be balanced. In practice, though, this technique often results in nicely balanced trees. Given a list of n points, a median-finding sort can be used to construct a balanced k-d tree containing those points. It is common that points after the median include only the ones that are strictly greater than the median. For points that lie on the median, it is possible to define a function that compares the points in all dimensions.
In some cases, it is acceptable to let points equal to the median lie on one side of it, for example, by splitting the points into a "lesser than" subset and a "greater than or equal to" subset. This algorithm creates the invariant that for any node, all the nodes in the left subtree are on one side of its splitting plane and all the nodes in the right subtree are on the other; points that lie on the splitting plane may appear on either side. The splitting plane of a node goes through the point associated with that node. Alternative algorithms for building a balanced k-d tree presort the data prior to building the tree, which avoids finding the median anew at each level of subdivision.
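The canonical median-split construction can be sketched in Python. This is a minimal sketch (a full sort stands in for a median-finding algorithm, and the sample points are our own); it cycles through the axes and picks the median at each level.

```python
from collections import namedtuple

Node = namedtuple("Node", "point left right")

def kdtree(points, depth=0, k=2):
    # Cycle through the k axes as we descend; split each subtree at the
    # median point along the current axis.
    if not points:
        return None
    axis = depth % k
    points = sorted(points, key=lambda p: p[axis])  # median via full sort
    mid = len(points) // 2
    return Node(points[mid],
                kdtree(points[:mid], depth + 1, k),       # lesser side
                kdtree(points[mid + 1:], depth + 1, k))   # greater side

tree = kdtree([(7, 2), (5, 4), (9, 6), (4, 7), (8, 1), (2, 3)])
print(tree.point)   # (7, 2): the median by x becomes the root
```

Points equal to the median along an axis end up in the "lesser" slice here, one of the tie-breaking conventions mentioned above.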

10.
B-tree
–
In computer science, a B-tree is a self-balancing tree data structure that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree is a generalization of a binary search tree in that a node can have more than two children. Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data; B-trees are a good example of a data structure for external memory, and the structure is commonly used in databases and filesystems. In B-trees, internal nodes can have a variable number of child nodes within some pre-defined range. When data is inserted or removed from a node, its number of child nodes changes; in order to maintain the pre-defined range, internal nodes may be joined or split. Because a range of child nodes is permitted, B-trees do not need re-balancing as frequently as other self-balancing search trees. The lower and upper bounds on the number of child nodes are typically fixed for a particular implementation; for example, in a 2-3 B-tree, each internal node may have only 2 or 3 child nodes. Each internal node of a B-tree contains a number of keys, and the keys act as separation values which divide its subtrees. For example, if a node has 3 child nodes, then it must have 2 keys: a1 and a2. All values in the leftmost subtree will be less than a1, all values in the middle subtree will be between a1 and a2, and all values in the rightmost subtree will be greater than a2. Usually, the number of keys is chosen to vary between d and 2d, where d is the minimum number of keys and d + 1 is the minimum degree or branching factor of the tree. In practice, the keys take up the most space in a node. The factor of 2 guarantees that nodes can be split or combined: a node with 2d keys can be split into two d-key nodes, and each split node has the required minimum number of keys. Similarly, if an internal node and its neighbor each have d keys, a key can be deleted from the internal node by combining it with its neighbor: deleting the key would make the internal node have d − 1 keys, and joining the neighbor's d keys plus the separation key moved down from the parent results in an entirely full node of 2d keys. The number of branches from a node is always one more than the number of keys stored in the node. In a 2-3 B-tree, the internal nodes will store either one key (with two child nodes) or two keys (with three child nodes).
A B-tree is sometimes described with a pair of parameters, such as the minimum and maximum number of keys, or simply with the highest branching order. A B-tree is kept balanced by requiring that all leaf nodes be at the same depth.
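Searching with separation keys, where the branch count is one more than the key count, can be sketched in Python. The node class and the small sample tree are our own illustrative assumptions.

```python
from bisect import bisect_left

class BNode:
    def __init__(self, keys, children=None):
        self.keys = keys                   # sorted separation values
        self.children = children or []     # len(children) == len(keys) + 1

def btree_search(node, key):
    # Find the branch whose subtree range contains the key: the i-th child
    # holds values between keys[i-1] and keys[i].
    i = bisect_left(node.keys, key)
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if not node.children:                  # reached a leaf: key is absent
        return False
    return btree_search(node.children[i], key)

# A small tree with 2 keys at the root and therefore 3 children.
root = BNode([10, 20],
             [BNode([3, 7]), BNode([13, 17]), BNode([25, 30])])
print(btree_search(root, 17), btree_search(root, 8))   # True False
```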

11.
B+ tree
–
A B+ tree is an n-ary tree with a variable but often large number of children per node. A B+ tree consists of a root, internal nodes and leaves; the root may be either a leaf or a node with two or more children. A B+ tree can be viewed as a B-tree in which each internal node contains only keys, with all records stored at the leaf level. The primary value of a B+ tree is in storing data for efficient retrieval in a block-oriented storage context, in particular filesystems. This is primarily because, unlike binary search trees, B+ trees have very high fanout, which reduces the number of I/O operations required to find an element in the tree. The ReiserFS, NSS, XFS, JFS, ReFS, and BFS filesystems all use this type of tree for metadata indexing; BFS also uses B+ trees for storing directories. NTFS uses B+ trees for directory and security-related metadata indexing, and ext4 uses extent trees for file extent indexing. Relational database management systems such as IBM DB2, Informix, Microsoft SQL Server, Oracle 8 and Sybase ASE, as well as key–value database management systems such as CouchDB and Tokyo Cabinet, support this type of tree for data access. The order, or branching factor, b of a B+ tree measures the capacity of internal nodes, i.e. their maximum number of children. The actual number of children for a node, referred to here as m, is constrained for internal nodes so that ⌈b/2⌉ ≤ m ≤ b; the root is an exception, as it is allowed to have as few as two children. For example, if the order of a B+ tree is 7, each internal node (except for the root) may have between 4 and 7 children. Leaf nodes have no children, but are constrained so that the number of keys must be at least ⌈b/2⌉ − 1 and at most b − 1. In the situation where a B+ tree is nearly empty, it contains only one node, which is permitted to have as few as one key if necessary. The root of a B+ tree represents the whole range of values in the tree, where every internal node represents a subinterval. Suppose we are looking for a value k in the B+ tree. Starting from the root, we are looking for the leaf which may contain the value k; at each node, we figure out which internal pointer we should follow.
An internal B+ tree node has at most b children, and we select the child to follow by searching on the key values of the node. It is important that the fan-out be high, as this allows searches to be directed to the leaf level more efficiently; index entries only "direct traffic", so we can compress them. To insert a record, perform a search to determine which bucket (leaf node) the new record should go into. If the bucket is not full, add the record. Otherwise, split the bucket: allocate a new leaf and move half the bucket's elements to the new bucket, then insert the new leaf's smallest key and its address into the parent. If the parent is full, split it too, and repeat upward as necessary.
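The leaf-level part of this insertion procedure can be sketched in Python. This is a minimal sketch for a single leaf (parent updating is omitted); the function name, the order, and the sample keys are our own.

```python
import bisect

def insert_into_leaf(leaf, key, order):
    # Insert a key into a sorted leaf bucket; split the bucket when it
    # overflows (a leaf may hold at most order - 1 keys).
    bisect.insort(leaf, key)
    if len(leaf) < order:          # bucket not full: done, no new leaf
        return leaf, None
    mid = len(leaf) // 2           # split: move half the elements out
    new_leaf = leaf[mid:]
    del leaf[mid:]
    # The caller would now insert new_leaf[0] (the new leaf's smallest
    # key) and the new leaf's address into the parent node.
    return leaf, new_leaf

leaf = [5, 10, 15]
leaf, new = insert_into_leaf(leaf, 12, order=4)
print(leaf, new)                   # [5, 10] [12, 15]; 12 is copied upward
```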