Trees description
A 2–3 tree is a tree data structure in which every node with children (internal node) has either two children (a 2-node) and one data element, or three children (a 3-node) and two data elements. Leaf nodes have no children and hold one or two data elements. (Wikipedia)
The left child always holds elements less than the parent's first data element; the middle child (if present) holds elements between the parent's first and second data elements; the right child holds elements greater than the parent's second data element.
When the number of data elements in any node reaches 3, that part of the tree is immediately rebuilt: the node with 3 data elements is split into a parent and two children, each holding 1 data element.
Thus, the tree is always balanced (all leaf nodes are on the same level).
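The split rule above can be sketched in a few lines. This is a minimal illustration, not a full 2-3 tree implementation; the function name and list-based node layout are assumptions made for the example.

```python
# Minimal sketch of the 2-3 tree split rule: when a node accumulates
# three data elements, the middle element moves up to become (or join)
# the parent, and the outer elements become two separate 2-nodes.

def split_overfull(elements):
    """Split a sorted list of three data elements into
    (middle, left_node, right_node)."""
    assert len(elements) == 3
    left, middle, right = elements
    return middle, [left], [right]

print(split_overfull([10, 20, 30]))  # (20, [10], [30])
```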

Red-black trees appear in many associative array implementations, for example:
- The map container in C++
- The TreeMap class in Java
The main idea and advantage of a red-black tree is that it does not need to rebalance on every insertion and removal. Without worsening the asymptotic complexity, we can relax the requirements on the height of the tree and let it grow without constant rebalancing. The shortest and longest root-to-leaf paths can differ by a factor of 2 — for example, one subtree may have height 10 and another 20 — while the complexity is O(2 log n), which remains logarithmic.

In a red-black tree, this is made possible by the black height: the number of black nodes on the path from any leaf to the root is the same. The main properties of the tree are preserved thanks to 5 conditions, from which the rules for adding and removing elements follow.
- Each node is either red or black, and each node always has 2 children: even a leaf gets 2 NIL children that have no attribute other than their color, which is always black.
- The root of the tree is always black.
- A red node has only black children.
- The black height is the same along every path.
- Each newly inserted element is always red.
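These conditions are easy to check mechanically. Below is a hedged sketch of an invariant checker for the rules listed above; the node layout is an assumption for illustration, and `None` plays the role of the black NIL leaves.

```python
# Checker for the red-black invariants: black root, no red node with a
# red child, and equal black height along every root-to-leaf path.

class Node:
    def __init__(self, key, color, left=None, right=None):
        self.key, self.color = key, color      # color: "R" or "B"
        self.left, self.right = left, right

def black_height(node):
    """Return the subtree's black height if it is valid, else None."""
    if node is None:
        return 1                               # NIL leaves count as black
    if node.color == "R":
        for child in (node.left, node.right):
            if child is not None and child.color == "R":
                return None                    # red node with a red child
    lh = black_height(node.left)
    rh = black_height(node.right)
    if lh is None or rh is None or lh != rh:
        return None                            # unequal black heights
    return lh + (1 if node.color == "B" else 0)

def is_red_black(root):
    return root is not None and root.color == "B" and black_height(root) is not None

# A valid tree: black root with two red children.
print(is_red_black(Node(2, "B", Node(1, "R"), Node(3, "R"))))  # True
# Violation: the root must be black.
print(is_red_black(Node(1, "R")))                              # False
```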



It was created in 1962 and named after its inventors, Adelson-Velsky and Landis. It is a self-balancing binary search tree, and it was the first self-balancing BST to be invented.
An AVL tree built by adding the numbers from 1 to 10 one after another.

AVL trees are recommended when you need to search for an item in fixed data quickly.
If the data is dynamic, i.e., many insert and delete operations, it is better to use red-black trees.
In order to achieve self-balancing properties, the AVL tree uses four types of rotations and unique insertion/deletion methods.


Left and right-left rotations are the same as right and left-right rotations, just mirrored.
Insertion is mainly done the same way as in a BST, but the balance factor is checked on the way back up, and if its absolute value exceeds 1, rotations are used to restore balance.
A balance factor of zero is assigned to a newly inserted leaf. Including a vertex consists of three parts:
- Following the search path until it is confirmed that the key is not in the tree.
- Including the new vertex in the tree and determining the resulting balance factors.
- Backtracking along the search path and checking the balance factor of each vertex; rebalancing if necessary.
- If the vertex is a leaf, remove it and rebalance its ancestors from its parent up to the root.
- If the vertex is not a leaf, find the closest value in the taller of its subtrees, move it to the place of the removed vertex, and then call the removal procedure for it.
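The insertion procedure above can be sketched as follows: a plain BST insert, followed by a balance-factor check on the way back up and one of the four rotations when the factor leaves the range [-1, 1]. This is a minimal sketch; the class and function names are assumptions for illustration.

```python
# AVL insertion sketch: BST insert plus rebalancing with the four
# rotations (left-left, right-right, left-right, right-left).

class AVLNode:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def balance(n):
    return height(n.left) - height(n.right)

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(node, key):
    if node is None:
        return AVLNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    update(node)
    b = balance(node)
    if b > 1 and key < node.left.key:          # left-left case
        return rotate_right(node)
    if b < -1 and key > node.right.key:        # right-right case
        return rotate_left(node)
    if b > 1:                                  # left-right case
        node.left = rotate_left(node.left)
        return rotate_right(node)
    if b < -1:                                 # right-left case
        node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

# Inserting 1..10 one after another, as in the example above: the tree
# stays balanced instead of degenerating into a chain.
root = None
for k in range(1, 11):
    root = insert(root, k)
print(root.key, root.height)  # 4 4
```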
- Efficient
- Self-balancing
- Faster than a red-black tree when searching
- Uses rotations for almost every operation and, as a result, is slower than the red-black tree when inserting or deleting an element
Space complexity: always O(n)
Insertion and deletion: O(log n) in all cases
Search: O(1) in the best case, otherwise O(log n)
Full traversal: O(n)
A splay tree is a type of binary search tree.
It is self-adjusting; however, it is not always perfectly balanced.
In some cases, tree operations may cost up to O(n).
For m operations on keys accessed with equal frequency, the total complexity is O(m log n).
However, a splay tree becomes more efficient when we operate on certain keys more often than on others.
More frequently used keys move closer to the root, which provides better complexity, down to O(1).
This effect is achieved by performing a splay operation every time we work with the tree.
That means the node we are inserting or searching for becomes the root of the tree.
This is achieved by a series of specific rotations.
As a result, not only does the node become the root, but its children also move closer to the root.
This way, the tree remains nearly balanced.
Splay trees are used in cases where some keys are accessed more often than others.
This is because keys that were accessed a short time ago are cheap to access again.
In each of the following cases, we perform a splay of node x.
Used when the node's parent is the root and the node is its left child.
A single right rotation.
Before:

After:

A single left rotation.
Similar to the zig rotation, but used when the node is a right child.
Double right rotation.
Before:

After first rotation:

After second rotation:

A double left rotation.
Similar to the zig-zig rotation, but from the other side.
Double left-right rotation
Before:

After first rotation:

After second rotation:

A double right-left rotation.
Similar to the zig-zag rotation, but from the other side.
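The rotation cases above combine into a single splay procedure. Below is a hedged sketch of one common recursive formulation; the node layout and function names are assumptions for illustration, and a production splay tree would also handle insertion and deletion on top of this.

```python
# Splay sketch: bring the node with `key` (or the last node on the
# search path) to the root using zig / zig-zig / zig-zag steps and
# their mirrored zag variants.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                        # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                      # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return rotate_right(root) if root.left else root   # final zig
    else:
        if root.right is None:
            return root
        if key > root.right.key:                       # zag-zag
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                     # zag-zig
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return rotate_left(root) if root.right else root   # final zag

# Searching for 3 in the degenerate chain 1 -> 2 -> 3 splays 3 to the root.
root = Node(1, right=Node(2, right=Node(3)))
root = splay(root, 3)
print(root.key)  # 3
```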
Every node in a BTree can hold more than one data element. In a BTree of order m, every node has at most m children (i.e., at most m - 1 elements), and every node except the root has at least ceil(m / 2) - 1 elements.
Unlike AVL, red-black, and other binary trees, a BTree is always perfectly balanced (i.e., every leaf has the same depth).
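The occupancy rule above can be expressed as a small helper. This is only a sketch of the per-node bounds for a BTree of order m; the function name is an assumption made for the example.

```python
import math

# Minimum and maximum number of keys (data elements) a BTree node may
# hold, per the rule above: at most m - 1 keys, and at least
# ceil(m / 2) - 1 keys for every node except the root.

def node_key_bounds(m, is_root):
    """Return (min_keys, max_keys) for a node in a BTree of order m."""
    min_keys = 1 if is_root else math.ceil(m / 2) - 1
    return min_keys, m - 1

print(node_key_bounds(5, is_root=False))  # (2, 4)
```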
