How do we construct a binary search tree tailored to a given access pattern (how frequently each element is requested), so that the resulting tree is optimal for that pattern?
- The intuition is that the more often an element is accessed, the closer to the root it should be placed.
- Remember that the depth of an element corresponds to the time it takes to access it in the tree, as the small worked example below shows.
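As a small worked example (the frequencies here are made up for illustration), take three keys $1 < 2 < 3$ with frequencies $f_1 = 0.6$, $f_2 = 0.3$, $f_3 = 0.1$, and count the root as depth $1$. The perfectly balanced tree rooted at $2$ is actually worse than the chain rooted at $1$, because the chain puts the most frequently accessed element at depth $1$:
$$\underbrace{0.3 \cdot 1 + 0.6 \cdot 2 + 0.1 \cdot 2}_{\text{balanced, root } 2} = 1.7 \qquad \text{vs.} \qquad \underbrace{0.6 \cdot 1 + 0.3 \cdot 2 + 0.1 \cdot 3}_{\text{chain rooted at } 1} = 1.5.$$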
Problem
Input: indices $1, \dots, n$ and frequencies $f_1, \dots, f_n$ such that $f_i$ is a measure of element $i$'s frequency of appearance.
Output: a BST $T$ containing elements $1, \dots, n$ such that $\sum_{i=1}^{n} f_i \cdot \operatorname{depth}_T(i)$ is minimized.
We want to minimize the expected time it takes to access a given element. The expectation framing is very natural if we assume that $f_i \ge 0$ and $\sum_{i=1}^{n} f_i = 1$, because then the $f_i$ are just a set of probabilities, one for each element, and the definition of expected value follows naturally.
In reality, the $f_i$ don't need to be constrained to $[0, 1]$, but probability is a good intuitive representation of what each $f_i$ could represent.
Solution space: all possible BSTs made up of elements $1, \dots, n$.
Objective function: given $f_1, \dots, f_n$ and the tree $T$ that we create, the cost is
$$C(T) = \sum_{i=1}^{n} f_i \cdot \operatorname{depth}_T(i),$$
and we would like to minimize the value of $C(T)$.
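The following is a minimal sketch of the objective (not part of the original notes): a `Node` class and a `tree_cost` helper, both hypothetical names, that compute $C(T)$ with the root counted at depth $1$.

```python
# Sketch: computing C(T) = sum_i f_i * depth_T(i), with the root at depth 1.
# The Node class and tree layout are illustrative assumptions.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key      # element index in 1..n
        self.left = left
        self.right = right

def tree_cost(root, f):
    """Sum of f[key] * depth(key) over all nodes; the root has depth 1."""
    def walk(node, depth):
        if node is None:
            return 0.0
        return (f[node.key] * depth
                + walk(node.left, depth + 1)
                + walk(node.right, depth + 1))
    return walk(root, 1)

# The two trees from the worked example above (f maps element -> frequency).
f = {1: 0.6, 2: 0.3, 3: 0.1}
balanced = Node(2, Node(1), Node(3))            # root 2, children 1 and 3
chain = Node(1, None, Node(2, None, Node(3)))   # 1 -> 2 -> 3 down the right spine
print(tree_cost(balanced, f))  # 1.7
print(tree_cost(chain, f))     # 1.5
```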
High-level question
What should we place at the root? This is a good question because
- There are $n$ possible answers
- We can consider all $n$ candidates in our search for an answer to this question
- The question naturally repeats itself for subproblems, namely the two subtrees of the node that we eventually pick
Assume that we’ve chosen element $r$ as the root. Then, according to the definition of a BST, the left subtree of $r$ must contain the elements $1, \dots, r-1$, while the right subtree contains the elements $r+1, \dots, n$. (For example, with $n = 5$ and root $r = 3$, the left subtree holds $\{1, 2\}$ and the right subtree holds $\{4, 5\}$.)
This assumes that the elements $1, \dots, n$ are sorted in key order.
OPT-function
Then, how do we pick the root $r$?
Given a starting index $i$ and an ending index $j$ for the elements we’re considering, where $1 \le i$ and $j \le n$, the generalized $\mathrm{OPT}$-function is given by
$$\mathrm{OPT}(i, j) = \min_{i \le r \le j} \left[ f_r + \left( \sum_{k=i}^{r-1} f_k + \mathrm{OPT}(i, r-1) \right) + \left( \sum_{k=r+1}^{j} f_k + \mathrm{OPT}(r+1, j) \right) \right],$$
with the convention that $\mathrm{OPT}(i, j) = 0$ whenever $i > j$ (an empty range of elements).
We need to add $\sum_{k=i}^{r-1} f_k$ and $\sum_{k=r+1}^{j} f_k$ because the left and right subtrees are one level deeper than the root: when we multiply each $f_k$ by its depth, we need one extra copy of $f_k$ to account for the extra level.
We include $f_r$ because we’ve chosen element $r$ as the root, which has a depth of $1$ (technically $0$, but whatever), so the cost it pays is just $f_r$. This makes sense; we’re minimizing depth!
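Here is a minimal memoized sketch of this recurrence, assuming the frequencies are passed as a list `f` with `f[0]` as unused padding so that `f[i]` matches element $i$; the function name `optimal_bst_cost` is my own.

```python
from functools import lru_cache

def optimal_bst_cost(f):
    """Minimum cost over all BSTs on elements 1..n, mirroring the
    recurrence above. f[i] is the frequency of element i (f[0] unused)."""
    n = len(f) - 1

    @lru_cache(maxsize=None)
    def opt(i, j):
        if i > j:                  # empty range of elements
            return 0.0
        best = float("inf")
        for r in range(i, j + 1):  # try every possible root
            cost = (f[r]                                    # root, at depth 1
                    + sum(f[i:r]) + opt(i, r - 1)           # left subtree, one level deeper
                    + sum(f[r + 1:j + 1]) + opt(r + 1, j))  # right subtree, one level deeper
            best = min(best, cost)
        return best

    return opt(1, n)
```

Note that the two `sum(...)` calls are re-evaluated for every candidate root $r$, which is exactly the inefficiency the next simplification removes.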
Simplification of the OPT-function
Notice that adding all the summations together (if we also include $f_r$) gives us the sum over the full consecutive range $i, \dots, j$, so the recurrence simplifies to
$$\mathrm{OPT}(i, j) = \sum_{k=i}^{j} f_k + \min_{i \le r \le j} \left[ \mathrm{OPT}(i, r-1) + \mathrm{OPT}(r+1, j) \right].$$
We can pull the summation out of the $\min$ because it no longer depends on $r$. This is important for the runtime; otherwise we would be looping through the summation for every candidate root $r$.
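A bottom-up sketch of the simplified recurrence is below; the prefix-sum array is an extra convenience (not spelled out in the notes) that makes each range sum $\sum_{k=i}^{j} f_k$ an $O(1)$ lookup, so each subproblem only pays for the loop over candidate roots.

```python
def optimal_bst_cost_fast(f):
    """Tabulated version of OPT(i, j) = sum_{k=i..j} f_k
    + min_r [OPT(i, r-1) + OPT(r+1, j)]. f[i] is the frequency of
    element i, with f[0] unused padding."""
    n = len(f) - 1

    # prefix[j] = f[1] + ... + f[j], so sum_{k=i..j} f_k = prefix[j] - prefix[i-1]
    prefix = [0.0] * (n + 1)
    for k in range(1, n + 1):
        prefix[k] = prefix[k - 1] + f[k]

    # OPT[i][j] = 0 whenever i > j (empty range); the extra rows/columns cover i = j + 1.
    OPT = [[0.0] * (n + 2) for _ in range(n + 2)]
    for length in range(1, n + 1):          # solve shorter ranges first
        for i in range(1, n - length + 2):
            j = i + length - 1
            range_sum = prefix[j] - prefix[i - 1]
            OPT[i][j] = range_sum + min(
                OPT[i][r - 1] + OPT[r + 1][j] for r in range(i, j + 1)
            )
    return OPT[1][n]
```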
Runtime
$1 \le i$ and $j \le n$, so there are $O(n^2)$ subproblems. In addition, we have to try every possible root $r$. At the top level, when $i = 1$ and $j = n$, this takes $O(n)$ time per subproblem (and no subproblem takes more than that), so the total runtime is given by $O(n^2) \cdot O(n) = O(n^3)$.
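As a quick sanity check of the sketches above (again, the function names are my own), running the tabulated version on the three-element example from the start of the section reproduces the cost of the chain rooted at element $1$:

```python
f = [0.0, 0.6, 0.3, 0.1]          # f[0] is padding; f[i] is element i's frequency
print(optimal_bst_cost_fast(f))    # 1.5, matching the chain rooted at element 1
```

To also output the tree itself, as the problem statement asks, one would additionally record the minimizing root $r$ for each pair $(i, j)$ and rebuild the BST from those choices.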