All work should be done on your own. If you discuss this assignment in any way with anyone else, or find help for getting the answer from any source, you must state so clearly at the top of your submission. There should be no working together except to understand the question.
Consider the function:
Prove that a program which solves this recursively will always halt.
Consider a similar function:
The expansion of this function will look like:
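Assuming, purely for illustration, that g has the halving form g(x) = g(⌈x/2⌉) + 1 for x > 1 with g(1) = 1 (this exact form is an assumption, not necessarily the definition given above), the expansion telescopes as

\[
g(x) = 1 + g(\lceil x/2 \rceil) = 2 + g(\lceil x/4 \rceil) = \cdots = i + g\!\left(\lceil x/2^i \rceil\right).
\]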
This expansion will continue until 2^i ≥ x. Since, by definition, ∀ x ∃ log₂ x ∋ 2^⌈log₂ x⌉ ≥ x, the recursion will halt after at most ⌈log₂ x⌉ iterations. Since 1 is added at each iteration, with an extra 1 when x ≤ 2^i, g(x) may also be expressed as g(x) = ⌈log₂ x⌉ + 1.
The original function, f(x, y), reduces the arguments of each of its recursive addends in a manner similar to g(x), so it will eventually terminate as well.
A more interesting proof was given in class:
Assume that the pair (x, y) is the smallest pair of values for which the algorithm will not halt. The ordering of the pairs (x, y) is lexicographic, that is:
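in the usual convention for pairs of non-negative integers (the convention assumed here),

\[
(x_1, y_1) < (x_2, y_2) \iff x_1 < x_2 \ \text{ or } \ \big( x_1 = x_2 \text{ and } y_1 < y_2 \big),
\]

under which every non-empty set of pairs has a least element.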
The expansion then is:
As defined by the ordering:
Since (x, y) was assumed to be the minimum pair on which f(x, y) would not halt, each of the addends involves a strictly smaller pair and therefore halts; f(x, y) will then halt as well, so (x, y) cannot exist.
What is f(x,2)?
Support for this conclusion is given by a program which compares the recursive solution to log₂(x) + 1.
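That comparison program is not reproduced here; a minimal Python sketch of such a check might look like the following, assuming a hypothetical halving form f(x, y) = 1 + f(⌈x/y⌉, y) with f(1, y) = 1 (this form and the test range are assumptions, not the assignment's definition of f):

    import math

    # Hypothetical form of f, used only for illustration: divide x by y
    # (rounding up) and add 1, with f(1, y) = 1 as the base case.
    def f(x, y):
        if x <= 1:
            return 1
        return 1 + f(math.ceil(x / y), y)

    # Print the recursive value of f(x, 2) beside log2(x) + 1 for small x.
    for x in range(1, 33):
        print(f"x={x:2d}  f(x,2)={f(x, 2)}  log2(x)+1={math.log2(x) + 1:.3f}")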
Suppose that you have a recurrence of the form . What is the solution in terms of order notation?
Recall:
Consider the expansion of the expression:
I originally ended up with , and in all honesty I don't entirely understand this solution.
One option considered for determining the solution is the "master theorem," though it turns out not to apply here. Consider the general expression:
There are three cases for how g(n) can behave:
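For reference, the standard statement of those cases, for the usual form with constants a ≥ 1 and b > 1, is:

\[
T(n) = a\,T(n/b) + g(n),
\qquad
T(n) =
\begin{cases}
\Theta\big(n^{\log_b a}\big) & \text{if } g(n) = O\big(n^{\log_b a - \varepsilon}\big) \text{ for some } \varepsilon > 0,\\[2pt]
\Theta\big(n^{\log_b a} \log n\big) & \text{if } g(n) = \Theta\big(n^{\log_b a}\big),\\[2pt]
\Theta\big(g(n)\big) & \text{if } g(n) = \Omega\big(n^{\log_b a + \varepsilon}\big) \text{ and } a\,g(n/b) \le c\,g(n) \text{ for some } c < 1.
\end{cases}
\]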
For this particular problem, the recursive term is multiplied by a factor f(n) rather than by a constant, so the master theorem cannot be applied.
In a certain set decomposition algorithm, a set is split into two non-intersecting subsets and the subsets are recursively subdivided. Each decomposition takes time proportional to the square of the number of elements in the current set being decomposed.
Show that the worst case time for this decomposition is Θ(n³).
The sizes of the two subsets will be n₁ and n₂, with n₁ + n₂ = n.
The time for generating each of these subsets is proportional to n₁² and n₂², respectively, for a combined n₁² + n₂².
This combination has a minimum when n₁ = n₂. The maximum is when one subset is as large as possible. Because each subset must contain at least one element or the algorithm will not progress, this is n₁ = n − 1 and n₂ = 1.
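Writing c·m² for the cost of decomposing a set of m elements (c is simply the proportionality constant), that most unbalanced split gives a sketch of the worst-case recurrence:

\[
T(n) = c\,n^2 + T(n-1) + T(1)
     = c \sum_{k=2}^{n} k^2 + n\,T(1)
     = \Theta(n^3).
\]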
Show that if the decomposition has the property that neither subset of a decomposition step has more than of the elements from the set being decomposed, the worst case of the decomposition is Θ(n²).
As before, the worst-case scenario is when one subset size, nᵢ, is maximized. The exact nature of that decomposition is a bit tricky, however. The most obvious method for maximizing one quantity is to say:
However, there are times when it will not be possible to have a decomposition that divides evenly. The best case for maximization is:
Hypothesize that:
I'm missing a step here; after that, it's:
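One way to supply that step, writing α (with 1/2 ≤ α < 1) for the fraction bound in the problem statement and c·m² for the decomposition cost (α, β, and c are labels introduced only for this sketch), is:

\[
n_1 + n_2 = n,\ \max(n_1, n_2) \le \alpha n
\ \Longrightarrow\
n_1^2 + n_2^2 \le \big(\alpha^2 + (1-\alpha)^2\big)\,n^2 = \beta\,n^2, \qquad \beta < 1 .
\]

So the total cost of all decompositions at depth k of the recursion is at most c·β^k·n², and summing the geometric series gives

\[
T(n) \le c\,n^2 \sum_{k \ge 0} \beta^k = \frac{c}{1-\beta}\,n^2 = O(n^2),
\]

while the very first decomposition alone costs c·n² = Ω(n²), so the worst case is Θ(n²).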