CS136, Lecture 13

  1. PRO & CON of Recursion
    1. CON
    2. PRO
  2. More Sorts
    1. Merge sort
    2. Quicksort

PRO & CON of Recursion

CON:

Any recursive program requires additional storage on the (implicit) run-time stack.

Sometimes (as in the maze program) recursion simply replaces a user-defined stack, so nothing is lost.

If no stack was needed in the first place, recursion costs extra space. Overhead.

There is also a slight run-time overhead per procedure call, since an activation record must be pushed on the stack.

Often slightly faster and less space if done iteratively.
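
For example, a minimal sketch (method names made up for illustration): both methods compute the same sum, but the recursive one pushes an activation record per call and can throw a StackOverflowError for large n, while the loop runs in constant stack space.

    // Recursive: one activation record per call -- O(n) stack space.
    static int sumTo(int n)
    {
        if (n == 0) return 0;
        return n + sumTo(n - 1);    // may overflow the run-time stack for large n
    }

    // Iterative: same result, O(1) stack space.
    static int sumToIter(int n)
    {
        int sum = 0;
        for (int i = 1; i <= n; i++)
            sum += i;
        return sum;
    }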

PRO:

On the other hand, it is often easier to construct a recursive solution.

The resulting code may be significantly clearer.

Smart compilers (particularly for functional languages) will remove "tail" recursion (a recursive call that is the last action of the method), turning it into a loop, as sketched below.
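
A minimal sketch of that transformation (gcd here is just an illustrative example; javac itself does not generally perform tail-call removal):

    // Tail-recursive: the recursive call is the very last action.
    static int gcd(int a, int b)
    {
        if (b == 0) return a;
        return gcd(b, a % b);   // tail call -- nothing left to do after it
    }

    // What a tail-call-removing compiler effectively produces:
    static int gcdLoop(int a, int b)
    {
        while (b != 0)
        {
            int r = a % b;      // reuse the same frame instead of pushing a new one
            a = b;
            b = r;
        }
        return a;
    }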


More Sorts

Merge sort

Divide and conquer sort:

    public static void mergeSort(Comparable data[], int n)
    // pre: 0 <= n <= data.length
    // post: values in data[0..n-1] are in ascending order
    {
        mergeSortRecursive(data,new Comparable[n],0,n-1);
    }
    
    private static void mergeSortRecursive(Comparable data[], 
                       Comparable temp[], 
                       int low, int high)
    // pre: 0 <= low <= high < data.length
    // post: values in data[low..high] are in ascending order
    {
        int numElts = high - low + 1;
        int middle = low + numElts / 2;
        int i;

        if (numElts < 2) return;
        // move lower half of data into temporary storage
        for (i = low; i < middle; i++)
        {
            temp[i] = data[i];
        }
        // sort lower half of array
        mergeSortRecursive(temp,data,low,middle-1);
        // sort upper half of array
        mergeSortRecursive(data,temp,middle,high);
        // merge halves together
        merge(data,temp,low,middle,high);
    }

Easy to show mergeSortRecursive is correct if merge is.

Method merge is where all the work takes place.

Note that merge is not recursive!

     protected static void merge(Comparable data[], Comparable temp[],
                      int low, int middle, int high)
     // pre: data[middle..high] are ascending
     //      temp[low..middle-1] are ascending
     // post: data[low..high] contains all values in ascending 
     //       order
     {
        int ri = low;    // result index (next slot to fill in data)
        int ti = low;    // temp index (lower run: temp[low..middle-1])
        int di = middle; // data index (upper run: data[middle..high])
        // while two lists are not empty merge smaller value
        while ((ti < middle) && (di <= high))
        {
            if (data[di].lessThan(temp[ti])) {
                data[ri++] = data[di++]; // smaller is in high data
            } else {
                data[ri++] = temp[ti++]; // smaller is in temp
            }
        }
        // possibly some values left in temp array
        while (ti < middle)
        {
            data[ri++] = temp[ti++];
        }
        // ...or possibly some values left (in correct place) in 
        // data array
    }

It is easy to convince yourself that merge is correct. (A formal proof of correctness of iterative algorithms is actually harder than for recursive programs!)

It is also easy to see that if the portion of the array under consideration has k elements (i.e., k = high-low+1), then the complexity of merge is O(k):

If we only count comparisons, it is clear that every comparison (i.e., call to lessThan) in the if statement in the while loop results in an element being copied into data.

In the worst case, we run out of elements in one run only when there is 1 element left in the other run: k-1 comparisons, giving O(k).

If we count copies of elements, it is also O(k): there are k/2 copies in moving the lower half into temp, and then between k/2 and k more copies in putting elements back (in order) into data.
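
As a concrete check, take k = 8 (low = 0, middle = 4, high = 7): copying the lower half into temp costs 4 copies; the merge then does at most 7 comparisons and between 4 and 8 copies back into data. Both counts are O(k).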

We can use this to determine the complexity of mergeSortRecursive.
We claim the complexity is O(n log n) for a sort of n elements.

Easiest to prove this if n = 2^m for some m.

Prove by induction on m that a sort of n = 2^m elements takes <= n log n = 2^m * m compares.

Base case: m = 0, so n = 1. We don't do anything, so 0 compares <= 2^0 * 0 = 0.

Suppose true for m-1 and show for m.

mergeSortRecursive of n = 2^m elements proceeds by doing mergeSortRecursive of two lists of size n/2 = 2^(m-1), and then a call of merge on a list of size n = 2^m.

Therefore,

#(compares) <= 2^(m-1) * (m-1) + 2^(m-1) * (m-1) + 2^m
            =  2 * (2^(m-1) * (m-1)) + 2^m
            =  2^m * (m-1) + 2^m
            =  2^m * ((m-1) + 1)
            =  2^m * m

Therefore #(compares) <= 2^m * m = n log n.

End of proof.

Thus if n = 2^m for some m, #(compares) <= n log n to do mergeSortRecursive.

It is not hard to show that a similar bound holds for n not a power of 2.

Therefore O(n log n) compares. Same for number of copies.

We can cut down the number of copies significantly by merging back and forth between the two arrays rather than copying and then merging.

See efficient (but more complex) iterative version in MergeSort.java.
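
A minimal usage sketch, assuming the Comparable interface with a lessThan method used by the code above (not java.lang.Comparable); the MyInt wrapper is hypothetical, supplied only for illustration:

    // Hypothetical element class for the lessThan-style Comparable
    // interface assumed by the sorts above.
    class MyInt implements Comparable
    {
        int value;
        MyInt(int v) { value = v; }
        public boolean lessThan(Comparable other)
        {
            return value < ((MyInt) other).value;
        }
    }

    Comparable[] data = { new MyInt(5), new MyInt(1), new MyInt(3) };
    mergeSort(data, data.length);   // data now holds 1, 3, 5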

Quicksort

There is one last divide and conquer sorting algorithm: Quicksort.

While mergesort divided in half, sorted each half, and then merged (where all work is in the merge), Quicksort works in the opposite order.

That is, Quicksort splits the array (with lots of work), sorts each part, and then puts together (trivially).

/** 
  POST -- "elementArray" sorted into non-decreasing order  
**/
public void quicksort(Comparable[] elementArray)
{
    Q_sort(0, elementArray.length - 1, elementArray);   
}

/**
  PRE -- left <= right are legal indices of table.            
  POST -- table[left..right] sorted in non-decreasing order
**/
protected void Q_sort (int left, int right, Comparable[] table)
{
    if (right > left)   // more than 1 elt in table
    {
        int pivotIndex = partition(left,right,table);
        // table[left..pivotIndex] <= table[pivotIndex+1..right]  
        Q_sort(left, pivotIndex-1, table);      // Quicksort small elts
        Q_sort(pivotIndex+1, right, table);     // Quicksort large elts
    }
}

If partition works, then Q_sort (and hence quicksort) clearly works. Note that it always makes its recursive calls on strictly smaller ranges; it is easy to get this wrong so that a call repeats the same range and the sort never terminates (see the sketch below).
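
For instance (a hypothetical bug, shown only to illustrate the point), suppose the second recursive call included the pivot:

    Q_sort(pivotIndex, right, table);   // WRONG: when the pivot is the smallest
                                        // elt, pivotIndex == left and this call
                                        // repeats the whole range forever

The calls above exclude table[pivotIndex], which is already in its final position, so each recursive call covers a strictly smaller range.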

Partition is a little trickier. The algorithm below starts out by ensuring that the elt at the left edge of the table is <= the one at the right edge. This allows the guards on the inner while loops to be simpler, and it speeds up the algorithm by about 20% or more. Other optimizations can make it even faster.

/**
    post: table[left..pivotIndex-1] <= pivot 
            and pivot <= table[pivotIndex+1..right]  
**/
protected int partition (int left, int right, Comparable[] table)
{
    Comparable tempElt;     // used for swaps
    int smallIndex = left;  // index of current posn in left (small elt) partition
    int bigIndex = right;   // index of current posn in right (big elt) partition

    if (table[bigIndex].lessThan(table[smallIndex]))
    {   // put sentinel at table[bigIndex] so we don't
        // walk off right edge of table in loop below
        tempElt = table[bigIndex];
        table[bigIndex] = table[smallIndex];
        table[smallIndex] = tempElt;
    }

    Comparable pivot = table[left]; // pivot is first elt
    // Now table[smallIndex] = pivot <= table[bigIndex]
    do
    {
        do                          // scan right from smallIndex
            smallIndex++;
        while (table[smallIndex].lessThan(pivot));

        do                          // scan left from bigIndex
            bigIndex--;
        while (pivot.lessThan(table[bigIndex]));

        // Now table[smallIndex] >= pivot >= table[bigIndex]

        if (smallIndex < bigIndex)
        {   // big elt is left of small elt: swap them
            tempElt = table[smallIndex];
            table[smallIndex] = table[bigIndex];
            table[bigIndex] = tempElt;
        } // if
    } while (smallIndex < bigIndex);

    // Move pivot into correct position between small & big elts
    int pivotIndex = bigIndex;      // pivot goes where bigIndex got stuck

    // swap pivot elt w/ small elt at pivotIndex
    tempElt = table[pivotIndex];
    table[pivotIndex] = table[left];
    table[left] = tempElt;

    return pivotIndex;
}

The basic idea of the algorithm is to start with smallIndex and bigIndex at the left and right edges of the array. Move each of them toward the middle until smallIndex is on a "big" element (one >= the pivot) and bigIndex is on a small one. As long as the indices haven't crossed (i.e., as long as smallIndex < bigIndex), swap the two elements so that the small elt goes on the small side and the big elt on the big side. When the indices cross, swap the rightmost small elt (pointed to by bigIndex) with the pivot element and return its new index to Q_sort. Clearly at the end,

table[left..pivotIndex-1] <= pivot <= table[pivotIndex+1..right]
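
A worked trace on a small made-up array may help. Partition [5, 3, 8, 1, 9] with left = 0, right = 4:

    table[4] = 9 >= table[0] = 5, so no initial swap; pivot = 5
    smallIndex scans right: stops at 8 (index 2), the first elt >= pivot
    bigIndex scans left:    stops at 1 (index 3), the first elt <= pivot
    2 < 3, so swap them:    [5, 3, 1, 8, 9]
    smallIndex scans right: stops at 8 (index 3)
    bigIndex scans left:    stops at 1 (index 2); the indices have crossed
    swap pivot with table[bigIndex = 2]: [1, 3, 5, 8, 9]; return pivotIndex = 2

and indeed table[0..1] <= 5 <= table[3..4].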

The complexity of QuickSort is harder to evaluate than MergeSort because the pivotIndex need not always be in the middle of the array (in the worst case pivotIndex = left or right).

Partition is clearly O(n), because every comparison results in smallIndex or bigIndex moving toward the other, and we quit when they cross.

In the best case the pivot element is always in the middle and the analysis results in
O(n log n), exactly like MergeSort.

In the worst case the pivot always lands at one end of the range, and QuickSort behaves like SelectionSort, giving O(n^2).

Careful analysis shows that QuickSort is O(n log n) in the average case (under reasonable assumptions on distribution of elements of array). (Proof uses integration!)
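
The best- and worst-case bounds can also be read off from recurrences for the number of comparisons (a sketch, ignoring constant factors):

    Best case:  C(n) = 2 C(n/2) + O(n)   =>  C(n) = O(n log n)
    Worst case: C(n) = C(n-1) + O(n)     =>  C(n) = O(n + (n-1) + ... + 1) = O(n^2)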

Compare the algorithms with real data:

Algorithm   100 elts    100 elts    500 elts    500 elts    1000 elts   1000 elts
            unordered   ordered     unordered   ordered     unordered   ordered
Insertion   0.033       0.002       0.75        0.008       3.2         0.017
Selection   0.051       0.051       1.27        1.31        5.2         5.3
Merge       0.016       0.015       0.108       0.093       0.24        0.20
Quick       0.009       0.044       0.058       1.12        0.13        4.5

Notice that for Insertion and Selection sorts, doubling the size of the list increases the time by a factor of 4 (in the unordered case), whereas for Merge and Quick sorts it slightly more than doubles the time. Calculate: (1000 log 1000) / (500 log 500) = 2 * (log 1000 / log 500) ~ 2 * (10/9) ~ 2.2.