Math 176 - Data Structures
Fall 2000 - Lecture Topics
Instructor: Sam Buss

I will try to keep the following list of lecture topics as up to date as possible.  If you miss class, it is highly recommended that you get notes from other students.

Day 1.  Friday, September 22. 

  1. General introduction, grading policies, office hours, etc.  See the course web page for most of this.
  2. Abstract data types (ADT's).  Separate interface specification from implementation.
  3. The Stack ADT.  Generally supports push, pop, peek, size, and empty methods.  Possible implementations include (i) using linked lists, (ii) using dynamically reallocated arrays, or (iii) using fixed-size arrays.
  4. The Queue ADT. Generally supports enqueue, dequeue, peek, size and empty methods.   Possible implementations include (i) using linked lists, (ii) using dynamically reallocated circular arrays, or (iii) using fixed-size circular arrays.  (A sketch of implementation (ii) appears after this list.)
  5. The Deque ADT.  A double-ended queue (whence the name "deque", abbreviating "double-ended queue") that allows pushes and pops at both ends.  Generally supports pushLeft, pushRight, popLeft, popRight, peekLeft, peekRight, size, and empty operations.  Possible implementations are the same as for queues.
  6. Discussion of constant time or O(1) time operations.  Most of the operations discussed above are constant time; however, when using an array based implementation, you only get amortized constant time because occasionally extra time is needed to reallocate the array.
  7. Advantages/disadvantages of linked list implementations versus (circular) arrays.   Disadvantages of linked lists include extra space required for pointer(s) and the potential for memory fragmentation and memory management overhead.  The disadvantages of array-based implementations are the fact that you get only amortized O(1) run times, and that you must write code to perform the dynamic reallocation.
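
As a concrete illustration of implementation (ii), here is a minimal sketch of a circular-array queue that doubles its array when full; the class and method names are illustrative, not code from lecture.

      public class ArrayQueue<T> {
          private Object[] items = new Object[4];
          private int front = 0;   // index of the front element
          private int size = 0;    // number of elements currently stored

          public void enqueue(T x) {
              if (size == items.length)
                  reallocate();                          // occasional O(N) copy
              items[(front + size) % items.length] = x;  // wrap around the end
              size++;
          }

          @SuppressWarnings("unchecked")
          public T dequeue() {
              if (size == 0)
                  throw new java.util.NoSuchElementException();
              T x = (T) items[front];
              items[front] = null;
              front = (front + 1) % items.length;
              size--;
              return x;
          }

          public int size()      { return size; }
          public boolean empty() { return size == 0; }

          // Copy into an array twice as large, unwrapping the circular
          // layout; this is why enqueue is only *amortized* O(1).
          private void reallocate() {
              Object[] bigger = new Object[2 * items.length];
              for (int i = 0; i < size; i++)
                  bigger[i] = items[(front + i) % items.length];
              items = bigger;
              front = 0;
          }
      }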

Day 2.  Monday, September 25. 

  1. Definitions of Big-O, Little-o, Big-Omega, and Theta.  See the beginning of Chapter 2; the definitions are restated after this list.
  2. Linked list implementation of the Set ADT needs run time of O(N²) for N operations.
  3. Comparison of time N² microseconds versus time N log N microseconds.  See the online Table of Runtimes.
  4. Definitions of height, depth, root, leaf for trees.
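
For reference, the definitions in LaTeX notation, with little-o stated the way Weiss states it (assuming that is the version used in class):

      \begin{align*}
      f(N) = O(g(N))      &\iff \exists\, c, N_0 > 0 \text{ such that } f(N) \le c\,g(N) \text{ for all } N \ge N_0, \\
      f(N) = \Omega(g(N)) &\iff \exists\, c, N_0 > 0 \text{ such that } f(N) \ge c\,g(N) \text{ for all } N \ge N_0, \\
      f(N) = \Theta(g(N)) &\iff f(N) = O(g(N)) \text{ and } f(N) = \Omega(g(N)), \\
      f(N) = o(g(N))      &\iff f(N) = O(g(N)) \text{ and } f(N) \ne \Theta(g(N)).
      \end{align*}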

Day 3.  Wednesday, September 27. 

  1. Trees and binary trees.
  2. Implementation of trees with children of a node in a linked list.  Implementation of binary trees with children pointers.
  3. Binary search trees.  The key ordering property of binary search trees.
  4. Algorithms for contains(), add() and remove() (a.k.a. find(), insert() and delete()) for binary search trees.  These were covered in class only informally.  You must read the textbook and understand the details of Weiss's implementations.  In particular, understand: (i) why he returns TreeNodes or null; (ii) how the use of recursive calls means that a TreeNode does not need a pointer (reference) to its parent; (iii) why the binary search tree property is preserved by the operations; and (iv) why keys have type Comparable.  (A sketch of the recursive insert appears after this list.)
  5. Performance analysis:  each operation takes time O(height of the tree).  You should be sure to understand why this is true!
  6. Guidelines for academic integrity for programming assignments.
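
A sketch of the recursive insert in the spirit of Weiss's code (BinaryNode and its fields are illustrative names); note how returning the subtree root is what makes parent pointers unnecessary:

      // Recursive BST insertion; the caller reattaches the returned subtree.
      private BinaryNode insert(Comparable x, BinaryNode t) {
          if (t == null)
              return new BinaryNode(x);        // new leaf
          if (x.compareTo(t.element) < 0)
              t.left = insert(x, t.left);      // result reattached here...
          else if (x.compareTo(t.element) > 0)
              t.right = insert(x, t.right);
          // else: duplicate key, do nothing
          return t;                            // ...so no parent pointers are needed
      }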

Day 4.  Friday, September 29. 

  1. In-order traversal of binary trees.
  2. Bound on the number of nodes in a binary tree in terms of the height h.  So height = Omega(log N).  The goal is h = O(log N), so as to get a runtime of O(log N) for all individual operations on a binary search tree.
  3. AVL trees.  The AVL property.
  4. Theorem:  h= O(log N) for AVL trees.  Proof is postponed.
  5. Single rotations.  Double rotations.  (A sketch of a single rotation appears after this list.)
  6. How to rebalance an AVL tree after adding a new leaf to the tree.  You should read the corresponding parts of the textbook and understand how the recursive implementation of Weiss's insert works.  In particular, why does he not take advantage of the fact that only one rotation is needed?  Why does his insert(x, t) method return an AvlNode object?  And, verify that his code properly updates the height values for the nodes after a rotation.
  7. We did not finish the last case of rebalancing an AVL tree after an add(); we will cover this on Monday.
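
A sketch of the single rotation with the left child, roughly in Weiss's style (AvlNode and the height() helper are assumed):

      // Rotate node k2 with its left child k1; heights are updated
      // bottom-up, and the new subtree root k1 is returned so the
      // recursive caller can reattach it.
      private AvlNode rotateWithLeftChild(AvlNode k2) {
          AvlNode k1 = k2.left;
          k2.left = k1.right;
          k1.right = k2;
          k2.height = Math.max(height(k2.left), height(k2.right)) + 1;
          k1.height = Math.max(height(k1.left), k2.height) + 1;
          return k1;
      }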

Day 5.  Monday, October 2. 

  1. The rest of the rebalancing for add() in AVL trees.
  2. Iterators.  Discussion of how they work in general, and how they work in Java.  You should look at the online Java documentation for more information on how to use iterators.  (A minimal usage example appears after this list.)
  3. Threaded AVL trees. 
  4. How to rebalance an AVL tree after a remove() operation.  I have posted some figures on the web showing how to rebalance after deletion.
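
A minimal, self-contained example of the java.util.Iterator protocol:

      import java.util.Iterator;
      import java.util.LinkedList;
      import java.util.List;

      public class IteratorDemo {
          public static void main(String[] args) {
              List<String> list = new LinkedList<>();
              list.add("one");  list.add("two");  list.add("three");
              Iterator<String> it = list.iterator();  // obtain an iterator
              while (it.hasNext())                    // any elements left?
                  System.out.println(it.next());      // advance and return the next one
          }
      }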

Day 6.  Wednesday, October 4. 

  1. Tree traversal orders: in-order, pre-order, post-order, level-order.  (A sketch of the depth-first traversals appears after this list.)
  2. Proof of the theorem that  h=O(log N)  for AVL trees.
  3. Advice on how to write your ThreadedAvlTree program.  See also the page about Software Aids and Suggestions for Programming Threaded AVL Trees.
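
A sketch of the depth-first traversal orders (BinaryNode and visit() are illustrative):

      // In-order traversal: left subtree, then the node, then right subtree.
      private void inOrder(BinaryNode t) {
          if (t == null) return;
          inOrder(t.left);     // left subtree first
          visit(t);            // then the node itself
          inOrder(t.right);    // then the right subtree
      }
      // Pre-order visits t *before* the two recursive calls; post-order
      // visits t *after* them.  Level-order uses a queue instead of recursion.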

Day 7.  Friday, October 6. 

  1. Discussion of virtual memory and paging; disk accesses are much more expensive than main memory accesses.  Tree search methods will behave poorly once main memory is exhausted.
  2. B-trees.  Purposes: (a) disk-bound data structures, and (b) a substitute for binary search trees.
  3. The B-tree data structure.  Algorithm for find() or contains().  (A sketch appears after this list.)
  4. B-tree insertion algorithm.
  5. B-tree deletion algorithm.
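
A sketch of B-tree search for the variant that stores keys in internal nodes; the variant in the text, which keeps all data at the leaves, differs in details.  BTreeNode and its fields are illustrative:

      // Within each node, keys[0..numKeys-1] are sorted and children[i]
      // leads to keys between keys[i-1] and keys[i].
      private boolean contains(Comparable x, BTreeNode t) {
          if (t == null) return false;
          int i = 0;
          while (i < t.numKeys && x.compareTo(t.keys[i]) > 0)
              i++;                                     // first key >= x
          if (i < t.numKeys && x.compareTo(t.keys[i]) == 0)
              return true;                             // found in this node
          return t.isLeaf ? false : contains(x, t.children[i]);
      }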

Day 8.  Monday, October 9. 

  1. Updates on the AVL tree programming assignment.
  2. Splay trees.  Purposes and algorithms.
  3. Beginning of hashing. 
  4. Collisions.
  5. Separate chaining. 
  6. The duplicate birthday phenomenon.  (A quick calculation appears after this list.)
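
A quick calculation of the duplicate birthday phenomenon: with 365 equally likely birthdays, the probability that k people all have distinct birthdays is the product of (365 - i)/365 for i < k, and it drops below 1/2 already at k = 23.

      public class Birthday {
          public static void main(String[] args) {
              double pNoDup = 1.0;   // P(no duplicate so far)
              for (int k = 1; k <= 60; k++) {
                  pNoDup *= (365.0 - (k - 1)) / 365.0;  // k-th person avoids k-1 taken days
                  if (k == 23 || k == 60)               // prints ~0.507 and ~0.994
                      System.out.println(k + " people: P(duplicate) = " + (1 - pNoDup));
              }
          }
      }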

Day 9.  Wednesday, October 11. 

  1. Examples of good and bad hash functions.
  2. Java's hash function for strings.  (A re-implementation appears after this list.)
  3. Cryptographic applications: hashing for message authentication.  (This topic is not required knowledge for the rest of Math 176.)
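
Java's hash function for strings computes s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1] in 32-bit arithmetic.  A re-implementation via Horner's rule, for illustration:

      // Equivalent to String.hashCode(); overflow wraps mod 2^32 as in Java.
      static int stringHash(String s) {
          int h = 0;
          for (int i = 0; i < s.length(); i++)
              h = 31 * h + s.charAt(i);   // Horner's rule, multiplier 31
          return h;
      }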

Day 10.  Friday, October 13. 

  1. Practical aspects of hash functions.  Using Java's hashCode.  The difference between "mod" and "%".  (See the sketch after this list.)
  2. Open addressing.
  3. Linear probing.  Find, add algorithms. Primary clustering.
  4. Lazy deletion algorithm for open addressing.
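
The "mod" versus "%" difference matters in code because Java's % can return a negative result when the hash code is negative; a true nonnegative mod needs one adjustment.  A minimal helper (name illustrative):

      // In Java, (-7) % 10 == -7, which is not a valid table index.
      static int mod(int hashCode, int tableSize) {
          int r = hashCode % tableSize;
          return (r < 0) ? r + tableSize : r;   // shift into 0..tableSize-1
      }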

Day 11.  Monday, October 16. 

  1. Rehashing.  (A sketch appears after this list.)
  2. Load factors.
  3. Non-lazy deletion for linear probing.  Algorithm available on-line too.
  4. Next programming assignment.
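
A sketch of rehashing for an open-addressing table; HashEntry, isActive, add() and nextPrime() are assumed helpers in the style of the text:

      // Allocate a larger prime-sized table and reinsert every element
      // that has not been lazily deleted; add() recomputes each home slot.
      private void rehash() {
          HashEntry[] old = table;
          table = new HashEntry[nextPrime(2 * old.length)];
          size = 0;
          for (int i = 0; i < old.length; i++)
              if (old[i] != null && old[i].isActive)
                  add(old[i].element);
      }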

Day 12.  Wednesday, October 17. 

  1. More on the programming assignment.
  2. How to "break" Java's hashCode() method for strings and generate strings with the same hash code values.  (An example appears after this list.)
  3. Fair comparison of load factors for open addressing versus separate chaining.
  4. Quadratic probing.  Works if the load factor is < 0.5.
  5. A prime table size and a load factor < 0.5 guarantee that quadratic probing always finds an empty cell.
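
The "break" rests on a two-character collision: 'A'*31 + 'a' and 'B'*31 + 'B' are both 2112, so swapping an "Aa" block for a "BB" block never changes a string's hash code, and k blocks give 2^k colliding strings:

      public class HashCollisions {
          public static void main(String[] args) {
              // All four strings are built from two blocks, each "Aa" or "BB",
              // so all four print the same hash code.
              String[] s = { "AaAa", "AaBB", "BBAa", "BBBB" };
              for (int i = 0; i < s.length; i++)
                  System.out.println(s[i] + " -> " + s[i].hashCode());
          }
      }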

Day 13.  Friday, October 19. 

  1. Double hashing.  Algorithms.
  2. Proof of correctness of double hashing.
  3. Priority queues and applications.
  4. BST's can implement priority queues in O(log n) time per operation.
  5. Binary heaps can implement priority queues more efficiently.
  6. Complete binary tree property of binary heaps.
  7. Heap property: A key must be less than or equal to the keys of its children.
  8. Insert, findMin, deleteMin.
  9. PercolateUp.  (A sketch of insert via percolate-up appears after this list.)
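
A sketch of insert via percolate-up, for an array-based heap with the root at index 1 (growing the array when full is omitted):

      public void insert(Comparable x) {
          int hole = ++size;                               // tentative new leaf position
          while (hole > 1 && x.compareTo(array[hole / 2]) < 0) {
              array[hole] = array[hole / 2];               // slide the parent down
              hole /= 2;                                   // move the hole up
          }
          array[hole] = x;                                 // heap property restored
      }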

Day 14.  Monday, October 23. 

  1. More on binary heaps.  Review of Friday's material.
  2. Using bit operations to compute the indices of children and parents.  (See the sketch after this list.)
  3. PercolateDown.
  4. DeleteMin.
  5. BuildHeap.
  6. Proof of the O(N) runtime for buildHeap.
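
With the root at index 1, the index arithmetic is pure bit shifting: parent(i) = i >> 1, leftChild(i) = i << 1, rightChild(i) = (i << 1) | 1.  A sketch of percolate-down using these (array and size fields assumed):

      private void percolateDown(int hole) {
          Comparable tmp = array[hole];                    // element being placed
          while ((hole << 1) <= size) {
              int child = hole << 1;                       // left child
              if (child != size && array[child + 1].compareTo(array[child]) < 0)
                  child++;                                 // right child is smaller
              if (array[child].compareTo(tmp) < 0)
                  array[hole] = array[child];              // move smaller child up
              else
                  break;
              hole = child;
          }
          array[hole] = tmp;
      }
      // buildHeap:  for (int i = size >> 1; i > 0; i--)  percolateDown(i);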

Day 15.  Wednesday, October 24. 

  1. Leftist heaps.  The merge operation.
  2. Null path lengths.  The leftist property.
  3. Proof that the right path of a leftist heap has length O(log N).
  4. The merge algorithm.  (A sketch appears after this list.)
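
A sketch of the merge algorithm (LeftistNode with element, left, right and npl fields is an illustrative layout):

      // Null path length, with npl(null) = -1 by convention.
      private int npl(LeftistNode t) { return (t == null) ? -1 : t.npl; }

      // Merge down the right paths, then swap children wherever the
      // leftist property would otherwise fail.
      private LeftistNode merge(LeftistNode h1, LeftistNode h2) {
          if (h1 == null) return h2;
          if (h2 == null) return h1;
          if (h2.element.compareTo(h1.element) < 0) {
              LeftistNode tmp = h1; h1 = h2; h2 = tmp;   // keep the smaller root
          }
          h1.right = merge(h1.right, h2);
          if (npl(h1.left) < npl(h1.right)) {            // restore leftist property
              LeftistNode tmp = h1.left; h1.left = h1.right; h1.right = tmp;
          }
          h1.npl = npl(h1.right) + 1;                    // update null path length
          return h1;
      }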

Day 16.  Friday, October 26. 

  1. Review of  leftist heaps.
  2. The class structures for leftist heaps.
  3. Implementing deleteMin, findMin and insert in terms of merge.  (A sketch appears after this list.)
  4. Skew heaps.
  5. Theorem on amortized cost of skew heap operations: O(log N) per operation.  Proof postponed.
  6. Disjoint Set ADT started.
  7. Equivalence relations.
  8. Example of connected components in a graph.
  9. Functional behavior of union and find in the Disjoint Set ADT.
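
Once merge is written, the other leftist-heap operations are one-liners; a sketch (empty-heap error handling omitted):

      public void insert(Comparable x) { root = merge(root, new LeftistNode(x)); }
      public Comparable findMin()      { return root.element; }
      public Comparable deleteMin() {
          Comparable min = root.element;
          root = merge(root.left, root.right);   // discard the root, merge its subtrees
          return min;
      }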

Day 17.  Monday, October 30. 

  1. More on Disjoint Set ADT.
  2. Algorithms for find() and union().  (A sketch appears after this list.)
  3. Weiss's union() works only on roots.  The algorithm in lecture works on arbitrary vertices.
  4. How the data for the Disjoint Set ADT is stored in an integer array.  Integer values as pointers.
  5. Union-by-size. 
  6. Union-by-height (also known as "union-by-rank").
  7. Theorem: log n bounds on height for union-by-size and union-by-rank.
  8. Path compression.
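
A sketch of the array representation discussed in class: s[i] holds the parent of i, or minus the size of i's tree if i is a root.  Shown with path compression and union-by-size; as in lecture, union accepts arbitrary vertices:

      public int find(int x) {
          if (s[x] < 0) return x;        // x is a root
          return s[x] = find(s[x]);      // path compression: repoint at the root
      }

      public void union(int x, int y) {
          int rx = find(x), ry = find(y);
          if (rx == ry) return;          // already in the same set
          if (s[rx] > s[ry]) {           // make rx the root of the larger tree
              int t = rx; rx = ry; ry = t;
          }
          s[rx] += s[ry];                // rx absorbs ry's size
          s[ry] = rx;                    // attach the smaller tree's root
      }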

Day 18.  Wednesday, November 1. 

  1. More on Disjoint Set ADT.
  2. Modification to return the minimum numbered element of the set.  Use a second M[] array.
  3. Proof of log n upper bound on height for union-by-size.  Proof of the corresponding result for union-by-height is similar.
  4. The Ackermann function; the "tower of twos" superexponential function.
  5. The log* function.  The inverse Ackermann function (alpha).  Very slowly growing functions.  (A sketch of log* appears after this list.)
  6. Theorem: log*n and alpha(n) upper bounds on tree height for union-find with path compression and either union-by-size or union-by-height.
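
A tiny illustration of how slowly log* grows: it counts how many times log2 must be applied before the value drops to 1 or below, and it is only 5 even for n = 2^65536.

      static int logStar(double n) {
          int count = 0;
          while (n > 1) {
              n = Math.log(n) / Math.log(2);   // apply log base 2 once
              count++;
          }
          return count;                        // e.g. logStar(65536) == 4
      }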

Day 19.  Friday, November 3. 

  1. MIDTERM EXAM.  Covers material through binary heaps.  Any topics from the assignments or discussed in lecture may appear on the midterm.  There will definitely be questions on asymptotic bounds.  No proofs.

Day 20.  Monday, November 6. 

  1. Expected number of comparisons for hashing with separate chaining and with open addressing. Idealized calculations.
  2. Amortized cost analysis.  General framework.  Cumulative time.  Potential function.  (The framework is restated in symbols after this list.)
  3. Handout on amortized cost analysis available in PDF format and PostScript format.
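
In symbols (following the usual potential-function framework; notation may differ slightly from the handout), with c_i the actual cost of the i-th operation and Phi_i the potential after it:

      \begin{align*}
      a_i &= c_i + \Phi_i - \Phi_{i-1} \quad \text{(amortized cost)}, \\
      \sum_{i=1}^{n} c_i &= \sum_{i=1}^{n} a_i + \Phi_0 - \Phi_n
          \;\le\; \sum_{i=1}^{n} a_i \quad \text{provided } \Phi_n \ge \Phi_0 .
      \end{align*}

The sum telescopes, so bounding every amortized cost bounds the cumulative time.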

Day 21.  Wednesday, November 8. 

  1. Amortized cost analysis for dynamically resizable arrays.  (A worked version appears after this list.)
  2. An outline of the proof of the amortized cost analysis for skew heaps.  The rest of the proof can be found in Chapter 11 of the textbook.
  3. The next programming assignment will involve writing the heart of a search engine (ranking documents by how well they match a two-word search criterion).  This will be posted to the web in the next few days and discussed in class on Monday.
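
A worked version of the resizable-array analysis, assuming the array doubles when full and stays at least half full so that the potential is nonnegative.  Take Phi = 2n - capacity.  An ordinary add costs 1 and raises Phi by 2; an add that triggers doubling when n = N = capacity costs N + 1 and changes Phi from N to 2:

      \[
      \Phi = 2n - \text{capacity}, \qquad
      a_{\text{ordinary}} = 1 + 2 = 3, \qquad
      a_{\text{doubling}} = (N + 1) + (2 - N) = 3 .
      \]

So every add has amortized cost at most 3, i.e. O(1).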

Day 22.  Monday, November 13. 

  1. The details of the proof of the amortized run time bound for skew heaps.
  2. Programming assignment #4.
  3. Inverted lists for finding occurrences of words in large document sets.

Day 23.  Wednesday, November 15. 

  1. More details on the implementation of programming assignment #4.  HashMap, ArrayList, etc.  (A sketch appears after this list.)
  2. Walking through inverted lists.  An online handout is available.
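
A minimal sketch of an inverted list built with the classes mentioned above; the word-to-postings layout is illustrative, not the assignment's required design:

      import java.util.ArrayList;
      import java.util.HashMap;

      public class InvertedIndex {
          // word -> list of positions at which it occurs, in increasing order
          private HashMap<String, ArrayList<Integer>> index = new HashMap<>();

          public void add(String word, int position) {
              ArrayList<Integer> postings = index.get(word);
              if (postings == null) {                // first occurrence of this word
                  postings = new ArrayList<>();
                  index.put(word, postings);
              }
              postings.add(position);                // positions arrive in order
          }

          public ArrayList<Integer> occurrences(String word) {
              return index.get(word);                // null if the word never occurs
          }
      }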

Day 24.  Friday, November 17. 

  1. Skip lists: see Weiss, pp. 399-402 and 468-473.
  2. Introduction to Skip Lists, motivated by scanning Inverted Lists.
  3. The level of an element in a skip list.  The level of an edge in a skip list.
  4. Finding a key in a skip list, or the position after which the key would be inserted.  (A sketch appears after this list.)
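
A sketch of the search (SkipNode with a key and an array next[] of forward links, plus a header node and topLevel field, are illustrative):

      // Start at the top level of the header and drop down a level each
      // time the next key along the current level would overshoot x.
      private SkipNode find(Comparable x) {
          SkipNode current = header;
          for (int level = topLevel; level >= 0; level--)
              while (current.next[level] != null
                      && current.next[level].key.compareTo(x) < 0)
                  current = current.next[level];   // advance along this level
          // current is now the node after which x would be inserted at level 0
          SkipNode candidate = current.next[0];
          return (candidate != null && candidate.key.compareTo(x) == 0)
                  ? candidate : null;
      }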

Day 25.  Monday, November 20. 

  1. Discussion of how search engines work.
  2. Discussion of what "close occurrences" are.
  3. Insertion into skip lists.
  4. Randomized skip lists.  (A sketch of the random level choice appears after this list.)
  5. Deterministic skip lists, via a reduction to B-trees.
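
A sketch of the usual random level choice for randomized skip lists: flip a fair coin until it comes up tails, so level i is used with probability 2^(-(i+1)):

      private int randomLevel(java.util.Random rng, int maxLevel) {
          int level = 0;
          while (level < maxLevel && rng.nextBoolean())
              level++;                 // each extra level is half as likely
          return level;
      }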

Day 26.  Wednesday, November 22. 

  1. Red-black trees.  See Weiss, pp. 460-462.
  2. Proof of the O(log n) height bound for red-black trees.  (The shape of the argument is sketched after this list.)
  3. Insertion algorithm.
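
The shape of the height-bound argument, for reference: every root-to-null path contains the same number B of black nodes, red nodes are never adjacent, and a tree with B black nodes per path has at least 2^B - 1 internal nodes.  Hence:

      \[
      N \ge 2^{B} - 1 \;\Longrightarrow\; B \le \log_2(N + 1), \qquad
      h \le 2B \le 2\log_2(N + 1) = O(\log N).
      \]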

Day 27.  Monday, November 27. 

  1. Treaps.  See Weiss, pp. 481-483.
  2. Treaps are BST's with an extra priority field and are heaps with respect to priority.
  3. Insertion algorithm: ordinary BST insertion followed by single rotations.  (A sketch appears after this list.)
  4. Random BST's.  Treaps on arbitrary input and BST's with randomly ordered insertions yield the same probability distribution on the shape of the tree.
  5. Random trees as above are chosen by picking the root uniformly at random and then recursively constructing the two subtrees.
  6. Theorem: expected depth of random BST is O(log N).
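
A sketch of treap insertion, reusing the single rotations from the AVL lectures (TreapNode with a priority field and the rotation helpers are assumed; smaller priority means closer to the root):

      // Ordinary recursive BST insert, then a single rotation whenever a
      // child's priority beats its parent's, restoring the heap property.
      private TreapNode insert(Comparable x, TreapNode t) {
          if (t == null)
              return new TreapNode(x);              // random priority chosen here
          if (x.compareTo(t.element) < 0) {
              t.left = insert(x, t.left);
              if (t.left.priority < t.priority)
                  t = rotateWithLeftChild(t);
          } else if (x.compareTo(t.element) > 0) {
              t.right = insert(x, t.right);
              if (t.right.priority < t.priority)
                  t = rotateWithRightChild(t);
          }
          return t;
      }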

Day 28.  Wednesday, November 28. 

  1. Proof of the Theorem from Day 27.
  2. Huffman codes.  See Weiss, pp. 357-362.
  3. Code words.  Binary strings encoding symbols.
  4. Prefix codes.  Unique decoding.
  5. Trie representation of a prefix code.
  6. There is a one-to-one correspondence between binary tries and prefix codes.  (A decoding sketch appears after this list.)
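
A sketch of decoding with the trie of a prefix code (TrieNode with left, right and symbol fields is illustrative): walk left on 0 and right on 1, emit a symbol at each leaf, and restart; the prefix property makes the walk unambiguous.

      static String decode(TrieNode root, String bits) {
          StringBuilder out = new StringBuilder();
          TrieNode t = root;
          for (int i = 0; i < bits.length(); i++) {
              t = (bits.charAt(i) == '0') ? t.left : t.right;
              if (t.left == null && t.right == null) {   // leaf: emit its symbol
                  out.append(t.symbol);
                  t = root;                              // restart at the root
              }
          }
          return out.toString();
      }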

Day 29.  Friday, December 1.  Last day of class!

  1. Forthcoming.  (Huffman's algorithm for finding the optimal prefix code given symbol frequency counts; a sketch appears below.)
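
A sketch of the algorithm using a priority queue of tries, in case it is useful for review (HuffNode is an illustrative class; this is not code from lecture):

      import java.util.PriorityQueue;

      class HuffNode implements Comparable<HuffNode> {
          int freq; char symbol; HuffNode left, right;
          HuffNode(char s, int f) { symbol = s; freq = f; }    // leaf trie
          HuffNode(HuffNode l, HuffNode r) {                   // merged trie
              left = l; right = r; freq = l.freq + r.freq;
          }
          public int compareTo(HuffNode o) { return Integer.compare(freq, o.freq); }
      }

      class Huffman {
          // Repeatedly merge the two tries of lowest total frequency.
          static HuffNode build(char[] symbols, int[] freqs) {
              PriorityQueue<HuffNode> pq = new PriorityQueue<>();
              for (int i = 0; i < symbols.length; i++)
                  pq.add(new HuffNode(symbols[i], freqs[i]));
              while (pq.size() > 1)                    // N - 1 merges in all
                  pq.add(new HuffNode(pq.poll(), pq.poll()));
              return pq.poll();                        // root of the optimal trie
          }
      }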