  1. <style type="text/css">
  2.  
  3. </style>
  4. <h1><a name="Invitation to Algorithms with Scheme-TOC">Invitation to Algorithms with Scheme</a></h1>
  5.  
  6. <p><a href="mailto:r@rmloveland.com">R. M. Loveland</a></p>
  7. <p><em>Gardiner, New York, USA</em>.</p>
  8. <p><em>April 2020</em>.</p>
  9. <div id="toc"><ul><li class="h2"><a href="#Preface-TOC">Preface</a></li>
  10. <li class="h2"><a href="#Introduction-TOC">Introduction</a></li>
  11. <li class="h2"><a href="#Prerequisites-TOC">Prerequisites</a></li>
  12. <li class="h2"><a href="#Sorting-TOC">Sorting</a></li>
  14. <li class="h2"><a href="#Searching-TOC">Searching</a></li>
  15. <li class="h2"><a href="#Trees-TOC">Trees</a></li>
  16. <li class="h2"><a href="#Graphs-TOC">Graphs</a></li>
  17. <li class="h2"><a href="#Strings-TOC">Strings</a></li>
  18. <li class="h2"><a href="#A hash table library-TOC">A hash table library</a></li>
  19. <li class="h2"><a href="#A regular expression library-TOC">A regular expression library</a></li>
  20. <li class="h2"><a href="#Glossary-TOC">Glossary</a></li>
  21. <li class="h2"><a href="#Loading the book code into a Scheme-TOC">Loading the book code into a Scheme</a></li>
  22. <li class="h2"><a href="#Bibliography-TOC">Bibliography</a></li>
  23. </ul></div>
  24. <h2><a name="Preface-TOC">Preface</a></h2>
  25.  
  26. <p>Why another book about algorithms?</p>
  27. <p>There are already many books about algorithms.  Most provide detailed analyses of topics such as "Big O notation" and are aimed at an academic audience.  As a result, they strive to be comprehensive in their treatment of the topic, and may run to many hundreds of pages in length.</p>
<p>Most books use "industry-standard" programming languages like Java, C++, or even Python.  Some use pseudocode, which is arguably better, or at least not as ephemeral (the world will not always program computers in Java).  That's all very well, but we are interested in implementing algorithms in Scheme, which, unlike the ALGOL-derived languages, does not distinguish between statements and expressions, or between "fundamental" data types and "objects".</p>
<p>Why Scheme?  Because Java, C++, and Python do not have anything FUNDAMENTAL to say about computing.  They are languages of the moment. By contrast, Scheme is a small, well-designed language that is not tied to particular programming paradigms or hardware architectures. There have even been Scheme CPUs.  It is based on fundamental ideas about what an ideal language for expressing computational processes should look like.  As the introduction to the <em>Revised Report on the Algorithmic Language Scheme</em> puts it:</p>
<blockquote><p>Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary.  The Scheme dialect of Lisp demonstrates that a very small number of rules for forming expressions, with no restrictions on how they are composed, suffice to form a practical and efficient programming language that is flexible enough to support most of the major programming paradigms in use today.</p></blockquote>
  31. <p>When we say Scheme is not tied to any particular programming paradigm, we mean that it can be extended by the programmer to use any paradigm: imperative; functional; object-oriented; declarative; and more.</p>
<p>S-expressions, the fundamental data structure of Scheme programs, are different from the data structures used to represent other programming languages.  For one thing, S-expressions possess a conceptual unity: every expression takes the form of symbols inside nested parentheses, e.g., this procedure to compute the greatest common divisor of two numbers:</p>
  33. <pre>
  34. (define (gcd a b)
  35.   (if (= b 0)
  36.       a
  37.     (gcd b (remainder a b))))
  38. </pre>
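<p>For example, <code>(gcd 12 30)</code> evaluates to <code>6</code>.</p>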
  39.  
  40. <p>As a result of this unity, Lisp programs can express metalinguistic abstractions.  In other words, unlike in say, Java, Python, or Go, the programmer is able to add locally-defined syntax to the language that will allow her to better express her intent when describing a computational process.</p>
  41. <p>Finally, there is very little written material out there for the novice-to-intermediate Scheme programmer (though there are many excellent programs, some of which we will link to in the references). Most of the Scheme content that is available on the internet is either for experts, or misleads beginners with naive, tree-recursive solutions that perform badly (I am certainly guilty of this on my own blog).</p>
  42. <p>Naturally, there are a number of excellent books on Scheme programming, which we will link to from the bibliography.  However, to our knowledge there are no algorithms books that use Scheme.  That is the niche this book is attempting to fill.</p>
  43. <h2><a name="Introduction-TOC">Introduction</a></h2>
  44.  
  45. <p>What is an <em>algorithm</em>?</p>
  46. <p>An algorithm is a recipe for getting a job done.  Many algorithms are very simple, since many of the jobs we need a computer to do are (or appear to be) quite simple.  For example, we may need to sort a list of numbers, or calculate the distance between two towns on a map.</p>
  47. <p>"Sort a list of numbers" sounds simple, but there is some complexity there that needs to be addressed.  How long is the list?  Is it longer than the available memory on our machine?  Are the numbers in the list in totally random order?  Or are they almost sorted already?</p>
  48. <p>Depending on the answers to these questions, you'll need to choose one sorting algorithm over another.  Some take more memory (space).  Some require more CPU cycles (time).</p>
  49. <p>Unless you are doing very specialized optimization work, you can probably get along quite nicely knowing one or two of the most common algorithms in the area you're currently working in.  If you really need something more advanced, you can always look it up and implement it.</p>
  50. <p>And that brings us to the point of this book.  We will study one or two common algorithms of each type (sorting, searching, etc.).  Then, we will implement these algorithms in the Scheme programming language. As our mastery increases in later chapters, we will take on several "real world" implementation projects, such as writing our own hash tables and a regular expression matcher, all using the basic algorithms we've learned.</p>
  51. <p>Along the way, we will observe several (related) themes that repeat:</p>
  52. <ul>
  53. <li> <em>Divide and conquer</em>: break a problem into smaller sub-problems, solve the sub-problems, and combine the intermediate results to get a final answer.</li>
  54. </ul>
  55. <ul>
  56. <li> <em>Recursion</em>: Do it again, and again, and again, until you're done.</li>
  57. </ul>
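<p>To make these two themes concrete before we dive in, here is a small sketch (not part of the book's code) that sums a list of numbers by splitting it in half, summing each half recursively, and combining the two partial sums:</p>
<pre>
(define (sum-list xs)
  ;; List -&gt; Number
  ;; Divide and conquer: split XS in half, sum each half recursively,
  ;; then combine the two partial sums with +.
  (define (take xs n)               ; first N elements of XS
    (if (= n 0)
        '()
        (cons (car xs) (take (cdr xs) (- n 1)))))
  (cond ((null? xs) 0)
        ((null? (cdr xs)) (car xs))
        (else
         (let ((half (quotient (length xs) 2)))
           (+ (sum-list (take xs half))
              (sum-list (list-tail xs half)))))))
</pre>
<p>For example, <code>(sum-list '(1 2 3 4 5))</code> returns <code>15</code>.</p>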
  58. <h2><a name="Prerequisites-TOC">Prerequisites</a></h2>
  59.  
  60. <h3><a name="Familiarity with Scheme-TOC">Familiarity with Scheme</a></h3>
  61.  
  62. <p>This book does not try to teach the Scheme language.  We assume you have already encountered an introduction to Scheme elsewhere. In particular, you should be pretty comfortable with recursion, since we'll be using it a lot (Scheme uses recursion for iteration by default).  By "comfortable with recursion", we mean that you understand code that looks like this:</p>
<pre>
;; Peano-style addition: DECR and INCR are assumed to be procedures
;; that subtract and add 1, respectively.  (Note that this definition
;; shadows the built-in +.)
(define (+ a b)
  (if (= a 0)
      b
      (+ (decr a)
         (incr b))))
</pre>
  70.  
  71. <p>A good book for learning the basics of Scheme is <em>The Little Schemer</em> by Dan Friedman &amp; Matthias Felleisen.  For more reading recommendations, see the Bibliography.</p>
  72. <h3><a name="Access to a computer with a Scheme Interpreter-TOC">Access to a computer with a Scheme Interpreter</a></h3>
  73.  
  74. <p>This book's code has been tested with the following combinations of Scheme implementation and operating system:</p>
  75. <p><font color="red">TODO</font>: Fill in tested version numbers in the table below.</p>
<table>
<tr><td> Implementation </td><td> Version </td><td> Platforms         </td></tr>
<tr><td> Larceny        </td><td> X       </td><td> Windows 10, Linux </td></tr>
<tr><td> Chez (Petite)  </td><td> X       </td><td> Windows 10        </td></tr>
<tr><td> Gambit         </td><td> X       </td><td> Windows 10, Linux </td></tr>
<tr><td> Scheme 48      </td><td> 1.9.2   </td><td> Windows 10, Linux </td></tr>
<tr><td> Kawa           </td><td> 3.0     </td><td> Windows 10, Linux </td></tr>
<tr><td> JScheme        </td><td> 7.2     </td><td> Windows 10        </td></tr>
</table>
  79.  
  80. <h3><a name="Required libraries-TOC">Required libraries</a></h3>
  81.  
  82. <p>The portable module system (which has been tested in all of the implementation/OS combinations in the previous section) can be downloaded from <a href="https://github.com/rmloveland/load-module">https://github.com/rmloveland/load-module</a>.</p>
  83. <p>The code for working with this book can be found at <a href="https://github.com/rmloveland/intro-algos-with-scheme/tree/master/code">https://github.com/rmloveland/intro-algos-with-scheme/tree/master/code</a>.</p>
  84. <p>To load the book's code (<font color="red">TODO</font>: Update these instructions):</p>
  85. <ul>
  86. <li> Copy <code>load-module/load-module.scm</code> into the book's <code>intro-algos-with-scheme/code</code> directory.</li>
  87. <li> Change into the <code>intro-algos-with-scheme/code</code> directory.</li>
  88. <li> Start your Scheme.</li>
  89. <li> Load the module system as follows: <code>(load "load-module.scm")</code></li>
  90. <li> Load the required modules as described at the beginning of each chapter. E.g., <code>(load-module 'mergesort)</code>.</li>
  91. </ul>
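<p>For example, a session might look like the following (the shell prompt, the choice of Scheme 48, and the module name are just for illustration; substitute your own Scheme and the module named at the start of the chapter you're reading):</p>
<pre>
$ cd intro-algos-with-scheme/code
$ scheme48
&gt; (load "load-module.scm")
&gt; (load-module 'mergesort)
</pre>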
  92. <h3><a name="Typographical Conventions-TOC">Typographical Conventions</a></h3>
  93.  
  94. <p>Procedure names are written in capital letters like this:</p>
  95. <p><code>CHAR-READY?</code></p>
  96. <p>Code samples are usually set off from the surrounding text like this:</p>
  97. <pre>
  98. (define (double n)
  99.   (+ n n))
  100. </pre>
  101.  
<p>Most procedures are annotated with comments about the input and output types we expect.  These comments serve as a form of documentation. They make it a little easier to remember (at least part of) what a procedure does without having to read the entire text of the procedure.</p>
<p>For example, given the following procedure we can see that it takes two Numbers and returns a Number.</p>
  104. <pre>
  105. (define (plus a b)
  106.   ;; Num Num -&gt; Num
  107.   (+ a b))
  108. </pre>
  109.  
<p>Procedures that are meant to be "internal" (that is, not part of a user-visible API) are prefixed with a <code>^</code> character, e.g.,</p>
  111. <pre>
  112.   (define (^merge pred a b)
  113.     ;; Pred List List -&gt; List
  114.     ;; Note: this implementation is a no-op.
  115.     '())
  116. </pre>
  117.  
  118. <p><font color="red">TODO</font>: Finish filling this in -- look at an ORA book for ideas.</p>
  119. <h2><a name="Sorting-TOC">Sorting</a></h2>
  120.  
  121. <p>The list data structure is ubiquitous in Scheme programming.  One of the most common patterns is to gather up a list of elements, and then process them in turn.  (When we say that something is a "list" in Scheme, we mean that it's actually a linked list.)</p>
  122. <p>For example, we might like to walk down through a directory of files, checking each file for some interesting property.</p>
  123. <pre>
(define (dir-walk* interesting? queue)
  ;; Pred List -&gt; Thunk
  ;; Return a generator: each call yields the next file from QUEUE that
  ;; satisfies INTERESTING?, or #f.  FILE-DIRECTORY? and DIRECTORY-FILES
  ;; are assumed to be provided by the host Scheme; they are not R5RS.
  (let ((file #f))
    (lambda ()
      (if (not (null? queue))
          (begin (set! file (car queue))
                 (set! queue (cdr queue))
                 (cond
                  ((file-directory? file)
                   (let ((new-files (directory-files file)))
                     (set! queue
                           (append queue
                                   (filter interesting? (map (lambda (filename)
                                                               (string-append file "/" filename))
                                                             new-files))))
                     (if (interesting? file) file #f)))
                  ((interesting? file) file)
                  (else #f)))
          #f))))
  142. </pre>
  143.  
  144. <p><font color="red">TODO</font>: Intro needs to be completely (re)written.</p>
  145. <p>We'll begin our discussion of sorting by implementing the merge sort algorithm.  This is useful for a few reasons:</p>
  146. <ul>
  147. <li> It's a pretty good sorting algorithm.</li>
  148. <li> It's a good example of the generally useful "divide and conquer" strategy for algorithm design.</li>
  149. <li> It suits Scheme because of the way Scheme lists are actually "linked lists".</li>
  150. </ul>
  151. <p>It's also useful because we need to be able to sort before we can search.  As a general rule in this book, we will not use a technique until we have implemented the prerequisite algorithms.  As we just said, a list has to be sorted before it can be searched (efficiently).</p>
  152. <p>If you can understand the merge sort implementation described in this chapter, you should have no problem learning more about (and implementing your own versions of) other sorting algorithms that you encounter.</p>
<p>We'll look at several implementations of merge sort, starting with the naive version you can easily find on the internet, which generates a recursive process and performs quite badly.  We will then improve on that by implementing an iterative version that is necessarily a bit more complex (but not terribly so), and which performs better.  (An iterative process is one that does not consume growing amounts of stack space while it runs.)</p>
  155. <p>   <font color="red">TODO</font>: Write the naive version and some tests that exercise it.  We can run those same tests later on against the better implementation.</p>
  156. <p>Merge sort is most easily implemented in two parts that work together:</p>
  157. <ul>
  158. <li> An internal merging procedure that merges two partially sorted lists.  This procedure handles splicing two sublists together in an ordering determined by a predicate such as <code><</code>. It walks both lists, putting things in pairwise order based on which of them is (in the case of <code><</code>) less than the other.  However, it doesn't fully sort the lists; it only works on one item from each list at a time.  That's why we need the driver procedure (described next).</li>
  159. </ul>
  160. <ul>
  161. <li> A user-accessible driver procedure that calls the internal merging procedure repeatedly until all of the sub-lists are sorted, at which point the entire list is sorted.  This feels a bit magical, but in fact there is no magic to it at all.</li>
  162. </ul>
  163. <p>Sometimes it's easier to understand something by looking at its inputs and outputs.  Here are a few examples showing expected inputs and outputs of <code>MERGE</code>, which is our name for the internal merging procedure.</p>
<p>Note: to run the code below, first load the merge-sort module:</p>
  165. <pre>
  166.   &gt; (load-module 'merge-sort)
  167.   &gt; (merge &lt; '(2 191) '(18 45))
  168.   (2 18 45 191)
  169.  
  170.   &gt; (merge string&lt;? '("scheme" "hacking") '("is" "fun"))
  171.   ("is" "fun" "scheme" "hacking")
  172. </pre>
  173.  
<p>You will now have the following additional procedures in your environment:</p>
  175. <ul>
  176. <li> MERGE</li>
  177. <li> MERGE-SORT</li>
<li> MERGE-SORT-TRACED works just like MERGE-SORT, but prints some additional output to make it easier to visualize what is happening.  Specifically, it prints the intermediate results of recursive calls to MERGE to make it easier to see how MERGE-SORT works by building up bigger and bigger sorted sub-lists and merging them together.</li>
  179. </ul>
  180. <p>The user-facing driver procedure will handle calling <code>MERGE</code> repeatedly and splicing the partially sorted lists returned by successive calls to <code>MERGE</code> together into a final list that is fully sorted.</p>
  181. <p>Here are examples showing some expected inputs and outputs of MERGE-SORT:</p>
  182. <pre>
  183.   &gt; (merge-sort '(17 51 55 13 12 75 98 48 98 89 68 86 89 51) &lt;)
  184.   (12 13 17 48 51 51 55 68 75 86 89 89 98 98)
  185. </pre>
  186.  
  187. <p>As we said above, <code>MERGE</code> splices two lists together in a pairwise ordering determined by a predicate.  In other words, it walks two lists, let's call them <em>A</em> and <em>B</em>, and compares the elements of each list in turn.  It keeps another list, <em>C</em>, where it stores the output.  If the current element from <em>A</em> is less than the current element from <em>B</em>, <code>MERGE</code> pushes it onto the output list <em>C</em>. Otherwise, it pushes the current element from <em>B</em> onto the output list.</p>
  188. <p>Once <code>MERGE</code> has traversed both of its input lists <em>A</em> and <em>B</em>, and processed all the elements of each, it returns its result.</p>
<p>For example, <code>(let ((a '(12 75)) (b '(1024 55))) (merge < a b))</code> returns <code>(12 75 1024 55)</code>, which is pairwise ordered but not sorted.  To actually sort those four numbers, <code>MERGE-SORT</code> (calling <code>MERGE</code> repeatedly) follows these steps:</p>
  190. <ul>
  191. <li> Grab the first item of <code>a</code> (12) and the first item of <code>b</code> (1024).</li>
  192. </ul>
  193. <ul>
  194. <li> Compare 12 and 1024 and pairwise order them using the predicate <code><</code>, yielding an intermediate list <code>'(12 1024)</code>.</li>
  195. </ul>
  196. <ul>
  197. <li> Grab the next element of <code>a</code> (75) and <code>b</code> (55).</li>
  198. </ul>
  199. <ul>
  200. <li> Compare 75 and 55 using <code><</code>, and push the now-ordered pair onto another intermediate list to make <code>'(55 75)</code>.  We now have two intermediate lists, (55 75) and (12 1024).</li>
  201. </ul>
  202. <ul>
<li> MERGE-SORT then calls MERGE on the intermediate lists '(12 1024) and '(55 75).  Since these intermediate lists are each themselves sorted from the previous pass, MERGE (again) walks them in pairwise order, grabbing 12 and 55 and sorting them to make '(12 55), and grabbing 1024 and 75 and sorting them to make '(75 1024).  It then puts the two together to make (12 55 75 1024).</li>
  204. </ul>
  205. <p>For another way of looking at how the above process works, use the MERGE-SORT-TRACED procedure from the 'mergesort module.</p>
  206. <pre>
  207. &gt; (merge-sort-traced '(12 1024 75 55) &lt;)
  208.  
  209. 12
  210. 1024
  211.  
  212. 75
  213. 55
  214.  
  215. (55 75)
  216. (12 1024)
  217.  
  218. (12 55 75 1024)
  219. </pre>
  220.  
  221. <p>As noted above, the output of <code>MERGE</code> is not a sorted list.  It has been put into "pairwise order".  This is a fancy way of saying that we only compared two elements at a time, one each from <em>A</em> and <em>B</em>, as we were building it.</p>
  222. <p>Based on our description of the merge algorithm above, let's try to generate some inputs to MERGE and guess their expected outputs.</p>
  223. <p>Let's look at the outputs of a few more calls to MERGE with 1, 2, 3, and 4-element lists, respectively, to build our intuition for its behavior.</p>
  224. <pre>
(merge &lt; '(769) '(485))
; =&gt; (485 769)
 
(merge &lt; '(769 1023) '(485 99))
; =&gt; (485 99 769 1023)
 
(merge &lt; '(769 1023 3) '(485 99 293))
; =&gt; (485 99 293 769 1023 3)
 
(merge &lt; '(769 1023 3 12) '(485 99 293 13))
; =&gt; (485 99 293 13 769 1023 3 12)
  236. </pre>
  237.  
  238. <p>We can see that:</p>
  239. <ul>
  240. <li> Given 1-element lists, it sorts the 2 elements using the predicate and returns a sorted list.</li>
<li> Given 2-element lists <em>A</em> and <em>B</em>, it returns <code>'(b1 b2 a1 a2)</code>.</li>
<li> Given 3-element lists, it returns <code>'(b1 b2 b3 a1 a2 a3)</code>.</li>
<li> Given 4-element lists, it returns <code>'(b1 b2 b3 b4 a1 a2 a3 a4)</code>.</li>
  244. </ul>
  245. <p>Are you seeing the pattern?  Oddly, it appears that MERGE is only merging lists according to a check on the first elements of each list using the supplied predicate, and then leaving the rest of the lists in their original order.</p>
  246. <p>Clearly this is useful but not sufficient to get a list sorted! Intuitively we can see that MERGE is useful as it gets down to shorter lists, since it is able to put everything in sorted order in the "1-element lists" case.</p>
  247. <p>Next let's look at how MERGE behaves with some deliberately "odd" inputs.</p>
  248. <pre>
  ; Case 1
  (merge &lt; '() '())
  ; =&gt; ()
 
  ; Case 2
  (merge &lt; 19 23)
  ; =&gt; (19 23)
 
  ; Case 3
  (merge &lt; 19 '(1 2 3))
  ; =&gt; (1 2 3 19)
 
  ; Case 4
  (merge &lt; '() '(485))
  ; =&gt; (485)
 
  ; Case 5
  (merge &lt; 19 '())
  ; =&gt; (19)
 
  ; Case 6
  (merge &lt; '(12) '(485 99 293 13))
  ; =&gt; (12 485 99 293 13)
 
  ; Case 7
  (merge &lt; '(12023) '(485 99 293 13))
  ; =&gt; (485 99 293 13 12023)
  276. </pre>
  277.  
  278. <p>We can codify the above behavior in the following specification of cases:</p>
  279. <ul>
  280. <li> The "base case" of <code>(merge '() '() <)</code> should return (). (Case 1)</li>
  281. </ul>
  282. <ul>
  283. <li> MERGE needs to work on numbers as well as lists. (Case 2, 3)</li>
  284. </ul>
  285. <ul>
  286. <li> MERGE should accept an empty list as one of its inputs.</li>
  287. </ul>
  288. <ul>
<li> MERGE should accept as input 2 lists which do not have the same number of elements.</li></ul>
  291.  
<p>We can see from the test cases we generated above that MERGE will have a number of cases to handle, namely:</p>
  293. <ul>
  294. <li> MERGE takes two lists.  If the lists contain strings or numbers as elements, we will need to XXX</li>
  295. </ul>
  296. <ul>
  297. <li> If LEFT and RIGHT are both numbers, "listify" them so MERGE-AUX can work with them.</li>
  298. </ul>
  299. <ul>
  300. <li> If LEFT is just a number, "listify" it so MERGE-AUX can work with it.</li>
  301. </ul>
  302. <ul>
  303. <li> Likewise, if RIGHT is just a number, "listify" it for MERGE-AUX.</li>
  304. </ul>
  305. <ul>
  306. <li> If LEFT and RIGHT are empty, we're done merging. Return the result.</li>
  307. </ul>
  308. <ul>
  309. <li> If LEFT and RIGHT still have elements to be processed, call PRED and run them through MERGE-AUX again.</li>
  310. </ul>
  311. <ul>
  312. <li> If the cases above haven't matched, and LEFT is not NULL?, call MERGE-AUX again.</li>
  313. </ul>
  314. <ul>
  315. <li> If the cases above haven't matched, and RIGHT is not NULL?, call MERGE-AUX again.</li>
  316. </ul>
  317. <pre>
  318. (define (^merge pred l r)
  319.   ;; Procedure List List -&gt; List
  320.   (define (merge-aux pred left right result)
  321.     (cond
  322.      ((and (number? left)     ; Case 1.
  323.            (number? right))
  324.       (merge-aux pred (list left) (list right) result))
  325.      ((number? left)          ; Case 2.
  326.       (merge-aux pred (list left) right result))
  327.      ((number? right)         ; Case 3.
  328.       (merge-aux pred left (list right) result))
  329.      ((and (null? left)       ; Case 4.
  330.            (null? right))
  331.       (reverse result))
  332.      ((and (not (null? left)) ; Case 5.
  333.            (not (null? right)))
  334.       (if (pred (car left)
  335.                 (car right))
  336.           (merge-aux pred
  337.                      (cdr left)
  338.                      right
  339.                      (cons (car left) result))
  340.         (merge-aux pred
  341.                    left
  342.                    (cdr right)
  343.                    (cons (car right) result))))
  344.      ((not (null? left))      ; Case 6.
  345.       (merge-aux pred (cdr left) right (cons (car left) result)))
  346.      ((not (null? right))     ; Case 7.
  347.       (merge-aux pred left (cdr right) (cons (car right) result)))
  348.      (else #f)))              ; We should never get here.
  349.   (merge-aux pred l r '()))
  350. </pre>
  351.  
  352. <h4><a name="Merge Sort-TOC">Merge Sort</a></h4>
  353.  
  354. <p>Recently I've begun a project to implement a number of basic algorithms in Scheme, which I'd like to eventually grow into a free (as in freedom) ebook. Having just done a Binary Search in Scheme, I thought it would be fun to give merge sort a try.</p>
  355. <p>According to the mighty interwebs, merge sort is a good choice for sorting linked lists (a.k.a., Lisp lists). Unfortunately the only Lisp merge sort implementation examples I've been able to find on the web have been recursive, not iterative.</p>
  356. <p>The implementation described here is an iterative, bottom-up merge sort, written in a functional style. (I daren't say the functional style, lest any real Scheme wizards show up and burn me to a crisp.)</p>
  357. <h5><a name="First, generate a list of random numbers-TOC">First, generate a list of random numbers</a></h5>
  358.  
  359. <p>In order to have something to sort, we need a procedure that generates a list of random numbers --- note that the docstring is allowed by MIT/GNU Scheme; YMMV with other Schemes.</p>
  360. <pre>
(define (make-list-of-random-numbers list-length max)
  ;; Int Int -&gt; List
  "Make a list of random integers less than MAX that's LIST-LENGTH long."
  (let loop ((n list-length) (result '()))
    (if (= n 0)
        result
        (loop (- n 1) (cons (random max) result)))))
  371. </pre>
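<p>For example (your numbers will differ, since they're random):</p>
<pre>
&gt; (make-list-of-random-numbers 5 100)
(83 2 41 67 19)
</pre>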
  372.  
  373. <h5><a name="Then, write a merge procedure-TOC">Then, write a merge procedure</a></h5>
  374.  
<p>This implementation of the merge procedure is a straight port of the one described on the Wikipedia Merge Sort page, with one minor difference to make the sort faster (see footnote 1 below).</p>
  376. <p>An English description of the merge operation is as follows:</p>
  377. <p>If both items passed in are numbers (or strings), wrap them up in lists and recur. (In this example we only care about sorting numbers)</p>
  378. <p>If both lists are empty, return the result.</p>
  379. <p>If neither list is empty:</p>
  380. <p>If the first item in the first list is "less than" the first item in the second list, cons it onto the result and recur.</p>
  381. <p>Otherwise, cons the first item in the second list on the result and recur.</p>
  382. <p>If the first list still has items in it, cons the first item onto the result and recur.</p>
  383. <p>If the second list still has items in it, cons the first item onto the result and recur.</p>
<p>If none of the above conditions are true, return <code>#f</code>. I put this here for debugging purposes while writing this code; now that the procedure is debugged, it is never reached. (Note: "debugged" just means I haven't found another bug yet.)</p>
  385.  
  386. <pre>
  387. (define (rml/merge pred l r)
  388.   (letrec ((merge-aux
  389.             (lambda (pred left right result)
  390.               (cond
  391.                ((and (number? left)
  392.                      (number? right))
  393.                 (merge-aux pred
  394.                            (list left)
  395.                            (list right)
  396.                            result))
  397.                ((and (string? left)
  398.                      (string? right))
  399.                 (merge-aux pred
  400.                            (list left)
  401.                            (list right)
  402.                            result))
  403.                ((and (null? left)
  404.                      (null? right))
  405.                 (reverse result))
  406.                ((and (not (null? left))
  407.                      (not (null? right)))
  408.                 (if (pred (car left)
  409.                           (car right))
  410.                     (merge-aux pred
  411.                                (cdr left)
  412.                                right
  413.                                (cons (car left) result))
  414.                   (merge-aux pred
  415.                              left
  416.                              (cdr right)
  417.                              (cons (car right) result))))
  418.                ((not (null? left))
  419.                 (merge-aux pred (cdr left) right (cons (car left) result)))
  420.                ((not (null? right))
  421.                 (merge-aux pred left (cdr right) (cons (car right) result)))
  422.                (else #f)))))
  423.     (merge-aux pred l r '())))
  424. </pre>
  425.  
  426. <p>We can run a few merges to get a feel for how it works. The comparison predicate we pass as the first argument will let us sort all kinds of things, but for the purposes of this example we'll stick to numbers:</p>
  427. <pre>
  428. &gt; (rml/merge &lt; '(360 388 577) '(10 811 875 995))
  429. (10 360 388 577 811 875 995)
  430.  
  431. &gt; (rml/merge &lt; '(8 173 227 463 528 817) '(10 360 388 577 811 875 995))
  432. (8 10 173 227 360 388 463 528 577 811 817 875 995)
  433.  
  434. &gt; (rml/merge &lt;
  435.            '(218 348 486 520 639 662 764 766 886 957 961 964)
  436.            '(8 10 173 227 360 388 463 528 577 811 817 875 995))
  437. (8 10 173 218 227 348 360 388 463 486 520 528 577 639 662 764 766 811 817 875 886 957 961 964 995)
  438. </pre>
  439.  
  440. <h5><a name="Finally, do a bottom up iterative merge sort-TOC">Finally, do a bottom up iterative merge sort</a></h5>
  441.  
  442. <p>It took me a while to figure out how to do the iterative merge sort in a Schemely fashion. As usual, it wasn't until I took the time to model the procedure on paper that I got somewhere. Here's what I wrote in my notebook:</p>
  443. <pre>
  444. ;;  XS                   |      RESULT
  445. ;;---------------------------------------------
  446.  
  447. '(5 1 2 9 7 8 4 3 6)            '()
  448.     '(2 9 7 8 4 3 6)            '((1 5))
  449.         '(7 8 4 3 6)            '((2 9) (1 5))
  450.             '(4 3 6)            '((7 8) (2 9) (1 5))
  451.                 '(6)            '((3 4) (7 8) (2 9) (1 5))
  452.                  '()            '((6) (3 4) (7 8) (2 9) (1 5))
  453.  
  454. ;; XS is null, and RESULT is not of length 1 (meaning it isn't sorted
  455. ;; yet), so we recur, swapping the two:
  456.  
  457. '((6) (3 4) (7 8) (2 9) (1 5))  '()
  458.           '((7 8) (2 9) (1 5))  '((3 4 6))
  459.                       '((1 5))  '((2 7 8 9) (3 4 6))
  460.                            '()  '((1 5) (2 7 8 9) (3 4 6))
  461.  
  462. ;; Once more XS is null, but RESULT is still not sorted, so we swap
  463. ;; and recur again
  464.  
  465. '((1 5) (2 7 8 9) (3 4 6))      '()
  466.                   '(3 4 6)      '((1 2 5 7 8 9))
  467.                        '()      '((3 4 6) (1 2 5 7 8 9))
  468.  
  469. ;; Same story: swap and recur!
  470.  
  471. '((3 4 6) (1 2 5 7 8 9))        '()
  472.                      '()        '((1 2 3 4 5 6 7 8 9))
  473.  
  474. ;; Finally, we reach our base case: XS is null, and RESULT is of
  475. ;; length 1, meaning that it contains a sorted list
  476.  
  477. '(1 2 3 4 5 6 7 8 9)
  478. </pre>
  479.  
  480. <p>This was a really fun little problem to think about and visualize. It just so happens that it fell out in a functional style; usually I don't mind doing a bit of state-bashing, especially if it's procedure-local. Here's the code that does the sort shown above:</p>
  481. <pre>
  (define (rml/merge-sort xs pred)
    ;; List Pred -&gt; List
    (let loop ((xs xs)
               (result '()))
      (cond
       ;; Base case: XS is empty and RESULT holds a single list,
       ;; which is the fully sorted answer.
       ((and (null? xs)
             (null? (cdr result)))
        (car result))
       ;; XS is exhausted but RESULT isn't fully merged yet:
       ;; swap the two and keep looping.
       ((null? xs)
        (loop result xs))
       ;; A single leftover item: move it onto RESULT as-is.
       ((null? (cdr xs))
        (loop (cdr xs)
              (cons (car xs) result)))
       ;; Merge the next two items and push the merged list onto
       ;; RESULT.  (FIRST and SECOND are from SRFI-1; MIT/GNU Scheme
       ;; also provides them.)
       (else
        (loop (cddr xs)
              (cons (rml/merge pred
                               (first xs)
                               (second xs))
                    result))))))
  500. </pre>
  501.  
  502. <p>That's nice, but how does it perform?</p>
  503. <p>A good test of our merge sort is to compare it to the system's built-in sort procedure. In the case of MIT/GNU Scheme, we'll need to compile our code if we hope to get anywhere close to the system's speed. If your Scheme is interpreted, you don't have to bother of course.</p>
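<p>In MIT/GNU Scheme, one way to do that (shown here only as an illustration; the exact commands and the compiled file's extension vary by version and platform) is from the REPL:</p>
<pre>
(cf "mergesort")     ; compile mergesort.scm
(load "mergesort")   ; load the compiled result
</pre>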
<p>To make the test realistic, we'll create three lists of random numbers: one with 20,000 items, another with 200,000, and finally a giant list of 2,000,000 random numbers. This should give us a good idea of our sort's performance. Here's the output of timing the first two sorts, 20,000 and 200,000 items (see footnote 2 below):</p>
  505. <pre>
  506. ;;; Load compiled code
  507.  
  508. (load "mergesort")
  509. ;Loading "mergesort.so"... done
  510. ;Value: rml/insertion-sort2
  511.  
  512. ;;; Define our lists
  513.  
  514. (define unsorted-20000 (make-list-of-random-numbers 20000 200000))
  515. ;Value: unsorted-20000
  516.  
  517. (define unsorted-200000 (make-list-of-random-numbers 200000 2000000))
  518. ;Value: unsorted-200000
  519.  
  520. ;;; Sort the list with 20,000 items
  521.  
  522. (with-timing-output (rml/merge-sort unsorted-20000 &lt;))
  523. ;Run time:      .03
  524. ;GC time:       0.
  525. ;Actual time:   .03
  526.  
  527. (with-timing-output (sort unsorted-20000 &lt;))
  528. ;Run time:      .02
  529. ;GC time:       0.
  530. ;Actual time:   .021
  531.  
  532. ;;; Sort the list with 200,000 items
  533.  
  534. (with-timing-output (rml/merge-sort unsorted-200000 &lt;))
  535. ;Run time:      .23
  536. ;GC time:       0.
  537. ;Actual time:   .252
  538.  
  539. (with-timing-output (sort unsorted-200000 &lt;))
  540. ;Run time:      .3
  541. ;GC time:       0.
  542. ;Actual time:   .3
  543. </pre>
  544.  
  545. <p>As you can see, our sort procedure is on par with the system's for these inputs. Now let's turn up the heat. How about a list with 2,000,000 random numbers?</p>
  546. <pre>
  547. ;;; Sort the list with 2,000,000 items
  548.  
  549. (define unsorted-2000000 (make-list-of-random-numbers 2000000 20000000))
  550. ;Value: unsorted-2000000
  551.  
  552. (with-timing-output (rml/merge-sort4 unsorted-2000000 &lt;))
  553. ;Aborting!: out of memory
  554. ;GC #34: took:   0.80 (100%) CPU time,   0.10 (100%) real time; free: 11271137
  555. ;GC #35: took:   0.70 (100%) CPU time,   0.90  (81%) real time; free: 11271917
  556. ;GC #36: took:   0.60 (100%) CPU time,   0.90  (99%) real time; free: 11271917
  557.  
  558. (with-timing-output (sort unsorted-2000000 &lt;))
  559. ;Run time:      2.48
  560. ;GC time:       0.
  561. ;Actual time:   2.474
  562.  
  563. </pre>
  564.  
  565. <p>No go. On a MacBook with 4GB of RAM, our merge sort runs out of memory, while the system sort procedure works just fine. It seems the wizards who implemented this Scheme system knew what they were doing after all!</p>
  566. <p>It should be pretty clear at this point why we're running out of memory. In MIT/GNU Scheme, the system sort procedure uses vectors and mutation (and is no doubt highly tuned for the compiler), whereas we take a relatively brain-dead approach that uses lists and lots of consing. I leave it as an exercise for the reader (or perhaps my future self) to rewrite this code so that it doesn't run out of memory.</p>
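<p>To give a flavor of what such a rewrite might look like, here is one possible sketch (not the book's code, and untuned) of the vector-and-mutation approach: a merge sort that sorts a vector in place, allocating a single scratch vector instead of consing up intermediate lists:</p>
<pre>
(define (vector-merge-sort! v pred)
  ;; Vector Pred -&gt; Vector -- sorts V in place.
  (let* ((n (vector-length v))
         (scratch (make-vector n)))
    (define (merge! lo mid hi)
      ;; Merge the sorted ranges [lo, mid) and [mid, hi) of V into
      ;; SCRATCH, then copy the merged range back into V.
      (let loop ((i lo) (j mid) (k lo))
        (cond ((and (&lt; i mid) (&lt; j hi))
               (if (pred (vector-ref v j) (vector-ref v i))
                   (begin (vector-set! scratch k (vector-ref v j))
                          (loop i (+ j 1) (+ k 1)))
                   (begin (vector-set! scratch k (vector-ref v i))
                          (loop (+ i 1) j (+ k 1)))))
              ((&lt; i mid)
               (vector-set! scratch k (vector-ref v i))
               (loop (+ i 1) j (+ k 1)))
              ((&lt; j hi)
               (vector-set! scratch k (vector-ref v j))
               (loop i (+ j 1) (+ k 1)))
              (else
               (do ((m lo (+ m 1)))
                   ((= m hi))
                 (vector-set! v m (vector-ref scratch m)))))))
    (define (sort! lo hi)
      ;; Sort the range [lo, hi) of V.
      (if (&gt; (- hi lo) 1)
          (let ((mid (quotient (+ lo hi) 2)))
            (sort! lo mid)
            (sort! mid hi)
            (merge! lo mid hi))))
    (sort! 0 n)
    v))
</pre>
<p>Calling it as <code>(vector-merge-sort! (list-&gt;vector some-list) &lt;)</code> trades the consing for a single extra vector, though converting between lists and vectors has its own cost.</p>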
  567. <p>Footnotes:</p>
  568. <h1><a name="An earlier implementation started off the sort by "exploding" the list to be sorted so that <code>'(1 2 3)</code> became <code>'((1) (2) (3))</code>. This is convenient for testing purposes, but very expensive. It's also unnecessary after the first round of merging. We avoid the need to explode the list altogether by teaching merge to accept numbers and listify them when they appear. We could also do the same for strings and other types as necessary.-TOC">An earlier implementation started off the sort by "exploding" the list to be sorted so that <code>'(1 2 3)</code> became <code>'((1) (2) (3))</code>. This is convenient for testing purposes, but very expensive. It's also unnecessary after the first round of merging. We avoid the need to explode the list altogether by teaching merge to accept numbers and listify them when they appear. We could also do the same for strings and other types as necessary.</a></h1>
  569.  
  570. <h2><a name="For the definition of the with-timing-output macro, see here.-TOC">For the definition of the with-timing-output macro, see here.</a></h2>
  571.  
  572. <h2><a name="Searching-TOC">Searching</a></h2>
  573.  
  574. <p>Now that we've sorted a list of elements, we can search it.  It turns out that searching through a list of things is much faster if you can sort that list first.</p>
  575. <p>In this section we'll look at a particular type of search algorithm called binary search.  Binary search is so named because it cuts the search space in half with every iteration.</p>
  576. <p>Unlike some other searches, binary search only works on ordered lists of things.  That is why we had to go through the trouble of sorting our list earlier: so that we could search through it now.</p>
  577. <p>Load the binary search library:</p>
  578. <pre>
  579. (load-module 'binary-search)
  580. </pre>
  581.  
  582. <h3><a name="Binary Search-TOC">Binary Search</a></h3>
  583.  
<p>Binary search is a method for finding a specific item in a sorted list.  It works like this:</p>
<ol>
<li class="ordered"> Pick the element in the middle of the list.</li>
<li class="ordered"> Is it the element you're looking for?  If yes, you're done.</li>
<li class="ordered"> If no, compare it against the element you're looking for.  If it's less than the element you're looking for, split the list in half at the current element and search again, this time using only the high half of the list as input.</li>
<li class="ordered"> If it's greater than the element you're looking for, split the list in half at the current element and search again, this time using only the low half of the list as input.</li>
</ol>
  600.  
  601. <p><font color="red">TODO</font>: Merge the above and below descriptions.</p>
  602. <p>Take a guess that the item you want is in the middle of the current search "window" (when you start, the search window is the entire list).</p>
  603. <p>If the item is where you guessed it would be, return the index (the location of your guess).</p>
  604. <p>If your guess is "less than" the item you want (based on a comparison function you choose), recur, this time raising the "bottom" of the search window to the midway point.</p>
  605. <p>If your guess is "greater than" the item you want (based on your comparison function), recur, this time lowering the "top" of the search window to the midway point.</p>
  606. <p>In other words, you cut the size of the search window in half every time through the loop. This gives you a worst-case running time of about (/ (log n) (log 2)) steps. This means you can find an item in a sorted list of 20,000,000,000 (twenty billion) items in about 34 steps.</p>
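<p>You can check that figure at the REPL (the exact digits printed will vary by implementation):</p>
<pre>
&gt; (/ (log 20000000000) (log 2))
34.21928094887362
</pre>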
  607. <h4><a name="Reading lines from a file-TOC">Reading lines from a file</a></h4>
  608.  
  609. <p>Before I could start writing a binary search, I needed a sorted list of items. I decided to work with a sorted list of words from /usr/share/dict/words, so I wrote a couple of little procedures to make a list of words from a subset of that file. (I didn't want to read the entire large file into a list in memory.)</p>
<p>Note: Both <code>format</code> and the Lisp-inspired <code>#!optional</code> keyword are available in MIT Scheme; they made writing the re-matches? procedure more convenient.</p>
  611.  
  612. <p>re-matches? checks if a regular expression matches a string (in this case, a line from a file).</p>
  613. <p>make-list-of-words-matching is used to loop over the lines of the words file and return a list of lines matching the provided regular expression.  Now I have the tools I need to make my word list.</p>
  614. <pre>
  615. (load-option 'format)
  616.  
  617. (define (re-matches? re line #!optional display-matches)
  618.   ;; Regex String . Boolean -&gt; Boolean
  619.   "Attempt to match RE against LINE. Print the match if DISPLAY-MATCHES is set."
  620.   (let ((match (re-string-match re line)))
  621.     (if match
  622.         (if (not (default-object? display-matches))
  623.             (begin (format #t "|~A|~%" (re-match-extract line match 0))
  624.                    #t)
  625.             #t)
  626.         #f)))
  627.  
  628. (define (make-list-of-words-matching re file)
  629.   ;; Regex String -&gt; List
  630.   "Given a regular expression RE, loop over FILE, gathering matches."
  631.   (call-with-input-file file
  632.     (lambda (port)
  633.       (let loop ((source (read-line port)) (sink '()))
  634.         (if (eof-object? source)
  635.             sink
  636.             (loop (read-line port) (if (re-matches? re source)
  637.                              (cons source sink)
  638.                              sink)))))))
  639. </pre>
  640.  
<p>Since I am not one of the 10% of programmers who can implement a correct binary search on paper, I started out by writing a test procedure. The test procedure grew over time as I found bugs and read an interesting discussion about the various edge cases a binary search procedure should handle. These include:</p>
  642.  
  643. <ul>
  644. <li> Empty list</li>
  645. <li> List has one word</li>
<li> List has two words</li>
  647. <li> Word is not there and "less than" anything in the list</li>
  648. <li> Word is not there and "greater than" anything in the list</li>
  649. <li> Word is first item</li>
  650. <li> Word is last item</li>
  651. <li> List is all one word</li>
  652. </ul>
<p>If multiple copies of the word are in the list, return the first one found (this could be implemented to return either the first or the last duplicate)</p>
  654. <p>Furthermore, I added a few "sanity checks" that check the return values against known outputs. Here are the relevant procedures:</p>
  655. <p>assert= checks two numbers for equality and prints a result</p>
<p>assert-equal? checks two Scheme objects against each other with equal? and prints a result</p>
  657. <p>run-binary-search-tests reads in words from a file and runs all of our tests</p>
  658. <pre>
  659. (define (assert= expected got #!optional noise)
  660.   ;; Int Int -&gt; IO
  661.   (if (= expected got)
  662.       (format #t "~A is ~A\t...ok~%" expected got)
  663.       (format #t "~A is not ~A\t...FAIL~%" expected got)))
  664.  
  665. (define (assert-equal? expected got #!optional noise)
  666.   ;; Thing Thing -&gt; IO
  667.   (if (equal? expected got)
  668.       (format #t "~A is ~A\t...ok~%" expected got)
  669.       (format #t "~A is not ~A\t...FAIL~%" expected got)))
  670.  
  671. (define (run-binary-search-tests)
  672.   ;; -&gt; IO
  673.   "Run our binary search tests using known words from the 'words' file.
  674. This file should be in the current working directory."
  675.   (with-working-directory-pathname (pwd)
  676.     (lambda ()
  677.       (if (file-exists? "words")
  678.           (begin
  679.             (format #t "file 'words' exists, making a list...~%")
  680.             (let* ((unsorted (make-list-of-words-matching "acc" "words"))
  681.                    (sorted (sort unsorted string&lt;?)))
  682.               (format #t "doing binary searches...~%")
  683.               (assert-equal? #f (binary-search "test" '())) ; empty list
  684.               (assert-equal? #f (binary-search "aardvark" sorted)) ; element absent and too small
  685.               (assert-equal? #f (binary-search "zebra" sorted)) ; element absent and too large
  686.               (assert= 0 (binary-search "accusive" '("accusive"))) ; list of length one
  687.               (assert= 0 (binary-search "acca" sorted)) ; first element of list
  688.               (assert= 1 (binary-search "aardvark" '("aardvark" "aardvark" "babylon"))) ; multiple copies of word in list
  689.               (assert= 1 (binary-search "barbaric" '("accusive" "barbaric"))) ; list of length two
  690.               (assert= 98 (binary-search "acclamator" sorted))
              (assert= 127 (binary-search "aardvark" (map (lambda (x) "aardvark") sorted))) ; list is all one value
  692.               (assert= 143 (binary-search "accomplice" sorted))
  693.               (assert= 254 (binary-search "accustomedly" sorted))
  694.               (assert= 255 (binary-search "accustomedness" sorted)))))))) ; last element of list
  695. </pre>
  696.  
  697. <h4><a name="The binary search procedure-TOC">The binary search procedure</a></h4>
  698.  
  699. <p>Finally, here's the binary search procedure; it uses a couple of helper procedures for clarity.</p>
  700. <p>->int is a helper procedure that does a quick and dirty integer conversion on its argument</p>
  701. <p>split-difference takes a low and high number and returns the floor of the halfway point between the two</p>
  702. <p>binary-search takes an optional debug-print argument that I used a lot while debugging. The format statements and the optional argument tests add a lot of bulk --- now that the procedure is debugged, they can probably be removed. (Aside: I wonder how much "elegant" code started out like this and was revised after sufficient initial testing and debugging?)</p>
  703. <pre>
  704. (define (-&gt;int n)
  705.   ;; Number -&gt; Int
  706.   "Given a number N, return its integer representation.
  707. N can be an integer or flonum (yes, it's quick and dirty)."
  708.   (flo:floor-&gt;exact (exact-&gt;inexact n)))
  709.  
  710. (define (split-difference low high)
  711.   ;; Int Int -&gt; Int
  712.   "Given two numbers, return their rough average."
  713.   (if (= (- high low) 1)
  714.       1
  715.     (-&gt;int (/ (- high low) 2))))
  716.  
(define (binary-search word xs #!optional debug-print)
  ;; String List -&gt; Int
  "Do binary search of list XS for WORD. Return the index found, or #f."
  (if (null? xs)
      #f
      (let loop ((low 0) (high (- (length xs) 1)))
        (let* ((try (+ low (split-difference low high)))
               (word-at-try (list-ref xs try)))
          (cond
           ((string=? word-at-try word) try)
           ((&lt; (- high low) 1) #f)
           ((= (- high try) 1)
            (if (string=? (list-ref xs low) word)
                low
                #f))
           ((string&lt;? word-at-try word)
            (if (not (default-object? debug-print))
                (begin (format #t "(string&lt;? ~A ~A) -&gt; #t~%try: ~A high: ~A low: ~A ~2%"
                               word-at-try word try high low)
                       (loop (+ 1 try) high))  ; raise the bottom of the window
                (loop (+ 1 try) high)))
           ((string&gt;? word-at-try word)
            (if (not (default-object? debug-print))
                (begin (format #t "(string&gt;? ~A ~A) -&gt; #t~%try: ~A high: ~A low: ~A ~2%"
                               word-at-try word try high low)
                       (loop low (+ 1 try)))   ; lower the top of the window
                (loop low (+ 1 try))))
           (else #f))))))
  745. </pre>
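<p>For example, with a made-up word list (just to show the return values):</p>
<pre>
&gt; (binary-search "emu" '("ant" "bee" "cat" "dog" "emu" "fox"))
4
&gt; (binary-search "owl" '("ant" "bee" "cat" "dog" "emu" "fox"))
#f
</pre>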
  746.  
  747. <h4><a name="Takeaways-TOC">Takeaways</a></h4>
  748.  
  749. <p>This exercise has taught me a lot.</p>
  750. <p>Writing correct code is hard. (I'm confident that this code is not correct.) You need to figure out your invariants and edge cases first. I didn't, and it made things a lot harder.</p>
  751. <p>It's been said a million times, but tests are code. The tests required some debugging of their own.</p>
  752. <p>Once they worked, the tests were extremely helpful. Especially now that I'm at the point where (if this were "for real") additional features would need to be added, the format calls removed, the procedure speeded up, and so on.</p>
  753. <p>I hope this has been useful to some other aspiring Scheme wizards out there. Happy Hacking!</p>
  754. <h2><a name="Trees-TOC">Trees</a></h2>
  755.  
  756. <h3><a name="Binary trees-TOC">Binary trees</a></h3>
  757.  
  758. <p>CSRMs: constructors, selectors, recognizers, and mutators.</p>
  759. <p>Load the library:</p>
  760. <pre>
  761. &gt; (load-module 'binary-tree)
  762. </pre>
  763.  
  764. <p>Basic operations:</p>
  765. <ul>
  766. <li> creation</li>
  767. <li> insertion</li>
  768. <li> updating (destructive/in-place)</li>
  769. <li> deletion</li>
  770. </ul>
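<p>To make the CSRM idea concrete, here is a minimal sketch of a binary search tree built from plain lists.  The names (<code>MAKE-NODE</code>, <code>TREE-INSERT</code>, and so on) are illustrative only and are not necessarily the ones used by the binary-tree module:</p>
<pre>
;; A node is a three-element list: (value left-subtree right-subtree).
(define (make-node value left right) (list value left right))   ; constructor
(define (node-value node) (car node))                            ; selectors
(define (node-left node) (cadr node))
(define (node-right node) (caddr node))
(define (empty-tree? node) (null? node))                         ; recognizer

(define (tree-insert tree value)
  ;; Tree Number -&gt; Tree -- non-destructive insert.
  (cond ((empty-tree? tree) (make-node value '() '()))
        ((&lt; value (node-value tree))
         (make-node (node-value tree)
                    (tree-insert (node-left tree) value)
                    (node-right tree)))
        (else
         (make-node (node-value tree)
                    (node-left tree)
                    (tree-insert (node-right tree) value)))))
</pre>
<p>With these definitions, <code>(tree-insert (tree-insert '() 5) 3)</code> evaluates to <code>(5 (3 () ()) ())</code>.</p>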
  771. <p>Walking the tree using higher order functions (see notes from ADuni lectures).</p>
  772. <p><font color="red">TODO</font>: Mention tree-sort here, and note that this is only fast if the tree is already balanced, so give the "slow version" first, since balanced trees are not introduced yet. Explain why it can be slow.</p>
  773. <h3><a name="Balanced binary trees-TOC">Balanced binary trees</a></h3>
  774.  
  775. <p><font color="red">TODO</font>: Red-black tree or AVL tree? AVL is supposedly simpler to implement but red-black is said to have superior tree rotation runtime -- once we have a self-balancing tree of either type we can write the "fast" treesort!</p>
  776. <p><font color="red">TODO</font>: Mention that when trees are balanced, then TREE-SORT can now be fast.  Add a link back to the TREE-SORT section from here.</p>
  777. <p><font color="red">TODO</font>: Write sections for the following operations:</p>
  778. <ul>
  779. <li> balancing (on insert?)</li>
  780. <li> searching</li>
  781. </ul>
  782. <h2><a name="Graphs-TOC">Graphs</a></h2>
  783.  
  784. <p><font color="red">TODO</font>: How to represent graphs with another data structure: matrix, hash table, or association list. We might want to implement our own hash tables first using balanced binary trees -- that would be way cool!</p>
<p>In other words, it might be cool to build everything from the bottom up, e.g.:</p>
  786. <ol>
  787. <li class="ordered"> Balanced binary tree</li>
  788. <li class="ordered"> Hash Table</li>
  789. <li class="ordered"> Graph (using hash table representation)</li>
  790. </ol>
<p>Write an implementation of the following:</p>
  792. <ul>
  793. <li> Traversal</li>
  794. <li> Search: DFS, BFS, Dijkstra's Algorithm, A*</li>
  795. </ul>
  796. <h3><a name="Searching Graphs-TOC">Searching Graphs</a></h3>
  797.  
  798. <h4><a name="Depth-first search-TOC">Depth-first search</a></h4>
  799.  
  800. <p><font color="red">TODO</font>: Add winston-horn-network.png here.</p>
  801. <p>I've been having fun translating some of the code in Winston and Horn's <em>Lisp</em> into Scheme.  This book is amazing --- clearly written, with lots of motivating examples and applications.  As SICP is to language implementation, <em>Lisp</em> is to application development, with chapters covering constraint propagation, forward and backward chaining, simulation, object-oriented programming, and so on.  And it does include the obligatory Lisp interpreter in one chapter, if you're into that sort of thing.</p>
  802. <p>In this installment, based on Chapter 19, we will look at some simple strategies for searching for a path between two nodes on a network (a graph).  The network we'll be using is shown in the diagram above.</p>
  803. <p>Here's the same network, represented as an alist where each <code>CAR:CDR</code> pair represents a <code>NODE:NEIGHBORS</code> relationship:</p>
  804. <pre>
  805. '((f e)
  806.   (e b d f)
  807.   (d s a e)
  808.   (c b)
  809.   (b a c e)
  810.   (a s b d)
  811.   (s a d))
  812. </pre>
  813.  
  814. <p>The high-level strategy the authors use is to traverse the network, building up a list of partial paths.  If a partial path ever reaches the point where it describes a full path between the two network nodes we're after, we've been successful.</p>
  815. <p>As with trees, we can do either a breadth-first or depth-first traversal.  Here's what the intermediate partial paths will look like for a breadth-first traversal that builds a path between nodes <code>S</code> and <code>F</code>:</p>
  816. <pre>
  817. (s)
  818. (s a)
  819. (s d)
  820. (s a b)
  821. (s a d)
  822. (s d a)
  823. (s d e)
  824. (s a b c)
  825. (s a b e)
  826. (s a d e)
  827. (s d a b)
  828. (s d e b)
  829. '(s d e f)
  830. </pre>
  831.  
  832. <p>Based on that output, we can deduce that every time we visit a node, we want to extend our partial paths list with that node.  Here's one option --- its only problem is that it will happily build circular paths that keep us from ever finding the node we want:</p>
  833. <pre>
  834. (define (%buggy-extend path) ;; Builds circular paths
  835.      (map (lambda (new-node)
  836.             (cons new-node path))
  837.           (%get-neighbor (first path))))
  838. </pre>
  839.  
<p>(Incidentally, I've become fond of the convention whereby internal procedures that aren't part of a public-facing API are prefixed with the <code>%</code> character.  This can be found in some parts of the MIT Scheme sources, and I believe it's used in Racket as well.  I've started writing lots of my procedures using this notation to remind me that the code I'm writing is not the real `API', that the design will need more work, and that the current code is just a first draft.  I'm using that convention here.)</p>
  841.  
  842. <p>Here's a better version that checks if we've already visited the node before adding it to the partial paths list --- as a debugging aid it prints out the current path before extending it:</p>
  843. <pre>
  844. (define (%extend path)
  845.     (display (reverse path))
  846.     (newline)
  847.     (map (lambda (new-node)
  848.            (cons new-node path))
  849.          (filter (lambda (neighbor)
  850.                    (not (member neighbor path)))
  851.                  (%get-neighbor (first path)))))
  852.  
  853. </pre>
  854.  
<p>You may have noticed the <code>%GET-NEIGHBOR</code> procedure; it's just part of some silly data structure bookkeeping code.  Please feel free to deride me in the comments for my use of a global variable.  What can I say?  I'm Scheming like it's 1988 over here!  Here's the boilerplate:</p>
  856.  
  857. <pre>
  858. (define *neighbors* '())
  859.  
  860. (define (%add-neighbor! k v)
  861.   (let ((new-neighbor (cons k v)))
  862.     (set! *neighbors*
  863.           (cons new-neighbor *neighbors*))))
  864.  
  865. (define (%get-neighbor k)
  866.   (let ((val (assoc k *neighbors*)))
  867.     (if val
  868.         (cdr val)
  869.       '())))
  870.  
  871. (%add-neighbor! 's '(a d))
  872. (%add-neighbor! 'a '(s b d))
  873. (%add-neighbor! 'b '(a c e))
  874. (%add-neighbor! 'c '(b))
  875. (%add-neighbor! 'd '(s a e))
  876. (%add-neighbor! 'e '(b d f))
  877. (%add-neighbor! 'f '(e))
  878. </pre>
  879.  
880. <p>Now that we have our data structure and a way to extend our partial path list (non-circularly), we can write the main search procedure, <code>%BREADTH-FIRST</code>.  The authors have a lovely way of explaining its operation:</p>
  881.  
  882. <p><blockquote></p><p><code>BREADTH-FIRST</code> is said to do a breadth-first search because it extends all partial paths out to uniform length before extending any to a greater length.</p><p></blockquote></p>
  883. <p>Here's the code, translated to use a more Schemely, iterative named <code>LET</code> instead of the linear-recursive definition from the book:</p>
  884. <pre>
(define (%breadth-first start finish network)
  (let ((queue (list (list start))))
    (let loop ((start start)
               (finish finish)
               (network network)
               (queue queue))
      (cond ((null? queue) '())                    ;Queue empty?
            ((equal? finish (first (first queue))) ;Finish found?
             (reverse (first queue)))              ;Return path.
            (else
             (loop start
                   finish                          ;Try again.
                   network
                   (append
                    (rest queue)
                    (%extend (first queue))))))))) ;New paths at the back.
  902. </pre>
  903.  
904. <p>(A better way to write this procedure would be to implement a generic internal search procedure that takes its `breadthiness' or `depthiness' as a parameter.  We could then wrap it in nicer public-facing search procedures with specific names.)</p>
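<p>A rough sketch of that idea follows.  The names <code>%SEARCH</code>, <code>%BREADTH-FIRST*</code>, and <code>%DEPTH-FIRST*</code> are my own placeholders, not part of the book code; like <code>%EXTEND</code>, the sketch leans on the global <code>*NEIGHBORS*</code> table, so the <code>NETWORK</code> argument is dropped:</p>
<pre>
;; %SEARCH takes a COMBINE procedure that decides where newly extended
;; paths go relative to the rest of the queue.
(define (%search start finish combine)
  (let loop ((queue (list (list start))))
    (cond ((null? queue) '())
          ((equal? finish (first (first queue)))
           (reverse (first queue)))
          (else
           (loop (combine (%extend (first queue))
                          (rest queue)))))))

;; Breadth-first: new paths go to the back of the queue.
(define (%breadth-first* start finish)
  (%search start finish
           (lambda (new-paths old-paths)
             (append old-paths new-paths))))

;; Depth-first: new paths go to the front of the queue.
(define (%depth-first* start finish)
  (%search start finish
           (lambda (new-paths old-paths)
             (append new-paths old-paths))))
</pre>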
905. <p>Meanwhile, back at the REPL, we remind ourselves of what <code>*NEIGHBORS*</code> actually looks like, and then we search for a path between the nodes <code>S</code> and <code>F</code>:</p>
  906. <pre>
     &gt; *neighbors*
     '((f e) (e b d f) (d s a e) (c b) (b a c e) (a s b d) (s a d))
     &gt; (%breadth-first 's 'f *neighbors*)
     (s)
     (s a)
     (s d)
     (s a b)
     (s a d)
     (s d a)
     (s d e)
     (s a b c)
     (s a b e)
     (s a d e)
     (s d a b)
     (s d e b)
     '(s d e f)
  923. </pre>
  924.  
  925. <p>What fun!  I can almost imagine using a three-dimensional variant of these searches for a space wargame with wormhole travel.  Except, you know, they'd need to be much faster and more skillfully implemented. There's also the tiny requirement to write the surrounding game.</p>
  926. <p>It shouldn't need to be said, but: Of course the authors knew better; they were trying to hide that unnecessary complexity from you until later.</p>
  927. <h4><a name="Shortest path between nodes (aka Breadth-first search)-TOC">Shortest path between nodes (aka Breadth-first search)</a></h4>
  928.  
  929. <h3><a name="Graph coloring-TOC">Graph coloring</a></h3>
  930.  
  931. <h2><a name="Strings-TOC">Strings</a></h2>
  932.  
  933. <p><font color="red">TODO</font>: Figure out what the (say) 2-3 most basic algorithms are that we need to cover.</p>
  934. <h2><a name="A hash table library-TOC">A hash table library</a></h2>
  935.  
  936. <p>In this chapter, we're going to implement our own hash tables.</p>
  937. <p>In day-to-day programming, the hash table is probably the most important real-world data structure.</p>
  938. <p>The hash table also gives us a nice real-world proving ground for our algorithms skills, since implementing hash tables requires putting together several different data structures into one --- in other words, it is a <em>compound data structure</em>.</p>
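<p>To give you a taste of what's coming, here's a minimal sketch of the idea (not the implementation we'll build in this chapter): a vector of `buckets', where each bucket is an alist of key/value pairs.  The names and the crude string-hashing function are placeholders of my own invention.</p>
<pre>
(define (make-table size)
  (make-vector size '()))

;; A crude hash for string keys: sum the character codes and take the
;; remainder modulo the number of buckets.
(define (hash key size)
  (let loop ((chars (string->list key)) (sum 0))
    (if (null? chars)
        (modulo sum size)
        (loop (cdr chars) (+ sum (char->integer (car chars)))))))

;; Insert by consing onto the right bucket.  (A newer entry for the
;; same key simply shadows the older one.)
(define (table-set! table key value)
  (let ((i (hash key (vector-length table))))
    (vector-set! table i (cons (cons key value) (vector-ref table i)))))

;; Look up by hashing to a bucket, then searching that bucket's alist.
(define (table-ref table key)
  (let ((entry (assoc key (vector-ref table (hash key (vector-length table))))))
    (if entry (cdr entry) #f)))
</pre>
<p>Notice that the whole thing is just a vector, some alists, and a hashing function glued together --- that's the `compound' part.</p>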
  939. <h2><a name="A regular expression library-TOC">A regular expression library</a></h2>
  940.  
  941. <p>In this chapter, we're going to implement our own regular expression matching library.</p>
  942. <h2><a name="Glossary-TOC">Glossary</a></h2>
  943.  
  944. <h3><a name="Iterative process-TOC">Iterative process</a></h3>
  945.  
946. <p>An iterative process is one that runs in constant space, no matter how many steps it takes.  In Scheme, code that describes an iterative process usually looks like this:</p>
  947. <pre>
    ;; Define INCR using the built-in + *before* redefining +, so that
    ;; the redefinition below doesn't capture it and loop forever.
    (define incr (let ((builtin+ +)) (lambda (n) (builtin+ n 1))))
    (define (decr n) (- n 1))

    (define (+ a b)
      (if (= a 0)
          b
          (+ (decr a)
             (incr b))))
  953. </pre>
  954.  
  955. <p>You can visualize the operation of an iterative process like this:</p>
  956. <pre>
    (+ 4 3)
    (+ 3 4)
    (+ 2 5)
    (+ 1 6)
    (+ 0 7)
    7
  963. </pre>
  964.  
  965. <p>Notice how the "shape" of the successive calls to <em>+</em> stays the same "size"?  In other words, it doesn't grow out to the right.</p>
966. <p>Using <a href="#Big O Notation-TOC">Big O notation</a>, you can say that <em>+</em> uses <em>O(1)</em> space (memory) and <em>O(n)</em> time (CPU), where <em>n</em> is the size of the first argument.</p>
  967. <h3><a name="Recursive Process-TOC">Recursive Process</a></h3>
  968.  
969. <p>A recursive process is one that consumes a growing amount of stack space as it runs.  In terms of Scheme code, code that describes a recursive process usually looks like this:</p>
  970. <pre>
;; Define INCR using the built-in + *before* redefining +, so that
;; the redefinition below doesn't capture it and loop forever.
(define incr (let ((builtin+ +)) (lambda (n) (builtin+ n 1))))
(define (decr n) (- n 1))

(define (+ a b)
  (if (= a 0)
      b
      (incr (+ (decr a) b))))
  979. </pre>
  980.  
  981. <p>You can visualize the operation of a recursive process like this:</p>
  982. <pre>
(+ 3 4)
(incr (+ 2 4))
(incr (incr (+ 1 4)))
(incr (incr (incr (+ 0 4))))
(incr (incr (incr 4)))
(incr (incr 5))
(incr 6)
7
  990. </pre>
  991.  
992. <p>Notice how the "shape" of the successive calls to <em>+</em> grows out to the right as the deferred calls to <em>incr</em> pile up, and only shrinks back once they are finally evaluated?</p>
993. <p>Using Big O notation, you can say that <em>+</em> uses <em>O(n)</em> time and <em>O(n)</em> space, where <em>n</em> is the size of its first argument --- one deferred call to <em>incr</em> piles up for each step.</p>
  994. <h3><a name="Big O Notation-TOC">Big O Notation</a></h3>
  995.  
  996. <p>"Big O" notation is a way to talk about the resource usage of an algorithm.  This usage can be along several axes:</p>
  997. <ul>
  998. <li> Time (CPU --- how many instructions will it take to compute?)</li>
  999. <li> Space (Memory --- how much storage will it use?)</li>
  1000. </ul>
  1001. <p>Instead of "resource usage" you can also say the "complexity" of the algorithm.  This is the term you are likely to find in more academic writings.</p>
1002. <p>To be more precise, this notation describes an upper bound on how an algorithm's resource usage grows as the size of its input grows.  Saying an algorithm is <em>O(n)</em> means that, even in the worst case, its resource usage grows no faster than in proportion to the input size.</p>
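<p>As a quick example of my own (the name <code>LINEAR-SEARCH</code> is just for illustration): searching a list for an item may, in the worst case, have to look at every element, so it is <em>O(n)</em> in time; and because it's written as an iterative process, it only ever needs a constant amount of extra space, so it is <em>O(1)</em> in space.</p>
<pre>
(define (linear-search item lst)
  (cond ((null? lst) #f)                         ;Ran out of elements.
        ((equal? item (car lst)) #t)             ;Found it.
        (else (linear-search item (cdr lst)))))  ;Check the rest.
</pre>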
  1003. <p>For more details, check out the following references:</p>
<ul>
<li><a href="https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation">https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation</a></li>
<li><a href="http://stackoverflow.com/questions/487258/what-is-a-plain-english-explanation-of-big-o-notation">http://stackoverflow.com/questions/487258/what-is-a-plain-english-explanation-of-big-o-notation</a></li>
<li><a href="http://bigocheatsheet.com/">http://bigocheatsheet.com/</a></li>
</ul>
  1013. <p>The last page has a nice big graph that makes it easy to visualize the different complexities.  Further down the page there are tables that list algorithm complexities for various operations (insert, delete, search) on data structures such as stacks, lists, hash tables, etc. Add this to your bookmarks so you can refer back to it as needed.</p>
  1014. <h2><a name="Loading the book code into a Scheme-TOC">Loading the book code into a Scheme</a></h2>
  1015.  
  1016. <p><font color="red">TODO</font>: Write instructions for loading the book code into each of the supported Schemes.</p>
  1017. <h2><a name="Bibliography-TOC">Bibliography</a></h2>
  1018.  
  1019. <ul>
  1020. <li> Abelson &amp; Sussman, <em>Structure and Interpretation of Computer Programs</em>, 1st ed., 1986.</li>
  1021. <li> Winston &amp; Horn, <em>Lisp</em>, 198?.</li>
  1022. <li> Gabriel, <em>Performance and Evaluation of Lisp Systems</em>, 1985.</li>
  1023. <li> Rawlins, <em>Compared to What?</em>, ???.</li>
  1024. </ul>
  1025.  