Loops and Haskell's "iterate"


Coming from imperative languages, one of the biggest initial shocks with Haskell has to be the lack of loops (or more accurately, the lack of special syntactic sugar for them). Imperative loops are rather “unstructured” from a functional perspective: you set up a counter and then insert a bunch of effectful tasks to execute on each iteration that may or may not be related to each other. In functional languages, there are several different replacements for loops, each of which is useful depending on what, exactly, you intend to do with each iteration. For example, folding is all about accumulating some value by iterating over the elements of some structure, whereas scanning keeps the intermediate accumulated values. General recursion is certainly possible but discouraged in Haskell given these more structured alternatives. In a sense, this goes beyond goto considered harmful to unstructured loops (i.e. general recursion) considered harmful. Having written imperative code for a while now, I appreciate the notion that I may never have to analyze nested unstructured loops again (at least not in the usual way that one does in C and its derivatives).
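
To make the fold/scan contrast concrete, here is a small example of my own, using nothing beyond the Prelude:

-- folding collapses a structure into one accumulated value...
total :: [Int] -> Int
total = foldl (+) 0            -- total [1,2,3,4]         == 10

-- ...whereas scanning also keeps the intermediate accumulators
runningTotals :: [Int] -> [Int]
runningTotals = scanl (+) 0    -- runningTotals [1,2,3,4] == [0,1,3,6,10]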

Is this too extreme a view? It seems so at first: how, for example, can we possibly deal with situations where we have an algorithm that needs to run again and again until it converges to a value satisfying a certain condition? But consider this lovely function from the Haskell Prelude:

iterate :: (a -> a) -> a -> [a]

The first argument is a function that takes a value and “refines” it to produce a new value of the same type, and the second argument is the starting value. The output is the (infinite, lazily produced) list of every intermediate value, beginning with the starting value itself.
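
A quick GHCi session shows the shape of the output; take forces only as much of the infinite list as we ask for:

ghci> take 6 (iterate (*2) 1)
[1,2,4,8,16,32]

If we have some condition to wait for, we could write something like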

f x = iterate go x
  where go x' = g x'  -- go is one refinement step; g is whatever does the work

and then

fUntil cond x = fst . head . dropWhile (\(a, b) -> not $ a `cond` b) $
                zip (f x) (tail $ f x)
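
To see the whole pattern run end to end, here is one possible refinement step, purely for illustration: a single Newton iteration for the square root of 2, together with a closeness condition. (The particular g and close below are my own; any convergent refinement would do.)

-- One Newton step toward sqrt 2: average x' with 2/x'
g :: Double -> Double
g x' = (x' + 2 / x') / 2

-- consecutive iterates within 1e-12 of each other count as converged
close :: Double -> Double -> Bool
close a b = abs (a - b) < 1e-12

With these definitions, fUntil close 1.0 evaluates to 1.4142135623730951, the double-precision square root of 2.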

There are at least a couple of significant advantages to structuring things this way: first, all of the intermediate “states” of the “loop” are immediately accessible via f x (printf insertion considered wasteful, etc.). Second, we get the notion of partially applying a loop: in addition to letting our function run until it converges to an answer, we also get the option (for free!) of running it a specified number of times, as sketched below. This is an example of the compositional advantages of pervasive laziness: take as many iterations as you want and the runtime ignores the rest.
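
In code, that option is nothing more exotic than take (fSteps is a name I'm introducing here):

-- run the "loop" for exactly n iterations instead of to convergence;
-- thanks to laziness, the remaining iterations are never computed
fSteps :: Int -> Double -> [Double]
fSteps n x = take n (f x)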

In the actual application I had, there were two such functions that could be used to converge all the way to the answer, and interleaving partial applications of them turned out to give better performance than either of them alone. I now realize what people mean when they talk about laziness allowing for the separation of “producers” from “consumers”, subject to resource allocation/buffering issues.
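
The actual code isn't shown here, but the interleaving trick can be sketched generically; interleave, applyN, stepA, and stepB are names of my own, with the latter two standing in for the two convergent refinement functions:

-- Hypothetical sketch: one "round" is k steps of stepA followed by
-- k steps of stepB; iterate then produces the stream of rounds
interleave :: Int -> (a -> a) -> (a -> a) -> a -> [a]
interleave k stepA stepB = iterate (applyN k stepB . applyN k stepA)
  where applyN n h = foldr (.) id (replicate n h)

A consumer can still take only as many rounds as it needs, which is exactly the producer/consumer separation at work.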