Disclaimer
It’s been a few years since I’ve tried to explain continuations to anyone, much less in English. This is my take on what, why, when, how, and WTF when using continuations for fun and profit. I’m probably not the first one explaining them this way, and I’ve borrowed ideas from many places.
I hope former CI3641 and CI4251 students who heard the hand-wavy «I want a sandwich» explanation realize I was trying to convey a very complicated topic that wasn’t worth mentioning in class. I also hope former CI4721/CI4722 students feel my pain at having to focus on less interesting non-functional code generation instead of vastly superior continuation based code generation for functional languages.
Comments are welcome at the usual e-mail addresses.
Going with the flow
There are several ways to describe continuations, all equally confusing and «mystic». Many rely on your understanding of how a programming language actually works, and many require you to suspend all that knowledge as well. I agree and disagree with all of them, so this is my explanation.
Imagine your program executing. Don’t think of the actual code you see on your editor, but «the flow» of the instructions chugging along, after several function calls and what have you. At some point, your program will reach some instruction S and, just as it finishes executing said instruction:

There are a bunch of current active values. From the low-level notion of «the machine registers and its values» all the way to the higher-level «all active variables» and «the call stack».

There is a particular next instruction S' that is about to be executed in the «normal» flow of execution. It could be the immediately next one in the program text, or it could be a non-local one thanks to an explicit or implicit branch.
Those two things together would be a good enough approximation for what a «continuation» is: some sort of snapshot of the current «interesting values» and immediate «next step».
In most programming languages, continuations appear and disappear with each program step. As a programmer, you’re oblivious to them because the language definition already has a somewhat precise behavior for what must happen at each step. And that’s fine because «normal» people rarely ever handle continuations.
However, there are languages that allow you to save the «current continuation» before moving along, giving you the opportunity to reenact said continuation. If you are thinking «travel back in time» or being able to «choose the future», you wouldn’t be right, but you wouldn’t be wrong. There’s a catch: side effects will not be reversed nor reenacted.
So, let’s build knowledge to go from letting the language control the flow of execution for you, to you being able to create arbitrary flow of execution.
Continuation Passing Style (CPS)
Your average run-of-the-mill function normally takes some input, in the form of arguments, and returns some useful values. A piece of code (the caller) uses the function (the callee). Once the callee «returns» results, the caller continues execution.
Basic knowledge of how programming languages work tells us this means the caller must «pass parameters» and reserve space for the return values. Since this calling of functions can be arbitrarily nested, languages use a «call stack» to store this information. The most recently called function will be at the top of the stack, a new call will add a stack frame, while returning to a previous caller will remove the top frame.
The callee has access to the same stack, so it can take out arguments to perform its computations. Once computation is complete, the callee will store the results in said stack, and will be able to return execution to the exact instruction after it was invoked. Again, because said position is stored in the «call stack».
So, if we have a regular function such as
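```haskell
fac :: Integer -> Integer
fac 0 = 1
fac n = n * fac (n - 1)
```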
When a caller invokes fac 42, it has to pass 42 through the stack, jump to fac’s body of code, and wait for fac to compute. Since fac is recursive, it will call (and wait) on itself to compute intermediate values.
In Continuation Passing Style every function receives its usual arguments, and also receives the «continuation function», as an additional argument customarily named k. That is, instead of returning a value to the caller, a CPS function completes its computation and feeds the result directly to the continuation function as its last step – hence, it «continues» computing.
Assuming arithmetic is «primitive» 1, we can rewrite the above function into CPS like so
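```haskell
fak :: Integer -> (Integer -> r) -> r
fak 0 k = k 1
fak n k = fak (n - 1) $ \f -> k (n * f)
```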
Notice how there’s a new second argument, the continuation function k, that must take the Integer produced by fak and may return whatever is required. This allows fak to be «continued» by any given function that consumes an Integer.
So, something like fak 42 print would print the result, something like fak 42 even would tell if the result is even or not, and something like fak 42 id would just give the result.
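For instance, a quick session with a smaller argument and the definition above:

```
ghci> fak 5 id
120
it :: Integer
```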
In general, any «regular» function with type

```haskell
a -> b
```

can be rewritten to become a «CPS» function with type

```haskell
a -> (b -> r) -> r
```
How is CPS useful?
First of all, notice how fac is not tail recursive, whereas fak is fully tail recursive. If you’re using a civilized programming language that does tail-call optimization, your functions will «never return», they just continue one after the other. The callers never have to wait for the callee to return a value.
Secondly, notice how this transformation has made many function call related notions explicit. There’s no need to save a return address in the stack, just jump to the continuation. In fact, you don’t need a complete call stack, because you just follow the sequence of continuations. That’s why many people think of continuations as «goto with arguments», or something like that.
Notice how evaluation order and needed intermediate results are clearly established at any given step. Any programming language that employs or translates into CPS as an intermediate representation makes it easy for its compiler to generate better code. Languages that allow «side effects» anywhere cannot fully translate everything to CPS precisely because of those side effects, as well as the aliasing caused by mutable pointers and references. Purely functional languages like Haskell do translate into CPS, this being one of the reasons for the quality of the generated code.
Last, but not least, CPS allows you to come up with arbitrary flow control management. Regular functions can only return once they finish computation, and to the place where they were invoked. CPS functions can receive one or more continuations, and jump or pass them around again. CPS functions can use continuations from enclosing lexical functions. CPS functions can even save continuations in data structures for invocation at a later time.
Therefore, CPS becomes useful:
As a general way to re-write functions into tail-call form, to improve «stack safety» and execution speed. This is most helpful on languages that have full Tail Call Optimization (TCO) such as Haskell or Racket.
Note that in many cases explicit iterative tail-recursion via fold has better performance due to fusion, but is less flexible than using continuations. The Haskell compiler allows you to take advantage of both.

As a way to provide non-local exits. It makes it really easy to jump to any particular part of the code as long as we have the corresponding continuation available. This means you can selectively express what break, continue, and goto imply in imperative languages, and then some.

Working with the proverbial recursive Tree data type, we’re asked to implement a function sumT to add all values present in any given Tree.

```haskell
data Tree = Leaf Int | Node Tree Int Tree

sumT :: Tree -> Int
sumT (Leaf v)     = v
sumT (Node l v r) = sumT l + v + sumT r
```
This function can be translated into CPS with little effort
```haskell
sumTCPS :: Tree -> (Int -> r) -> r
sumTCPS (Leaf v)     k = k v
sumTCPS (Node l v r) k =
  sumTCPS l $ \sl ->
    sumTCPS r $ \sr ->
      k (sl + v + sr)
```
Now suppose there’s a new (absurd ;-)) requirement that if any Node or Leaf holds the value 13, you don’t care about the actual sum, and must return 42 instead.

Solving that with the pure recursive function implies two traversals (first to check, then sum), fusing two traversals into one (check while summing), or rewriting to an iterative traversal. They will work, but they’ll be doing more work than needed, and be harder to understand, optimize, and even justify during code review.
However, thanks to CPS we can write something along these lines
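```haskell
-- a sketch of the single-traversal solution; the name sumT13 is illustrative
sumT13 :: Tree -> (Int -> r) -> r
sumT13 t k0 = go t k0
  where
    go (Leaf v) k1
      | v == 13   = k0 42   -- bail out through the outer continuation
      | otherwise = k1 v
    go (Node l v r) k1
      | v == 13   = k0 42   -- bail out through the outer continuation
      | otherwise = go l $ \sl ->
                      go r $ \sr ->
                        k1 (sl + v + sr)
```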
The outer continuation (k0) expects the «final» result, while the inner continuation (k1) expects the current «local» result. If this looks suspiciously like return in any programming language other than Haskell, it’s because, well, it is…
As a compact way to express backtracking.
There are many computation problems complex enough that the only currently known effective way to solve them is by traversing a possibly (very) large search space. There are also less complex problems that are easier to express and understand when written in a backtracking fashion.

In these algorithms the code «guesses» the best branch to follow, remembers said choice, and carries on computing. If it hits a wall that forces it to retreat to this choice point, it takes a different guess and proceeds. This process repeats until a solution is found or all possible guesses have been covered without finding one.
Prolog was designed with that idea in mind. You can express Prolog’s flow control strategy using continuations.
(I’m still looking for a simple yet meaningful example of backtracking to place here – n-Queens is too much, regex matching is *too* much)
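In the meantime, here’s a minimal sketch of the «guess and retreat» pattern – an illustrative subsetSum that looks for elements of a list of positive numbers adding up to a target, threading a success continuation sk and a failure continuation fk that retreats over the last guess:

```haskell
-- an illustrative sketch, assuming positive numbers in the list
subsetSum :: [Int] -> Int -> ([Int] -> r) -> r -> r
subsetSum _      0 sk _  = sk []                    -- target reached
subsetSum []     _ _  fk = fk                       -- dead end: retreat
subsetSum (x:xs) n sk fk
  | x > n     = subsetSum xs n sk fk                -- x cannot fit
  | otherwise = subsetSum xs (n - x) (sk . (x:))    -- guess: take x
                          (subsetSum xs n sk fk)    -- on failure: skip x
```

So subsetSum [3,5,2] 5 Just Nothing evaluates to Just [3,2], with the failure continuation quietly driving the retreats.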
The Haskell road to continuations
Math ensues
We’ve seen that CPS takes functions with types

```haskell
a -> b
```

and turns them into functions with types

```haskell
a -> (b -> r) -> r
```
This follows from the fact (Theorem) that types b and (b -> r) -> r are isomorphic 2, thus they can be interchanged. You can continue to the next subsection if you’re not interested in the why, and won’t miss a thing.
Good for you. You’ll see it is indeed true (Proof) if you consider functions
```haskell
regularToCPS :: b -> ((b -> r) -> r)
regularToCPS = flip ($)
```

a compact, idiomatic way of saying

```haskell
regularToCPS = \b -> (\f -> f b)
```

that takes any b into ((b -> r) -> r);
and then consider function
```haskell
cpsToRegular :: ((b -> r) -> r) -> b
cpsToRegular = ($ id)
```

a compact, idiomatic way of saying

```haskell
cpsToRegular = \k -> k id
```
and finally realize that

```haskell
cpsToRegular . regularToCPS :: b -> b
```

which is id for single values of type b, and

```haskell
regularToCPS . cpsToRegular :: ((b -> b) -> t) -> ((t -> r) -> r)
```

which is id for continuations, in the sense that the first continuation doesn’t change the input value. ∎
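If the first composition seems too slick, chasing a value b through the definitions helps:

```
cpsToRegular (regularToCPS b)
  = ($ id) (flip ($) b)   -- unfold both definitions
  = flip ($) b id         -- apply ($ id)
  = id b                  -- undo the flip
  = b
```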
What type of sorcery is this?
Alas, we have this interesting type ((a -> r) -> r) and we’ll make the most out of it, by wrapping it with a newtype and studying its associated abstract behaviors

```haskell
newtype Cont r a = Cont { runCont :: ((a -> r) -> r) }
```
Since a continuation is just a function computing the actual result, we can fiddle with the intermediate result before feeding it to the continuation. This means Cont r is a Functor
```haskell
instance Functor (Cont r) where
  fmap f m = Cont $ \k -> runCont m (k . f)
```
We just transform the input value a with f before feeding it to the original continuation k that expects a value of type b. Note that f is a «regular» function, so this allows using primitives or any non-continuation based function within a flow of continuation based control.
Now, realizing that a continuation just needs a value to «continue computing», and that we can have functions that produce continuations and functions that produce values to be consumed by said functions, we can see that Cont r is an Applicative
```haskell
instance Applicative (Cont r) where
  pure x  = Cont $ \k -> k x
  f <*> v = Cont $ \kf -> runCont f $ \kv -> runCont v (kf . kv)
```
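A quick sanity check – the binding fortyTwo is illustrative:

```haskell
fortyTwo :: Int
fortyTwo = runCont (pure (+1) <*> pure 41) id   -- 42
```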
Let’s kick «functions that produce continuations» up a notch. Suppose we have a plain continuation ((a -> r) -> r) and a function that given an a is able to produce a continuation on a different type b, i.e. (a -> ((b -> r) -> r)). Turns out we can «chain» them up thanks to

```haskell
chainCPS :: ((a -> r) -> r)
         -> (a -> ((b -> r) -> r))
         -> ((b -> r) -> r)
chainCPS s f = s . flip f
```
and this suggests Cont r is a Monad because

```
((a -> r) -> r)       ≡  Cont r a
(a -> (b -> r) -> r)  ≡  a -> Cont r b
((b -> r) -> r)       ≡  Cont r b
```
which makes

```haskell
chainCPS :: Cont r a
         -> (a -> Cont r b)
         -> Cont r b
```

look eerily similar to the monadic >>=. So we write
```haskell
instance Monad (Cont r) where
  return  = pure
  m >>= f = Cont $ \k -> runCont m (\a -> runCont (f a) k)
```
The f a part matches the flip f fragment above, and the explicit chaining matches the composition. The newtype wrapper forces us to write it this way, that’s all.
Can I have a continuation?
At this point we can write continuations by hand, we can compose continuations, and we can meddle with the values being computed within a continuation flow. This means we can convert regular function composition and monadic sequencing into continuation sequencing.
The last thing we need to have First Class 3 continuations is a way to capture any given continuation within a control flow and pass it as an argument to another continuation. That is, at any given point during execution we should be able to save the «current values» and «next step», and feed that to an alternate continuation for it to use or ignore.
In a purely functional language like Haskell, every function already has every «interesting value» it needs to compute. They are closures: bodies of pure functional code with accompanying relevant variables. This means that we only need to capture the «immediate continuation».
```haskell
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont $ \k -> runCont (f (\a -> Cont $ \_ -> k a)) k
```
The name callCC stands for «call with current continuation» 4, meaning call function f passing as argument the current continuation. Let that marinate for a while…
Indeed, you’ll write something like

```
... code in the Cont monad
result <- callCC $ \k -> ...
                         -- code on the right block
                         -- code on the right calls other functions
                         -- more code on the right block
... right here
```
callCC will call the block on the right side making k equal to the continuation that is expecting result to work with.
Within the right side:
You can ignore k, do whatever and return a value. The value will be bound to result and code will continue «right there». That’s just the normal, unsurprising, flow of things.

You can actually use k v to escape early, returning v. The value will be bound to result and code will continue «right there». That’s convenient custom local flow control.

You can pass k down as many times as you want, to as many auxiliary functions as you need, that can in turn do whatever they see fit, in particular using k. No matter how deeply nested the calls are, the first use of k v will come back to bind v to result, and code will continue «right there». That’s custom non-local flow control.
But wait, there’s more!
Every Monad can be modified to become a Monad Transformer and Cont r is no exception. Its transformer is called, unsurprisingly, ContT r. This means we can add custom flow control to any given Monad stack provided you do it Carefully®:
Use callCC directly to capture continuations.

Use >>= and >> implicitly to sequence continuations.

Use lift to have access to computations in the underlying Monad.
I mention this because the simplest examples of custom control flow require some evidence of «weirdness» happening, and for that I’m going to use functions running on ContT r IO.
Loop control
Functional languages don’t have while loops as a consequence of not supporting mutable state, the fact that recursion is equivalent to while loops, and the ability to cleanly separate recursion schemes from operations (hard to achieve using while loops).
So, this explanation is not meant to advocate the use of loops in functional environments. Instead, look at it as showing that having continuations affords a way to build your own while loops even if the underlying language does not provide them. Doing this other than for learning purposes is something that would make a competent functional programmer look incompetent.
A generic loop over a collection can be thought of as orderly applying a function f to each element of the collection, with said f returning the continuation after the current element has been processed. Recall that if we have two continuations c0 and c1 in any monadic context, c0 >> c1 is just «do c0, then c1» or the semicolon (or lack thereof) in imperative languages. This means we can implement the «imperative style» foreach that one has in Perl (or for in Python) with
```haskell
foreach :: Monad m => [a] -> (a -> ContT () m c) -> m ()
foreach items f = runContT (mapM_ f items) return
```
Note the use of a generic underlying monad m, so I can loop over a collection to perform any arbitrarily complex computation as provided by m.
At each step, the function f must return the next step in the form of a continuation. Since f works in the ContT () monad, it doesn’t need to do anything special to «continue to the next», because that’s what ContT does. However, we need some way for f to return an explicit computation to abort the rest of the loop.
```haskell
breakOut :: Monad m => ContT () m c
breakOut = ContT $ \_ -> return ()
```
So breakOut returns a continuation that ignores the default continuation and just terminates. This allows us to write
```haskell
infLoop0 = foreach [1..] $ \i -> do
  if i > 10
     then breakOut
     else lift $ print i
```
that looks like an infinite loop on account of the lazy infinite list, but when evaluated
```
ghci> infLoop0
1
2
3
4
5
6
7
8
9
10
it :: ()
```
behaves as a while loop exited via a break.
Implementing continue to «skip» to the next step of the loop is actually quite boring. If you want a continuation to «finish» and proceed to the next continuation, just
```haskell
continue :: Monad m => ContT () m ()
continue = return ()
```
Make sure you understand this return is in the immediate context, while the return in breakOut is delayed as part of the continuation being produced.
Now, having breakOut and continue, we can write «loops» like
```haskell
infLoop1 = foreach [1..] $ \i -> do
  if i > 10
     then breakOut
     else if i `mod` 2 == 0
             then continue
             else lift $ print i
```
that, when evaluated
```
ghci> infLoop1
1
3
5
7
9
it :: ()
```
behaves exactly as you would expect.
You don’t need while loops on a functional programming language, yet you can build your own while loops using continuations. If only to show that while is a way to have regular programmers not have to think about continuations.
Modern imperative languages have «labeled break/continue» statements, where nested loops can be labeled, and you can break from or continue to the next specific label. That requires the notion of a syntactic label like those found in C and Perl.
Labeling statements
In our discussion about loops, we realized that «sequencing» is just having one continuation follow another thanks to >>, and that shouldn’t be a surprise. We also realized that «breaking» is just forcing a final continuation that does nothing.
We can have a continuation continuationProducer that returns a continuation and continues on, pun intended. This would allow us to write something like
```
aContinuation <- continuationProducer ...
... keep going ...
```
where aContinuation will refer to some continuation created by continuationProducer. Program flow would continue normally, possibly using aContinuation.
Here’s the plot twist. Imagine aContinuation is built in such a way that when used (continued into) it computes at the point it was created, and keeps re-creating itself.
```haskell
label :: ContT r m (ContT r m b)
label = callCC $ \next -> let here = next here
                          in return here
```
The type of label confirms it is a continuation that returns a continuation. As a continuation it can be sequenced, and it has the notion of current continuation to use as «next step».
Recall that callCC passes the current continuation to the function, in this case bound to next, i.e. next is a continuation. Thanks to Haskell’s laziness, we can build here as an infinite continuation that returns itself within itself every time it is used. Since label just returns here, it means it moves on to the next instruction in the sequence, but in so doing it returns the lazy value next (next (next (...))) where next happens to be, well, the next instruction.
This means we can use label to «save» the current continuation in the form of a continuation that we can ask to continue to, and at each step it will return itself, ad nauseam.
And this is all we need to have «backward labeled jumps», kind of a goto-light. This allows us to write ugly BASIC-style loops, pun intended.
```haskell
basicLoop :: IO ()
basicLoop = flip runContT return $ do
  goto10 <- label
  lift $ print "one"
  goto20 <- label
  lift $ print "two"
  (num :: Int) <- lift $ randomRIO (0,2)
  callCC $ \end ->
    case num of
      0 -> goto10
      1 -> goto20
      2 -> do lift $ print "this"
              end ()
  lift $ print "done"
```
Every use of label produces a unique continuation returning itself infinitely many times. So goto10 is a continuation that returns its next instruction infinitely many times, and so is goto20 with its own next instruction.
Throw a die between 0 and 2, and use different continuations depending on its result. Using goto10 has the effect of following that continuation, going back to the appropriate place in the program, with its thunk being updated to itself on account of its self-referencing construction. That is, using goto10 jumps to where it was created, and uses the nested continuation. That’s why I named them «goto».
Running the function will produce output showing the «jumping back» behavior, and the fact that both labels are used more than once, and intermixed. For instance,
```
*Main> basicLoop
"one"
"two"
"two"
"one"
"two"
"one"
"two"
"two"
"two"
"this"
"done"
it :: ()
```
Yes, I know end does nothing special. It’s there to reinforce the point of where callCC is going to continue.
This way of using continuations to return continuations is a consequence of Haskell treating continuations as First Class. Being able to construct lazy self-referencing values (continuations or not) is unique to Haskell, arguably one of the reasons for its power.
Arbitrary jumping leads to coroutines
Being First Class values, the next step is to capture continuations using callCC, store them in some data structure for «further reference», eventually fetching and continuing onto them as program flow progresses.
A very basic yet flexible approach is to have a queue (FIFO) where «pending» continuations are enqueued at one end, and dequeued at the other. In order to achieve this in the simplest possible way – while performing other computations at the same time – consider the following Monad stack
```haskell
newtype CoRoT r m a = CoRoT { runCoRoT' :: ContT r (StateT [CoRoT r m ()] m) a }
  deriving (Functor, Applicative, Monad, MonadCont, MonadIO)
```
The CoRoT stack has continuations (ContT r) on top, followed by a state transformer (StateT) holding a list of continuations as modifiable state. This list will be used as the queue of pending continuations – a reasonable starting point – and, to keep it simple, they are run only for their «effects» (note the unit type for the results). Finally, you can use this stack on top of any Monad m you like – this example uses IO.
A couple of auxiliary functions allow the executing continuation currently «on top» to either get the list of currently stored continuations
```haskell
getCCs :: Monad m => CoRoT r m [CoRoT r m ()]
getCCs = CoRoT $ lift get
```
or replace the list of currently stored continuations with a new one.
```haskell
putCCs :: Monad m => [CoRoT r m ()] -> CoRoT r m ()
putCCs = CoRoT . lift . put
```
Note the need to lift operations from the StateT to the ContT on top.
Combining both primitives, it will be possible for the currently executing continuation to add a new continuation to the end of the queue. 5
```haskell
queue :: Monad m => CoRoT r m () -> CoRoT r m ()
queue p = do
  ccs <- getCCs
  putCCs $ ccs ++ [p]
```
If we think of this list as the list of «ready to go, but paused» continuations, it follows we can implement a trivial, yet useful, round-robin scheduler:
```haskell
schedule :: Monad m => CoRoT r m ()
schedule = do
  ready <- getCCs
  case ready of
    []     -> return ()
    (p:ps) -> putCCs ps >> p
```
The currently executing continuation looks at the «ready queue». When there’s at least one ready continuation p, it’s taken out of the queue, the queue replaced with whatever is left, and computation continues onto p.
There are two ways in which schedule is useful:
When the currently executing continuation wants to relinquish control, giving it to the next «ready» continuation (if any). This is customarily known as yield 6. Using callCC captures the current continuation k, so it can be enqueued, and then lets schedule choose the continuing continuation (pun intended).

```haskell
yield :: Monad m => CoRoT r m ()
yield = callCC $ \k -> queue (k ()) >> schedule
```
When the currently executing continuation wants to create and start a new continuation. This is customarily known as fork. Using callCC captures the current continuation k, so it can be enqueued before continuing to the new continuation p. Note the need to add a «fail-safe» continuation to schedule just in case p does nothing or terminates before yield-ing. Concurrency is hard!

```haskell
fork :: Monad m => CoRoT r m () -> CoRoT r m ()
fork p = callCC $ \k -> queue (k ()) >> p >> schedule
```
We have enough primitives for any continuation to spawn new continuations, and for any continuation to yield control in a voluntary fashion. There are two edge cases we need to cover before having fully functional cooperative concurrency.
The first issue is finishing cleanly when there are no pending continuations. Recall that the pending continuations have type CoRoT r m () so «finish cleanly» just means producing a unit value.
A new action finished will signal termination when invoked by any executing continuation. If there’s at least one continuation in the «ready» queue, a yield is forced, continuing to finished just in case the next continuation has nothing to do as well. This tail recursive forcing of yield ensures that no pending continuation will be left behind. When there are no more ready continuations in the queue, finished will produce unit so computation finishes altogether.
```haskell
finished :: Monad m => CoRoT r m ()
finished = do
  remaining <- null <$> getCCs
  if not remaining
     then yield >> finished
     else return ()
```
The second issue is how to actually start computing. That is, how to run the «main» continuation such that it can fork other continuations at will, let them yield control or fork until finishing, and ensure a clean finish once all are done.
As usual with transformer stacks, we have to unravel them from the bottom up. We don’t care what the underlying monad m is 7 so, from the bottom up:
Evaluate the StateT with an initial empty list for the «ready queue». This will take care of feeding the final unit value to the underlying monad.

Unwrap (runCoRoT') and run (runContT) the supplied initial continuation, making sure to use plain return as the termination step. This will take care of actually allowing all of the utility functions (fork, yield, etc.) to be used to build the arbitrary control flow our code follows.

Tack a mandatory finished as the final step of our computations, just in case the initial computation does nothing at all or finishes before other active continuations forked directly or indirectly. Edge case is edgy.
```haskell
runCoRoT :: Monad m => CoRoT r m r -> m r
runCoRoT = flip evalStateT []
         . flip runContT return
         . runCoRoT'
         . (<* finished)
```
All this machinery allows us to express Cooperative Coroutines, hence the name CoRoT.
In order to show the interleaving of effects, the following polymorphic function provides a simple yet powerful foundation.
```haskell
printOne :: (MonadIO m, Enum a, Show a) => a -> CoRoT () m ()
printOne n = do
  liftIO $ print n
  yield
  liftIO $ print $ succ n
  yield
```
For a given value n, print it and yield control mid computation. Once control is gained back, print n’s successor, and yield control again. Once control is gained back yet again, it has nothing else to do, so our finished cleanup will take care of actually terminating computation.
I’ve chosen to make the function polymorphic on the restriction that n’s type be an Enum, so it can be used with Bool, Int, and Char.
Now, our example program is nothing more than a «main» continuation that forks a couple of coroutines, each one using printOne more than once over different initial values and a different number of times, so we can verify the interleaving of computation, and the correctness of our cleanup strategy.
```haskell
start :: IO ()
start = runCoRoT $ do
  fork $ replicateM_ 3 (printOne (3 :: Int))
  fork $ replicateM_ 4 (printOne 'a')
  replicateM_ 2 (printOne False)
```
Evaluating this results in
```
*Main> start
3
'a'
4
False
'b'
3
True
'a'
4
False
'b'
3
True
'a'
4
'b'
'a'
'b'
it :: ()
```
Interleaving is completely deterministic:
The first thing start does is fork. This creates and passes control to the number printing coroutine.

The number printing coroutine prints 3 and immediately yields. At that point, only start is ready, so it runs and forks. This creates and passes control to the letter printing coroutine.

The letter printing coroutine prints a and immediately yields. At that point, the number printing coroutine is ready.

The number printing coroutine prints 4 and immediately yields. At that point, start is ready.

Control in start moves on to printing False and yielding. At that point, the letter printing coroutine is ready.

The letter printing coroutine prints b and immediately yields. At that point, the number printing coroutine is ready.

The number printing coroutine is at the last statement of printOne where there’s nothing to do, but within a replicateM_. Therefore, it evaluates a new printOne within the same continuation, that just prints 3 and yields.
And so on and so forth…
You’ll notice that once the start continuation has printed True and False twice, it reaches its end. This is when the forced finished within runCoRoT will take care of passing control to the next ready continuation. It happens to be the letter printing one, that will print a and yield to the number printing continuation.
The number printing continuation will be in the last print of its third repetition, so it will print 4 and yield, having nothing else to compute. Control will pass to the letter printing continuation.
The letter printing continuation will print b and yield. But the number printing continuation has nothing else to do. Here’s when the tail-recursive finished within finished will take care of termination, and yield to the letter printing coroutine.
These primitives have allowed us to build full cooperative coroutines. Note that control can be transferred at any point in the execution flow, and once a continuation is paused, it will return to the exact point it yielded.
There aren’t many traditional languages supporting this flow control, because saving call stacks and captured mutable state is hard and expensive. But on a pure functional language with continuations, it’s very straightforward to implement. And we did not need special runtime support to build a fully functional multiprogramming environment with flexibility beyond simple threads.
There are languages where you can transfer control to an explicitly named co-routine, i.e. yieldTo with a specific name. This can be accomplished by having an extended data structure that includes names for each co-routine, as well as having fork «name» them on creation. This is left as an exercise to the reader.
Speaking of exercises, you can try solving (literate Haskell source here) to test your understanding.
But I don’t (like/understand) Haskell
I pity the fool.
Followup articles will show you how to do this in Racket and Perl, the former being the best modern true-LISP, and the latter still being the truest modern imperative LISP-y language.
«Primitive» means there’s no need to handle them as continuations because there’s a low-level instruction that performs the computation easily and cheaply. Things like addition and multiplication usually are, things like trigonometry usually are, anything more complicated would have to be in continuation style as well. Continuations all the way down.↩︎
Type isomorphism is a general notion of conversion between types. We say that types a and b are isomorphic, if we have conversion functions a2b :: a -> b and b2a :: b -> a such that

```haskell
a2b . b2a = id -- for type b
b2a . a2b = id -- for type a
```

↩︎
A value is considered «First Class» in a programming language when it can be written literally, assigned to a variable, returned from a function, and passed as an argument. Numbers, characters, and file descriptors are usually First Class. Functions are First Class only on languages that facilitate the functional style. Monads are First Class in Haskell. Continuations are First Class in Haskell and Racket.↩︎
This was invented in Scheme, a long time ago. Also, look for Peirce’s Law and connect it to Logic.↩︎
Using lists as queues is quite inefficient, yet simple enough for prototyping. If you want efficient queues, try Data.Sequence instead.↩︎

Even though other languages use yield (incorrectly) to mean other things, albeit related to continuations. For instance, Python and Ruby use it to mean «yield the next value» from an iterator to the consumer. By the way, you can implement iterators with continuations as well.↩︎
IO
in our examples, but it could be any monad at all.↩︎