A while back I published a post about SKI combinator calculus, and how it was computationally equivalent to lambda calculus. Specifically, you can mechanically convert from one to the other, and performing the computation on the SKI version or in the lambda calculus version will give you the same result.
I recently tried implementing a system based on SKI combinators and realized I only half understood the runtime complexity of this conversion. This post gives more background about SKI combinators and refinements of the known algorithms.
Why use SKI Combinator Calculus?
I first learned about SKI Combinator Calculus as a mathematical curiosity, in the context of Unlambda. (Actually, I first learned about it in To Mock a Mockingbird, a puzzle book by the great Raymond Smullyan, but didn’t really recognize it as such there). I didn’t consider whether it had practical applications.
But I recently watched a video by Simon Peyton Jones about the history of Haskell, where he explained that SK combinators were once seen as a practical way to implement functional programming languages! In fact, Peyton Jones’ master’s thesis was about the complexity of combinators vs lambda terms.
It turns out that the property of simulating lambda calculus without variables is not only very satisfying, it also removes a lot of the complexity of reducing terms. Additionally, combinators are a good way to implement lazy programs, since applying a combinator like K or S to just one argument (or two, for S) can be done by saving the argument without needing to evaluate it. For these reasons, SK combinators were proposed as a low-level implementation mechanism for lambda-calculus-based languages, initially by David Turner. This could even have been implemented directly in hardware, at a time when implementing functional languages efficiently in hardware was a significant research field.
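To make the laziness point concrete, here is a minimal sketch of an SKI reducer in Python (the names here are my own, not from any particular implementation). Note how the K rule returns its first argument and simply discards the second, without ever looking inside it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

# Atoms: the combinators themselves; any other string is a free variable.
S, K, I = "S", "K", "I"

def step(t):
    """One leftmost-outermost reduction step, or None if t has no redex."""
    if not isinstance(t, App):
        return None
    if t.fn == I:                                   # I x -> x
        return t.arg
    if isinstance(t.fn, App) and t.fn.fn == K:      # K x y -> x
        return t.fn.arg                             # y is discarded unevaluated
    if (isinstance(t.fn, App) and isinstance(t.fn.fn, App)
            and t.fn.fn.fn == S):                   # S x y z -> x z (y z)
        return App(App(t.fn.fn.arg, t.arg), App(t.fn.arg, t.arg))
    r = step(t.fn)
    if r is not None:
        return App(r, t.arg)
    r = step(t.arg)
    return None if r is None else App(t.fn, r)

def normalize(t):
    while (r := step(t)) is not None:
        t = r
    return t

# S K K behaves like the identity: S K K x -> K x (K x) -> x.
assert normalize(App(App(App(S, K), K), "x")) == "x"

# Laziness: K discards its second argument without evaluating it, so even
# a divergent term like omega = S I I (S I I) is never reduced here.
omega = App(App(App(S, I), I), App(App(S, I), I))
assert normalize(App(App(K, "x"), omega)) == "x"
```

The second assertion is the key property: an eager reducer that normalized arguments first would loop forever on omega, while the outermost strategy never touches it.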
It remained a relatively active field of development until it was supplanted by innovations like the spineless tagless G-machine, the technical basis for the Haskell compiler (there are good discussions on StackOverflow and on the Haskell wiki), which could efficiently run lambda-calculus-style code on commodity hardware without the drawbacks.
Complexity
The algorithm I gave in the SKI post, known as bracket abstraction, was first proposed by David Turner^{1} in 1979. Unfortunately, this algorithm produces SK expressions of size $\Theta(n^3)$ from a lambda term of size $n$^{2}, which is generally too large. (Interestingly, it seems this bound was only proven very recently, though people appear to have long been aware that the algorithm tended to produce large results.)
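To see where the blow-up comes from, here is a sketch of the basic bracket abstraction in Python (my own naming, not the exact presentation from the earlier post). Every application node turns into an S whose argument gets distributed to both sides, so abstracting several variables in sequence multiplies the term size:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

S, K, I = "S", "K", "I"

def abstract(x, t):
    """Bracket abstraction [x]t: an SKI term f such that f a reduces to t[x := a]."""
    if t == x:
        return I                       # [x]x = I
    if not isinstance(t, App):
        return App(K, t)               # [x]c = K c   (x does not occur in c)
    # [x](f g) = S ([x]f) ([x]g): S duplicates structure at every
    # application node, whether or not that side mentions x.
    return App(App(S, abstract(x, t.fn)), abstract(x, t.arg))

def size(t):
    return 1 if not isinstance(t, App) else size(t.fn) + size(t.arg)

# [x](f x) = S (K f) I
assert abstract("x", App("f", "x")) == App(App(S, App(K, "f")), I)

# Abstracting three variables from a 5-leaf body already inflates it badly.
t = App(App("x1", "x2"), "x3")
for v in ["x3", "x2", "x1"]:
    t = abstract(v, t)
print(size(t))
```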
Adding a few other combinators, $B$, $C$, $B'$, $C'$ and $S'$, all of which specialize $S$ in various ways, brings the translation down to $O(n^2)$, as shown by Kennaway^{3}. This was noticed by Turner in a follow-up paper.^{4} The result is not as conceptually nice as our minimal set of 2/3 combinators, but still very reasonable for a practical implementation, even one based in hardware. Noshita later proposed an $O(n \log n)$ translation from lambda calculus to extended Turner combinators^{5}.
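The core idea of Turner's improvement can be sketched as follows (this covers only the $B$ and $C$ rules plus the η-rule; the primed combinators are omitted, and the naming is again my own). Instead of always emitting $S$, the translation checks which side of an application actually uses the variable: $B\,f\,g\,x = f\,(g\,x)$ when only the right side needs it, and $C\,f\,g\,x = f\,x\,g$ when only the left does, so the unused side is carried along untouched instead of being rewritten:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

S, K, I, B, C = "S", "K", "I", "B", "C"

def occurs(x, t):
    return t == x or (isinstance(t, App) and (occurs(x, t.fn) or occurs(x, t.arg)))

def abstract(x, t):
    if t == x:
        return I
    if not occurs(x, t):
        return App(K, t)               # x unused: the whole term is a constant
    f, g = t.fn, t.arg                 # t must be an App here
    if g == x and not occurs(x, f):
        return f                       # eta-rule: [x](f x) = f
    fo, go = occurs(x, f), occurs(x, g)
    if fo and go:                      # both sides need x: fall back to S
        return App(App(S, abstract(x, f)), abstract(x, g))
    if fo:                             # C f g x = f x g: only the left side
        return App(App(C, abstract(x, f)), g)
    return App(App(B, f), abstract(x, g))  # B f g x = f (g x): only the right

# Only the right side uses x, so f is carried along as-is:
assert abstract("x", App("f", App("g", "x"))) == App(App(B, "f"), "g")
# Only the left side uses x:
assert abstract("x", App(App("f", "x"), "y")) == App(App(C, "f"), "y")
```

Compare the first result, `B f g`, with the `S (K f) I` the naive algorithm would emit for a similar shape: the subterms that don't mention the variable no longer grow.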
Unfortunately, Frandsen and Sturtivant proved that the worst-case complexity of evaluating the converted expressions is exponential in the length of the original lambda expression^{6}. I'm not entirely sure how well-known this result is, as the paper doesn't seem to have many citations, but it definitely seems like a major concern for this technique.
That said, by the 1990s, it seems like research on interpreting or compiling functional languages had moved on towards less esoteric representations of the compiled program.
Other resources
I recommend chapter 16 of The Implementation of Functional Programming Languages, by Simon Peyton Jones, which covers SK combinators and various extensions. It also gives a good explanation of the combinators in terms of director strings, which help make sense of the jumble of S and K combinators resulting from using the bracket abstraction.

Turner, David A. “A new implementation technique for applicative languages.” Software: Practice and Experience 9.1 (1979): 31–49. ↩

Lachowski, Łukasz. “On the complexity of the standard translation of lambda calculus into combinatory logic.” Reports on Mathematical Logic 53 (2018): 19–42. ↩

Kennaway, J.R. The complexity of a translation of λ-calculus to combinators. Report, School of Computing Studies and Accountancy, University of East Anglia, Norwich, 1982. ↩

Turner, David A. “Another algorithm for bracket abstraction.” The Journal of Symbolic Logic 44.2 (1979): 267–270. ↩

Noshita, K. “Translation of Turner combinators in O(n log n) space.” Information Processing Letters 20.2 (1985): 71–74. ↩

Frandsen, Gudmund S., and Carl Sturtivant. “What is an Efficient Implementation of the λ-calculus?” Conference on Functional Programming Languages and Computer Architecture. Springer, Berlin, Heidelberg, 1991. ↩