22 1 / 2012
It’s not dead
We haven’t given up yet; unfortunately, real work is getting in the way.
Currently the lexer/parser/AST is pretty much settled, and we have an implementation of Hindley–Milner type inference with Haskell 98-style typeclasses. At the moment the main task is the tedious part: modules and namespaces, all the stuff to do with resolving references across modules, inferring types with dependencies across modules, circular dependencies, and so on. After that we need to sort out the transformation that reifies typeclasses.
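For context, the usual way to reify typeclasses is the dictionary-passing transformation; here’s a minimal sketch of the idea (my own illustration in TypeScript, with made-up names - not Tap’s actual transformation):

```typescript
// The class `Show a` becomes a record of its methods...
interface Show<A> {
  show: (x: A) => string;
}

// ...and each instance becomes a concrete dictionary value.
const showNum: Show<number> = { show: (n) => n.toString() };
const showBool: Show<boolean> = { show: (b) => (b ? "yes" : "no") };

// A constrained function `display :: Show a => a -> String` gains the
// dictionary as an explicit extra parameter.
function display<A>(dict: Show<A>, x: A): string {
  return dict.show(x);
}
```

Each constraint in a signature becomes an extra dictionary parameter, and instance resolution turns into ordinary value passing.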
We’ve got a good idea of where we want to go still, so once things get back to normal at work I’ll get my head back into it and hopefully we’ll make some good progress again.
07 9 / 2011
Occasionally you make a fairly simple change to a codebase and it magically makes everything better. We ran into one of those situations today.
Functions in Tap have always been intended to behave as if they are curried, but until today we’d been explicitly handling the case where a function was applied with fewer arguments than it was defined with - so it was more a case of partially applying functions automatically than of representing them as curried.
Today we changed the internal representation of functions and their types to always be unary. This one little change fixed a tricky bug we’d discovered in type inference, removed the need for the code that dealt with partial application, and gave us automatic uncurrying for free - something the previous approach wouldn’t have handled when a function was explicitly defined as curried.
To keep things friendly from a usability point of view, functions can still be defined and called as normal (by normal I mean as if they have multiple arguments); it’s just that this is now achieved by syntactic sugar, with a minor change to the AST-to-IR translation scheme.
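As a sketch of the kind of desugaring involved (hypothetical types and names, not Tap’s actual AST or IR):

```typescript
// Minimal IR where every function is unary.
type Term =
  | { kind: "var"; name: string }
  | { kind: "lam"; param: string; body: Term }
  | { kind: "app"; fn: Term; arg: Term };

const vr = (name: string): Term => ({ kind: "var", name });
const lam = (param: string, body: Term): Term => ({ kind: "lam", param, body });
const app = (fn: Term, arg: Term): Term => ({ kind: "app", fn, arg });

// A multi-parameter definition desugars to nested unary lambdas:
//   (fn [x y] body)  =>  (fn [x] (fn [y] body))
function curryLam(params: string[], body: Term): Term {
  return params.reduceRight((acc, p) => lam(p, acc), body);
}

// A multi-argument call desugars to a chain of unary applications:
//   (f a b)  =>  ((f a) b)
function curryApp(fn: Term, args: Term[]): Term {
  return args.reduce((acc, a) => app(acc, a), fn);
}
```

With this in place, applying a two-parameter function to one argument just yields the inner lambda - partial application needs no special handling at all.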
Perhaps not very revelatory, but it’s rare to find solutions to problems that actually make your code simpler and result in an even better system!
15 8 / 2011
It’s been ages since the last update, but we’ve still been working on the language when we have a chance. There have been some fairly major changes and steps forward.
We dropped the syntax we had been using for the language in favour of a very simple grammar based on s-expressions. This may change back at a later point, but for now it makes changes to the language syntax and grammar much easier while it’s in development.
Type inference has been fully implemented, based on the Hindley–Milner algorithm. We also have Haskell-like type classes for types of kind *; that is something that will definitely be improved later, but again, it’s good enough for now while we develop the language.
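To give a flavour of what sits at the core of the algorithm, here’s a minimal sketch of unification with an occurs check (illustrative only - the names and structure are my own, and Tap’s real implementation differs):

```typescript
// A tiny type language: variables, constructors, and function types.
type Type =
  | { kind: "tvar"; name: string }
  | { kind: "tcon"; name: string }
  | { kind: "tfun"; from: Type; to: Type };

const tvar = (name: string): Type => ({ kind: "tvar", name });
const tcon = (name: string): Type => ({ kind: "tcon", name });
const tfun = (from: Type, to: Type): Type => ({ kind: "tfun", from, to });

type Subst = Map<string, Type>;

// Apply a substitution to a type.
function applySubst(s: Subst, t: Type): Type {
  switch (t.kind) {
    case "tvar": return s.get(t.name) ?? t;
    case "tcon": return t;
    case "tfun": return tfun(applySubst(s, t.from), applySubst(s, t.to));
  }
}

// Occurs check: does variable n appear in t? Prevents infinite types.
function occurs(n: string, t: Type): boolean {
  switch (t.kind) {
    case "tvar": return t.name === n;
    case "tcon": return false;
    case "tfun": return occurs(n, t.from) || occurs(n, t.to);
  }
}

function bind(n: string, t: Type): Subst | null {
  if (t.kind === "tvar" && t.name === n) return new Map();
  if (occurs(n, t)) return null;
  return new Map([[n, t]]);
}

// Unify two types, returning a substitution, or null on failure.
function unify(a: Type, b: Type): Subst | null {
  if (a.kind === "tvar") return bind(a.name, b);
  if (b.kind === "tvar") return bind(b.name, a);
  if (a.kind === "tcon" && b.kind === "tcon")
    return a.name === b.name ? new Map() : null;
  if (a.kind === "tfun" && b.kind === "tfun") {
    const s1 = unify(a.from, b.from);
    if (s1 === null) return null;
    const s2 = unify(applySubst(s1, a.to), applySubst(s1, b.to));
    if (s2 === null) return null;
    // Compose: apply s2 across s1's range, then keep s2's own bindings.
    const out: Subst = new Map(s2);
    for (const [k, v] of s1) out.set(k, applySubst(s2, v));
    return out;
  }
  return null;
}
```

For example, unifying `a -> Int` with `Bool -> b` yields the substitution `{a ↦ Bool, b ↦ Int}`.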
Hopefully we’re not too far off having a self-hosting compiler, at which point we’ll probably have a lot more thoughts on where the language is lacking!
I have a post half written about switching over to Scala’s parser combinators (retiring our old ANTLR grammar), but it’s a bit behind where we’re at now, so I’ll perhaps revise and post that at some point, as I remember it being pretty hard to get the parser off the ground at first.
26 3 / 2011
call/cc and “fork/cc”
Made some good progress this week with getting the CPS transform working as it should, to the point where call/cc is usable. I haven’t covered every expression in the language yet, so there is more work to do to ensure proper output is generated, and currently it doesn’t allow recursive functions without explicitly using Y combinators – this is something we think the compiler should definitely handle for the programmer, though.
While working on this it occurred to me that in an environment that usually supports asynchronous calls it would probably be useful to be able to trigger them manually, not just when using external calls in the host environment. What I mean to say is we needed something that bypasses continuation-like behaviour for a function call, much like forking a process. This new in-built function – “fork/cc” – accepts a function or continuation as an argument but ignores the result and immediately continues in the current context rather than waiting for a return.
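A rough sketch of those semantics in CPS terms (my illustration, with made-up names; the synchronous execution here is an assumption - in a real host environment the forked computation would be scheduled asynchronously):

```typescript
// CPS computations: a Cont<A> takes "the rest of the program" as a callback.
type Cont<A> = (k: (a: A) => void) => void;

// forkCC runs a computation but ignores its result and resumes the
// current continuation immediately, instead of waiting for a "return".
function forkCC<A>(comp: Cont<A>): Cont<void> {
  return (k) => {
    comp(() => {}); // kick off the forked computation, discard its result
    k(undefined);   // continue in the current context right away
  };
}
```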
Adding “fork/cc” does have the downside of re-introducing the potential of the complexities inherent in a system based on asynchronous callbacks – one of the reasons we adopted the CPS approach in the first place – but at least now it will be introduced explicitly by the programmer, and therefore should be easier to manage.
10 3 / 2011
CPS and Y combinators
So after much discussion over the last week or two we finally started work on adding a translation-to-CPS stage to the compiler today.
I’m pretty sure it will help resolve some problems I was having trying to maintain SSA form while constructing the IR graph with immutable data structures, as well as letting us implement a feature I’ve become a little obsessed with recently: call/cc.
Call/cc should allow us to do some pretty great stuff that will help when writing asynchronous code, as well as giving us the ability to create custom control operators, along with the usual suspects like break/continue/return, without having to add these explicitly to the language. Really I’m mostly interested in it for its ability to help manage asynchronous processes, and it probably would be easier to just add features to the language for dealing with that, but the more general approach should give us a better base to build on.
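For anyone unfamiliar with it, call/cc can be sketched directly on top of a CPS representation (my own illustration, not Tap’s code):

```typescript
// A Cont<R, A> produces a final answer R once given a continuation for A.
type Cont<R, A> = (k: (a: A) => R) => R;

// callCC hands f an "escape" function; invoking the escape abandons the
// rest of f and resumes the captured continuation k with the given value.
function callCC<R, A, B>(
  f: (escape: (a: A) => Cont<R, B>) => Cont<R, A>
): Cont<R, A> {
  return (k) => f((a) => () => k(a))(k);
}

// Example: search a list recursively, jumping straight out with the
// first match rather than unwinding the recursion normally.
function findFirst(xs: number[], p: (x: number) => boolean): number {
  const prog = callCC<number, number, number>((exit) => {
    const loop = (i: number): Cont<number, number> => {
      if (i >= xs.length) return (k) => k(-1); // normal return path
      if (p(xs[i])) return exit(xs[i]);        // escape with the match
      return loop(i + 1);
    };
    return loop(0);
  });
  return prog((x) => x);
}
```

The same escape trick is what gives you break/continue/return-style operators as ordinary library functions.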
After spending a while working out what the CPS-transformed versions of various pieces of code would look like (it took a while to realise that we’d have to add something equivalent to an SSA Φ function to deal with assignments made in conditional branches), we seemed to be flying. Then I tried converting a loop, and things got tricky again.
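That Φ-equivalent works out as a “join” continuation; a tiny hand-written example of the idea (hypothetical, not actual compiler output):

```typescript
// Source:   if (c) { x = 1 } else { x = 2 }; use(x)
// In CPS, the join continuation's parameter carries whichever value
// the taken branch assigned, playing the role of x3 = phi(x1, x2).
function branchy(c: boolean, use: (x: number) => number): number {
  const join = (x: number): number => use(x);
  return c ? join(1)  // then-branch: x1 = 1
           : join(2); // else-branch: x2 = 2
}
```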
There are no explicit loop expressions in the language, so looping is done with recursive functions, and I realised the only way it was going to work was by using a Y combinator to let the loop function reference itself. At first I dropped in a non-CPS implementation of the Y combinator, as “cheating” there wouldn’t matter - I could still implement the actual loop function using CPS - but then I figured it would be good to do it all properly and implement the Y combinator using CPS as well. The rest of the afternoon was spent puzzling over how to do this, and in the end I decided it would be easier to implement the transformation in the compiler and let it do the work for us, converting our standard Y combinator into the CPS version instead.
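For reference, the strict-language variant of the Y combinator (often called Z) can be sketched like this - my illustration, where the recursive interface type is just what a typed host language needs to permit self-application:

```typescript
type Fn<A, B> = (a: A) => B;

// A callable that accepts itself as an argument.
interface Rec<A, B> {
  (self: Rec<A, B>): Fn<A, B>;
}

// The eta-expansion `(a) => ...(a)` delays the self-application so it
// doesn't loop forever under eager evaluation.
function fix<A, B>(f: (self: Fn<A, B>) => Fn<A, B>): Fn<A, B> {
  const g: Rec<A, B> = (self) => (a) => f(self(self))(a);
  return g(g);
}

// A "loop" written as a non-recursive function, tied together with fix.
const factorial = fix<number, number>(
  (self) => (n) => (n <= 1 ? 1 : n * self(n - 1))
);
```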
So, not an entirely successful day, but I’m quite excited by what this transformation will do for the compiler once we get it working as we’d like, as it should make certain optimisations a lot easier as well as giving us the power of call/cc.