Better poly than sorry!

Awesome WM and an animated 3x3 grid of virtual desktops

My new window manager is Awesome

I wrote a few posts in the past about my window manager - I was using StumpWM, a WM written in Common Lisp. It's great, and I used it for a long time, but I had to switch: I got a 4K laptop from work, and Stump couldn't handle that - at least not without a lot of work. The fact that Stump draws everything with raw X APIs (via XCB) doesn't make it any easier, either.

So, reluctantly, I was forced to switch to GNOME. It's a surprisingly solid environment, as long as you install a few add-ons and disable most of the "user friendly" features (I wonder who ever decided that reversed scroll direction is "natural"!).

Unfortunately, GNOME is not an interactively extensible piece of software. It is scriptable with JavaScript (and probably other languages), but it doesn't offer a REPL where you could explore the system from the inside, access and modify all global state, and add or edit functions on the fly. These are important tools that help you customize the software to your needs - a lot faster than you could without them.

I started looking for a solution. I wanted a reasonably stable, maintained window manager with (or even built around) a REPL - a simple requirement, but it filtered out 99% of the candidates, leaving just three:

Sawfish is scriptable with "a Lisp variant", described as a "mix of Emacs Lisp and Scheme", though I'm not sure what exactly that's supposed to look like. Unfortunately, it looked the least maintained of the bunch, so I decided to pass on it.

XMonad uses Haskell for scripting, which is an interesting choice. I don't know Haskell - I just skimmed a few books on it and played in the REPL, but that's not the level of "knowing" a language - but I could learn. After taking note of this, I went to examine the third option.

As the title already says, I ultimately chose Awesome WM. My main reason (besides the name) was that it is built around a library of lightweight widgets which render using the Cairo library, not the antiquated X widgets (as in StumpWM). Furthermore, it is scriptable in Lua, a language which I know, and which is also a target for a transpiler I happened to be interested in. The reasonably complete documentation helped, too, though a few areas are lacking.

I will cover the rest - how I set up Awesome, explored its library (called "awful"...), re-learned Lua from the ground up, and how I ended up forking the mentioned language, you know, the uninteresting details - in follow-up posts. For now, I just want to share a minor success: coding a complete widget by myself!

3x3 virtual desktop grid

Right, it's about virtual desktops. All modern OSes offer functionality where you can switch to another "desktop" with its own set of windows, then switch back. In some cases changing the desktop is accompanied by an animation (e.g. sliding vertically or horizontally), and there's often a "zoomed out" mode for viewing windows from all desktops at once. In the most popular implementations, the desktops are chained in a straight line, often without even a wraparound.

Awesome, of course, has virtual desktops, though it calls them tags: switching to desktop number 3 makes only the windows tagged with "3" visible. The default configuration has ten tags, but users can change that however they wish. With a larger number of desktops, though, comes another problem: how do you remember which window is on which desktop?

My solution to that problem, ever since my early Linux experiences with Enlightenment 0.16, has been arranging the desktops in a grid. I'm not sure what it is, but there's something about path-finding and spatial metaphors that our brains seem to like a lot... Anyway, Awesome sadly lacked the ability to display a grid of my desktops - the "taglist" is a single row of labels (often numerical), like here:

So the first thing I want to share is that, after a long while, I managed to display a free-floating, three-by-three taglist in Awesome! It was much more troublesome than I expected, due to the dual (schizophrenic) nature of the widget system (X / Cairo) and the lack of good docs on Awesome's architecture. Well, after a lot of tinkering, I made it! See the screenshot below (on the right).

Well, it was almost perfect - but no matter where I placed it or how I played with opacity, there was always a moment where it displayed over some important detail, requiring manual intervention. I realized that, to be truly ideal, it would need to hide and show itself at just the right moments: show after a desktop change (or a key shortcut, or mouse-over), then hide a few seconds later - or immediately, if clicked.

This time it was a bit easier, mostly because I had already learned most of the API, but also thanks to Lua coroutines. Awesome has no direct support for animations, doesn't use them for anything (that I'm aware of) and apparently is not very interested in them. Fortunately, Awesome provides a timer: a piece of code which calls a callback function after some time, then repeats (or not). That is enough to let users implement their own stepping logic. Thanks to Lua coroutines, you can describe your animations as simple functions, wrap them in a coroutine and plug them straight into a timer. Here's the result of my efforts in action:

The code for this is in the repo on Github (as usual), but if you'd like to use it, ping me first - it's my personal config and I didn't bother with too much cleanup, but I can do it if it's going to be useful to someone.
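As an aside, the timer-plus-coroutine trick described above is easy to sketch. Here's a minimal Python model of it (Python generators standing in for Lua coroutines; the names and the timer loop are made up for illustration and are not Awesome's actual API):

```python
# Sketch: an animation written as plain sequential code; each `yield`
# suspends it until the next timer tick, just like coroutine.yield in Lua.

def fade_out(steps=5):
    """Fade a widget's opacity from 1.0 to 0.0, one step per tick."""
    opacity = 1.0
    for _ in range(steps):
        opacity -= 1.0 / steps
        yield round(opacity, 2)  # in Awesome, you'd update the widget here

def run_with_timer(animation):
    """Stand-in for a repeating timer: resume the coroutine once per tick."""
    frames = []
    for frame in animation:  # each iteration models one timer callback
        frames.append(frame)
    return frames

print(run_with_timer(fade_out()))  # prints [0.8, 0.6, 0.4, 0.2, 0.0]
```

In the real config, the timer callback resumes the coroutine and stops the timer once the coroutine finishes.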


Extending Io to add tuple unpacking (aka destructuring bind)

NOTE: As usual, all the code is on GitHub
NOTE: You can learn more about Io from its home page and its GitHub repository.

What is Io? A quick review

Io is a programming language, obviously, created in the early 2000s by Steve Dekorte. It was featured in the original Seven Languages in Seven Weeks[1] book, which is how I learned of its existence. It's a general-purpose, interpreted language, with semantics inspired by Smalltalk, Self and Lisp, and a look and feel reminiscent of Tcl and Scheme.

While Tcl tries to answer the question of what can be done with just strings (and lists), and Scheme shows what happens when everything is a lambda (and an s-exp), Io explores the consequences of assuming that everything is an object (and a message). They all share the conceptual purity and elegant minimalism of their designs and implementations. It would be a mistake, however, to think of them simply as works of art - this may come as a surprise to the uninitiated, but despite the simplicity (some would say, because of it) all these languages are mind-bendingly expressive and powerful. It's true that you need to change the way you approach problems to use their full power - but once you do, you will quickly realize that that power is over 9000!

Even though I talk about simplicity, Io is a fairly complete and usable high-level language. Its strongest superpower is probably its dynamism: nearly everything, everywhere, at any time is inspectable and changeable from within the language, including the syntax. As for other features: from Smalltalk (and Ruby) it takes its purely object-oriented character, while Self and Lua inspired its object system, which is prototypal (you probably know this style of OO from JavaScript) and supports multiple inheritance. It has a very simple syntax, which enables its incredibly powerful and sophisticated meta-programming tools.
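To illustrate the prototypal, multiple-inheritance flavor of Io's object system, here's a toy Python sketch (all names are invented for illustration; this is not how Io is implemented) where slot lookup walks a list of prototypes:

```python
# Toy model of Io-style prototypes: an object holds its own slots plus a
# list of prototypes; slot lookup falls back to the prototypes, depth-first.

class Proto:
    def __init__(self, *protos, **slots):
        self.protos = list(protos)  # like Io's protos list
        self.slots = dict(slots)

    def get(self, name):
        """Find a slot locally, then search each prototype in turn."""
        if name in self.slots:
            return self.slots[name]
        for proto in self.protos:
            try:
                return proto.get(name)
            except KeyError:
                continue
        raise KeyError(name)

    def clone(self, **slots):
        """Io-style clone: a new object with self as its only prototype."""
        return Proto(self, **slots)

animal = Proto(sound="...")
dog = animal.clone(sound="woof")
puppy = dog.clone()           # overrides nothing of its own
print(puppy.get("sound"))     # prints woof, found via the prototype chain
```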

Non-blocking, asynchronous input/output and concurrency support (with coroutines) are important parts of the language (see the example on the left) - it even provides automatic deadlock detection. It has a relatively simple C API for both embedding and extension, and a built-in CFFI for wrapping C libraries.

There are of course some disadvantages and problems: for example, Io is not very performant at this point[2] and should not be used for number-crunching (unless it can be vectorized[3]) or other performance-critical code. On the other hand, it works well even for bigger, complex programs which are IO-bound - all kinds of servers and (micro)services come to mind. Its incredible expressive power and ease of DSL creation allow you to express business logic with minimal ceremony and just the right level of verbosity, so it can also be a good tool for configuring larger systems (which you'd otherwise do with XML or YAML).

Another thing is that it isn't very popular at the moment (to put it mildly), which means the selection of ready-to-use libraries is narrower than with more mainstream languages. There are use cases where that doesn't matter, though, like embedding it as a scripting language in larger programs. It could also be used as a Bash replacement for more complex scripting - it starts up quickly enough.

The problem, the hack and its effects

I picked Io up because it seemed a good fit for one of my projects. When starting it, I considered various languages, but, taking all the requirements into consideration, only a couple were viable: LPC, Pike, Lua and Io. The process of eliminating all the other languages deserves its own post, but for now, let's focus on the winner, Io.

One way or another, my project evolved, and at some point I decided that I needed a port of PyParsing[4] to Io. After defining some helper methods it was an easy job, with a nearly one-to-one correspondence between lines of code in the two languages. This is important, because in that case it's easy to convert between the languages with a set of simple regexes run over the lines of a file. Io's expressiveness allowed me to emulate most of the Python features used in the library - with the glaring exception of tuple unpacking, also known as destructuring bind or (a weak version of) pattern matching.

As a quick reminder, tuple unpacking is a feature of the assignment operator in Python (and many other languages), used to extract elements from a sequence and give each of them a name. It looks like this in Python:

  def some_fun():
      some_tuple = (1, 2, 3)
      (one, two, three) = some_tuple
      assert one == 1 and two == 2 and three == 3

It's very convenient, especially if you keep most of your data in tuples of known length, which is how most of PyParsing is written. As mentioned, it's missing from Io, which made the translation process more tedious than it needed to be.

It was a surprising omission on Io's part, considering that the language already gives you ways of defining custom operators, including assignment operators. To simplify porting Python code (and for fun, obviously!), I decided to try defining a left-arrow (<-) operator which would implement the semantics of destructuring bind. In more specific terms, I wanted to extend Io so that the following code is valid and gives the expected results:

  someMethod := method(
      object := Object clone
      object [one, two, three] <- [1, 2, 3]
      assert(object one == 1)
      assert(object two == 2)
      assert(object three == 3)
  )

It should be possible to do this thanks to the mentioned features of Io: its extensible parser, which allows you to define new operators, and the lazy, on-demand evaluation of method arguments. Actually, the code for doing so is hilariously simple - just a couple of lines[5]:

  Object destructure := method(
      # target [a, b] <- list(9, 8) becomes: target a := 9; target b := 8
      msg := call argAt(0)
      lst := call evalArgAt(1)
      target := call target
      msg arguments foreach(i, x,
          target setSlot(x name, lst at(i))
      )
  )

  # inform the parser about our new operator
  OperatorTable addAssignOperator("<-", "destructure")

It should have worked! - but it didn't ☹. Why? Also, what's perhaps more important to you right now (if you don't know Io), how was it supposed to work in the first place, anyway? And then, finally, is it even possible to make it work? (hint: yes!)

Relevant Io semantics explained

One trick to reading the above code is to realize that whitespace between words is not a separator, but the attribute-access operator. In other languages, attribute access is usually written with a dot (.) - so a literal translation to JavaScript (for example) would look like this:

  Object.destructure = function () {
      let msg = call.argAt(0)
      let lst = call.evalArgAt(1)
      let target = call.target
      msg.arguments.foreach((i, x) =>
          target.setSlot(x.name, lst.at(i))
      )
      return target
  }

Now, to explain the rest of the example, we just need to know what the call object is, what attributes it has and what it does. It's really simple (in a monad-like kind of way...): the call is just a runtime representation of a message send! (Just a monoid in the category of endofunctors, right...)

Joking aside, what is a message send? Known as a method call in other languages, a message send is simply another name for the syntactic construct describing an invocation of a method with given arguments on some object.

I prepared a little diagram[6] illustrating the concept:

Some additional description:

  • message arguments - any expression is allowed, not just simple variables or literals. Expressions passed as arguments are only evaluated on demand. This is a very important feature: it means you can pass unquoted (but still syntactically correct) code as an argument, and it won't be evaluated unless the body of the method explicitly extracts and evaluates it. You can access the list of unevaluated argument expressions via call message arguments, and you can access individual arguments with the shortcut methods call argAt(n) and call evalArgAt(n).
  • target - the object whose method is going to get called. The target may be a variable name in the simplest case, but in general it is an arbitrary expression, which is evaluated to obtain a reference to an object. If the Io interpreter encounters a message send without a target, the target defaults to the sender (a.k.a. context, see below) of the call.
  • context - the dynamic environment in which the message send is going to be evaluated. It has no compile-time representation; it only exists at runtime, and it's simply a mapping from variable names to object references, like what you get out of the locals() function in Python. It is resolved at runtime and is accessible via the call sender attribute.
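To make the pieces above concrete, here's a tiny Python model of a call object (purely illustrative - real Io messages carry much more), with arguments stored as unevaluated thunks, mirroring call argAt(n) versus call evalArgAt(n):

```python
# Toy model: a message send is a target, a parsed message (name + raw
# argument expressions) and a sender context; arguments are thunks, so
# they are only evaluated when explicitly asked for.

class Message:
    def __init__(self, name, args):
        self.name = name   # the method name
        self.args = args   # unevaluated argument expressions

class Call:
    def __init__(self, target, message, sender):
        self.target = target    # the object receiving the message
        self.message = message  # the parsed message send
        self.sender = sender    # context: a name -> value mapping

    def arg_at(self, n):
        """Like call argAt(n): return the raw, unevaluated expression."""
        return self.message.args[n]

    def eval_arg_at(self, n):
        """Like call evalArgAt(n): evaluate the expression on demand."""
        return self.message.args[n]()

evaluated = []
def expensive():
    evaluated.append(True)
    return 42

c = Call(target=object(), message=Message("m", [expensive]), sender={})
c.arg_at(0)                    # nothing has been evaluated yet
assert evaluated == []
assert c.eval_arg_at(0) == 42  # evaluated only now
```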

Io has no syntax other than the message send, which means that message sends and expressions are the same thing. Io doesn't have statements (in the imperative sense) at all, which in turn means that the message send is the whole syntax of Io. As is normal for expressions, message sends return a value when evaluated.

A call, then, represents a (parsed) message send coupled with the environment (context) in which it is being evaluated. It is accessible (via interpreter magic) in block and method bodies, not unlike the arguments object in JavaScript. Such a call is an object of type (i.e. with a prototype set to) Call, which has message, sender (context) and target attributes. Further, message is an object of type Message, which has name and arguments attributes.

That's it - this is an almost complete description of Io syntax. As you can see, Io is conceptually very simple and consistent, and it manages to stay very expressive thanks to this.

Defining operators

One of the missing pieces in the above description is the syntactic sugar which Io offers to enable operators. In Io, operators are simply messages which don't need their arguments parenthesized. The parser maintains a special object, called OperatorTable, which contains the names of all the operators along with their corresponding precedence. It then automatically inserts parentheses around the values to the right of an operator, taking precedence into account. For example:

  1 + 2 * 2
  # is converted (while parsing) to:
  1 +(2 *(2))
  # ==> 5

There are languages, such as Smalltalk, where this isn't the case: they use strict left-to-right evaluation order, only alterable with explicit parentheses. This parenthesis inference in Io is used for arithmetic operators, string concatenation operators and so on, but it's also used for faking statements, like return, break or yield.

Parse transforms of assign operators

Operators of a special kind, called assign operators, are parsed in yet another way. In this case, the operator name (e.g. :=) is first mapped to a method name (e.g. setSlot) via the OperatorTable object, and then the call is transformed like this:

  someObject someName := someExpression
  # is converted (while parsing) to:
  someObject setSlot("someName", someExpression)

It should now be easy to see why my attempt at writing the destructure operator (as shown above) failed. The parse transform assumes that the expression to the left of an assign operator is always a simple name. It then puts quotes around that name and passes the resulting string as the first argument to the method implementing the operator. If that assumption is broken (e.g. by operating on a more complex expression), the conversion to a quoted string fails and the whole thing errors out.

This is enforced during parsing, before any Io code has a chance to run, so it's impossible to change from within the language. If you take a second to think about it, that restriction (to simple names only) doesn't make much sense and is not present anywhere else in the language. To fix it, however, I had to delve deeper, into the C code of the Io interpreter.

Deep dive into Io interpreter

Io is implemented in C, with the implementation consisting of a custom, tri-color mark-and-sweep[7] garbage collector, a low-level coroutine[8] implementation and an interpreter[9].

It took me more time than I'm comfortable admitting to find the relevant code. While documentation for Io exists, it's not very extensive - there's little information about the internals of the interpreter and its overall design. However, the bigger problem was wrestling with CMake and my own lack of knowledge about typical C tooling. It's slightly embarrassing, since I worked with C and later C++ for years (although that was decades ago at this point...). Once I brushed up on my skills in this area, locating the place to fix and developing the patch fortunately wasn't that hard.

The IoMessage_opShuffle.c module

As mentioned above, operators in Io are implemented as a parse transform. Most of it lives in the IoMessage_opShuffle.c file. The main definitions there are the Level and Levels structs:

  enum LevelType {ATTACH, ARG, NEW, UNUSED};

  typedef struct {
      IoMessage *message;
      enum LevelType type;
      int precedence;
  } Level;

  typedef struct {
      Level pool[IO_OP_MAX_LEVEL];
      int currentLevel;

      List *stack;
      IoMap *operatorTable;
      IoMap *assignOperatorTable;
  } Levels;

The Io C source[10] is written in an object-oriented style, with C struct types treated as classes and struct instances as objects. Following this style, function names are prefixed with the name of the class whose instances they operate on; they also always take a pointer to the structure as their first argument (most often called self, like in Python).

Ignoring the boilerplate code for constructing Levels objects out of raw Messages, the function which does the actual shuffling of operators is called Levels_attach, with the following signature:

  void Levels_attach(Levels *self, IoMessage *msg, List *expressions)

IoMessage objects contain an IoMessage *next field, which makes them a low-level implementation of a linked list (not to be confused with the List type!). Despite the singular form of msg, it represents both a single message and a list of messages (just like char * represents both a string and a pointer to its first character). The function takes a message, transforms it (warning: in-place!) and appends the following (next) messages to the expressions list. The IoMessage_opShuffle function (defined in lines 549-573), which calls Levels_attach, does so in a loop and repeats until there are no messages left to process:

  List *expressions = List_new();

  List_push_(expressions, self);

  while (List_size(expressions) >= 1) {
      IoMessage *n = List_pop(expressions);

      do {
          Levels_attach(levels, n, expressions);
          List_appendSeq_(expressions, DATA(n)->args);
      } while ((n = DATA(n)->next));
  }


The Levels_attach function has to handle at least two cases: normal operators and assign operators. We're currently not interested in normal operators - what we need is to locate the code which handles messages of the form:

  optionalTarget msg(name1, name2) assignOp(arg1, arg2) nextMsg

Fortunately, it was easy to find - there's even a comment showing our exact case next to the code, in lines 396-400. The problem is that this case is apparently considered an error and handled as follows:

  if (IoMessage_argCount(attaching) > 0) { // a(1,2,3) := b ;
      IoState_error_(state, msg,
          "compile error: The symbol to the left of %s cannot have arguments.",
          CSTRING(messageSymbol));
      return;
  }

Right, but Steve - why?! I'd like to know why that restriction was put in place; my intuition tells me this code was written relatively early in the language's development and later nobody wanted (or needed) to touch it[11]. Well, it at least explains why my initial attempt failed.

The patch - proper handling of our case

Well, at this point I at least knew exactly where to put my code for handling this! After checking out the source and setting up a build environment, I started implementing. It wasn't as easy as I'd like: first, the internal APIs are mostly undocumented (which is rather common for internal APIs) and second, most functions which implement "methods" are defined using the IO_METHOD macro, which my "Go to definition" plugin didn't like ☹. Other than that, the mutable nature of IoMessage objects and the need to deep copy (not just clone) them[12] were a bit of a PITA.

Still, after a bit of tinkering, a lot of printfs here and there and a fair share of segfaults, I managed to produce a working implementation. Actually, I'm still surprised that it works... but it does! Let me show you (assuming the destructure operator is defined as shown at the beginning):

  o := Object clone
  o [wa, wb, x] <- list(3, 123)
  o println

  # prints:
  #  Object_0x19bcb60:
  #  wa               = 3
  #  wb               = 123
  #  x                = nil

As you can see, it works and returns the desired results! The patch to Levels_attach is not too long (about 20 LOC) and not very complicated, which was a pleasant surprise. Let's walk through what happens in it, line by line. It goes like this:

  Level *currentLevel = Levels_currentLevel(self);
  IoMessage *attaching = currentLevel->message;
  IoSymbol *setSlotName;

  /* ... */

  if (IoMessage_argCount(attaching) > 0) { // a(1,2,3) := b ;
    // Expression: target msgName(v1, v2, v3) assignOp   v4   ; rest
    //                    ^^^^^^^^^^^^^^^^^^^ ^^^^^^^^  ^^^^  ^^^^^^^
    //                      slotNameMessage     msg      val    rest
    // becomes:    target assignOpName(msgName(v1, v2, v3), v4) ; rest

    IoSymbol *tmp = IoSeq_newSymbolWithCString_(state, "");
    setSlotName = Levels_nameForAssignOperator(
      self, state, messageSymbol, tmp, msg);

    IoMessage *slotNameMessageCopy = IoMessage_deepCopyOf_(attaching);

    IoMessage *slotNameMessage = attaching;
    DATA(slotNameMessage)->name = setSlotName;
    DATA(slotNameMessage)->args = List_new();

    IoMessage_rawSetNext_(slotNameMessageCopy, NULL);
    IoMessage_addArg_(slotNameMessage, slotNameMessageCopy);

    IoMessage *value = IoMessage_deepCopyOf_(DATA(msg)->next);
    IoMessage_rawSetNext_(value, NULL);
    IoMessage_addArg_(slotNameMessage, value);

    IoMessage *rest = IoMessage_deepCopyOf_(DATA(DATA(msg)->next)->next);
    IoMessage_rawSetNext_(slotNameMessage, rest);
  }


Let's start with lines 13-16:

    IoSymbol *tmp = IoSeq_newSymbolWithCString_(state, "");
    setSlotName = Levels_nameForAssignOperator(
      self, state, messageSymbol, tmp, msg);

setSlotName here is a pointer to a Symbol struct (a.k.a. an object instance), containing a string extracted from the OperatorTable - the one we know as the second argument in the call to OperatorTable addAssignOperator. In other words, it's the name of the method which implements the given operator. Once we have the name, in lines 18-22:

    IoMessage *slotNameMessageCopy = IoMessage_deepCopyOf_(attaching);

    IoMessage *slotNameMessage = attaching;
    DATA(slotNameMessage)->name = setSlotName;
    DATA(slotNameMessage)->args = List_new();

we create a copy of the current message and start modifying the original: we set its name to the one obtained above and its argument list to an empty list. Then, in lines 24-25:

    IoMessage_rawSetNext_(slotNameMessageCopy, NULL);
    IoMessage_addArg_(slotNameMessage, slotNameMessageCopy);

we mutate slotNameMessageCopy by cutting off its tail (as mentioned, every message carries a pointer to the following messages) and attach it as the first argument of the original slotNameMessage. This is the most important change to the logic of Levels_attach: without it, the destructure operator wouldn't work.

Further, in lines 27-29:

    IoMessage *value = IoMessage_deepCopyOf_(DATA(msg)->next);
    IoMessage_rawSetNext_(value, NULL);
    IoMessage_addArg_(slotNameMessage, value);

we take the first message to the right of the original operator (<- in our case, before its conversion to a method name) - in other words, the value that we want to destructure - and add it as the second argument to the operator method. Again, we need to cut off its tail; otherwise we'd pull the following messages into the argument list as well.

Finally, in lines 31-32:

    IoMessage *rest = IoMessage_deepCopyOf_(DATA(DATA(msg)->next)->next);
    IoMessage_rawSetNext_(slotNameMessage, rest);

we attach (a copy of) the rest of the messages as the tail of slotNameMessage. This completes the transformation.
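Stripped of the C memory management, the whole rewrite can be modeled in a few lines of Python (a simplified sketch with invented names - Msg and shuffle_assign are not part of the Io sources):

```python
# Model of the patched transform:
#   target lhs(args) <- value ; rest
# becomes:
#   target destructure(lhs(args), value) ; rest

class Msg:
    def __init__(self, name, args=None, next=None):
        self.name, self.args, self.next = name, args or [], next

    def __repr__(self):
        s = self.name
        if self.args:
            s += "(" + ", ".join(map(repr, self.args)) + ")"
        return s + (" " + repr(self.next) if self.next else "")

def shuffle_assign(attaching, op_method_name):
    """attaching is the message left of the operator; attaching.next is the
    operator, whose next is the value, whose next is the rest."""
    op = attaching.next
    value, rest = op.next, op.next.next
    lhs = Msg(attaching.name, attaching.args)  # copy of the left-hand side
    attaching.name = op_method_name            # rename the original message
    attaching.args = [lhs, Msg(value.name)]    # lhs and value become its args
    attaching.next = rest                      # reattach the tail
    return attaching

expr = Msg("[wa, wb]",
           next=Msg("<-", next=Msg("list(3, 123)", next=Msg("println"))))
print(shuffle_assign(expr, "destructure"))
# prints: destructure([wa, wb], list(3, 123)) println
```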

That's it!

To be honest, the whole thing took me nearly a year to finish (including writing this post). I could only work on it once in a while, and I hit a few roadblocks which took many sessions to work around. I mentioned it briefly before, but getting the Io code to compile was one such roadblock - I'd never used CMake before, for one, and some add-ons refused to compile on my system. It took me a while to sort all that out, and I couldn't start hacking without it.

With that done, I realized that I had no idea what was where in the codebase. I had a rough idea of the architecture, as it is mentioned in the docs, but at a level high enough to be (coupled with my lack of experience) mostly useless. I spent many an evening just reading the C sources, trying to familiarize myself with their style and design.

Once I felt vaguely comfortable with my knowledge, I started poking here and there with printfs. It's not that easy to print Io objects from the C side - there's a bit of ceremony involved. For example, to display the slotNameMessage I'd do:

  printf("slotNameMessage: %s\n",
         CSTRING(IoObject_asString_(slotNameMessage, msg)));

Memory management wasn't that big of an issue - Io objects are garbage collected, and I didn't need to allocate heap memory from C at all. Understanding how the GC works, and how the different structs are to be interpreted as classes in a class hierarchy, was a real challenge, though.

Once I understood most of IoMessage_opShuffle.c and implemented my fix, I realized that many parts of the interpreter could really use serious refactoring. My first reflex was to put the const qualifier almost everywhere - which backfired, because mutable state is everywhere, and it's hard to say which arguments to a function will be modified and which will be left alone. Some functions use multiple return statements, each in a surprising place; all these returns return nothing, basically acting as gotos to the end of the function.

The internal APIs for manipulating objects are underdocumented (yes, I know I said that already) and incomplete. Working at the C level is awkward because of that, as it's never clear whether you should call a method or a helper function. Methods are also defined inconsistently, either with the IO_METHOD macro or without. It took me a while to understand the DATA macro and why it is redefined for every Io object. There's a lot of commented-out code, and some areas of the code are simply a mess.

Despite all this, it was an interesting, if a bit long, journey. I learned a lot, which was my main goal anyway, and also managed to make the desired feature work, which was a nice by-product.

The End

If you've reached this point - thanks for reading. I hope it wasn't too boring a write-up. I'd be incredibly happy if this post inspired you to take a closer look at Io, to try to use it, or perhaps even to try developing it[13].

Despite some messy parts, the Io codebase is relatively short and simple, and Io as a language has a couple of features that make it a viable alternative to other languages in certain circumstances. With Io, you get Lisp-level meta-programming support without being tied to s-exp syntax. The ability to add your own syntax to the language, coupled with its incredible reflection support, makes molding the language to fit your problem domain a breeze. Unlike Scheme, Io is based on the familiar object-oriented metaphor, which makes it easier for most programmers to read and learn.

The slowness of Io - and I'm not even sure how big it is, I didn't measure - only means that there are probably many low-hanging optimizations to be made in its source. After reading the code I get the feeling that the authors wanted the language more than they wanted the implementation, meaning they chose to add features instead of polishing existing ones. It's actually a good strategy early in language development, and cleaning up the code and making it efficient is usually left to the preparations for a 1.0 release. It's just that Io died before the effort to make it 1.0-grade software even started.

I think it's still not too late, that Io still has potential and that it could, with time, be made into a serious competitor for Lua in some cases.

  • More precisely, "Seven Languages in Seven Weeks: A Pragmatic Guide to Learning Programming Languages", a book which could serve as a Polyglot Manifesto if only polyglot was more of a thing...
  • It could get much better with JIT compilation, but it's not implemented currently.
  • Io has built-in support for vectorized operations, somewhat similar to NumPy, but lower-level.
  • Python PEG library; I wanted to use it for parsing user commands.
  • The method is called "destructure", because it has a potential to cover more cases than just sequence unpacking: it could, for example, allow extracting values from dicts and other containers and support wildcards. The code for this is not shown, but it would be very similar.
  • To be honest, my wife made it for me - I'm hilariously incompetent in the graphics department. Thanks, honey!
  • Implemented in libs/garbagecollector/source. See GC page on Wiki for more info on the whole mark and sweep business.
  • Implemented partially in assembly, in libs/coroutine/source. There's also a Wiki page about co-routines.
  • It's implemented as a virtual machine and lives in libs/iovm/source.
  • BTW, I took some liberties with formatting to reduce the height and width of the examples, hope you don't mind.
  • The "clean up this method" comment suggests as much.
  • Honestly, I think I simply don't fully understand the IoMessage intended semantics - it should be possible to get away with just clones, I just didn't manage to find out how.
  • Let me know if you'd like to start hacking! You can find my email in contact page.

Adding string interpolation to Racket

A minor update: added an illustration and another interpolation implementation suggested in the HN comments.

NOTE: All the code, as usual, is on GitHub
NOTE: This is just an example of extending the Reader - it's not meant to be practical.

String interpolation is a convenient language feature which makes creating human-readable strings that contain variables easier. The classical approach to this in languages without string interpolation is something like printf - a function which accepts a format string and a variable number of arguments. The format string contains special sequences of characters which get replaced with the values of passed arguments, like this:

printf("An error occurred at %s:%d.", "file-name.c", 42);
// Displays: An error occurred at file-name.c:42.

This has various problems, for example the possibility of supplying more (or fewer) arguments than expected. String interpolation allows you to embed variables directly into the format string:

echo "An error occurred at $FILE:$LINE."

As with many features, some languages provide string interpolation and some don't. In the dark and troubled past of JavaScript, for example, some people switched to CoffeeScript for its string interpolation support (among other things). More recently, string interpolation came to Python in the form of f-strings (PEP-498) and to plain JavaScript in the form of template literals (supported by Babel and other transpilers). While the feature is most often associated with dynamically typed languages, some statically typed ones, for example C# and Swift, support it too.
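To make the contrast with printf concrete, here's what Python's f-strings, mentioned above, look like in a tiny self-contained example (the file name and line number are made up for illustration):

```python
file_name = "file-name.py"
line = 42

# The expressions inside {} are evaluated in the current scope,
# so there is no way to pass too few or too many arguments.
message = f"An error occurred at {file_name}:{line}."
print(message)  # An error occurred at file-name.py:42.
```
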

Help! No string interpolation in Racket!

Racket - like JS at one point - lacks string interpolation support. If Racket were JS, the only options would be either to write a transpiler or to switch to another language. Both choices normally require a lot of work, so developers tend to learn to live without the feature and move on.

But Racket is not JS, it's a Lisp. All Lisps are programmable programming languages[1], but Racket takes it up to eleven with its multitude of tools for language extension (and creation)[2]. How much work would it be to make Racket interpolate strings?

How Racket processes code

To process any code, Racket performs these steps:

  • reading - accomplished by two functions, read and read-syntax. Reading means taking a string and returning an AST[3], which happens to be a normal list of normal symbols, strings, numbers, etc. Such a list can be wrapped in a syntax object, which adds some meta-data to the raw list; you can always get the raw list out of a syntax object with syntax->datum.
  • expansion - all the special forms and macros in the read code are expanded until only the most basic forms (modules, function applications, value literals) are left.
  • compilation - Racket performs byte-code compilation on the expanded code. This is where most of the optimizations are applied.
  • execution - compiled byte-code is handed to the VM for execution. At this stage, on some architectures, further JIT compilation happens, turning (possibly portions of) bytecode into native code.

The first two steps are completely customizable: code expansion with macros, and code reading via hooks into the Racket parser (you can also write a new parser from scratch (or generate it) if you want, but frequently it's not necessary). Symbolically, what happens during reading and expansion can be presented[4] like this:

Extending the parser

The Racket parser is a simple recursive-descent one, and it works by utilizing a readtable - a mapping from characters to functions responsible for parsing what follows a given character. The read and read-syntax functions consult current-readtable, which is a parameter you can override. Adding new syntax to the reader requires just three steps:

  1. create a function for parsing the new syntax into valid Racket forms
  2. get the current-readtable and add your function to it
  3. make the extended readtable from the previous step the new current-readtable
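Conceptually, a readtable is just a dispatch table from characters to parsing functions. The three steps above can be sketched in Python with a plain dict (this is a toy analogy with made-up names, not Racket's actual API):

```python
# A toy "readtable": maps a leading character to a function that
# parses the token starting with that character.

def read_number(token):
    return int(token)

def read_symbol(token):
    return ("symbol", token)

# Step 1 & 2: start from a "current" table...
readtable = {c: read_number for c in "0123456789"}

def read_token(token, table):
    # Dispatch on the first character; fall back to reading a symbol.
    handler = table.get(token[0], read_symbol)
    return handler(token)

# Step 3: ...and install an extended copy as the new table.
extended = dict(readtable)
extended["#"] = lambda token: ("special", token[1:])

print(read_token("42", readtable))   # 42
print(read_token("#foo", extended))  # ('special', 'foo')
```

Extending the reader never touches the existing handlers, which is what makes such extensions composable.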

Interpolated string syntax

We'll get to writing the parsing function in a minute, but first we need to know what it is we're going to parse and what the result of the parsing should be. The exact syntax I have in mind looks like this (with #@ distinguishing interpolated strings from normal ones):

#@"An error occurred at @{file}:@{line}."

Interpolated string compilation

String interpolation, for example in CoffeeScript, is often compiled to an expression which concatenates the chunks of the string, like this:

"An error occurred at #{file}:#{line}."
# would be compiled to:
"An error occurred at " + file + ":" + line + "."

In Racket we have no + operator for concatenating strings, and even if we did, the prefix notation and resulting nesting wouldn't be pretty. The easier and more natural way is to use the string-append procedure, which - conveniently - accepts any number of arguments. The syntax introduced in the previous section would compile down to something like this:

(string-append "An error occurred at " file ":" line ".")

The parsing function

The parsing function needs to take the string form as an argument and output the code as above. Here's the simplest implementation I could think of:

(define (parse str)
  ;; Assume that @{} cannot be nested and that the braces are always matched.
  ;; Obviously, in a real parsing function, these assumptions would need to be validated.
  (define lst (regexp-split #rx"@{" str))
  ;; After splitting we have a list like this: '("An error occurred at " "file}:" "line}.")
  ;; We'll go over it, building a list of expressions to be passed to string-append as we go.
  (define chunks
    (for/fold ([result '()])
              ([chunk (rest lst)])        ; we don't need the first element here
      ;; convert the string into a port
      (let* ([is (open-input-string chunk)]
             ;; call the original read to get the expression from inside the brackets
             [form (read is)]
             ;; read what remains in the port back into a string
             [after-form (port->string is)]
             ;; drop the closing brace (s-index-of is a small string helper,
             ;; returning the index of a substring)
             [after-brace (substring after-form (add1 (s-index-of after-form "}")))])
        ;; ~a is a generic "toString" function in Racket
        (append result `((~a ,form) ,after-brace)))))
  ;; chunks now looks like this: ((~a file) ":" (~a line) ".")
  `(string-append ,(first lst) ,@chunks))

It's dead simple - just a couple of lines - and it glosses over important details, like the possibility of nesting the @{} expressions and syntax-error handling, but it works for simple cases. We can easily test it, like this:

(parse "An error occurred at @{file}:@{line}.")
;; Returns:
'(string-append "An error occurred at " (~a file) ":" (~a line) ".")
;; which is exactly what we wanted!
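For comparison, the same splitting approach can be sketched in Python. This is only a sketch of the algorithm, with the same simplifying assumptions (no nesting, matched braces), and it produces a list of tagged chunks instead of a Racket expression:

```python
def parse(template):
    # Split on "@{"; in each later chunk, everything before the first "}"
    # is an embedded expression, the rest is literal text.
    chunks = template.split("@{")
    parts = [("str", chunks[0])]
    for chunk in chunks[1:]:
        expr, _, rest = chunk.partition("}")
        parts.append(("expr", expr))
        parts.append(("str", rest))
    return parts

print(parse("An error occurred at @{file}:@{line}."))
# [('str', 'An error occurred at '), ('expr', 'file'),
#  ('str', ':'), ('expr', 'line'), ('str', '.')]
```

The Racket version goes one step further by calling read on each expression chunk, so arbitrary expressions (not just names) work.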

The boilerplate code

The hard part ends here - the rest is just boilerplate needed to make Racket aware of our extension to its reader. In Racket, this is often done by defining a new language, which works with the #lang lang-name syntax. In our case, though, we don't need (or want) to create a whole language (it's simpler than it sounds, but it's still a bit more work) - we just change the reader and reuse everything else from some other language.

For situations like these, Racket supports a special form of the #lang line, which looks like this:

#lang reader <reader-implementation-module> <language-the-reader-will-be-used-with>
;; for example:
#lang reader "reader.rkt" racket

Generating a reader module

Let's first define reader.rkt. Writing a whole new reader is of course an option, but it's a lot of unnecessary work, as the generic read and read-syntax[5] need to handle some more complex things, like module declarations, which we don't care about. Instead, we can use a special language for defining readers, which will generate appropriate read and read-syntax implementations for us, provided we supply a single function to it:

#lang s-exp syntax/module-reader
#:language read
#:wrapper1 wrap-read

(require "ext.rkt")

The #:language option takes a function which accepts a string (of what remains after the reader module name on the #lang line) and returns a language name to use. A simple read happens to work here.

The #:wrapper1 option takes a function, which receives another function as input and returns the parsed code. The passed-in function, when invoked, performs a normal read.

Extending and installing a readtable

With #:wrapper1 option, in conjunction with parameterize, it's very easy to install our readtable[6]:

(define (wrap-read do-read)
  (define rt (make-readtable (current-readtable)
                             ;; register our function to be called when #@ is encountered
                             #\@ 'dispatch-macro
                             ;; name of the function to be called
                             read-str))
  (parameterize ([current-readtable rt])
    ;; perform the normal read, with our readtable installed
    (do-read)))

Wrapping parse so that it works inside readtable

The only part missing is the read-str function, which is a bit more complicated. The function passed to make-readtable may be called in two different ways, depending on whether it's used from inside read or read-syntax. The function distinguishes the cases based on the number of arguments it gets, and returns either a syntax object or a simple list, depending on the case. Here's its implementation:

(define read-str
  (case-lambda
    [(ch in src line col pos)
     ;; The caller wants a syntax object. The current position of the `in` port
     ;; is just after the #@ characters, before the opening quote.
     ;; Here we read the next expression (which we assume is a string),
     (let ([literal-string (read in)])
       ;; then we parse the contents of the string and return the result,
       ;; wrapped in a new syntax object. The `in` port is now just after the
       ;; closing quote and the parser continues from there.
       (datum->syntax #f (parse literal-string)))]

    [(ch in)
     ;; The caller wants a simple list. Let's parse the input into syntax and
     ;; transform it to a list (this saves us writing the same code twice).
     (syntax->datum (read-str ch in #f #f #f #f))]))
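This kind of arity-based dispatch (case-lambda in Racket) can be mimicked in Python with variadic arguments. The sketch below is only an analogy with made-up return values, not the Racket API:

```python
def read_str(*args):
    if len(args) == 6:
        # The "read-syntax" case: six arguments, return an annotated result
        # (standing in for a syntax object).
        ch, inp, src, line, col, pos = args
        return ("syntax", inp)
    elif len(args) == 2:
        # The "read" case: reuse the 6-argument variant and strip the
        # annotation, so the logic is written only once.
        ch, inp = args
        return read_str(ch, inp, None, None, None, None)[1]
    raise TypeError("unexpected number of arguments")

print(read_str("@", "input", None, None, 1, 0))  # ('syntax', 'input')
print(read_str("@", "input"))                    # 'input'
```
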

It works!

The reader extension can be used like this:

#lang reader "reader.rkt" racket

(define line 42)
(define file "file-name.rkt")
(displayln #@"An error occurred at @{file}:@{line}.")
;; Displays: An error occurred at file-name.rkt:42.

It also works for more complex expressions out of the box:

#lang reader "reader.rkt" racket
(require racket/date)

(displayln #@"Current date&time: @{(date->string (current-date) #t)}")
;; Displays: Current date&time: Friday, April 28th, 2017 7:57:11pm

This is of course pretty basic as far as string interpolation features go, but it took only about thirty (34, to be precise) lines of code (excluding comments) to build. Adding another twenty lines would be enough to implement error checking, and using one of the parser generators could shorten the code even further.

That's it!

Aside from how easy it is to extend Racket's syntax, one more thing deserves a mention: such syntax extensions are easily composable (as long as they don't try to install handlers for the same entry in the readtable) - you can load them one by one, each extending the readtable.

The language extension and creation tools of Racket don't end there. A robust module system with support for exporting syntax extensions, the powerful syntax-parse, the traditional syntax-rules and syntax-case, many tools for generating parsers and lexers - all these features are geared towards meta-linguistic programming. Racket is a programming language laboratory that allows you to bend it to better suit your needs safely (i.e., ensuring that only your code will be affected). Very few languages or transpilers offer that much freedom with that kind of safety guarantees.

It's an ideal environment for people who know what they need from their language - with Racket you're never stuck, waiting for the next release to support some feature. On the other hand, despite there being a great many helpers and libraries, you need to know at least some basics of programming language creation to do this yourself. The good news is that the extensions written by others are easily installable with a single command of a Racket package manager.

Addendum: doing the same without messing with the reader

A commenter on Hacker News provided an interesting solution using normal macros only. It turns out there's a special macro, called #%datum, which gets wrapped around every literal in the code (when the code is read). By default it's a no-op, but we can override it with arbitrary logic. In this solution there's no need to mark strings for interpolation: all strings are interpolatable by default. Here's the implementation:

(require
  (rename-in racket (#%datum core-datum))
  (for-syntax syntax/parse))

(define-syntax (#%datum stx)
  (syntax-parse stx
    [(_ . x:str) (interpolate #'x)]
    [(_ . x)     #'(core-datum . x)]))

(define-for-syntax (interpolate stx)
  (define re            #rx"@{[^}]+}")
  (define (trim match)  (substring match 2 (- (string-length match) 1)))
  (define (to-stx val)  (datum->syntax stx val))
  (define (datum val)   (to-stx (cons #'core-datum val)))
  (define (source text) (to-stx (read (open-input-string text))))
  (let* ([text     (syntax->datum stx)]
         [matches  (regexp-match* re text)]
         ;; note: regexp-replace* - every @{...} becomes a ~a placeholder
         [template (datum (regexp-replace* re text "~a"))]
         [values   (map (compose source trim) matches)])
    (if (null? matches)
        stx
        (to-stx `(,#'format ,template ,@values)))))

I don't have the time to go over this line by line and explain it right now, but I may write another post featuring this solution in the future.
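Still, the core idea of interpolate - collect the @{...} matches, replace each with a placeholder, then fill the template in - can be sketched in Python with the re module. This mirrors only the idea; the function and its environment argument are made up, and it handles plain variable names only:

```python
import re

PATTERN = re.compile(r"@{([^}]+)}")

def interpolate(text, env):
    # All embedded names, in order of appearance.
    names = PATTERN.findall(text)
    # Replace every @{...} with a placeholder (like ~a in the Racket version),
    # then substitute the looked-up values.
    template = PATTERN.sub("{}", text)
    return template.format(*(env[name] for name in names))

print(interpolate("An error occurred at @{file}:@{line}.",
                  {"file": "file-name.rkt", "line": 42}))
# An error occurred at file-name.rkt:42.
```
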

  • The quote comes from Paul Graham's famous book and is attributed to John Foderaro.
  • A good starting point for learning more would be the Racket is ... article by Matthias Felleisen. At least I assume it's his - it's not signed, but it clearly sits in Matthias's home directory ;)
  • Abstract Syntax Tree, see Wikipedia
  • Taken from a paper Type Systems as Macros by Stephen Chang, Alex Knauth, Ben Greenman
  • Any module which exports these two functions can be used as a reader.
  • In the actual source code the (make-readtable ...) part is extracted into a separate procedure, which makes it easier to compose this syntax extension with others.

Clojure and Racket: Origins

Most programming languages are created with some goals in mind, which the new language tries to meet. Over time, the goals may (but don't have to) shift as languages evolve. Either way, knowing current goals and history of a language should make understanding the bad and good sides of the language easier. I prepared a short introduction to both Clojure and Racket, before getting to the main issue in the following posts.

Racket: in the beginning, there was (PLT) Scheme

Racket started its life in 1994, as an implementation of Scheme language[1]. Scheme is a rather minimal Lisp-1[2] dialect popular with Programming Languages researchers, Computer Science professors, and teachers because of its simplicity and elegance.

Matthias Felleisen initially created a research group with the goal of providing better teaching material for novice programmers. The group was called PLT and, after deciding to write a new language as part of their efforts, they based it on Scheme, creating the first versions of PLT Scheme. That initial goal was pursued for many years, resulting in a couple of books and many papers, but there were other groups of Scheme users, too - the language proved to be a good platform for implementing novel PL concepts, so researchers took a liking to it. From that point on, the design and implementation of what would become Racket tried to meet two goals: to be a beginner-friendly pedagogical environment and to be a powerful, easily extendable general-purpose language.

On the one hand, Racket included delimited continuations[3] and more powerful macro systems[4]; soft typing and eventually a statically typed dialect[5] were implemented in Racket, improving both the language and the state of research. On the other hand, the need for a more beginner-friendly language, with features focused on making teaching and learning at different levels easier, resulted in a graphical IDE and support for restricted and contracted[6] subsets of the primary language. Between the two extremes lies the often overlooked rest: a native-looking, cross-platform GUI toolkit (used to write Racket's IDE), a lot of utility libraries distributed via a central repository[7], and infrastructure like the JIT compiler, garbage collector and so on.

PLT Scheme changed its name to Racket sometime around 2010, when it became apparent that it had evolved way past the Scheme specification and showed no signs of stopping its further development and evolution. Current Racket is the product of seven more years of cutting-edge language research, teaching material preparation[8], adding requested features and abstractions, and implementing even more libraries.

If it sounds like a kitchen sink, that's because it is. Some parts of Racket were explicitly designed, while others evolved on their own. In meeting all these challenges over the years, Racket not only added tools for dealing with them but also perfected the meta-tools used to create those tools, which made it into a perfect kitchen sink: one where implementing new linguistic features is not only possible but easy by design.

Clojure: quite a different story

Rich Hickey wanted a simple, powerful and successful Lisp for a long time. During the 00s such a Lisp arguably didn't exist. Common Lisp was in a decline that had started a decade earlier[9]. Scheme was impractical (because of problems with portability between implementations; Scheme's minimalist approach to the stdlib didn't help either). Other Lisps - apart from the embedded ones, like Emacs Lisp or the dialect used in AutoCAD - hardly had any following at all.

Before writing Clojure, Rich was familiar with Common Lisp and tried integrating CL with Java on a couple of occasions, but ultimately decided to create an entirely new Lisp using the JVM as its runtime environment. His goals were: a symbiotic relationship with the hosting platform, Functional Programming, and concurrency. FP and concurrency implied immutability: almost all native data structures in Clojure are immutable and persistent[10], and the mutable ones only allow mutation in certain, safeguarded contexts. The desire for expressive power resulted in an extended syntax, used for data structure literals, pattern matching[11] and dereferencing shortcuts. The standard library is not too big, because Clojure can use all of the Java stdlib directly; its largest part consists of functions for dealing with collections, an abstraction similar to iterables in Python. The four basic data structures - string, vector, set and map - are all collections and may be operated on by the same set of functions.

In the first couple of years after launch, Clojure's marketing focused on Java interop and concurrency. Clojure indeed offered a lot of options for safe concurrency and safe parallelism (leveraging JVM threads and other constructs, but hiding them behind novel APIs) and was definitely more productive than Java (though comparable to Groovy or Jython, I think). It was quickly recognized as a language well suited for server-side web development. Later, when ClojureScript appeared[12], Clojure steered even more towards web development, now promising performant servers and an almost effort-free (i.e., reusing the server-side code) frontend. It didn't work that well at first: ClojureScript was lacking features, and Clojure to ClojureScript interop wasn't easy to set up. Things improved over the years, and in 2015, according to Wikipedia, 66% of surveyed Clojure users also used ClojureScript.

Over the years, the language became one driven by the community, with Rich serving as its BDFL, analogous to Python's Guido van Rossum. The community produces new versions of the language at 1-2 year intervals; the new features are generally few in number but powerful, with a broad impact on how code is supposed to be written[13]. The tooling and library ecosystem, initially dependent on Java, also improved and matured over the years, resulting - among other things - in a good build tool and package manager[14]. The core language remained relatively small and focused, but its expressivity allowed many features to be added as libraries.

Cross-pollination and differences

As both languages were rapidly evolving around the same time (and are still being developed), some features of one language were implemented in the other. For example, Clojure's core.typed is based on Typed Racket, while Racket's sequences idea is suspiciously similar to Clojure's collections (especially when using a helper library, because stdlib support for sequences is very basic).

That doesn't really make the languages similar, though - they are based on completely different philosophies and were developed very differently. Honestly, I'm tempted to say that Lispiness-1 of the dialects is the only thing they have in common (I'm exaggerating a bit): they offer similar functionality but deliver it in different ways, and the general feel of using them is very different.

Of course, how the language feels when used is entirely subjective, but language designers and developers often optimize languages to "feel natural" or "feel right," and language users often evaluate them based on this vague notion. That is to say: even if two languages are technically equivalent, if they "feel" different, they attract different users, cover different use-cases and specialize in different things.

The differences between Racket and Clojure are just enough to make it worth learning both of them, I think... But, well, as a programming languages nerd I may be a bit biased. Still, it's worth seeing at least some of the capabilities and features they provide - I plan to present a few of them in the third post in the series. But before that, in the next post, I'll tell you how and why my work on a Clojure-based project was a nightmare.


Upgrading Fedora 20 to 25

Apparently, I can't directly upgrade to whichever version I choose, I need to go one by one, doing fedora-upgrade and rebooting in a loop as needed.

I realize that my case is somewhat extreme, but watching the same set of packages getting downloaded five times (~640MB each time) feels wrong somehow.

The good news is that the upgrade, although it took its time, finished successfully in the end. Quite a feat for that large a difference in versions: the default package manager and the system upgrade tools both changed in the meantime, but still managed to produce a working system with minimal fuss.

Damn, I'm getting back to Linux as soon as I find a decent laptop; working on Mac OS - although, with effort, made vaguely OK - is tiring, as I need to cope with all the "oh, it doesn't work that way here" moments...


Better blog generator - with RSS feed!

You can now subscribe to RSS feed for this blog!

RSS feed

As you can probably guess with a quick glance, this blog is self-hosted by me and is the result of "static page generation": there's a bunch of files and a few scripts which output all the needed HTML/CSS/JS to run the site; you just upload it to the server and you're done. This approach is very popular lately and many bloggers do the same.

One - slightly - unusual bit is that I wrote all of the generators and scripts myself, from scratch (using mainly Python). This means, among other things, that I have no pile of already written plugins or modules I could resort to in case I found my generator lacks an important feature...

For a long time I was pretty much the only user of both the generator and the blog, so I didn't mind. Recently, though, quite a few people who said they'd like to read my next post (my prospective readers! Finally, after four years, I have a chance to get my very own readers! ...err, sorry, got a bit too happy there for a moment...) suggested adding an RSS feed to the blog.

I had known nothing about RSS and it took a couple of hours, but I finally added RSS feed generation to my scripts. Now every time I publish a post, a new /statics/feed.xml gets generated and, hopefully, every subscriber will be notified![1] Even better, you will also be notified about post updates (probably)!

Thank you! (Or: Rkt vs. Clj is coming!)

Ok, so I know simply adding RSS feed is hardly important enough to write a post about it, so here is the other part: thank you very much! All of you who upvoted my comment on Hacker News or even sent me an email - only to encourage me to write the promised post, thank you. It's thanks to you that I got motivated enough to finally start writing that post about Clojure and Racket comparison.

Please stay tuned and be patient: it's a lot of work, especially because I want to stay as fair as possible and give as many detailed examples as possible (without copyright-related problems).

So, once again, thank you for expressing your interest in my future post and please stay patient while waiting for me to finish; I'll try my best, on my part, to write it all well and on time!

  • Let me know if it doesn't work for you or your reader! I don't know much about RSS and could have messed something up.

How to quickly get over the parens problem of Clojure syntax.

NOTE: When first starting to use Clojure, get Nightcode. The experience is going to be much better.

It's normal, I think, to get overwhelmed by the sheer amount of parens you need to manage when learning a Lisp. When taken on its own, the syntax really is simple, but introducing another strange thing on top of already unfamiliar concepts doesn't help learning.

Some people will tell you that you will "get used to it" soon enough and that it's just a matter of practice. While there is some truth to these claims, I'm a pragmatist, so I prefer another approach: simply set up a decent environment for working with Lisp before you begin. There are many tools which make reading and writing Lisp code much easier: you need to either configure your editor to enable them or change the editor. There are also some Clojure-specific editors out there; you can simply use one of them (my thoughts on them below).

Main things/features useful when working with Lisps:

  • syntax highlighting for your Lisp (obviously!)
  • automatic indentation (and re-indent) for your Lisp
  • ability to highlight matching paren
  • ability to jump to matching paren
  • configurable syntax coloring and/or rainbow-style coloring of parens
  • ability to wrap a text selection with parens
  • automatically inserting closing paren
  • ability to delete parens in pairs automatically
  • Paredit- or Parinfer-style structural editing
  • auto-completion for module/function names[1]
  • quick access to the docs for a name under cursor
  • "Go to definition..." is good to have, but you can usually make do with grep or "Find in files..." editor command

See the screenshots and a video for visual demonstration of what I'm talking about (click on the image to get a bigger version):

Also see here for a very good general introduction to editing Lisp along with explanation of what Parinfer is.

Clojure specific editors

After writing the above I realized that it would be good to give a couple of examples of beginner-friendly editors, which implement the features mentioned above. To my surprise there are some editors which target Clojure, some of them even maintained. Here's a short summary of what I found out:

Nightcode - I only played with it a little, but I'm impressed by what it can do. I tried using Parinfer some time back and it's much friendlier and easier to use than Paredit[2]. In short: you never need to worry about parens when using Nightcode. Place the cursor where you need it and start typing: the parens will magically appear. It worked out of the box for me on Mac; however, it refused to run on Fedora.

Clooj - inactive since 2012, has nearly none of the useful features, looks ugly and is slow. Forget it.

LightTable - supports many of the things mentioned above as plugins, but they tend to be disabled by default, and enabling them is not as easy as it should be. The plugin-based architecture makes it interesting for polyglot projects, but it needs some configuration to get started, while Nightcode needs none.

Cursive - a Clojure plugin for IntelliJ IDEA from JetBrains. I don't have any of their IDEs installed and so I didn't try it. Its feature list looks decent, though, so if you like IntelliJ this may be the best option for you.

Emacs and CIDER - that's what I use. I'd recommend you only try this route if you already know some Emacs; otherwise it's going to be frustrating for a good while, until you internalize all the strange names and such. CIDER itself is great, though: it integrates with Leiningen, offers inline code evaluation, auto-completion, connecting to a running REPL and so on.

In short: give Nightcode a try if you can, otherwise use Lisp/Clojure plugins for your current editor, like Sublime, IntelliJ or Eclipse. Come over to the Emacs side once you get bored with those.

  • A full "IntelliSense" - context aware auto-completion - is always nice to have, but you can live without it. For a while.
  • I stick with Paredit because I already got used to it and am efficient enough with it; I'd go for Parinfer were I starting to learn about Lisps now.

Scripting Nginx with Lua - slides

I gave a talk at work yesterday about OpenResty, which I think is the easiest way to start scripting Nginx. Here are the slides:


One thing I'd like to add is that I don't trust OpenResty as an app platform yet. The project is being actively developed and the author writes tons of good code, but he is still just one man. This means there is a severe lack of tools and libraries in the ecosystem: they simply don't exist yet.

On the other hand, OpenResty is a painless way to script Nginx with Lua, and that's something you can do with just the modules OpenResty provides. Being able to communicate asynchronously with anything (literally anything that talks over sockets), including databases, external services and so on, is a very nice thing indeed.

There's one caveat, though: you can't use any blocking (non-async) constructs, which includes normal Lua sockets. This is not a problem in your own code, but if you happen to need a Lua library which comes as a C extension, and the functions in that library block, you will probably need to fix the C code. At this point this is purely theoretical, because I haven't encountered any such library yet.


My adventure in X screen locking - slock re-implementation in Nim, part 1

NOTE: This part covers project motivation and setup
NOTE: The full source code is on GitHub

TL;DR or What is this post about?

In short: it's about me exploring - or at least getting into contact with - a couple of interesting things:

  • X Window System APIs via Xlib bindings
  • low-level Linux APIs for elevating privileges and checking passwords
  • GCC options and some C
  • and of course the Nim language


At my work we strongly discourage leaving logged-in accounts and/or unlocked screens of computers. I happen to agree that locking your computer is a good habit to have, so I've had no problems with this rule... Up to the point when I switched from KDE to StumpWM (I wrote about it some time ago, in this, this and this posts) and my trusty Ctrl + Alt + L stopped working.

The only idea that came to my mind was to use the venerable xscreensaver, but: a) I didn't really need any of the 200+ animations (I just wanted a blank screen) and b) I didn't like how the unlock dialog looked[1].

XScreenSaver and its dialog straight from the '80s.

I needed something more lightweight (xscreensaver is ~30k loc of C[2]), simpler and either better looking or without any UI altogether.

slock to the rescue

There's a site called suckless.org where you can find some impressively clean and simple tools, implemented mostly in C. You can read more about their philosophy here, which I recommend as an eye-opening experience. Anyway, among the tools developed there is slock - a very simple and basic screen locker for X. It's 310 lines of code long and it's all pretty straightforward C.

The program suited my needs very well: minimal, fast, and good looking. Well, the last part it achieves by cheating, as slock simply has no interface at all - but this means there's no ugly unlock dialog, so it's all good.

Why Nim?

As I used slock I read through its code a couple of times. It seemed simple and learnable, despite the fact that I knew nothing about X and hadn't used C seriously in 15 years. Fast forward to last week: I finally found some free time and decided to learn some Xlib stuff. Re-implementing slock in Nim looked like a good way of doing it: this may sound a bit extreme, but it allowed me to gain exp points in two stats simultaneously[3] - Xlib usage and Nim knowledge!

Nim is a very interesting language. I'd call it "unorthodox" - not radical, like Lisp or Smalltalk, but also not exactly aligned with Java and the like. One tiny example: the return statement in functions is optional, and if omitted, a function returns its last expression's value. That's a fairly normal way to go about it; however, in Nim you can also use a special variable named result and assign to it to have the assigned value returned. It looks like this:

  proc fun1() : int =
    return 1

  proc fun2() : int =
    1

  proc fun3() : int =
    result = 1

  assert fun1() == fun2()
  assert fun2() == fun3()

At a first glance this may look strange, but consider this in Python:

  def fun():
      ret = []
      for x in something:
          ret.append(x)
      return ret

It's practically an idiom, a very common construct. Nim takes this idiom, adds some sugar, adapts it to the world of static typing and includes it in the language itself. We can translate the above Python to Nim with very minor changes[4]:

  proc fun4() : seq[int] =
    result = @[]
    for x in something:
      result.add(x)
As I said, it's not a groundbreaking feature, but it's nice to have, and it shows that Nim doesn't hesitate much when choosing language features to include. As a result, Nim has many such conveniences, which may be seen both as a strength and as a weakness. While they make common tasks very easy, they also make Nim a larger language than some others. That's not bad in itself; rather, how hard the features are to remember and use depends on how well they fit together. Nim manages well in this area, and it most definitely doesn't have C++'s backwards-compatibility baggage, so I think even Pythonistas with a taste for minimalism will be able to work with Nim.

Being Nimble - the project setup

Many modern languages include some kind of task runner and package manager, either as part of the standard distribution or as downloadable packages. Nim has Nimble, which takes care of installing, creating and publishing Nim packages. Assuming that you have Nim already installed[5], you can install Nimble with:

  $ git clone https://github.com/nim-lang/nimble.git
  $ cd nimble
  $ git clone -b v0.13.0 --depth 1 https://github.com/nim-lang/nim vendor/nim
  $ nim c -r src/nimble install
  $ export PATH="$PATH:~/.nimble/bin/"

Note the addition to the PATH variable: ~/.nimble/bin/ is where Nimble installed itself and where it will place other binaries it installs. Make sure to have this directory in your PATH before working with nimble.

Creating a project

Creating a project is easy, similar to npm init:

  $ mkdir slock
  $ cd slock
  $ nimble init
  In order to initialise a new Nimble package, I will need to ask you
  some questions. Default values are shown in square brackets, press
  enter to use them.
  Enter package name [slock]:
  Enter intial version of package [0.1.0]:
  Enter your name [Piotr Klibert]:
  Enter package description: Simplest possible screen locker for X
  Enter package license [MIT]:
  Enter lowest supported Nim version [0.13.0]:

  $ ls
  total 4.0K
  -rw-rw-r--. 1 cji cji 188 13/02/2016 01:02 slock.nimble
  $ cat slock.nimble
  # Package

  version       = "0.1.0"
  author        = "Piotr Klibert"
  description   = "Simplest possible screen locker for X"
  license       = "MIT"

  # Dependencies

  requires "nim >= 0.13.0"

  $ mkdir src
  $ touch src/slock.nim

The generated slock.nimble is a configuration file for Nimble; it's written in NimScript, which looks like a very recent development in Nim and replaces the previous INI-style config files. This means that many examples and tutorials on the Internet won't work with this format. The most important difference is the format of the dependencies list for your app: it now has to be a seq. For example, to add the x11 library to the project:

  requires @["nim >= 0.13.0", "x11 >= 0.1"]

The dependencies should be downloaded by Nimble automatically, but you can also download them manually, as in the example below. You'll notice that I also install the c2nim package - I will say more about it later.

  $ nimble install c2nim x11
  Searching in "official" package list...
  Downloading https://github.com/nim-lang/c2nim into /tmp/nimble_18934/githubcom_nimlangc2nim using git...
  Cloning into '/tmp/nimble_18934/githubcom_nimlangc2nim'...
  ...etc, etc, etc...

Workflow and tooling

Nim is a compiled language, but working with it proved comparable to working with a dynamic language, mainly thanks to type inference and a blazingly fast compiler. You can omit type declarations where they are obvious from the context and Nim will deduce the correct type for you. It's not whole-program type inference like in OCaml, but rather local type inference as used in Scala or modern C++ and Java. Even with this limitation it's immensely useful and reduces code verbosity by a lot.

Compiler speed is important, because it encourages frequent testing. If compilation is going to take a long time you tend to "batch" changes in your code together and only compile and run once in a while instead of after every single change. This, in turn, makes it harder to find and fix regressions if they appear. Dynamic languages work around this issue by rejecting compilation step completely, at the cost of run-time safety and performance. Nim - which is similar to Go in this respect - makes compilation effortless and nearly invisible. The command for compile-and-run looks like this:

  $ nim c -r module.nim

The problem with this command is that it doesn't know about Nimble and dependencies, so in reality I used slightly different commands:

  $ nimble build && sudo ./slock

  # or, if you need to pass some additional parameters to Nim compiler:
  $ nimble c --dynlibOverride:crypt --opt:size --passL:"-lcrypt" -d:release src/slock.nim  -o:./slock && sudo ./slock

While Nim's compiler and Nimble are very good tools, they're not enough to work comfortably on more complex codebases. Nim acknowledges this and provides a couple of additional tools for working with code, like nimsuggest. However, nimsuggest is rather new and it SEGFAULTed on me a couple of times[6]. I used it via - of course - Emacs with nim-mode, and again I encountered a couple of irritating bugs, like frequent "Mismatched parens" errors when trying to jump a word or expression forward. Still, when nimsuggest works, it does a good job; its "Go to definition" feature in particular is very helpful.

Nim's definitive source of documentation is The Index, which works surprisingly well as a reference. Ctrl + f works just as well as the search boxes other documentation sites provide. Nim docs are also nice in that they link to the source: you get a link to the function's implementation beside its documentation. I like this trend and I'm happy that more and more documentation is like this - a quick peek at the source can sometimes save hours of unneeded work.

In this project I had to work with Xlib, and it turns out it's documented in man pages, which were already installed on my system. I don't remember installing them, so maybe they come with X by default on Fedora. Anyway, as an Xlib tutorial I used the Xlib Programming Manual, and for reference I simply used the man pages: type, for example, man XFreePixmap and you get the XCreatePixmap (3) man page. The same is true for Linux/POSIX functions - try, for example, man getpwuid.

That's it, for now

In the next part I'm going to show some Nim code, translated - both manually and automatically - from C. I'll focus on Nim language features that C doesn't have and show how they can be used to write shorter, safer and more readable code. In the last part I'm going to write about Nim's interop with C and various ways of connecting the two, using Xlib as an example.

  • And it seems that JWZ doesn't like people modifying the looks of this dialog, so I didn't even try.
  • Very well written C.
  • Every power gamer will understand how nice this is!
  • But note the lack of unnecessary return statement in Nim.
  • I recommend installing from GitHub - it's always good to have the sources for your compiler and stdlib around, and you won't get them if you install just the binaries. The master branch is stable; check out the devel branch for the in-development code.
  • This might be because of my own incompetence, of course.

Python interoperability options

Recently I was writing a little set of scripts for downloading and analyzing manga meta-data, for use in my Machine Learning pet project. True to polyglot style, the scripts were written in two languages: Python and Elixir. Elixir is a language that runs on the Erlang VM (see my previous blog post on it, Weekend with Elixir), and Python needs no introduction, I think. I used Elixir for the network-related stuff, and Python with pandas for analysis.

The biggest problem with such a setup (and with the polyglot approach in general) is passing the data around between the languages. At first I just used a couple of temporary JSON files, but then I remembered a project I once saw, called ErlPort, which is a Python implementation of Erlang's external term protocol. In short, it enables seamless integration between Python and Erlang and - by extension - Elixir. ErlPort not only lets you serialize data, but also lets you call functions across language boundaries. Besides Python, it also supports Ruby.
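For comparison, the temporary-JSON-file handoff really is as simple as it sounds; here's a minimal sketch of the Python side (the file and field names are made up for illustration - the other language just reads and writes the same file):

```python
import json
import tempfile

# Write records for the other language to pick up...
records = [{"title": "Berserk", "chapters": 364}]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(records, f)
    path = f.name

# ...and later read back whatever the other side produced.
with open(path) as f:
    loaded = json.load(f)

print(loaded[0]["title"])  # prints "Berserk"
```

It works, but every exchange goes through the disk and you only get JSON's types - which is exactly what ErlPort improves on.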

In general, using ErlPort was a success in my case, but it's limited as it can only connect three languages. I'd normally say it's good enough and leave it, but a couple of days later, in "the best of Python lib of 2015" thread on HN, I discovered another project called python-bond, which provides similar interoperability features for Python and three more languages: PHP, Perl, JavaScript. The two libraries - ErlPort and python-bond - make it embarrassingly easy to integrate a couple of different languages in a Python project. Along with Cython, which lets you easily call C-level functions, it makes Python a very good base language for Polyglot projects.

The hardest part of polyglot style, and polyglot architecture in particular, is deciding which language should be the base one - the one which runs and integrates the rest. In my case, I ended up with most of the control flow inside Elixir, because of the very neat supervision tree feature it provides (thanks to Erlang/OTP). It remains to be seen if Elixir is capable of filling the role of a main implementation language in a polyglot project. I'll make sure to write a post about it in the future.


Jumping to next/previous occurrence of something at point via iedit

For a very long time I used plain search to get to the next occurrence of something. However, it wasn't very comfortable. For this to work, I needed to:

  • move point (cursor) to the beginning of the word I'd like to jump to
  • press C-s
  • press C-w, more than once if necessary, to copy the current word into the search box
  • keep pressing C-s to search for occurrences

I found a better way. iedit is an Emacs mode for editing all the occurrences of something at once. I alternate between iedit and multiple-cursors mode when I need to do something simple to a word in many places in the code. However, iedit also provides an iedit-next-occurrence command, which by default is bound to TAB.

Using iedit I only need to:

  • move point to anywhere inside a word
  • press C-; for iedit
  • press TAB to jump to the next and S-TAB (shift + tab) to jump to the previous occurrence

One more feature of iedit I sometimes find useful is the toggle-unmatched-lines-visible command. It duplicates occur mode functionality a bit, but it hides unmatched lines in your current buffer. This makes it easy to quickly switch between a global occurrence list and a local occurrence context.


Weekend with Elixir

So I was playing with Elixir in the last couple of days. I didn't write anything significant, but I scribbled enough code, I think, to get a feel for the language. I know Erlang rather well, so I suppose most "pain points" of Elixir were non-issues for me. I had already mastered things like async message passing, OTP behaviors, pattern matching, recursion-as-iteration and the like. Thanks to this I was able to concentrate on Elixir itself, instead of learning many unfamiliar concepts.

The syntax

One highly visible difference from Erlang is the syntax. I don't hate Erlang's syntax; it's small and clean, probably owing to its Prolog roots. However, it's not like it couldn't be improved. Elixir has a completely different syntax, closer to Ruby than to anything else I know of, and it turns out it works well.

For Erlangers, some interesting points are:

Dedicated alist syntax. Because function arity is strictly enforced, it's common for functions to accept a list of two-tuples (pairs) as an options list. The first element of each tuple is a key, and the second is its value. It's natural to recurse and pattern-match over such a structure, so it's used a lot. Elixir comes with special syntax for creating these, as long as the key is an atom (which it almost always is). It looks like this:

[key: val,
 key2: val2]

It's just a bit of syntactic sugar, you could equivalently write:

[{:key, val},
 {:key2, val2}]

and it would look similar in Erlang. The sugared version is easier to read thanks to the reduced number of parentheses, which clutter the non-sugared version.
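In Python terms, such an options list is just a list of (key, value) pairs consumed head-first, roughly the way Erlang or Elixir code pattern-matches over it (a sketch; the function and option names are made up):

```python
def parse_opts(opts, acc=None):
    # Consume an Erlang-style options list (a list of (key, value)
    # pairs) by recursing on head and tail, the way Erlang/Elixir
    # code typically pattern-matches over it.
    if acc is None:
        acc = {}
    if not opts:
        return acc
    (key, val), rest = opts[0], opts[1:]
    acc[key] = val
    return parse_opts(rest, acc)

print(parse_opts([("timeout", 30), ("retries", 3)]))
# prints {'timeout': 30, 'retries': 3}
```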

Elixir syntax is also very consistent, with only a couple of pitfalls to look out for (mainly, the comma before do when using the abbreviated style). Every block and every instruction (macro call or special form) follows the same rules. For example, you can use both abbreviated and normal block syntax everywhere a block is required. The block syntax looks like this:

def fun() do
  ...
end

or you can write it like this:

def fun(), do: (...)

You may get suspicious, as I did, seeing the second form. As it turns out, it's exactly what you think it is: the same syntax sugar that alists use. So you can equivalently write:

def fun(), [{:do, val}]

It's nice, because it works uniformly for every call, as long as the alist is the last parameter of the call - no matter whether it's a macro or a function call. Ruby has the same thing in the form of hash sugar.

Other syntactic features are much less interesting. For example, in Erlang all variables have to begin with an upper-case letter, while Elixir reverses this and makes variables begin with lower-case letters. This means that atoms need a : prefix. What's fun is that capitalized identifiers are also special in Elixir: they are read as atoms (with an Elixir. prefix). Things like this are very easy to get used to. Also, in most situations it's optional to enclose function arguments in parentheses, and you can call zero-arity functions by just mentioning their name. To get a reference to a function you need to "capture" it with a &[mod.]func_name/arity construct. That's also easy to get used to, and not that different from Erlang's fun [mod:]func_name/arity.

The module system

Erlang includes a module system for code organization. In Erlang a module is always a single file, and the file name must match the module name. Also, the namespace for modules is flat, which means you can have conflicts if you're not careful. Elixir does away with the first restriction entirely and works around the second with a clever hack. You can define many modules inside a single file, and you can nest modules. It looks like this:

defmodule A do
  defmodule B do
    ...
  end
end

Inside the A module you can refer to B by its short name, but outside you have to use A.B (which is still just a single atom!). Moreover, you can unnest the modules, too (the order of the modules doesn't matter):

defmodule A do
  ...
end

defmodule A.B do
  ...
end

So, the modules are still identified by a simple atom in a flat namespace, but they give an illusion of being namespaced thanks to a convenient convention. By the way, this makes calling Elixir code from Erlang not too hard:

'Elixir.A.B':some_function().
It's not that pretty, but it works well.

The other part of the module system is the ability to export and import identifiers into a module's scope. In Erlang you can do both, but it's limited, because there is no equivalent of Python's import ... as .... Elixir, on the other hand, provides both import and alias macros. import works by injecting another module's function names directly into the current lexical scope (another difference from Erlang, where you can only import at module level), and alias lets you rename a module you'd like to use. For example:

alias HTTPoison.Response, as: Resp

makes it easy to refer to the nested module without writing the long name every time. It also works in the current lexical environment. There's also a require form, meant for importing macros (similar to how -include is used in Erlang).

These two features make modules in Elixir cheap. You're likely to create heaps more modules in Elixir than you would in Erlang. That's a good thing, as it makes organizing the code-base easier.

There is more to the module system, like compile-time constants and a use directive, which make it even more powerful (and sometimes a bit too magical).

The tooling

Elixir comes with Mix, a task runner similar to Grunt or Gulp in the JS/Node.js land. Every library can extend the list of available tasks by defining an appropriate module. One such task provider is Hex, a package manager, which is also built in.

The two work together to make pulling in dependencies (locally, like npm or virtualenv do it) easier than most other solutions I've seen. You can install packages from the central Hex repository or directly from GitHub. No manual fiddling involved. In Erlang this is also possible, but it's not as streamlined or convenient. Starting a new project, installing dependencies and running a REPL is ridiculously easy:

$ mix new dir
...edit dir/mix.exs, deps section...
$ iex -S mix

You can do mix run instead to run the project itself without a REPL. Every time your deps change, the change is automatically picked up and your deps get synced. Mix also automatically recompiles your source code if it changes; in Erlang I more than once wondered why the heck my changes weren't visible in the REPL, only to realize I hadn't compiled and loaded them. Of course, in the REPL you have both compile and load commands available, along with reload, which compiles and loads a module in one go. Mix is also used for running tests, and unit-test support is built into the language with the ExUnit library, by the way.

The REPL itself is... colorful. That's the first difference I noticed compared to the Erlang REPL. It supports tab-completion for modules and functions, same as Erlang, but it also supports a help command, which displays docstrings for various things. This is useful, especially because the Elixir devs seem to like annotating modules and functions with docstrings. Everything else works just as in Erlang, I think.

The standard library

The library is not that big, and it doesn't have to be, because there's all of Erlang underneath. Instead it focuses on delivering a set of convenient wrappers which follow consistent conventions. There's an Enum module, which groups most collection-related operations, a String module for working with binaries (strings in Elixir are binaries by default), and so on.

I said the wrappers follow consistent conventions. This is linked to the use of the pipe macro (known as thread-first or -> in Clojure and |> in F#): wherever possible, functions take the subject of their operation as the first argument, with the other arguments following. This makes it easy to chain calls with the macro, which lets you unnest calls:

    some_string
    |> String.split
    |> Enum.take(10)
    |> Enum.to_list
    |> Enum.join("\n")

Nothing fancy, and it has its limitations, but it does make a difference in practice. It's a macro, so it's not as elegant as it would be in some ML, but it still works, so who cares? Of course, the problem is that not every function follows the convention, which makes you break the chain of calls for it. It also doesn't work with lambdas, which is a pain in the ass, because it doesn't let you (easily) do something like flip(&Mod.func/2). You could do it with macros, but that's a direction I have yet to explore.
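Both ideas are easy to sketch in Python terms - a subject-first pipe and the flip I wish the pipe could do (pipe and flip here are hypothetical helpers, not real Elixir or Python APIs):

```python
from functools import reduce

def pipe(value, *funcs):
    # Thread `value` through each function left to right,
    # mimicking Elixir's |> for subject-first functions.
    return reduce(lambda acc, f: f(acc), funcs, value)

def flip(f):
    # Swap the first two arguments, so a subject-last function
    # can join a subject-first pipeline.
    return lambda a, b: f(b, a)

result = pipe(
    "a b c d e",
    str.split,                 # -> ["a", "b", "c", "d", "e"]
    lambda ws: ws[:3],         # take 3
    lambda ws: "\n".join(ws),  # join with newlines
)
print(result)  # prints three lines: a, b, c

# flip in action: subtraction with its arguments swapped.
print(flip(lambda a, b: a - b)(2, 10))  # prints 8
```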

Overall, Elixir standard library is rather compact, but provides most of the tools you'd expect, along with convenient OTP wrappers. And if you find that something is missing, calling Erlang functions is really easy (tab-completion in the shell works here, too):

> :erlang.now()
{1448, 491309, 304608}

The macros

Elixir sports a macro system based on quasiquote and unquote, known from Lisps. They are hygienic by default, work at compile time and can expand to any (valid) code you want. A lot of Elixir itself is implemented with macros.

You can treat macros simply as functions which take unevaluated code as arguments and return the final piece of code that's going to be run. I didn't investigate much, but one macro I found useful was the tap macro: https://github.com/mgwidmann/elixir-pattern_tap You can read its code - it's less than 40 lines, with quite a bit of whitespace thrown in. This is also why I believe a flip-like macro should be possible and easy to make.

I will probably expand on macros and other meta-programming features of Elixir after I play with it some more.

Structures and protocols

In Erlang you have records, but they are just a bit of (not that good-looking, by the way) syntactic sugar on top of tuples. From what I understand, records (called structures) in Elixir are syntactic sugar over maps (the ones added in Erlang/OTP 17); however, they also act as a building block for protocols, which provide easy polymorphism for the language. Actually, it's still the same as in Erlang, where you can pass a module name along with some value to have a function from that module called on the value. The difference is that in Erlang you need to do this explicitly, while protocols in Elixir hide all this from the programmer.

First, you define the protocol itself: it has a name and a couple of functions. Then you implement the functions for a structure of your choosing (built-in types, like Lists or even Functions, are included too). Then you can call the protocol functions (treating the protocol name as a module name) on all the structures which implement the required functions. I think the protocols are nominal rather than structural, so it's not enough to implement the functions in the same module as the structure; you need to explicitly declare them as an implementation of the protocol.

Protocols are used for the Dict module, for example, which allows you to treat maps, hashes, alists and other things as simple dictionaries, letting you manipulate them without worrying about the concrete type. However, the dispatch happens at run-time, so it's better to use implementation-specific functions when there's no need for polymorphism.
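Python's functools.singledispatch gives a feel for the same run-time dispatch mechanism - a loose analogy, not how Elixir actually implements protocols (the function to_pairs here is made up):

```python
from functools import singledispatch

@singledispatch
def to_pairs(data):
    # Fallback for types with no "implementation" registered -
    # roughly Elixir's Protocol.UndefinedError situation.
    raise TypeError(f"no to_pairs implementation for {type(data).__name__}")

@to_pairs.register
def _(data: dict):
    # "Implementation" for maps.
    return sorted(data.items())

@to_pairs.register
def _(data: list):
    # "Implementation" for alists: already a list of pairs.
    return sorted(data)

print(to_pairs({"b": 2, "a": 1}))      # prints [('a', 1), ('b', 2)]
print(to_pairs([("b", 2), ("a", 1)]))  # prints [('a', 1), ('b', 2)]
```

As in Elixir, the caller manipulates either shape through one name, and the dispatch on the concrete type happens at run time.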

That's it

I mean, I only played with Elixir for a single weekend; I still have quite a few corners to explore. This is why I refrain from putting my perceived "cons" in this post - I still need to get used to the Elixir way of doing things some more.

I have already decided, however, to use Elixir instead of Erlang in my next project. At a glance Elixir looks like it improves on Erlang in some areas, while not making it much worse in others. I have high expectations for macros that are actually usable (Erlang has parse transforms, but...).


Hidden Emacs command that's a real gem

I have no idea how it happened, but somehow for the last 3 years I missed a very useful command, called finder-by-keyword.

The command is almost undocumented, there's only a short explanation of what the module (finder.el) does:

;; This mode uses the Keywords library header to provide code-finding
;; services by keyword.

By default it's bound to C-h p. It lets you browse built-in packages by keyword, like "abbrev", "convenience", "tools" and so on. It's great for discovering packages you didn't know existed!

The module seems to date from 1992, though, which makes it ignore all the libraries installed via package.el. It shouldn't be that hard to make it search all the package directories, too. Actually, that's my main problem with package.el - it only provides a simple, flat list view of the packages. This little tool is much better for browsing package lists, and it's even already written.


A simple Lens example in LiveScript

After reading Lenses in Pictures I felt rather dumb. I understood what the problem was, but couldn't understand why all the ceremony around the solution. Then I looked at an OCaml Lens library and became enlightened: it was because Haskell! Simple.

Anyway, the code looks like this:

z = require "lodash"            # `_` is a syntactic construct in LS
{reverse, fold1, map} = require "prelude-ls"

pass-thru = (f, v) --> (f v); v
abstract-method = -> throw new Error("Abstract method")

# LensProto - a prototype of all lens objects.
LensProto =
    # essential methods which need to be implemented on all lens instances
    get: abstract-method (obj) ->
    set: abstract-method (obj, val) ->
    update: (obj, update_func) ->
        (@get obj)
        |> update_func
        |> @set obj, _

    # convenience functions
    add: (obj, val) ->
        @update obj, (+ val)

# Lens constructors
make-lens = (name) ->
    LensProto with
        get: (.[name])
        set: (obj, val) ->
            (switch typeof! obj
                | \Object => ^^obj <<< obj
                | \Array  => obj.slice 0  )
            |> pass-thru (.[name] = val)

make-lenses = (...lenses) ->
    map make-lens, lenses

# Lenses composition
comp-lens = (L1, L2) ->
    LensProto with
        get: L2.get >> L1.get
        set: (obj, val) ->
            L2.update obj, (obj2) ->
                L1.set obj2, val

# Lensable is a base class (or a mix-in), which can be used with any object and
# which provides two methods for obtaining lenses for given names. The lens
# returned is bound to the object, which allows us to write:
#   obj.l("...").get()
#   obj.l("...").set new_val
# instead of
#   make-lens("...").get obj
#   make-lens("...").set obj, new_value
Lensable =
    # at - convenience function for creating and binding a lens from a string
    # path, with components separated by slash; for example: "a/f/z"
    at: (str) -> @l(...str.split("/"))

    l: (...names) ->
        # create lenses for the names and compose them all into a single lens
        lens = reverse names
            |> map make-lens
            |> fold1 comp-lens

        # bind the lens to *this* object
        lens with
            get:  ~> lens.get this
            set: (val) ~> lens.set this, val

to-lensable = (obj) -> Lensable with obj

Some tests for the code:

# Tests

o = Lensable with
        prop:
            bobr: "omigott!"
            dammit: 0

[prop, dammit] = make-lenses "prop", "dammit"
prop-dammit = comp-lens dammit,  prop

console.log z.is-equal (prop.get o),
    { bobr: 'omigott!', dammit: 0 }

console.log (prop-dammit.get o) == 0

console.log z.is-equal (prop-dammit.set o, 10),
    { prop: { bobr: 'omigott!', dammit: 10 } }

prop-dammit
    |> (.set o, "trite")
    |> (.l("prop", "bobr").set -10)
    |> z.is-equal { prop: { bobr: -10, dammit: 'trite' } }, _
    |> console.log

out = o
    .at("prop/bobr").set "12312"
    .at("prop/argh").set "scoobydoobydoooya"
    .at("prop/lst").set [\c \g]
    .at("prop/dammit").add -10
    .l("prop", "lst", 0).set \a
    .l("prop", "lst", 2).set \a

console.log z.is-equal out, {
    prop: {
        bobr: '12312', dammit: -10,
        argh: 'scoobydoobydoooya',
        lst: ["a", "g", "a"]}}

out = o
    .at("prop/bobr").set "12312"
    .at("prop/argh").set "scoobydoobydoooya"
    .at("prop/dammit").add -10

console.log z.is-equal out,
    { prop: { bobr: '12312', dammit: -10, argh: 'scoobydoobydoooya' } }

transform =
    (.at("prop/bobr").set "12312") >>
    (.at("prop/argh").set "scoobydoooya") >>
    (.at("prop/dammit").add -10)

console.log z.is-equal  (transform o),
    { prop: { bobr: '12312', dammit: -10, argh: 'scoobydoooya' } }

This showcases many of the ways you can use lenses in LiveScript. LS has many ways of creating functions, and it has syntax for piping and composing functions, so it's a natural fit for everything FP-looking. LS is not "curried by default" like Haskell or OCaml, but it gives you partial application syntax and allows defining curried functions. It's much like Scala in this regard.

Anyway, this is what lenses are supposed to do - they support functional updates over data structures. They should offer get, set and update methods, and there should be a compose operator for them. And that's all - you can read the linked OCaml implementation to see that it's really that simple. Most of that implementation is convenience methods for creating lenses for various OCaml structures; the core code is really short.
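The core really is tiny in any language; here's a Python sketch of the same get/set/update/compose shape (the dict-of-functions representation and the field names are just for illustration):

```python
def make_lens(key):
    # A lens is just a getter plus a copy-on-write setter.
    return {
        "get": lambda obj: obj[key],
        "set": lambda obj, val: {**obj, key: val},
    }

def comp_lens(outer, inner):
    # Compose: focus `inner` within whatever `outer` focuses on.
    return {
        "get": lambda obj: inner["get"](outer["get"](obj)),
        "set": lambda obj, val: outer["set"](
            obj, inner["set"](outer["get"](obj), val)
        ),
    }

def update(lens, obj, f):
    # update = get, apply the function, set the result back.
    return lens["set"](obj, f(lens["get"](obj)))

prop_dammit = comp_lens(make_lens("prop"), make_lens("dammit"))
o = {"prop": {"bobr": "omigott!", "dammit": 0}}

print(prop_dammit["get"](o))  # prints 0
print(update(prop_dammit, o, lambda x: x + 10))
# prints {'prop': {'bobr': 'omigott!', 'dammit': 10}}
print(o)  # the original object is unchanged
```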


Earlier posts