The Right Lang for the Job

Exploring the abyss of non-mainstream programming languages

I finally made the flashcards!


For a few years now I've been thinking of using "spaced repetition" for learning new things and, crucially, retaining existing knowledge over the long term. My memory doesn't get any better with age, and it's probably going to get progressively worse as time goes by. I thought about countermeasures, and the "flash cards" idea seemed like a good bet. It works like this:

First, you prepare a bunch of "cards". You take some post-it notes and write a question on one side and an answer on the other. After you assemble a "deck" of such cards, you need to review them periodically. You take a card from the deck and read the question. Then you try to answer it. Then you flip the card and compare your answer with the one written down. Finally, you do some complicated calculations to assess how well you remember this topic and when to look at the card again in the future.
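Those "complicated calculations" are usually some variant of the SM-2 algorithm: answering well stretches the interval until the next review, failing resets it. A minimal Lua sketch of the idea (not the exact formula any particular tool uses) looks roughly like this:

```lua
-- SM-2-style scheduling sketch: `card` carries an ease factor (starting at 2.5),
-- a repetition count, and the current interval in days; `grade` is the
-- self-assessed answer quality on a 0-5 scale.
local function review(card, grade)
  if grade >= 3 then
    if card.repetitions == 0 then
      card.interval = 1
    elseif card.repetitions == 1 then
      card.interval = 6
    else
      card.interval = math.floor(card.interval * card.ease + 0.5)
    end
    card.repetitions = card.repetitions + 1
  else
    -- failed recall: schedule the card again soon, keep the accumulated ease
    card.repetitions = 0
    card.interval = 1
  end
  -- ease drops when recall felt hard, but never below 1.3
  card.ease = math.max(1.3, card.ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
  return card.interval
end
```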

Having said that "I've been thinking" about it for years, it should be obvious that something in the process didn't really work for me. I attempted this many times, but I just couldn't get past that hurdle. That step, my fated enemy, was obviously this:

First, you prepare a bunch of "cards".

However, even if I somehow managed to best it, the next one - doing something consistently over a long stretch of time... It's simply impossible, a no is a no. Not without software of some kind.

It's not very surprising, then, that I decided to build such software. But before the software, I needed data to make the cards from. Around 6 months ago I asked on GitHub for programming-idioms.com's dataset, and the author generated a JSON dump for me (thanks a lot!). I could now make cards en masse, with a description and source code on each. I didn't have a way to display them, however. I considered coding the display part in one of two ways:

  1. Emacs route: new frame, run-at-time
  2. Awesome route: new "wibox" with Pango markup-based widget

Emacs wouldn't be a bad choice here, as it already supports syntax highlighting, a major requirement for displaying cards. On the other hand, at any given moment I can have 0 to 6 Emacsen open, and the scheduling part of the solution would need to account for that. Both edge cases (0 vs multiple) are problematic, as one would make it impossible to run any Elisp code without additional setup, and the other would require some kind of inter-process synchronization. So, a pain either way.

My window manager, on the other hand, has no such problems: there's exactly one instance at all[1] times, and it's always there, unless I'm doing something outside of X, which I don't.

Enough of a backstory, let's start with the technical details. I use Awesome as my window manager[2]. Awesome is written in a mixture of C and Lua. Its homepage states that:

awesome has been designed as a framework window manager. It's extremely fast, small, dynamic and heavily extensible using the Lua programming language.

awesome provides a Lua API structured as a set of classes (emulated with Lua's prototypal inheritance). The most prominent among these are widgets and wiboxes - boxes for widgets. Wiboxes are X windows, I think, and widgets are basically chunks of drawing logic. Widgets draw themselves on Cairo surfaces and are displayed inside wiboxes. An event system based on signals chains these elements together.
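To make this concrete, a card-displaying widget in rc.lua would look more or less like the snippet below; the geometry, markup, and click behavior are placeholders, not the configuration I actually ended up with:

```lua
local wibox = require("wibox")

-- a wibox is a free-floating X window that hosts widgets
local card_box = wibox({
    width   = 400,
    height  = 200,
    ontop   = true,
    visible = false,
})

-- a textbox widget renders Pango markup onto a Cairo surface
local card_text = wibox.widget.textbox()
card_text:set_markup("<b>Question:</b> reverse a string in Lua")
card_box.widget = card_text

-- signals chain the pieces together: flip the card on click
card_text:connect_signal("button::press", function()
    card_text:set_markup("<b>Answer:</b> <tt>string.reverse(s)</tt>")
end)

card_box.visible = true
```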

The idea is nice: Cairo seems to be the "industry standard" in 2d graphics and has Pango for text rendering, and Lua is a dynamic scripting language with a few interesting capabilities.

However, when I tried actually writing the thing, I ran into two major problems: a small number of built-in widgets, and the dynamism of Lua.

The former is self-explanatory: it's always better to reuse than to write from scratch, if possible. If you're used to the wealth of widgets that GTK & co provide, getting by on a fraction of that is going to be hard.

The problems with Lua are more complex. awesome has, in general, good documentation, but the API surface is large, and the lack of contributors means that many things are undocumented. At that point you try to read the source, but to do that, you need to find where something is defined. You search the codebase and get nothing at all as a result. Turns out, the name you're looking for is composed from parts at runtime. Ok, so let's attack it from the other side: let's use introspection at runtime to dump the shapes of various objects. Then you realize that the debug info for all methods points to a single line of code... the one where the decorator used for defining methods is itself defined. Extracting a class hierarchy from the running system is even less feasible: some methods are injected directly into objects instead of being inherited from a class, which is a problem on its own, but the fact that there's literally no difference between classes and objects - it's all tables anyway - combined with the mismatch between classes and metatables, makes it hopeless.
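For reference, the runtime-dumping attempt boils down to something like this generic snippet (not the exact script I used); it's also where the "every method points to one line" problem shows up, in `linedefined`:

```lua
-- Walk an object's fields and report where each function was defined.
local function dump_shape(obj, seen)
  seen = seen or {}
  if seen[obj] then return end
  seen[obj] = true
  for key, value in pairs(obj) do
    if type(value) == "function" then
      local info = debug.getinfo(value, "S")
      -- for awesome's objects this tends to point at the method-defining
      -- helper, not at the method body
      print(string.format("%-25s function  %s:%d", key, info.short_src, info.linedefined))
    else
      print(string.format("%-25s %s", key, type(value)))
    end
  end
  -- follow the metatable's __index chain, where "inherited" methods live
  local mt = getmetatable(obj)
  if mt and type(mt.__index) == "table" then
    print("-- via metatable __index:")
    dump_shape(mt.__index, seen)
  end
end
```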

It's not bad to have a dynamic system. However, looking at it only through its dead body (a Gilad Bracha reference: dead programs and pathologists) is tragic. Smalltalks are as dynamic as Lua, but they provide all the tools you need to find, read, execute, and debug the code while the system is running. Neither Lua - a minimalist language, leaning towards embedding as its main use case - nor awesome provides any such tools. What you get is the source code on disk and the REPL (fortunately, at least the REPL is there).

I came up with a way to improve the situation: if I could get the awesome API covered as Haxe externs, I would have a capable macroassembler for Lua, with a gradual type system and good tooling. The context-sensitive completion, go-to-definition, and find-references tools that the Haxe compiler provides are good enough for my purposes. Creating the externs was itself a problem, but one I only needed to solve once, so I tried doing that.

The first thing I tried was writing a Raku script that would parse the LuaDoc annotations from the Lua sources. I quickly realized that it would be a giant pain: LDoc allows defining custom annotations, which, combined with mistakes in the docs, made it exceedingly hard to interpret the data correctly. To avoid that, I reused LDoc's own parser: passing a Lua function as a "filter" when running LDoc allowed me to dump the project documentation.
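Such a filter is just a Lua module whose function receives the already-parsed documentation tree instead of it being rendered to HTML. A rough sketch (the field names here are simplified; the real LDoc structures are richer):

```lua
-- dump_docs.lua: usable as an LDoc filter, e.g.
--   ldoc --filter dump_docs.dump .
local M = {}

function M.dump(modules)
  for _, mod in ipairs(modules) do
    print(mod.type, mod.name)
    for _, item in ipairs(mod.items or {}) do
      -- each item describes one documented function, field, signal, ...
      print("", item.type, item.name, item.summary)
    end
  end
end

return M
```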

Haxe is also minimalistic, due to it having so many compilation targets.
[1] Well, almost, like with everything...
[2] Wiki