POLYGLOT PROGRAMMING

Better poly than sorry!

Hacking Erlang shell to support Alt+Left/Right cursor movement

My console setup

I spend a lot of time in the console. It's still a very productive environment, even though the need to preserve the compatibility with a character-oriented TeleTYpe[1] of the '60s prevents any radical improvements. It even works well out of the box until, at one point, it doesn't. This is a story of one such case.

I use URxvt[2] (terminal emulator), TMUX[3] (Terminal MUltipleXer), and ZSH[4] (shell) as my console. I switched to URxvt for its Unicode support, which wasn't common at the time. The relatively small number of display and input issues with commonly used applications was a nice bonus.

The only problem I had with it is that pressing Alt+Left/Right inserts C or D characters instead of jumping over a word. Apparently, URxvt sends a sequence of key codes which is not, by default, mapped to anything in most programs. There's a very simple fix for it, at least for programs that use the Readline[5] library. Readline is configurable, and you can define your own bindings in the ~/.inputrc file (see the docs); this is what I use, for example:

  "\e[1;3D": backward-word ### Alt+Left
  "\e[1;3C": forward-word  ### Alt+Right

With a couple of lines like these, line editing becomes mostly a solved (for me) problem. Mostly. Though few and far between, some programs don't use Readline or emulate it half-heartedly (thereby ignoring the config file). One such program is the Erlang (and, by extension, Elixir) shell.

BTW: Yes, I do know about M-b and M-f. I use Emacs. I just really like my arrow keys, ok?

The journey begins

One Saturday evening, after a decade of putting up with it, I decided to try fixing the problem. I'm honestly not sure what I was thinking; I must've been bored out of my mind. I mean, honestly, who'd want to dig through the entirely undocumented, complex internals of a multi-process beast with job control and remote capabilities...? (No, please don't answer, thanks.)

Once I resolved to go down the rabbit hole, I searched the net for docs or posts explaining how to customize anything more than the shell's prompt. It took quite a few tries to get anything at all, but I finally found two (just two) relevant posts. One is a post from 2013 (by the author of "Learn You Some Erlang"[6]) about the shell's architecture, and the other is a StackOverflow answer to the question of how to add Readline features to Elixir's IO.gets and friends.

The TL;DR from these two is basically "go look at edlin.erl". On a diagram (borrowed from the first post) of the processes (and modules) the shell consists of, it's the part marked in red:

edlin.erl is part of the stdlib application and can be found in ./lib/stdlib/src/edlin.erl, relative to the root of the Erlang build directory. You can see the whole file here. I made a fresh clone of the Erlang/OTP sources to avoid breaking my system-wide installation in case something went wrong. Compilation took some time, but it finished without problems.

Inside edlin.erl there's a state machine[7] used for parsing the incoming key codes into atoms denoting actions. It's a common Erlang pattern, where Tail Call Elimination is exploited to encode state transitions as normal function calls. It looks roughly like this:

edit([C|Cs], P, {Bef,Aft}, Prefix, Rs0) ->
    case key_map(C, Prefix) of
        meta ->
            edit(Cs, P, {Bef,Aft}, meta, Rs0);
        meta_o ->
            edit(Cs, P, {Bef,Aft}, meta_o, Rs0);
        meta_csi ->
            edit(Cs, P, {Bef,Aft}, meta_csi, Rs0);
        meta_meta ->
            edit(Cs, P, {Bef,Aft}, meta_meta, Rs0);
        {csi, _} = Csi ->
            edit(Cs, P, {Bef,Aft}, Csi, Rs0);
        meta_left_sq_bracket ->
        % ... more cases ...
        {undefined,C} ->
            {undefined,{none,Prefix,C},Cs,{line,P,{Bef,Aft},none},
             reverse(Rs0)};
        Op ->
            case do_op(Op, Bef, Aft, Rs0) of
                {blink,N,Line,Rs} ->
                    edit(Cs, P, Line, {blink,N}, Rs);
                {Line, Rs, Mode} -> % allow custom modes from do_op
                    edit(Cs, P, Line, Mode, Rs);
                {Line,Rs} ->
                    edit(Cs, P, Line, none, Rs)
            end
     end;

With the key_map function defined like this (out-of-order excerpts):

key_map($\^A, none) -> beginning_of_line;
key_map($\^B, none) -> backward_char;
key_map($\^D, none) -> forward_delete_char;
% ... more clauses ...
key_map($B, meta) -> backward_word;
key_map($D, meta) -> kill_word;
key_map($F, meta) -> forward_word;
key_map($T, meta) -> transpose_word;
% ... even more clauses ...

To be perfectly honest, the fact that it's a state machine wasn't obvious to me at first. My Erlang is a bit rusty, and one-letter identifiers don't make it the most readable code ever. It's also not trivial to see which sequence of codes will actually be sent to the edit function. I had to dig deeper.

Debugging the shell

First of all, this may be obvious, but in this case tried-and-true debugging with prints doesn't work. From within edlin you get no direct access to the terminal, which makes sense, given that it's itself a part of the terminal handling code. This tripped me up a bit in the beginning.

Fortunately, Erlang has an excellent graphical debugger, which you can attach to any process. To actually make use of it, you first need to reload the module you want to debug with its instrumented, interpreted version. This is done with the int (or :int in Elixir) module[8]. Unfortunately, when I tried, it didn't work:

  -▶ ./bin/erl
  Erlang/OTP 24 [RELEASE CANDIDATE 1] [erts-11.2] [source-444144870c] [64-bit]

  Eshell V11.2  (abort with ^G)
  1> int:ni(edlin).
  =ERROR REPORT==== 29-Mar-2021::20:37:21.516194 ===
  Can't load module 'edlin' that resides in sticky dir

  ** exception error: no match of right hand side value {error,sticky_directory}
       in function  int:'-load/2-fun-0-'/3 (int.erl, line 531)
       in call from int:load/2 (int.erl, line 527)

Apparently, the Erlang code server has a list of "sticky dirs" - modules living in them are not to be reloaded. Makes sense, most of the time. There has to be a way of disabling it, though, right? Yes, there is - you can disable it globally with the -nostick flag, or per directory or module, like this:

  2> code:unstick_mod(edlin).
  true

Unfortunately, that's still not enough. To be interpretable, a module has to be compiled in a special way so that it includes some debug data. If it isn't, you will get the following error:

  1> int:ni(edlin).
  ** Invalid beam file or no abstract code: edlin
  error

You can do this from the shell:

  2> compile:file("./lib/stdlib/src/edlin.erl", [debug_info]).

But then you have to remember to put the compiled file in the correct ebin directory yourself. Alternatively, you can pass +debug_info to the erlc invocation (as you can see in its help message, +term "passes the term unchanged to the compiler"):

  -▶ ./bin/erlc +debug_info -o ./lib/stdlib/ebin/ ./lib/stdlib/src/edlin.erl

Now you should be able to unstick and instrument the module, and then start the debugger in one go:

  3> code:unstick_mod(edlin), int:ni(edlin), debugger:start().

Working with the debugger

In the newly opened window, click on the Module menu and select edlin -> View. Then scroll down to the line you want to break on and double-click it (anywhere on the line). It looks like this on my computer (click to enlarge):

Now, when you switch back to the terminal and press a key you're interested in... nothing will happen! Instead, the process will reach the breakpoint and stop. This is indicated by the break value showing up in the status column in the main window:

To actually start debugging, you need to use the Process -> Attach menu item with the edlin process selected. It will open a window with the code, a list of local variables with their values, and buttons for stepping over and into calls. Just remember that for the debugger to work, the module you want to debug has to be instrumented; if you try to step outside of the edlin module, you won't see anything.

This is what the debugger looks like in action (click to enlarge):

Getting back to the code

After stepping through the execution of the edit/5 function a couple of times, I was able to work out a few things. Here's the function head again:

edit([C|Cs], P, {Bef,Aft}, Prefix, Rs0) ->
  • The first argument is a list of keycodes (as integers, which also happens to be how Erlang encodes strings, which helps with the preview of values). C is the current code, while Cs contains the rest of the codes to be parsed. This argument is the first part of a state machine and represents state transitions.
  • The second argument is a prompt, as a string. It's not used much and can be ignored in this case.
  • The third argument is a pair of strings. They are the result of splitting the current line at cursor position: Bef keeps the part on the left of the cursor, and Aft the part from the cursor to the end of line. These change when inserting or deleting characters, but in this case they stay constant, so the argument can be ignored.
  • The fourth argument, Prefix, is an atom (or a tuple of an atom and a string, as we'll see in a moment) which says what state the parser is currently in. This may be none - the starting state; meta - after a modifier key was pressed; meta_meta - if we found two escape characters in a row - and quite a few other values. This is the second part of the state machine.
  • The last argument is, I think, a list of low-level commands (called "requests") for the TTY driver to add or remove characters, move the cursor, blink, and so on. Since I don't need to add any new functionality here, it is also safe to ignore for now.

The key_map function takes the next key code and the current state. It then returns the next state. The edit function interprets the new state and either loops to parse the rest of the codes list, or returns a list of commands for the driver to execute.

Recall my .inputrc: the terminal should send the following key code sequence when Alt+Left is pressed (BTW: you can use Control+V[9] in the shell to quickly check which key codes are sent):

  \e[1;3D

Looking at the values of the C and Cs variables in the debugger proves that this is indeed what edlin receives. For the record: the numeric value of \e is 27, which you can see in the screenshot. My sequence, with the modifier code 3, isn't handled, though; here is the sequence of key_map calls (in other words, states of the parser) for the closely related \e[1;5C and \e[1;5D sequences, which edlin does recognize:

key_map($\e, none) -> meta;
key_map($[, meta) -> meta_left_sq_bracket;
key_map($1, meta_left_sq_bracket) -> {csi, "1"};
key_map($;, {csi, "1"}) -> {csi, "1;"};
key_map($5, {csi, "1;"}) -> {csi, "1;5"};
key_map($C, {csi, "1;5"}) -> forward_word;
key_map($D, {csi, "1;5"}) -> backward_word;

The first character - \e - puts the parser in the meta state, the next - [ - in the (aptly named) meta_left_sq_bracket. The csi atom stands for "Control Sequence Introducer" - a special code which causes the terminal to interpret the following codes instead of passing them to a program - and the parser collects the key codes starting after \e[. Then, finally, if all the codes in between match, we get to the forward_word and backward_word states, which are passed to the do_op function in the last case of edit.

Once there are no more codes to parse, edit returns a list of rendering commands, tagged with either blink (self-explanatory), more_chars (Enter was not pressed), or done (along with the full text of the line).
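
To make the shape of the machine concrete, here's a toy model of the loop in Python - my simplification for illustration, not edlin's actual logic - covering just the handful of transitions discussed in this post:

# Toy model of edlin's key parsing: a dict plays the role of key_map's
# clauses; intermediate states keep the parse going, anything else is an
# action (or the {undefined, C} fallback).
TRANSITIONS = {
    ("\x1b", "none"): "meta",
    ("[", "meta"): "meta_left_sq_bracket",
    ("1", "meta_left_sq_bracket"): ("csi", "1"),
    (";", ("csi", "1")): ("csi", "1;"),
    ("5", ("csi", "1;")): ("csi", "1;5"),
    ("C", ("csi", "1;5")): "forward_word",
    ("D", ("csi", "1;5")): "backward_word",
}
INTERMEDIATE = ("meta", "meta_left_sq_bracket")

def edit(codes):
    state, actions = "none", []
    for c in codes:
        nxt = TRANSITIONS.get((c, state), ("undefined", c))
        if nxt in INTERMEDIATE or (isinstance(nxt, tuple) and nxt[0] == "csi"):
            state = nxt                # keep consuming the sequence
        else:
            actions.append(nxt)        # a final state, e.g. backward_word
            state = "none"
    return actions

print(edit("\x1b[1;5D"))  # ['backward_word']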

The problem and the fix

As you can see from the code above, edlin recognizes \e[1;5C and \e[1;5D as valid sequences of key codes, while my terminal sends codes with 3 instead of 5.

To fix this, the only thing needed is to add three new clauses to the key_map function, like this:

   key_map($5, {csi, "1;"}) -> {csi, "1;5"};
+  key_map($3, {csi, "1;"}) -> {csi, "1;3"};
   key_map($~, {csi, "3"}) -> forward_delete_char;
   key_map($C, {csi, "5"}) -> forward_word;
   key_map($C, {csi, "1;5"}) -> forward_word;
+  key_map($C, {csi, "1;3"}) -> forward_word;
   key_map($D, {csi, "5"})  -> backward_word;
   key_map($D, {csi, "1;5"}) -> backward_word;
+  key_map($D, {csi, "1;3"}) -> backward_word;
   key_map($;, {csi, "1"}) -> {csi, "1;"};

First, we make encountering $3 while in the {csi, "1;"} state valid, and make it transition to a csi tuple (with 3 appended) as the next state. After that, we only need to handle the $C and $D characters in the {csi, "1;3"} state by returning the forward_word and backward_word atoms. That's it!

Turns out that yes, the three lines above are all it took to scratch my decade-old itch: after recompiling edlin.erl (as shown earlier) and restarting the shell, Alt+Left/Right finally jumps over words. As usual on such occasions, the solution is much less interesting than the path to it... Well, being able to use my arrow keys is still nice, at least.

Afterword

Despite some early setbacks, the fix turned out quite simple. Once I realized that there's a finite state machine in the code, the rest was relatively straightforward. The Erlang encoding of a state machine, using just function calls and pattern matching on states (the codes so far) and transitions (the current key code), is elegant and lightweight. It's quite similar to how one would write a state machine in Prolog, especially given the Prolog-based syntax of Erlang.

I used the word "parse" a few times in this post. This isn't an accident: parsers can often be represented by state machines. The edit function is a stateful parser as well as a state machine. It parses a grammar consisting of (among others) the \e, [, 1, ;, 3 terminal tokens, with the state atoms acting as the production rules.

It's correct to say that the code shown here either "interprets" (if you focus on the state machine being run) or "parses" (if you focus on the allowed sequences of states) the input list of codes.

That's it, I hope you enjoyed the post as much as I enjoy my arrow keys working properly in the Erlang shell :-)

ffipf - jump to file in a project quickly (PoC)

Using Emacs loadable modules and native Nim code to speed up the file search

NOTE: As usual, the source code is on GitHub

ffi... what?

The name stands for Fuzzy Find in Project Fast, thanks for asking. Believe me, I tried to come up with a better name for the project... and failed miserably. Sorry. I'd be forever grateful for better name suggestions.

The project is in a proof-of-concept stage and my main reason for this writeup is to see if there's any interest in something like this. I'm happy with the functionality it has, but it's nowhere near polished enough for other people to easily reuse.

So what is it?

ffipf is a backend for fuzzy matching file paths. Its scoring and sorting algorithm is better suited to matching paths than more general fuzzy matching implementations, like the ones in Helm and Ivy. It's written in Nim and compiled to native code, which makes it faster and more memory-efficient than other solutions. The compiled DLL is a proper Emacs module and can be loaded as if it were Elisp.

Let's go over the features in a bit more detail next.

Better matching and sorting algorithm

The matching algorithm is designed specifically for patterns targeting paths. It matches each section of the pattern, delimited by /, fuzzily, but it matches the segments in sequence. Sorting is based on how well a path conforms to the pattern, with the path most closely resembling it at the top. According to its author, the algorithm is close to the one used by TextMate[1]; I ported that code to Nim from the Ruby original[2].

In practice, this means that you can skip large parts of the path and input very few characters, yet still arrive at the correct file. You can also use it to quickly list all files within a set of similar directories, or files matching some pattern no matter where they sit in the hierarchy.

Here are some examples of patterns I'd use to search for particular files, along with their results. The examples are from my .emacs.d directory, and the output is shortened (you can easily change how many candidates are returned).

  -▶ ./ffip
  > fo/mag/mag.el                       # wanted: magit.el
  .emacs.d/plugins-forked/magit/lisp/magit.el
  .emacs.d/plugins-forked/magit/lisp/magit-tag.el
  .emacs.d/plugins-forked/magit/lisp/magit-wip.el
  .emacs.d/plugins-forked/magit/lisp/magit-git.el
  .emacs.d/plugins-forked/magit/lisp/magit-log.el
  .emacs.d/plugins-forked/magit/lisp/magit-pkg.el
  .emacs.d/plugins-forked/magit/t/magit-tests.el
  .emacs.d/plugins-forked/magit/lisp/magit-core.el
  .emacs.d/plugins-forked/magit/lisp/magit-stash.el

  > stg/                                # wanted: all files in plugins-staging/
  .emacs.d/plugins-staging/f3/.gitignore
  .emacs.d/plugins-staging/f3/LICENSE
  .emacs.d/plugins-staging/f3/README.md
  .emacs.d/plugins-staging/f3/create-markdown.coffee
  .emacs.d/plugins-staging/f3/f3.el
  .emacs.d/plugins-staging/f3/package-lock.json
  .emacs.d/plugins-staging/f3/package.json
  .emacs.d/plugins-staging/f3/update-commentary.el
  .emacs.d/plugins-staging/ecb/ecb-advice-test.el

  > stg/.el                             # wanted: all Elisp files in plugins-staging/
  .emacs.d/plugins-staging/f3/f3.el
  .emacs.d/plugins-staging/ecb/ecb.el
  .emacs.d/plugins-staging/elx/elx.el
  .emacs.d/plugins-staging/esup/esup.el
  .emacs.d/plugins-staging/doom/init.el
  .emacs.d/plugins-staging/doom/core/autoload/ui.el
  .emacs.d/plugins-staging/unfill/test.el
  .emacs.d/plugins-staging/doom/core/cli/env.el
  .emacs.d/plugins-staging/ecb/ecb2/test.el
  .emacs.d/plugins-staging/doom/core/core.el

  > plu/REA                             # wanted: all plugin READMEs
  .emacs.d/plugins-forked/xr/README
  .emacs.d/plugins-staging/ecb/README
  .emacs.d/plugins-forked/muse/README
  .emacs.d/plugins-forked/distel/README
  .emacs.d/plugins-forked/lua-mode/README
  .emacs.d/plugins-forked/yaml-mode/README
  .emacs.d/plugins-forked/s/README.md
  .emacs.d/plugins-forked/f/README.md
  .emacs.d/plugins-forked/a/README.md
  .emacs.d/plugins-forked/ht/README.md
  .emacs.d/plugins-forked/gh/README.md

  > co/mini/h                          # wanted: my config file for Helm
  .emacs.d/config/interface/minibuffer/my-helm-config.el

  > co/mini/                           # wanted: all config files related to minibuffer
  .emacs.d/config/interface/minibuffer/my-helm-overrides.el
  .emacs.d/config/interface/minibuffer/my-ido-config.el
  .emacs.d/config/interface/minibuffer/my-selectrum-config.el
  .emacs.d/config/interface/minibuffer/my-helm-config.el
  .emacs.d/config/interface/minibuffer/my-yes-or-no-prompt.el
  .emacs.d/config/interface/minibuffer/my-ivy-config.el
  .emacs.d/plugins-forked/selectrum-group/marginalia/.gitignore
  .emacs.d/plugins-forked/selectrum-group/marginalia/LICENSE
  .emacs.d/plugins-forked/selectrum-group/marginalia/README.md

The details of the scoring algorithm are a bit more complex, but the effects are very satisfactory in my opinion.
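
To give a feel for the segment-wise idea - matching only, without the scoring, and as my rough reading of the approach rather than a port of the Nim code - here's a toy Python sketch:

import re

def fuzzy_path_match(pattern, path):
    # Each '/'-separated pattern segment must fuzzily match successive
    # path segments, in order: "mag" matches "magit" because the letters
    # m, a, g appear in it in that order.
    def fuzzy(seg, part):
        return re.search(".*?".join(map(re.escape, seg)), part) is not None

    parts = path.split("/")
    i = 0
    for seg in pattern.split("/"):
        while i < len(parts) and not fuzzy(seg, parts[i]):
            i += 1                     # skip path segments that don't match
        if i == len(parts):
            return False
        i += 1                         # next segment must match further down
    return True

print(fuzzy_path_match("fo/mag/mag.el",
                       ".emacs.d/plugins-forked/magit/lisp/magit.el"))  # True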

Native module and execution speed

The main functionality is implemented in Nim and compiled to native code. Nim is a high-level language with a Pythonesque syntax, but its performance is closer to that of C or C++. Thanks to Yuuta Yamada's[3] work, it's possible to write Emacs extension modules easily in Nim.

My .emacs.d has nearly 40000 files under it. This is a lot, and simply traversing the directory hierarchy takes time; when you add the time needed to process the list of files in Emacs Lisp, an invocation of (for example) counsel-file-jump can take close to 2 seconds to initialize. Filtering can also feel sluggish during the input of the first few characters (it gets better as the pattern gets longer).

Traversing the whole hierarchy, i.e. initializing the search, takes just 0.3 seconds with the Nim implementation. Moreover, the feedback - displaying candidates as you type - is always instantaneous.

There are downsides to using native code. For example, the module can crash Emacs as a whole if a segfault is triggered in the code. However, Nim makes it significantly harder to shoot yourself in the foot like that. Almost all of the module is written in a "safe" subset of Nim; the only place where a crash is possible is in the parts which interact with the Emacs module API. Fortunately, after wrapping the raw API calls with helper procedures, the chance of triggering an unrecoverable error goes down drastically.

Another downside is that you need to compile the module before you can use it. Fortunately, Nim is much easier to compile than C, and it's available for all major platforms. After installing Nim, you're just one make or nimble build command away from a working module. It's also possible to distribute binaries in case you don't want to install Nim, but that's something for the future (I currently have no way of cross-compiling for non-Linux OSes).

Displaying candidates

Currently, I use Ivy for displaying candidates and the built-in project.el for finding the root directory of the current project. The interface is very basic - for example, it doesn't highlight the parts which were matched - but it does the trick.

That being said, my main focus is on the backend - the Nim-based dynamic module. It should be easy to write a Helm source for it, or to interface with it through Selectrum, Ido, or any other completion framework.

The main question

Before I start working on making the implementation bulletproof and usable for others, I need to know if there's any interest in a module like this. The code is nearly there in terms of the features I want it to have, and if it stays for my personal use only, I can slowly improve just the parts I need. On the other hand, if there is interest, I would need to clean up the code, remove all the assumptions specific to my setup, and at the very least add configuration options. For example, the directory blacklist (the list of dirs that should not be traversed) is currently hardcoded on the Nim side, which doesn't bother me but could be a problem for others.

So here's the question: would you be interested in a blazing fast fuzzy file finder for your Emacs?

Clojure-like lambda expressions in Emacs Lisp

If it's this easy, why isn't it implemented?

Anonymous function syntax

In Clojure[1] there are two syntaxes for anonymous functions. The first one is equivalent to lambda expressions in Emacs Lisp; they look like this:

  (fn [arg1] body)
  (lambda (arg1) body)

The other syntax, which is shorter and built into the reader, has no equivalent in Emacs Lisp. It looks like this:

  #(expr %1 %2)

This is equivalent to the following lambda expression:

  (fn [%1 %2] (expr %1 %2))

The shorter syntax is convenient and works really well with map/mapcar and other higher-order functions. It is, however, absent in Emacs Lisp.

Some time ago I found an implementation of the short lambda syntax for Emacs Lisp. It's a simple macro which expands to a lambda expression. You can test it like this:

  (funcall (short-lambda (cons 1 %1)) 2)  ; => (1 . 2)

The implementation, however, is incomplete. In the words of the author:

This assumes that there is a reader macro in the Emacs C code that translates #(STRUCTURE) to (short-lambda STRUCTURE), in the same way as for the backquote macro.

Indeed, as it stands, it's even longer than a normal lambda - clearly something is missing. Namely: support for the short lambda in the reader, which is implemented in C.

Implementing the missing part

How hard would it be to add the missing support to the reader? While thinking about this, I noticed that there is already something similar in Elisp, namely the hash-table syntax. It looks like this:

  (make-hash-table)
  ;; ⇒ #s(hash-table size 65 test eql rehash-size 1.5 rehash-threshold 0.8125 data ())

The #s(...) syntax is supported by the reader, ie. hash tables can be read with the same syntax they are printed in.

Wouldn't it be easy to create a short lambda syntax by changing #s to #f? Turns out - yes, it is easy. It took me an afternoon or two to figure out, with no prior experience with the Emacs C codebase. The whole implementation is less than 5 lines of code.

Changing the reader

First, I located the hash-table reader code in lread.c, which is where the reader lives. There's a function called read1, where the hash-table support is implemented. It looks like this (reformatted for brevity):

static Lisp_Object read1(Lisp_Object readcharfun, int *pch, bool first_in_list) {
  int c;
  // ... more variables here ...
retry:

  c = READCHAR_REPORT_MULTIBYTE (&multibyte);
  if (c < 0) end_of_file_error ();

  switch (c) {
    case '(': return read_list (0, readcharfun);
    case '[': return read_vector (readcharfun, 0);
    case ')':
    case ']': {
      *pch = c;
      return Qnil;
    }
    case '#':
      c = READCHAR;
      if (c == 's') {
        // ... lots of code specific to hash-tables ...

The only thing needed to add support for the short lambda is to change the if (c == 's') line to this:

  if (c == 'f') {
    return list2(Qfn, read0(readcharfun));
  }
  else if (c == 's') {
    // ... lots of code specific to hash-tables ...

The list2 function creates a list of two elements, where the first one is a symbol, defined (that's the second, and last, change to the C code) like this at the end of lread.c (fn is an alias for short-lambda):

  DEFSYM (Qfn, "fn");

The second element of the list is whatever can be read after the opening paren. In effect, #f(cons 1 %1) reads as (fn (cons 1 %1)).

The whole diff looks like this:

diff --git a/src/lread.c b/src/lread.c
index 015bcc2e11..42fc4050ae 100644
--- a/src/lread.c
+++ b/src/lread.c
@@ -2880,7 +2880,10 @@ read1 (Lisp_Object readcharfun, int *pch, bool first_in_list)

     case '#':
       c = READCHAR;
-      if (c == 's')
+      if (c == 'f') {
+        return list2(Qfn, read0(readcharfun));
+      }
+      else if (c == 's')
        {
          c = READCHAR;
          if (c == '(')
@@ -5168,6 +5171,7 @@ syms_of_lread (void)
   DEFSYM (Qload_force_doc_strings, "load-force-doc-strings");

   DEFSYM (Qbackquote, "`");
+  DEFSYM (Qfn, "fn");
   DEFSYM (Qcomma, ",");
   DEFSYM (Qcomma_at, ",@");

When is it useful?

As mentioned, the shortened syntax works well with higher-order functions. It's not essential, but it is convenient - especially if you use libraries like dash.el, which give you a lot of such functions.

Just yesterday I was writing a bit of code to replace literal characters with HTML entities. I fetched the list of entities from the gist published by Christian Tietze (blog post, the gist) and started coding. I had to parse the list of entities, breaking each line into its components: the character, the entity, and the description. The whole code looks like this:

(defconst html-entity-list
  '("char: \" entity: &quot;    text: quotation mark (APL quote)"
    "char: & entity: &amp;      text: ampersand"
    "char: ' entity: &apos;     text: apostrophe (apostrophe-quote); see below"
    ;; ....
    ))
(require 's)
(require 'dash)
(require 'short-lambda)

(defconst html-entities-lookup-table
  (-> (-juxt
       #f(second (s-match "char: \\(.\\) " %))
       #f(second (s-match "entity: \\(&.+;\\) " %))
       #f(second (s-match "text: \\(.+\\)" %)))
    (-map html-entity-list)))

(defun my-html-replace-entity (p)
  (interactive "d")
  (let*
      ((ch (buffer-substring-no-properties p (1+ p)))
       (rep (-find #f(equal ch (car %))
                   html-entities-lookup-table)))
    (delete-char 1)
    (insert (second rep))))

There are 4 short lambdas in this code - were it not for them, I would probably have written it differently. With them, though, the code ended up short and readable, without the need for any heavier abstractions.

That's it, so... why?

That's really everything you need to add short lambda support to Emacs Lisp. I've had this implemented in my Emacs for a few years now, and I use the #f() syntax regularly. It's convenient. It's easy to implement. So I keep wondering, from time to time - why isn't it implemented in Emacs yet?

Please let me know if you know the reason!

EDIT: so, um, yeah, one reason may be that nobody has suggested it as a feature yet. I'm stupid, it totally slipped my mind. I assumed it must have been proposed already, given that the short-lambda.el repo[2] is 6 years old at this point. But I didn't check. My bad!

Merging ZSH history files from different hosts with Scala

Learning Scala - along with Groovy, Java and JVM ecosystem - was fun!

NOTE: As usual, all the code is in a GitHub repository.

Motivation

For the past two years I've been dabbling in DIY home automation. I ended up with a bunch of Raspberry Pis[1] all over the house, all running Linux and connected via WiFi. They service some sensors and cameras, and show the current date and time (among other dashboard-y things, like a calendar or the weather forecast) on a few displays around the house.

I have a few Ansible[2] playbooks set up for running things like package and system updates, but I still often ssh to the Pis for one-off tasks (in the beginning they all look like one-offs...). To do this comfortably, I installed ZSH[3] - my shell of choice - on the Pis, using the same .zshrc config everywhere. Just a handful of ifs was enough to make my config portable, which was a pleasant surprise.

One of the CLI tools I have set up in my .zshrc is fzf[4], which I use as a history search mechanism, replacing the default one under Control+r. It's incredibly useful, as it lets you narrow down the search incrementally and interactively, with fuzzy matching, which makes arriving at the intended command much faster than with the built-in search. You could say I got addicted to the ease of browsing and searching through the shell history interactively. However, no matter what search method you use, if something's not there, you won't find it! That sounds trivial, but it leads straight to the topic of this post: how to merge ZSH history files.

The use case should be clear by this point: very often, whatever I did on one Pi, I would need to do, sooner or later, on one or two others. In such cases I reflexively tried to search for the needed command in the history of the current shell, which had never seen that command... Obviously, I couldn't find it.

Going through a few minutes of frustration a couple of times was enough for me to start thinking about an automated solution. How nice would it be, I thought, if something gathered the histories from all the hosts and merged them into a single one.

Why Scala? Groovy? JVM?

As usual: by accident. It just so happened that I was learning Scala (and the JVM in general) at the time for work, so I decided to use the urge to automate as material for practice. As many of us do all the time, I thought, how hard could it be? and started setting up a project, which you can see on GitHub. In this post, I want to highlight the nice things about Scala that I learned in the process.

Project setup using Gradle

I have nothing against the Scala build tool, sbt. I chose Gradle[5] simply because I had some prior experience with it from my previous JVM project. That project had parts written in Java, Groovy, and Scala, and Gradle was the first tool we found which handled this case without tons of boilerplate code. Gradle has plugins for nearly everything, a lightweight config syntax, and good performance. When starting the current project, I simply copy-pasted the build.gradle from the previous one, which was the fastest way to get started. The file looks roughly like this (click to unfold):

plugins {
  id 'application'
  id 'scala'
}

application { mainClassName 'zsh.history.Main' }

sourceSets {
  main {
    scala { srcDirs = ['src/scala/'] }
  }
}

repositories { jcenter() }

dependencies {
  implementation 'com.github.pathikrit:better-files_2.13:3.8.0'
  implementation 'com.github.nscala-time:nscala-time_2.13:2.22.0'
  implementation 'org.scala-lang:scala-dist:2.13.1'
  implementation 'org.scala-lang.modules:scala-parser-combinators_2.13:1.1.2'
  implementation 'org.scala-lang.modules:scala-parallel-collections_2.13:0.2.0'
}

You execute tasks configured in build.gradle with the gradle command, which works kind of like make (or gulp, or mix, etc.). For example, the plugin called 'application' defines a task called 'installDist', which compiles and packages the project, giving you a script which starts your application. To compile and run the project in one go, I ended up with the following script, run.sh:

#! /usr/bin/env bash

set -e

gradle installDist
./build/install/zsh-merge-hist/bin/zsh-merge-hist

It's worth noting that you can define your own tasks in build.gradle, using the full Groovy[6] language. Here's an example which creates scripts for running REPLs with the whole project on the CLASSPATH:

File mkfile(obj) { obj instanceof File ? obj : new File(obj.toString()) }
task makeScripts(type: DefaultTask) {
    String shebang = "#! /usr/bin/env sh\n"
    String root = projectDir.absolutePath
    String pre = ""
    File outFile

    String classpath = sourceSets.main.runtimeClasspath.toList()*.toString().join(":")
    try {
        String opts = mkfile(root + "/.java").text.split("\n").join(" ")
        pre += "\nexport JAVA_OPTS='$opts'\n\n"
    }
    catch(_){}

    outFile = mkfile("-scala")
    outFile.write shebang + pre + "scala -classpath ${classpath}\n"
    outFile.setExecutable(true, true)

    outFile = mkfile("-amm")
    outFile.write shebang + pre + "java -cp ${classpath}:/usr/local/bin/amm ammonite.Main\n"
    outFile.setExecutable(true, true)

    ArrayList<String> jshell_cp = classpath.split(":").findAll({
        // TODO: filter out all nonexistent dirs, not just the blacklisted ones
        !(it.contains("build/classes/java/main") ||
          it.contains("build/resources/main"))
    })
    outFile = mkfile("-jshell")
    outFile.write shebang + pre + "jshell --class-path ${jshell_cp.join(':')}\n"
    outFile.setExecutable(true, true)
}

An interactive shell is always nice to have, and one of the three, Ammonite[7], is a Scala equivalent of IPython[8] and a pleasure to work with - I used it extensively to experiment with unfamiliar libraries, for example.

At this point the setup is done. Let's start examining the implementation.

Fetching history files with SCP & Scala external commands

To recount the problem: I have a number of hosts on the network that I tend to log in to via ssh - which means I already have authorized_keys properly configured. I would like the shell history, which is a plain text file, to be synchronized among all the hosts. To do this, I need to fetch the history files over the network, process them somehow, and put the merged file back on the hosts.

In this situation, using the scp[9] command seemed like the easiest way of copying files across my network. It's not Scala-specific, but there's no need to reinvent the wheel and write custom networking code for something this simple. Fortunately, Scala has a simple DSL for running external programs built in, in the sys.process namespace (see the docs).

While the DSL is powerful - even chaining the I/O of a sequence of programs (like the | operator does in most shells) is possible - what I really needed was just a return code. If it's equal to 0, we know the transfer finished successfully; anything else signifies an error. To execute a command and get its exit code, you use the ! method, implicitly added to Strings and Seqs:

object Transfer {
  import scala.sys.process._
  import better.files.File
  import Configuration.config

  type DownloadInfo = (String, File)
  type DownloadResult = Either[DownloadInfo, DownloadInfo]

  def downloadSingleFileFrom(host: String): DownloadResult = {
    val (from, path) = config.connectionData(host)
    val localHistFile = File(path)

    println(s"Transfer $from to $path")
    val returnCode = s"scp $from $path".!  // could be: Seq("scp", from, path).!
    if (returnCode != 0 || !localHistFile.exists) {
      // clear the empty or partial file, don't throw if it doesn't exist at all
      localHistFile.delete(swallowIOExceptions=true)
      Left((from, localHistFile))
    }
    else {
      Right((from, localHistFile))
    }
  }
  // ...
}

Downloading multiple files at a time with parallel collections

Most of my Pis are connected via WiFi, and some of the older ones only speak the 2.4GHz band. The transfer is not slow enough to be irritating, but that's only if you need something from a single host. If you try downloading many files in sequence, the latency adds up to a noticeable level.

Scala provides a wonderful, high-level construct for parallelizing processing: parallel collections. Basically, whenever you have code which maps (or folds) a function over a collection, you can turn that collection into a parallel one. Parallel collections implement a map method which executes the given function on a pool of threads. The result is another parallel collection, which you can transform back into a normal one that holds the results of the computations, in the correct order. The toList method which does this blocks and waits for all the items in all the threads to finish being processed. It looks like this:

  // ...
  def downloadHistoryFiles(allowMissing: Switch = Switch.T): List[File] = {
    require(config.hosts.length > 0)
    config.createDownloadDirectory()

    val downloads = config.hosts.par.map(downloadSingleFileFrom _).toList
    val (failures, successes) = downloads.partition(_.isLeft)
    if (!allowMissing && failures.nonEmpty) {
      throw new TransferError(failures map unwrapFailure)
    }
    successes.map(_.toOption.map(_._2).get)
  }
  // ...

The interesting part is in this expression:

    hosts.par.map(downloadSingleFileFrom _).toList

Which is roughly equivalent to the following Python code:

import os
from concurrent.futures import ThreadPoolExecutor

hosts = [ ... ]
futures = []
results = []

with ThreadPoolExecutor(max_workers=os.cpu_count()) as executor:
    for host in hosts:
        futures.append(executor.submit(downloadSingleFileFrom, host))
    for future in futures:
        results.append(future.result())

Parallel collections take care of all the bookkeeping behind the scenes, so you get "scatter & gather"[10]-style concurrency and parallelism without having to do any scattering or gathering yourself.

One thing worth considering is whether our use of parallel collections is safe. The question arises because calling external processes is very clearly a side effect, and side effects are generally a bad fit for concurrency. In this case, however, we know that each application of downloadSingleFileFrom gets a file from a different server and saves it under a path different from all the other calls. As nothing is shared between the calls, they are safe to execute concurrently.

Parsing ZSH history file format

ZSH has two formats of history files: simple and extended. The extended one, which I use, looks like this:

: 1538336195:0;history 1
: 1538336321:0;cat mix.exs
: 1538336321:0;find . -iname rye

Each history entry (note: not "each line", because entries can span multiple lines) starts with a colon and a space (": "), followed by the timestamp in the typical UNIX format, then the time taken by the command to execute, and finally the command itself.

Commands spanning multiple lines are problematic - we can't just split the file contents on newlines to get the list of commands. Other than that, the format is not very complex, and it could be parsed with a few regexes.
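
For illustration, the regex route could look roughly like the Python sketch below. It keys on entry headers rather than newlines, so multi-line commands survive; a command body containing its own header-like line would still fool it, which is exactly the kind of corner case that makes me wary:

import re

# ": <timestamp>:<elapsed>;" at the start of a line marks a new entry;
# everything up to the next header belongs to the current command.
ENTRY = re.compile(r"^: (\d+):(\d+);", re.M)

def split_entries(text):
    matches = list(ENTRY.finditer(text))
    for m, nxt in zip(matches, matches[1:] + [None]):
        end = nxt.start() if nxt else len(text)
        yield int(m.group(1)), int(m.group(2)), text[m.end():end].rstrip("\n")

sample = ": 1538336195:0;history 1\n: 1538336321:0;echo 'a\nb'\n"
print(list(split_entries(sample)))
# [(1538336195, 0, 'history 1'), (1538336321, 0, "echo 'a\nb'")]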

Personally, though, whenever I start thinking about a few regexes - not just one or two - I tend to go for a more powerful and structured parsing tool. Scala provides a parser combinator library[11] with a DSL for defining the structure of the text to be parsed. Parser combinators are a functional way of encoding recursive descent parsing, and so they are able to parse both context-free and context-sensitive grammars; we don't need that full power here, but it's good to know we can deal with more complex formats using the same tool.

The parser is defined as an object which extends a Parsers subclass, RegexParsers:

  import scala.util.parsing.combinator.RegexParsers

  type ResultData = List[(Long, Int, String)]
  type ParseResult = SimpleParser.ParseResult[ResultData]

  object SimpleParser extends RegexParsers {
    override def skipWhitespace = false

    def nl = "\n"
    def semi = ";"
    def colon = ":"
    def digits = "[0-9]+".r
    def noNewLine = "[^\n]+".r

    def start = nl ~ colon
    def elapsed = digits <~ semi
    def timestamp = start ~> " " ~> digits <~ colon
    def command = ( noNewLine | (not(start) ~> nl) ).+ <~ guard(start)

    def line = timestamp ~ elapsed ~ command ^^ {
      case ts ~ el ~ cmd => (ts.toLong, el.toInt, cmd.mkString.trim)
    }

    def lines = line.+
  }

The parser is an object which defines a series of methods returning Parser instances. There are implicit conversions defined for Strings and Regexes that turn them into Parsers - which is why the nl through noNewLine definitions work. The parsers themselves are functions which take a stream as input and return a result along with the rest of the stream (to be parsed by the following parsers).

After defining the basic building blocks of the syntax, we define the grammar using parser combinators: the methods ~>, <~, and ~. They all express sequencing, ie. the parser created by parser1 ~ parser2 will match whatever parser1 matches, then try to match parser2. The difference between the three methods has to do with the results of parsing: the basic ~ operator means "combine the results of both parsers into one", while ~> means "discard the result of the first parser, return whatever the second returns". The <~ operator discards the result of the second parser instead.

There is an alternative operator, spelled parser1 | parser2 - the same way as in regexes - which parses whatever either parser1 or parser2 parses. Another important combinator is parser.+, which parses repetitions of whatever parser parses, putting the results in a List. Finally, we have the guard(parser) combinator, which parses whatever parser parses but doesn't advance the position in the input stream. This is known as a lookahead assertion in regexes.
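
If the combinators feel too magical, here's a minimal Python rendition of the underlying concept - a parser as a function from (text, position) to a (result, new position) pair or None, with sequencing and alternation as higher-order functions. This shows the idea only, not how scala-parser-combinators is actually implemented:

def lit(s):
    # a parser matching the literal string s
    def parse(text, pos):
        return (s, pos + len(s)) if text.startswith(s, pos) else None
    return parse

def seq(p1, p2):
    # Scala's ~ : run p1, then p2 on the rest, keep both results
    def parse(text, pos):
        r1 = p1(text, pos)
        if r1 is None:
            return None
        r2 = p2(text, r1[1])
        if r2 is None:
            return None
        return ((r1[0], r2[0]), r2[1])
    return parse

def alt(p1, p2):
    # Scala's | : try p1, fall back to p2
    def parse(text, pos):
        return p1(text, pos) or p2(text, pos)
    return parse

start = seq(lit("\n"), lit(":"))
print(start("\n: 153", 0))  # (('\n', ':'), 2)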

The last operator, parser ^^ fun, applies fun to the result of parser. Whatever fun returns becomes the new parsing result. Quoting the docs: "If you've done yacc parsing, the left hand side of the ^^ corresponds to the grammar rule and the right hand side corresponds to the code generated by the rule." This is used for cleaning up and structuring parsing results, as you can see in the example above, where the combined results of three parsers are pattern matched and transformed into a single 3-tuple (in the line definition).

To actually use the parser, you do something like this:

  def parseHistory(input: java.io.InputStream): Either[String, ResultData] = {
    val reader = new java.io.InputStreamReader(input, "UTF-8")
    SimpleParser.parse(SimpleParser.lines, reader) match {
      case SimpleParser.Success(lines, _) => Right(lines)
      case SimpleParser.Failure(msg, _) => Left(msg)
      case SimpleParser.Error(msg, _) => Left(msg)
    }
  }

As you can see, there are two failure modes: Failure and Error. Because I don't care about the distinction, I transform the result into an Either, with Right meaning success. The ignored part of the patterns, _, is the aforementioned rest of the input, which should be empty.

This would normally work, but parseHistory, as it is, fails on some history files. What is happening, and why?

(Un)Metafying ZSH history

It turns out ZSH escapes some characters in a special way when writing and reading history files. As far as I can tell, this mechanism is there to make sure characters special to the shell, like $, !, ~, etc., are not interpreted or executed by accident. Unfortunately, it causes a problem with string encoding; in Java, this means you get the following exception when you try to read the file:

    jshell> var p = java.nio.file.FileSystems.getDefault().getPath("/home/cji/mgmnt/zsh_history")
    p ==> /home/cji/mgmnt/zsh_history
    jshell> var cs = java.nio.charset.StandardCharsets.UTF_8
    cs ==> UTF-8
    jshell> java.nio.file.Files.readAllLines(p,cs)
    |  Exception java.nio.charset.MalformedInputException: Input length = 1
    |        at CoderResult.throwException (CoderResult.java:274)
    |        at StreamDecoder.implRead (StreamDecoder.java:339)
    |        at StreamDecoder.read (StreamDecoder.java:178)
    |        at InputStreamReader.read (InputStreamReader.java:185)
    |        at BufferedReader.fill (BufferedReader.java:161)
    |        at BufferedReader.readLine (BufferedReader.java:326)
    |        at BufferedReader.readLine (BufferedReader.java:392)
    |        at Files.readAllLines (Files.java:3330)
    |        at (#3:1)

Now, ZSH can mostly handle UTF-8 text. Or rather, UTF-8 is defined in such a way that it's mostly backward compatible: as long as a program doesn't do anything special with character codes above 127, it should be able to handle UTF-8 transparently. ZSH escaping, however, uses codes from 0x83 (that is, 131 in decimal, 0b10000011 in binary) upward to encode its special characters. It does this without accounting for the variable-width encoding of UTF-8, breaking the encoding in the process.
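
To see the breakage in miniature, here's a small Python demo; for simplicity it metafies every byte from 0x83 up, which over-approximates the set of characters ZSH actually escapes:

raw = "ls ~/ę".encode("utf-8")          # 'ę' encodes to the two bytes C4 99
metafied = b""
for b in raw:
    # insert Meta (0x83) and XOR the escaped byte with 32, as ZSH does
    metafied += bytes([0x83, b ^ 32]) if b >= 0x83 else bytes([b])

try:
    metafied.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)  # the Python counterpart of Java's MalformedInputException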

The history file is written in this metafied - meaning escaped - format. To make it UTF-8-clean, we need to reverse the escaping. In the ZSH source, there's a function called unmetafy, which looks like this:

#define Meta 0x83

mod_export char *
unmetafy(char *s, int *len)
{
    char *p, *t;

    for (p = s; *p && *p != Meta; p++);
    for (t = p; (*t = *p++);)
	if (*t++ == Meta && *p)
	    t[-1] = *p++ ^ 32;
    if (len)
	*len = t - s;
    return s;
}

The metafication inserts a special character, Meta, before each special character, which is additionally XORed with 32. As XOR is its own inverse, unmetafying simply removes every Meta character encountered and XORs the following character with 32 again. The code is terse and efficient, as it does this in place - no new string is allocated. My reimplementation in Scala is less efficient, as it creates a copy of the input string, but that's not really an issue given the amount of RAM available vs. the length of the history file: currently my history file is 2.4 MB, while my computer has 32 GB of RAM... Anyway, in Scala it looks like this:

object Unmetafy {
  val Meta = 0x83.toByte  // On the JVM bytes can only be signed, so numbers
                          // above 127 need to be be converted
  def unmetafy(file: File): String = unmetafy(file.byteArray)

  def unmetafy(bytes: Array[Byte]): String = {
    val it = bytes.iterator
    val out = new ArrayBuffer[Byte](bytes.length)
    while (it.hasNext) {
      val byte = it.next()
      if (byte == Meta)
        out.addOne((it.next() ^ 32).toByte)
      else
        out.addOne(byte)
    }
    new String(out.toArray, "UTF-8")
  }
}

One curious thing I learned, visible in this example, is that on the JVM there is no unsigned byte type. This means that values above 127 are interpreted as negative; this is known as two's complement, where the highest bit encodes the sign. What's important to note is that the value - ie. the bits set - stays the same; it's just the interpretation that changes. This means we can write the Array of Bytes back into a file without doing anything special - we only need to convert values over 127 to Byte to satisfy the type checker.
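
A quick illustration of that reinterpretation, with Python standing in for the JVM:

b = 0x83                        # bit pattern 0b10000011, 131 when unsigned
signed = b - 256 if b > 127 else b
print(signed)                   # -125: what a JVM Byte reports for the
                                # very same bits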

Dumping merged history back into a file

After parsing, we have a number of Lists of tuples. Many of the tuples - almost all, actually - are duplicates, which we need to remove. I do this by joining all the lists, then sorting by timestamp (the first element of each tuple), then removing duplicates, and finally removing repeated commands (the third element of each tuple) from the list. It looks like this:

object Transform {
  import better.files.File
  import Parsing.ResultData
  import Configuration.config

  def removeDuplicates(lines: ResultData): ResultData =
    lines.sortBy(_._1).distinct

  def removeRepeatedCommands(lines: ResultData): ResultData =
    lines.reverse.distinctBy(_._3).reverse

  def processHistoryFiles(): ResultData =
    processHistoryPaths(config.hosts.map(config.getPathForHost))

  def processHistoryFiles(files: List[File]): ResultData = {
    val parsedFiles =
      files.par.map({ dest =>
        println(s"Parsing ${dest}...")
        val inputString = Unmetafy.unmetafy(dest)
        val Right(parsed) = Parsing.parseHistory(inputString): @unchecked
        parsed
      })
    removeRepeatedCommands(removeDuplicates(List.concat(parsedFiles.toList: _*)))
  }
}

As you can see, parsing also happens in parallel, thanks to parallel collections. The main function, processHistoryFiles, returns Parsing.ResultData, which is an alias for a list of triples: List[(Long, Int, String)].

What's left is just dumping the results back into a file:

object Dumping {
  import scala.util.Using
  import better.files._
  import Parsing.ResultData
  import Configuration.config

  def renderLine(ts: Long, rt: Int, cmd: String) =
    s": $ts:$rt;$cmd\n"

  def dumpResultsToFile(res: ResultData): Unit =
    dumpResultsToFile(res, config.getPathForHost("merged"))

  def dumpResultsToFile(res: ResultData, path: String): Unit = {
    println(s"Dumping parsed data to ${path}")
    dumpResults(res, File(path).newOutputStream)
  }

  def dumpResults(res: ResultData, out: java.io.OutputStream = System.out): Unit = {
    Using.resource(out) { out =>
      for( (ts, elapsed, command) <- res ) {
        out.write(renderLine(ts, elapsed, command).getBytes("UTF-8"))
      }
    }
  }
}

The code here is straightforward; the one thing worth noting is the Using.resource() construct[12] in dumpResults. Scala doesn't have try-with-resources[13], which in Java is used to always free, or close, a resource when control exits a block - both normally and via an exception. In Python, you would use the with statement; Scala defines a higher-order function which does the same.
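
For comparison, the Python equivalent of that guarantee (a trivial sketch with a made-up file name):

# "with" closes the file whether the block exits normally or via an
# exception - the same promise Using.resource makes in Scala.
with open("merged_history", "wb") as out:
    out.write(b": 1538336195:0;history 1\n")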

Configuration and reading JSON data

Instead of hardcoding the list of hosts, I decided to create a configuration file, .mergerc. It is a JSON file - mostly because I wanted to try using JSON with Scala. The configuration format looks like this:

{
  "hosts": ["vps1", "kuro", "shiro", "midori", "sora", "..."],
  "tempDir": "./tmp",
  "sourcePath": "mgmnt/zsh_history"
}

To read the configuration, I first created a case class, imaginatively named Configuration, which has fields corresponding to the JSON attributes:

case class Configuration(
  val hosts: List[String],
  val tempDir: String = "./tmp/",
  val sourcePath: String = "~/.zsh_history",
) {
  // ...
}

The class defines some helper methods which are not essential, so I elided them here. Then there's a companion object, also called Configuration, which handles loading the JSON data and mapping it onto a Configuration instance. It looks like this:

import com.fasterxml.jackson.databind._
import com.fasterxml.jackson.module.scala.DefaultScalaModule

object Configuration {
  import better.files.File
  import org.joda.time.Instant

  val settingsFile = (File.home / ".mergerc")

  val mapper = new ObjectMapper()
  mapper.registerModule(DefaultScalaModule)

  def loadConfig() = mapper.readValue(
    settingsFile.toJava,
    classOf[Configuration]
  )

  if (!settingsFile.exists)
    throw new RuntimeException("No configuration file found!")

  val config = loadConfig()
}

Well, it's still longer and more verbose than a simple JSON.parse() in JavaScript would be, but I have to say - it's not bad, given the statically typed nature of Scala. The magic apparently happens in the ObjectMapper object, which is customized for Scala with the .registerModule(DefaultScalaModule) call. I have no idea how that works - probably via reflection, but I don't know Scala meta-programming well enough to say for sure. Still, all you have to do is define a class with fields of the correct types, and you get an instance of that class with correctly parsed values, automatically. Not bad.

That's it!

That's basically all the code in the project - there are some additional utilities here and there, but I covered all the essential parts. I use it regularly to merge ZSH history on a number of hosts, currently 8, and it works without a hitch. The first run takes around 30 seconds, while subsequent runs take around 7-8 seconds. Most of the time is spent downloading and uploading files, with parsing, sorting, and dumping the merged history taking less than a second. This is for 9 hosts and history files of around 2.5 MB - not bad, I think.

Above all else, though, I learned a lot - about Scala and the JVM, and about string encodings on top of that. There were frustrating moments, but overall it was a pleasant and fun journey. I hope you enjoyed reading about it - let me know what you think about the post in the comments.

That's it - thanks for reading!

Awesome WM and an animated 3x3 grid of virtual desktops

My new window manager is Awesome

I wrote a few posts in the past about my window manager - I was using StumpWM, a WM written in Common Lisp. It's great, and I used it for a long time, but I had to switch: I got a 4k laptop from work, and Stump couldn't handle that - at least not without a lot of work. The fact that Stump draws everything with X APIs (XCB) doesn't make it any easier, either.

So, reluctantly, I was forced to switch to GNOME. It's a surprisingly solid environment, as long as you install a few add-ons and disable most of the "user friendly" features (I wonder who ever thought the "reversed scroll direction" was "natural"!!)

Unfortunately, GNOME is not an interactively extendable piece of software. It is scriptable with JavaScript (and probably other languages), but it doesn't offer a REPL where you could explore the system from the inside, access and modify all the global state, and add or edit functions on the fly. These are important tools, which help you customize the software to your needs - a lot faster than you could without them.

I started looking for a solution. I wanted a reasonably stable, maintained window manager with (or even built around) a REPL - a simple requirement, but it filtered out 99% of the candidates, leaving just three:

Sawfish is scriptable with "a Lisp variant", described as a "mix of Emacs Lisp and Scheme", though I'm not sure what exactly that's supposed to look like. Unfortunately, it looked the least maintained of the bunch, so I decided to pass on it.

XMonad uses Haskell for scripting, which is an interesting choice. I don't know Haskell - I've just skimmed a few books on it and played in the REPL, which is not the level of "knowing" a language - but I could learn. After taking note of it, I went on to examine the third option.

As the title already says, I ultimately chose Awesome WM. My main reason (besides the name) was that it is built around a library of lightweight widgets which render using the Cairo library, not the antiquated X widgets (like in StumpWM). Furthermore, it is scripted in Lua, a language which I know, and which is also a target for a transpiler I happened to be interested in. The reasonably complete documentation helped too, though you can see a few lacking areas.

I will write about the rest - how I set up Awesome, explored its library (called "awful"...), re-learned Lua from the ground up, and how I ended up forking the mentioned language; you know, the uninteresting details - in follow-up posts. For now, I just want to share a minor success: coding a complete widget by myself!

3x3 virtual desktop grid

Right, it's about virtual desktops. All modern OSes offer the functionality where you can switch to another "desktop" with its own set of windows, then switch back. In some cases changing the desktop is accompanied by an animation (eg. sliding vertically or horizontally), and there's often a "zoomed out" mode for viewing the windows from all desktops at once. In the most popular implementations, the desktops are chained in a straight line, without even a wraparound.

Awesome, of course, has virtual desktops, though it calls them tags: switching to desktop number 3 makes only the windows tagged with "3" visible. The default configuration has ten tags set up; users can change that however they wish. With a larger number of desktops, though, comes another problem: how do you remember which window is on which desktop?

My solution to that problem has been, since my early Linux experiences with Enlightenment 0.16, arranging the desktops in a grid. I'm not sure what it is, but there's something about path-finding and spatial metaphors that our brains seem to like a lot... Anyway, Awesome sadly lacked the ability to display a grid of my desktops - the "taglist" is a single row of labels (often numerical), like here:

So the first thing I want to share is that, after a long while, I managed to display a three-by-three, free-floating taglist in Awesome! It was much more troublesome than I expected, due to the dual (schizophrenic) nature of the widget system (X / Cairo) and the lack of good docs on Awesome's architecture. Well, after a lot of tinkering, I made it! See the screenshot below (on the right).

Well, it was almost perfect - but unfortunately, no matter where I placed it or how I played with opacity, there was always a moment when it displayed over some important detail, requiring manual intervention. I realized that, to be fully ideal, it would need to hide and show itself at just the right moments: show after a desktop change (or after a key shortcut, or on mouse-over), then hide a few seconds later, or immediately if clicked.

This time it was a bit easier, mostly because I had already learned most of the API, but also thanks to Lua coroutines. Awesome has no direct support for animations, doesn't use them for anything (that I'm aware of), and apparently is not very interested in them. Fortunately, Awesome implements a timer: a piece of code which can call a callback function after some time, then repeat (or not). That is enough to let users implement their own stepping logic. Thanks to Lua coroutines, you can describe your animations as simple functions, wrap them in a coroutine, and plug them straight into a timer. Here's the result of my efforts in action:

The code for this is in the repo on GitHub (as usual), but if you'd like to use it, ping me first - it's my personal config and I didn't bother with much cleanup, but I can do that if it's going to be useful to someone.
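
If you're curious what "wrap an animation in a coroutine and plug it into a timer" boils down to, here's a framework-free Python sketch of the idea, with generators standing in for Lua coroutines and a sleep loop standing in for Awesome's timer (a real timer schedules callbacks instead of blocking):

import time

def slide(widget, dx, steps=10):
    # describe the animation as a plain generator: one yield per frame
    for _ in range(steps):
        widget["x"] += dx // steps
        yield                     # hand control back until the next tick

def run_on_timer(animation, interval=0.02):
    for _ in animation:           # each iteration = one timer callback
        time.sleep(interval)      # stand-in for the timer's scheduling

w = {"x": 0}
run_on_timer(slide(w, dx=100))
print(w["x"])                     # 100 - moved in 10 small steps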
