It has been a VERY HOT SUMMER, my brain has overheated; I am going to discharge some s#!t in this post: bear with it, or skip a section, or go away now.
Recently, through Hacker News, I came in contact with the concept of Harbingers of Failure; according to this theory there are customers that systematically purchase new products that flop and, when considered as a group, their collective behaviour is a predictor of product failure. As a single person I am statistically irrelevant, but when I think of the computer programming technologies I have dived into, I feel like one of those people. Sheesh… existentially scary!
Back when I was a kid, computer programming fascinated me; I tried to learn something on the Commodore 64, but, alone and without guidance, I was unable to do much. Later, at university, Software Engineering interested me; I decided to learn generic computer programming to have “something more”, in addition to what I actually wanted to do as professional work. While studying engineering, I also dreamt of buying (yes) high–quality libraries for numerical methods and then interfacing them to a powerful programming language, to build good tools for freelancing: a personal competitive advantage; in retrospect: a stupid idea.
After all these years, I can tell that I am not that good at programming. I can handle only so much complexity and abstraction, so I need a programming language that allows me to clearly organise concepts my way; otherwise I can do nothing. Syntax is also of paramount importance; this is a problem I also have with mathematics: there are expression notations that consume too much of my brain power (most likely, the slots in my working memory) and leave too little to process the meaning; so I need the “right” notation, otherwise I can do nothing.
When studying Mathematics, Physics and applied engineering disciplines, I always struggled to find the right mathematical notation for my brain (I made enemies this way: professors and their minions kept demanding things written their way, which obviously was not my way). Likewise, I have been in search of the “right” programming language.
At university, the first course on computers (the Italian name was “Fondamenti di Informatica”) suggested Pascal as the programming language in which to develop the project for the final exam; but there was freedom; I ditched Pascal and chose C. That was a good choice (when I think of Borland’s Turbo C, I have tears in my eyes). It was the last good choice.
Later, I laid my eyes on Lisp (shoot me now!). I borrowed an introductory book on Common Lisp from the library; I had no way to try the language; reading about lambda and mapcar made my head spin. I kept a bookmark in my brain to come back to it at some point in the future.
Later I wanted to learn a programming language to do stuff without too much effort: I chose Tcl. Back then, Tk was one of the most advanced free software GUI toolkits available on the GNU+Linux platform. Tcl is hated by many; it is not exactly the most effective language; it has had some successful applications, but today it has very small commercial appeal. For certain it has limitations. I stuck with it for a while, because I learned a lot using it; I wrote some Tcl packages and some interfaces to C language libraries. I could have learned as much using other languages, too. IIRC, when I started using it, the Tcl core was at version 8.3 or 8.4; many years later it is at version 8.6: it is mature software. Not mainstream.
Later I wanted to drop Tcl and try something different: I learned that Scheme exists and it is a Lisp; I learned that GUILE is a Scheme implementation part of the GNU Project. Here I was. I learned what I think is a decent amount of the language and started writing some extensions for it: Zlib bindings; BZip2 bindings; CSV files; email addresses processing; Ezxdisp bindings; Libgcrypt bindings; GD bindings; GMP, MPFR, MPC, MPFI bindings; GSL bindings; Libiconv bindings; OpenSSL bindings; Nettle bindings; Nuri bindings; Tcl bindings; some other stuff.
At some point I felt that GUILE was going nowhere (one of the maintainers asked for a ping from people still actively working on GUILE projects; I was the only one answering with availability of extensions for the then–current GUILE version); I tried to propose something but went down in flames (no clear direction for the project was established and discussed by people more competent than me). I thought: if I have to contribute to developing a project, I need to have faith in its future; I no longer had it for GUILE. I do not think GUILE can be considered mainstream (look at other “languages for extensions”, like Lua).
For a while I tried PLT Scheme, now evolved into Racket. It appears to me that Racket is one of the most successful Lisp languages; for sure the most successful Scheme (or derivative of Scheme); sticking with Racket would have been a good choice. Instead, I was defeated by its complexity; for some reason, just “passively” using it was not enough to satisfy me; I had to actively understand it. No, this is not it. It is not about understanding or not, it is about having to bend my mind to accept what Racket is and how it works (because it works, no doubt about it), even if it is not the way my mind works. I dropped it.
At some point, I wanted to try something different, so I got interested in Neko… I bet you have never heard of it. From the web site:
Neko is a high–level dynamically typed programming language. It can be used as an embedded scripting language. It has been designed to provide a common runtime for several different languages. Learning and using Neko is very easy. You can easily extend the language with C libraries. You can also write generators from your own language to Neko and then use the Neko Runtime to compile, run, and access existing libraries.
I wrote some libraries for it and (craziness) translated the documentation into Texinfo format. I realised that Neko was not going to be adopted for anything useful; so I dropped it.
Back to Scheme. I like Scheme. r6rs had been ratified and implementations started to pop up. I had some enthusiasm for it, so I dove in. It hurts me to list all the things I have done to contribute to building a community around r6rs implementations; so I am not going to do it. r6rs tanked. But I am still here, developing a Scheme implementation of which I am, in practice, the only user.
I have written a library binding Vicare Scheme to Tcl version 8.6.4; it works with the head of the master branch. This binding makes it possible to run scripts in a Tcl interpreter; it allows loading the Tk toolkit library and interfacing Vicare to it. Not the best way to write applications with a gui. Whatever…
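Just to give an idea of the intended use, here is a minimal sketch of running a Tcl script from Vicare; the library name and the procedure names below are hypothetical placeholders for illustration, not necessarily the actual API of the binding:

(import (vicare)
  (vicare tcl))   ;; assumed library name, hypothetical

;; create a Tcl interpreter and evaluate a script in it;
;; tcl-interp-initialise and tcl-interp-eval are hypothetical names
(define interp (tcl-interp-initialise))
(tcl-interp-eval interp "puts {hello from Tcl}")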
Lately I have been developing container libraries. Containers or collections or whatever we want to call them are basic tools that should be present in any respectable programming language’s standard library. The support for r6rs already makes available built–in strings, lists, vectors, hashtables and bytevectors; Vicare extends the standard features with more functions for all of these containers. In addition, the following libraries are available:
(vicare containers weak-hashtables)
Like hashtables, but references to values are “weak”. It means that storing a value in a weak hashtable does not prevent it from being garbage collected.
(vicare containers bytevector-compounds)
A “bytevector compound” is a sequence of octets split into a sequence of bytevectors. Bytevector compounds have a special api to handle the sequence of octets as a First–in First–out queue.
(vicare containers char-sets)
Sets of objects satisfying the predicate char?. It is conceptually based on srfi-14, but it has extensions.
(vicare containers binary-heaps)
Binary heaps allow collecting objects according to a sorting function. They allow the extraction of the “lesser” contained object. They are useful for sorting.
(vicare containers chains)
A “chain” is a doubly–linked list. Being doubly–linked, both forwards and backwards iterations are possible.
(vicare containers stacks)
Stack objects are containers with efficient insertion and extraction in Last–In First–Out mode.
(vicare containers queues)
Queue objects are containers with efficient insertion and extraction in First–In First–Out mode.
(vicare containers deques)
Deque objects, or double–ended queues, are containers with efficient insertion and extraction at both ends.
(vicare containers dynamic-arrays)
Dynamic array objects offer an api similar to those of vectors, but with efficient dynamic resizing.
(vicare containers binary-search-trees)
Binary search trees implement storage of objects in a way that allows “fast” searching of members. The objects need to have a “less than” comparison function. This library implements basic data types and functions; no balancing strategies are implemented here.
(vicare containers sets-and-bags)
This library is actually the reference implementation of srfi-113 adapted to Vicare. Sets and bags (also known as multisets) are unordered collections that can contain any Scheme object.
(vicare containers ilists)
This library implements immutable lists. It is actually the reference implementation of srfi-116 adapted to Vicare; a small usage sketch follows the list of libraries below.
(vicare containers ralists)
This library implements random–access lists. It is actually the reference implementation of srfi-101 adapted to Vicare.
The documentation for the container libraries is in the file vicare-libs. Let’s not forget the fectors and pfds projects, which provide functional data structures.
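To give the flavour of these libraries, here is a minimal sketch using the ilists library; since it adapts the srfi-116 reference implementation, I am using the procedure names from the SRFI (ilist, ipair, icar, icdr); take it as a sketch, not a verbatim session:

(import (vicare)
  (vicare containers ilists))

;; build an immutable list with the srfi-116 constructor
(define L (ilist 1 2 3))

(icar L)        ⇒ 1
(icar (icdr L)) ⇒ 2
(ipair 0 L)     ⇒ an immutable list holding 0, 1, 2, 3; L is unchanged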
new and delete

At some point I wanted a set of syntaxes to perform operations on structs, records and Nausicaa’s classes through the use of the type identifier. The idea was that, by importing (rnrs) as the base language and the syntactic identifier of a type from a library, every operation on type instances must be possible.
The idea tastes good, but it leads to some weird syntaxes; a particular case is the constructor call. If we define a record type:
(define-record-type duo (fields one two))
we can call the constructor function as follows:
(make-duo 1 2) ⇒ #[record duo one=1 two=2]
obviously, if the type definition is in a library, we have to export the syntactic binding make-duo. But what if a special syntax allows us to call the constructor by specifying only the syntactic identifier duo? Here is the weird syntax:
(duo (1 2)) ⇒ #[record duo one=1 two=2]
this is actually implemented and it works for structs, records and Nausicaa’s classes.
Lately I have changed my mind a bit. While, in an ideal world, I still like the idea of coding operations using only the type identifier, I consider the weird syntaxes too high a price: the expression (duo (1 2)) does not really look like a constructor call. A more Schemey, or at least Common Lispy, syntax causes less confusion when parsing code with human eyes.
The expander already has syntaxes for dealing with type name syntactic identifiers: type-descriptor, is-a?, slot-ref, slot-set!; they are exported by (vicare). In the current head of the master branch, I have added two new syntaxes: new and delete; these syntaxes take their names from the C++ and D operators, so everybody knows them. new is for calling the default constructor and delete is for calling the destructor.
For example, with structs:
(import (vicare))

(define-struct duo
  (one two))

(define (duo-destructor stru)
  (fprintf (current-error-port) "destroying ~s\n" stru))

(set-struct-type-destructor! (type-descriptor duo) duo-destructor)

(define O (new duo 1 2))
(delete O)
-| destroying #[struct duo one=1 two=2]
and with records:
(import (vicare))

(define-record-type duo
  (fields one two))

(define (duo-destructor reco)
  (fprintf (current-error-port) "destroying ~s\n" reco))

(record-type-destructor-set! (type-descriptor duo) duo-destructor)

(define O (new duo 1 2))
(delete O)
-| destroying #[record duo one=1 two=2]
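For completeness, here is a minimal sketch of the other type syntaxes used with the same record type; this shows how I intend them to be used, it is not a verbatim session:

(import (vicare))

(define-record-type duo
  (fields one two))

(define O (new duo 1 2))

(is-a? O duo)          ⇒ #t   ;; type predicate through the type identifier
(is-a? "ciao" duo)     ⇒ #f
(type-descriptor duo)  ⇒ the record-type descriptor of duo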
I will progressively remove the weird syntaxes from the expander.
I am letting the Nausicaa libraries fall behind the development of the libraries based on (vicare); I am not abandoning them. It is just that, while I am thinking about how to implement the typed language in the expander, it makes no sense to change Nausicaa and change it again, and again, and again. When the design of the typed language reaches some sort of stability, I will update Nausicaa.