牛: a preface

(Ox or Oxlang) is an experiment, a dream, a thought which I can't seem to get out of my head. After working primarily in Rich Hickey's excellent language Clojure for over two years now and spending a summer hacking on Oxcart, an optimizing compiler for Clojure, it is my considered opinion that Clojure is a language which Rich invented for his own productivity at great personal effort. As such, Clojure is a highly opinionated Lisp, one which makes strong design decisions and has its own priorities.

Ultimately I think that the Clojure language is underspecified. There is no formal parse grammar for Clojure's tokens. There is no spec for the Clojure standard library. Due to this underspecification, Clojure as a language is bound to Rich's implementation on the JVM: between leaking implementation details and the lack of a spec, no other implementation can ensure compatibility. Even ClojureScript, the official effort to implement Clojure atop JavaScript, is at best a dialect due to the huge implementation differences between the two languages.

This is not to say that Clojure is a bad language. I think that Clojure is an excellent language. If there is ever an Ox prototype, it will likely be built in Clojure. It's just that my priorities and Rich's are mismatched. Rich, it seems, is happy with the existing JVM implementation of Clojure, while I see a lot of really interesting work that could be done to optimize, type and statically link Clojure, or a very Clojure-like language, and such work is very unlikely to become part of the Clojure core.

Clojure's lack of specification makes it futile for me to invest in providing an imperfect implementation, so the only thing that makes sense is to specify my own language. The result is the Ox language. The joke in the name is twofold: the compiler for Clojure I was working on when I originally had the idea for Ox was Oxcart, named after the A-12/SR-71 program. The rest of it is in the name. Oxen are slow, quiet, tractable beasts of burden used by many cultures. This ultimately characterizes the language I'm after in the Ox project. There's also some snarking to be found about the tractability of the ox compared to that of other languages' mascots, like the gopher, the camel and the gnu, which are much less willing.

The hope of the Ox project is to produce a mostly typed, mostly pure language which is sufficiently well specified that it can support both static and dynamic implementations on a variety of platforms. Ox draws heavily on my experience with Clojure's various pitfalls, as well as my exposure to Haskell, Shen, Kiss and OCaml, to produce a much more static, much more sound Lisp dialect: one which specifies a concrete standard library and host interoperation mechanics in a platform-abstract way, amenable to both static and dynamic implementations.

In the forthcoming posts I will attempt to illustrate the model and flavor of the Ox programming language, as well as sketch some implementation details that differentiate Ox from Clojure and, I think, make it a compelling project. Note however that as of now Ox is, and will remain, vaporware. As with the Kernel Lisp project, while I may choose to implement it eventually, the exercise of writing these posts is really one of exploring the Lisp design space and trying to determine for myself what merit this project has over the existing languages from which it draws inspiration.

[[ space reserved for links to the rest of the series ]]

^d

On Future Languages

As Paul Phillips notes at the end of his talk We're Doing It All Wrong [2013], a programming language is ultimately incidental to building software rather than critical. The "software developer" industry is not paid to write microcode; software as an industry exists to deliver business solutions. Similarly, computers themselves are by and large incidental: they are agents for delivering data warehousing, search and transmission solutions. Very few people are employed to improve software and computer technology for its own sake, compared to the hordes of industry programmers who ultimately seek to use computers as a magic loom to create value for a business. In this light I think it's meaningful to investigate recent trends in programming languages and software design as they pertain to solving problems, not to writing code.

As the last four or five decades of writing software stand witness, software development is rarely an exercise in first constructing a perfect program, subsequently delivering it, and watching, pleased, as a client makes productive use of it until the heat death of the universe. Rather, we see that software solutions, when delivered, are discovered to have flaws, or don't solve the problem the client thought they would, or just don't do it quite right, or the problem changes. All of these changes to the environment in which the software exists demand changes to the software itself.

Malleability

In the past, we have attempted to develop software as if we were going to deploy perfect products forged of pure adamantium which would endure forever unchanged. This begot the entire field of software architecture and the top-down school of software design. The issue with this model, as the history of top-down software engineering stands testament, is that business requirements change if they are known, and must be discovered and quantified if they are unknown. This is an old problem with no good solution. In the face of incomplete and/or changing requirements, all that can be done is to evolve software as rapidly and efficiently as possible to meet changes, as Parnas argues.

In the context of expecting change, languages and the other tools used to develop changing software must be efficient to re-architect and change. As Paul Phillips says in the above talk, "modification is undesirable, modifiability is paramount".

Looking at languages which seem to have enjoyed traction in my lifetime, the trend seems to be that, excepting tasks for which the implementation language was itself a requirement, the pendulum both of language design and of language use has been swinging away from statically compiled languages like Java, C and C++ (the Algol family) towards interpreted languages (Perl, Python, Ruby, JavaScript) which trade off some performance for interactive development and immediacy of feedback.

Today, that trend would seem to be swinging the other way. Scala, a statically checked language based around extensive type inference, with some interactive development support, has been making headway. Java and C++ seem to have stagnated but are by no means dead and gone. Google's Go, Mozilla's Rust, Apple's Swift and others have appeared on the scene, also fitting into this intermediary range between interactive and statically compiled, with varying styles of type inference to achieve static typing while reducing programmer load. Meanwhile the hot frontier in language research seems to be static typing and type inference, as the liveliness of the Haskell ecosystem amply proves.

Just looking at these two trends, I think it's reasonable to conclude that interpreted, dynamically checked, dynamically dispatched languages like Python and Ruby succeeded at providing more malleable programming environments than the languages which came before them (the Algol family & co.). However, while making changes in a dynamically checked language is well supported, maintaining correctness is difficult, because there is no compiler or type checker to warn you that you've broken something. This limits the utility of malleable environments, because software which crashes or gives garbage results is of no value compared to software which behaves correctly. Yet, as previously argued, software is not some work of divine inspiration which springs fully formed from the mind onto the screen. Rather, the development of software is an evolutionary undertaking involving revision (which is well suited to static type checking) and discovery (which may not be).

As a checked program must by definition always be in a legal state (correct with respect to the type system), some development by trial and error is precluded: the type system constrains the flexibility of the program, requiring systematic restructuring where an isolated change could have sufficed. I don't argue that this is a flaw. It is simply a trade-off I note between allowing users to express and experiment with globally "silly" or "erroneous" constructs, and strictly requiring that all programs be well formed and typed even when, with respect to some context or the programmer's intent, the program may in fact be well formed.

As program correctness is an interesting property, and one which static model checking (including "type systems") is well suited to assisting with, I do not mean to discount typed languages. Ultimately, a correct program must be well typed with respect to some type system, whether that system is formalized or not.

Program testing can be used to show the presence of bugs, but never to show their absence!

~ Dijkstra (1970)

Static model checking, on the other hand, can prove the absence of flaws with respect to some model. This property alone makes static model checking an indispensable part of software assurance, as it cannot be replaced by any other non-proof-based methodology such as assertion contracts or test cases.

Given this apparent trade-off between flexibility and correctness, Typed Racket, Typed Clojure and the recent efforts at a typed Python are interesting, because they provide halfway houses between the "wild west" of dynamically dispatched, dynamically checked languages like traditional Python, Ruby and Perl and the eminently valuable model checking of statically typed languages. They enable programmers to evolve a dynamically checked system, passing through states of varying soundness towards a better system, and then, once it has reached a point of stability, solidify it with static typing, property-based testing and other static reasoning techniques, all without translating the program into another language with stronger static analysis properties.
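For the unfamiliar, here's a sketch of that "solidify later" workflow in core.typed's annotation style. The namespace and function are invented for illustration, and core.typed is assumed to be on the classpath:

    (ns example.gradual
      (:require [clojure.core.typed :as t]))

    ;; Written and REPL-tested dynamically first; the annotation is
    ;; added once the design settles, then verified with (t/check-ns).
    (t/ann parse-age [t/Str -> t/Int])
    (defn parse-age [s]
      (Long/parseLong s))

The function runs as ordinary dynamically checked Clojure the whole time; the annotation only adds a static check on top.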

Utility & Rapidity from Library Support

Related to malleability, in terms of ultimately delivering a solution to a problem that gets you paid, is the ability to get something done in the first place. Gone (forever, I expect) are the days when programs were built without external libraries. Looking at recent languages, package/artifact management and tooling capable of trivializing the leveraging of open source software has EXPLODED. Java has mvn, Python has pip, Ruby has gem, Haskell has cabal, Node has npm, and the Mozilla Rust team deemed a package manager so critical to the long-term success of the language that they built their cargo system long before the first official release, or even release candidate, of the language.

Why are package managers and library infrastructure critical? Because they enable massive code reuse, especially of vendor code. Building a webapp? Need a datastore? The investment of buying into whatever proprietary database you may choose has been driven so low by the ease with which $free (and sometimes even free-as-in-freedom) official drivers can be found and slotted into place that it's silly. The same goes for less vendor-specific code: regex libraries, logic engine libraries, graphics libraries and many more exist in previously undreamed-of abundance (for better or worse) today.

The XKCD stacksort algorithm is a tongue-in-cheek reference to the sheer volume of free-as-in-freedom, forget free-as-in-beer, code which can be found and leveraged in developing software today.

This doesn't just go for "library" support; I'll also include FFI support here. Java, the JVM family of languages, Haskell, OCaml and many others gain much broader applicability by having FFI interfaces for leveraging the decades of C and C++ libraries which predate them. Similarly, Clojure, Scala and the other "modern" crop of JVM languages gain huge utility and library improvements from being able to reach through to Java and leverage the entire Java ecosystem selectively when appropriate.
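To make that concrete, here's the sort of reach-through I mean, straight from a Clojure REPL:

    ;; No wrapper library required: the whole JDK is one import away.
    (import 'java.util.UUID)
    (str (UUID/randomUUID))   ;=> "0b36a42e-..." (random, of course)
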

While it's arguably unfair to compare languages on the quantity of libraries available, as this metric neglects functionally immeasurable quality and utility, the presence of any libraries at all is a legitimate comparison, in terms of potential productivity, to their comparative absence.

What good is a general purpose building material, capable of constructing any manner of machine, when simple machines such as the wheel or the ramp must be reconstructed by every programmer? Not nearly so much utility as a building material providing these things on a reusable basis at little or no effort to the builder, regardless of the ease with which one may custom build such tools as needed.

So What's the Big Deal

I look at this and expect to see two trends coming out of it. The first is that languages with limited interoperability and/or limited library bases are dead. Stone cold dead. Scheme (RⁿRS) and Common Lisp were my introductions to the Lisp family of languages. They are arguably elegant and powerful tools; however, compared to other tools such as Python, they seem to offer at best equal leverage, due to a prevailing lack of user-friendly, let alone newbie-friendly, library support.

I have personally written 32KLoC of Clojure. More counting intermediary diffs. And that's just what I can find on my laptop. Why? Because Clojure, unlike Common Lisp and Scheme in my experience, escapes the proverbial Lisp curse, simply thanks to tooling which facilitates library and infrastructure sharing at a lower cost than the cost of reinvention. Reinvention still occurs, as it always will, but the marginal cost of improving an existing tool vs. writing a new one is in my experience a compelling motivator for maintaining existing toolkits and software. This means that Clojure at least seems to have broken free of the black hole of perpetual reinvention, and its users are consequently liberated to attack real, application-critical problems rather than fighting simply to build a suitable environment.

It's not that Clojure is somehow a better language; arguably it ultimately isn't, since it lacks the inbuilt facilities for many interesting static proof techniques, but that's not the point. As argued above, the language(s) we use are ultimately incidental to the task of building a solution to some problem. What I really want is leverage from libraries, flexibility in development, optional and/or incremental type systems, and good tooling. At this task, Clojure seems to be a superior language.

The Long View

This is not all to say that I think writing languages is pointless, nor that the languages we have today are the best we've ever had, let alone the best we ever will have, at simultaneously providing utility, malleability and safety. Nor is it to say that we'll be on the JVM forever due to the value of legacy libraries, or something equally silly. It is to say that I look with doubt upon language projects which do not have the benefit of support from a "major player", a research entity or some other group willing to fund long-term development in spite of short-term futility, simply because the literal price of bootstrapping a "new" language into a state of compelling utility runs to many man-years.

This conclusion is, arguably, my biggest stumbling block with my Oxlang project. It's not that the idea of the language is bad; it's that a trade-off must be carefully made between novelty and utility. Change too much and Oxlang will be isolated from the rest of the Clojure ecosystem, losing hugely in terms of libraries as a result. Change too little and it won't be interesting compared to Clojure. Go far enough and it will cross the border into hard static typing, entering the land of Shen, OCaml and Haskell, and, as argued above, I think sacrifice interesting flexibility for uncertain gains.

On Student Startups

When I enrolled in UT Austin's "student startup seminar", one of the guest speaker comments which stood out to me, and has stuck most firmly in my mind, is that "there are no new ideas, only good execution". This particular lecturer described how he kept a notebook full of random ideas for possible businesses, and talked at length about the importance of validating business models through surveys of potential customers as well as discussions with industry peers. The takeaway he left us with: rather than operating in "stealth" mode, as seems to be fashionable for so many startups developing a product, he argued that ideas are so cheap, and the first-mover advantage so great due to simple startup execution costs, that attempting to cloak a startup's model and/or product generates no measurable advantage, while having a concrete cost in the potential comment from consumers and peers which is lost as a consequence of secrecy.

Of the dozen or so startups I've interacted with so far, both in and outside the context of the above-mentioned startup seminar, I've seen this overvaluing of secrecy over and over again, especially when requesting feedback on an idea. On Freenode's #clojure channel we have a standing joke: the "ask to ask" protocol. Under the ask to ask protocol, some first-timer will join the channel and ask if anyone knows about some tool X, whereupon some longtime denizen will invoke the ask to ask protocol and tell the newcomer to just ask their real question.

When I see a request from a nontechnical founder for technical feedback over a coffee date, or in a private context after an NDA, all I can think of is the ask to ask protocol and the litany against startup secrecy. A coffee date is a commitment of at least an hour of what would otherwise have been paid consulting time, and an NDA is a legally binding commitment. For the privilege of signing a contract and giving advice I get what... coffee? A mention in the credits when you finally "break stealth"? What if I was nursing an equivalent idea? I can't know that until after I sign your silly NDA, which is kinda a problem because now you've robbed me of the ability to capitalize on my own prior art.

An email or a forum comment is free. By asking that an engineer go on a coffee date to hear a pitch, let alone sign an NDA and then comment, the petitioner (read: nontechnical founder) is at best guilty of asking to ask, limiting themselves to one or two responses when several could have been had were the real question posed instead. Start talking about NDAs and I expect you'll get what you pay for.

^d

Of Oxen, Carts and Ordering

Well, the end of Google Summer of Code is in sight and it's long past time for me to make a report on the state of my Oxcart Clojure compiler project. When last I wrote about it, Oxcart was an analysis suite and a work proposal for an emitter.

To recap the previous post: Clojure uses Var objects (an atomic, thread-safe reference type with support for naming and namespace binding semantics) to create a literal global hashmap from symbols to binding vars, with a literal stack of thread-local bindings per var. These vars form the fundamental indirection mechanism by which Clojure programs map from textual symbols to runtime functions. Being atomically mutable, they also serve to enable dynamic rebinding and consequently enable REPL-driven development.
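A minimal demonstration of that indirection at a REPL:

    (def greeting "hello")
    (defn greet [] (str greeting ", world"))
    (greet)               ;=> "hello, world"

    ;; def alters the var's root binding; the already-compiled greet
    ;; sees the new value through #'greeting on its next call.
    (def greeting "goodbye")
    (greet)               ;=> "goodbye, world"
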

But for this second aspect, the provision of dynamic redefinition of symbols, Clojure could be statically compiled, eliminating var indirection and achieving a performance improvement.

Moreover, in the style of ClojureScript, exposing the full source of the language to an aggressive static compiler could yield total program size improvements in comparison to programs running on the official Clojure compiler/runtime pair.

So. This was the premise upon which my Oxcart GSoC project began. Now, standing near the end of GSoC, what has happened, where does the project stand, and what do I consider the results?

As of this post's writing, e7a22a09 is the current state of the Oxcart project. The Oxcart loader and analyzer, built atop Nicola Mometto's tools.analyzer.jvm and tools.emitter.jvm, is capable of loading and generating a whole-program AST for arbitrary Clojure programs. The various passes in the oxcart.passes subsystem implement a variety of per-form and whole-program traversals, including λ lifting, use analysis, reach analysis and taken-as-value analysis. There is also some work on a multiple-arity function reduction system, but that seems to be a problem module at present. The oxcart.emitter subsystem currently features two emitters, only one of which I can claim is of my authoring. oxcart.emitter.clj is a function from an Oxcart whole-program AST to a (do) form containing source code for the entire program as loaded. This has primarily been a tool for debugging and ensuring the sanity of various program transformations.

The meat of Oxcart is oxcart.emitter.jvm, a wrapper around a modified version of tools.emitter.jvm's emit which features changes for emitting statically specialized bytecode that doesn't make use of var indirection. These changes are new, untested and subject to change, but they seem to work. As evidenced by the bench-vars.sh script in the Oxcart project's root directory, for some programs the static target linking transform done by Oxcart can achieve a 24% speedup.

Times in ms.
Running Clojure 1.6.0 compiled test.vars....
1603.894357
1369.60414
1348.747718
1345.55669
1343.380462
1341.73058
1346.342237
1345.655224
1395.983262
1340.063011
1339.7823
1399.282023
1451.58462
1353.231654
1348.458151
Oxcart compiling test.vars....
Running Oxcart compiled test.vars....
1256.391888
1066.860551
1033.764971
1031.380052
1032.09911
1030.032666
1040.505754
1040.419533
1041.749353
1040.038301
1041.699772
1044.441135
1095.809551
1072.494466
1047.65782

How and why is this possible? The benchmark above is highly artificial: it takes 500 defs and runs several thousand iterations of selectively executing half of them at random. It is designed to exaggerate the runtime cost of var indirection by being inefficient to inline and making great use of the (comparatively) expensive var dereference operation.
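For flavor, here's a sketch of a benchmark shaped like that one. This is not the actual test.vars source, just an approximation of its shape:

    ;; 500 vars, each bound to a trivial function.
    (def fns
      (vec (for [i (range 500)]
             (intern *ns* (symbol (str "f" i)) (fn [] i)))))

    ;; Vars are invokable; each call below goes through the var's root
    ;; binding, paying the dereference cost Oxcart statically links away.
    (defn run-once []
      (dotimes [_ 100000]
        ((fns (rand-int 500)))))

    (time (run-once))
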

So what does this mean for Clojure? Is Oxcart proof that we can and should build a faster Clojure implementation or is there a grander result here we should consider?

Clojure's existing compiler operates on a "good enough" principle. It's not especially nice to read nor especially intelligent, but it manages to produce reasonably efficient bytecode. The most important detail of the reference Clojure compiler is that it operates on a form by form basis and is designed to do so. This is an often forgotten detail of Clojure, and one which this project has made me come to appreciate a lot more.

When is static compilation appropriate and valuable? Compilation is fundamentally an analysis and specialization operation, designed to trade "start" or "compile" time and complexity for runtime performance. This suggests that in different contexts different trade-offs may be appropriate. Furthermore, static compilation tends to inhibit program change. To take the extreme example of directly linked C code: if a single function grows in compiled size so that it cannot be updated in place, then all code which makes use of it (worst case, the rest of the program) must be rewritten to reflect the changed location of the changed function. While it is possible to build selective recompilation systems which can do intelligent and selective rebuilding (GHCi being an example), achieving full compilation performance at interactive development time is simply a waste of time on compilation, when the goal of REPL-driven development is to provide rapid feedback and enable exploration-driven development and problem solving.

While vars are clearly inefficient from the perspective of minimizing function call costs, they are arguably optimal in terms of enabling this development style. Consider the change impact of altering a single definition on a Clojure instance using var indirection rather than static linking. There's no need to compute, recompile and reload the subset of your program impacted by this single change. Rather, the single changed form is recompiled and the var(s) bound by it are altered. In doing so, no changes need be made to the live state of the rest of the program. The next time client code is invoked, the JVM's fast path (if any) will be invalidated, as the call target has changed, but this is handled silently by the runtime rather than being a language implementation detail. Furthermore, var indirection means that compiling an arbitrary Clojure form is trivial: after handling all form-local bindings (let forms and so forth), try to resolve the remaining symbols to vars in the mapped namespace, and to a class if no var is found. Brain dead even, but sufficient and highly effective in spite of its lack of sophistication.

While Oxcart is proof that it is possible to build a static compiler for some subset of Clojure, doing so produces not a static Clojure but a different language entirely, because so much of the Clojure ecosystem, and even the standard library, is defined in terms of dynamic redefinition, rebinding and dispatch. Consider for instance the multimethod system, often lauded with the claim that multimethod code is user extensible. Inspecting the macroexpand of a defmethod form, you discover that its implementation, far from being declarative, is based on altering the root binding of the multimethod to install a new dispatch value. It would be possible to statically compile this "feature" by simply collecting all defmethods over each multimethod and AOT computing the final dispatch table; however, this is semantics breaking, as it is strictly legal in Clojure to dynamically define additional multimethod entries, just as it is legal to have an arbitrary do block containing defs.
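The mutation is easy to see at a REPL, using a toy multimethod of my own for illustration:

    (defmulti area :shape)
    (defmethod area :circle [{:keys [r]}]
      (* Math/PI r r))

    ;; Expanding defmethod reveals an imperative install, not a declaration:
    (macroexpand '(defmethod area :circle [c] c))
    ;;=> (. area addMethod :circle (clojure.core/fn [c] c))
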

So, in short, by attempting serious static compilation you largely sacrifice REPL development, and to statically compile Clojure you wind up defining another language, one which is declarative about defs, namespaces and dependencies rather than imperative as Clojure is. I happen to think that such a near-Clojure static language, call it ClojureScript on the JVM, would be very interesting and valuable, but Clojure as currently implemented on the JVM is no such thing. This leads me to the relatively inescapable conclusion that building a static compiler for Clojure is putting the cart before the ox, since the language simply was not designed to benefit from it.

Moving Forwards

Now. Does this mean we should write off tools.emitter.jvm, Clojure in Clojure and Oxcart? By no means. tools.emitter.jvm uses the same var binding structure and function interface that Clojure does. Moreover, it's a much nicer and more flexible single-form-level compiler, one that I happen to think represents a viable and more transparent replacement for clojure.lang.Compiler.

So what does that leave on the table? Open question. Clojure's compiler has only taken major changes from two men: Rich and Stu. While there is merit in considered design and stability, this also means that effort towards cleaning up the core of Clojure itself, not directed at a hard fork or redesign of Clojure, is probably wasted, especially in the compiler.

Clearly, while var elimination was a successful optimization, the value Clojure derives from the Var system means it is not a generally applicable one. However, it looks like Rich has dug the old invokeStatic code back out for Clojure 1.7, and the grapevine is making 10% noises, which is on par with what Oxcart seems to get for more reasonable inputs, so we'll see where that goes.

While immutable data structures are an awesome abstraction, Intel, AMD and ARM have gotten very good at building machines which exploit program locality for performance, and this property is fundamentally at odds with literal immutability. Transients can help mitigate Clojure's memory woes, and compiler introduction of transients to improve memory performance could be interesting. Unfortunately, this is again a whole-program optimization, which clashes with the previously noted power that Clojure derives from single-form compilation.

Using core.typed to push type signatures down to the bytecode level would be interesting, except that, since almost everything in Clojure is a boxed object and object checkcasts are cheap, this would probably yield little performance improvement unless the core datatypes were reworked to be type parametric. It is also another whole-program transform, requiring an Oxcart-style whole-program analysis system.

The most likely avenue of work is that I'll start playing with a partial fork of Clojure which disassociates RT from the compiler, data_readers.clj, user.clj and clojure/core.clj. While this will render Oxcart even more incompatible with core Clojure, it will also free Oxcart to emit clojure.core as if it were any other normal Clojure code, including tree shaking, and to escape the load-time overhead of bootstrapping Clojure's core entirely.

This coming Monday I'll be giving a talk on Oxcart at the Austin TX Clojure meetup, so there's definitely still more to come here regardless of the Clojure/Conj 14 accepted talk results.

^D

Of Mages and Grimoires

When I first got started with Clojure, I didn't know (and it was a while before I was told) about the clojure.repl toolkit, which offers access to Clojure's documentation from within an nREPL instance. Coming from the Python community, I assumed that Clojure, like Python, would have excellent API documentation, with examples that a total n00b like myself could leverage to bootstrap my way into simple competence.
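For anyone else who went months without it, the toolkit ships with Clojure itself:

    (require '[clojure.repl :refer [doc source apropos]])
    (doc map)          ; prints map's docstring and arglists
    (source map)       ; prints map's implementation
    (apropos "reduce") ; lists vars whose names match "reduce"
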

While Clojure does indeed have web documentation hosted on GitHub's Pages service, it is eclipsed in Google PageRank, if not in quality as well, by a community-built site owned by Zachary Kim: ClojureDocs. This wouldn't be a real problem at all, were it not for the fact that ClojureDocs was last updated when Clojure 1.3 was released. In 2011.

While Clojure's core documentation and core toolkits haven't changed much since the 1.3 removal of clojure.contrib.*, I have recently felt frustrated that newer features of Clojure, such as the as-> and cond->> macros which I find very useful in my day-to-day Clojure hacking, not only are impossible to search for on Google (being made of characters Google tends to ignore) but also don't have ClojureDocs pages, being additions since 1.3. For reference, since Google won't find them for you, here's what those two macros do:
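    (as-> 10 x    ; bind 10 to x, rebinding x through each form
      (inc x)     ; 11
      (* x x))    ;=> 121

    (cond->> (range 5)
      true    (map inc)        ; truthy test, so applied: (1 2 3 4 5)
      false   (map str)        ; falsey test, so skipped
      (< 1 2) (filter even?))  ;=> (2 4)

Long story short: I finally got hacked off enough to yak shave my own alternative.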

**drumroll**

Grimoire

Mission

Grimoire seeks to do what ClojureDocs did, namely provide community examples of Clojure's core functions along with their source code and official documentation. However, with Grimoire I hope to go farther than ClojureDocs did in a number of ways.

I would like to explore how I find and choose the functions I need, and try to optimize accessing Grimoire accordingly so that I can find the right spanner as quickly as possible. Part of this effort is the recent introduction of a modified Clojure Cheat Sheet (thanks to Andy Fingerhut & contributors) as Grimoire's primary index.

Something else I would like to explore with Grimoire is automated analysis and link generation between examples. In ClojureDocs (and, I admit, Grimoire as it stands), examples are static text, analyzed for syntax before being displayed to users. As part of my work on Oxcart, I had an idea for how to build a line information table from symbols to binding locations, and I'd like to explore providing examples with inline links to the documentation of other functions used.

Finally, I'd like to go beyond Clojure's documentation. ClojureDocs and Grimoire at present only present the official documentation of Clojure's functions. However, some of Clojure's behavior, such as the type hierarchy, is not always obvious.

< amalloy> ~colls

< Clojurebot> colls is http://www.brainonfire.net/files/seqs-and-colls/main.html

Frankly, the choice of the word Grimoire is a nod in this direction... a joke, as it were, on Invoker's fluff and the archmagus-like mastery of the language's many ins and outs which I see routinely exhibited on Freenode's #clojure channel, while I struggle with basic stuff like "why don't we have clojure.core/atom??" and "why is a vector not seq? when it is Seqable and routinely used as such? why don't we have clojure.core/seqable??".
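Concretely, the vector confusion looks like this at a REPL:

    (seq? [1 2 3])                           ;=> false, a vector is not itself a seq
    (instance? clojure.lang.Seqable [1 2 3]) ;=> true, but it can produce one
    (seq [1 2 3])                            ;=> (1 2 3)
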

Clojure's core documentation doesn't feature type signatures, not even against Clojure's data interfaces. As I think to a large degree in terms of the types and structure of what I'm manipulating, I find this burdensome. Many docstrings are simply wanting, if present at all. I think these are usability defects, and I would like to explore augmenting the "official" Clojure documentation. Andy Fingerhut's thalia is another effort in this direction, and one which I hope to explore integrating into Grimoire as non-invasively as possible.

Status

Much of what I have talked about here is work that needs to be done. The first version of Grimoire, which I announced on the Clojure mailing list, was a trivial hierarchical directory structure aimed at users who sorta kinda knew what they were looking for and where to find it, because after two years of non-stop Clojure that's all I personally need out of a documentation tool for the most part. Since then I've been delighted to welcome and incorporate criticism and changes from those who have found Grimoire similarly of day-to-day use. However, I think it's important to note that Grimoire is fundamentally user-land tooling, as is Leiningen and as is thalia.

As such, I don't expect that Grimoire will ever have any official truck with Rich's sanctioned documentation. This, and the hope that we may one day get better official docs, means that I don't really foresee migrating Grimoire off of my personal hosting to a more "Clojure-ey" domain. Doing so would lend Grimoire an undue level of "official-ness", never mind the fact that I'm now rather attached to the name and that my browser goes to the right page with four keystrokes.

Future

As far as I am concerned, Grimoire is in a good place. It's under my control; I have, shall we say, a side project as I chug along on my GSoC "day job"; and judging by Google Analytics, some 300 other Clojurists have at least bothered to stop by, and some 70 have found Grimoire useful enough to come back for more, all in the three days I've had analytics on. There is still basic UI work to be done, which isn't surprising because I claim no skill or taste as a graphic designer, and there's a lot of stuff I'd like to do in terms of improving upon the existing documentation.

Frankly, I think it's pretty crazy that I put about 20 hours of effort into something, threw it up on the internet, and suddenly, for the first time in my life, I have honest-to-god users. From 38 countries. I should try this "front end" stuff more often.

So here's hoping that Grimoire continues to grow, that other people find it useful, and that I manage to accomplish at least some of the above. While working on Oxcart. And summer classes. Yeah.....

^D