Of Inertia and Computing

Let's talk about Stanislav, probably best known for his blog loper-os. He's interesting in that I'm indebted to him for his writing, which was very personally influential, and yet I consider him useless.

I didn't realize this at first, but Stanislav is rabid about the ergonomics of computing. BruceM was kind enough to make this comment the last time I voiced my objection to Stanislav's writings,

The Lisp Machines have pretty much nothing to do with a “lost FP nirvana”. Working in Common Lisp, or the Lisp Machine Lisps like ZetaLisp or Interlisp, had little to do with functional programming. Lisp Machines are about an entire experience for expert developers. It isn’t just the OS. It isn’t just the hardware (compared to other hardware of the time). It isn’t just the applications that people ran on them. It was the entire environment, how it worked, how it fit together and how it was built to be a productive environment.

I'm a software developer by training and by trade. Maybe on a good day I'm a software engineer, on a rare day a computer scientist. For the most part my job consists of slinging code. Lots of code. Code that interacts with ugly, messy but somehow important systems and software. For me, the goal of computing and the goal of my work is to take the dreck out of software development.

The lisp machines of which Stanislav and Bruce above write predate me. I've been fortunate to work once with someone who developed software for them, but that's hardly the same. I've never seen one, and I've never used one. All I know, because I grew up on them, is the current generation of x86 and ARM laptops running Windows, Linux and macOS.

In short, I've never had an ergonomic computing experience of the kind they describe. No chorded keyboards. No really innovative -- forget convenient or helpful -- operating systems. Maybe at best I had error messages which tried to be helpful, Stack Overflow, and on a good day, the source code.

Stanislav's essays, much in the same spirit as the work of Engelbart (the Mother of All Demos) and Bush (As We May Think) which he references, suggest that computers could and should be vehicles for thought. They could be tools by which humans wrangle more information and explore ideas far beyond the capabilities of an unaided mind. This, as I understand it, is the entire thrust of the field of cybernetics.

And indeed, for computer experts, computers can be a huge enabler. I've seen people extract amazing results about society and individual behavior from simple SQL databases sampling social network data. I've personally been able to at least attack, if not wrangle, the immense complexity of the tools we use to program, largely because I have had immensely fast computers with gigabytes of RAM on which I can play with even the most resource-inefficient of ideas without really worrying about it. But these capabilities are largely outside the reach of the average person. You and I may be able to hammer out a simple screen scraper in an hour, but consider explaining everything involved in doing so to a layperson. You've got to explain HTTP, requests, sessions, HTML, the DOM, parsing, tree selectors and on and on -- and that's before you get to the tool you're actually using.
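To make the point concrete, here's a minimal sketch of just the parsing and selection layers of such a scraper, using only Python's standard library. The HTML snippet stands in for a fetched page; HTTP, sessions, encodings, and error handling haven't even entered the picture yet.

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag -- a crude stand-in for a tree selector."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html_text):
    parser = LinkExtractor()
    parser.feed(html_text)
    return parser.links


# In a real scraper this string would come from urllib.request.urlopen(url),
# dragging in HTTP status codes, redirects, encodings, and sessions.
page = '<html><body><a href="/a">one</a><p><a href="/b">two</a></p></body></html>'
print(extract_links(page))  # -> ['/a', '/b']
```

Even this toy demands the layperson understand tags, attributes, nesting, and event-driven parsing -- which is precisely the point.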

So by and large, I agree with Stanislav. We've come too far and software is too complex for anyone but trained professionals to really get anything done, and even to trained professionals computers hardly offer the leverage it was hoped they would. Consider how much time we as programmers spend implementing (explaining to a computer) solutions to business problems, compared to how much time it takes to actually devise those solutions. Computers enable us to build tools and analyses on an otherwise undreamed-of scale. But that very scale has enabled the accumulation of hidden debt: decisions that made sense at the time, mistakes that lurk unseen beneath the surface until we see a 0day in the wild.

Given, as Joe Armstrong said, "the mess we're in", what of the industry is worth saving? Has the sunk cost fallacy sunk its hooks in so deep that we are unable to escape, fallacy though it may be?

Stanislav among others seems to believe that the only way forwards is to lay waste to the industry from the very silicon up; to find a green field and in it build a city on a hill abandoning even modern CPU architectures as too tainted.

My contention is this: Stanislav is off base. From the perspective of hardware, we've come so far that "falling back" to simpler times just doesn't make sense. Where will someone find the financial resources to compete with Intel and ARM as a chipmaker with a novel simplified instruction set? Intel has the fabs. They have the brain trust of CPU architects. They have the industry secrets, a culture of the required art, and most importantly, existing infrastructure around proof and implementation validation. The only challenger in this field is Mill Computing, who seem to have been struggling between filing patents and raising funding since their first published lectures two years ago. Furthermore, adopting a new CPU architecture means leaving behind all the existing x86-tuned compilers, JITs and runtimes. Even with all its resources, Intel wasn't able to overcome the industry's inertia and drive adoption of its own green field Itanium architecture.

Nor will such a revolution arise from hobbyists. Who will be willing to accept a sub-gigahertz machine, or the otherwise crippled result of a less-than-professional CPU effort? Where will a hobby CPU effort find the financial resources for a small, high-cost-per-unit production run, or the market for a large one?

Even if the developer story were incredibly compelling, how would another platform rise to dominance against existing investments of billions of dollars -- not only in literal dollar spend, but in the time of really excellent programmers writing nontrivial software like Clang, the JVM and modern browsers? Moving on means throwing away mostly working software, a sacrifice few people can or will make. But I'm OK with that. Intel and ARM chips are really good. It's software ergonomics and software quality that are the real problems here in 2016, not the hardware we run our shoddy programs on.

Likewise, network protocols are by and large ossified. There is so much value in UDP, TCP/IP, HTTP and the Unix model of ports, just from being able to interact with existing software deployments, that I think they'll be almost impossible to shake. Consider the consumer side. Were a company to have the courage and the resources to begin down this path, the first question it would be met with is "does this machine run Facebook?" How can a competing network stack even begin to meet that question with an answer of "we can't interoperate with the systems you're used to communicating with"?

Microsoft for a while ran a project named Midori, about which Joe Duffy wrote some excellent blog posts, available here. Midori was an experiment in innovating across the entire operating system stack using lessons learned from the .NET projects. Ultimately Midori was a failure in that it never shipped as a product, although the lessons learned therein are impacting technical direction. But I'll leave that story for Joe to tell.

I propose an analogy. Software, and the organizations which rely on software, can be considered as Newtonian objects in motion. Their inertia -- strictly speaking, their momentum -- is the product of their mass and their velocity. Inertia here is a proxy for investment. A small organization moving slowly (under low pressure to perform) can change tools or experiment without much effort. A small organization moving faster (under high pressure to execute some other task) will have much greater difficulty justifying a change in tooling, because such a change detracts from its primary mission. The larger an organization grows, the more significant the "energy" costs of change become. People in the organization build tooling, command line histories, documentation, and organizational knowledge, all of which rides on the system as it is. And the "organization" here extends beyond the parent organization itself to encompass all users.

The Oracle Java project has explicitly promised stability and binary backwards compatibility, and is thus a great study in the extremes of this model. Java's guardians are forever bound to that expectation, and the cost of change is enormous. Brian Goetz gave an amazing talk on exactly this at Clojure/Conj in 2014. Java's value proposition is literally the juggernaut of its inertia. It won't change because it can't. The value proposition and the inertia forbid it, except insofar as change can be additive. While immensely convenient from a user's perspective, this is arguably a fatal flaw from a systems perspective. Object equality is fundamentally unsound and inextensible. The Java type system has repeatedly (1, 2, 3) been found unsound, at least statically. The standard library is not so much a homogeneous library as a series of strata -- Enumerations from way back when, interfaces with unrelated methods, interfaces which presume mutability... the list of decisions which made sense at the time but in retrospect could have been done better goes on and on. And because of the JVM's stability guarantees, they'll never be revisited: to reconsider them would break compatibility, an unthinkable thing.
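The equality complaint is concrete enough to demonstrate. The classic trap -- usually shown against Java's `Object.equals`, sketched here in Python for brevity with illustrative `Point`/`ColorPoint` classes -- is that equality across an open class hierarchy cannot be simultaneously symmetric, transitive, and extensible:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        # A Point compares only coordinates; it knows nothing of subclasses.
        if isinstance(other, Point):
            return self.x == other.x and self.y == other.y
        return NotImplemented


class ColorPoint(Point):
    def __init__(self, x, y, color):
        super().__init__(x, y)
        self.color = color

    def __eq__(self, other):
        # The "extended" equality also compares color.
        if isinstance(other, ColorPoint):
            return super().__eq__(other) and self.color == other.color
        return NotImplemented


p = Point(1, 2)
red = ColorPoint(1, 2, "red")
blue = ColorPoint(1, 2, "blue")

print(red == p)    # True  -- the ColorPoint equals the plain Point
print(p == blue)   # True  -- and the Point equals a different ColorPoint
print(red == blue) # False -- yet red != blue: transitivity is gone
```

Every resolution on offer (comparing exact classes, forbidding subclassing, ignoring the extra fields) gives up one of the properties, which is why "unsound and inextensible" is a fair charge.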

As you may be able to tell by now, I firmly believe that software is in a sense discovered rather than invented, and that creative destruction applies directly to the business of developing software. As Parnas et al. suggested, we can't rationally design software. We can only iterate on a design, since the final requirements for an artifact are unknowable and change as the artifact evolves together with its users and their understanding of it. Thus systems like the JVM, which propose a specification that can only be extended, are as it were dead on arrival -- unless, as with Scheme, they are so small and leave so much to libraries that they hardly exist. The long-term consequences of architectural decisions are unknowable, which condemns such systems to keep their mistakes forever. Moreover, even in the optimistic case, better design patterns and techniques cannot, once discovered, be applied to legacy APIs. This ensures a constant and growing mismatch between the "best practice" of the day and the historical design philosophies of the tools to hand.

Eick et al. propose a model of "code decay" which I find highly compelling, and which feeds into this idea. Eick's contention is that, as more authors come onto a project and the project evolves under their hands, what conceptual integrity existed at the beginning slowly gets eaten away. New contributors may not share the original vision or appreciation of the codebase, or the underlying architecture may be inappropriate to new requirements. These factors all lead back to the same problem: code and abstractions rot unless actively maintained and curated with an eye to their unifying theories.

All of this leads to the thought which has been stuck in my mind for almost a year now. Stanislav is right that we need a revolution in computing, but it is not the revolution he thinks he wants. There is no lost golden past to return to, no revanchist land of lisp to be reclaimed from the invaders. The inertia of the industry at large is too great to support a true "burn the earth" project, at least in any established context. What we need is a new generation of smaller systems which, by dint of their small size and integrity, are better able to resist the ravages of time and industry.

I found Stanislav's writing useful in that it's a call back to cybernetics, ergonomics, and an almost Wirthian vision of simplicity and integrity in a world of "flavor of the week" Javascript frameworks and nonsense. But in its particulars, I find his writing at best distracting. As I wrote previously (2014), I don't think we'll ever see "real" lisp machines again. Modern hardware is really good, better than anything that came before.

Max really hit this one on the head --

Stanislav's penchant to focus on the hardware which we use to compute seems to reductively blame it for all the ills of the modern software industry. This falls directly into the usual trap we programmers set for ourselves. Being smart, we believe that we can come along and reduce any problem to its essentials and solve it. After all that's what we do all day long to software problems, why can't we do the same for food and a hundred other domains? The problem is that we often miss the historical and social context around the original solutions -- the very things which made them viable, even valuable.

The future is in improving the ergonomics of software and its development. The hardware is so far removed from your average programmer -- let alone user -- that it doesn't matter.

Right now, it's too hard to understand what assumptions are being made inside of a program. No language really offers contracts on values in a seriously usable format. Few enough languages even have a real concept of values. It's too hard to develop tooling, especially around refactoring and impact analysis. People are wont to invent their own little DSLs and, unable to derive "for free" any sort of syntax analysis or linter, walk away, leaving behind a hand-rolled regex parser that'll haunt the heirs of the codebase for years. Even if you're lucky enough to have a specification, you can easily wind up with a mess like JSON, or a language whose only validator is the program which consumes it. This leaves us in the current, ever-growing tarpit of tools which work but which we can't understand and which we're loath to modify for compatibility reasons.
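For what it's worth, the closest most languages come today is hand-rolled validation at a value's construction boundary -- a rough approximation of a contract, sketched here in Python with a purely illustrative `Percentage` type:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Percentage:
    """An immutable value whose "contract" is checked once, at construction."""

    value: float

    def __post_init__(self):
        # The invariant: nothing outside [0, 100] can ever exist as a Percentage.
        if not 0.0 <= self.value <= 100.0:
            raise ValueError(f"percentage out of range: {self.value}")


print(Percentage(42.0))  # fine; the invariant now holds everywhere this value flows
try:
    Percentage(120.0)
except ValueError as err:
    print(err)           # rejected at the boundary, not deep in some consumer
```

The trouble is exactly what the paragraph above complains about: this check is invisible to tooling, each team reinvents it ad hoc, and nothing verifies that every code path actually goes through the boundary.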

I don't have an answer to this. It seems like a looming crisis of identity, integrity, and usability for the software professions generally. My goal in writing this rant isn't just to dunk on Stanislav, or to make you think I have any idea what I'm doing. My hope is that you find the usability and integrity problems of software as it is today as irksome as I do, and that you consider how to make the software you work on suck less, and how your tools could suck less. What we need is a new generation of smaller, actively maintained and evolved software systems which, by dint of their small size and dedication to integrity, are better able to resist the ravages of time and industry. In this field hobbyist efforts are useful. It's fundamentally a green field, but one running atop and interfacing with the brown field we all rely on. These dynamics, I think, explain much of the "language renaissance" we're currently experiencing.

There are no heroes with silver swords or bullets to save us and drag us from the tarpit. The only means we have to save ourselves is deliberate exploration for a firm path out, and a lot of lead bullets.