The Future Programming Manifesto

It’s time to reformulate the principles guiding my work.

[Revised definition of complexity in response to misunderstandings]

Inessential complexity is the root of all evil

Most of the problems of software arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify software and programming.

Complexity is the total learning curve

We should measure complexity as the cumulative cognitive effort to learn a technology from novice all the way to expert. One simple surrogate measure is the size of the documentation. This approach conflicts with the common tendency to consider only the efficiency of experts. Expert efficiency is hopelessly confounded by training and selection biases, and often justifies making it harder to become an expert. We are skeptical of “expressive power” and “terseness”, which are often code words for making things more mathematical and abstract. Abstraction is a two-edged sword.

Our institutions, culture, and psychology all foster complexity

  • Maintaining compatibility increases complexity.
  • Technical debt increases complexity.
  • Most R&D is incremental: it adds features and tools and layers. Simplification requires that we throw things away.
  • Computer Science rejects simplification as a legitimate research result because it is subjective.
  • The Curse of Knowledge: experts are blind to the complexity they have laboriously mastered.
  • Rewarding programmers for their ability to handle complexity selects for those who love it.
  • Our gold-rush economy encourages greed and haste.

To make progress we must rebel against these vested interests and bad habits. There will be strong resistance.

Think outside the box

Much complexity arises from how we have partitioned software into boxes: OS, PL, DB, UI, networking; and likewise how we have partitioned software development: edit, version, build, test, deploy. We should go back to the beginning and rethink everything in order to unify and simplify. To do this we must unlearn to program, a very hard thing to do.

Programming for the people

Revolutions start in the slums. Most new software platforms were initially dismissed by the experts as toys. We should work for end-users disenfranchised by lack of programming expertise. We should concentrate on their modest but ubiquitous needs rather than the high-end specialized problems addressed by most R&D. We should take inspiration from end-user tools like spreadsheets and HyperCard. We should avoid the trap of designing for ourselves. We believe that in the long run expert programmers also stand to greatly benefit from radical simplification, but to get there we must start small.

Simplicity first; performance last

Performance is often the first excuse for rejecting new ideas. We even do it to ourselves. We must break our own habit of designing for performance: it is seductively objective and quantifiable whereas the most important design issues are messily subjective and qualitative. After all, performance optimization is one thing we have mastered. Build compelling simplicity and performance will come.

Disciplined design evaluation

Computer Science has decided that, being a Science, it must rigorously evaluate results with empirical experiments or mathematical proofs. We are not doing Science. We are doing Design: using experience and judgement to make complex tradeoffs in order to satisfy qualitative human needs. Yet we still need a disciplined way to evaluate progress. Perhaps we can learn from the methodologies of other fields like Architecture and Industrial Design. This is a meta-problem we must address.

59 Replies to “The Future Programming Manifesto”

  1. I like your article.

    One differing opinion I will offer is this: simple tools that take away choice from the programmer can also induce better results.

    To state this more cleanly I will say that as of late I am more enthralled with languages that take a bit of “choice” or power away from the programmer and impose constraints. I’m thinking of Clojure and Haskell specifically. I think picking tools that are “too easy” or that “anyone” can learn breeds the over-complexities we see.

    What happens when you give smart, talented, and creatively driven people tools that are too simple? They usually create complexity upon complexity in order to “flex” their brain and creative needs.

    I think there might be something to having to fight against the language a little. I have iterated a list millions of times. Once I was confronted with immutable data and map/filter/reduce I suddenly had a “box”, a set of constraints. I now needed to solve my problems more in line with the language. This allowed me to be creative without going overboard and making an AbstractFactoryIteratorFactoryIterator, and so on.
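    A toy sketch of that contrast (the example and numbers are mine, not the commenter's): the same task written with an unconstrained loop and with the constrained map/filter/reduce "box".

```python
from functools import reduce

orders = [120, 45, 300, 80]

# Unconstrained style: a mutable accumulator and free-form control flow.
total = 0
for amount in orders:
    if amount >= 100:
        total += amount

# Constrained style: immutable data and a composition of fixed operations.
total_fp = reduce(lambda acc, x: acc + x,
                  filter(lambda x: x >= 100, orders),
                  0)

assert total == total_fp == 420
```

    The constrained version forbids the ad-hoc accumulator entirely, which is exactly the kind of "box" the comment describes.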

    OO vs. FP vs. whatever isn’t the point. The point is languages that give power by taking away choice. Programmers are unique in that they can constantly CREATE tools up and down the stack. I think this is another double-edged and often dangerous sword.

    Just a hypothesis I’m tossing around.

    1. I quite agree. Spreadsheets enforce a strict 2D array structure on everything, and that has turned out pretty well. Many people have observed that in art, constraints lead to creativity. This might be worth elevating to an explicit principle. Thanks.

      1. Jack White has said that he likes a cheap guitar he bought from Wal-Mart. I think it is mostly because it is so hard to make it sound right that you really need to concentrate and fight (i.e. make an effort and, hopefully, get a reward, but not too easily). Old Japanese craftsmen have always required that their apprentices start with what some would call “worse” tools (i.e. better tools are too good for their level of skill). Many older carpenters seem to use fewer tools, but more creatively (and they usually have quite a strong relationship with the tools they use). I’m not sure how this could be extended to programming. The main problem seems to be that we do not have any tradition. The field is moving forward faster than many of us can handle (more of the things I have learned are dead than alive, for example). The power of the machines forgives many of the problems in code, and we are constantly reminded that our time costs a lot more than a few more processors. This leads to a situation where even those who don’t write code value complexity over simplicity, if it can get the job done faster or cheaper or both (anyone can do complex things, but it takes genius to make something simple).

        But I think that most of us seek simpler things, and once in a while we succeed as a whole. Who really builds new XML Web Services these days (the enterprise world aside)? Our simpler choices have already started adding layers of complexity, though.

      2. For my first internship at IBM (in 1995), I implemented an entire issue tracking system in Lotus Notes Script (circa 1995, it got better gradually)…so no loops, no arguments for procedures, it was really crazy. I’m not sure I want to experience that kind of creativity ever again.

    2. I have also been thinking about this, but I started when I found Lua: a simple language that anyone could learn, but still they don’t (and it looks a lot like a toy, actually). So this has to do with community as well. Generally everything starts to bloat as soon as it becomes popular. This applies to tools as well. Take build tools: there is Ant, Grunt, Gulp…, and then there is Make.

      I think that simpler, more constrained tools and languages are better (if they are good; Lua, Haskell, and Clojure are good in slightly different ways…say that they all have more good parts than bad parts). You will end up solving the actual problem directly rather than weighing a dozen ways of solving it.

  2. I would love to see some elaboration on “Think outside the box”. What is your vision for how decreasing compartmentalisation leads to decreasing complexity? Thanks!

    1. One thing I have been working on is to have a single unified data model across the PL, DB, & UI. We have lots of complex mappings and embeddings but few true unifications.

      1. First of all, this whole post is The Truth™. This point is particularly true. Programmers lose by considering the same things in different ways. Case in point: most wouldn’t think of a database query having any relationship to a language construct, like, say, an if block, but they’re both just filters on a set of data.

        There’s nowhere near enough holistic thinking about systems, independent of the component boundaries.

        None of our current languages or tools (barring the occasional oddity like Gherkin) support high-level descriptions of the goals of software systems. They’re all too mired in the weeds of ‘how’, which loses the understanding of ‘what’.

      2. Jonathan,

        A single model would be great. What do you think of the Naked Objects framework? It seems to follow the spirit of what you’re looking for.

          1. The concept of objects still has a lot to offer; they are the nouns of our human language, after all! Anything with identity is probably an object from a linguistic standpoint. The object design space is also huge; it goes way beyond the objects that have traumatized us in Java and Smalltalk.

            And especially if you are going after non-programmers, do you really think they will better grok mathy value-oriented programming?

          2. They need all that power because values are so limited. A little bit of identity and state goes a long way.

  3. Amen! But all in vain. The damage has been done, and a new generation has been brought up that would rather die than let Spring go.

  4. A system has some necessary amount of complexity; it follows from the requirements.

    Unnecessary complexity comes from poorly localized complexity. When complexity is minimized, the system has this property: every requirement is solved by one subprogram in that system; one module of code. “Too complex” is when a given requirement is solved over and over in many places.

    The main tools for reducing complexity are languages and their libraries. Whatever is not in the language or library, programmers will re-invent over and over again, differently but similarly, proliferating it in their programs. A system made of 50 packages written in C might easily contain forty solutions to the requirement for “dynamic character strings”. It’s worse when multiple libraries are combined that solve the same sub-problems in different ways, and their conventions have to be bridged so that their data representations interoperate. That is all pointless complexity which doesn’t satisfy an actual system requirement; just duct tape, rubber bands, and staples holding things together.

  5. Kolmogorov Complexity is a measure of complexity directly rooted in programming. Having said that, the main problem with Kolmogorov Complexity (aside from the fact that the minimal program to represent a given sequence of bits is in general noncomputable) is the choice of virtual machine upon which the K program that encodes the target bit strings is to run. This choice of VM has sent some of the best minds over the edge, such as Solomonoff himself, who all but threw up his hands and said it was entirely arbitrary (thereby relegating the K program to an entirely arbitrary “cultural” choice).

    We can do better.

    I would suggest starting with the problem of “time” itself — not in terms of temporal calculus or “reactive programming” or anything so divorced from reality. The starting point for time should be rooted in our best formalization of physical theory and proceed from there to the design of artificial languages that model the dynamics of the real world.

    The second touchstone is that our primary formalism be relational rather than functional since functions are simply relations with an N:1 mapping.

    The third touchstone is that the relational formalism deliver dimensional analysis as a calculus of column counts — including negative column counts — so that data “types” are done away with in favor of arithmetic commensurability: You can add two velocities (a column for distance with count of one and a column of time with a count of negative one), but you cannot add a velocity to a distance. You can, however, multiply a velocity times a distance yielding area per time (distance with column count of 2 and a time column with a count of negative one).
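    A minimal sketch of that touchstone in code (the representation and names are my own assumptions, tracking only the dimensions, not magnitudes): each quantity carries a signed column count per dimension, multiplication adds counts, and addition demands identical counts.

```python
def multiply(a, b):
    # Multiplying quantities adds the signed column counts of each dimension.
    counts = dict(a)
    for dim, n in b.items():
        counts[dim] = counts.get(dim, 0) + n
    return {d: n for d, n in counts.items() if n != 0}

def add(a, b):
    # Addition is only defined for commensurable quantities:
    # identical column counts in every dimension.
    if a != b:
        raise TypeError("incommensurable quantities")
    return dict(a)

velocity = {"distance": 1, "time": -1}   # distance^1 * time^-1
distance = {"distance": 1}

# velocity * distance = area per time: distance^2 * time^-1
assert multiply(velocity, distance) == {"distance": 2, "time": -1}

# velocity + distance is a dimension error
try:
    add(velocity, distance)
except TypeError:
    pass
```

    Note there is no “type” anywhere, only arithmetic on column counts, which is the point of the touchstone.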

    Am I moving too fast for you? I thought so. Here, let me help you catch up by going back to 1912 and the publication of Principia Mathematica’s notion of “relation arithmetic”, where addition and multiplication of relations (tables, i.e. things with “rows” and “columns”) were introduced by Russell in an attempt to provide the formal tools with which to retain one’s bearings in the empirical world. The late Tom Etter and I picked up on that thread while at HP’s eSpeak project circa 2000, in an attempt to provide a rigorous foundation for what HP was touting as “Internet Chapter II”. We got far enough that a paper, “Relation Arithmetic Revived”, was written up.

    Relation Arithmetic provides the correct “VM” for Kolmogorov Complexity.

  6. “We should measure complexity as the cumulative effort to learn and use a technology.”

    For whom, though? This falls into the trap that Rich Hickey identifies as “simple” versus “easy.” The technology that takes the least effort to learn and use is one identical to the one I’m already using, perhaps with some small improvement. For a C89 programmer, the language with the least effort to learn and use is probably C99. For a Fortran-90 programmer, it’s probably Fortran-95, and C99 would be a disaster. For a complete newcomer, something like Python might be a better choice than either of these.

    I think you’re very close to the real issue, with: “Our gold-rush economy encourages greed and haste.” 50 years ago, technology changed much more slowly. Automatic transmissions beat manual transmissions (in America) because they were a lot easier for newcomers, and there were a lot of newcomers. We just had to wait for the older generation to die off, and stop driving their cars. That’s no longer the case. Kids today use Uber, and Uber will be replaced by something else (self-driving cars? decent public transit? electric bicycles?) long before today’s average Uber user is dead.

    The systems we are expected to know and use now change every year, so most programmers today are perennial newcomers. We need things to be *easy* because nobody has time to get good at *simple* things before something else comes along. I’d love to be a Scheme or Smalltalk programmer by day (both great, simple languages that haven’t changed much in decades, because they haven’t needed to) but nobody is hiring for that. Everybody’s hiring for whatever the latest version of Java/C#/Ruby is these days.

    1. I am including the entire learning curve for the entire tech stack. Considering only the incremental effort for an expert to learn an added layer is exactly the short-sighted thinking that has led us into technical debt bankruptcy. In the long-term, all that should matter is total learning curve from novice to expert. If you are increasing that then you are making things worse.

  7. The key is the children. We should teach them only the minimum necessary and see what ideas grow there.

  8. Nice piece. Curious as to whether there are any promising examples out there already in your view.

  9. My only complaint with this manifesto is that it equates cognitive load with complexity. The trouble is that things that are unfamiliar, while simple, are still going to take more processing power to learn. This goes back to which programming language you learned first: if you cut your teeth on Python, then C++’s nature will be hard, and if you learned Visual Basic 6.0, you are going to have a hard time with anything.

    Here’s a great talk that discusses the issue of complexity and mental familiarity:
    http://www.infoq.com/presentations/Simple-Made-Easy

    1. As I said, cumulative cognitive load includes the entire learning curve. The best way to compare two technology stacks is to compare the entire learning curve from novice to expert. We don’t need to make dubious distinctions between “simple” and “easy”.

  10. Thanks for putting these down, much food for thought.

    Besides programming, I work with non-technical people a lot in social justice work, supporting their personal and collective computer needs. I don’t think any of them need to learn programming, not even of the HyperCard or Excel or VB variety. I mean not only that they don’t see a need, but that they don’t have any need for it at present, so it would be pointless to teach. It took me many years to acknowledge this. They need to learn things like: how can I effectively file and search for documents and emails, and what is the file system in the first place; how do I do an effective internet search for public resources; how do I post things on social media and websites; how do I lay out a flyer; what’s the workaround for this stupid MS Office or Google Docs behavior; how do I download, install, and use backup software. Also, some of them see large productivity gains simply from learning two-handed typing.

    My point isn’t that democratizing programming is a waste of time — because there are many office workers and others in various situations who would benefit — but that programming ought to be seen on a continuum with other skills that it sits on top of. The key thing is, is the technology something that enables us to take control of some drudgery in our lives? Or is it enabling others to put more drudgery on us, or at best shifting it around? And here is a common experience we all share, whether programmers or not: by and large (and for centuries now in fact) technology is something we have to fight to use to our advantage. I suppose this is all pretty self-evident, but I think it worth saying, because programming or “hacking” is always separated from merely “using” computers. But in my experience, it’s sometimes the least experienced user who will cut through my tedious and unnecessary solution with a common-sense workaround. To me, that is hacking, with a limited skillset. And I think that is one of the strongest arguments for democratizing programming skills. Although I don’t have much hope for it as a strategy for changing the industry and programmer culture on its own — in my opinion, that will have to come from outside economic pressure as well.

    But besides general tools for democratizing programming, what I would like to see is infrastructure that drastically lowers the cost of developing and maintaining custom, so-called domain-specific software, built around a specific workflow and needs of a group, that members of the group themselves can develop together — programmers and non-programmers. The economics of software development has made this difficult, but there is a great need for it. Instead we have increasing centralization of increasingly crappy services whose basic purpose is to spy on us and serve us ads: AKA “disruption”. Or frameworks and tools that lock us in to their conventions.

    Finally, even more than non-technical users learning programming (or “programming concepts”), I think the urgent thing is for programmers to re-learn how to work with non-technical users. We are not going to change the world by our little vanguard of technicians acting on behalf of everyone else.

  11. “Computer Science has decided that, being a Science, it must rigorously evaluate results with empirical experiments or mathematical proofs. We are not doing Science. ”

    Do you think the Curry-Howard correspondence is wrong?

  12. Fools ignore complexity; pragmatists suffer it; experts avoid it; geniuses remove it.

    Alan Perlis

    1. I love that quote! Here’s another:

      In the martial art of Karate, for instance, the symbol of pride for a black belt is to wear it long enough that the dye fades to white, symbolizing a return to the beginner state. – John Maeda

  13. This is all great. I hope you succeed.

    I’m not sure about this though: “We should avoid the trap of designing for ourselves.” Of course people designing for themselves is a fundamental part of the problem, but it’s also the thing that motivates most of us. Also, designing for other people is harder.

    1. Thanks. Real designers have a motto: “you are not the user”. Designing for yourself is lazy and self-indulgent. Fine for a personal hobby but a failure for most serious work. Unfortunately self-satisfaction is the prevailing approach in our field.

      1. This is really difficult. I’ve never been fully satisfied with any software I designed for someone else to use; the feedback cycle is just crap, and it’s difficult to internalize the usage patterns.

        The only way out of this trap that I can think of is a curious one: by having children, I can see myself writing software for them to use but still remaining really engaged in the process and maintaining a deep understanding of the back-and-forth between the code and the user.

  14. Programming, or design, if you prefer, is quite complicated. Not everyone can do it, just as not everyone can entertain an audience from a stage, or paint like Michelangelo. If you can devise a system that gives designers the power they need to write their programs, while also meeting the needs of less expert users, great! But I really wonder if that aim is achievable…?

    1. Not everyone can write at a professional level. But almost anyone can use writing to tell their own story, or extend that of someone else and successfully entertain an audience. I can easily envision augmented-reality, projection mapping, and virtual-reality systems that allow children to program spell effects or slap together a minion AI. We should all stop drawing dead fish. An artist should easily be able to compose effects and services from many different systems.

      The role for casual programming – if we can make it accessible – is enormous.

      And, indeed, as Jonathan notes, we should be able to unify UI with programming in the same sense that the old command-line interfaces did with easy pipes and scripting. There is no reason we cannot ‘live’ within our (personal) programming environment (connecting and wiring and manipulating services) in the same sense that users today live within their data (dragging and dropping and editing files).

      (Heh. Not just live programming environments, but live-in programming environments. :))

      1. Live-in programming environments sound like live coding environments. We really should look more at improvised real-time programming, which is Alex McLean’s topic.

  15. I sympathize with the ‘Simplicity First, Performance Last’ section. Even repeating KISS like a mantra, it’s very tempting to bury untold hours into compilers and accelerators and micro performance tweaks. 🙂

    I disagree with ‘complexity as total learning curve’, and I don’t think complexity is directly tied to expressiveness or terseness either. Cognitive load has much more to do with our ability to reason about the effects of an action, e.g. a design decision, or a change, or pressing a button. If this requires a lot of effort for each action, even after learning the system, the cognitive load will be very high. ‘The Art of Unix Programming’ has a nice chapter on transparency and discoverability – where transparency is our ability to develop a simple, correct mental model of a system’s under-the-hood machinery, and discoverability describes how accessible that model is to learn in the first place.

    We should pay attention to learning curves, of course… and not just total effort, but also incremental efforts and discontinuities. The Curl language, originally from MIT, was the first I’ve read about that made explicit efforts toward incremental ‘gentle slope’ learnability – i.e. such that users can learn just the little extra bit they need to solve a problem.

    My own hypothesis is that composition is an excellent basis for both of these features. And I mean composition in the algebraic sense, where the composite is the same type of thing as the components and has a predictable set of high-level properties based on the properties of the components – i.e. there is some simple function F, composition operators like *, and a set of useful properties P, such that ∀x,y. P(x*y) = F(P(x), ‘*’, P(y)). Composition gives us simple, scalable reasoning about useful properties, thus keeping the cognitive load down. Further, composition gives us incremental learning, because developers can effectively use high-level components without fully understanding them, but may decompose, learn, and modify them later.
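    A small illustration of that algebraic property (the pipeline and its latency property are my own invented example, not any particular system): the composite is the same kind of thing as its parts, and its property is a simple function of the parts’ properties.

```python
class Stage:
    """A processing stage with one high-level property P: worst-case latency."""

    def __init__(self, fn, latency):
        self.fn = fn
        self.latency = latency            # P(x)

    def __mul__(self, other):
        # x * y is itself a Stage, and P(x * y) = P(x) + P(y):
        # here F is simply addition over the '*' operator.
        return Stage(lambda v: other.fn(self.fn(v)),
                     self.latency + other.latency)

trim = Stage(str.strip, 2)
shout = Stage(str.upper, 3)
pipeline = trim * shout                   # same type of thing as its parts

assert pipeline.fn("  hello ") == "HELLO"
assert pipeline.latency == trim.latency + shout.latency
```

    The last assertion is the payoff: we can reason about the composite’s property without running it, just from the components.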

    So my own efforts towards future programming focus heavily on composition.

    I also believe non-essential use of state is a major source of accidental complexity. Every little bit of state creates a cognitive load to understand not only immediate impact, but the impact on the whole future of a program and across partial failures. I’ve been thinking a lot about how to simplify state, avoid state, make state more robust or resilient to partial failures, etc.. I especially like the idea of utilizing histories directly as a basis for state (e.g. exponential decay of history) and of avoiding true state when all we really need is stability (e.g. stateless stable models, using non-determinism as an opportunity for stability).
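    One way to read “exponential decay of history” as a basis for state (a toy of my own, not the commenter’s design): the current ‘state’ is a pure fold over past observations with exponential forgetting, so there is no mutable cell to reason about.

```python
def decayed_state(history, alpha=0.5):
    # 'State' is derived from history: each older observation
    # fades geometrically with every newer one that arrives.
    s = 0.0
    for x in history:
        s = alpha * s + (1 - alpha) * x
    return s

# Appending to history replaces mutation; the same history always
# yields the same state, even after partial failure and replay.
assert decayed_state([10, 10, 10]) == 8.75
```

    Because the state is a function of the history, replaying the log after a crash reproduces it exactly, which is the robustness the comment is after.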

    1. Yes I am still struggling to properly define complexity. It must be based on human cognitive load, not the inhuman mathematical notions people often use. It also has to consider the full life-cycle from learning through usage and into maintenance, not the narrow slice of expert developer throwing code over the wall that many people prefer.

      1. Mathematical notions of complexity are very useful at scale, e.g. for understanding asymptotic complexity, or how many individually understandable rules and states can easily blow up into an incomprehensible system. In my experience, humans don’t have good intuitions for how complexity will scale, or how local decisions (e.g. using a mutex) might impact whole-system behavior. I agree with the focus on full life-cycle.

  16. Agreed, but this is still too general. Simplicity is reduced complexity, not the absence of it. The most difficult question is what that surviving minimal complexity will be. What will it look like? We often over-complicate things in the effort to find that minimal complexity.

  17. “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” – Martin Fowler

    1. We should keep in mind that code comprehension doesn’t necessarily require reading the code. We can understand code also through example inputs and outputs, by manipulating pieces of the code and seeing how the outputs change, by probing intermediate values and animating the dataflows, by highlighting types or coloring where code is inlined, and so on.

      I sometimes wonder whether we should abandon ‘reading the code’ as even a primary mechanism for understanding it. The ‘code is material’ metaphor would certainly lean in this direction, toward human comprehension of code through experience, observation, and play. Code as material could potentially come with built-in widgets and documentation for the more commonly tweaked aspects.

      Forth, APL, and J programmers offer an interesting contrast on how they understand code – not primarily by reading it, but rather through an ad-hoc mix of incremental execution and re-invention (‘this is how I’d have done it’). But I think we could do this approach a lot better with live feedback and graphical visualizations.

  18. I think a potential solution to all of this could come from changing the viewpoint. Try to conceptually create a new universe with new rules that allow programming to be easy. Assume you can have anything; forget about computers; assume you’re a god; then work backwards from this perfect universe to implementing a simulation of it on computers, allowing humans to work within it. For example, the universe may allow time travel, or running code backwards, or all sorts of things, but you define the rules. Forget about the current limitations of computers. I have a 32GB RAM machine now; that would have been impossible for me to believe when I first started with a 32KB machine as a kid. In a short time we will have massive computational power. Change the assumptions and the viewpoint.

  19. Nice article.

    What do you people propose in order to remove complexity from software?

  20. Yes, unlearning is the most difficult part. However, it is even more difficult than what you describe, because from what I can tell, the solutions are not radically different ideas that are obviously new and revolutionary.

    Instead, the solutions are actually often pretty close to what we already have, yet with subtle but absolutely crucial differences. Exploring these sorts of differences is maddeningly difficult, because the “obviously correct” answer is already there, you almost have to block your ears with wax in order not to be drawn to the existing solutions.

    And even worse is trying to explain this to others. Nigh impossible.

    My current attempt at dealing with all this is Objective-Smalltalk, and the name is indicative of what I’ve described. Yes, it is another Smalltalk, yes, it borrows from Objective-C. Yet no, it is completely different, because it abstracts concepts like messaging, assignment etc. into architectural primitives that can be used to express larger scale composition and adapted to suit.

  21. An excellent pragmatic measure of the complexity of a tool, necessary or not. May I propose another: the presence or absence of an animated, “soup to nuts” walk-through of the application of the tool. It would be like the preview button at the bottom of this textarea, and the metric would include how well the tool can be remembered while also being kept distinct from its cousin processes. I imagine the procedure for actuating the launch-nuclear-missile lever should not be confused with the ‘excuse-me-a-moment,-I-am-going-to-the-loo’ lever, at least by our nuclear-missile personnel.

    1. That is certainly a great design goal if not a metric. Documentation ought to be example-centric all the way from individual commands and APIs up to top-level working examples. Perhaps the language should be “literate” enough that the documentation becomes just another program.

  22. Greatly enjoyed the updated manifesto. In the spirit of treating improvement of the programming paradigm as a design problem, the counterpart to design complexity is simplicity. I think you’d enjoy John Maeda’s design book The Laws of Simplicity (here’s a link to a quick list of the principles and brief explanations: Ten Laws of Simplicity). Thinking through how the principles apply to the act of programming might serve as good inspiration.

  23. “We are not doing Science. We are doing Design: using experience and judgement to make complex tradeoffs in order to satisfy qualitative human needs.”

    Yes, yes, and more yes.

  24. Funny, seems like you’ve come up with a manifesto for Smalltalk adoption.

    Smalltalk was always informed by working regularly with children to ensure it retained simplicity and ease of adoption, as emphasised in the videos that start at the following link:
    Kay Video. The language syntax fits on an index card. There are so many parallels between his ideas and those you are discussing that I wouldn’t know where to start.

    On the one hand, you can take it as a sign you are in good company (after all, he was the driving force behind Smalltalk, and coined the term “object-oriented programming”). On the other hand, the bad news is that Smalltalk has largely been ignored, regardless of its undoubted influence. The influence of Simula (via C++) is much more strongly felt – polymorphism based purely on subclass relations, vtables, etc. There seems to be a huge resistance in the industry to good ideas.

  25. Amen!

    “The law that entropy always increases, holds, I think, the supreme position among the laws of Nature…if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.” – Arthur Eddington

  26. Relational reactive programming is the way to go. Think of it as generalizing the incremental refresh of materialized views.

    “relational” rather than “functional” because functions are degenerate relations.

    If you want syntactic convenience, go for it, but do it with syntactic sugar. If you are worried about performance, add pragmas to the language to give the system compile-time (executable view of source code materialization time) insights, but don’t compromise the language itself.
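    A minimal sketch of the incremental-refresh idea (a toy of my own, not any particular system): when a base row arrives, the materialized view is updated by the delta rather than recomputed from scratch.

```python
base = []     # base relation: (region, amount) rows
view = {}     # materialized view: SUM(amount) grouped by region

def insert(region, amount):
    base.append((region, amount))
    # Incremental refresh: apply only the delta, O(1) per insert,
    # instead of rescanning the whole base relation.
    view[region] = view.get(region, 0) + amount

insert("east", 10)
insert("west", 5)
insert("east", 7)
assert view == {"east": 17, "west": 5}
```

    Reactive relational programming in this sense keeps every derived relation continuously consistent with its bases by propagating such deltas.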
