The Myth of the Super Programming Language

I just read yet another recycling of the old myth of how some esoteric programming language (often Lisp or Haskell) is the secret weapon that allowed a team to outperform expectations by an order of magnitude. Paul Graham has strongly encouraged this myth (see Beating the Averages), but it has been circulating for ages. It is totally false. Worse, it reinforces the intellectual elitism that is the bane of our field.

The objective evidence we have is that differences in programming performance are almost entirely due to individual cognitive differences between programmers. It doesn’t matter what language a great programmer uses: they will still be an order of magnitude more productive than an average programmer, regardless of what language either of them uses.

The anecdotal benefits of esoteric languages are a selection effect. Here is a common scenario. Lots of really smart programmers think they are too good to waste their talents doing mere application programming. But they also love esoteric languages that show off how smart they are. So you can get them to do application programming by letting them use their beloved smarty-pants languages. Presto, amazing results. But the ubermensch aren’t about to stoop to maintenance programming. Once the fun development is done, they are gone. When you bring in professional programmers to take care of things, they are dumbfounded by the towering monument to mental masturbation. The system gets thrown out and rewritten in a normal programming language using normal techniques that normal people understand. The super programmers blame it on the stupidity of the new hires, further confirming their sense of superiority.

There are no super programming languages, only super programmers. And they tend to be super jerks. I should know – I used to be one. What would really make a programming language be super powered is the ability to be used by normal people.

P.S. I seem to have hit a nerve with this! Methinks they doth protest too much. I responded to some points in this comment. The final paragraph bears repeating:

These kinds of claims are just thinly veiled bragging about how smart Lisp/Haskell programmers are. I actually agree that they are smarter. But ascribing it to the language is a form of tribalism: our language (country/race/sports team) is better than yours, so we are better than you. Language tribalism is a pernicious affliction on our profession and our art. Grow up, everyone.

P.P.S. I guess I was a little intemperate, and got it back ten-fold. Publicly calling out cherished myths and immature behavior is not going to win me many converts. I also probably overstated the science, which is old and ambiguous. Still, my experience is that when people talk about how great their language is, they are really talking about how great it makes them feel. That’s fine, programmers should use the language that makes them happy, and being happy could very well make them more productive. But programmers who dislike the language will probably be equally disadvantaged. And in any case, the effects are small compared to the differences between programmers. But it does raise some interesting questions: what are the factors that make languages more amenable to programmers? Can we identify specific cognitive styles or personality types and map them to specific language design decisions?

I am re-opening comments, so long as the discussion remains reasonable.

97 Replies to “The Myth of the Super Programming Language”

  1. I love when people say “it is totally false” and then continue with no prooflink.

  2. You’re misusing ‘esoteric’. Esoteric doesn’t apply to well-known but non-mainstream programming languages. It applies to bizarre and intentionally baroque languages: http://en.wikipedia.org/wiki/Esoteric_programming_language

    I’d suggest ‘magic language’ would be a suitable substitute.

    Now, there are differences in programming languages, despite your sloganeering in this post: 50 years of research into improving performance, safety, and productivity through language design was not wasted. However, language is just one of many variables, and great tools are poor substitutes for great skill.

  3. I was initially interested in the title… but was very disappointed by the content. Effectively, there is no “super programming language” or “magic language” that increases productivity for all tasks… but some languages are better than others for solving a particular task.

    So the point is to extend our toolbox rather than limiting ourselves to one language under the false assumption they’re all equivalent.

  4. I find it a bit strange to assert that “normal” languages should be preferred on the basis of their normality. Surely then you must yearn for the programming of yore, without these esoteric languages getting in the way; just you and the machine, with only a word’s worth of little switches and LEDs between you. That was, of course, “normal” at one point, and the argument was as valid then as it is now.

    Contrary to making do with whatever happens to be the flavour du jour, we should prefer the correct tool for the job at hand. If anyone is foolish enough to reject the many very well designed little languages (languages like AWK and make and sh and regular expressions, for example) merely because they aren’t “normal” or the many interesting general purpose languages that provide support for writing safe code (such as Haskell and Ada) because they aren’t [yet] mainstream, then they are certainly not doing themselves any favours.

  5. Interesting that there are already several comments objecting to the thesis. Everyone should at least agree that this should be the null hypothesis; i.e. that Lisp is probably no more powerful than e.g. C# despite macros and so forth, but that there is a selection effect because Lisp has super programmers. The burden of proving that Lisp is more powerful despite the selection effect should fall on those making that surprising claim.

    1. There *is* already an attempt to prove that, and it predates this article – see http://www.paulgraham.com/power.html

      Now, it might be argued brevity is not the same thing as power, but I think Paul makes a good point that they are at least related.

      1. The metric Paul gives has some problems, for instance:

        * Succinctness leads to abstraction. Ability to handle very high levels of abstraction results in self-selection.

        * Succinctness leads to metaprogramming (Paul even mentions he can’t bear to program without macros). Ability to understand and handle metaprogramming results in self-selection.

        * Paul carves out an exception for “pathological examples”, which is a nice escape route for “that’s above my level”: any language one finds too succinct can be labeled “pathological”.

        It’s very easy to stand amazed by APL (which was based on a mathematician’s blackboard notation, by the way), which is one of the most succinct languages out there. However, very few people can think in terms abstract enough to read, understand and modify (maintain) an APL program.

          1. But APL *is* a more powerful programming language (and as a consequence it will, of course, be accessible to a smaller set of programmers). Incidentally, it is way beyond my ability to work in it – but I still claim that it is a more powerful language than C#, the language I’m currently working in.

          As another example, C#-with-lambdas-and-generics is a more powerful language than C# without those – and that’s pretty much the same thing as “the former allows for more compact programs to be written than the latter”.

            1. No. C# with generics and lambdas is a more succinct language, but definitely not a more powerful one (and the same goes for APL). With those features, the complexity is the same, but the verbosity is lower.
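            The succinct-vs-powerful distinction being drawn here is easy to demonstrate in any language; a minimal sketch in Python (an outside illustration, not from the thread, with invented function names):

            ```python
            # The same computation at two levels of verbosity.
            # Semantically identical; neither version can express anything
            # the other cannot -- only the token count differs.

            def squares_verbose(xs):
                # explicit loop: more tokens, same meaning
                result = []
                for x in xs:
                    result.append(x * x)
                return result

            def squares_succinct(xs):
                # comprehension: fewer tokens, same meaning
                return [x * x for x in xs]

            assert squares_verbose([1, 2, 3]) == squares_succinct([1, 2, 3]) == [1, 4, 9]
            ```

            Both compute the same function; the succinct form is shorter, but no new programs become expressible.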

        2. “It’s very easy to stand amazed by APL (which was based on a mathematician’s blackboard notations, by the way), which is one of the most succint languages out there. However, very few people can think in such abstract terms as to be able to read, understand and modify (maintain) an APL program.”

          APL was developed by Iverson as a more rigorous notation for doing computational applied mathematics. If you don’t understand algebra, arrays, and matrices, you can’t program in APL. But that doesn’t really matter, because if you don’t understand algebra, arrays, and matrices you don’t understand the domain of programs APL was designed for.

          What is the lesson here? Programming languages are irrelevant; understanding the domain is everything.

          Trying to “democratize” programming languages will only lead to more bug-infested, unmaintainable shit code by people who don’t understand why they are writing the programs they are. That is, it will only lead to more of the status quo.

    2. I think that lisp does a lot of things right. Whether it is the “magic language” depends on the programmer. No language is really more powerful than any other. Some of them make more sense to the individual programmer depending on the mindset, and some are indeed more suited for certain tasks than others. With that said, you can write any program in any language, if you are willing to take the time to figure out how to do it. Sometimes it is not logical to write a program in C that would take thousands of lines, when it would take mere hundreds in Lisp.

      I do find it interesting that C# is trying to fill all of the spaces provided by some of the more “interesting” languages such as Lisp. Take, for example, the inclusion of lambdas in C#.

  6. …the intellectual elitism that is the bane of our field…

    Really? I guess you’ve never worked with someone from the social sciences or such fields.

    1. Let’s say Q(f) is intellectual elitism of field f. So, Q(computer science) => bane. Could you please explain to me how you derived For All f, f != computer science, Q(f) => !bane?

  7. By day I code in C, mostly Linux drivers/kernel work. By night I code in Haskell. Not because I want to prove how smart I am, but because after several years in industry I am sick and tired of crappy software.

    Productivity gains aside, I think a far more important goal for languages should be discovering how we can produce *correct* software *much* more cheaply. Languages can help with this, and Haskell is a great example in my opinion. The problem with Haskell and other great languages is that they tend to have to interface with a lot of code developed in less fortunate languages 🙂

    Don’t get me wrong, I really enjoy hacking in plain old C. But the right language can make a huge difference. The current approach of fixing the disaster that is mainstream OO with techniques like TDD, BDD, etc.. almost seems like a joke. We can really do a lot better.

  8. I agree more with Thomas’ comment than with the content of this article.

    Even so, Jonathan does make at least one point that also concerns me:
    projects written in good non-mainstream languages tend to get rewritten.
    Maybe a foothold in the familiar is as important as spectacular new ideas.

    Actually, I’m not sure I understand the full opinion of the author.
    He says this about himself:
    “I seek to better understand the creative act of programming, and to help liberate it from the primitive state of our art.”
    (from http://subtextual.org/AboutMe.htm)

    And he seems to be doing research in programming languages.
    Why do that if one language is no better than another?

    1. Making unmaintainable code (whether in normal or esoteric languages) IS part of the primitive state of our art. Only partially, but it is. Jonathan seeks the holy grail of a tool that won’t let the programmer create (or at least hinders them from creating) illegible, unmaintainable, unreadable code.

  9. Although I see your point, and I’m sure that the selection effect you describe does exist, I don’t think it is the whole story. Programming languages can operate at very different levels of abstraction, offer different standard libraries and data types, have different common idioms and therefore present very different ways of solving the same problem.

    Clearly Erlang, for instance, with its approaches to concurrency and fault tolerance, is a more appropriate language for implementing telecomms systems than, say, C.
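    To make “approaches to fault tolerance” concrete: the core OTP idea is a supervisor that restarts crashed workers according to a policy. A rough sketch of that idea in Python rather than Erlang (illustrative only; real OTP supervisors are far richer):

    ```python
    # A toy "supervisor": run a worker, restart it on crash,
    # give up after max_restarts failures -- a crude imitation of
    # Erlang/OTP's one_for_one restart strategy.

    def supervise(worker, max_restarts=3):
        restarts = 0
        while True:
            try:
                return worker()
            except Exception:
                restarts += 1
                if restarts > max_restarts:
                    raise  # escalate, as an OTP supervisor would

    attempts = []
    def flaky():
        attempts.append(1)
        if len(attempts) < 3:
            raise RuntimeError("crash")  # fails twice, then succeeds
        return "ok"

    assert supervise(flaky) == "ok"
    assert len(attempts) == 3
    ```

    The point of the language comparison is that Erlang builds this policy (and much more) into its standard runtime, whereas in C it must be engineered by hand.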

    There is also an indirect benefit (and I’m drifting off-topic a bit here) to learning new languages as an individual. One of my favourite tips from The Pragmatic Programmer is the one about learning a new programming language each year, because of the exposure it can give you to different problem-solving techniques. Even if you’re not working in a given language at the time, experience of having worked in it previously can often influence how you go about tackling a problem.

    1. Yet I’ve seen every pattern in Erlang’s OTP implemented in concurrent, fault-tolerant C programs. It makes that part of the application easier, at a cost in performance and in the ability to find maintenance developers.

      The most reliable software I’ve come across has been a horrible C and C++ thing that was subject to ridiculous amounts of testing – it controlled satellites with decades of up-time without failure, distributed over redundant hardware so it would outlast the silicon.

      Engineering gives robustness, not languages.

      1. It is reasonable to engineer a great deal more robustness into our languages. Garbage collection, persistence, automatic code-distribution for redundancy and scalability, object capability model and typing for security and safety, support for timing models to better synchronize effectors (sound, video, robotic motion) without a lot of hacks, automated scheduling based on dependencies in order to avoid update conflicts, etc.

        There are many challenges in such an engineering task, many opportunities for design-bugs. Not the least challenge is presenting a sexy and marketable interface to the end-user (programmer), who, as a rule, often isn’t comfortable with change.

        1. ‘It is reasonable to engineer a great deal more robustness into our languages. Garbage collection…’

          Garbage collection may make a small program more robust, particularly if the programmer does not have much experience with memory management, but it is not a given. It can introduce random garbage collection pauses, and it also creates an idiom in which the lifetime of an object doesn’t matter. In the wrong circumstances its lax use can lead to memory crashes. Many times when I write high-performance code in C# I miss creating objects on the stack, as I would in C++. Once you try to reproduce the same idiom in C# you end up having to manage disposal and finalization, which is a lot more work than writing a constructor/destructor pair.
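          The extra work being described here is deterministic resource cleanup, which GC alone does not give you. A sketch of the same idiom in Python (context managers playing the role of C#’s using/IDisposable; the class is invented for illustration):

          ```python
          # In a GC'd language, *memory* reclamation is automatic but
          # *resource* lifetime (files, handles, locks) is not: you still
          # need deterministic cleanup, which is the extra work the
          # comment describes. Context managers make that explicit.

          class Resource:
              def __init__(self):
                  self.open = True
              def close(self):
                  self.open = False
              def __enter__(self):
                  return self
              def __exit__(self, *exc):
                  self.close()  # runs deterministically at scope exit
                  return False

          with Resource() as r:
              assert r.open
          assert not r.open  # closed at block exit, not "whenever the GC runs"
          ```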

  10. “Towering monuments of mental masturbation” — that’s so Freudian. Totally agreed.

    1. I have to refine my opinion above: though I agree with you in general, not everybody who creates or uses a new language or paradigm is a “jerk”. If we stuck too rigidly to the good ol’ way, we would still be marveling at ALGOL. New and esoteric languages don’t harm anybody; abusing them does. Jerks will make jerkish (= unmaintainable) code with “normal” languages and techniques. On the other side, simple and understandable programs can be created even in Haskell and Erlang.
      The important factor is the man, not the tool.

    2. Finally: by your own terms, you also count as a “jerk” – unless you make sure that Subtext/Juncture/Coactions/Coherent binding is usable by “normal people”.

  11. So, following your “reasoning” (?), “normal programmers” should use “normal languages” and “normal techniques” to implement “normal applications”, and whoever doesn’t follow this simple rule is a jerk.

    Let me guess: by “normal language” you mean the language you already know, and “normal techniques” are the way you are used to working, right?

    So I could rephrase this as “any programmer who doesn’t use the same tools and techniques I use is a jerk”.

    Talking about jerks ….

  12. I’m extremely surprised by the negative comments on this post. I have to say that I very much agree with it, and that’s either because I am missing something, or because most other commenters took this too personally.

    IMO, the main point was just that some languages attract people who are just plain smarter than others, thereby causing the illusion of bringing massive productivity improvements, when the main cause is actually not the language but a side effect (the selection effect).

    There was never a discussion about one tool being better than the other.

    1. > What would really make a programming language be super powered is the ability to be used by normal people.

      Unfortunately ‘normal people’ are morons. And the fact that programmers are building tools that ‘normal people’ can use to ‘write applications’ is a certified bad idea. Look what happened when ‘normal people’ (and these are not your next-door-neighbor normal people; these are people that can actually read and write, sort of; not LOLMAOROFL ‘normal people’ (95% of humanity)) started programming with VB and PHP. The worst code ever written was written in those ‘programming languages’. Applications full of holes running at banks because, yes, now ‘normal people’ get to work as programmers too! Because they are cheaper and better socially. Fire mister PhD-don’t-communicate, hire that nice guy that had tickets to the Knicks and let him build your apps! In a non-esoteric language! Great idea. Oh wait.

      I would feel a lot happier if I knew my radio-therapy machine was programmed in a ‘super language’ than what they are programmed in now. Because you are right and wrong; there *are* super languages, but they can only be wielded by super programmers and *those* programmers should be the normal programmers, while the current normal programmers shouldn’t be allowed near a computer.

      1. That’s an excellent example of elitism. (“Everybody is too stupid for programming, except me and my few, select mates.”) Yes, there are many examples of bad, really bad code. Much of it can be blamed on the author: short-sighted decisions, negligence, laziness. But many of those “apps full of holes in banks” stem from the quality of existing tools.
        Jonathan seeks a tool that eliminates (most of) this, by allowing (mostly) anyone to express formulas and algorithms (= writing programs) WITHOUT getting lost among concurrency issues, data bindings and such factors.
        In this way, this post is quite contrary to his work of many years.
        What I think you should see is that most programs just cannot be perfect. Time and cost are very important factors. If a program makes less “income” (not only in money) than it costs to create, then that program has no “right” to exist. Those “applications full of holes running at banks” are the result of a trade-off. (Exceptions exist, of course.)
        The salvation would be a tool that raises code quality by eliminating many of the possible holes.

        1. Well yes, but as long as that ‘tool’ is not there, just having anyone deliver whatever they see fit is not a solution either. Why not demand from programmers the same as we demand from medical doctors: that they must have a PhD, be able to write proofs, etc.? Until that ‘super tool’ is there to save us, how about limiting the damage by the only means we know of?

          1. I’ll tell you a secret: medical doctors, engineers, architects, etc. often work in very much the same way. Buggy, incompatible, etc. There is not much difference.

          2. What planet are you from? Most doctors don’t have PhDs (at least clinicians), and they most definitely don’t prove things before they act; it’s very much trial and error along with the scientific method (which is a lot like how programmers have to work).

        2. By the way, I believe the problem is the distance between the specification and the resulting software, the specification having been written in human language, which is vague. Also, changing one line of natural language can mean thousands of lines in a programming language. Either programming needs to become more human (and that means stacks of AI to ‘guess’ what the human means, or at least to adapt when the human says ‘I didn’t mean that’), OR programmers need to be more like computers (like they used to be fresh out of uni). The former is not possible yet (but I think we will get there), so I think we can only pick the latter for now.

  13. I think it’s sort of funny if not ironic that we’re all here arguing over languages. Crappy software design makes for crappy software; the language that implements the crappy design is, by and large, inconsequential.

    That is not to say that there aren’t advantages to some languages or disadvantages to others, of course, but the fact remains that there is no real argument to be made that writing code in a supposedly better language will produce better software.

  14. Language choice matters. I disagree that it is a selection effect. First, few people would argue that there is no difference along the chain Assembly, C, Java and Python. Each time you step up this chain, you can build programs faster. The programmer matters relatively little here. If you give the brilliant assembly programmer C, he usually builds the program faster, and nowadays also with higher execution speed as the result.

    Hence, if there is a selection effect, it must be tied to the next tier of programming languages. But what is the next tier? I will argue that programming in Java or C++ is a lot harder than Erlang or Haskell any time. Even a moderately stripped-down language such as Go, or perhaps even C, is a much harder language to program in. Getting a program right to the extent that it is demonstrably error-free is no easy task. The difference is where you pay and invest your time. C is learned rather quickly, but programming it in a correct and safe way, without security problems, literally takes years. Forcing abstraction out of a language so constrained is an even greater feat, which to this day amazes me. I envy those who can do it. You could argue the same about Java or C++.

    Interestingly, very few of the really good programmers I know are jerks. Most of them have a curiosity for learning new tricks and are fairly open to new ideas. You can’t be a jerk and expect other people to teach you at the same time. Add to this the rather small communities involved: a small programming-language community will only survive if it is friendly and avoids jerks, while a large community will survive despite being infested with aggressive people who like to tell you how wrong, dumb, idiotic and moronic you are. The community tells you a lot about what kind of people you can expect to work with if programming in a given language.

  15. Jonathan,

    It seems to me that your post explores only a single, very thin dimension of a more complicated issue.

    I present another, orthogonal, slice of this same issue:

    Imagination. Fundamental entrepreneurialism.

    Imagineers invent new things. They are, by their very nature, attracted to baubles, like new languages. Artists are trained to use a plethora of media before they settle into only one kind of medium (if they ever do settle).

    Single-“normal”-language bigots are mortally afraid of the unknown. They are naturally attracted to putting things in order and like to do the same thing over and over again, the same way. They want the status quo. They want projects that are well-spec’ed in advance, so that they can churn out code the only way they know how. They will gladly rewrite an existing system – regardless of the corporate advantages and costs – to make it more “orderly” and “manageable” (from their perspective).

    Regardless of what Paul Graham says were the factors for his success, what he really did was invent a new product niche. He created a new niche that the highly-paid marketing gurus failed to identify and pre-specify. The “normal” language bigots could not have addressed this niche, because it was not specified for them in advance and could not be understood (without experimentation).

    There are overwhelmingly more normalcy bigots than there are imagineers in the world. Just about all puppy-mill trained CEOs, marketers, bean-counters, project managers, etc. are normalcy bigots who fundamentally *need* to constrain creativity so that they can produce warm-and-fuzzy, measurable accountings of these things. Hence, most real ground-breaking projects go the way of Graham’s example – a working prototype is invented and brought to market defining a new niche, then it is normalized to take it out of the hands of the imagineer.

    Maybe we need both kinds of people?

    Maybe we need a new classification in software? Software Artist?

    Here’s a case study that I just lived through. Our company has exactly two software experts, each with about 30 years of experience. One is a C bigot, the other is a Common Lisp bigot. The C bigot grinds out the same code every year. The CL bigot has tried every language under the sun and recently settled into CL because it allows the convenient expression of all of the paradigms that he learned.

    The C bigot produced a programming / scripting language for a particular market. The marketing guy implored the C bigot to make the language more accessible to a wider market by providing some kind of GUI access to the language’s feature set. The C bigot told the marketing guy that it couldn’t be done and that the idea was stupid and that the customers must be stupid if they couldn’t be bothered to learn to use the language. After more argument, the C bigot continued to resist and the problem was finally, in exasperation, turned over to the CL bigot. The CL bigot flailed at the problem, trying out all kinds of HCI approaches and, finally, did come up with new ways to express programs in this language using a GUI. The C bigot, having now seen how this could be done, rewrote the whole thing from scratch in C. The advantage to the company was that it had a “clean” version of the code written to a “spec” (the existing CL gui). The disadvantage was a 3-quarter delay in time-to-market of the normalized C-based product.

    What would really make a programming language be super powered is the ability to be used by normal people.

    No.

    Normal people don’t want programming languages at all. Just solutions.

    My mother wants a yahoo-mail machine and a separate Tetris machine. She doesn’t want a programmable computer, she doesn’t want Microsoft asking her if she’d like to download critical updates every day she turns the machine on.

    Here’s a case study. I once asked our secretary to backup our shared mini-computer in the morning, before the rest of the company showed up. I wrote down, on paper, a simple set of instructions, that covered every case I could think of. For example, I wrote down “Go to any terminal. If you see C:> on the screen [ed. i.e. a developer forgot to logout before going home the night before], type ‘logout’. When you see ‘login:’ type ‘operator’, and then when you see ‘password:’ type ‘yyy’. Then type ‘backup’.”

    I came in one morning and found the following on EVERY screen in the office:

    c:> logout
    login: operator
    password: yyy
    c:> logout
    login: operator
    password: yyy
    c:> logout
    login: operator
    password: yyy

    <>

    I had to ask her why she did this – being a programmer, I couldn’t understand what was in her mind. It turned out that the concept of ‘sequence’ was not bred into her thought process. In essence, she treated the instructions in a declarative, pattern-matching manner. After every action, she re-scanned the whole sheet of instructions and picked the closest match, instead of treating the instructions in a sequential manner as I had intended. Every time she successfully logged in as operator, the closest match on the sheet of instructions was “if you see C:>, type ‘logout'”, so she did that. Again and again.
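    Her reading of the sheet can be modeled as a rule-based interpreter with no program counter: after every action, rescan all rules and fire the best match. A toy Python model (the rules and screen transitions paraphrase the anecdote; everything here is invented for illustration):

    ```python
    # Model the secretary's reading: no sequence, just
    # "after each action, rescan all rules and apply the first match".

    rules = [
        ("C:>", "logout"),
        ("login:", "operator"),
        ("password:", "yyy"),
    ]

    def rule_based(screen, steps=9):
        trace = []
        for _ in range(steps):
            for prompt, action in rules:
                if screen == prompt:
                    trace.append(action)
                    break
            # crude screen transitions for the anecdote's terminal
            screen = {"C:>": "login:", "login:": "password:",
                      "password:": "C:>"}[screen]
        return trace

    # She never reaches "backup": logging in lands her back at C:>,
    # whose best-matching rule is "type logout" -- again and again.
    assert rule_based("C:>") == ["logout", "operator", "yyy"] * 3
    ```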

    The paradigmatic chasm between programmers and normal humans is huge. We can’t even recognize that programmable computers are a burden to the general population, not a boon.

    What we need are programming languages that allow software engineers to produce products – encased in epoxy – that provide solutions to specific problems. Whether we choose to use programmable computers inside the epoxy is our own problem, not the customers’.

    1. Nice anecdotes. Coincidentally, my current research is on how to eliminate sequentiality from programming. The secretary was thinking naturally. A truly super programming language will let us program the way we naturally think, not require us to learn complex new ways of thinking.

      1. But sequencing (or temporal reasoning) is wired into our brains. It would be a loss not to take advantage of that, just like it would be a loss not to take advantage of our language processing capabilities (like visual languages).

        1. I agree that we have good intuitions about simple temporal sequences involving one or two actors, corresponding to the cases where we act sequentially upon passive objects, and when we interact in turns with another person. A natural programming language should offer those scenarios. But we can’t understand hundreds of actors, which is what current languages expect.

            1. That is true, but you have to be careful that the alternative is not worse. Some people say “mutability is hard, so let’s remove mutability”. But sometimes mutable state is the most natural solution. If you removed mutability you have to encode it in some way, and with that you get all the disadvantages of mutability back along with the extra complexity of the encoding. So if your problem really is hundreds of actors, the best solution is to just have hundreds of actors.

              The same argument is made about first-order languages: reasoning about higher-order code is hard, so let’s do first-order languages. When you manually do closure conversion and transform calls to first-class functions into a giant switch statement, you have first-order code, so it’s theoretically easier to reason about. The problem is that your program has exploded, negating any easier reasoning.
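              The transformation described here is usually called defunctionalization. A minimal Python illustration of why the resulting first-order program is just the higher-order one, encoded (function names are invented for the example):

              ```python
              # Higher-order: pass the function directly.
              def apply_hof(f, x):
                  return f(x)

              # First-order ("defunctionalized"): functions become tags,
              # and every call site goes through one giant dispatch.
              def apply_fo(tag, env, x):
                  if tag == "add":
                      return x + env        # was: lambda x: x + n
                  elif tag == "scale":
                      return x * env        # was: lambda x: x * k
                  else:
                      raise ValueError(tag)

              n = 10
              assert apply_hof(lambda x: x + n, 5) == apply_fo("add", n, 5) == 15
              # Same behavior, but the program has grown: every new closure
              # means another branch in the dispatch.
              ```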

          2. If you removed mutability you have to encode it in some way, and with that you get all the disadvantages of mutability back along with the extra complexity of the encoding.

            I.e., monads. Which also serve as a perfect example of the larger point of this post. Look how clever my language is – you need to know Category Theory to get it.

            However your two points are well taken, and I agree they are two fundamental dilemmas of language design. I believe they are both false dilemmas, and that there are alternative solutions that have been overlooked. The solution to the first one is what I am working on now.

      2. I think there is room for more logic between the code and the compiler, so that an interpreter in the editor can translate the written code to and from the stored specification, and the builder can translate the stored specification, with added directives about use and representation, into a low-level specification for a compiler.

        This can make imperative programming more declarative and less operational, when it’s used to describe communication (without ordered state, and with lazy evaluation) instead of operation (with explicit or assumed ordered state, and with strict evaluation).

        It can also be used to edit different operative processes in different places, with one order and one state in each place.

      3. A truly super programming language will let us program the way we naturally think, not require us to learn complex new ways of thinking.

        I disagree. Strongly. Most people have flaws and fallacies and delusions if not outright diagnosable bugs in their thinking. Thinking effectively is a skill that needs to be taught, with per-domain specializations. Additionally, most people do not know how they naturally think (introspection is both rare and fallible) and certainly don’t know how to express those meta-thoughts.

        If we programmed based on how we speak (or communicate among one another), we might do better. Human speech depends heavily upon shared context, shared experiences, analogy. The goal is to support the other participant in making the same mental leaps you’ve made. For the most part, this is interactive, with the listener providing cues for misunderstanding and requesting clarification. Non-interactive speech or writing requires a lot of training. Even with all this effort, miscommunication still abounds. We could mitigate this with a large database of knowledge, heuristic leaps, and support for ‘interactive’ programming including requests for clarification and cues for misunderstanding.

        Even without big ‘leaps’, interactive programming is a good start: continuous testing, live debugging, databases of reusable code and effective search mechanisms, etc. Your own work with Schematic Tables falls into this arena.

  16. I don’t think I agree. Not too long ago I tried writing a code generator for a toy language of mine in Python and gave up because it was too much effort. Not too long after, I tried again in Clojure and not only got it working in a weekend, it was much more feature complete than my first attempt, supporting function inlining and a reasonably efficient register allocation scheme – neither of which were planned for my first version.

    But here’s the thing – I only started learning Clojure about a month or so before this, yet I’ve known (and enthusiastically used) Python for many, many years.
    I have also used Java professionally every day for the past two years, yet I can’t bring myself to even attempt to write this in Java…

    Does this mean I’m a terrible Java programmer, bad Python programmer but great Clojure programmer? I doubt it and I’m not saying everyone will have the same results. Am I a jerk and an elitist for being able to write a program better in a non-mainstream language than a mainstream one? Maybe, but that doesn’t sound reasonable to me. Or maybe languages are not all equal? That sounds more reasonable to me.

  17. You say:

    I just read yet another recycling of the old myth of how some esoteric programming language (often Lisp or Haskell) is the secret weapon that allowed a team to outperform expectations by an order of magnitude.

    But let’s look at the article you are responding to (I had nothing to do with it…I just read it):

    We release a new version once a week and we estimate that we only write 30% of the code we would have to write in Java and about 50% of the code we would have to write in Python. Writing less code requires less time.

    It does not make a claim about “outperforming expectations by an order of magnitude.” It doesn’t mention expectations: it is talking about LINES OF CODE. And implies AT MOST a 3x improvement: not an order of magnitude.

    Whether Clojure code is really, generally 3x more concise than Java code is worth examination and experimentation. But I think that by now the question of whether there exist languages X which are 3 times more concise than language Y should be closed. Almost any programmer can take Python and produce a complex app in less than 1/3 of the lines of code of DOS batch files (to pick an extreme example).

    So the claims made in the article are fairly modest. Your attempt to paint them as an example of some “recurring myth” is fallacious.

    Programming languages are tools. We should expect them to provide some modest improvement in efficiency over the decades. Just as the internal combustion engine improves over time, so should programming languages. The article you referred to reported anecdotal evidence that we’ve made some minor progress.

    As anecdotal evidence, it is of course suspect, but not for the reason you say: it does not make outlandish claims about “order of magnitude improvements in performance.”

    1. That is just legalistic evasion. The post claims that choosing Clojure made a massive difference in their success. I am calling bullshit. There is no reason to believe that language choice makes a big difference. People have been doing studies for decades, and there are no reproducible strongly positive results. The burden of proof is on the poster, who is making a surprising claim. I am just pointing out that there is no empirical evidence to support the claim of big differences between programming languages, while there is plenty of evidence of big differences between programmers. So there are good reasons to discount the claim as a selection effect.

      Sure, 2-3 times improvement is not technically a decimal order of magnitude, but it is still claiming a massive improvement. And then there is the typical disingenuous assertion that “Writing less code requires less time”. Graham makes the same sly move. Sounds obvious, doesn’t it? But is it true? Programming is not data entry. You think a lot. You read a lot. You debug a lot. Key-pressing is a small part of programming. In fact there are clear cases where terser programs are HARDER to write. Think APL. The argument is bogus, and flies in the face of everything we know about programming.

      These kinds of claims are just thinly veiled bragging about how smart Lisp/Haskell programmers are. I actually agree that they are smarter. But ascribing it to the language is a form of tribalism: our language (country/race/sports team) is better than yours, so we are better than you. Language tribalism is a pernicious affliction on our profession and our art. Grow up, everyone.

      1. Ok… I realize that there were many responses to this article, but honestly; Graham never mentions typing once in the article I linked to.

        “Programming is not data entry. You think a lot.”

        This is exactly Graham’s point, and exactly how he SUPPORTS the idea that “shorter is better” – you can think a lot MORE in a terser language than in a verbose one.

        “The burden of proof is on the poster, who is making a surprising claim.”

        It is rather obvious that you can keep a lot more in your mind if you think in a high-level language than in a low-level one. Try understanding a small method in a language like C# (I try to keep my arguments to languages I actually know) and try “grokking” the same method in machine code or assembler. Or think SQL vs plain old C. I can think of hundreds of examples – in fact, articles about this subject came up with the notion of “ceremony” in programming.
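        The “ceremony” point can be made within a single language: here is the same computation written declaratively and then with the loop-and-accumulator bookkeeping spelled out (Python, my own toy example, not from the comment).

```python
people = [("alice", 34), ("bob", 27), ("carol", 41)]

# High-level: the intent ("names of people over 30") is the whole program.
over_30 = [name for name, age in people if age > 30]

# Lower-level: the same computation with the bookkeeping made explicit.
over_30_manual = []
i = 0
while i < len(people):
    name = people[i][0]
    age = people[i][1]
    if age > 30:
        over_30_manual.append(name)
    i = i + 1

print(over_30)                     # ['alice', 'carol']
print(over_30 == over_30_manual)   # True
```

        The second version forces you to think about indices and accumulators, which is exactly the kind of detail the comment argues crowds out thinking about the problem.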

        There’s a reason macros were invented in assembly language – to make it more compact, and therefore to allow thinking at a higher level of abstraction… which means thinking MORE at the same time.

        “In fact there are clear cases where terser programs are HARDER to write.”

        Again, this is something Graham AGREES with in the article I linked. He’s not talking about terse *programs*, but about terse *languages*.

        In conclusion – you wrote an interesting article, in that it generated a lot of reactions, but I think so far the evidence is favoring the other side.

        1. I think we should differentiate more carefully between the things that constitute power in a programming language.

          While terseness means being able to put a lot of information into little space (just think of APL), this does not necessarily make the language more powerful.

          On the other hand, providing high levels of abstraction, like many modern and esoteric (functional) PLs do, actually *does* provide power (at least in my opinion/experience).

          So I would say in general it is not as much about conveying a lot of information in little space, but about providing the programmer with tools to build and use abstractions.
          In this way one can concentrate on the essence of a problem without being distracted by boring details (as you are often forced to do in many “popular” languages).

  18. That is just legalistic evasion. The post claims that choosing Clojure made a massive difference in their success. I am calling bullshit. There is no reason to believe that language choice makes a big difference.

    Let’s be very clear on this then. Are you saying that you do not believe that using a modern programming language can produce a large efficiency improvement compared to using GW-BASIC or COBOL ’68? That’s your claim: there are no large efficiency improvements to be had through programming language improvement.

    If you were the CTO of a company and the programmers were considering COBOL ’68 or GW-BASIC for their language, you would not feel that that was likely to detract from your team’s capacity to hit deadlines and compete with people using Java.

    Presume that they have a COBOL IDE, you have a stack of adequate COBOL resumes in your inbox, COBOL runs on the JVM so you have the same libraries. But one language has exception handling, garbage collection, static type checking, polymorphism and everything else invented in the last 40 years. The other does not. You would still not expect those features to make much of a difference in productivity: ” There is no reason to believe that language choice makes a big difference.”

    And if you feel that way, then I guess you also feel that the industry has wasted vast amounts of time and energy in inventing these new languages and switching to them. We should all still be programming happily in COBOL ’68 and focus our innovation elsewhere.

    1. All things being equal, i.e. given library and IDE and experience equivalence, there is probably little measurable difference between COBOL and Java. By little I mean like <25%. Lost in the noise compared to the differences between programmers. People tried really hard in the 90’s to measure the impact of OO programming over procedural, and never got any solid positive results. The solid weight of all the empirical evidence is on my side.

      I actually have real-world experience with this situation: my old company had a massive COBOL base dating back to 1980, and slowly ported/rewrote to Java. The big win with Java was due to the better libraries, IDE’s, and pool of experienced programmers. Application logic is application logic regardless of the syntax. You quickly learn to visually defocus the boilerplate, and auto-type it. Some things in Java, like inheritance and polymorphism, can be very double-edged swords.

      1. http://page.mi.fu-berlin.de/~prechelt/Biblio/jccpprt_computer2000.pdf

        The following statements summarize the findings of the comparative analysis of 80 implementations of the phonecode program in 7 different languages:
        – Designing and writing the program in Perl, Python, Rexx, or Tcl takes no more than half as much time as writing it in C, C++, or Java and the resulting program is only half as long.
        – No unambiguous differences in program reliability between the language groups were observed.

        The often so-called “scripting languages” Perl, Python, Rexx, and Tcl can be reasonable alternatives to “conventional” languages such as C or C++ even for tasks that need to handle fair amounts of computation and data. Their relative run time and memory consumption overhead will often be acceptable and they may offer significant advantages with respect to programmer productivity — at least for small programs like the phonecode problem.

        Now I do not claim that that paper is definitive, but if you’re going to wave your hands towards “the literature” then let’s get some actual citations into the thread.

        I’ll also note that they said:

        — For all program aspects investigated, the performance variability due to different programmers is on average about as large or even larger than the variability due to different languages.

        You might think that this is validating “your” side of the argument. But nobody has ever, once, in this thread or the linked article, said anything to dispute that point. To excel, one should try to get the best programmers one can afford, using the best tools one can afford. I have never met a language partisan who would disagree with this common sense statement.

        1. To excel, one should try to get the best programmers one can afford, using the best tools one can afford. I have never met a language partisan who would disagree with this common sense statement.

          I disagree. You can’t just put a good programmer together with good tools and expect good results. You need to get the tools said good programmer groks, or is at least willing to grok. You might even be better off getting better tools and interchangeable programmers.

          There is all sorts of politics, psychology, human resources, risk management, etc. to deal with.

      2. You quickly learn to visually defocus the boilerplate, and auto-type it. Some things in Java, like inheritance and polymorphism, can be very double-edged swords.

        Jonathan, I think you make many valid points, but aren’t you contradicting yourself with this last statement?

        You say that some language features are “double-edged” swords. But then clearly language matters (and some language features are obviously useful).

        Also, you say abstraction isn’t necessarily a good thing. But abstraction is exactly what makes it possible for programming “to be used by normal people.” You may be right that there have been no consistent measurements of increased productivity with, say, OO over procedural programming languages, but what about managing complexity? I think it is fair to say that OO lets us manage more complexity than previous approaches. Now, OO isn’t perfect, and perhaps functional programming is better when it comes to managing complexity.

        To me the complexity argument matters more than the productivity argument. When that is said, I do think we tend to obsess a bit about the superficial differences in many languages.

        /Karl

  19. You should join the Tea Party or be on Fox News or something. Maybe Karl Rove still needs some Jr. staff members.

    Of course “they protest too much”. Your entire argument is structured in such a way that anyone who offers a rebuttal is automatically an elitist snob.

    Here is how much time you have left on your clock: 12.5 minutes.
    Here is how much your *opinion piece* matters: ZERO.

    I gotta go back to work — I have to untangle a build system written by mediocre programmers who couldn’t be bothered with learning, so they replaced a towering monument of mental masturbation that *worked* with a pile of crap that doesn’t work.

  20. If there was a Like button for this post, I’d hit it 🙂

    Barring all this sideways talk about language A vs language B, I understand where you’re coming from. Unfortunately, you seem to have attracted the ire of many a programmer, while the rest of us are just silently nodding in agreement.

  21. Nice post, unfortunately made outside of my time zone :), but this is a basic reformulation of the weenie syndrome, right?

    One thought: learning and using unconventional programming languages (say Haskell, Lisp, or even Prolog) contributes to making a programmer better even in conventional languages (C++, Java). Using a different language causes a programmer to think about a problem and its solution in a different way, which in turn helps develop a better programmer (PL survey courses in CS programs are very important!).

    Perhaps language designers are finding different ways to solve problems, and we develop new syntax and semantics just as a mathematician develops new notation to help think through a proof they are trying to build. The syntax perhaps makes the solution style easier to read, but the solution exists independently.

    But I don’t think that’s the end of the story. I personally am interested in languages that are potentially usable by normal programmers, not insanely powerful languages like Haskell or even Scala that emphasize unnatural amounts of abstraction and logical thinking. But this really means I’m interested in solution styles that are easy and don’t require a lot of clever thinking to apply.

    1. Exactly Sean. The Lisp/Functional crowd gush about terseness and abstraction, as if those were good things! They’ve got it backwards. Abstraction is a scarce resource, limited by human cognition, and we have to spend it carefully.

      1. Not all types of abstraction are expensive.

        Pretty much every API abstracts over an implementation — and “normal programmers” don’t have any problem with that.

        Without such abstraction, we lose modularity, and the effort to comprehend code grows as O(n^2) in code size rather than linearly. Modularity absolutely depends on abstraction at each and every interface between two modules.

  22. just thinly veiled bragging

    Hey what’s wrong with bragging?

    I appreciate the time I’ve spent with programmers more skilled than myself, who were also willing to show me a few things that previously had been above my head. I don’t even mind when a programmer [who I feel is] less skilled than myself wants to drop some knowledge on me: he’s going to be right at least part of the time! If you think you have a better language/library/methodology/editor/OS/taste in music/whatever, I want to hear about it. I might give it a try, and even if I find my opinion is different than yours, I’ll learn something. A fragile ego is the biggest obstacle to learning, and this whole line of argument just perpetuates that. If one finds himself crying about mean old elitist braggart “great” programmers, one ought to adjust his attitude and start learning.

    So what if some app needs to get rewritten for maintenance purposes? Has that never happened for any of thousand lame and stupid other reasons? We’re human beings who use our minds for a living. If the company has to pay for a few more hours of learning, they’re better off just to sack up and pay. In any knowledge enterprise, too tight a focus on short-term efficiency is deadly for long-term efficiency. Of course, management might decide that on project A, we will use language Z, and that’s that, until management changes its mind, at which point we’ll use language X. Developers who are simply unable to keep up with these sorts of cook-book changes will be at sea when management decides that we need to completely revamp our business processes, which will probably require some <gasp> creativity.

    In the legacy support I’ve done, it has been common to find code both above me and far beneath me. Sometimes it’s been an effort to catch up to a genius predecessor, but having done that I’m better off for the rest of my life. It is always a giant pain to support stupid legacy work (e.g., a thousand-row configuration table that could have had ten clever rows), and you don’t learn anything from that.

  23. This is one of the most ridiculous things I have ever read in my life.

    One of the conclusions that can be drawn from this blog is that garbage collection is not a significant improvement over manual memory management.

    That is absurd on its face.

    You wonder why people seem upset that you would come to this conclusion and you then state that you think that people who use ‘esoteric’ languages are really smarter.

    So they are smarter, yet they cannot realize how pointless all their time spent in these languages is?

    They are smarter, yet those who actually know both the ‘esoteric’ languages and the ‘normal’ ones can’t figure out that there is no significant difference?

    People may seem upset because you are insulting their intelligence.

    You may think that the advantages of a particular language are not as significant or not really advantages but to argue that there is no significant difference at all is absurd. And to pretend you are not insulting the intelligence of people who use ‘esoteric’ languages is even more absurd.

    Really an equivalent argument to what you are saying is “it is impossible to make a worse programming language than the one I currently use”.

    Anyone with a brain can create an example that proves that to not be true.

    Really you are just upset that not everything is written in the most popular languages (the ones you understand best) and you want to rectify that situation.

    Certainly some people may learn languages to impress others with their intelligence but obviously if those languages are no better it was not particularly intelligent to learn them.

    1. GC was one of the biggest improvements ever in language design (now 50 years old). I wouldn’t use a language without it. It is a big enough deal that it might actually result in measurable differences in programmer productivity, unlike virtually all the language features we obsess about. However it is still insignificant compared to the differences between programmers.

      If you want to post here again, you will need to make a rational argument about what has been said, not phony ascribed positions and motivations.

      1. GC was one of the biggest improvements ever in language design (now 50 years old). I wouldn’t use a language without it. It is a big enough deal that it might actually result in measurable differences in programmer productivity, unlike virtually all the language features we obsess about. However it is still insignificant compared to the differences between programmers.

        Why do you keep acting as if we must choose between these two things, when we do not need to? Airbags are also “insignificant” as a safety feature in cars when compared with brakes. But both are important. Nobody has ever claimed that an incompetent programmer in a great language will beat a great programmer in an obsolete language.

        1. Nobody has ever claimed that an incompetent programmer in a great language will beat a great programmer in an obsolete language.

          But people are constantly claiming that their favorite language gives them massive productivity gains. Like the posts I cited at the top. I am calling bullshit. The observed differences are due to selection effects on the programmers. Powerful languages are powerful because they attract powerful programmers. That is all I am saying. But it seems to threaten some people’s cherished beliefs.

          1. I think what Paul is saying is that those intelligent programmers who are making those claims are intelligent enough to see that they have these productivity differences when using the two languages, and you are telling them “No, you don’t”, contradicting their hands-on experience, with little to base it on.

  24. Boy did you ever hit a nerve! 🙂

    In the main, I agree. There is too much “myth” around programming languages, too little separation of causes, and too little thought given to the complete life cycle of software.

    Still … clearly there are differences between programming languages and frameworks. Differences not just positive, but also pathological. Without understanding the pathologies, you are bound to undercount cumulative cost.

    Java is one particular sort of example. Java made abstraction more accessible to more average programmers, so we got lots of abstractions … far more than needed. The main pathology of Java is too many abstractions. Is the flight to Python and Ruby in part an escape from the nightmare wedding cake that dominates much Java development?

    On a similar note, my personal inclination is to promote Javascript on the server (for some usage). Not that Javascript is the “bestest-ever” programming language, but rather because it is a nice step up, common in the web browser, generally familiar, and thus more likely to become familiar to more programmers.

    What are the usual pathologies for each language/framework when used by large groups of average programmers? How can they be identified? How can they be avoided?

  25. I guess I was a little intemperate, and got it back ten-fold. Publicly calling out cherished myths and immature behavior is not going to win me many converts. I also probably overstated the science, which is old and ambiguous. Still, my experience is that when people talk about how great their language is, they are really talking about how great it makes them feel. That’s fine, programmers should use the language that makes them happy, and being happy could very well make them more productive. But programmers who dislike the language will probably be equally disadvantaged. And in any case, the effects are small compared to the differences between programmers. But it does raise some interesting questions: what are the factors that make languages more amenable to programmers? Can we identify specific cognitive styles or personality types and map them to specific language design decisions?

  26. While I would tend to agree (in an utterly uninformed manner; I haven’t by any means taken a fair sample of languages) that there are no “super languages” which instantly make everything better, I do not believe that the programmer is the only source of goodness in the task of programming. There’s a line somewhere. For example, a programmer, good or bad, will be able to do more, faster, and more readably, in a language which supports lambdas than in one which does not. Even if they don’t use lambdas more than once in the whole of the application, I bet it saved a hell of a lot of time in that one place.

    Language matters. But beyond that it’s just a pissing contest.

  27. A good programmer can create a towering monument to mental masturbation in any language.

  28. “AgitProp” is definitely the right category here. I disagree with your headline, but hidden among the agitprop is an important truth:

    The differences between languages are smaller than the differences between programmers.

    On the language side, I believe that productivity and reliability really have been improved by well-known features like automatic memory management and by less well-known features like pattern matching with compile-time checks for coverage and redundancy.

    On the programming side, last term I measured variations in productivity up to a factor of 9, across a population of 20 pairs of student programmers. (This measurement doesn’t include pairs who clearly aren’t trying.) I myself am far more productive than my best student programmers—maybe 2 to 4 times more productive.

    Now, I believe I am more productive using Haskell than I am using C, despite my having more years of C experience and having written more thousands of lines of C code. But am I 20 times more productive? I would be surprised.

    On both sides, the available science is so thin that I speak purely anecdotally.

    1. Hi Norm,

      I usually leave such rants as unpublished drafts but for some reason I hit the post button this time. But it was worth it just to be accused of being in the Tea Party! Your comments are eminently reasonable, but I think there is a deeper insight struggling to get out. Programmers’ affinity for languages appears similar in some respects to religious belief. This tells us that something primal is being stimulated. Understanding what this is ought to help us better design programming languages.

  29. If you don’t think language makes a difference, try TECO or InterCal or UnLambda for a while.

  30. Pingback: Mea Culpa
  31. I’m not certain I agree wholeheartedly;
    Still, I’m quite sure of one thing:
    The programmer can fix the language,
    But the language cannot fix the programmer.

    1. I’m not confident of either of those. Most programmers can’t fix the language (they just ‘make do’ via pain, frustration, and boiler-plate).

      And exposure to a wider variety of languages and paradigms can certainly affect the programmer, often in very positive ways.

      “A language that doesn’t affect the way you think about programming is not worth knowing.” – Alan J. Perlis, “Epigrams on Programming”

  32. I actually think the opposite is true. I write programs in high-level languages because I’m not smart enough to write them in assembly languages. I’m interested in powerful abstractions because I’m “aware of the limited size of my own skull” (Dijkstra) and therefore of the need to amplify it as much as possible.

  33. There are several interesting things going on here.

    One is that, as everybody agrees, variations among programmers trump everything. The original study that everybody cites to show 10× variation among programmers also mentions that several of the programmers tested couldn’t even finish the test tasks at all. Try to compute the ratio of variation there, and you get a division by zero error. And we’ve pretty much all experienced co-workers who exhibit negative productivity — they introduce bugs, create ill feeling, distract other people from the important things, make promises the company can’t keep, copy-and-paste huge volumes of code, or add lots of unjustified abstraction.

    Most of us, if we’re honest and humble, can think of times we’ve been the guy with the negative productivity.

    So we can stipulate that programmer quality can account for any productivity ratio whatsoever, including negative productivity ratios. So clearly it’s more important to be a programmer whose productivity is relatively high than to choose the right language, or even a good language, as long as it’s possible to implement the system in the language chosen with a sufficient amount of effort. OK?

    A second thing is that, when you don’t know how to solve a problem, your choice of language (and other tools, e.g. libraries) has a huge effect on how long you flail around before coming up with a workable approach. It doesn’t take that much effort to implement backtracking depth-first search in C or Java or Python. But if you’re working in Prolog, backtracking is one of the first things you’ll try. If you’re trying to solve Sudoku, this is likely to get you to a solution rather quickly. On the other hand, there are lots of problems for which this approach is too slow to be applicable, and it can take you a long time to figure that out.
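    As a concrete illustration of that “not much effort” claim, here is backtracking depth-first search in Python, using N-queens as the example problem (my choice of problem, not the commenter’s): place one queen per row, and backtrack as soon as a placement conflicts.

```python
def solve_queens(n, cols=()):
    """Backtracking DFS: cols[r] is the column of the queen on row r."""
    row = len(cols)
    if row == n:
        return cols                       # full placement found
    for col in range(n):
        # conflict check: same column or same diagonal as a placed queen
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):
            result = solve_queens(n, cols + (col,))
            if result is not None:
                return result
    return None                           # dead end: backtrack

print(solve_queens(6))  # (1, 3, 5, 0, 2, 4)
print(solve_queens(3))  # None -- no 3-queens solution exists
```

    The whole search fits in a dozen lines, which supports the point that in a general-purpose language the cost is low; what Prolog changes is that backtracking is the default strategy you reach for first.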

    Of course, some languages tend to focus your attention on questions that don’t really bear on finding a way to solve the problem, like whether this field is going to be a Foo&, a Foo*, or a Foo, or how many bytes to allocate to which field, or which registers you’re going to pass your arguments in, or the precise lifetime of each allocated value — things that are crucial when the problem is to decode a frame of video by the time it needs to be shown, but maybe not for the majority of software.

    These are bad languages to use to explore problems you don’t know how to solve at all, unless the problem you’re having trouble with has to do with time or space constraints.

    On the other hand, like I said, if you know how to solve the problem, it isn’t going to take you fifty times as long to solve a big problem in C as in Prolog.

    (Relatedly, learning more languages teaches you more patterns of thought. Learning Prolog forces you to learn to think in terms of search, and when that’s useful, and how to control the exponential explosions. Learning ML forces you to learn to think in terms of pure functions on recursively defined structures. Learning Forth, Scheme, or Ruby encourages you to learn to think in terms of embedded domain-specific languages. Each of these turns out to be very useful in its appropriate realm, and you can of course implement them in your language of choice.)

    A third thing is that, in fact, my choice of language does affect my productivity substantially, even though I’m the same person. In the last year I’ve written production code in Perl, PHP, Python, Objective-C, and JavaScript. I consistently found myself fighting bugs in my Objective-C code that stemmed from decisions I didn’t have to make in the higher-level languages, and so everything just took longer than I expected. I’m often frustrated with the circumlocutions I have to use in JS to say something simple — a little bit when I’m writing the code, but much more later when I have to go back and read it in order to extend it.

    During that time, I’ve also written non-production code (“towering monuments of mental masturbation”) in Lua, Go, Forth, C, PostScript, and x86 assembly language. I find it relaxing to write code in C. I don’t have to distract myself from what I’m doing to look up functions in an API. It’s very clean and simple. I enjoy the mental orgasms. On the other hand, it’s slow going, and every time I have to track down a segfault, I am reminded of why software in the 1980s got new features so slowly. (Forth is even a little worse: no compile-time type checking, and it’s easy to introduce stack-effect bugs.)

    There are certainly some things for which the difference in program size between, say, Python and MIXAL is negligible. But even setting aside Python’s better readability, its nicer error-reporting behavior, and its REPL, there are many more things for which the ratio is thousands to one.

    If you’re not convinced of this from your own experience, let me suggest that you try the exercise. Pick some simple problems to solve. Here’s a list of simple problems I’ve done or thought about recently:

    • using IP multicast to copy files between machines on the local network
    • downloading a series of web pages from webofstories.com
    • drawing some 2-D ray-tracing diagrams of optical systems
    • merging some text files whose lines are sorted in ASCIIbetical order at I/O-limited speed
    • summing the first column of an input text file
    • calculating the Haar transform of an input signal
    • generating fractals
    • losslessly cutting a JPEG file up into tiles and generating HTML to fit them back together
    • median-filtering the pixels of an image
    • uploading the latest photo from your digital camera to your web site.

    Try several of these in, say, C, and your high-level language of choice, switching up the order at random. If you’re like me, you’ll be amazed at how much more work it really takes to do this stuff in a lower-level language like Java or C, and how much harder it is to maintain the result. It’s easy to forget these things.
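    For calibration, the “summing the first column” item above comes to only a few lines in a high-level language. A sketch in Python (the function name is mine):

```python
def sum_first_column(lines):
    """Sum the first whitespace-separated column of an iterable of text
    lines, skipping blank lines."""
    return sum(float(line.split()[0]) for line in lines if line.strip())

# Typical use: sum_first_column(sys.stdin) or sum_first_column(open('data.txt'))
```

    The C version, with its hand-rolled tokenizing, buffer sizing, and error handling, runs considerably longer.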

    (I was going to mention Yossi Kreinin’s nomadic programmer phenomenon, but I realize I don’t really know how to fit that in here. Is Olin Shivers a canonical nomadic programmer? Dan Bernstein? Thomas Dickey? Julian Seward? Russ Cox? What languages do nomads tend to use? I’m not sure.)

  34. It depends on context.

    If your problem fits a particular language really well you’ll benefit more from it. The reason you’re getting flamed is probably that you’ve posted something unreasonable that ignores context.

    Contextually aware blog posts are uncontroversial and boring, which might be why no one seems to post them.

  35. Interesting, certainly, but I’m not sure I agree 100% with you. Programming languages are different, and they spread out unevenly across the metric scales we use to compare them. C is fast in execution, Java is faster in project man-hours, and so on. Languages definitely have differences. If I had to guess, I’d say you’re 50% right and Paul Graham et al. are 50% right, which leaves you both with 50% BS I guess … 😉

    Still, I like your writeup…

  36. Great post and great points. I agree with all of the points you are making, keep it up! I also find myself at fault many times, arguing over the “appearances” and not the content. It is taking me a long time to realize that it is not the language, but the programmer, and I don’t think I am quite there yet!

  37. I think language choice is largely a red herring. What matters much more is toolkit choice, and language debates tend to come along for the ride because they’re tricky to separate. I may find Java faster to program some things than C++, but largely because it tends to come with a whole set of useful predefined classes, not because the syntax is somehow dramatically improved. Ditto for Objective-C (Cocoa), C# (.NET), and Ruby (Rails or Sinatra). And goodness knows my own productivity in JavaScript increases at least twentyfold when I’m using jQuery, even though the language itself is unchanged.

    Which is not to say that there aren’t structural differences between various languages themselves, or even shortcuts for certain tasks (like Ruby’s “10.times do |x|”), but they tend to get overblown and overgeneralized. Each person’s brain will be better suited to a different programming paradigm, as will each task, and there’s nothing wrong with that.

    1. Sean: You say “not because the syntax is somehow dramatically improved”. It sounds like you are assuming all languages are just different front-end syntaxes over the exact same concepts.

      While Java and C++ are closely related, even they differ in ways more important than syntax.

      Java frees you from manual memory management entirely, which means you can focus on other aspects, or simply shorten development.

      With languages that differ in ways more fundamental than syntax, the productivity differences are greater still.

      Do you think using a vast library and toolkit while programming machine code, or assembly, is equally productive?

      Surely you agree that machine code -> Java is a significant productivity boost. Why is it that you think productivity stops there?

  38. I don’t know a single esoteric language. Not one.

    However, I read up a lot about them. They are most often languages built around a single aspect that is difficult or lengthy to implement in an OOP language (e.g. Monads) – read difficult, not impossible.

    In this way they are useful: they teach you new concepts in an approachable manner. You can learn all of these concepts through reading and dedication; and instead of just using them (as you would in an esoteric language), you actually understand what makes them tick.
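    For what it’s worth, the Monad example can indeed be implemented, if verbosely, in an ordinary OOP language. A minimal Maybe-style sketch in Python (the class and names here are mine, for illustration only):

```python
class Maybe:
    """A minimal Maybe monad in plain OOP Python: a chain of computations
    that stops cleanly at the first failure instead of raising."""

    def __init__(self, value, ok):
        self.value = value
        self.ok = ok

    @staticmethod
    def just(value):
        return Maybe(value, True)       # a successful result

    @staticmethod
    def nothing():
        return Maybe(None, False)       # a failure; carries no value

    def bind(self, f):
        # Apply a function that returns a Maybe; skip it after a failure.
        return f(self.value) if self.ok else self

def safe_div(x, y):
    return Maybe.nothing() if y == 0 else Maybe.just(x / y)

# 20 / 2 / 5 succeeds; a division by zero anywhere short-circuits the chain.
result = safe_div(20, 2).bind(lambda v: safe_div(v, 5))
```

    Haskell’s do-notation and type classes make this pattern far terser, but the concept itself survives the translation.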

    1. You say “I don’t know a single esoteric language. Not one.” but then go on to speak as if from the experience of those who write in “esoteric languages”.

      Clearly, you refer to Haskell here as an esoteric language. In real life, I know about 6 Haskell developers, and if you include the Internet, I’ve discussed it at length with a few dozen more. Only the newest one, who has been using Haskell for only a few months, has a bit of difficulty fully *understanding* how Monads “tick” (and he too will easily understand it if he devotes any time at all; he’s currently using Haskell to *get work done*).

      I don’t understand why so many people feel they can simultaneously admit 0 experience in a field and then go on making unfounded assertions about that field. Why don’t you learn some Haskell, use it for a while, and then pass judgement?

      Haskell makes my code shorter and more reliable, and I get *more work done* than in any OOP language. Yes, my code will not be readable to a general layman, nor will it read well to a developer well-versed in OOP. But it will read very well to anyone well-versed in the idioms that I use — and Haskell’s type system renders the abstraction problems Edwards mentions nearly negligible — the types help anyone approaching the code make sense of any and all abstractions in use.

Comments are closed.