The future of programming

I discovered the creative exhilaration of programming at a formative age. Yet all along I have also felt stifled by primitive, ill-suited languages and tools. We are still in the Stone Age of programming. My quest is to better understand the creative act of programming, and to help liberate it. Reflection upon my own practice of programming has led me to three general observations:

  1. Programming is mentally overwhelming, even for the smartest of us. The more I program, the harder I realize it is.
  2. Much of this mental effort is laborious maintenance of the many mental models we utilize while programming. We are constantly translating between different representations in our head, mentally compiling them into code, and reverse-engineering them back out of the code. This is wasted overhead.
  3. We have no agreement on what the problems of programming are, much less how to fix them. Our field has little accumulated wisdom, because we do not invest enough in the critical self-examination needed to obtain it: we have a reflective blind-spot.

From these observations I infer three corresponding propositions:

  1. Usability should be the central concern in the design of programming languages and tools. We need to apply everything we know about human perception and cognition. It is a matter of “Cognitive Ergonomics”.
  2. Notation matters. We need to stop wasting our effort juggling unsuitable notations, and instead invent representations that align with the mental models we naturally use.
  3. The benefits of improved programming techniques cannot be easily promulgated, since there is no agreement on what the problems are in the first place. The reward systems in business and academia further discourage non-incremental change.

What is the future of programming? I retain a romantic belief in the potential of scientific revolution. To cite one example, the invention of Calculus provided a revolutionary language for the development of Physics. I believe that there is a “Calculus of programming” waiting to be discovered, which will analogously revolutionize the way we program. Notation does matter.

Our current choice of notation leaves much to be desired: ASCII strings encoding structure through grammars and names. We have automated many complex information artifacts, from documents to diagrams, and we have built many complex data models and user interfaces for them. From this perspective, it is absurd that programs are still just digitized card decks, and programming is done with simulations of keypunch machines. The move from keypunches to on-screen editing was a huge leap forward, but that was 30 years ago. We have not yet taken the next logical step, which is to represent programs with the complex information structures made possible by computers.
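The absurdity is easy to see from inside today's tools: even a text-first toolchain immediately parses the flat string into a rich structure, then throws that structure away between edits. A minimal Python sketch of the structure hiding behind one line of "digitized card deck" (the variable names are invented for illustration):

```python
import ast

# A program is stored and edited as a flat ASCII string...
source = "total = price * quantity + tax"

# ...but the toolchain immediately recovers a structured form from it.
tree = ast.parse(source)

# Walk the tree: the assignment target, then the operator structure.
assign = tree.body[0]
print(type(assign).__name__)                # Assign
print(assign.targets[0].id)                 # total
print(type(assign.value).__name__)          # BinOp (the outer '+')
print(type(assign.value.left.op).__name__)  # Mult (price * quantity)
```

The proposal above is to make a structure like this the primary artifact of programming, rather than a transient by-product of parsing that is discarded after every compile.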

The guiding goal I propose above is programmer usability. One way to discover how to make programming more usable is to look for clues in what currently feels the most difficult and unnatural when we program. Here are some of the clues I am investigating:

  1. We often think of examples to help understand programs. Programs are abstractions, and abstract thinking is hard. Across all fields, examples help us learn and work with abstractions. Example-centric programming is my proposal to integrate tool support for examples throughout programming. Going further, I am trying to reduce the level of abstraction required in the first place with prototype-based semantics and direct-manipulation programming.
  2. We tend to build code by combining and altering existing pieces of code. Copy & paste is ubiquitous, despite universal condemnation. In terms of clues, this is a smoking gun. I propose to decriminalize copy & paste, and to even elevate it into the central mechanism of programming.
  3. The dominant programming languages all adopt the machine model of a program counter and a global memory. Control flow provides conditional execution, and allows memory accesses (side-effects) to be serialized. The problem is that control flow places a total order on the actions of the program, while both conditionals and side-effects are inherently partially ordered. One condition arises when a specific combination of other conditions arise. Likewise we care that a data read occurs only after a specific set of data writes. Control flow languages force us to do a topological sort in our head to compile these relationships into a linear execution schedule. What’s worse, we later have to reverse engineer the tacit relationships back out of the code. Control flow is a suitable notation only for the back-end of a compiler, and should be considered harmful to humans. Escaping the linear confines of textual notation is a first step towards directly expressing non-linear relationships. To better represent side-effect relationships, I am exploring database-like transactional semantics.
  4. Our languages are lobotomized into static and dynamic parts: compile-time and run-time. This dichotomy exists solely for ease of compilation and optimization of performance. Modern hardware and compilers have made these concerns unimportant in most situations, yet they live on enshrined in the design of our languages. There should be no difference between run-time and edit-time. More generally, we must discipline ourselves to ignore the siren call of performance, keeping the tiller pointed towards usability.
  5. We can roughly categorize many mental models into two types: structural and behavioral. Some cognitive scientists trace this division to specializations for vision and communication, respectively. Our programming languages are highly biased about these two representation styles. Generally, we use structural models for static constructs, and we use behavioral models for dynamic constructs. This pattern is seen most clearly in pure OO languages like Smalltalk. We cannot afford to program with one hand tied behind our back: we need to integrate the use of structural and behavioral models throughout programming. To better frame the issue, I am exploring the “dual” of OO, in which the encapsulation of behavior is replaced by the publication of reactive structure.
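Clue 3 can be made concrete with a small sketch. Suppose a read depends on two writes, and a third side-effect is unrelated: the true constraints form a partial order, and the topological sort we do "in our head" is exactly what the standard library's graphlib does below (the action names are invented for illustration):

```python
from graphlib import TopologicalSorter

# The partial order: the read depends on two writes; a fourth
# side-effect is unconstrained. Each key maps to its predecessors.
deps = {
    "read_balance": {"write_deposit", "write_fee"},
    "write_deposit": set(),
    "write_fee": set(),
    "write_log": set(),  # unrelated side-effect, free to go anywhere
}

# The compiler-in-our-head: flatten the partial order into one schedule.
schedule = list(TopologicalSorter(deps).static_order())
print(schedule)

# Any order placing both writes before the read is equally correct;
# the single textual ordering we write down over-specifies the program.
assert schedule.index("write_deposit") < schedule.index("read_balance")
assert schedule.index("write_fee") < schedule.index("read_balance")
```

Control-flow notation forces us to pick one such linearization by hand and then leaves the original dependency graph implicit, to be reverse-engineered later.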

These clues outline a long-term research program, in which I hope to collaborate with others of like mind. The first steps can be seen at subtext.
Leaping ahead, let us suppose for the moment that we have invented a revolutionary new way to program. How can we get programmers to use it, academics to research it, and entrepreneurs to invest in it? As discussed earlier, there are deeply entrenched obstacles to change, both psychological and institutional. The endless cycle of hype and disappointment has hardened cynicism. I am afraid that we have reached the point where talking is useless.

I can think of only one strategy to break this deadlock: lead by example. We need to get our hands dirty and build real code for real users, and do it better-faster-cheaper. Nothing succeeds like success. Thinking even further out, perhaps we could establish something like a teaching hospital: a non-profit organization that combines programming for real clients with a mission of education and research. In the meantime, the first job is the inventing.


[Welcome Diggers. Before you comment on this old post, I suggest you take a look at what I have done in the meantime. The best summary is in my last OOPSLA paper.]

96 Replies to “The future of programming”

  1. Are you familiar with the notion that software design and the software product generally may be “wicked” problems? Problems where the solution changes the nature of the original problem, which requires another solution, and so on? The analogy is to social engineering and the failure of housing projects. This is not something that could be done away with by better planning.

    There are other factors that make the problem “wicked”: problems are always essentially unique; there is no defined stopping point for solutions; there are too many alternatives, and alternatives which, at one point, were possibilities, once implemented are too costly and risky to reverse.

    This may sound like a grab bag of gripes. I’ll have to get the references for this topic. There is a classic paper:
    Rittel, H., and M. Webber (1973). “Dilemmas in a General Theory of Planning.” Policy Sciences, Vol. 4, pp. 155–169. Elsevier Scientific Publishing Company, Amsterdam.

  2. Here is another way of formulating what a wicked problem is, in terms of there being no agreement on the statement of the problem: Coping With Wicked Problems. Here is the first paragraph from the intro:

    Government officials and public managers encounter a class of problems that defy solution, even with the most sophisticated analytical tools. These problems are called “wicked” because they have the following characteristics: 1). There is no definitive statement of the problem; in fact, there is broad disagreement on what ‘the problem’ is. 2). Without a definitive statement of the problem, there can be no definitive solution. In actuality, there are competing solutions that activate a great deal of discord among stakeholders – those who have a stake in the problem and its solution. 3). The problem solving process is complex because constraints, such as resources and political ramifications, are constantly changing. 4). Constraints also change because they are generated by numerous interested parties who “come and go, change their minds, fail to communicate, or otherwise change the rules by which the problem must be solved” (Conklin and Weil, N.D., p. 1).

  3. Nick – certainly some software projects are wicked by this definition, and thus doomed to failure. Maybe software tends to be more wicked than, say, IC design, because software naturally gets more intimately involved with social/human problems.

    What I think distinguishes software is that we fail even on un-wicked problems, where everyone thinks they actually understand the problem and agree on the solution. This is a failure of intelligence, not a failure of socialization.

  4. Counter to your assertion, there are plenty of theories of what’s hard about programming. See, for example, Andrew Ko’s recent paper on the six end-user programming barriers.

  5. Python, particularly with unit tests, goes a long way toward addressing a lot of these issues… In fact, unit tests are a great way to add example-based coding / self-validating documentation to just about any language.

  6. Could you elaborate more on clue #5? The first four clues are something I agreed with and understood, but the fifth made no sense to me. I am not well-versed in smalltalk, so that reference didn’t help. Could you provide examples of structural and behavior constructs in existing languages? Could you elaborate on what the “dual of OOP” is, and what kind of stuff it would entail?

    If you don’t, then, well, I may just have to drop by G706 and ask you directly. 😉

  7. I agree, we are still in a Stone Age of programming. I am really grooving what you’re saying in this blog, I’ll have to bookmark it.

    For the past few years I’ve been thinking on-and-off about how the “ultimate” programming language and IDE should look, but I’ve seen so many cool paradigms and techniques that it seems practically impossible to make an “ultimate” language that could support all the useful ideas that I’ve seen. Aspect oriented programming, compile-time programming, intentional programming, languages with extensible syntax, smart editors that help you refactor and understand your code inside and out, and do duplicate code management, graphical programming (e.g. executable diagrams), languages that make various things implicit/automatic to save typing (such as type declarations), functional programming, example-oriented programming which I read about here today, and so forth.

    It’s clear to me that most all of the ideas I’ve heard are useful and have their place. But any given computer language of today supports only a subset, often just a small subset, of the useful methodologies that have been invented. The most popular languages of today, Java and C++ (and C# too) seem horribly limited to me. Similarly, our toolchains, such as our IDEs, seem very limited to me.

    I’ve been wondering lately just how many people agree with me about this–is my frustration with today’s programming world common, or is my point-of-view obscure in the world-at-large? And I wonder whether there is a community of people hidden somewhere on the web discussing all of this and planning a solution without me. Is there? I’ve seen several disjoint projects to make new computer languages, new GUIs, and even to “reinvent computing”, but these projects don’t seem to know about each other.

    OOPSLA looks like a great event to discuss these things, but I’m sure there are many like me that don’t have the time or money to attend. There really oughtta be a place on the Web for all of this.

    Anyway, keep the great ideas coming. I just love reading about them, even though my usual Microsoft IDE/.NET toolchain provides no opportunity to use them.

    Oh, by the way: your article talks about “dues”… what’s a “due”?

    And I’m not sure what you’re getting at with point #3. Are you saying the conventional “imperative” mode of programming is bad?

    If so… I agree that it isn’t always the best way to go about programming, but other times, in fact, it is. A lot of problems have a solution of the form “do this, then this, then this, and if ‘this’ is the case then do ‘that’, otherwise do ‘this’…” If you’re suggesting that the language of the future would be declarative rather than imperative, that it would look something like Prolog, I must disagree. Both styles are useful, one usually more than the other, depending on the circumstances. For that reason I believe the “ultimate” language would be, at its very core, an imperative, sequential language. It would be this way, not because imperative style is necessarily the “best”, but because it reflects the natural way in which the computer operates. Of course, you could USE other styles in this ultimate language, but you would rely on libraries (of code, syntax, etc.) to do so, and those libraries would implement declarative code in terms of the imperative equivalents.

    In terms of its most fundamental features, the ultimate language should reflect the operation of the computer, because any other design would create a lot of inefficiency. Imagine an extensible language whose core only supported unlimited-precision floating-point, for example. Certainly it could do everything that a fixed-size-integer-supporting language could do, and you could certainly extend the language with an “int32” type that behaves in every respect like a C++ int, but it would be silly to do things this way because it would be horribly slow at integer calculations and probably require a grotesque amount of memory for an array of “ints”.

    Similarly, you could no doubt simulate step-by-step imperative semantics in an extensible language which at its core only supported Prolog-like declarative logical inference…. but the overhead might be unacceptable.

    Conclusion: the “ultimate” language must support all (or almost all) the features that can be expected in the underlying machine, or else it would totally suck for code that relies on those features. The ultimate language would support the slow-but-safe semantics of Java or the fast-but-deadly semantics of C++, whichever the programmer requires.

  8. I’d like to add point 6.

    6. The main intention of the programmer is clouded by exception handling code. Programs should read like a novel, where the main intention of the contract flows chronologically and logically. Exception handling should be like the notations along the margins a religious text, which provides an alternate story and commentary, and not in the main body of the original text.
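One existing approximation of the "margin notes" idea is a context manager that pulls the exception commentary out of the main narrative; the transfer example and its error policy below are invented purely for illustration:

```python
from contextlib import contextmanager

@contextmanager
def margin_notes(log):
    """Collect failures off to the side instead of inline try/excepts."""
    try:
        yield
    except (KeyError, ValueError) as exc:
        log.append(f"{type(exc).__name__}: {exc}")

def transfer(accounts, src, dst, amount, log):
    # The "novel": the main intention reads straight through,
    # while failure commentary lives in the margin above.
    with margin_notes(log):
        if accounts[src] < amount:
            raise ValueError("insufficient funds")
        accounts[src] -= amount
        accounts[dst] += amount

accounts = {"a": 100, "b": 0}
log = []
transfer(accounts, "a", "b", 30, log)
transfer(accounts, "a", "b", 500, log)  # fails quietly into the margin
print(accounts, log)
```

This only hides the handling, not the policy decisions behind it, but it shows how a language could keep the main contract readable while routing the alternate story elsewhere.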

  9. Chui,

    I take it that by exceptions you mean unusual situations handled with conditionals rather then signaling constructs like Java Exceptions. I agree with your point. I speculate on one approach in my OOPSLA paper. See the section on Adaptive Conditionals.

  10. I really like this article because, for one, the author is a good writer, and also because his ideas are pretty well thought out. But I disagree with the idea that we should try and make everything work exactly like our brain. Our brain exists to map concepts from one “world” to another. That’s the job of our brain, to take some abstract concept that doesn’t work exactly like our brain, and combine it with memory to end up with a useful “task”. In the world we live in, compromises are everywhere. In computer science, this is no different. When operating within the confines of reality, encoding our logic into strings is the best way to accomplish all of our goals. I guess my question for the author is then, if not ASCII strings, then what? Here is the other reason why I think this is somewhat flawed. The day we invent hardware which can bring this “ditch the text-based stuff” idea to reality, the implications of such hardware will likely have such huge effects on the industry that our entire goal and reasons why we have computer science will change, therefore changing the needs of computer programming languages. So it is impossible to imagine a world where programming isn’t done with a keyboard, because everything we know and do with computers uses text-based interfaces.

    “We are constantly translating between different representations in our head, mentally compiling them into code, and reverse-engineering them back out of the code. This is wasted overhead.”

    This action takes place in EVERY single learned task humans do. Once again, that’s in the job description of the human brain. I suppose we should get rid of English and try to come up with a new language that uses our native brain signals? We are constantly translating English words/sentences into the concepts and ideas they represent, but that doesn’t mean we should get rid of English!

    “The dominant programming languages all adopt the machine model of a program counter and a global memory. Control flow provides conditional execution, and allows memory accesses (side-effects) to be serialized.”

    The reason they adapt to the machine is that the people tasked with actually implementing such technologies (as opposed to just writing about their faults) have the daunting task of making it work at a level of performance which is acceptable. COMPROMISE is the key word here: we needed speed, and we needed the ability to code at a decent speed to actually be able to create software in a reasonable amount of time. This problem still exists today, no matter what the proponents of garbage-collected slow-as-snails languages want to tell you. The distance by which you abstract yourself from the machine is directly proportional to the performance hit you incur (a loose approximation, but no one can disagree with the premise). Sure, it would be great if I could just think of a program in my mind and have it all of a sudden appear (which is what it sounds like is the goal), but what would separate a computer scientist from a bum if it were so easy? I am against revolution just for the sake of revolution, and the lack of concrete suggestions for improvement places this article right on that line.

  11. Whilst I agree with the sentiment, especially the example based programming, I do believe that the tools almost exist already. What you effectively are looking for is a flexible language that can support all styles of programming, and preserves as much semantic detail as possible – the more the computer understands, the more it can optimise for a given problem (eg. using different algorithms dependent upon current data).

    The language I am thinking of is Maude + BMaude + natural narrowing semantics, this would form your basic operating environment. Note I am considering OS and language to be one and the same thing – every OS has an API which you need to learn regardless of language so this concept is just the logical extension of this.

    Maude is a reflective equational language with a user-definable syntax, it is used to model other languages, or problems which cannot be represented well in other languages. It has a module/view system which should eliminate cut/paste coding – you just import existing code in a similar fashion to OO concepts.
    BMaude deals effectively with behavioral provability which I’d consider essential for dealing with other people’s code (you need to place limits on the code to allow it to do only what it says it does, additionally, it allows you to see what people’s code is supposed to do even when you can’t have access to the source eg. for protection of IP – this aids code re-use).

    Natural narrowing is merely an advanced version of partial evaluation and implementing a practical evaluator in Maude is the hard part to do. BMaude is not available but has at least been specified, so the implementation work would be much easier to do than the evaluator. Currently the ELP group in Valencia are working in the area of narrowing among other projects and appear to be leading the field in this important area.

  12. Your general observation #3 (which I totally agree with) is pretty well covered by Richard Gabriel’s thoughts on creating a Master of Fine Arts in Software.


  13. Not really different than any maturing industry.

    Most everything you say could just as well describe furniture making. For example, to your copy&paste example – people used to copy designs and techniques to keep chair legs up. These got replaced by documents which could describe what to copy&paste for low cost overseas workers; and eventually for automated manufacturing lines.

    Sure software’s still in the “stone age” – well, I’d rephrase that as “maturing industry”, as evidenced by the fact that programmers still get paid OK. But like any commodity, the copy&paste stage (thanks largely to BSD, which gave copy&paste source material for entire OSes) is leading to the manufacturing-assembly-line quite nicely (as seen in Chinese software sweatshops); and the automation of what these companies do manually (translating specs to code) is well underway in some startups focusing on that kind of work.

  14. You are a pompous bag of wind. Just because you can’t grasp programming does not mean that no one can.

  15. It seems that every programmer I have heard discuss Eiffel is very impressed with it, but is stuck in some other legacy platform. I beg level headed programmers and Software Engineers to take a serious look at Eiffel the software platform, which is so painfully ahead of its competition.

  16. Just as computers are very different, brains are too. What is easy for one is difficult for the other, and vice-versa, and in a different situation it might be totally different again. I’d like to be able to work with diagrams at one moment and code the next, hide exception handling at one moment and have it more visible the next. Extensible frameworks are key: IDEs, the web, XUL, XBL, SVG.

  17. We are in the stone age of computers. They have been around for only a couple of decades, and things are just getting good. Wait and see what is out in the next 40 years.

  18. What I find ironic is that it is up to programmers to develop languages and tools for other programmers. I can visualise something in my mind, and sometimes even guess that it will take a good 6-8 hours to develop, yet I’m still no further forward in regard to development times than I was nearly 20 years ago when I started. There will always be some form of input, processing and output, with the attendant problems of making sure the input is captured correctly, processed without error, and output in an acceptable form.

    Until we have computers and languages that work at the speed of thought, with interfaces that are more visible and intuitive and rely less and less upon keyboard input via lines of code, the problems will always exist.

    Of course, IF we ever reach a stage where what we think appears on directly on screen THEN we may not need programmers any more…END IF

  19. I think your concept of usability being the central concern is close. I would say the central concern is maintainability. Many of the problems you identified are related to the act of maintaining hence understanding code.

    I must say I found your idea of encouraged copy and paste rather alarming as the main reason it is discouraged today is that it makes things less maintainable. I would find it interesting to see a system that allowed copy and paste to be used and still provide for system maintainability.

  20. ASCII is insufficient? All the words in the English language (and many others) can be represented in ASCII. How do you expect to name functions/variables/classes, etc.? By drawing pictures? Saying that “ASCII is insufficient” makes it obvious that you have almost no idea what you’re talking about other than spouting off buzzwords.

  21. I read an article with a very similar intent a couple of weeks ago.

    It was about the very fascinating idea of putting the Source Code in a Database, or a SCID. I agree completely with his idea, but implementing it in Java wouldn’t be my first choice. 😛

    I think the problem that SCIDs address, and the problem that this paper is talking about, is more about the input method used to get the program into the computer. It’s a bit like the comparison between typing text in and being able to naturally talk to input text. Typing is something that has to be learned and uses certain abstract, unnatural conventions (the placement of the letters on the keyboard). I think programming in text could also be likened to making a wheel out of stone with a chisel and a hammer versus making one on a lathe. With the chisel and hammer the roundness of the stone is wholly up to you, and one misstep will produce a non-round wheel.

    Thus programming is still in the stone age….

    I can envision building a program in an IDE where there is virtually no code to manipulate. The connections in the program are made in a mind-mapping kind of fashion, and the IDE aggregates the code from your input into any language it’s been built to understand. The result can then be compiled, and it is virtually guaranteed to produce a round wheel.

    This would remove the need to make “the perfect language” or to have to use text commands to communicate with it. Or any other particular way of inputting and modifying your program.

  22. I meant to mention copy and paste as well: what if you had an automatic replacement copy-and-paste tool, so that when you copied and pasted something, the tool would check your variable types and names and other patterns, and go through the pasted section changing variable names and so on until it fit, without ever touching the logical statements that make the code work in the other program.
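A toy version of the tool described in the comment above can be sketched in a few lines of Python; a real tool would operate on the syntax tree and type information, but whole-word substitution is enough to show the idea (the snippet and the rename map are invented):

```python
import re

def adapt_snippet(code, renames):
    """Rewrite identifiers in a pasted snippet to fit the new context,
    leaving the surrounding logic untouched."""
    # Match only whole identifiers, never substrings of longer names.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, renames)) + r")\b")
    return pattern.sub(lambda m: renames[m.group(1)], code)

pasted = "total = sum(order.price for order in orders)"
adapted = adapt_snippet(pasted, {"order": "invoice", "orders": "invoices"})
print(adapted)  # total = sum(invoice.price for invoice in invoices)
```

The interesting research question, as the clue about copy & paste suggests, is keeping the link between the original and the adapted copy alive rather than severing it at paste time.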

  23. Every so often someone comes along and promises this holy land of programming, reaching the goal of completely eliminating the “accidental difficulties” of programming as described in Brooks’ “No Silver Bullet”.

    Being one to give the benefit of the doubt, I went as far as to even watch the demo videos on Subtext. This article promises the elimination of code, but what I was seeing in Subtext was very definitely still code, except that the ability to directly edit the code through linear typing was gone. Further, I felt that the code style of the Subtext “code” was very hard to follow, with dense symbol representation, mostly ASCII, with added symbols such as strange “compass” symbols that pointed to references in other code, and nary any whitespace to cool the eyes.

    Subtext started out innocently enough, and it was in many ways an introduction to another world of programming. Some of the concepts reminded me a bit of stack-based programming. I wondered a bit, though, about the “man behind the curtain” on his seed functions such as sum, difference, and product, and how they were implemented.

    However, as the presenter worked through the example of how to create a Factorial function using Subtext, I noticed that much of the abstract meaning of the code was disappearing – there did not appear to be a means of adding comments to the code-dense statements that Subtext was generating.

    I admit that I had to stop the video, though, when the presenter got to the point of explaining how to create an assignment statement. What appeared on the screen was seven lines of gobbledygook to explain “Employee.Wage = 0”. At that point I was absolutely convinced that the programming model presented by Subtext could not be of any use in reducing the difficulties of programming in my day-to-day career.

  24. Yes, one of the problems is that there are constantly new languages coming out, and people have to study all-new concepts and are inundated with things they don’t understand YET, and so the industry makes them feel like a complete beginner every couple of years. Look just at Visual Basic: they never got the access to DATA quite right, and so they keep re-inventing it every few years: Jet Engine, ADO, DAO, OLEDB, whatever. Talking to a database with command strings, connect strings, etc. People should invent languages that are actually designed for a purpose, such as gaming or business, rather than for the purpose of moving memory cells and variables around in a computer. Instead of creating a language for Beginners with All Purposes (BASIC) or a language to bring down the evil empire (=Java), why not have a language to keep track of football scores (=dBASE II)? The evolution of such a language to the now-contemporary Visual FoxPro 9 is quite amazing. I can write most business applications with much less code than in any other programming language, because I can embed most database commands right into the program code, in-line SQL statements, etc. It’s also been great for web databases, and I’ve even written a web-application firewall with this. I think I’ve been suffering much less than most. So my suggestion: take a good, mature programming language that is for the purpose of what you’re trying to do, learn it WELL, and it will be much easier to get stuff done.

  25. This is an interesting and unusually civil discussion so far. Like Penguin Pete, I feel there are limits to the benefits of simplification.

    I’ve been at this for over 20 years and know this business is mostly about translating human concepts and desires, expressed through human language, into machine control instructions (another language), which result in some real world action (another translation). All translation is approximate. Some concepts are difficult, or impossible, to express in another language. We are inherently constrained by the constructs of our languages and their imperfect abstractions of the real world.

    In all systems – natural and artificial – structure and function are inseparable. Flexibility and adaptability require complexity in both structure and function. Rigidity and sameness do not. Mastery of complex skills makes accomplishing complex tasks easier. If a complex task never changes, its controls can be greatly simplified. But the goals and tasks we encounter in life are always changing, and simplistic (easy to learn/understand) abstractions break down quickly and become part of the problem. We see this with object orientation. Our carefully crafted abstractions often break down over time – often rapidly – under the pressure of the chaos of life.

    Perhaps some future day, we can speak to an artificial intelligence which will carry out our wishes for us, freeing us of any need to understand the complexities of what we ask. But I doubt it. The inherent conceptual and knowledge differences between us will still require imperfect abstractions and translations. If we don’t know what we ask for, we’re certain to be surprised at the results and feel it’s all still “too hard to use”.

  26. To create a revolution in the future of programming, you need to look at the simplicity of nature… the answer is there. Computers and computer-programming technologies are based on our old comprehension of life and our own un-evolved minds, which has led us down the wrong path. To have a revolution, one needs to forget everything we have done so far, start clean, and truly understand the simplicity of nature and how human minds really work.

  27. The solution to this problem can, and I think will, be discovered by educators. For those of us who have been programming for years, our mind/language paradigm is very different from the average person’s. Only when a greater percentage of the population is capable of taking advantage of the tools we have been privileged to know and use will programming rise out of the “stone age”.

    We are essentially a group of cave dwellers who have developed a type of tool that allows us to create drawings on the wall. Yet hand this tool to someone else and they will stare at it, confused. Someone needs to create the easel, the paintbrush, and the colors for programming to reach the level of “art”.

  28. Declarative languages are just specifications of exactly what we want, so the compiler/runtime decides how to actually run the thing. There are various ways to describe our pure specs (logical, functional, flow, DSLs, etc.), but these only succeed as domains mature – they are often too much of a shift, so users want toned-down versions relevant to their domains. Programmers are stuck in the C/Java mindset, so it is really engineers, accountants, secretaries, etc. who benefit from these approaches. As soon as you say “linear temporal logic” to a programmer or engineer, their eyes glaze over, even if that’s really what they want: synthesis of a system just by writing some properties about it. So it’s clear that’s not what we want as the exposed model. Let’s look at your other points of “originality”:

    Copy & Pasting? Check out work at U.W. I think this was somebody’s thesis and an active project there – who knows where else it is being examined.

    Behavioural/Reactive Object-Oriented programming? This work started at Yale 10 years ago and has been growing since. I won’t comment further on it since it is my active line of research, but I believe in it.

    Reactive programming, temporal logic/verification, message passing, contracts, unit testing, etc. These are becoming necessities for programming, but I believe the Java world has demonstrated that we need the type-theory & program-analysis communities to step up to make these techniques usable within an IDE – they need to catch up on analysis of features that the LISP community (Scheme, Erlang, etc.) has demonstrated we want. Propose a new approach, or make an existing one analyzable – a GPL won’t fly, so I’m skeptical of what you are proposing.

    Want to really think big? Imagine a system where input and output are specified and the system infers the rest, potentially asking questions [e.g., Ersatz]. If you’re not ready to think at that level, take a closer look at what the academic community has been doing in ALL of the points you mentioned.
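    A toy sketch of that idea – give input/output pairs and let the system infer the rest – might enumerate a small candidate space and keep the first function consistent with every example. The candidate list here is invented for illustration; real synthesizers search vastly richer program spaces:

```python
# A tiny hand-picked space of candidate programs (purely illustrative).
CANDIDATES = [
    ("x + 1", lambda x: x + 1),
    ("x * 2", lambda x: x * 2),
    ("x * x", lambda x: x * x),
]

def synthesize(examples):
    """Return the first candidate matching all (input, output) pairs."""
    for name, f in CANDIDATES:
        if all(f(i) == o for i, o in examples):
            return name
    return None  # no program in the space fits the spec

print(synthesize([(2, 4), (5, 10)]))  # -> x * 2
```

    Even this toy shows the trade-off the comment hints at: the user writes only a specification, but the quality of the answer depends entirely on the search space behind the curtain.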

    I’m not sure why I wrote this entire post… these sorts of articles are depressing. They tell me ‘innovators’ are just reinventing the wheel and ignoring the work of others, getting nothing much done, and those looking to the ‘innovators’ for answers will continue to be misled.

  29. Have you ever used WorkView? The new Lego Mindstorms IDE? Any IC design/CAD tools? That kind of programming interface makes much more sense!
    I think the main failing in programming is that it tries to use such an inconsistent, non-linear, inference-heavy form of communication as written language. The biggest problem with languages in general, especially non pictographic languages, is that they were designed to simply record sound into a visual/mechanical form, before sound could be recorded directly in audio form. Since it was the main communication tool we had, that successfully and consistently carried repeatable control past single individuals, we tried to shoehorn it into all kinds of inappropriate uses. It is the ultimate case of have-hammer/appear-nail syndrome.
    Let’s face it: computers are machines. They have a lot of non-moving parts, thanks to the miracles of electricity, magnetics, solid-state transistors, and even optics — but they are no less machines than cars. Yet we try to interface with them via human communication means — rather than mechanical communication means, as in the steering wheel, gas, and brake pedal of a car. We tried to do this well before we had any hope of passing any Turing test. Why? Probably because it was the closest thing we had to a well-known serial command format to convert to serial machine code. Close enough at the time, anyway.
    Now, we are in the age of semi-realistic 3D graphics, and advanced 2D vector graphics, aided by coding “paradigms” that are shoehorned into our written language (OO and call-backs), to more closely reflect the needed visuals and interaction of these new communication formats. Why do we still use this archaic, arbitrary conversion of phonemes to mechanical instructions? Every other engineering discipline uses interfaces that make more sense to the machine’s pre-existing form — wheels, buttons, pedals, valves, switches, handles, levers, pulls, etc. Why can’t we?
    Why do you think it is, that every large coding project is at some point written down in chart/diagram form? Flow charts, network diagrams, relationship graphs, I/O maps, etc. These visual, almost mechanical, forms and connections all make better sense to the types of objects and problems we’re dealing with. Even computer storage “containers” are represented as boxes, with interconnected sub-folders, in all graphic file browsers. It is easier to communicate to each other with these visual forms. Why shouldn’t it be possible to communicate with the machines using these interconnected visual forms as well?
    Bits are constantly pushed around to different boxes (chips) through different set pipelines (electronic-wire or optical-beam transmission). Some boxes store bits (memory), others transfigure them (processors), and yet others form simple on/off valves (gates). Why not represent these boxes and pipes directly? What should these boxes care about any human notions — even simple ones like “if”? All they know is “Bit State In => Bit State Out”. And that’s all we should need to know, to get them to do what we want.

    Human notions should be secondary to that clarity of interface to the machine.
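    The boxes-and-pipes view is easy to sketch: each box is a pure mapping from bit states in to bit states out, and everything else is wiring. A minimal Python illustration (the gates are the standard ones; the wiring choices are mine):

```python
# Each "box" maps input bits to an output bit -- nothing more.
def NAND(a, b):
    return 0 if (a and b) else 1

def NOT(a):
    # One box built by piping a NAND's two inputs together.
    return NAND(a, a)

def AND(a, b):
    # Two boxes in a pipe: NAND feeding NOT.
    return NOT(NAND(a, b))

# "Bit State In => Bit State Out", over every input combination:
print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # -> [0, 0, 0, 1]
```

    This is essentially what IC-design tools like the ones comment 29 mentions let you draw directly, rather than spell out in text.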

  30. This is quite possibly the single worst article on programming I have _ever_ had the misfortune of reading.

    “WAH! WAH! Programming is too hard for me! Someone fix it! WAH! WAH!”

  31. Dig into Cognitive-Science.

    And take a look at Lisp – Common Lisp – or its modern-day equivalent, Ruby. Smalltalk may interest you as well.

    Has anyone ever seen a study/paper on why so many people report those languages as being “fun” and “intuitive”, versus how people attach “laborious” and “tedious” to C++, etc.?

    Not looking to start a flame war, just interested in the CogSCI portion of the discussion.

  32. I don’t think we’re looking for a revolution, like you suggest. Instead I think we’re mainly looking to build new constructs on top of the languages which we already have. Constructs which appear more “natural” to the human mind, as you suggest…

    But as for the basis of it all – the compilers and everything behind them – we don’t have to make changes. Nothing needs to be pulled up by the roots… Things just have to be added.


  33. Python, it seems to me, has been designed to address all those issues. Python is the first language to truly understand the value of conciseness, clarity, modularity and complexity hiding.
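    As one small illustration of the conciseness and complexity hiding this comment means, a word-frequency count in Python is a couple of lines – the hashing, counting, and sorting all live behind the standard library (the sample text is invented):

```python
from collections import Counter

# Count word frequencies at the level of the problem, not the machine:
# no loops over memory cells, no hand-rolled hash table.
text = "the quick brown fox jumps over the lazy dog the end"
top = Counter(text.split()).most_common(1)
print(top)  # -> [('the', 3)]
```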

  34. >In terms of clues, this is a smoking gun. I propose to decriminalize copy &
    >paste, and to even elevate it into the central mechanism of programming.

    In terms of clues, that last sentence about copy and paste indicates you don’t have one. The art of code reuse without copy and paste is already a central mechanism of programming.

    As a programmer I think you make a good journalist.

  35. It’s all philosophical – if you’re successful, you don’t complain about your languages and methods. If you’re making tons of money and your customers like your products, why bother complaining about your tools?

    My other point is – any language you use is ALWAYS tied to an operating environment, even cross-platform ones. Imagining you can invent a perfect language that works with a very imperfect environment is a waste of time.

  36. Non-procedural (SQL-like) “here’s what I want from here’s what we have” statements are a good idea. For UIs, try Qt-like horizontal/vertical containers with expanders in between pushing the extremities. You could also use that for abstract data representation – where data is also described by what it is, as in TeX or XML.
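    The expander idea can be sketched in a few lines of Python: fixed-width items keep their size, and whatever space is left over is split among the expanders, pushing their neighbours toward the extremities. The `layout` function and its item encoding are invented for illustration, not any real toolkit’s API:

```python
def layout(total, items):
    """Distribute `total` width: ints are fixed widths, the string
    "expand" marks an expander that soaks up leftover space."""
    fixed = sum(i for i in items if isinstance(i, int))
    expanders = items.count("expand")
    share = (total - fixed) // expanders if expanders else 0
    return [i if isinstance(i, int) else share for i in items]

# Two fixed widgets with an expander between them are pushed
# to the extremities of a 100-unit-wide container:
print(layout(100, [20, "expand", 30]))  # -> [20, 50, 30]
```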

    And, above all, base it on normal English and/or Roman/Arabic characters. A quote looks like this “ and not this ‘ — make left-shift-quote give a left quote and right-shift-quote give a right quote, so we no longer need escapes. And let comments be recursively embeddable, too… It’s hard to comment out large sections of code without an inner comment breaking it.

  37. I think the way forward is to make a language not composed of words (think of how maths works, for example), making the language international and understood by all.

    I think it needs to be extremely simple and uniform (e.g. not going from machine code to assembly to high-level code).

    It would help greatly if it were visual, around 70% of our communication is visual (we use our eyes the most)

    And last, I really think we need to remove “compiling” – we should be able to change the structure of a program in real time, as we see fit.

    My suggestion is to create a graphical real-time relationship system where you can change any relationship in real time, with its detail being scalable, so everyone can easily see and change the highest-level “code”, and the people who want to optimise can zoom in, Google Earth style, to see the nitty-gritty – all this without using a single word.

    As I said, our current system is linear, and our language is limiting us.

    I completely agree, we need a significant change.

  38. Meh… you don’t have an answer here… All you’re going to get is developers bit@ching at you and your stupid thoughts, and then further insisting EVERYTHING you do sucks and that people in real life should hate you and all that you do…

    Can you get past that? Do you know that anyone can program if taught by a good teacher, NOT a book…

    You’re right, there’s no perfect language… But unless you can make one, griping about it won’t help…

    Do you have a plan????

    I have the same message but very short.

    But I don’t have a plan for it… I know how I want a visual programming tool, but I ain’t working to make it, ’cause I can’t – I know I can’t. But d@mn, it sure would be nice…

  39. “OMG math is soooo hard too!!! we need to fix it!!!”

    1. Programming is mentally overwhelming, even for the smartest of us. The more I program, the harder I realize it is.

    ORLY? I don’t have a problem understanding how my programs work. I take time to lay out how I want something to work. If part of your code becomes unwieldy, it’s poor design or poor implementation. Don’t blame a programming language for your inabilities. There are many good tools, and yes, you have to pay for some very advanced ones. If you need something, get it from someone else or invent it.

    2. Much of this mental effort is laborious maintenance of the many mental models we utilize while programming.

    UML? Objects? Not all problems can be solved with a silver bullet, so they become elaborate. However, as with objects, you break the problem into parts and build towards your final project.

    We are constantly translating between different representations in our head, mentally compiling them into code, and reverse-engineering them back out of the code.

    I personally have no problem reading my code and understanding it; it’s like a second language to me. Also, this is why OO was created: to simplify things. I mean, it’s like building anything – we have parts to build either larger parts or the final project. We have nails, screws and lumber, and you don’t have to know how to make any of them, only how to use them. OO is all about being able to reuse small parts to make larger parts, which can then be used to make something even larger. It’s that whole not-reinventing-the-wheel bit. If you can’t assemble a desk you ordered out of a catalog, it might be a sign you are not cut out to be any type of engineer.

    3. We have no agreement on what the problems of programming are, much less how to fix them.

    How do you know there is even a problem in the first place?

    Our field has little accumulated wisdom, because we do not invest enough in the critical self-examination needed to obtain it: we have a reflective blind-spot.

    NOT TRUE! I didn’t come up with the whole structured-programming or OO design on my own; someone found there was a problem and solved it, and the knowledge was passed on. Also, it is not programmers who do not learn; it’s the non-programmers who decide how the programmers must make a product. A programmer who can logically lay out a project and assign parts of it to their team should be in charge. Also, we learned to use APIs.

  40. Programming isn’t getting harder; expectations are getting higher. Today we have a dozen dynamic scripting languages that turn our complex ideas into a dozen lines or less. These scripting languages have functions to grab data from SQL databases in a couple of lines of code.

    We have ready-made classes that handle the most complex data structures in their sleep – linked lists, doubly linked lists, groups are all interchangeable.
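    For example, Python’s standard library ships such a ready-made structure: collections.deque, a doubly linked container with constant-time operations at both ends, usable without knowing how its nodes are wired:

```python
from collections import deque

# A ready-made linked structure: no node classes, no pointer juggling.
d = deque([1, 2, 3])
d.appendleft(0)   # O(1) at the left end -- the point of a linked list
d.append(4)       # O(1) at the right end
print(list(d))    # -> [0, 1, 2, 3, 4]
```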

    We have browsers that make WYSIWYG presentation painless.

    But all of this comes at a cost: expectations. We all know how easy it is to print a list of names or add a name to a list, so that isn’t good enough; we now need to make our interface pretty and have adding data to our database happen on a single screen – no reloads, just redraw the parts of the screen that change and redisplay our list to the user’s specification on the fly.

    We also have to meet the expectation of scalability, so our application can’t just handle one user adding data to our files. We have to deal with hundreds, using multiple threads to do the updating and verifying our data is correct; of course we want to do all the data verification in the browser, so our work just got even more complex. Multithreading and security are where our new fancy dynamic languages fall down; we are basically using the same methods of debugging that have been used for decades. The only bright light in this mess is DTrace, which at least gives you the tools to debug single- and multithreaded tasks without changing our code or where it loads. Security is still pretty much unchanged, with very little expected to change in the future.

Comments are closed.