The internets are buzzing with new IDE ideas. I credit Bret Victor's masterful demo for much of this. Chris Granger is having amazing success kickstarting his IDE concept. Josh Marinacci discussed some possibilities. [Another one: Instant C#] I have been working in this area for over a decade and have very mixed feelings about these events. On the one hand, it is great to wake people out of their stupor and show them what might be possible. But on the other hand I am bothered by the unspoken implication that such things are possible with current programming languages. Just slap a magical new IDE on top of Java or JavaScript and the world will be a better place. Unfortunately I don't believe that is possible, and I fear it will lead only to disappointment and further fatalism.
Live code execution is an instructive issue. Annotate the code with actual execution values. See the semantics of the code change as you edit it. Eliminate debugging! Cool idea. A cool idea that has been proposed repeatedly for over 30 years. There is a long history of sketchy demos of live code execution that have never progressed past the demo (including my own). The reason is that there are fundamental and intractable problems. What about mutable state? [but see Circa] What about I/O? What about non-determinism and asynchrony? If live code execution only works for factorial functions and the like, it is pretty much irrelevant to real programmers. In general you will still need to edit dead code and use a traditional debugger to examine its execution. So all you have done is complicate the IDE. Why bother?
The same tradeoff kills many proposals for fancy new ways to visualize code. Hundreds of papers have been published about new ways to query and visualize code. They look pretty in cherry-picked examples, but they don’t work for the whole range and scale of real world programs. So why bother with the complexity of limited purpose features? Coming up with new general purpose code visualizations is actually very hard, especially if you are not allowed to change the language to help out.
The fundamental reason IDEs have dead-ended is that they are constrained by the syntax and semantics of our programming languages. Our programming languages were all designed to be used with a text editor. It is therefore not surprising that our IDEs amount to tarted-up text editors. Likewise our programming languages were all designed with an imperative semantics that efficiently matches the hardware but defies static visualization. Indeed it would be a miracle if we could slap a new IDE on top of an old language and magically alter its syntactic and semantic assumptions. I don’t believe in miracles.
Languages and IDEs have co-evolved and neither can change without the other also changing. That is why three years ago I put aside my IDE work to focus on language design. Getting rid of imperative semantics is one of the goals. Another is getting rid of source text files (as well as ASTs, which carry all the baggage of a textual encoding minus the readability). This has turned out to be really, really hard. And lonely – no one wants to even talk about these crazy ideas. Nevertheless I firmly believe that so long as we are programming in descendants of assembly language we will continue to program in descendants of text editors.
I totally agree with you on the co-evolution of languages and IDEs. I've also been thinking about losing the text files and building some kind of "wizard based" IDE which would work like filling out (intelligent) forms. Couldn't yet figure out method bodies though.
Look at the existing Squeak IDE. The concept of using a “wizard” in the IDE has been around for more than 30 years:
> which would work like filling out (intelligent) forms
Hypercard? .Net?
>> which would work like filling out (intelligent) forms
>
> Hypercard? .Net?
You are confusing .NET with Visual Basic. Possibly even Visual Basic 6. I spent most of my days in Visual Studio 2010 and barely need to touch a mouse, let alone a wizard.
I answered all this already, here:
http://www.oreillynet.com/onlamp/blog/2008/05/dynamic_languages_vs_editors.html
“Each time you run the tests, the editor should instrument your interpreter to extract type information.
“Each test run should update a type library, containing the fully-derived type of every object found on every line of the source code, complete with the call stack that put it there.”
Now we just need a cunning linguist to implement my idea.
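For concreteness, here is a minimal sketch of that idea in Python (hypothetical names, and it ignores the call-stack part of the proposal): run the tests under a tracer and record the runtime types seen on each line, so an editor could annotate the source afterwards.

```python
# Minimal sketch: record runtime types per source line while tests run.
import sys
from collections import defaultdict

# (file, line) -> {variable name -> set of type names observed there}
type_library = defaultdict(lambda: defaultdict(set))

def _tracer(frame, event, arg):
    if event == "line":
        key = (frame.f_code.co_filename, frame.f_lineno)
        for name, value in frame.f_locals.items():
            type_library[key][name].add(type(value).__name__)
    return _tracer  # keep tracing inside this frame

def run_with_type_tracing(test_fn):
    """Run one test under the tracer and update the type library."""
    sys.settrace(_tracer)
    try:
        test_fn()
    finally:
        sys.settrace(None)

# After run_with_type_tracing(some_test), an editor could annotate each
# line with the types collected in type_library.
```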
You might be interested in Strongtalk, a version of Smalltalk with optional static typing (and a type inferencer). Some of the technology there made it into Java (the project was bought by Sun).
Small note: you should differentiate between static types and runtime types in your proposal. Running yields runtime types (actual types); as a developer you are (also/primarily) interested in static types (because they "summarize" the types you can pass). I made a type inferencer for Smalltalk (http://people.cs.kuleuven.be/~roel.wuyts/roeltyper/index.html) with a paper that describes some of this. It can indeed tell you why it proposes a type, which is useful for developers. Contact me for more info 😉
Jonathan, you are not alone. I believe any progress is progress, so these explorations into IDE designs do help the cause, but I completely agree with you. The problem is how we have let "programming" continue based on the limitations of the past. I'm here and eager to help.
Can you expand the comment about getting rid of an AST? Is it because the S is for Syntax rather than Semantics? It seems like a tree representation of your code isn’t a problem inherently. But I tend to agree with you in general regarding the limitations of current programming languages. I think the trick, though, is that humans find it easier to understand problems when the solution is laid out imperatively. I could be biased based on my education but I do believe that is true.
There are two representations of source code, Abstract Syntax and Concrete Syntax. They are not quite dual, but potentially invertible. Such inversion expresses the relationship between a data structure and its pretty printer. Further, we can regard code as text, trees, or graphs. Even further, there are non-grammatical ('emergentist') and grammatical approaches to language.
There are a lot of ways to slice and dice the problem domain of language design for IDEs, but you’ll always end up somewhere in this design space.
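To make the abstract/concrete distinction tangible, here is a toy sketch of my own (not from any particular system): an abstract syntax for arithmetic and a pretty printer mapping it to concrete syntax; a parser would be its rough inverse.

```python
# Toy illustration: abstract syntax plus a pretty printer to concrete syntax.
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

def pretty(node) -> str:
    """Abstract syntax -> concrete syntax (text)."""
    if isinstance(node, Num):
        return str(node.value)
    if isinstance(node, Add):
        return f"({pretty(node.left)} + {pretty(node.right)})"
    raise TypeError(node)

# pretty(Add(Num(1), Add(Num(2), Num(3))))  ->  "(1 + (2 + 3))"
# The inversion is only "potential": parentheses, whitespace and comments
# are concrete-syntax details the abstract syntax does not record.
```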
Cheers,
Z-Bo
So you’re saying that serializing semantics into human readable ascii is necessarily lossy? And a visual programming ‘language’ is the cure for this because it may allow us to represent the information in a rich enough way to both be human readable and unambiguously Concrete to the compiler?
Can you expand on what you mean by ’emergentist’? That’s new to me.
I agree that for technical reasons the end goal certainly needs to be to reinvent the whole stack, the editor and the languages, but for marketing reasons I believe there needs to be an intermediate step (or several even) to reduce the impedance mismatch.
If we can make a structured editor for JavaScript (or more likely C#, Java, or some other more static language) that doesn't suck, it will give users a bridge. If it's done well they won't have to learn anything new to start using it, which is hugely important to actually get it adopted. The improved editing, customizable projections, powerful refactoring, and improved version control should be enough to convince a significant percentage of people that this direction is the way to go. Now that you've made this structured editor you can throw in a couple of mind-blowing examples of what's possible that wasn't possible before, but that generate down to the existing base language rather than requiring a whole new language (for example your semantic tables from the Subtext 2 demo). You could even add a language extension to mark a specific function as having no side effects (à la C++'s const methods, but no referencing global state either), which could enable you to do live code execution for that specific function. Even if it only works for factorials, putting it in someone's hand in the context of a fully functioning environment that does everything they've been able to do before lets you tell the story "Cool right? Now imagine if we built a language around this" – and you'll get people interested in the final step, and believing it's actually possible, more than a video demo can.
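As a rough illustration of that last point – purely a hypothetical sketch, not a feature of any existing editor – here is how a "no side effects" marker might be approximated in a dynamic language, so a tool could decide which functions are safe to re-evaluate live:

```python
# Hypothetical sketch: a decorator an editor could use to decide a function
# is safe to re-evaluate live. It only checks that the function names
# nothing outside its own locals and the builtins -- a crude approximation
# of "no side effects", not a real guarantee.
import builtins

def no_side_effects(fn):
    outside = [n for n in fn.__code__.co_names if not hasattr(builtins, n)]
    if outside:
        raise ValueError(f"{fn.__name__} touches non-local names: {outside}")
    fn.__live_evaluable__ = True  # a flag a structured editor could look for
    return fn

@no_side_effects
def jump_height(v0, gravity):
    # peak height of an idealized jump: v0^2 / (2 * g)
    return v0 * v0 / (2 * gravity)
```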
If instead of making an editor for a specific language you make the structured editor a Language Workbench (http://martinfowler.com/articles/languageWorkbench.html) with one fully implemented language folks are familiar with then the next step of making a new language specifically designed for this system means the user doesn’t have to change systems. It also means you can do multiple existing languages to attract a larger audience.
Jetbrains’ MPS (http://www.jetbrains.com/mps/) is an early example of exactly such a product. It still needs a lot of work to be attractive enough to a wide audience, but I believe it’s already attractive enough to help convince people that are already leaning towards this idea.
I think Light Table is an important sub-step to the structured editor. It’s starting to break people’s notions about what a code editor is, but not scaring them by taking away their giant text boxes. Once they’re hooked on these features it’s easier to show them that if we go structured it unlocks a whole new set of things.
MPS is a total failure. All it does is allow you to marginally reduce boilerplate, without facilitating reasoning about your code. It appeals to people who like the idea of impressive mounds of code.
Working directly with the program's graph structure instead of with syntax is definitely a huge leap forward, even if it's the same familiar semantics underneath. Your [brilliant] Subtext demos are what made that clear to me. It was a thunderbolt when I first saw it—now it seems obvious.
Killing imperative programming is an extremely ambitious goal though. Not only from a mathematical standpoint, but also from the standpoint of being adopted by industry, for a whole raft of reasons. We need to move forward one baby step at a time.
But I do think there are possibilities that lie between those extremes. One thing syntax has going for it is that it enables a conciseness that you don't necessarily get from visualized data flow graphs; so it may be beneficial to keep a syntactic view of the program, if we can work with whole tokens or nodes instead of mere characters.
Where I think the current efforts might be going off track is that the emphasis is on improving the usability of IDEs for debugging rather than specification / verification. It will make problems easier to find and correct, but it won't do anything for proving the correctness of the system, which is a more fundamental goal than debugging.
My conception of the next gen IDE is one where unit tests / specs are first class citizens. It’s not about code but units of behavior. More direct access to the program graph will also be a huge boon for verification because it will mean we can isolate units of code in a much more natural, agile manner.
How do you define correctness? Or rather, can you always define correctness? Debugging is nice when you really aren't sure what is correct; not all (most?) of us can think top down from specification to code, there is often a tight feedback cycle involved.
It would be nice to be able to easily throw a test in when you think it's useful, and have it execute continuously as you are writing code. I'm much more wary of (semi-)formal specification and automated verification; outside of a few safety critical domains, they are just too expensive, and it has nothing to do with accessing the code.
To borrow from Hal Webre: "correct software does what it's supposed to do, and only that". True, for some exceptional cases it's very difficult to specify correctness in a satisfactory way; but for the large majority of the requirements you find in most apps, that's not the case.
Verification can be expensive, but so is buggy, poorly tested software. The goal of next gen IDEs should be to make it less expensive to have properly verifiable code. Debugging is only half the battle, in my view.
I think you have it backwards: for some exceptional cases, correctness can be specified in a satisfactory way. There are even some cases (avionics) where correctness has to be specified.
The problem with semantic-structure editors is, as you say, that they are hard. I saw this first hand when I was working at Intentional Software: the software is magical, but extraordinarily complex. The challenge is twofold: how do you build such a complex piece of software, and how do you hide this complexity while allowing developers to be productive in both using and extending it? The first challenge, building such a thing, is hard enough. I have very little hope we will see the second challenge met until there are a few working systems available that are cumbersome and hard to use. We're not even there yet, so I am skeptical I will still be alive when the great structural editor renaissance comes to be. 🙂
Building a structural editor is certainly challenging – I have been working on such a project myself recently. Your experience at Intentional is interesting – you said that the second challenge is particularly tough. How does the usability of Intentional’s code editor compare to that of Jetbrains’ MPS? Does it manage to ’emulate’ the free-form interface of a text editor successfully, or do you have to learn a new set of interactions to use it? Hope you don’t mind me asking – I am very curious about these things! 🙂
Hey Geoff, I also have worked at ISC (not at the same time as Greg, I’m not sure we’ve ever met, maybe we should fix that Greg). I’ve also used MPS quite a bit recently.
Emulating a text editor isn’t one of the values of ISC’s editor and I think that’s the right decision ultimately, though to get people to try it you might need at least one projection that does emulate a text editor. I find editing using either ISC’s editor or MPS very quick.
I’d be interested to hear more about your project. Even if it’s in super early stages. You can email me if you want to take this offline (so to speak) jbrownson@gmail.com.
I probably can't talk too much about ISC's editor, but as Jake says it was altogether a different experience at the time than using a traditional text editor. It was fast once you got used to the shift. When I referred to complexity it wasn't about its usability as an editor. As an editor, when all is set up and your languages are in place, it's quite fine. The complexity comes into play just by the nature of what you are dealing with, since the medium itself is more complex than raw text. This wasn't really a dig; saying it's complex is like saying the space shuttle is complex – it's a necessary condition based upon the environment being worked in.
The open question to me is if this is *necessary* complexity or *incidental* complexity within the problem space of domain-driven design. So far it seems necessary but I’m not going to bet against someone coming up with a “worse-is-better” solution that gets you 80% of the way there with much less complexity involved. I’ve certainly racked my brain trying but have failed to come up with anything that doesn’t make me weak in the knees when setting out trying to build such a thing.
I have been bugging ISC for a long time to publish their work. By many accounts they have the best structure editor ever built, but only sketchy details have been revealed. They almost did an Onward paper this year, so I’ll keep trying.
Personally, I think LISP had the right idea – it’s just that S-expressions aren’t expressive enough…
Gah, disappointing to hear they didn’t follow through on Onward :\
Yeah. I am pretty pumped about LightTable. When I get my grubby hands on it the first thing I will be doing is seeing how hard it is to turn it into a projectional editor of s-expressions. (raw text on left, higher level, read-only projection on right)
I have just hastily prepared a video of my syntax-recognizing structure editor that I have built as part of my project – ‘The Larch Environment’.
Here:
While Larch operates on a structural data model, its editor behaves much like a normal text editor, so it should feel familiar to most programmers.
Thanks Geoff, Larch looks very highly developed. I am interested in why you built a Python environment in Java? More importantly, what have you learned? What are the technically hard parts and unsolved open problems? How do people react to it?
Thank you for your compliments! 🙂
I targeted Python because it's a nice language that I got to like. It was originally pure Python, but performance problems pushed me to implement much of it in another, faster language – I chose Java.
As for what I have learned?
I built a pure structure editor early on, in which all edit operations were structure-oriented. Keyboard shortcuts were used to create ifs/whiles/fors/etc, while specific key presses (+, *, /, -, etc) wrapped expressions in operators. It was a bit faster to use than an equation editor. I found that it was an interesting prototype, but essentially useless. So I followed in the footsteps of Andrew Ko's Barista and made a syntax-recognizing editor to replace it. I find this to be a very acceptable compromise. The text editor style behaviour should ensure that it is not completely alien to existing programmers, and should make adoption easier, while still retaining the benefits of a structural model.
A Python structure editor is not that valuable in and of itself – the main value comes from avenues for further work.
Structural models allow you to properly embed languages within one another – instead of putting the embedded language within a comment or string of the host language.
They also provide opportunities to enhance the presentation of difficult
Having the ability to construct visual representations of objects throughout the system is also very valuable.
Dynamic languages such as Python are useful because you can immediately compile and execute code, and they are very flexible. This allowed me to build my Notebook interface.
Technically hard parts?
Getting a syntax recognizing structure editor to the point where it is usable. Quite a lot of effort went into this, along with the effort required to develop the presentation/GUI system that supports it.
Unsolved open problems?
Integration with existing tools, like version control, etc.
I have demoed Larch at Python conferences (Europython 2011, PyCon UK 2011, and PyCon Ireland 2011) and the questions have often centred around ‘how do I use this with existing text based projects?’
Unfortunately, there is a lot of text based code out there which people can’t throw away – they have to continue to build upon it. People who would like to use my editor often have to work with others who refuse to leave emacs/vi/etc. As a consequence, allowing Larch to interoperate with these existing ecosystems is almost certainly a requirement.
I think that the two problems that need to be addressed are:
1. Integration with text based projects – export to text in such a way that it minimises the diffs it creates, to avoid driving your version control software crazy.
2. Direct version control at the object/AST level – so that you can at least do version control if you are willing to abandon text.
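As a very rough sketch of what point 2 could look like in practice (assuming Python and its standard ast module – none of this is from Larch), comparing two versions of a module structurally rather than line by line:

```python
# Rough sketch of "version control at the AST level": find which top-level
# functions changed structurally, ignoring whitespace-only edits.
import ast

def changed_functions(old_src: str, new_src: str):
    """Return names of top-level functions whose structure changed."""
    def index(src):
        tree = ast.parse(src)
        return {node.name: ast.dump(node)          # structural fingerprint
                for node in tree.body
                if isinstance(node, ast.FunctionDef)}
    old, new = index(old_src), index(new_src)
    return sorted(name for name in old.keys() & new.keys()
                  if old[name] != new[name])

# changed_functions("def f():\n    return 1\n",
#                   "def f():\n    return 2\n")   ->  ['f']
```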
People’s reaction?
People's reaction has been mostly positive. They would like to use it, but can't yet see how it will fit into their current workflow. This is quite understandable – it doesn't play well with external tools yet.
The idea is not crazy at all. Why are we using files in programming? Because they are portable. Why do we program the same way as before, and why do we still call it a programming language? In the beginning the programmer had to remember everything; now you don't have to (at least for some tasks). It makes sense to start something completely new.
I never program with files (conceptually).
I program with objects (when I’m doing OO), or functions (when it is functional), or predicates (when it is Prolog). None of these are files. I consider it very unfortunate that in almost all programming languages files have an (implicit or explicit) influence on the semantics. Therefore I have to think about files as compilation units.
Note that it should not necessarily be like that, but in most cases it is.
For an exception in OO languages look at Smalltalk or Self. There you work all the time with objects (and textual representations of objects, so the coding itself is still purely textual), not with files. You can store programs as files if you want to, but that is only for storage in the mindset of most Smalltalkers (even though there is no problem editing this file and then loading it back in, which most Smalltalkers would consider stone-age development 🙂 ).
Regarding portability: as a storage medium files are portable. Why would this make them good for editing? Do molecular biologists use files to edit their molecules? Do architects use files to work on their buildings, or 3D editors? Etc.
Why is there an apostrophe in IDEs? It’s not possessive.
[You’re right, it seems most authorities agree on that. Except the New York Times. – Jonathan]
“Another is getting rid of source text files”
I agree, relying on text files and the file system seems like a very ’20th century’ technology to me.
Text files and the file system leak a tremendous amount of interesting information. There was a time when storage limitations made this a reasonable tradeoff. Nowadays, there’s no reason not to store programs in different formats.
Take a look at the Spoon project for Smalltalk: http://www.netjam.org/spoon/
I like text files because, pragmatically, all I need is a text editor to at least be able to see the code. When it's three in the morning, you are working with someone else's code, and you aren't even sure which language it was written in, NotePad or cat are your friends.
I shudder when I remember reverse engineering a binary file because someone left and destroyed the source.
How will I interact with these semantic structure editors? Will it be an OS level tool?
Your concern is valid. We want our code to be accessible and portable. We don’t want our code locked up in proprietary binary formats, subject to administrative whims beyond our control.
But I believe we can address those concerns without limiting ourselves to text editors. We just need transparent structure and open standards for access, e.g. SQL or XML or JSON for structure and ODBC or HTTP or a Mongo or git protocol for access.
These days there wouldn’t be a problem creating web services for programming, even on the client side. Browsers are as widely accessible as text editors. Think about it.
I agree. It’s easy to forget that text files are binary files too. It’s just a really widespread standard with many editors available. I think we need a similarly strong format that we can use to edit structure. XML is nice, but its design is constrained by the requirement it sits on top of text. Part of the standard will likely need to be a schema DSL and projection DSL that describes specific data domains, as well as having general purpose projections that can display anything.
The debate about text vs binary storage is pointless.
As has been pointed out, text is just another binary format with lots of editors. It is easy to conceive of a text format which simplifies parsing and refactoring for an IDE. Say every variable/method definition is followed by a GUID and thereafter every reference is to the GUID. Method overloading is moot, renaming is trivial. I sure would not want to program directly in such a language, but a suitable editor would work and so would all the source control tools. The problem is not the storage medium, it is the archaic things we store.
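A toy illustration of what such a GUID-keyed storage format might look like (an entirely made-up structure, just to show why renaming becomes trivial):

```python
# Toy example of the GUID idea: definitions own an ID, references point at
# the ID, and the human-readable name lives in one place.
code = {
    "defs": {
        "a3f1": {"name": "total_price", "kind": "function"},
        "9b07": {"name": "apply_discount", "kind": "function"},
    },
    # a call site refers to the GUID, not the spelling of the name
    "body_of_9b07": [{"call": "a3f1", "args": ["order"]}],
}

def rename(code, guid, new_name):
    code["defs"][guid]["name"] = new_name   # every reference stays valid

rename(code, "a3f1", "gross_price")
```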
That is a key point Peter. A major problem with textual languages is that they assume identifiers are identified by their spelling. In Subtext I am in fact using unique IDs with an associated non-unique name. This requires a fancy editor to be usable. But when you do this you realize that much of the standard class/module constructs are all about resolving name clashes, and suddenly become obsolete. What exactly to replace them with is not immediately obvious though.
+1
strong identities are a key part of the solution to this problem.
Or you could pursue modularity without names…
I think it’s funny that a video about usability is hosted in a video player that has a progress bar that can be manipulated but doesn’t actually do anything.
Yes! It’s such a pet peeve of mine how awful web video players are.
So your argument is that IDEs shouldn't be used because they can never be perfect? I would argue that the reason that IDEs exist is that text editors are such a pain in the ass to learn, configure, and use.
And I don’t buy the argument that we have to program in descendants of text editors. I would remind you that these things are COMPUTERS. We tell them what to do, not the other way around.
IDEs originated with Smalltalk. While it is true that IDEs for non-reflective languages have the limitations that you suggest, the IDE for Smalltalk evolved along with the language itself, which in turn, evolved along with the virtual machine that is the target of the Smalltalk compiler. The GUI found in current Smalltalk IDEs is very primitive compared to what can be done. The folks behind Pharo Smalltalk are currently exploring how to move the GUI to OpenGL. This will enable the creation of many new kinds of IDEs. Projects like SiliconSqueak, which are creating CPUs designed around fully (or at least highly) reflective languages, and designed to scale to mega-multi-processor systems, conceivably programmed by multiple people working on the same application at the same time, will require new kinds of IDEs that will make current concepts obsolete.
There are rather a lot of people who would like to entertain these 'crazy' ideas and I think they all read your blog. I agree with your feelings though (about IDEs) and I find it really disappointing not many more people are attempting revolutionary things. It's all so 'careful' (spineless?). We haven't progressed much since high level programming languages were invented, and that accounts for the small jumps in productivity and fairly constant bugs-per-line-of-code counts.
I do believe it's good that these kinds of projects are getting a lot of attention now; people might admit something is off.
For some problems or domains text is an excellent way to program. And even in cases where it is not, we’ll generally want a well defined serialization format to share code between developers.
The use of `files` is a bit awkward, especially in this Internet 2.0 era. I’d think more people would have built languages upon wikis and web services by now. (Compilers as web services, and IDEs as web applications, seems quite viable and useful.)
I would prefer the choice of surface structure be made by the user on a per-module basis. Originally I thought more in terms of extensible syntax, but that’s a mistake because extending the language (importing fexprs, etc.) itself becomes a form of per-module boiler-plate. Best keep syntax spec to one small line or attribute per module.
Graphical or structured editing would be one way for an IDE to present and manipulate code in modules that have suitable languages.
Well, this isn’t quite true. It is not unusual to build IDEs atop frameworks rather than atop languages, and rely on some programmer discipline to cover any gaps (e.g. “make sure all your methods terminate”). For example, we can understand Croquet as an IDE for Smalltalk atop the Teatime framework. And I shouldn’t have to mention IDEs for various Java Beans and ESB frameworks.
Where a language can help is improving robustness, reducing the need for self-discipline and boiler plate.
I agree we need better ways to handle these for live programming, especially in open systems. One valuable observation is that we must push state outside our programs if we wish to replace arbitrary parts of our programs without losing or damaging accumulated information. I’ve developed several suitable state models for my RDP paradigm.
I agree. The gaming example of manipulating physics to make a jump seems especially awful, given that in a real game you’d have to rerun the whole level with the new physics – possibly multiple levels.
Okay that’s it, I’ve seen enough references to Inform 7 recently that I need to read something about it 🙂
David, I don't think you'd have to rerun the whole level. When we are testing simulations, we often create a smaller (dare I say "unit") test case that establishes the context needed to test a feature. You don't program the game en masse, and you won't run whole levels until you are doing integration testing, at which point live coding and visualization isn't as useful anyway.
If you had local gravity values (e.g. based on each platform you’re leaping from), perhaps you could get by with unit testing. Otherwise a tweak in gravity to make a leap at platform #108 will require you to test whether you can still make the leaps for platforms #1-#107.
Unfortunately, making gravity a local parameter has terrible gameplay implications unless carefully integrated into the gameplay mechanic. Even having a different gravity value per level is pushing it. Physics is generally not a `local` property subject to easy unit tests.
There are many features that can readily be tested locally. But I was harping specifically on the manipulation of physics.
You can test physics with a small controlled case of a limited number of frames. Replay actually doesn’t require any resources since the initial context is provided by the test case; you can then record and memoize a few minutes to get some omniscient debugging if you want.
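For instance, a deterministic frame-stepping test along those lines might look like this (a hypothetical sketch with made-up numbers, not code from any real game):

```python
# A small, deterministic "unit test" for a jump: simulate a fixed number of
# frames from a known initial context, so it can be replayed after any edit
# to the physics parameters.
def simulate_jump(v0, gravity, dt=1 / 60, frames=120):
    """Step a 1-D jump for a fixed number of frames; return peak height."""
    y, vy, peak = 0.0, v0, 0.0
    for _ in range(frames):
        vy -= gravity * dt
        y += vy * dt
        peak = max(peak, y)
    return peak

def test_player_clears_platform():
    assert simulate_jump(v0=7.5, gravity=9.8) > 2.0   # platform height 2.0

test_player_clears_platform()
```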
This isn’t about “testing physics”, Sean. It’s about testing whether a level is still playable after changing the physics.
You can still do that with integration testing. I agree that Bret’s example is too contrived to see that, that you wouldn’t be testing a jump with specific physics parameters, rather you would probably be testing the level parts themselves.
Regarding the visualization ideas, I believe they are pretty much domain-based (e.g. Bret Victor's), with no one-size-fits-all solution. Nonetheless, IDEs could be enhanced to facilitate the creation of such visualizations, instead of providing a standard one.
> The reason is that there are fundamental and intractable problems. What about mutable state? What about I/O? What about non-determinism and asynchrony?
Exactly. I’m glad to see other people like yourself not only recognizing this (which seems a bit obvious) but also working on the paradigm shift required to actually have revolutionary IDEs vs. more “tarted-up” editors.
The programmer experience (PX, analogous to UX) includes language, library, editing, debugging, and so on; it makes sense for language designers to become programming experience designers (PXD, analogous to UXD) and consider programming holistically. I hereby relinquish my title of "programming language designer" and will now refer to myself as a PXD; time to reprint my business cards.
There is a lot we can do with live programming and visualization in targeted domains. For example, physics simulations, computer vision, and so on, have frame-based semantics that allow for easy “replay” as well as inputs and outputs that are easily visualized as AfterEffects-style movie clips (or even use contrails as Bret does). Supporting these features in general is much more difficult, and perhaps impossible. Rather, we might as well consider domain specific IDEs for domain specific languages, or better yet a language with the appropriate hooks to enable live programming and visualization (think how toString customizes debugger views today).
There are so many ideas in this area. Perhaps we should do a workshop or something. Alas, too late for SPLASH.
Why bother differentiating? Aren’t we programmers just users of programming tools?
Yes we are users; no we are not end users. Designers for expert tools are very specialized; you don’t throw a typical UX designer on Adobe Illustrator or Autocad and expect anything decent to result for the first few years. So we have the “illustrator experience” and the “architect experience” to consider, with designers who specialize in these areas. Programming experience is more like that.
I understand what you’re saying, but I’d prefer to think of PX as a subcategory of UX and not a peer. The activities we (and architects and editors, etc) are doing aren’t fundamentally different from “normal” users, it’s just more specialized imho.
Of course, all designers specialize to some extent. But designers for expert tools are very different from end-user designers; they have to immerse themselves in a technically non-trivial domain and they have to efficiently leverage the investment that the users are willing to put into the tool as well as the existing investments they have already made for existing tools (this is a biggy). That we deal with abstraction more heavily than other UX disciplines means we are necessarily more dissimilar than similar.
(It seems we’ve reached the nesting limit so I’m replying to my own :))
I think we only disagree on semantics here, but I think the semantics are important. I feel it's a mistake to not include programmers firmly in the category of "users". I recognize that it is common practice to do so now.
If I’m building a word processor or spreadsheet I should say “user” to describe folks that will use the spreadsheet or word processor, and “programmer” to describe myself. I think everyone would agree on this.
If I’m building a programming tool I should also use the term “programmer” to refer to myself, but I think it’s important that I continue to think of the folks that will use this tool as “users”. It so happens that my “users” are also then “programmers” for their own “users” making this case a bit meta, but if you only go one level deep it’s still a programmer->user relationship. I don’t understand why we seem to use a whole different standard of design principles for our programmer users than for our “end” users. It could be described as condescending to our “end” users.
I think if we were to follow the semantics I’m proposing it will encourage more things like the things Jonathan linked in this post.
Programmers are users and so usability should be considered. To me, this is just so intrinsic that it is not very debatable and doesn’t deserve much mention.
Design principles… do you have a list? Even designers for end users don't seem to have such a list (I worked in a studio for 2 years); rather they rely on knowledge, experience, taste, and refined common sense! As there are very few universals in design, it's not harmful to distinguish ourselves from other designers (who are free to distinguish themselves also; the design field is very diverse!). Titles are fairly meaningless anyways (I don't really have business cards).
Don't get me wrong, I think we should be interacting and exchanging ideas with designers in other areas a lot more than we do. I do this, and I have found every experienced designer to be very different from the others; the field is quite eclectic.
I’ve been pursuing the idea from the other side – aren’t UIs just domain-specific programming tools?
Yes 🙂
So like, UX is PX and not vice versa? Actually, HCI in its early days was full of VPL folks like Lieberman!
Programming Experience Designers? I like it. I agree that we need to consider the programming experience more holistically. The associated community and social experience is also relevant.
Agree totally Sean, in fact I had considered discussing domain specific IDEs for domain specific languages, but thought it would muddy my main point. The most plausible new IDE concepts have been in gaming, including some of Bret’s demos and your own work. Ironically, many researchers are automating the generation of boilerplate IDEs for DSLs. The real opportunity may be for a custom IDE to exploit domain specific semantics and syntax. That is a much more pragmatic route forward than IDEs for Javascript.
I like PXD. A new field is born! Mark the date.
Wait, getting rid of imperative semantics is one of your goals? I thought functional languages like Haskell already solved this.
A common misconception. As Bob Harper says, Haskell is the ultimate imperative programming language. Monads just ghettoize imperative semantics – they don’t transcend them.
I think that’s the point. If you want to do any kind of IO, you simply need to do it imperatively.
In Haskell, at least you can isolate that to a small part of your program and be sure that the rest of your code is pure.
Even much pure Haskell code is imperative – e.g. you’re still concerned about order of operations in a State or ST monad.
Sentences like “Getting rid of imperative semantics is one of the goals. Another is getting rid of source text files” are invalid by definition.
I agree that these things have a lot of drawbacks – but ‘getting rid of X’ is a reasonable goal only when it can be phrased as ‘let’s replace X by Y’. Only with a specific replacement in mind you can even rationally argue if getting rid of X is good or not worth it.
I don’t believe that is what “rational” means. For example, “oil is bad, let’s find a better alternative” is a perfectly rational thought, even if you don’t know what the alternative is; i.e., belief that a better alternative exists is reasonable and not irrational and so won’t get you committed to a mental institution.
Thirty years ago I led the design and development of the Synon/2 IDE.
It used Action Diagrams for code editing which were stored directly in a database.
It used Entity/Relationship modeling to model the database design and to automatically produce form/report designs. Much in the same way that the Microsoft Lightswitch product does now. The generated programs could switch target language between RPG III, Cobol or PL/I by changing a project setting. The Model and the Action Diagrams were integrated together. There was no round-trip problem as the generated code was never amended.
I am amazed that in thirty years the state of the art has hardly advanced at all. I thought that by now we would have vastly more powerful design abstractions and automated designers for particular problem domains.
The problem is indeed that the programming languages have been tailored for hand crafted design using a text editor as the main tool. Current programming languages seem to have been designed by the guild of programmers to preserve tradecraft and to prevent automation.
Only by modeling the artifacts we are creating can we develop tools to assist us. (And no, I don’t mean UML diagramming).
Peter, I am surprised how little the people working on DSLs today are aware of the history of 4GLs in the 80s and 90s. The problem I observed with 4GLs was vendor lock-in. You could only do exactly what the tool let you do, and you were completely at the mercy of the vendor’s enhancement schedule. That is why so called frameworks are the dominant pattern these days. By using a mainstream PL with an open-source framework you have much more freedom. But you pay in much more complexity, and the loss of domain-specific IDEs.
Here’s another one: Instant C#
Nice find! It should be noted that if the program is small enough, we can achieve expressive visualizations and live programming easily via brute force. This isn’t useful in practice (I did this with ScalaPad when at EPFL), but we should exploit it like crazy to get people to realize the benefits of “what if”…
Programming languages have made no fundamental progress. In fact conceptually they have even regressed. Originally OO was meant to SIMULATE the real world; that simulation focus has quite disappeared. That's what I tried to restore with "Fractal MVC" http://lepinekong.com/the-advantage-of-fractal-mvc-architecture-simple-bdd-testing/ and, finding no tools for it, I asked developers to build our own simple (even simplistic) tool to complete the IDE.
Isn’t narrative text about what people are about, though? I mean we talk and speech is inherently linear. As we tell a story over and over, we edit it into a form that expresses it better. When we use symbols to preserve speech on paper, we’re writing. Then we shift our editing to the written word but it works the same way – progressive, linear.
We also draw – we make lines and arrows and things. We make pictures of ideas. Even when we’re presenting abstract concepts, though, we tell their story. Media lets us dramatize ideas, combining words and actions – we’ve been doing that for a long time, too – as a more immersive way of telling a story.
Name one thing you’ve learned that interested you that you didn’t learn via a story of some kind. However we re-imagine programming, if we go from imperative to declarative models, one primary requirement is that the result must tell a story or it won’t communicate anything.
Mathematics is declarative and proofs and derivations in mathematics have a drama to them in the many transformations, the parting and uniting that the mighty heroes x and y undergo. 🙂 Maybe that’s a place to look. But I’m not inclined to believe there’s anything to this until I see a proposed form has an underlying narrative that people can follow, that can capture their imagination.
The standard example is a spreadsheet. But I would be happy even with the narrative power of English. Programs are like narratives that talk of neuron firing patterns rather than emotions and motives.
No doubt there is an entire class of problems that this doesn’t cover, but I don’t believe it’s limited to “factorial functions and the like”. REPLs wouldn’t be anywhere near as popular if this functionality was not useful, I merely see this sort of tech as the next step in REPLs. Useful for only a certain class of problems, yes, but one I feel is large enough to at least explore.
I don’t feel that blankly saying “what about mutable state or asynchrony” really challenges the idea. There’s no reason why this can’t be applied to both these things. There are certain types of mutable state where it perhaps might not work, but far from all forms of it and the same goes for asynchrony.
Ideally this would not overly complicate an IDE, I imagine a simple button next to a method declaration that you can toggle this functionality with (off by default).
I was thinking closer to: `unit tests` could simply be declared right alongside the function, and their evaluation could be presented. I.e. this sort of live programming works well with zero-button testing, so long as we eliminate ambient authority.
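Python's doctest is an existing, if crude, analogue of declaring the tests right alongside the function; a live environment could simply re-run them on every edit:

```python
# The tests live right beside the function; a tool can re-run them
# continuously as the code changes.
def clamp(x, lo, hi):
    """Clamp x into the closed range [lo, hi].

    >>> clamp(5, 0, 10)
    5
    >>> clamp(-3, 0, 10)
    0
    >>> clamp(42, 0, 10)
    10
    """
    return max(lo, min(x, hi))

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # a live editor could run this after every keystroke
```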
Eric – go for it! My comments stem from my own research into live code, and my dissatisfaction with the complexity and awkwardness it seemed to entail. But it is always useful to have fresh minds make a fresh try, and you have the power of Roslyn to help you now. Good luck and I look forward to seeing Instant C#. – Jonathan
I've certainly already run into my share of "complexity and awkwardness"! I have simple goals to avoid falling down the rabbit hole, but I explicitly keep this as a "research project" as it's all very tentative until things are usable in the real world, not just nice and tidy demos.
Check this out: Circa. Reifies state in the code to directly manipulate it in the IDE. Now that’s what I’m talking about! I take the same approach in Subtext, except that is the only form of state there is.
As far as graphical/visual programming, a picture is worth much less than a thousand words.
This goes back to the original “Structured Programming” debates, like the famous Dijkstra paper “Goto considered harmful.” It was mathematically demonstrated that sequence, selection, and iteration in a language construct could implement any construct in a flow chart, and, in fact, in a general Turing machine.
Now in fact, passive diagrams don’t necessarily have any sequence. Without animation, a diagram is much worse at depicting execution sequence than is the flow of program code.
And I often see, from professional programmers, any number of forms of bulls–t diagramming techniques that would make Edward Tufte add a whole new section to his writings on PowerPoint.
I love using different kinds of UML diagrams to draft out my Java classes and see them at a high level. However, you can’t get the detail into a page of diagram that you do in a page of program code.
In a small Java project there may be a few hundred classes (ignoring the runtime and any Jars needed). You couldn’t fit that level of detail into a diagram.
We don’t use textual language because we want to. We do because we have to. We have to apply lexical rules and syntax in order to get to semantics that express a lot more than you can with a two-dimensional diagram, three-dimensional, or three/four-dimensional animations. Any interesting program is polydimensional; reducing it to a metaphor for a physical object without fundamentally distorting or misrepresenting it is a seriously hard task.
Can someone with more knowledge in the area than I (that would be any of the commenters so far) please expound on how Erlang would or wouldn’t fit into this? Thanks 🙂
First, Erlang supports hot swapping, which is one element of live coding/programming but not the only one (reactive execution to reflect relationships of the new code, and migrating old state to the new program are the other elements). Second, Erlang's actor model is not very applicable outside of the few domains it was designed for, which is why Erlang is a fairly niche language today despite its benefits.
Erlang or ideas from Erlang may or may not be part of an answer, but Erlang itself is not an example of IDE/language co-design.
Thanks Sean – that does help.
I’m thinking about the way that Erlang concurrent processes communicate by messaging rather than shared variables, and wondering if an idea of all functions doing that would help in the kind of radical rethinking of programming languages Jonathan mentions in the article. Then I think some more, and in some ways I think that’s not a radical enough change to fundamentally shift our IDE thinking as proposed. I continue to vacillate, so thanks for helping my thoughts.
(… in a different way than Subtext dies, that is.)
“dies” -> “does”. I’ll shut up now; clearly I’m too tired for inflicting myself on you all.
I've been seeing this trend for the past few years that there's a resurgence of interest in "better ways to code": starting from the DSL wave to language workbenches, Subtext and its ilk, to reactive environments like Bret Victor's Tangle and the latest demo, to touch-based IDEs on tablets.
Maybe PXD is the first in the wave of new buzzwords that will become commonplace in the next decade. Or (like the structured editors of the 80s) this will have its run of popularity and we’ll return back to the cypher-like “all i see is blonde, brunette, redhead” allure of plain text.
I have been thinking about all that's been talked about above (and am glad to be in the presence of similar-if-bigger minds, thank you) and blogging my rambling thoughts about it for quite a while, and feel that this progression to a better tomorrow cannot be brought about unless:
1. There’s a clear migration path from the current state to the new one. This cannot be a “stop coding in text already, here’s the brand new way”. Legacy code, systems and programmers must be able to make the transition *at their speed*. Only then will businesses buy in – and let’s not delude ourselves that that is not important.
2. Languages are inherently an order of magnitude closer to the way we think. DSLs are a nice baby step in that direction, but all the tooling excitement around them is distracting from the core point. FONC is a larger, more concrete attempt at attacking the problem, but (like Intentional) their progress is a bit opaque.
3. We solve the problem at large scale (this, btw, is my tie-in to Jonathan's original post). IMO, Codebubbles is closer to what we need than Light Table. This is why IDEs are useful – they can handle large codebases. It's nice to argue (like FONC does) that we may not need such large codebases in the first place. But they exist, and must be managed because they are useful. More importantly, *all* codebases end up being big balls of mud (tall claim, yes; I hope to live to see it turned over).
A commenter above said “Any interesting program is polydimensional; reducing it to a metaphor for a physical object without fundamentally distorting or misrepresenting it is a seriously hard task.” All PXDs (nice extension of term, btw) should have this printed and stuck where s/he can see it always.
Aside: is there a place for people interested in this "topic" to congregate? If not, shouldn't we create such a space? I have been lurking on alarmingdevelopment for ages and would love to see all the projects (and their authors) in one place.
Vinod, that is certainly a desirable wish list, but unfortunately I don’t believe it is attainable. The history of ideas teaches that breakthroughs are often disruptive: they are largely incompatible with past practice. They also start out toy-like, unable to compete with the status quo.
I agree there should be a forum for these sorts of discussions, but I don't yet know what that would be.
The author writes, “See the semantics of the code change as you edit it. Eliminate debugging! Cool idea.”
For those using Visual Studio, does mocking and using tools like NCrunch http://www.ncrunch.net/, which automatically runs your unit tests in the background as you code, come close to achieving this goal?
Programming in the future will involve a lot more cloud programming scenarios, and require much better resource scheduling facilities than we use today such as thread pools and round robin resource pooling in general. When thinking of why an IDE won’t matter in comparison to language design, look at Dropbox. Granted they use Python a lot but how could you compete with them so that you could offer businesses “private cloud dropboxes” rather than dropbox itself? Security and scheduling are the two major hard problems in cloud computing.
This is what I was trying to think about in my half-baked Erlang query earlier.
The Actor Model does allow very general resource scheduling abstractions, but there is a difference between the Actor Model as a model of computation and Erlang, which is a language that affords limited scheduling. The Actor Model literature supports many abstractions for resource scheduling, but the quality of these abstractions gets rejected in language design circles since there are no actor languages with structural guarantees currently for concepts like Bankers and Custodians. For example, purely verifying many object capability security patterns means verifying higher-order behaviors, which is a monumental task and it is something done outside a language today. Language designers would rather it be an intrinsic property so that the language facilitates reasoning about the behavior of the system.
Hi Jonathan. Thanks for the shout out. The response to my ideas has been largely negative, but also extremely reactionary. This tells me we are on to something interesting. We really need a place to discuss such things, however. Perhaps it’s time to create a new google group for post-20th century programming?
After emailing with Josh and a few other folks that have done work and are interested in this area we decided to put together just such a group, and here it is if anyone is interested in continuing this discussion: https://groups.google.com/forum/?fromgroups#!forum/augmented-programming
Hi Jonathan. I totally agree with you. This article is superb. No doubt languages and IDEs cannot change independently of each other. I also face many problems with IDEs. I really appreciate your work. Thanks for sharing this article.
“… so long as we are programming in descendants of assembly language we will continue to program in descendants of text editors.”
This is a great quote, because it implicitly understands that as long as we program in descendants of text editors we are inhibited from true progress.
“It is therefore not surprising that our IDEs amount to tarted-up text editors” – even worse, there are many text editors that are way better at editing code than the editors embedded in IDEs. IDEs make up with bling like class browsers and debugger integration, but the text editing itself sucks in most of them. And it’s a pity they don’t have mechanisms to just use external text editors.
OTOH, programming is less about writing assembler and more about somehow persisting and communicating your thoughts. You wouldn’t use the same language for humans, but because computers are very fast and precise idiots, you need to express your thoughts in a very precise way for computers to understand them. Nevertheless, precise or not, you always think using words – concepts, if you will. That’s what programming languages are about, not necessarily about abstractions of assembler. Since your thoughts, while still in your head, are textual, I see no better way to persist/externalize them than using a textual format. Maybe that’s why non-textual programming has caught on only in problem domains where the expected output isn’t your thought process, but the result of it? (I mean, such as electronic circuitry schematics, for example.)