A couple slides don’t make complete sense to me w/o the talk behind them, but some of them made me go “oh wow” (ex: 39). I’m excited to hear the talk 🙂
Jonathan, you usually put together an online video of your presentations. Is ELC big enough for such an effort? Or do you need to dedicate all your efforts right now towards your smart compiler?
Also, if you get the chance, please discuss what you thought of ELC.
I have one gripe with your presentation, and with your previous publications (esp. on coherent binding): you provide too few examples, possible applications, and related work. While it becomes clear that you want to advocate paradigms that allow for distribution/concurrency and usability/simplicity, it would be great if you could give more examples than the “task” with start, end, and length.
Agreed. This example sucks, but I haven’t yet come up with a better one. It is quite hard to find examples that are simple enough to present in a 15 minute talk. I think I really need to develop the language to the point where I can write some practical programs, which should spawn better examples.
[Re: Final slide] Of course, in The Jetsons, nothing really worked!
Could we expect video from ELC?
It’s not clear (to me) how the tree and cursors work.
I’m impressed – I’ve been thinking for some time that restricting pointers to simplify dependency analysis is a critical goal, and your solution has a lot of potential. It has the potential to be both significantly easier and faster, as well as having very appealing properties when moved into the realm of Subtext-like visual programming. And perhaps most interestingly, it should be viable on current hardware while bestowing great benefits on future architectures built with it in mind.
One note: while it is elegant to think of steps as creating true dependencies and preventing parallelism, they are not fundamentally different from the dependencies between any two consecutive instructions in imperative programming. They might be dependent, or they might not be – the only certainty is that they are not dependent on anything further down the instruction stream.
Granted, if written by a sane programmer, at least one instruction will usually depend on the previous step, but even then there is plenty of extra parallelism to be extracted. And because of the absence of true pointer aliasing, much more of that could be done in software rather than in expensive hardware out-of-order schedulers. The consequences for potential future ISAs are intriguing to ponder, although as I said this seems attractive even on today’s architectures from a performance standpoint (given a decent optimising compiler).
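To illustrate what I mean (a hypothetical Python sketch of my own, not anything from the talk): two consecutive statements may or may not be truly dependent, and with raw pointers the statements themselves don’t tell you which.

```python
# Hypothetical illustration, not from the talk: whether two consecutive
# "steps" can run in parallel depends on aliasing, which the statements
# themselves do not reveal.

def step_pair(xs, ys):
    xs[0] = xs[0] * 2    # step 1
    ys[0] = ys[0] + 1    # step 2: independent of step 1 only if ys is not xs

a = [10]
step_pair(a, a)          # aliased: a true dependency, a becomes [21]
print(a)

b, c = [10], [10]
step_pair(b, c)          # no alias: the steps could be reordered or run in parallel
print(b, c)              # [20] [11]
```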
If anyone ever found himself in a position of great influence over the future direction of programming, it would be a travesty for you not to be involved in the process. I do hope the right people are taking notice of these ideas and their future evolution.
Thanks for your encouragement. You are quite right that progressions recreate manually-scheduled dependencies. But at least they are bounded to subtrees of the state. There may be ways to improve upon them. Progressions could support partial ordering of steps to allow more fine-grained dependency expression. I am also thinking of ways to visualize side-effect dependencies in the IDE, in the same spirit as visualizing Boolean logic in Schematic Tables.
Review by Ola Bini, Emerging Languages Camp – Day 2:

Jonathan Edwards started this presentation by showing a small example where the ordering of statements in an implementation depends on what representation you use for the data, and showed that it’s impossible to handle this case in a general way. From that point he claims there is a fundamental tension between styles in a language, and that you can only get two of these three: declarative programming, mutable state, and data structures. I’m not sure I agree with his conclusions, and the initial example didn’t feel like anything I’ve ever had trouble with.
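To spell out what I understood the example to show (a hypothetical Python sketch of my own, not the talk’s actual code): the same pair of assignments behaves differently depending on which two of start/end/length the object stores.

```python
# Hypothetical reconstruction of the start/end/length example -- not the
# talk's actual code. Each representation stores two fields and derives
# the third, so "set start" preserves a different quantity in each.

class TaskSE:                        # stores start and end
    def __init__(self): self.start, self.end = 0, 10
    def set_start(self, s): self.start = s        # end is preserved
    def set_end(self, e): self.end = e

class TaskSL:                        # stores start and length
    def __init__(self): self.start, self.length = 0, 10
    def set_start(self, s): self.start = s        # length, not end, is preserved
    def set_end(self, e): self.length = e - self.start
    @property
    def end(self): return self.start + self.length

se, sl = TaskSE(), TaskSL()
for t in (se, sl):                   # same statements, same order...
    t.set_end(20)
    t.set_start(5)
print(se.end, sl.end)                # 20 vs 25: the result depends on the representation
```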
Based on the conclusion that you can only have two of the three, he goes on to claim that the thing that causes all these problems is aliasing. So in order to avoid aliasing, his system uses objects whose instances are always physically contained within another object. This means you can refer to these objects without having actual pointers – and thus cannot create aliases either. From that point on, his system allows declarative programming of the flow, where updates never oscillate back out to create more updates.
Lots of interesting ideas in this talk, but I’m not sure I agree with either the premise or the conclusions.
Review by Dean Wampler, OSCON: Emerging Languages Camp:

Jonathan is an MIT researcher who has worked on Coherence and Subtext for a while. This was one of the more theoretical talks, packed with ideas. I’ll definitely watch the video at least once to absorb them. In fact, my notes feel really inadequate, but here goes.
We know that imperative programming is bad, e.g., because of concurrency. We prefer declarative programming with clear data flows. Unfortunately, pointers lead to undecidable data flows.
You get to pick 2:
1. Declarative Programming
2. Mutable State
3. Data Structures
(I’m reminded of the CAP theorem…)
What’s the solution? Limit pointers. Combine nesting and binding with modeling of the data flow. The entire system state is modeled as a tree with bidirectional bindings between elements. Domains of interest are nested collections. Pointers become cursors that point bidirectionally into this structure. This is very similar to the data flow in MVC.
(Again, see the video for yourself…)
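My rough mental model of the tree-and-cursors idea, as a minimal sketch (hypothetical Python, assuming all state lives in one nested structure; this is not actual Coherence code): a cursor is just a path of keys into the one state tree, so two references can only “alias” by literally naming the same location.

```python
# Minimal sketch of "pointers as cursors into a state tree" -- my reading of
# the slides, not actual Coherence code. All state lives in one tree; a
# cursor is a path of keys, so aliasing is visible in the path itself.

state = {"project": {"tasks": {"t1": {"start": 1, "length": 4, "end": 5}}}}

def deref(cursor):
    """Walk the state tree along a cursor (a tuple of keys)."""
    node = state
    for key in cursor:
        node = node[key]
    return node

t1 = ("project", "tasks", "t1")            # a cursor, not a machine pointer
deref(t1)["end"] = deref(t1)["start"] + deref(t1)["length"]
print(deref(t1))                           # {'start': 1, 'length': 4, 'end': 5}
```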
I don’t understand – I see the caveman simply using a different programming language, and now we know that programming languages don’t matter at all; only the programmers using them matter.

So surely the caveman should be just as productive with or without declarative programming? 🙂

See this funny and intelligent blog entry: http://www.tbray.org/ongoing/When/201x/2010/07/25/Five-Photos-of-OSCON
IMO the ideal language would be essentially Set Math: one would have X things in Y positions taken Z at a time as the assembly language, which would yield X^Y^Z possible machine instructions for a given program.
For example, if one had 2 things in 2 positions taken 2 at a time (boolean), there are basic things you can do for manipulation. For example, the truth output for B, 1 1 0 0, is automatically equal to 11 11 00 00 to denote “B” in the set of 3 things in 2 positions taken 2 at a time, and 11 11 11 11 00 00 00 00 represents B in the set of 4 things in 2 positions taken 2 at a time.
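A sketch of the doubling pattern I mean (hypothetical Python; the helper name is mine, and I’m reading the sets as boolean inputs):

```python
# Hypothetical sketch of the doubling pattern above (the name b_column is
# mine). Reading the sets as n boolean inputs, there are 2**n truth-table
# rows, and B's column is the first half 1s and the second half 0s, so each
# added input doubles every run.

def b_column(n_inputs):
    half = 2 ** (n_inputs - 1)
    return [1] * half + [0] * half

print(b_column(2))   # [1, 1, 0, 0]
print(b_column(3))   # [1, 1, 1, 1, 0, 0, 0, 0]
print(b_column(4))   # eight 1s followed by eight 0s
```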
The compiler would first determine what the set requirements were, and then generate the space to represent them. Inside this auto-generated space, reducing the instructions to the simplest possible rendering would be automatic.
This would be the math space of assembly language, and depending on what one was modeling, the assembly language would be produced on the fly, which would then be translated into whatever machine code was on the particular computer.
It would actually take many pages to describe a computer that worked like this, and the math involved, but the general idea is that set theory and permutations determine the underlying structure of the second-from-machine-language code, which could be figured out by the compiler in order to construct the appropriate model.
Once you had something like this, you would code by doing things like “modeling the space of desires for human interaction with computers” and then translating that into a second set of the same kind of set code; someone could then “attach” to a library that would automatically generate this, based on their sets.
This model falls apart when trying to represent things such as the stock market, which cannot be modeled in this way, but it would work for many things that are considered important.