I’ll be demoing my latest work at the Future Programming Workshop at both Strange Loop and SPLASH. My talk is called “Two-way Dataflow”. Here is the abstract:
Subtext is an experiment to radically simplify application programming. The goal is to combine the power of frameworks like Rails and iOS with the simplicity of a spreadsheet. Mutable state is a notorious source of complexity in application programming and indeed has long been a major dilemma in programming language design. I propose a new approach called two-way dataflow, which breaks the program into cyclic output and input phases. Output is handled with traditional one-way dataflow, which is realized here as a form of pure lazy functional programming. Input is governed by a new semantics called one-way action which is a highly restricted form of event-driven imperative programming. These restrictions statically order event execution to avoid callback hell. Two-way dataflow has been designed not only to simplify the semantics of application programming but also to support a representation that, like a spreadsheet, provides a fully WYSIWYG programming experience.
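The cyclic output/input structure described in the abstract can be illustrated with a minimal sketch. This is not Subtext itself, just a toy rendering of the two phases; all names (`render`, `handle`, `ACTIONS`) are invented for illustration.

```python
# Hypothetical sketch of the two-way dataflow cycle: a pure output
# phase alternating with a statically ordered input phase.

def render(state):
    # Output phase: a pure (lazy-functional) view of program state,
    # i.e. traditional one-way dataflow.
    return {"label": f"count = {state['count']}"}

# Input phase: actions are declared up front, so their execution order
# is fixed statically rather than decided by dynamic callback dispatch.
ACTIONS = [
    ("increment", lambda state: {**state, "count": state["count"] + 1}),
]

def handle(state, event):
    # Run the statically ordered actions that match the event.
    for name, action in ACTIONS:
        if name == event:
            state = action(state)
    return state

state = {"count": 0}
view = render(state)                 # output phase
state = handle(state, "increment")   # input phase
view = render(state)                 # output phase again
```

The point of the restriction is visible even in the toy: because the action list is fixed, there is no dynamically registered callback whose firing order the reader has to reconstruct.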
This work is still very experimental and preliminary but I hope it is complete enough to convey several key new ideas that are relevant to the current trends of “reactive” and “live” programming.
13 Replies to “Two-way Dataflow”
I look forward to more details about your one-way action idea.
It seems to me it may be somewhat related to the “whiteboard” concept for collaboration between software modules. The “whiteboard” is a shared collection of data where modules put their output. Modules continuously scan the board for data that match their input criteria; when any is found, they process it, add the result to the board, and so on…
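The scan-process-post loop of the whiteboard pattern sketched in this comment might look like the following toy; the module and entry names are invented for illustration.

```python
# A toy "whiteboard": modules scan a shared board for entries matching
# their input criteria and post their results back onto it.

board = [("celsius", 100)]

def to_fahrenheit(entry):
    # A module that watches for celsius readings and posts conversions.
    kind, value = entry
    if kind == "celsius":
        return ("fahrenheit", value * 9 / 5 + 32)

def scan(board, modules):
    # Each module examines every entry; new results are added to the
    # board, where other modules could pick them up in turn.
    for entry in list(board):
        for module in modules:
            result = module(entry)
            if result is not None and result not in board:
                board.append(result)
    return board

scan(board, [to_fahrenheit])
# board now also contains ("fahrenheit", 212.0)
```

Note how loosely coupled this is: modules never reference each other, only the board, which is exactly the late-bound dynamism the reply below pushes back against.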
I don’t remember when I read about that, probably many years ago, but now I found this archived paper on the OSGi website which seems closely related:
Quite the opposite. I want to make event plumbing completely static and thus simple and predictable. This is the opposite of the mainstream philosophy of late-binding and dynamic semantics.
I have lots of questions about this, but one that springs to mind is the following. Is it possible to change a value located somewhere in the middle of the execution, and have the changes flow both backward and forward (perhaps backward and then forward)? I’m guessing the approach is compositional.
It is possible to go against the flow with recursion-like tricks but I am trying to see how far I can get without it. My hypothesis is that imposing a rigid structure on the program will be a big win for simple programs and inexperienced programmers. This is the opposite of the usual programming language design ideology of maximizing expressive power. I am trying to minimize expressive power to the point where it is just barely enough to build simple programs. Since almost no one has tried doing this I feel like I am exploring a new continent.
The design philosophy of minimising expressive power sounds good, but I think I was asking about something different. Suppose there is an intermediate value in your program’s execution – an output of some sub-computation that feeds into another sub-computation. Are these intermediate values modifiable directly, or only indirectly, i.e. when the output of the whole program is modified, and changes flow back through the intermediate value?
Ah, OK. Intermediate values can be modified by code “downstream” so long as they are public from that scope. The rule (common to most synchronous semantics) is that there can be multiple writers so long as they write the same value. However, structured values like records and collections are merged so long as they don’t conflict at their atomic leaf values. Does that answer your question?
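The merge rule stated here, recursive merging of structured values with writes allowed only when atomic leaves agree, can be sketched concretely. This is just the stated rule transcribed into Python, not Subtext's implementation.

```python
# Sketch of the multiple-writers rule: records merge recursively;
# duplicate writes are legal only if the atomic leaf values agree.

def merge(a, b):
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, value in b.items():
            out[key] = merge(a[key], value) if key in a else value
        return out
    if a != b:
        raise ValueError(f"conflicting writes: {a!r} vs {b!r}")
    return a  # identical atomic writes are permitted

merge({"x": 1, "pos": {"row": 2}}, {"pos": {"col": 3}, "x": 1})
# -> {"x": 1, "pos": {"row": 2, "col": 3}}
# merge({"x": 1}, {"x": 2}) would raise ValueError
```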
Ok, interesting. (The way you merge structured values reminds me of LVars.) This bidirectional approach is definitely a promising avenue to explore, look forward to hearing more!
I hope there will be a video? Sounds great.
Thanks. The Future Programming Workshop is going to use a writers’ workshop process to help us all improve our demo screencasts. We asked everyone not to post their draft videos until they have had a chance to revise them after the workshop. So I’ll be publishing in November.
I like the description of “two-way dataflow” given here. I’d be very curious to see more details.
In particular, this description reminds me of the recent work that we’ve been doing on Adapton, an incremental computation approach for demand-driven (and interactive) settings: http://www.cs.umd.edu/~hammer/#adapton
The link above will lead you to a PLDI 2014 paper, as well as a code repository with our OCaml implementation. In particular, the paper describes a simple type system for modeling something that seems closely related to the two-phase input/output cycle mentioned here. The paper also gives a high-level description and empirical evaluation of the OCaml implementation.
In Adapton, we call the mutator the outer layer and the pure, lazy (and incremental) computation over this changing data the inner layer. In Adapton, one of our chief concerns is avoiding re-execution by caching and reusing pieces of the inner-layer dependency graphs.
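The outer/inner split described here can be caricatured in a few lines: mutable input cells on the outside, cached demand-driven thunks on the inside. This is a drastically simplified sketch with invented names (`Ref`, `Thunk`), not Adapton's actual design; see the linked paper for the real system.

```python
# Outer layer: mutable inputs. Inner layer: pure, lazy, cached
# computations that are re-run only on demand after an input changes.

class Ref:
    def __init__(self, value):
        self.value, self.readers = value, set()
    def set(self, value):
        # Outer-layer mutation: dirty every computation that read us.
        self.value = value
        for thunk in self.readers:
            thunk.dirty = True
    def get(self, reader=None):
        if reader is not None:
            self.readers.add(reader)  # record the dependency edge
        return self.value

class Thunk:
    def __init__(self, compute):
        self.compute, self.cache, self.dirty = compute, None, True
    def force(self):
        # Demand-driven: recompute only if an input changed since the
        # cached result was produced.
        if self.dirty:
            self.cache = self.compute(self)
            self.dirty = False
        return self.cache

a, b = Ref(1), Ref(2)
total = Thunk(lambda t: a.get(t) + b.get(t))
total.force()   # computes 3 and caches it
a.set(10)       # outer-layer mutation dirties the thunk
total.force()   # recomputed on demand: 12
```

What this caricature omits is precisely Adapton's contribution: memoized reuse of pieces of the inner dependency graph, rather than whole-thunk recomputation.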
I’d be very interested to know how the motivation and design of Adapton compares with the one advanced here. Based on the description above, they seem very closely related. Though, Adapton is additionally concerned with reusing existing cached dependency graphs via memoization. (This memoization is in addition to the normal kind of FRP-like memoization, where dataflow nodes whose outputs do not change from the prior run terminate the frontier of change propagation).
Currently, we are working on a more permissive and expressive variation of the above system which will permit more patterns of incremental reuse, and some class of inner-layer effects (so that the inner layer need not always be pure). In the future, I’m interested in applying these techniques to the design and implementation of an IDE-like system. (This is my principal motivation for all of the above.)
Interesting work, Matthew. I’ll follow up via email.
Sounds interesting, I’ve seen the paper but haven’t read it closely yet. You should also check out the managed time paper:
I’ve built an IDE-like system in Glitch for Glitch (see the videos linked in the paper, though it’s much further along now) and can share my experience sometime. Basically, I think this is the most permissive solution to this problem (I came up with it out of frustration with FRP), allowing expression of UIs, compilers, and non-pure computation, but it’s still early…
I would like to see an implementation of this in Scala, perhaps similar to the way ScalaRX works?
I’m being greedy; everyone wants something in their favorite language of the moment.