Flux is good enough

The reaction to my latest work [Two-way Dataflow] has been distinctly underwhelming. My diagnosis: I’ve solved a problem most people aren’t aware they have, and it comes at the steep price of replacing the conventional tech stack. Facebook has a cheaper solution: Flux. I have to give them credit for taking the problem seriously (unlike many of the reactive programming enthusiasts) and coming up with a pragmatic solution that just adds another layer to the stack. That’s what programmers want, not a fundamental solution in a new programming paradigm. It is obvious that we can’t keep growing the stack forever but no one wants to deal with it now. Especially computer scientists (don’t get me started).

I think I need to shift focus to end-users and non-programmers. I’ve avoided that because I want to prove that we can dramatically simplify real large-scale programming — simplifying simple problems is not so convincing. But replacing the whole stack is just too hard to do all at once and all by myself, and only doing it on small examples is not convincing either. End-user programming could be a way to get started and get users and (especially) have collaborators. Industrial strength programming would come later, but at least I now have a sketch of how it should look. Building something real would be such a refreshing change from the last decade of thought experiments. I’m excited by the prospect of once again shipping buggy code and having irate users and getting into fights with my colleagues. That’s life.

16 Replies to “Flux is good enough”

  1. The human liberation project needs a lot more general systems work. Trying to teach humanity programming would be like trying to teach humanity Sanskrit because we thought it was really important: it’s hubristic.

    There’s completely different realms of expressivity we have yet to unlock, and it will revamp what we think of as programming and language.

    The web is key to a lot of this. The web is one of the first multi-system techs that has become world-spanning. Most programming, even web programming, is still very single-system focused, but the really interesting innovation will come when we can control and orchestrate data and services living on a range of different systems in a cohesive fashion. Then I can start to take programming seriously.

    This is the general systems research we need: breaking down the idea of the box as the system, with the program running atop the box. But as far as React goes, if the app is built well, I think it’ll do a lot to alleviate designers’ concerns about programming. Separating templates from programs wasn’t a good idea; React has shown us the light, and we understand that all we were missing was a means of expressing HTML in our programming languages. I think many designers will find themselves happily coding, given the simplicity of React’s functional data->view nature. Anywho, cheers. Ever on, ever up.

    1. As for Flux, the uni-directional binding works great in the first example of your Subtext Two-way Dataflow video: actions dispatched from any input can update the same chunk of state (data), which then gets used multiple times when updating the view (four times: Fahrenheit and Celsius across both the spreadsheet view and the form-input-control view).

      So yeah, props to Flux. It’s pleasantly explicit. And it’s paving the way for a lot of programmers to think of more of their application, its modes and behaviors, as state, not so different from business-object state.
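
      As a minimal sketch of that loop (hypothetical names, not the real Flux API): any input dispatches an action, a single store reduces it into state, and every subscribed view re-renders from that state, strictly one-way.

```javascript
// Hypothetical names, not the real Flux API: one store, actions in,
// state out, views notified. Data flows one way only.
function createStore(reduce, initialState) {
  let state = initialState;
  const views = [];
  return {
    dispatch(action) {
      state = reduce(state, action);          // action -> new state
      views.forEach(render => render(state)); // state -> every view
    },
    subscribe(render) { views.push(render); },
    getState() { return state; }
  };
}

// Temperature store: either field can dispatch; both views update.
const store = createStore((state, action) => {
  switch (action.type) {
    case 'SET_CELSIUS':    return { c: action.value, f: action.value * 9 / 5 + 32 };
    case 'SET_FAHRENHEIT': return { c: (action.value - 32) * 5 / 9, f: action.value };
    default:               return state;
  }
}, { c: 0, f: 32 });

const rendered = [];
store.subscribe(s => rendered.push(`form:  ${s.c}C / ${s.f}F`));
store.subscribe(s => rendered.push(`sheet: ${s.c}C / ${s.f}F`));

store.dispatch({ type: 'SET_CELSIUS', value: 100 });
// both views now show 100C / 212F
```

      Either field dispatches, and every view re-reads the one piece of state: that is the explicitness being praised here.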

      Good luck, happy hiking.

  2. I hope you can incorporate relation tables into your notion of two-way data flow and UI generation. Relations are constraints and therefore formally lend themselves to extending your model to constraint programming: Specifying one value in a form constrains the values that can be input to its other values.

    The most general UI is a relation table with the columns being the fields of the form, and the rows being the possible combinations of values given the current inputs. Obviously, you don’t want to use this UI in place of graphical UI constraint satisfaction as in this demo of Cassowary. One step at a time…
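
    As a sketch of that idea (illustrative data and names, not a real widget): the form is a relation, each committed input filters the rows, and the filtered rows in turn narrow the values still available to the other fields.

```javascript
// Illustrative relation: each row is one legal combination of form values.
const shirts = [
  { size: 'S', color: 'red'  },
  { size: 'S', color: 'blue' },
  { size: 'M', color: 'blue' },
  { size: 'L', color: 'red'  }
];

// Rows still consistent with the fields the user has filled in so far.
function consistentRows(relation, partial) {
  return relation.filter(row =>
    Object.entries(partial).every(([field, value]) => row[field] === value));
}

// The values a field may still take: this is the constraint propagation.
function domain(relation, partial, field) {
  return [...new Set(consistentRows(relation, partial).map(row => row[field]))];
}

domain(shirts, {}, 'color');            // ['red', 'blue']
domain(shirts, { size: 'M' }, 'color'); // choosing M narrows color to ['blue']
```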

  3. Is Subtext 5 available to play with yet? If not, perhaps that’s an alternative reason for the underwhelming reaction? Another alternative: people intrinsically care what Facebook does, but not so much individuals like you and me.

    All this is not to say that your replacing of the stack isn’t a reasonable culprit. I’ve been having similar thoughts with regard to my project. The two approaches I’ve come up with are:

    a) Build an app people care about, and use it as a gateway drug to your stack.
    b) Use your new stack for teaching programming.

    I think that’s what you mean by “end-users and non-programmers”?

  4. “reactive programming enthusiasts”. Ha.

    Regarding switching focus to end user programming, I’m with you 100%. I showed you a very early version of Aquameta last time you were in town, with similar goals. Well, we’ve been going around town talking about it, and the response from developers has been very underwhelming, sometimes even subtly resistant to the entire goal of making software development easier. “A solution looking for a problem.” That kind of thing. (Don’t get me started…) But end users get really excited, as do people trying to teach programming, and business users who hate their software and can’t change it. There’s a ton of pressure to simplify software creation, but that pressure isn’t coming from programmers. They don’t have that problem; they can already create software. It’s precisely the wrong audience! I think developers and tool-smiths just have really different focuses. Don’t get discouraged; your people are out there. The goal is a massively important social cause that few people are looking at. Now we just have to deliver.

    If you have a great idea for a GUI, I hope you’ll check out Aquameta again now that it’s a little further along. We’re still not to 0.1 but it’s up and open source now. Our GUI stinks but that’s because 90% of the work has gone on under the hood, to try to build an all-data programming stack. You don’t have to replace the whole stack, just datafy it and then it’s easy to build GUIs against. You just write GUIs that manipulate data in the database. I don’t know what the GUI should look like though, so if you have some ideas maybe we could collaborate on it. I swear that datafication of the stack is a huge piece of the puzzle of how we make programming a lot easier.

    I also really like what Rektide was saying about breaking down the idea of the box. WebRTC has some pretty amazing potential to help with this. Seems like we’re just scratching the surface of what can be done with true in-browser p2p comm.

    1. Check out Kaya for an excellent way to expose the reactive relational model to a “spreadsheet IDE”. It seems its eventual consistency persistence layer, prototype orientation and flexible schema semantics might demonstrate Aquameta’s strengths.

      Hopefully Broderick will take two way dataflow as seriously as Edwards has.

    1. I lead the FOAM project, and it’s probably not a coincidence that it could be applicable to Jonathan’s work, as I’ve been following his work since at least 2006-2007, and am a big fan.

      You can see a FOAM version of both the Fahrenheit/Celsius and Todo apps from the DemoCat link provided by Philip above. Not nearly as nifty as Subtext, but as close as you can get with regular JS (I think).

      In the F/C example, you need to specify the formula both ways yourself:

      The relevant part is:

      relate(this.c$, this.f$, this.c2f, this.f2c);

      c2f: function(c) { return 9/5 * c + 32; },
      f2c: function(f) { return 5/9 * (f - 32); }

      Which creates a bi-directional binding between the c and f values.
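
      For readers who want the gist without FOAM itself, here is a rough sketch (not FOAM’s actual implementation) of what such a relate() has to do: two observable values, each update propagated to the other side through the given conversion, with an equality guard to stop the echo from looping forever.

```javascript
// Not FOAM's implementation; just the shape of the mechanism.
function observable(value) {
  const listeners = [];
  return {
    get: () => value,
    set(v) {
      if (v === value) return;        // guard: stop the update echoing forever
      value = v;
      listeners.forEach(fn => fn(v));
    },
    watch(fn) { listeners.push(fn); }
  };
}

function relate(a, b, aToB, bToA) {
  a.watch(v => b.set(aToB(v)));       // a changed: push converted value to b
  b.watch(v => a.set(bToA(v)));       // b changed: push converted value to a
}

const c = observable(0);
const f = observable(32);
relate(c, f, c => 9 / 5 * c + 32, f => 5 / 9 * (f - 32));

c.set(100); // f becomes 212
f.set(50);  // c becomes 10
```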

      What Jonathan calls a ‘series’, we call a DAO (Data Access Object) in FOAM.

      There’s now a video available of a recent presentation I gave on FOAM:

      At one point I show how we do two-way binding with time.

      1. Very interesting Kevin. Many people have tried to build a practical Model-based programming environment but these efforts always seem to collapse under their own weight. What are you doing differently this time?

        The key question I ask about these approaches is: what is the learning curve? Must you already understand the technologies that are being layered over? If you still need to understand HTML, CSS, JavaScript, HTTP, REST, SQL, … then you are actually increasing the learning curve to gain productivity. That is not an attractive deal for many people.

        Please consider submitting to the Future Programming Workshop.

        1. > What are you doing differently this time?

          Short answer:
          We never edit or check-in generated code.
          Design Patterns make modelling and meta-programming viable.
          We pay careful attention to the design of the software that we create.
          We generate fine-grained components and then use contexts and facades to compose them.
          We augment rather than replace the target language.
          We rely on a small set of strong canonical interfaces.
          We’re Feature-Oriented.
          Active-Models are retained at runtime allowing for data-driven programming.
          FOAM is itself modelled, making it small and uniform.
          Models are first-class data.
          Models can be edited/viewed in our MVC framework.
          Functional Reactive Programming applied to Models gives live-coding.

          Long answer:

          Code is a Liability

          Modelling tools and code generators would generate you a lot of code, but this code invariably needed to be edited to add additional or custom behaviour. After generating the code you would load it up in your editor and search for the /* insert code here */ comments. While these kinds of tools got you off to a good start, you soon ended up with more code than you could maintain, kind of like winning a luxury home in a lottery and then not being able to afford the taxes. The problem with this kind of approach is that code needs to be maintained, so it is actually a liability, not an asset. Code generators were actually liability generators. The real asset is the features that the code supplies; what you want is the features, not the liability. The solution is to never generate any code which needs to be edited or checked in. Generated code should be part of the build process or runtime, but never edited and never checked in.

          Modelling isn’t Enough, Good Design is Still Required

          The solution to the “extension without modification” problem, described above, is Design Patterns. Modellers only take you x% of the way, but you need some way to add the remaining (100-x)%. We’ve already said that modifying the code is a bad idea, but the “open to extension, closed to modification” feature of Design Patterns, is exactly what we need to solve this problem. The careful application of good Design (Patterns) allows us to generate software components which can be extended externally, through Strategies, Template Methods, Decorators, Composition, Chain of Command, etc. rather than internally by modifying their own code. It’s a shame that by the time Design Patterns came around to make code-generation practical, the idea of code-generation had largely been discredited and abandoned.

          Every time we thought good design didn’t matter because it was just generated code, we turned out to be wrong and ultimately ran into problems which forced us to go back and fix the design. Even code reuse is important, which you might not think matters when you’re just generating the code anyway, but for web apps, and especially mobile web apps, you end up with large download sizes and long download times (although you should really be downloading your small models to the client and then expanding them there).

          You’re much better off having a poorly designed modeller that produces well designed systems than a well designed modeller that produces poorly designed systems. But since FOAM produces well designed output (IMO), and it generates itself, the modeller and the systems it generates are of equivalent quality (at one level).

          Fine-Grained Components

          Strongly related to the previous point is the use of fine-grained components. Rather than generating large monolithic components or systems from Models, we generate many small fine-grained components. These small components are designed to be used together to form a working system, but you still have the option of replacing, augmenting, rearranging, or recomposing them in some different way. Kind of like the difference between getting a Lego toy car instead of a diecast toy car: the Lego gives you more reuse and customization options. One problem with fine-grained component models in the past has been that you’re then required to do the work of composing the many small components into a larger system. We handle this problem in two ways: Context-Orientation and Facades. Context-Orientation is an implicit hierarchical dependency management method which greatly reduces the need for explicit composition. Facades create single components which hide the complexity of composing many smaller ones. For example, in FOAM we have an EasyDAO which is responsible for composing many common DAO (Data Access Object) Strategies (actual DAO implementations that store data) and Decorators (proxies which provide some additional functionality on top of a Strategy) into a working DAO composite. Strategies might be local-storage, IndexedDB, MongoDB, a REST server, a simple array, etc., and decorators might be things like sequence number assignment, GUID assignment, logging, profiling, validation, authentication, caching, etc. Not all combinations make sense and some are mutually exclusive, but the EasyDAO takes care of this.
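
          A rough sketch of that Strategy/Decorator split (hypothetical shapes, not FOAM’s real DAO interface): a Strategy actually stores data, and each Decorator wraps any DAO to add one orthogonal concern.

```javascript
// Strategy: the thing that actually stores data (here, a plain array).
function arrayDAO() {
  const rows = [];
  return {
    put(obj) { rows.push(obj); return obj; },
    select() { return rows.slice(); }
  };
}

// Decorator: assigns sequence numbers, then delegates.
function seqNoDecorator(delegate) {
  let next = 1;
  return {
    put(obj) { return delegate.put({ ...obj, id: next++ }); },
    select: () => delegate.select()
  };
}

// Decorator: records every put, then delegates.
function loggingDecorator(delegate, log) {
  return {
    put(obj) { log.push('put ' + JSON.stringify(obj)); return delegate.put(obj); },
    select: () => delegate.select()
  };
}

// Composing by hand what something like EasyDAO would compose for you:
const log = [];
const dao = loggingDecorator(seqNoDecorator(arrayDAO()), log);
dao.put({ name: 'Jonathan' });
dao.put({ name: 'Kevin' });
dao.select(); // [{ name: 'Jonathan', id: 1 }, { name: 'Kevin', id: 2 }]
```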

          Augment, Don’t Replace

          We aren’t trying to model 100% of the solution. We’re perfectly happy with a 90% solution (in practice, in Java, where you can count the lines of code, we typically generate between 80% and 98% of the code), and then let you code the rest in your target language(s). A lot of the custom code is only a few lines, or even a single line. This code is actually part of the model and includes things like pre- and post-property-set functions, custom validation, method bodies, etc. When our DSL and target-language code work together, but the code is embedded in our DSL rather than the other way around, we call this an “inverted internal DSL”. Our mLang DSL for specifying database queries, on the other hand, is a regular internal DSL.

          However, for some specific problems we are actually able to come up with 100% solutions. The Chrome App Builder is written in FOAM and is used to Model and generate FOAM ChromeOS Kiosk and Digital Signage apps. This is an entirely code-free modeller, but only for a limited domain. (The app is very popular, with 16k users in the last week, and currently 20% of all new Chrome apps are built with it.)

          Strong Canonical Interfaces

          FOAM has a small number of canonical interfaces that it reuses heavily. Many of these are generated for developers by FOAM, but most of what a FOAM developer does is implement, decorate, or compose these few interfaces: DAO, View, Validator, Authenticator, Agent, Action, Comparator, Adapter, Factory, Parser, Sink, Predicate.

          More implementations behind fewer interfaces.

          Best to watch the video for this one.

          Active Models
          FOAM objects retain a reference to their Model at run-time, which is why we call them “Active Models”. This makes data-driven programming easy. By data-driven, we mean code that reflects on or interprets an object’s Model at runtime to provide some kind of functionality for any type of Modelled data. This is the main alternative to code-generation, which we also support. Examples include a generic DetailView, TableView, various types of DAOs, JSON adapters, XML adapters, etc.

          > var john = Person.create({fName: 'Jonathan', lName: 'Edwards'});
          > john.toJSON();
          {
            "model_": "Person",
            "fName": "Jonathan",
            "lName": "Edwards"
          }
          // You can reference any object's model via the model_ property:
          > john.model_.toJSON();
          {
            "model_": "Model",
            "id": "Person",
            "name": "Person",
            "properties": [
              { "name": "fName" },
              { "name": "lName" }
            ]
          }
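
          To make “data-driven” concrete, here is a sketch (hypothetical model shape, not FOAM’s actual API) of a generic detail view that knows nothing about Person, only how to walk any object’s model_ at runtime.

```javascript
// Hypothetical model shape, not FOAM's actual API.
const PersonModel = {
  name: 'Person',
  properties: [{ name: 'fName' }, { name: 'lName' }]
};

// Every created object keeps a runtime reference to its model.
function create(model, values) {
  return { model_: model, ...values };
}

// Generic: renders ANY modelled object by reflecting on model_,
// instead of hard-coding one view per class.
function detailView(obj) {
  return obj.model_.properties
    .map(p => p.name + ': ' + obj[p.name])
    .join('\n');
}

const john = create(PersonModel, { fName: 'Jonathan', lName: 'Edwards' });
detailView(john); // "fName: Jonathan\nlName: Edwards"
```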

          Modelled Model
          FOAM’s Meta-Model is its own Model. Normally you have models described by a meta-model, that meta-model described by a smaller, weaker meta-meta-model, and perhaps even a meta-meta-meta-model, and so on until it is no longer worthwhile. FOAM avoids this regress by looping back on itself.

          > Model.model_ === Model;

          The trick to this is bootstrapping. We start with a simple hand-coded BootstrapModel and then the real (Meta)Model, which is its own (MetaMeta)Model. We use the BootstrapModel to compile the Model half-way, and then use the Model to compile itself the rest of the way. We have a line of code which reads:

          Model = Model.create(Model)

          Which would be the equivalent of writing a C compiler in C and then using a bootstrap compiler to compile and replace itself with:

          cc cc.c -o cc

          That FOAM bootstraps itself in this way is the primary reason why it is so small and uniform.

          Code is Data, Really
          Everyone knows that code is data, but very rarely is it really first-class data. FOAM Models are modelled, so you can do anything with them that you could do with any other modelled data: display it in an MVC View, store it in a DAO, query it with an mLang (our internal-DSL for database queries), represent it in various languages, convert it to XML or JSON, send it across a network, etc. Meta-Programming now becomes just like regular programming. A refactoring tool would just apply an mLang to a DAO of Models in exactly the same way that an accounting application might apply an mLang to a DAO of accounts receivable.
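
          A sketch of that point (illustrative combinators standing in for mLangs): the same tiny query language runs over business data and over model definitions alike.

```javascript
// Illustrative combinators standing in for mLangs.
const EQ  = (field, value) => row => row[field] === value;
const AND = (...preds) => row => preds.every(p => p(row));
const where = (dao, pred) => dao.filter(pred);

// Querying business data...
const invoices = [
  { customer: 'acme', paid: false },
  { customer: 'acme', paid: true  }
];
const unpaid = where(invoices, AND(EQ('customer', 'acme'), EQ('paid', false)));

// ...and querying model definitions, with the exact same machinery.
const models = [
  { id: 'Person',  properties: ['fName', 'lName'] },
  { id: 'Invoice', properties: ['customer', 'paid'] }
];
const person = where(models, EQ('id', 'Person'))[0];

unpaid.length;     // 1
person.properties; // ['fName', 'lName']
```

          A refactoring tool is then just another query-and-update over a DAO whose rows happen to be Models.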

          MVC Works on Code
          A continuation of the previous point. MVC is a great design for creating applications which view or edit data. You can have multiple views of the same data and have updates to one reflected in all of the others. You can simultaneously view/edit a Model in JSON, XML, Graphical, and UML views, or create new views.

          Functional Reactive Programming
          If you watch the FOAM video you’ll see that we make extensive use of FRP for animations, physics, live-coding, and just generally avoiding callbacks and making MVC simpler. We’re very happy with how this feature has turned out, but it’s mostly orthogonal to the modelling features.

          Some of the above points are really just different ways to say the same thing.

          > The key question I ask about these approaches is what is the learning curve?

          We recently had a developer survey asking new FOAM developers about this. The responses that we got ranged from: “I see Nooglers producing impressive work in a week” to “it takes about three weeks to become comfortable”.

          > Must you already understand the technologies that are being layered over?

          It depends. We don’t fully layer over top of your target language, so you still need to know JS or Java, just not all of it. However, some technologies, like JDBC, SQL, IndexedDB, RMI, and MongoDB, we completely abstract away. Some technologies, like HTML and CSS, we partially abstract away. If you’re just going to use existing Views, then you don’t need to know these technologies, but if you want to create new Views, then you do need to know a subset of them.

          At my previous company, with the predecessor to FOAM, the developers joked that they were unemployable anywhere else because they didn’t know anything about: files, sockets, threads, RMI, JDBC, SQL, XML, Servlets, JSP’s, etc.

          > Please consider submitting to the Future Programming Workshop.

          That looks very interesting and applicable.

  5. Leaving Callback Hell is not a good enough trick, ever since Microsoft introduced async (yes, I know, Haskell had it first, but it was Microsoft Research doing Haskell before moving the ideas into C# and VB.NET).

    The problem is more complicated than that, and you would probably be better off dropping a lot of flowery buzzwords from your Two-way dataflow abstract. Comparisons to iOS never meant anything to me, and I didn’t complain since I figured I wasn’t your audience, but maybe that is part of your problem, after hearing your “underwhelming” feedback.

    If you want a simpler message, look at how the Node.js guy talks. He’s just a guy.

    1. I’m working on a different Callback Hell. The name has been appropriated to mean the SYNTACTIC mess of continuation passing style with anonymous function callbacks. All the modern async/await stuff is a way to solve this syntax problem. In my opinion a much better solution to that problem is to have lightweight cooperatively scheduled threads with synchronous blocking calls. This whole async mess is just because Unix threads are heavyweight. As usual we are willing to trade arbitrary code complexity for compatibility.

      But none of that addresses the SEMANTIC HELL that you still have no idea in what order your callbacks/threads are going to run relative to all the others in the program. When state mutation is involved that is a source of unending complexity and grief, and there is no syntactic bandaid to cure it. That is the problem I’ve been working on, but my solution is too radical.
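
      A tiny sketch of that semantic problem: the two steps below are each correct in isolation, yet the final state depends entirely on the order the scheduler happens to run them in. Here the order is forced explicitly to show both outcomes; in a real async program nothing in the syntax tells you which one you get.

```javascript
// Two steps that are each fine in isolation; only their order is in question.
function runInOrder(first, second) {
  let state = 0;                                 // shared mutable state
  const steps = {
    addTen: () => { state = state + 10; },
    double: () => { state = state * 2; }
  };
  steps[first]();   // in a real program the scheduler picks this order, not you,
  steps[second]();  // and no syntactic bandaid warns you that it matters
  return state;
}

runInOrder('addTen', 'double'); // (0 + 10) * 2 = 20
runInOrder('double', 'addTen'); // (0 * 2) + 10 = 10
```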

  6. See the Diogenes Institute wiki’s full spreadsheet for an intermediate step toward “programming for everyone” at the document level.

    I was given a challenge by a co-founder of the DoE’s EIA to come up with a model of a national CO2 photosynthesis macroengineering project. The idea is to take all CO2 effluent from fossil-fuel electric plants, pipe it to the desert southwest where insolation is maximum, and photosynthetically fix the CO2 in marketable algae biomass of high enough value, and at low enough cost, to pay for reengineering all coal-fired power plants to clean up their emissions and ship their wastes back to their mines of origin.

    This started out as an ordinary spreadsheet, went to a stock-and-flow dynamic model, and finally required me to modify the MediaWiki software to permit each page of the wiki to act as a “cell” in a spreadsheet: each page not only contains references justifying its formula, but also recalculates whenever anything changes, though only the things in the dependency tree of the “cell” that changed.
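
    The recalculation scheme described can be sketched roughly like this (illustrative code and made-up ratios, not the actual wiki modification): each “cell” has a formula over other cells, and a change recomputes only the cells downstream in the dependency tree.

```javascript
// Illustrative only: pages as spreadsheet cells with dependency-tracked recalc.
function makeSheet() {
  const cells = new Map(); // name -> { deps, formula, value }

  function define(name, deps, formula) {
    cells.set(name, { deps, formula, value: undefined });
  }

  // Direct and transitive dependents of a cell.
  function dependentsOf(name) {
    const out = new Set();
    let grew = true;
    while (grew) {
      grew = false;
      for (const [n, cell] of cells) {
        if (out.has(n)) continue;
        if (cell.deps.includes(name) || cell.deps.some(d => out.has(d))) {
          out.add(n);
          grew = true;
        }
      }
    }
    return out;
  }

  // Changing one cell recomputes ONLY its dependents, nothing else.
  function set(name, value) {
    cells.get(name).value = value;
    const dirty = dependentsOf(name);
    for (let pass = 0; pass < dirty.size; pass++) {  // naive fixpoint; a real
      for (const n of dirty) {                       // engine would sort topologically
        const cell = cells.get(n);
        cell.value = cell.formula(...cell.deps.map(d => cells.get(d).value));
      }
    }
  }

  return { define, set, get: name => cells.get(name).value };
}

// Toy version of the CO2 model (ratios and prices are made up):
const sheet = makeSheet();
sheet.define('co2Tons',   [],            () => undefined);
sheet.define('algaeTons', ['co2Tons'],   co2 => co2 * 1.8);
sheet.define('revenue',   ['algaeTons'], algae => algae * 400);

sheet.set('co2Tons', 1000);
sheet.get('revenue'); // 720000
```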

    This might be thought of as a kind of mix between “literate programming” and spreadsheets.

    Two-way dataflow would, of course, be of tremendous benefit, as there are times when what one is attempting to do is find the “input” values that will yield desired values in dependent “cells”. This would be like the “goal seeking” features of conventional spreadsheets, perhaps maximizing particular parameters through stochastic search techniques such as simulated annealing. Such under-constrained two-way dataflow is properly viewed as a relational reactive programming system in the absence of optimization criteria for searching the parameter space.
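
    In its simplest deterministic form, goal seeking is just an inverse search over the dataflow: pick the input that drives a dependent cell to a target value. A sketch, assuming a monotonic single-parameter model (stochastic methods like simulated annealing only become necessary beyond that):

```javascript
// Bisection goal-seek: assumes f is increasing on [lo, hi] and that the
// target lies between f(lo) and f(hi).
function goalSeek(f, target, lo, hi, tolerance = 1e-6) {
  while (hi - lo > tolerance) {
    const mid = (lo + hi) / 2;
    if (f(mid) < target) lo = mid; else hi = mid;
  }
  return (lo + hi) / 2;
}

// Illustrative toy model: revenue = CO2 tons * 1.8 (algae ratio) * $400/ton.
const revenue = co2 => co2 * 1.8 * 400;

// Which CO2 input makes revenue hit $1,000,000?
const co2Needed = goalSeek(revenue, 1000000, 0, 1e7);
// co2Needed is roughly 1388.9
```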

    Actually, my motive for writing all this down here is that, at present, I am looking for the Subtext 5 source so I can use it in a related project, where I want to program a web page that lets some potential clients play around with the parameters of a subsystem and see the consequences for their requirements.

    Where can I get Subtext 5?

Comments are closed.