Incompleteness

Bad news: co-action is incomplete. Simple example:
x=true
y=true
if (x') {y'=false, z'=!z'}
if (y') {x'=false}

There is only one coherent execution order. It results in x’=false.
But co-action will get stuck in the first conditional because of the cyclic reference to z. To get unstuck would require backtracking to explore alternative execution orders. That is scary.
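To make the stuck case concrete, here is a toy dependency check in Python. Everything here is invented for illustration (the statement names, the read/write sets, the algorithm) and is not the actual co-action semantics: draw an edge from the statement that writes a post-state to every statement that reads it, then look for a cycle. The z'=!z' assignment makes the first conditional depend on itself.

```python
# Toy model: each statement has the set of post-states it reads and
# the set it writes. This is an illustrative sketch, not co-action.
stmts = {
    "if1": {"reads": {"x'", "z'"}, "writes": {"y'", "z'"}},  # if (x') {y'=false, z'=!z'}
    "if2": {"reads": {"y'"},       "writes": {"x'"}},        # if (y') {x'=false}
}

# Edge from writer to reader for every shared post-state location.
edges = {s: set() for s in stmts}
for w, sw in stmts.items():
    for r, sr in stmts.items():
        if sw["writes"] & sr["reads"]:
            edges[w].add(r)

def find_cycle(edges):
    """Return some node that lies on a cycle, or None if acyclic."""
    def dfs(node, path):
        if node in path:
            return node
        for nxt in edges[node]:
            hit = dfs(nxt, path | {node})
            if hit:
                return hit
        return None
    for start in edges:
        hit = dfs(start, set())
        if hit:
            return hit
    return None

print(find_cycle(edges))  # prints "if1": the z' self-reference is a self-edge
```

Note that in this graph, even without the z' self-edge, the two conditionals still form a mutual cycle through x' and y', so a scheduler that only runs a statement once its read post-states are settled has nowhere to start.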

It seems the only way to make co-action complete is to make coherence deterministic. That is my strong preference anyway. Unfortunately I have also discovered that the “client/server barriers” I proposed in the NEPLS talk do not work in all cases. I am thinking about alternatives, but so far they all require making static declarations in the code, which is a burden to the programmer. Perhaps it is better to just live with incompleteness for screwy code like the counterexample, which probably never actually occurs. Either way, it is not as pretty a picture as I was hoping for.

19 Replies to “Incompleteness”

  1. I think I’m missing something: why is z' = !z' allowed?

    Also, if you simply remove z'=!z', how would there be any valid (or “coherent”) execution order?

    1. I spent some time thinking about this (actually, not thinking about it). If I understand you correctly, the statement z' = !z' is the kind of statement that is perfectly valid in your normal imperative programming language, but not allowed in co-action. What you’re saying is that “parts” of a co-action program could reduce to a cycle like that. If that’s true, I’m pretty sure the ! is irrelevant; z' = f(z') would be invalid regardless of whether f is the identity function or something else. Here’s why:
      a = b
      b = c
      c = a

      This can be ordered in three different ways, each making all three variables equal (all of a, b, c = a, all = b, or all = c).

      To clarify my second point in my parent comment, isn’t this an invalid program because it has multiple coherent execution orders that yield different end results?
      if (x') {y'=false}
      if (y') {x'=false}
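      As a quick check of that claim, here is a toy simulation in Python. The interpretation is my own reading, not the paper’s semantics: each conditional reads the post-state as it stands when the statement runs, with post-states seeded from the pre-states x = y = true. The two orders yield different final states.

```python
# Toy simulation (invented interpretation, not the real semantics):
# run the two conditionals in a given order, letting each read the
# post-state as it stands at that moment.

def run(order):
    post = {"x": True, "y": True}   # post-states seeded from pre-states
    for stmt in order:
        if stmt == "if_x" and post["x"]:
            post["y"] = False       # if (x') { y' = false }
        elif stmt == "if_y" and post["y"]:
            post["x"] = False       # if (y') { x' = false }
    return post

print(run(["if_x", "if_y"]))   # {'x': True, 'y': False}
print(run(["if_y", "if_x"]))   # {'x': False, 'y': True}
```

Two orders, two different end results, which is exactly the ambiguity the question is about.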

      1. Luke, that’s correct. All cycles, even identity cycles, lead to problems. Saying z’=!z’ is just my way of making the cycle obviously unacceptable and avoiding a more detailed explanation.

  2. In your Onward paper, you write:

    The reaction specified above duplicates the built-in reaction
    of the Sum function. The changed value of the end field is
    referenced as end’, adopting the convention of specification
    languages that primed variables refer to the post-state. Nonprimed
    references in reactions always refer to the pre-state
    prior to all changes in the input transaction. The post-state
    of end has the post-state of length subtracted from it to compute
    the post-state of start. The post-state of length is used
    rather than its pre-state because it could have changed too,
    discussed further below.

    and

    A reaction
    affects another in only one way: if it writes (assigns to) a
    location whose post-state is read by the other.

    I thought that post-states could not be modified within a co-action? If so, how is z’ = !z’ possible? Isn’t a progression precisely what exists to support Luke’s example:

    a = b
    b = c
    c = a

    i.e.,

    BoundedAction: Action
    {
      task, do =>
      {
        prog (a, b, c) [
          a := b;
          b := c;
          c := a;
        ]
      }
    }

    translates into imperative, block-structured programming language:

    var a = b;
    var b = c;
    var c = a;

    In which case, execution order is actually deterministic. Unless I am missing Luke’s point.

    Looking back, part of my and Luke’s confusion may be due to the clumsy syntax in the paper… to be fair, at the time I read the paper, I thought the syntax was clean.

    1. Yes, the syntax in the paper was not fully worked out. I have since defined a precise language and semantics. The counterexamples I have been using are actually sugared forms of that formal syntax. I can’t really explain the details here.

        1. Could you use shape analysis to detect cycles, and then offer the programmer a number of progressions as an alternative?

          …just throwing ideas out there. The idea is that rather than making the programmer do a static declaration, they CHOOSE a static declaration. In this way, the programmer is forced to come up with an abstraction to resolve any programming problems that match Pontus Johnson’s “n-body problem” modeling expressiveness challenge.

          I’m not sure I get the read/write barriers approach. That’s also why I don’t see why it fails.

  3. Why do you have derivation and reaction? If some value derives like x = f(a,b) couldn’t you achieve the same effect by setting a reaction on a and b that sets x to f(a,b)?

    1. Jules, that is a good question. The best answer is that I am not sure anymore. Derivation is lazy and purely functional, and is the normal semantics of getters, whereas reaction is a semantics for setters. But my recent work formalizing coherence does suggest that I could use it for getters also. So I would say the issue of derivation is up in the air at the moment in my mind. I am going to see how far I can get with a purely reactive semantics before considering whether to add derivation back in.
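    The distinction can be sketched in a few lines of Python. Class names and methods here are invented for illustration (this is not the coherence language): a derivation recomputes x = a + b lazily on every read, while a reaction runs eagerly whenever an input is set and stores the result.

```python
# Illustrative sketch only: "derivation" vs "reaction" as getter vs
# setter semantics. Both maintain x = a + b; they differ in when the
# function runs.

class Derivation:
    """x is recomputed from a and b every time it is read (a getter)."""
    def __init__(self):
        self.a, self.b = 0, 0
    @property
    def x(self):
        return self.a + self.b            # lazy: runs on get

class Reaction:
    """Setting a or b triggers a reaction that updates a stored x."""
    def __init__(self):
        self._a, self._b, self.x = 0, 0, 0
    def set_a(self, v):
        self._a = v
        self.x = self._a + self._b        # eager: runs on set
    def set_b(self, v):
        self._b = v
        self.x = self._a + self._b

d = Derivation()
d.a, d.b = 2, 3
print(d.x)        # 5

r = Reaction()
r.set_a(2)
r.set_b(3)
print(r.x)        # 5
```

The observable values agree; the question in the thread is whether the lazy getter form earns its keep as a separate mechanism.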

  4. Just a whacky thought, embrace the incompleteness! Maybe you should check out the implementation of a physics engine, which is all about dealing with coherence in an approximate rather than perfect way. In particular, you’d often end up writing things like:

    a’ = b; b’ = a;

    Or even more realistic:

    a’ = .9*a + .1*b;
    b’ = .9*b + .1*a;

    The cycle is perfectly valid and the values eventually converge after a few steps. We can never refer to a’ and b’ in the current step, only a and b. I know this isn’t a universally useful model, but it’s worth thinking about a bit. Also, there are other ways to solve multiple recursive constraints (in particular ODEs), but this is my personal favorite because it’s easy to program.
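    The convergence is easy to reproduce in Python. This uses a symmetric form of the update (each value keeping .9 of itself and taking .1 of the other); each step reads only the previous a and b, never a’ and b’, so the cycle is harmless.

```python
# Physics-engine style relaxation: simultaneous update that reads only
# the previous step's values. The gap between a and b shrinks by a
# factor of 0.8 per step, so both converge toward their average.

a, b = 0.0, 10.0
for _ in range(50):
    a, b = 0.9 * a + 0.1 * b, 0.9 * b + 0.1 * a
print(a, b)   # both close to 5.0
```

After 50 steps the residual gap is on the order of 0.8**50 of the original, i.e. negligible.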

    1. One problem with that is that the system isn’t really interesting anymore if you do that 😉 Then it reduces to mutable references with callbacks that get triggered when the value they hold changes.

      1. Of course it would still be interesting to see which abstractions and libraries you can build on top of it, and to see if the incoherence is a problem. But the system itself isn’t new and publishable in that case…

      1. Not necessarily, I only use physics right now in the context of UI which is an obvious fit because for visual properties, temporary incoherence is called “animation.”

        There are other ODE solvers out there that work in one step. I would just look at the field for some ideas/inspiration, because I see that your examples are similar to what I see there.

          1. Maybe start here:

            http://www.teknikus.dk/tj/gdc2001.htm

            It’s a very simple technique and easy to understand and implement. There are more complicated techniques, like Gauss-Seidel or split impulses, but you don’t need to understand them to get a feel for the solution space.

            I also think what you are doing might be related to general constraint programming…

    1. Thanks Paul, it is worth taking a look at Harel’s semantics to possibly learn some tricks. I doubt they address my problem though, because Statecharts assume it away: parallel state transitions are completely invisible to each other. Each transition must be a pure function of the prior state. I want to assemble synchronous dataflows within a step. Much harder.

Comments are closed.