Draft Onward paper

At long last, I have posted a draft of the paper I am submitting to Onward this year. Still pretty rough, but it is due a week from Monday, so I am pushing it out now to get feedback in time. Coherent Reaction. All comments welcome. What are the hard parts to understand? What is the related work?

As you can see, I have changed the name from Juncture to Coherence.

Thanks – Jonathan

Update: Submitted! Revised version at above link. Thanks for everyone’s help.

14 Replies to “Draft Onward paper”

  1. Hi, I found your website through comments from LtU, and I gotta say, I really like the ideas you put into subtext and coherent reaction. Here are my two cents on Coherence:

    line 28: what’s the apostrophe doing in end’? Is it necessary?
    How do you check for side effects? Is the value cached? It may be slow if it isn’t.

    Hello() doesn’t fit in with the definition up to this point. If you start with Hello.do:= [], it would be easier to comprehend. You can then say Hello(param) is automatically translated into Hello.do:=param.
    What happens if you want two reactions to perform the same thing? Is there a mechanism for it, or replicate code?

    rendered on 49 -> rendered on 48

    line 43: currentPage = blah -> currentPage := blah — unless it’s supposed to be lazy
    line 50: again… your function definitions aren’t clear, TaskForm(param) is TaskForm.(firstattribute):=param and displays TaskForm.val

    line 57: function defs again: that’s NumberControl/0, but you called NumberControl/1. Do you want #1 to be the first parameter, or let the first attribute be the first parameter?
    line 61: value = DecimalString(#1) doesn’t seem reversible. You could have used reactive / derivative aspects to accomplish this: value = itoa(#1) => #1=atoi(value’)

    “but all there consequences are canceled. “-> all “their”

    line 74 — do you need “.” ? can’t you just use end’?
    not too sure if the error will be caught in the right place all the time; if you change “length”, it might also cause “end” to go over. It might be dependent on whether the constraint is on something that is reactive or derivative, making it hard to debug.

    header: [“….”,”….”] -> header := [“….”,”….”] — not sure why you’d want to create a type, then make a variant of it.
    [NumberControl; -> [NumberControl,
    Map(tasklist,Row) -> Map(Row,tasklist) , same with filter
    I don’t see the difference between virtual sequences and normal lists.

    line 89 — same comment. fxn definition.

    I still need to process virtual trees and error throwing and buffers (which unfortunately is a big part to your paper). Maybe because it’s not fleshed out enough? Sounds cool though ;-).

    Also something weird you do is that the original object is labeled, and some subsequent variations aren’t: task:task1 , project:project1 rather than the other way around.

    I’m guessing it’s not implemented yet and these are ideas for the language? cuz in my (limited) experience, a draft of a language looks very different from the final implementation.

    In general, I like the ideas, but right now it’s a bit too abstract in my head to meaningfully comment on much. I’m a bit worried about how much processing/storage is done on each step, since it sounds like the end system might not scale too well.
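
    The reversibility point in the line-61 note (value = itoa(#1) => #1=atoi(value’)) can be sketched outside Coherence. Below is a minimal Python illustration of a derive/react pair on a single field; the BoundField class and all its names are invented for this sketch and are not the paper’s actual semantics.

```python
# Hypothetical sketch of a bidirectional derivation: a derived field pairs a
# forward function (derive) with a reaction (react) that inverts it.
class BoundField:
    def __init__(self, source, derive, react):
        self.source = source          # dict holding the source field "n"
        self.derive = derive          # source value -> derived value
        self.react = react            # newly written derived value -> source value

    def get(self):
        # Derivation: recompute from the source on every read.
        return self.derive(self.source["n"])

    def set(self, new_value):
        # Reaction: writing the derived field propagates back to the source.
        self.source["n"] = self.react(new_value)

state = {"n": 42}
# value = itoa(#1)  paired with  #1 = atoi(value')
value = BoundField(state, derive=str, react=int)
assert value.get() == "42"            # forward derivation
value.set("7")                        # reaction runs the inverse on the new value
assert state["n"] == 7
```

    Writing the derived field flows back through react, which is the reversibility the note asks for; DecimalString would need such an explicit inverse to participate.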

  2. “Derivation” seems technically imprecise and pedagogically confusing initially. I would suggest the term “binding action,” to clarify phrases such as “derivation is bidirectional” and “reaction is the opposite of derivation” by using an intuitive term first and then describing how some fields are derived from other fields. This term lends itself to the description “every binding action has a binding reaction,” which seems to me to be more technically precise. Also, on the first column of page two, the phrase “each one executed before any others that if affects” needs an “it” in place of “if.”

  3. Hmmm… First class functions through inheritance (variation) … sounds cool, but too bad you can’t make a variation of another function without changing things.

    What about fxn’s with 2 params when you only supply 1, can that be made into a function as well?

    do => {#2.start:=#1.start;
    #2.length:=Sum(#1.length, #2.length)}

    Job:task1(start:=0, length:=5)


    When you call CombineWJob(task2), if it can map task2 to #2 for CombineTasks, it’ll be pretty cool.

    WeirdFxn:fxn {
    do=> {Print(“first part of the function”), partTwo:=1} //not sure how else to make the reactive call
    partTwo = 0 =>{Print(“second part of the function”)}
    }

    WeirdFxn2:WeirdFxn(partTwo=0=>{Print(“modified second part”)})

    I guess we can change functions like this, but you’d need to know beforehand that you’re going to modify it later, but now it seems like it’s similar to inheritance.

    Would a recursive call look like this:

    Rec:fxn{ do=> {
    if (leq(#1,0))
    then {Print(#1)}
    else {Print(#1), Rec(Sum(#1,-1))}}}

    It seems awfully verbose for something you can write in Python in one line: range(0,6). At least in Lisp, you can write a macro for loops.

    By the way, I’m taking back a lot of my comments (like the one about Hello()). Too bad I can’t edit what I wrote. Just ignore comments that sound stupid.

  4. “like a digital watch” => replace simile with direct example: “such as a real-time patient heart monitor”

    [Someone else also objected to this example. But it is the classic example, used in all the Harel papers. — Jonathan]

    @We still have a lot to learn about building interactive appli-
    cations. We have been building desktop GUI apps for three
    decades, and web apps for almost two, yet it is still diffi-
    cult, time-consuming, and frustrating. Witnessing this fact
    is the continual ferment in application programming frame-

    I think you are missing the wider scope of the problem, and it trickles down throughout your paper.

    Researchers have extensively proposed ways to bring computer science concepts to providing correct, explicit, and performant GUIs. However, the most successful techniques to date are rooted in broad generalizations without the support of mathematics. Examples of these heuristic architectures include Presentation-Abstraction-Control, Model-View-Controller, Hierarchical-Model-View-Controller, Model-View-View-Model, etc. The viewpoint is that all we need are three layers and a three letter acronym. (A portmanteau of this would be “three-layer acronym”.)

    More recently, Silverlight 2+ has attempted to change this game a bit, by formally requiring that basic interactions be described via finite state machines. Nested UI elements can cooperate through composition. Traditional HSMs do cooperation through inheritance; Miro Samek coined the term “behavioral inheritance” to describe this. However, in classical OO, substate/superstate relationships are typically represented via inheritance, not composition. So the HSM folks have always gotten this wrong. By using composition, the subject matter domain becomes more like a real-world problem domain description, too. That is why embedded systems prefer flat state machines to HSMs, in my humble opinion. However, for GUIs, the trick is to use FSMs that are guided by an invisible hand that makes the composite states behave like HSMs, except with respect to history. Roughly, this is how SL2’s VisualStateManager works.

    History is really the one aspect of Harel HSMs that is really weird, too. I’ll talk more about this later. We can debate why we want or need history constructs in the problem domain. Such discussion drives at the heart of simple systems: taking away until there is nothing left to take away.

    [The missing “conversational state” section is about using state machines in this way. I hope I get a chance to write it. I am thinking though that I probably already have too many ideas in section 3. – Jonathan]

    Of course you have the chance to write it. Just delete ideological rants like the following:

    @This is work in progress. Tree derivation has been imple-
    mented and studied in prior work on Subtext[]. The key idea
    of coherent reaction is not yet implemented. The Coherence
    language involves a number of novel ideas and unconven-
    tional approaches — too much to explain in one conference
    paper, much less evaluate. This paper focuses on two of these
    ideas, and mentions the others only in passing. Therefore the
    discussion is informal, which may frustrate readers expect-
    ing precision. This is a dilemma. Programming languages
    are a web of interlocking design choices. I believe that fun-
    damental progress requires that we alter many of these de-
    sign choices at the same time, including some that are so
    deeply entrenched that they have become assumptions. Be-
    cause these choices are interlocking, altering just one at a
    time keeps us locked into the few sweet spots that existing
    languages cluster around. But that is all we can do if we
    only discuss ideas that can be precisely defined and rigor-
    ously evaluated in 12 pages. This paper tries to communicate
    a new idea in language design informally, and to show how
    it is useful when combined with a set of other new ideas.

    Stuff like this could get your paper rejected, as it sacrifices content for hubris. On the mailing list, Thomas Lord rewrote it, but he missed the point that you don’t need it at all.

    In fact, right now your references are pretty weak. You have no room for poetry. Cut and paste that paragraph into your Programming Liberation Manifesto, and be done with it.

    Also, the whole paragraph flows awkwardly out of your introduction of your two key concepts, coherent reaction and virtual trees.

    I’ll take a brief moment to mention what references should absolutely be in your paper.

    – Andromeda Project is a database framework that addresses similar concerns to coherent reaction, except that it is targeted primarily toward the backend database as opposed to GUIs. Andromeda does provide a GUI, but it cannot capture the same sequence orderings that its database plumbing can. In fact, the GUI and the backend are separate house / separate bed projects with different objectives. Andromeda uses database triggers to enforce “causality is encapsulation”.

    – UK Researcher PMD Gray spent the greater portion of his research career working toward his magnum opus, Functional Approach to Database Management. http://www.amazon.com/Functional-Approach-Data-Management-Heterogeneous/dp/3540003754/ Currently he works on modular functional compilers targeted toward the database domain.

    – Matthias Felleisen was the first researcher to publish papers on the techniques Morris and Graham used to build Viaweb. That work focuses on using continuations to maintain conversational state. Many web frameworks, including Seaside, Apache Cocoon, and JBoss Seam, use continuations as a way to enforce “conversational state”. JBoss Seam in particular codifies this metaphor as Workspaces and Conversations. The implementation technique involves the use of “Subversion of Control”, a twist on aspect-oriented dependency injection that allows dependency outjection in addition to the traditional injection. The reason for this is to allow web-layer-aware IoC. See JSR-299 for a Sun community process specification – it is not Seam but tries to address the same problems.

    Continuations are currently the trendy technique for developing complex web “applications” where state must be maintained somewhere between requests. You need to address continuations if you want your problem domain to be the Web.

    So far, it looks from the comments on your paper here and elsewhere like nobody understands what it is you’re finagling for. Section 2.3 Point 2 says what differentiates you from continuations: “It can see the pending post-state of other fields. But in no case can it see the consequences of any changes it makes, because that would create a causal loop whereby the reaction depends upon its own effects.” I would argue very strongly that the way continuations work — passing an intermediate state to “the rest of the computation” and getting back a final result — violates encapsulation. However, what I have a hard time arguing is why violating encapsulation matters. I think it affects readability, and that has been my argument for some time now. (I have a few other sneaky reasons why they’re bad, but they require proof for the average programmer to understand.)

    layman reader: http://www.ibm.com/developerworks/library/j-contin.html
    The references section is excellent and includes most of the key academic sources on continuations in web programming.

    The original paper from which the term subversion of control was eventually coined by the JBoss Seam developers:
    Christian Queinnec, Inverting back the inversion of control or, continuations versus page-centric programming, ACM SIGPLAN Notices, v.38 n.2, February 2003 [doi>10.1145/772970.772977]

    I’ve only edited the first few pages of your paper so far, but as you can see, I am trying to be agile here and give you feedback as I make corrections.

    @Unfortu-
    nately state machines have a predefined set of states, so they
    can not handle the complex dynamic states of mainstream
    interactive applications.

    You need to give me an example so that I know I should care. I.e., Prolog doesn’t allow all sorts of rules, only Horn clauses. Show me a mainstream interactive application that cannot be captured by FSM combined with plain old OO problem space abstraction.

    @Controllers are full of hard-wired connec-
    tions and subtle collaborations that defeat modularity, not to
    mention comprehension.

    You sound like me here. However, I am not an academic publishing a paper. I don’t need to motivate my hatred of MVC with examples. As a practitioner, all I have to do is find something better, and even then my employer doesn’t care why.

    However, to push back against myself, I do know why Controllers are bad. Typically I quote Balcer and Mellor’s Executable UML. On page 236, they make it very clear that MVC doesn’t jibe with Model-Driven Architecture: “The first rule of control partitioning is that you do not make controller objects. The second rule of control partitioning is that *you do not make controller objects*.” This is easily the best programming advice I’ve ever received in my career, by far. This book practically prints money for anybody who reads it and understands its teachings.

    H.S. Lahman is a Model-Driven-Architecture guru with an upcoming book. He’s in his 70s and knows more OO than anyone. You should ask him to fully explain to you why Controller objects are evil. Some reasons include poor OO problem space abstraction, lack of testability, disobedience to the Open-Closed Principle, and turning objects from peers into trees. The most obvious, emblematic code smell of an OO project gone wrong is Controller objects, including inheritance-based HSMs.

    You really have to be able to explain how Controllers hard-wire collaborations, because from experience teaching my coworkers and other real world practitioners, they have a hard time believing it. In fact, on Artima.com recently, Jim Coplien and Trygve Reenskaug proclaimed they invented a new architecture: Data-Context-Interaction. I _RIPPED THEM APART_ for how poor their article was. They claimed that they supported rich ways for a view to interact with a model, but every one of their examples used Controller objects to hardwire activity networks. They continuously tried to say that it was simple to change it to make it dynamic, but never did so. It’s been over a week now and their example hasn’t been updated. I guess removing hardwired sequences of collaborations isn’t as easy as they thought it might be! The funny thing is Trygve claimed his program was extremely readable, to the point where just by reading it you could tell there were no bugs. Well, if you hardwire a huge chunk of logic, and take away the dynamic interactions that make OO what it is supposed to be (polymorphism for dynamic substitution of behavior), then of course it is readable. It is a line-by-line recipe on how to bake a cake. The key steps are hardcoded. This is like going to Costco and just buying the damn cake, already baked and topped with icing.

    Alright, my corrections aren’t even on page 2 yet.

    @The precise order of interre-
    lated event firings is often undocumented, and so context-
    dependent that it can defy documentation.

    Why is context-dependency bad? From a Model-driven architecture Object-oriented Analysis perspective, I know the answer. But from experience I can say your referee will not know the answer.

    I.e., what is wrong with “context-oriented programming”?

    @You don’t know
    when you will be called back by your subscriptions, what
    callbacks have already been called, what callbacks will be
    subsequently called, and what callbacks will be triggered
    implicitly within your callback. Coordinating changes to
    communal state amidst this chaos can be baffling, and is far
    from modular. Callbacks are Hell.

    You have to explain WHY *not knowing* all these things leads to bad design. I.e., people champion separation of concerns, but programs equally suffer from a lack of *integration of concerns*.

    In other words, race conditions. With callbacks, you are typically required to litter your code with locks to make sure that state is not changed until the callback fires. A good example of this is discussed by Bill Wagner in More Effective C#, in the section on single-threaded apartments and WinForms and WPF programming, which talks about various design problems in WinForms and WPF. Allen Holub in Taming Java Threads talks about how the AWT framework that SWT and Swing are built upon is flawed and easily leads to deadlocks (he also provides a workaround, in addition to how he thinks it should’ve originally been designed).

    Callbacks suffer from another problem, which is that there is no current language that allows the programmatic interfaces to have constraints on the order of when things must occur. PMD Gray talks about this when he slams C++ for not being scalable as a language for codifying rules. Encapsulating rules inside methods buries those rules from visibility of other rules, thus making it impossible to order rule firings correctly to ensure ACID properties. Callbacks are just a specifically more sinister form of encapsulation, because the client programmer who passes in the callback has no control over timing, only modularity… which is actually no modularity at all.
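
    The coordination problem above can be made concrete with a small, deterministic Python sketch; the Model class and its fields are invented for illustration. An eager notifier fires callbacks mid-update, so observers see incoherent intermediate state, while batching notifications until the update completes behaves like the paper’s transactions.

```python
# Sketch (hypothetical names) of why callback timing matters: the eager model
# fires a notification after every field write, so observers can see a
# half-updated (incoherent) start/end pair; the batched model notifies only
# once the whole update is complete, as a transaction would.
class Model:
    def __init__(self, batched):
        self.start, self.end = 0, 0
        self.batched = batched
        self.observed = []            # what callbacks saw, in firing order

    def _notify(self):
        self.observed.append((self.start, self.end))

    def move(self, start, length):
        self.start = start
        if not self.batched:
            self._notify()            # fires while end is still stale
        self.end = start + length
        self._notify()

eager, batched = Model(batched=False), Model(batched=True)
eager.move(10, 5)
batched.move(10, 5)
assert eager.observed == [(10, 0), (10, 15)]   # first callback saw end == 0
assert batched.observed == [(10, 15)]          # only the coherent final state
```

    The eager variant is exactly the “you don’t know what callbacks have already been called” situation: the first observer acted on a state that never should have been visible.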

    I would also change “Callbacks are Hell” to “Some programmers colloquially refer to this as Callback Hell.”

    @We will present a sequence of interactions in a
    Read Eval Print Loop (REPL). Programmer input is prefixed
    with a >, printed values with a =, and program output with
    a < .

    This description is off-kilter. I think you mean processed/evaluated values with a =, and printed values with a <. How are printed values different from program output? I found this confusing.

    ---

    @The popular web framework JavaServer Faces [] queues up
    constraint checks as events to be executed in a separate
    phase following all model state changes. To handle the
    variety of such coordination issues, JSF defines ten
    separate phases.

    Part of the problem with JSF is the J2EE specification and container-managed persistence. It is not fair for you to compare your ideas to the complexity of JSF. In a wider context, we can look at what frameworks like JBoss Seam do to integrate with the J2EE stack. Also, in my book, the big performance atrocity with JSF is that it requires postback to the server. Sure, support for AJAX can be hacked in, but it is monolithic and not conventional OO. What I’m trying to get across here is that JSF is fundamentally a monolithic batch-mode system. Part of the reason is so that it can interop with bad technologies.

    ---

    @A Coherence system makes certain structures visible to
    certain external interfaces (the programmer’s REPL can see
    everything).

    So in other words there is no current clear working definition for access visibility, and you have friendship relationships that permit hardwired modules to see everything, including internal state? I don’t understand this from a modularity perspective, and I suspect version 2.0 will scrap this idea in favor of a more modular architecture for debugging/tracing.

    @Multiple input changes can be submitted in a batch, and the
    entire cascade of reactions is processed in an atomic
    transaction that commits them simultaneously or not at all.

    What state is the program in upon transaction failure? Do the input fields roll back to previous outputs? If so, do you allow a cut operator, or is the solution more like a classical OO MDA approach where instead of truncating history you simply divide the problem up into small transactions (state transitions), because in a proper simple Moore state machine there is no notion of previous state (and again history is seen as being poor problem domain abstraction)?

    ---

    Also, stick with “reaction is the opposite of derivation”. Derivation is coarse-grained, binding action is fine-grained. Reactions must be coarse-grained to succeed. My guess is you plan to stick with this, as in the start of section 2.3 you debunk a commenter’s suggestion nicely: “Reactions can make arbitrary changes that need not be the inverse of the derivation they are paired with.”

    @An extreme case of asymmetry is a derivation that does
    nothing at all, called an action, used only for the effects
    of its reaction.

    I think your example here has an implicit parameter that unnecessarily complicates your model, and I see no value in it. Print(”hello world!”) is really an example of a reaction mutating a System Standard Output object. This allows you to generalize logging/tracing into a modular language component managed by the VM. The VM intercepts these calls and stores them in a buffer until the transaction mutations complete and as each constraint check is approved. However, you now limit yourself to only updating the output stream once per transaction (but you harden your semantics).

    ---

    There is also a much better quote than Mark Wegman’s, by the way, and it better fits with the theme of the paper. Rodney Brooks designed the hugely influential Subsumption Architecture for behavior-based robotics. In his essay, “Planning is Just a Way of Avoiding Figuring out What to Do Next”, he explains why master plans are only one way of doing complex agent-based behaviors.

    In a later preface to this essay, he notes: “Somehow plans were so central a notion to classical Artificial Intelligence at the time that this criticism was always used as a trump to argue that the approach could go nowhere. In this paper I argued that the notion of a plan was, like cognition as discussed in the preface, at best something that human observers attributed to a system. Actually concretizing a plan in a robot was simply an exercise that gets in the way of the robot being effective. This is a continuation of the argument that the intelligent control of systems needed to be sliced up a different way from the conventional viewpoint. Plans need not exist explicitly. Rather they are a descriptive property of a system constructed in a fundamentally different way.” http://www.ece.osu.edu/~fasiha/Brooks_Planning.html

    Wha-pam. Rodney Brooks is the man, and another great source in my learning of how to build great systems.

  5. And now a perspective from an everyday programmer:

    1. This was a very enjoyable read. Your running example is something we all face (obviously your plan), and I appreciate that you didn’t take too many shortcuts. A group of friends and I are going to try to implement some of your Coherence VM ideas – it was that interesting to us.

    2. The @ operator needs a bit more explanation. Does it walk down trees looking for the first node with that “name”? Your error example is intriguing and I would like to know a bit more about this funny operator.

    [Should be clearer in the figures I have added. The derivation is physically stored “behind” the field. The @ operator jumps behind the field.]

    3. I know you have covered your virtual tree ideas in other papers, but please reference the “one paper to rule them all” so that we can get a refresher course on the differences between “variation” and “derivation” (these words are just too similar for my taste)

    [Best paper is Modular Generation and Customization]

    4. I can’t believe you are discussing a textual language! But at least your REPL has good prompt symbols. 🙂

    5. A concise syntax explanation would help a bit. I’m most concerned with these syntaxes and what they mean:

    foo = Calc(bar) # function call?
    foo = Calc{fast: True}(bar) # function call with keyword args?
    foo = Calc[ … ] # I have no clue.

    [passes sequence as first arg. Clarified in paper]

    And then if we replace the derivation with variation, my brain overflows. Not to dumb your paper down too much, but maybe a few analogies with current languages would clear mystery.

    6. “Buffer” seems to be an odd name for your little utility. I think you are using the word in the Electrical Engineering sense, but it really is painful to keep that sense in mind while reading a CS paper. Would you please just call it “Catch” to match Erlang’s function?

    [renamed to “latch”]

    Thanks for writing this paper! I have found it to be very eye-opening.

    [You are welcome. Thanks for the comments]


  6. Frank,

    Please target the Mono VM as your testbed. Miguel de Icaza has told me (or perhaps I misheard) he feels stuff like the DependencyObject/DependencyProperty system in .NET 3.0 should have been folded into the VM (and I agree).

  7. I’ve been thinking about the reactions that you have and how it might be really slow — O(n!) ways of ordering n terms, and this is without even considering chains of reactions. For error checking, you cannot terminate as soon as you find a “coherent” solution, in case another ordering of the reactions gives you another solution — a logical bug in the code. I’m not completely convinced that there isn’t a “correct” ordering for the terms all the time; if there is one correct ordering, I don’t see a case where another order would work without running into the previous problem. Ya know, the reaction dispatch seems complex enough to possibly be Turing complete (like how the macro preprocessor in C is).

    “If a cycle is detected, or any other error
    occurs, the entire input transaction is aborted”
    I think since reactions are the core of your language, you should explain the cycles (and how to differentiate from recursions that are okay) as well as the “other errors.” A theoretical analysis would be best (analyze as a finite state machine..?), but if not, it would be nice to see some evidence or concrete examples.

    One way of speeding things up is to remember the value of the attributes that you have seen so far. This normally wouldn’t be a good idea for a general purpose PL, but in your language you have static attributes and derivatives that have no side effects, suggesting that you can express attributes as functions. This can allow for automatic dynamic programming, and it should be pretty useful.
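
    As a rough illustration of that caching suggestion (field names invented, and assuming the derivations really are side-effect-free as the paper implies), a derived attribute can be modeled as a pure function of its inputs and memoized:

```python
from functools import lru_cache

# Sketch: if "end" derives purely from start and length, repeated reads during
# reaction scheduling can be served from a cache instead of recomputed.
calls = []

@lru_cache(maxsize=None)
def end(start, length):
    calls.append((start, length))     # record real evaluations (cache misses)
    return start + length

assert end(3, 4) == 7
assert end(3, 4) == 7                 # second read served from the cache
assert calls == [(3, 4)]              # the derivation actually ran only once
```

    This is exactly the “automatic dynamic programming” being suggested: purity makes the cached value safe to reuse.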

    I really like the way you use the virtual tree for buffering. It’s much better than error catching in typical PL’s. Anyways, hope to see the next revision of the paper!

  8. Lee,

    I’m pretty sure the cost can be amortized using multi-stage programming. Since the programmer develops his solution interactively in the REPL using a “live programming language”, it is in the worst case a topological sort on the whole tree. As Coherence is a reactive model for embedded systems, as opposed to transformational, once the data flow is decided for all bindings in the system there is no need at run time to have computational costs associated with deciding a total ordering of instructions.

    Moreover, the difficulty in solving for a data flow path through the system is proportional to the interdependencies in the system. However, once solved for, the bindings are inherently parallel, as in a hardware design language like Verilog or VHDL. The comparison to Verilog and VHDL should help you: they’re for high-speed integrated circuits. Also, Jonathan doesn’t (seem to) allow for RS-latch style circuits, given his rule that a transaction must change a field to a single value (and why would you want to invert an input within the transaction responsible for processing it?).
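
    The “decide the data flow once” argument can be sketched as a topological sort over a binding dependency graph. The field names below are invented, and Python’s stdlib graphlib stands in for whatever scheduler Coherence would actually use; the point is only that a coherent firing order is fixed ahead of run time, and a causal cycle surfaces as an error, matching the paper’s aborted transactions.

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical bindings: each field maps to the fields it depends on.
deps = {
    "end":    {"start", "length"},    # end derives from start and length
    "length": {"start"},              # length reacts to start
}
order = list(TopologicalSorter(deps).static_order())
# Every field fires after the fields it depends on.
assert order.index("start") < order.index("length") < order.index("end")

try:
    list(TopologicalSorter({"a": {"b"}, "b": {"a"}}).static_order())
except CycleError:
    pass                              # a causal loop aborts the transaction
else:
    raise AssertionError("cycle should have been detected")
```

    Once this order is computed it can be reused for every subsequent input transaction over the same bindings, which is where the amortization comes from.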

    As it is a live programming language, it can and should be extended to support multi-stage programming and gradual typing. Currently, Jonathan shows no examples of where trees can be used as prototypes. Using them in this way requires a globally unique identifier system, as names of trees and names of nodes in each tree would no longer be globally distinct. Moreover, since it is declarative, the compiler can transparently perform a Flyweight pattern for storing prototypes with largely the same default values. The default value does not need to be copied, we only need a memory location that points to a default value. This is how Windows Presentation Foundation (WPF) manages dependencies internally using DependencyObject and DependencyProperty. However, as I mentioned to Frank, this is something that Microsoft should’ve folded into the VM as a VM innovation. Jonathan has already discussed pairing trees with globally unique identifier systems in his other papers, including his recently rejected GPCE paper.

    Furthermore, and I’ve said this many times, we should be programming at dumb terminals but with rich graphics display capabilities – a dumb terminal for the 21st century. Our failure to do so thus far has hampered our exploratory efforts in the visual languages space. Our half of the REPL should not interpret the command language. We should be using a slave scheme suggested by Christopher Hanson where we submit the command to a supercomputer mainframe server running a live instance of the program submitted so far. Effectively, this brings the layman programmer much closer to the power of Emacs and Lisp than any programming language before, because it gives them the power of SLIME/SWANK but also frees them from what Lisp cannot: having to manually order all instructions in the system.

    People act out in horror at the suggestion that we need to re-think dumb terminals and terminal servers for the 21st century. However, computer science moves in cycles and tends to bend back upon itself like an Escher drawing. http://www.worldofescher.com/gallery/A13.html Note that most people are now carrying around “mobile dumb terminals” ironically called “smart phones”. The only smarts in the phone that matter are really its phone capabilities: accessing networks at will, and information within those networks at will. Most of the value in having a phone comes with its ability to interact with any server in the world from anywhere. As a consequence, people now prefer “web applications” that they can access from anywhere.

    There is another good example of terminal servers being the way to go. Game companies usually take several days if not weeks to build their entire game. Effectively, they are using a supercomputer to do so. With cloud computing services, some game companies are off-loading these supercomputer-friendly tasks to an elastic cloud they can rent. This is really just a SLIME/SWANK-style REPL waiting to be explicitly designed that way to amortize the cost of building the program. To date, only one video game company has been reported as having built video games this way: Naughty Dog Software. Although now merged into Sony and forced to standardize on C++, Naughty Dog’s earlier titles (the Crash Bandicoot and Jak & Daxter series) were written mostly in a custom Lisp called GOAL (Game-Oriented Assembly Lisp). GOAL allowed programmers to re-animate dead characters on the fly by hot-swapping the value of a Lisp symbol with a new value, using a SLIME/SWANK-style interactive programming environment.

  9. John,
    I didn’t understand everything you said. Jonathan mentioned:
    >In general it is not possible to determine a correct ordering of
    reactions in advance of runtime.

    which I said I don’t think I agree with. I’m not sure whether you’re agreeing with me or not … too many technical terms =(.
    Multi-stage programming — I thought this was a metaprogramming concept for generating optimized code at run time, where the programmer specifies when each part of the code is generated and optimized.
    I agree with visual languages (I liked the UI for Subtext), but I don’t quite see the connection with terminals.
    Hmmm, funny about terminal servers — I wrote a blog entry about how they’re the future of computing at lchou1.blogspot.com. I just found out OnLive is actually doing it for games, albeit with an infrastructure I don’t quite agree with.

  10. End of Sec 2: Put discussion of functions-as-instantiation in a separate paragraph. First reference I think of for this is BETA http://www.daimi.au.dk/~beta/

    I’ll second the request for a syntax summary. In particular, looking between example lines 51 and 58, I wondered why one used ‘:’ and the other ‘=’, and had to back up and find the definitions of those operators. It might also be worth spending a few words on this specific example; until this point the significance of the operators hadn’t sunk in, and I was using expressions versus values as a crutch to identify derivations.

  11. BETA’s innovation was the “inner” form of dispatch, where the superclass controls the subclass. The superclass therefore enforces invariants that it doesn’t trust subclasses to maintain. Personally, I think this is just bad protocol design. Traits also allow maintaining such call-super invariants, but I feel that if you’re doing this, you probably have an incorrect problem-domain abstraction and need to re-think your namespace relationships and call boundaries.

  12. Lee,

    In languages such as ML, any value of importance can be named, regardless of whether it is a transient or permanent value. These systems are built on the basic notion of expressions, which are modular building blocks of computation. Furthermore, because expressions have a recursive structure, large expressions can be decomposed into primitive expressions. Also, expressions can be typed to enforce basic mathematical laws such as distributivity and associativity, allowing you to dynamically restructure a problem and regroup subexpressions, usually to make the problem easier to solve. So long as the composite expression remains the same, we’re safe.
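    The “regroup subexpressions, composite stays the same” claim can be made concrete with a tiny expression tree. This is only an illustrative Python sketch (the `Add`, `evaluate`, and `reassoc` names are invented), showing that rotating a sum under associativity leaves the composite value unchanged:

    ```python
    # Sketch: regrouping subexpressions under associativity.
    # A minimal expression tree for sums; `reassoc` rotates
    # (a + b) + c into a + (b + c) without changing the value.

    from dataclasses import dataclass


    @dataclass
    class Add:
        left: object   # sub-expression or literal number
        right: object


    def evaluate(e):
        """Recursively evaluate an expression tree."""
        if isinstance(e, Add):
            return evaluate(e.left) + evaluate(e.right)
        return e  # a literal number


    def reassoc(e):
        """Rewrite (a + b) + c  ==>  a + (b + c) when it applies."""
        if isinstance(e, Add) and isinstance(e.left, Add):
            a, b, c = e.left.left, e.left.right, e.right
            return Add(a, Add(b, c))
        return e


    expr = Add(Add(1, 2), 3)        # (1 + 2) + 3
    before = evaluate(expr)
    after = evaluate(reassoc(expr)) # 1 + (2 + 3)
    ```

    Because addition is associative, `before` and `after` agree; a type system that tracks such laws can license this kind of restructuring automatically.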

    The “making problems easier to solve” aspect is what multi-stage programming is all about, not performance. It just so happens that “better performance” when looking for macro-optimizations is really asking the question, “How can I teach the computer to efficiently solve a class of problems with this domain of types?” So we’re not optimizing the code we generate at run time, we’re optimizing for problem type-solution algorithm selection.

    Yet, the most common approach to multi-stage programming discussed in academic research is transformation-based. There are no libraries or programming projects I know of with multi-stage reactive semantics. This is a huge barrier in computer science, because most people use things like loops and element-based if statements to do what is inherently reactive programming. However, their solution (non-modular, monolithic control flow based on “Controllers”, embodied by loops and if statements) is inherently transformational and offers no way to do coordination in a modular way, because of how the system is designed: the most important artifacts are discarded, only to be duplicated later when those facts are needed again. I call this “fact triplication”, because whenever you duplicate a fact, it usually already exists in another form somewhere. There are usually many forms of a single fact throughout these systems.
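    The “fact duplication” complaint above can be illustrated with a toy Python sketch. All names here are invented; the contrast is between recomputing a derived fact by hand at every use site versus declaring the derivation once and letting every consumer read the same source of truth:

    ```python
    # Sketch of "fact duplication" vs. a single declared derivation.

    items = [3, 4, 5]

    # Transformational style: the derived fact (the total) is
    # recomputed by hand at each use site -- three copies of one
    # fact, which can silently drift apart as the code evolves.
    total_for_display = sum(items)
    total_for_invoice = sum(items)
    total_for_audit = sum(items)


    # Reactive-ish style: declare the derivation once; every
    # consumer reads the same definition, recomputed on demand,
    # so a change to `items` is reflected everywhere at once.
    def total():
        return sum(items)


    items.append(6)
    fresh_total = total()  # all consumers of total() now agree
    ```

    In the first style, the three `total_for_*` copies are stale the moment `items` changes; in the second, there is exactly one fact and one derivation, which is the modularity the reactive approach is after.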

  13. Jonathan,

    On page 2 of your final paper, you have a footnote rant:

    @1 For example, when the mouse moves from one control to another, does the
    mouseLeave event fire on the first before the mouseEnter event fires on the
    second? Does your GUI framework document that this order is guaranteed?
    The order is seemingly random in one popular framework.

    Which toolkit? I hate when people put statements like this in a paper. Research is supposed to be open, not “I know something you don’t, but will only provide hints”. This isn’t dueling 16th-century mathematicians issuing challenge puzzles for each other to solve. State what toolkit it is.

Comments are closed.