Down the rabbit hole of types

Time for a progress report, now that I have some progress to report. I didn’t get much research done last semester because I was teaching a new class: 6.170 Software Studio. It was a noble experiment with mixed results, but that is another story. Back in March I presented the latest version of Subtext at the IFIP Working Group 2.16 on Programming Language Design. I realized then that Subtext should be statically typed. Ever since I have been falling down the rabbit hole of types.

Why static types? As I sheepishly admitted to my class, while my research is on dynamic languages I prefer to program in static languages. I like the way types give guaranteed correct documentation as well as implicit assertion checking. I especially like the IDE support made possible by static types, such as the rename refactoring. Teaching in Ruby on Rails and JavaScript last semester has only reaffirmed these opinions.

Perhaps the most compelling need for static types is in the UI and the DB, which ironically are considered extraneous by much of the PL community. It is commonplace to map a standard model object like a Customer into a corresponding screen form. Screen fields are in a sense statically typed: a field can accept a string, or maybe a number, or perhaps select from an enumeration, but you certainly can’t inject an arbitrary object pointer. Traditionally dynamic languages use templates to hard-code this type information into a UI language like HTML, spreading and coupling design information across different languages. A framework like Rails maintains its own internal type information from which it can generate screen forms and run an object-relational mapping. I conjecture that any sufficiently advanced dynamic language framework has its own ad hoc, informally specified, bug-ridden, slow implementation of a type system [with apologies to Greenspun]. DB schemas are really a form of static typing too, enforcing a rigid homogeneity needed for queries and indexing. Since UIs and DBs are naturally statically typed, it is natural that Subtext be also.

I have spent the last nine months trying to work out an acceptable type system. People give me funny looks when I say that. After all, type theory is well understood, and most modern languages have converged on largely similar types. What’s the problem? The problem is that these systems are far too complex. What percentage of Java programmers fully understand its type system? I’m guessing 1%. I shudder to think what it is for Scala. I am not blaming type theorists – they are just the messenger – telling us that our language semantics are too complex to make sense of at compile time. I have considered and discarded maybe a dozen type systems because they were all just too damn complicated. I need something dead simple that will be clear to a novice programmer.

It was only at the point of desperation that I could accept the solution: reject first-class functions. Specifically, function calls must statically resolve to a literal function. Subclasses are limited to static hierarchies (no inner classes). There is no truly dynamic dispatch: all function and method calls can in principle be inlined. This restriction bans dynamically created closures, often used in callbacks. But a major goal of Subtext is to avoid the need for callbacks in the first place. Fancy closure tricks are disallowed, but I want to discourage tricky programming anyway. Fortunately key functional idioms like mapping, folding, and comprehensions can still be provided by generics (aka templates).
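
To make that concrete in mainstream terms, here is a rough C++ analogy (not Subtext, and the names are invented for illustration): the mapped operation is a template parameter rather than a run-time function value, so every call to it resolves statically and can be inlined.

```cpp
#include <vector>

// The "operation" is a type with a call operator, not a run-time value.
struct Double {
    int operator()(int x) const { return x * 2; }
};

// A compile-time higher-order function: Op is fixed at each instantiation,
// so the call op(x) resolves statically and can be inlined.
template <typename Op>
std::vector<int> map(const std::vector<int>& xs, Op op) {
    std::vector<int> out;
    out.reserve(xs.size());
    for (int x : xs) out.push_back(op(x));
    return out;
}

int main() {
    std::vector<int> xs = {1, 2, 3};
    std::vector<int> ys = map(xs, Double{});  // ys == {2, 4, 6}
}
```

No closure is created and no call jumps through a pointer to code; the “higher-order” part is resolved entirely before the program runs.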

This design decision triggers a gratifying cascade of simplifications to the semantics and type system of the language. In fact the type system sublimes into what might better be called “static prototypes”. This feels right. It is also highly suggestive. My recent work has been about simplifying side-effects by restricting the ability to read and write through pointer variables. Now to simplify static types I am also restricting the ability to jump through pointers. The emerging moral is that pointers are the root of all semantic evil.

Eliminating first-class functions is not a fashionable idea to say the least. Functional programming is all the rage in modern languages, as well as modern programming culture, not to mention its long dominance of PL theory. I too was a true believer. But I must follow where simplicity leads. What a long strange trip it’s been.

30 Replies to “Down the rabbit hole of types”

  1. Hi Jonathan,

    I think it’s good that you’re experimenting with getting rid of first-class functions because I know that you’re not doing it out of ignorance but that you fully appreciate them and will thus now have to find some other mechanism to achieve their benefit. I look forward to seeing what you come up with!

    In general I think it’s healthy for designers to constrain themselves by removing an “essential” mechanism because it forces them to think deeply about what that mechanism really provided.

    The approach reminds me of one of David Barbour’s posts where he deliberately removes local state, another mechanism that most never question.
    http://awelonblue.wordpress.com/2012/10/21/local-state-is-poison/

  2. Hi Jonathan,

    Glad to see a new post on Alarming Development and to hear that you are making progress on Subtext.

    Out of interest, what consideration have you given to “optional typing”?

    I tried to wrap my head around adding type information to an enhanced spreadsheet application I worked on. I was looking at something between optional types and static types. I think the type system would need to interact in a stronger way than optional types envisage. Notionally I referred to this as schematic types.

    Do you think there is an interesting space between optional and static types?

    1. Hi Gary, long time. How is Sumwise going? I’m not sure what you mean by schematic types – do you have an example? There are situations where dynamic types are essential – like connecting to a DB and adapting to its schema, but I envision using a meta-level for that with static meta-types.

      Cheers, Jonathan

  3. Hi Jonathan,

    Great to hear you’re looking into types. My old project of Argot (which I’m afraid doesn’t have a web presence at this stage) was all about a binary type system for communication. I came to the conclusion that communication of information between systems requires fluidity of data between the language and the communication channels. Argot was built around being a reflection of the binary format; however, over time I came to realise that it might have been better to focus on the structure of the data first and the format second. Ideally the structure of the data would then have a reflection in the language type system. I proposed casting elements in the language that would allow a data format to be mutated into other formats and versions of the type. This goes against the Object Oriented paradigm, but sounds like it aligns a little more with what you’re doing. I’ll be interested to see what you come up with.

    Regards,
    David.

  4. First-class functions are also bad because they lead to control inversion, which I find even more intrinsically hard for users than complex types. It will be difficult to do functional origami, but then array programming languages got by just fine without first-class functions.

    I spent some time trying to create a new OO type system that supports global type inference (none of this damn hybrid typing or local type inference). But then I saw that it wasn’t really adding to my live programming story, so I put it on the back burner. I’m afraid that my love of static typing might be clouding my vision somewhat; if you have code continuously executing, isn’t there something you can do with that to make dynamic typing feel more static?

    1. The problem with live code execution is that it only explores one branch of a conditional at a time, or one case of a pattern match, or one route through a polymorphic dispatch. Static types are like a quantum superposition of all possible execution paths. So static types are the truest form of live code! This is one of the fundamental problems with live code I would mention in the position paper we discussed writing.

      1. Only if these static types contain enough information. Agda-style dependent static types really cater to this definition, whereas Java-style types degenerate into information-free “Object” types. Haskell is a nice middle ground.

        As an aside, and in agreement with many others, your notion of “simplicity” seems very weird if it includes subtyping but excludes ordinary functions as values.

        What notion of simplicity are you using? Surely it’s not Kolmogorov simplicity, because then you’d just end up with the lambda calculus or System F…

        1. I don’t have subtypes or function types. The only form of subsumption is via template instantiation, which amounts to “static duck types”. You can partially simulate subclassing that way, but it was misleading to imply I am supporting subclasses. Sorry. It is probably fruitless to discuss type theory details until I write things up precisely.

          By “simple” I mean easy to learn and use for a novice programmer. That is my primary goal.
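
          Roughly, though, the “static duck types” I get from template instantiation are analogous to this C++ idiom (a sketch only; the types here are made up for illustration):

          ```cpp
          #include <iostream>
          #include <string>

          struct Customer { std::string name() const { return "Alice"; } };
          struct Robot    { std::string name() const { return "R2"; } };

          // greet() accepts any type that happens to have a name() member.
          // The check is purely structural and happens at instantiation time;
          // there is no declared subtype relation between Customer and Robot.
          template <typename T>
          void greet(const T& t) {
              std::cout << "hello " << t.name() << "\n";
          }

          int main() {
              greet(Customer{});  // OK: has name()
              greet(Robot{});     // OK: different type, same shape
              // greet(42);       // rejected at compile time: int has no name()
          }
          ```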

      2. “Static types are like a quantum superposition of all possible execution paths. So static types are the truest form of live code!”

        Wow, what a great way to put it.

      3. Live coding depends on the ability to change code at one locus to effect behavioral change elsewhere. I wonder how you would control program behavior if you’re already executing all paths simultaneously.

        If you want to explore multiple execution paths simultaneously in an interesting way, I suggest pursuing probabilistic programming models.

      4. Static types aim for omniscience and either achieve very little or cascade into exponential-time problems that probably will never be solved efficiently. I’m totally a static types guy and deeply appreciate the feedback (early error detection, code completion) it provides, but I pragmatically think that we will always need to “debug” our code by running it on sample inputs. There must be some unification potential there, and maybe I just need to shut off some of my biases to see it.

        1. Totally agree Sean. My types are also prototypes, so you always get to see at least one prototypical execution per def/call/instantiation site. You are right that types fail at omniscience. What I am observing is the obvious fact that when you decrease the expressiveness of the language its types become correspondingly more knowledgeable. My design philosophy these days is semantic minimalism.

          1. If you are aiming for semantic minimalism, don’t forget your wabi sabi where you allow for and tolerate a lot of imperfection. Static types are often a trap into the admirable but ultimately fruitless pursuit of perfection and elegance. The idea that one can “know” everything statically…

  5. You will ultimately want to support dynamic behaviors. If you don’t do it explicitly, people will just build ad-hoc, informally specified, bug-ridden, slow implementations of half of Common Lisp (to use Greenspun in his original context). They’ll use that to pass their poorly typed representations of ‘functions’, working around Subtext rather than with it. Do you provide something else to fulfill the roles of first-class functions?

    In my work with reactive models, I’ve found that unstructured partial application is often problematic – i.e. for ensuring safe consumption of streams, avoiding space leaks, reasoning about information flow. However, by enforcing a little structure (consistent with a multi-stage programming model), I can safely use dynamic behaviors in a reactive model. The structural constraints are simple:

    1) A pure function represented in a dataflow can be applied to a static constant or to a value in the same dataflow. (It is also possible to zip dataflows together, if you can make them reach the same site.)

    2) An impure behavior represented in a stream can be applied to process other streams that are already reaching a common input site, so long as the outputs may be directed to a common destination. The site of application is modeled as a generic primitive behavior.

    The static types are quite straightforward. But perhaps I’m not seeing why you had a problem with the static types. I would appreciate more details about why you felt abandoning first-class functions was necessary.

    1. My escape hatch is to support meta-programming. But I haven’t worked that out yet, and it depends on a good solution to the old problem of schema change. What do you think are the essential needs for dynamic types?

        1. By dynamic behavior, I mean expression of program behavior that is constructed at runtime (e.g. based on an external configuration). Effective support for dynamic behavior requires that libraries can aid developers in construction, which in turn requires that dynamic behaviors be subject to communication.

          A first-class function is a dynamic behavior, but there are dynamic behaviors that are not first-class functions. Examples include scripts, objects, and the sort of structurally constrained staged metaprogramming I described earlier.

        Regarding dynamic types: those are useful, too. If you favor static types, I recommend you implement some features akin to Haskell’s Generics, Data.Dynamic, and Data.Typeable.

  6. The computing world uses more diverse languages than ever, and languages described as user friendly really aren’t simple, as you say. But people usually don’t care, since for them “simple” means “familiar”.

    Keeping simplicity as a first goal is noble and pertinent for the long term and I hope this decision for a simpler type system will be well accepted in the context of your project.

    The semantics of C are very simple, and their order of complexity is not very different from that of the interface at the machine language level. I do not expect a modern alternative to be much different in this respect.

    At first I saw Subtext as a rather high-level experiment in language design. With this simpler type system, I now see it can become pretty low level. Maybe the high level / low level distinction is not very pertinent any more.

    An additional bonus of a simpler, lower-level language is that it may be easier to market, as people will not immediately perceive it as a competitor to their current environments.

  7. Really looking forward to seeing the progress of Subtext. Doesn’t doing away with first-class functions throw away a lot of the concise and expressive power of functional programming?

    1. Yes and no. I can still support key use cases like maps, folds, and comprehensions that are specified statically. But my design target is application programming, where I believe “concise and expressive” tend to be bad things.

  8. You are rejecting closures, first-class functions, and any type of dynamic jumping. It would now seem impossible to write a generic sort algorithm, since you cannot specify how comparison should be done — the ordering would be limited strictly to the natural ordering of the elements.

    Lacking function pointers, and I assume virtual functions/interfaces as well, it would also be impossible to implement any abstraction mechanism. If a library, say FreeType, wished to load a font, it could no longer take a generic “stream” of data. You’d have to explicitly pass a filename or an array of bytes. This would make libraries extremely inflexible in how they could load data. (The same would apply to writing data.)

    I also don’t see how you could provide functions like mapping and folding. Like sort, these require an “operation” parameter. This implies some kind of first-class function if you are able to pass it to map/fold.

    1. Higher-order functions aren’t technically necessary to support maps, folds, filters, especially if they are built into the language (as they are in many array or DB languages). C++ supports generic programming without run-time functions via a pre-processing step. There was still life before HOFs.

      1. I don’t see how a Comparator is logically distinguishable from a higher-order function. It is a variable passed to a C++ sort function which defines the ordering of the sort. I don’t think whether the mapping is performed at compile time versus run time makes a strong logical difference. It is merely an optimization by the compiler — it could very well do it at runtime instead.

        I think that, by the definition of a higher-order function, anything which takes a Comparator or Operation parameter *is* a higher-order function.

        1. Sean is right. Higher-order functions at compile-time are called templates or generics. C++ has been doing it for ages, largely because they want to exploit all the optimization possibilities at compile time. I am doing it instead to simplify the semantics. True higher-order functions require first-class functions, which means that functions are treated like normal values in the language, and can be passed around and stored into data structures. Callbacks are a common example of the use of first-class functions. Hope that is clear.
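
            For instance, here is the standard C++ way to customize a sort without any run-time function value (a sketch; ByLength is invented for illustration). The comparator is a type, so std::sort is “higher-order” only at compile time, and every comparison call resolves statically:

            ```cpp
            #include <algorithm>
            #include <string>
            #include <vector>

            // The "comparison function" is a type with a call operator.
            struct ByLength {
                bool operator()(const std::string& a, const std::string& b) const {
                    return a.size() < b.size();
                }
            };

            int main() {
                std::vector<std::string> words = {"pear", "fig", "banana"};
                // ByLength is deduced as std::sort's comparator type; the calls
                // to its operator() resolve statically and can be inlined.
                std::sort(words.begin(), words.end(), ByLength{});
                // words is now {"fig", "pear", "banana"}
            }
            ```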

            1. It’s just a terminology thing, I believe. You’re not abandoning higher-order functions, just any functionality which requires a non-static function call.

              In practice I think you can implement fold and map this way, though I’m curious as to how you’ll be able to implement the libraries I mentioned. That is, ones that wish to have a generic stream for input data.

  9. My experience with static types is the same: dynamic types are cool for some stuff, but not for everything.

    SQL queries are another form of dynamic types: each time a query is made, each field structure is like a new object/class with new properties, a.k.a. a new type.

    I disagree with your view of pointers. Pointers cause problems, but are a necessary evil. Personally, I dislike Java, C#, and related PLs’ “object references”: it’s using pointers while pretending not to be pointers.

    Cheers.

Comments are closed.