Why we shouldn’t number

The discussion on my previous post (Why numbering should start at one) suggests a new approach. Maybe these blogging tubes are good for something after all.

We access the elements of a sequence (AKA array, string) using a new immutable datatype called Ordinal. We can ask a non-empty sequence what the ordinals of its first and last elements are. The first ordinals of all sequences are equal, so we can define the literal constant first.

We can add an integer offset to an ordinal to get a new ordinal. We can subtract two ordinals to get the integer distance between them. But we cannot add two ordinals together. Most importantly, we cannot convert ordinals into integers or vice versa.

Thus the question of whether to number starting at 0 or 1 is abstracted away. I conjecture that many fencepost errors are also eliminated by making a type distinction between ordinals and lengths, and preventing their direct interconversion. Ordinals can be seen as a special kind of collection iterator with a distance metric and a designated instance first.

Using integers as ordinals is rep exposure! Abstract data types rule.
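
A minimal sketch of this interface, in Haskell purely for illustration (the names offset and distance are invented here, not part of any implementation):

    module Ordinal (Ordinal, first, offset, distance) where

    -- The constructor is not exported, so clients can neither extract
    -- nor supply the underlying integer: no interconversion either way.
    newtype Ordinal = Ordinal Int deriving (Eq, Ord)

    -- The designated first position. Whether it is represented
    -- by 0 or 1 is invisible outside this module.
    first :: Ordinal
    first = Ordinal 0

    -- ordinal + integer offset = ordinal
    offset :: Ordinal -> Int -> Ordinal
    offset (Ordinal i) n = Ordinal (i + n)

    -- ordinal - ordinal = integer distance
    distance :: Ordinal -> Ordinal -> Int
    distance (Ordinal i) (Ordinal j) = i - j

Since no exported function produces an ordinal from two ordinals, they cannot be added, and the hidden base could be anything.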


Based on the conversation below I am retracting this proposal. It doesn’t really solve the problem of indexing conventions, just pushes it up a layer. Not worth the complexity. I am left still believing that the choice of index base is purely conventional, and that the widespread belief that 0-based indexing is superior is just a myth we inherited from assembly language, one which now serves mostly as a tribal marker. I would be happy to see some example that shows a real difference.

33 Replies to “Why we shouldn’t number”

  1. <cynicism kind="mild">
    I bet it will take under 5 of these blog tube iterations to reinvent the pure iterator that has permeated virtually all programming languages of the last 15 years.
    </cynicism>

    On the plus side, this means you’re on solid ground.

    1. They’re only a pain because they are (very) leaky abstractions (e.g. most STL implementations mix ‘abstract iterator’ concepts with plain old pointers).

      The idea is sound, and as such can be found in all languages with a decent foreach type of construct. Mix in list comprehensions (Boo, Vala, Nemerle, Python, C#, etc.) and that isn’t even as limited as it sounds at first thought.

      In other words: if you don’t expose any of the underlying machinery (unlike C++) and judiciously add the right amount of syntax to support it, this is far better than explicit indexing.

      Let’s wait for the performance bigots to come and shout that there will always be a need to explicitly manage storage layout (i.e. an ‘array list’ type collection that allows insertion at a particular index).

      $0.02

      1. The array list is kind of an interesting concept, I think. It’s a perfect example of how current languages fail at separating concerns. The logical properties of the object are mixed with the technical concerns of how to represent it in the machine.

        If there is a logical reason to require a mapping from a finite set of contiguous integers to some objects, by all means provide a way to express this in the language. How this requirement will be encoded and treated by a machine should be totally abstracted away, though, and preferably expressed in a different artifact altogether.

        From my experience there is so much encoding of the mechanics of the solution in a system that the actual domain knowledge of the problem being solved, and how, can barely be inferred between the lines, if at all. This is IMHO the root of the complexity issues we are suffering.

        I think the best way to get out of this mess is to rethink the whole one-layer encode/compile approach to programming that seems to dominate development.

        In the large, problems seem to be addressed by implementing interpreters for highly specific and limited languages; browsers and HTML are an example. Yet in smaller systems we usually encode the problem directly in a language with Turing-complete semantics. I suspect the reason for this is that we benefit from the referential integrity enforcement of the host language. So the big problem seems to be how to enforce referential integrity between programs and their interpreters in a system where both are subject to change.

        The approach taken by VPRI in their Chains of Meaning [1] is an interesting take on this.

        [1] http://www.vpri.org/pdf/m2009011_chns_mng.pdf

        1. I agree, John. I like the idea of stacks of embedded DSLs. I have some ideas about how to do that, and how to do reflection and metaprogramming, but I need to get a whole bunch of other stuff resolved first.

  2. So vec[5] becomes vec[first+5]? (Code like vec[5] appears to be quite common in math-intensive contexts, the ones where zero-based indexing goes against the common notation of the field most clearly.)

    1. How about vec[sixth]? But if the 5 is in a variable n, you would need the addition. And to get 1-based indexing there should be an Ordinal origin=first-1, so you can say vec[origin+n]. So maybe all I’ve done is to recreate the indexing convention problem with more syntax. Need to think about that.
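
      In terms of the hypothetical Ordinal sketch above, that might read as follows (at is a hypothetical sequence accessor):

        -- a 1-based origin, one position before first
        origin :: Ordinal
        origin = offset first (-1)

        -- x = vec `at` offset origin n   -- 1-based access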

      1. Yeah, I think you are primarily just substituting one syntax for another here. If “zeroth” were in more common use, you’d more clearly see the same problem with “first” as you do with 1 vs. 0.

        Both systems represent the exact same semantics, so I don’t think it is really an issue of representation exposure anymore (0-based is already abstracted beyond just memory arrays).

        IMHO, the crux of this issue is consistent user expectations, so the worst thing you could do is pick one now and then switch to another one later.

    1. You are right, Roly. I was just trying to point out the intentional difference. Perhaps it is like a unit system, where people classify numbers as pounds or kilograms to help keep things straight.

      1. My question was going more for: Can we list all cases that require indices to express, and find abstractions that deal with these cases more elegantly?

          1. Exactly.
            I think Mathematica effectively already has this in the form of pattern matching. For instance, an (inefficient) sort without indices may be implemented like this:

            sortswap[{x1___, a_, x2___, b_, x3___}] := {x1, b, x2, a, x3} /; b < a
            sortswap[list_List] := list;
            strangesort[list_List] := FixedPoint[sortswap, list]

            The first line defines the sortswap function to find two elements anywhere in the list where the second is smaller than the first. The second line applies to sorted lists and returns them unchanged; the third line defines the sort as the fixed point of repeated application of the sortswap function.

            I am not entirely sure whether this kind of pattern matching and replacement covers all instances of index usage, and if it is more elegant in every case. But I do like it 🙂

  3. What about infinite sequences, or streams where the end is not yet known? Handling both cases in the same style would be a big conceptual win in my book.

    And what about slices of indices? E.g. only evens, only odds, basically anything the math-intensive languages can handle.

    1. Hi Mac – I have standard lazy functional collection constructors, and the integers themselves are a collection, so infinite sequences are supported. Trying to access their end will not terminate.
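
      For flavor, this is roughly how infinite sequences and even/odd slices already look in a lazy language like Haskell (an existing-language analogy, not my own syntax):

        naturals :: [Integer]
        naturals = [0..]              -- infinite, lazily produced

        evens, odds :: [Integer]
        evens = [0,2..]               -- a stride-2 slice
        odds  = filter odd naturals   -- a filtered view

        firstTen :: [Integer]
        firstTen = take 10 evens      -- demanding a finite prefix terminates
        -- length evens               -- asking for the end would not terminate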

  4. Why are you bothered where indexing starts anyway? It doesn’t matter. How often, in real code, do you use indexes? We picked 0 as the start index for sequences many years ago, and everyone agreed on it. Occasionally someone protests, but as developers we are still massively in agreement that convention matters most, so there are no surprises. This is one of the reasons people were so pissed off with VB’s stupid design – which still causes countless bugs today.

    So EOD. Leave it. It doesn’t matter.

    1. You are right, Rik. I don’t actually care all that much, and I will go with whatever the users of my language seem to prefer. The point of this discussion was more to air out my hypothesis that this really is an arbitrary choice, and that the programming folklore about how 0-based indexing is superior turns out to be a myth.

  5. Another moment of nostalgia for Modula-3. One of the further advantages of defining specific ordinal types is that it avoids applying values in the wrong place: the compiler will stop you from indexing a month with the days of a week.
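
    In Haskell terms the same protection might look like this (invented example types; Modula-3 would use its ordinal types directly):

      data Weekday = Mon | Tue | Wed | Thu | Fri | Sat | Sun
        deriving (Eq, Ord, Enum, Bounded, Show)

      data Month = Jan | Feb | Mar | Apr | May | Jun
                 | Jul | Aug | Sep | Oct | Nov | Dec
        deriving (Eq, Ord, Enum, Bounded, Show)

      -- A table indexed by Weekday only:
      isOpen :: Weekday -> Bool
      isOpen d = d /= Sun

      -- isOpen Jan   -- rejected at compile time: a Month is not a Weekday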

  6. As you’ve already stated, this doesn’t solve the problem. The question I would ask is, why do arrays fail when I try to access a non-existent element?
    Why can’t we use a Dictionary for array functionality? This way we can allow access to non-existent elements by number. We can even allow the developer to determine the start and datatype of their index.

    Also I think developers should move away from arrays themselves as much as possible. In C# for example, you can use IEnumerable to iterate through any type of list you may encounter. Also there are better implementations for storing elements in a list type, like List, HashTable and Collection.

    1. Absolutely, I am only talking about the logical interface, not the physical implementation. Abstractly, there are only two kinds of collections: maps and sequences. Everything else is premature optimization.

    2. “Why can’t we use a Dictionary for array functionality?”

      Actually, some scripting languages do exactly this (e.g. Lua, to my knowledge).
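
      A small illustration of the dictionary-as-array idea, with Haskell’s Data.Map standing in for a Lua-style table (the 1-based start is simply the caller’s choice):

        import qualified Data.Map as Map

        -- The "array" is just a map from integers to elements.
        vec :: Map.Map Int String
        vec = Map.fromList (zip [1..] ["a", "b", "c"])

        -- Access to a non-existent element yields Nothing rather than failing.
        lookupAt :: Int -> Maybe String
        lookupAt i = Map.lookup i vec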

  7. There is absolutely nothing wrong with the factory approach I described in your previous blog post, and it requires no language complexity whatsoever. The trade-off is that it then becomes an idiom that beginners must learn, but this is how life works.

  8. For me, the most annoying thing relating to representation exposure is floating point numbers. Even their very name exposes their representation! The idea of floating point arithmetic is essentially a hack done in the name of speed. And they are not just incorrect (I could live with that), they are also annoying and confusing to use. If you design your language from the ground up, can you please get rid of the nonsense of floating point numbers?

    1. Well said Adam. I plan on having only infinite-precision rationals. Anything else is premature optimization. Transcendental functions (e.g. sin) will have to take a required precision, or more ideally have the required precision be specified in the type of their result. Such precision types would also permit compiler optimizations into machine integers and floating point. I know there has been tons of research on that, but I haven’t studied it because numeric computation is pretty low priority for me. I think the last time I used a trig function was in 1972 for a Space War.
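
      For comparison, exact rational arithmetic as it already exists in Haskell’s Data.Ratio (an analogy only, since my language is not implemented):

        import Data.Ratio ((%))

        -- Floating point famously gets this wrong (0.1 + 0.2 /= 0.3);
        -- infinite-precision rationals do not:
        check :: Bool
        check = (1 % 10) + (2 % 10) == (3 % 10)   -- True, exactly

        third :: Rational
        third = 1 % 3   -- kept as numerator/denominator, never rounded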

  9. Jonathan, I think you’ve re-discovered yet *another* part of Haskell here.

    If you look at Haskell’s array interface, you will find that it uses neither 0-based nor 1-based indexing. Instead, Haskell defines a type-class “Ix” (short for Index) that can be used to index arrays.

    When you create an array, you specify the range (first, last). These don’t even have to be integers, they can be tuples or any other instance of the Ix type.

    Under the hood (and in the implementation of the Ix class) you may find specifics related to 0-based indexing, but as an array user, you just get to choose your (first, last) indices.
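
    For instance, with the standard Data.Array (the bounds here are arbitrary):

      import Data.Array

      -- A 1-based array: the programmer supplies the (first, last) bounds.
      v :: Array Int Char
      v = listArray (1, 3) "abc"   -- v ! 1 == 'a'

      -- Indices need not be integers; any Ix instance works,
      -- e.g. pairs for a 2-D grid:
      grid :: Array (Int, Int) Int
      grid = listArray ((0, 0), (1, 1)) [1, 2, 3, 4]   -- grid ! (1, 0) == 3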

    I really think you should bite the bullet and study Haskell instead of slowly rediscovering it, bit by bit.

    1. Haskell is certainly a great source of carefully thought out ideas to learn from. Indexing is not one of them though. The (!!) operator does 0-based indexing into lists. Nothing to see here, move along.

        1. Programmer-specified array bounds. An old idea, as prior commenters have pointed out. Seems to me like extra work for the programmer with little benefit. Better to just adopt a convention and skip the boilerplate.

          1. If you adopt a convention, it better be the one Dijkstra advocates 🙂

            I think your original point was that we don’t need to expose a convention (in many cases we don’t), but Dijkstra’s point is that when we do, the (almost always) simpler convention is the 0-based one.

        2. Actually my point is that Dijkstra was wrong. Once we have abstract interfaces that properly encapsulate the difference between positions and lengths the convention becomes arbitrary.
