For some time now people have been talking about web browsers as an application platform that supplants the PC. So-called “Rich Internet Applications” use JavaScript or Flash or Silverlight to provide an attractive user interface (unlike HTML) close to that of traditional PC apps. JavaScript has recently been getting more attention because of some new VMs that dramatically increase its performance (e.g., TraceMonkey, SquirrelFish, and V8). Dan Ingalls gave a talk at CUSEC about the Lively Kernel, which is a Smalltalk-style environment for JavaScript, running completely inside the browser. He was initially concerned that JavaScript is just a toy language, but in the end concluded that it is “good enough”. That prompted me to take a closer look.
Prototypes – cool. First-class functions – great. Reflection – good. Collections – uhh, where are the collections? Like sets, lists, maps, and so on. JavaScript has exactly one form of structure, the object, which is a hashed map from strings to objects. These strings are typically field names. Arrays are simulated by converting the indices to decimal strings and hashing on them. You can build your own linked lists, but you can’t iterate over them with for. Here is the killer: if you use an object as a hash key, it does a toString on it. And the default implementation of toString returns "[object Object]", so all objects are equal as hash keys. There is no equivalent to Java hashcodes based on the memory address of the object. It is impossible to build a hash table in JavaScript that works on arbitrary objects. You would have to manually allocate unique IDs for every object and include them in the toString. So no collections in JavaScript.
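To make the problem concrete, here is a minimal sketch; the __uid property and idOf helper are names invented for illustration, not anything built in:

```javascript
// Two distinct objects collide as hash keys, because each is converted
// to the string "[object Object]" before being used as a property name.
var a = {};
var b = {};
var table = {};
table[a] = "first";
table[b] = "second";
console.log(table[a]);        // "second" -- the entry for a was clobbered

// Workaround: hand every object a unique ID and key on that instead.
var nextId = 1;
function idOf(obj) {
  if (!obj.__uid) {
    obj.__uid = nextId++;     // intrusive: mutates the object itself
  }
  return obj.__uid;
}
var table2 = {};
table2[idOf(a)] = "first";
table2[idOf(b)] = "second";
console.log(table2[idOf(a)]); // "first"
```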
Adobe provides a true built-in hashtable in ActionScript 3. There are also a few collection classes hiding in Flex’s database access libraries. There are other goodies for real programming like optional typing, classes, interfaces, modules, and namespaces. But ActionScript is in an awkward position now because it is a snapshot of JavaScript 2, which has been abandoned. JavaScript 2 was shaping up to be a promising and innovative language, but it broke compatibility with JavaScript 1. The forces of ignorance killed it off with the battle cry of “it breaks the web”. A great tragedy. The web development community consciously chose to stick with a pathetically crippled technology to avoid having to change.
JavaScript is good enough. It is good enough in exactly the way that MS-DOS was good enough. Like MS-DOS, it was thrown together as a quick and dirty hack, is deeply flawed, and cannot be fixed because of compatibility constraints. It is good enough that you can deploy hordes of programmers to produce crappy software that sort of works. And that is good enough for making money.
[Postscript – I see from the comments on Hacker News that I didn’t make it sufficiently clear that I am judging JavaScript’s suitability for building substantial programs, not just adding frills to webpages. JavaScript is fine for the latter.]
On the bright side, it can be treated as an intermediate language, a bit like bytecode: programs written in another language can be compiled into JavaScript. To expand on that idea, JavaScript can itself be an interpreter. With the new engines it might just be fast enough.
I was initially puzzled by your comment about collections. I’ve just not felt any lack, and was not sure why at first.
There are two main use-cases for Javascript. One is scripting in the web browser to manipulate the DOM. The other is scripting on the server, most likely within the JVM.
[Right. I am talking about the idea of using JavaScript for building complex client-side apps, much like the original vision of Java applets. That is what this whole RIA thing is about. – Jonathan]
In the web browser you should not be running a lot of lines of code, or doing a lot of iterations, or munging through large data. Objects can function as sets and maps. Arrays can function as lists and iterators (generate an array when you need an iterator). More than good enough in this context.
BTW, Arrays are like – but not exactly – objects.
The use-case for an object as a hash key is to map to another object/value. I think the reason I’ve not felt a lack here is that all objects in Javascript are mutable. Want a mapping from object “o1” to object “o2”? Use “o1.mappingTo = o2” (or the like).
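A minimal sketch of what I mean, with made-up property names:

```javascript
// An object doubling as a set of strings:
var seen = {};
seen["alice"] = true;
seen["bob"] = true;
if (seen["alice"]) {          // membership test is just a property lookup
  console.log("alice is in the set");
}

// A mapping from object o1 to object o2, stored on o1 itself:
var o1 = { name: "source" };
var o2 = { name: "target" };
o1.mappingTo = o2;            // no separate hash table needed
console.log(o1.mappingTo === o2); // true
```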
On the server you can have more lines of code, can be less modest about iterations, and might have very large data. The near-transparent mapping from Javascript to Java means you can use Java classes as needed. If the Java collection classes buy you essential performance on large data – no problem – use the Java classes. For scripting on the server – more than good enough.
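For instance, under Rhino (assuming that is the JVM engine in question), the Java collection classes are reachable directly from script; a rough sketch:

```javascript
// Rhino exposes Java classes to server-side JavaScript directly.
var map = new java.util.HashMap();
map.put("one", 1);
map.put("two", 2);
print(map.get("two"));        // 2 (print is the Rhino shell built-in)

var list = new java.util.ArrayList();
list.add("alpha");
list.add("beta");
for (var i = 0; i < list.size(); i++) {
  print(list.get(i));         // alpha, beta
}
```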
I’ve used Javascript to write an Ant task:
http://bannister.us/weblog/2008/06/11/propertyimport-task-for-ant/
And have a series of examples for problems I needed to solve:
http://bannister.us/weblog/examples/
Yeah, I agree with the last commenter.
In practice this lack of maps doesn’t seem to have hurt me yet. Can you give an example of where this has been problematic?
I thought Javascript 2 was abandoned because it was basically trying to turn Javascript (a dynamically typed prototype-based OO language) into Java (static typing, class-based OO). In doing so, it duplicated a lot of existing functionality: classes do what prototypes already do but differently, for example. It was turning into a classic design-by-committee effort that was, thankfully, nipped in the bud.
[Nat – On the contrary, what I heard is that they were breaking new ground in responsible language design. They brought in language theorists to assess the soundness of the design. They were specifying the language not with English prose but an actual interpreter written in ML. Unprecedented, really. As far as I can tell, it was shot down out of fear of change, and possibly ulterior motives to sabotage Adobe. – Jonathan]
Nat,
Not true.
It was not design-by-committee. It was design-by-two-committees. One group lost, the other more or less won. Language design in corporate IT is just like American politics: it is a game played between the 40 yard lines.
Jonathan,
So can we agree that Silverlight 2+ and .NET is a better approximation of what the Web should be like?
The Microsoft people more or less “get it”. And so does Miguel de Icaza of Novell after his colossal failure with Bonobo recreating CORBA in Gnome.
Erik Meijer has said that his favorite language is actually VB.NET 9.0. I understand why. It is actually the best enterprise-grade language, but nobody knows about it. The reason nobody programs in VB.NET 9 is b/c of all the “line noise” of underscores – the trailing _ line-continuation characters – etc.
VB also often compiles down to better MSIL than C#. It is also the only .NET language with first-class edit-and-continue Lisp-like debugging capabilities. Funnily, edit-and-continue was supposed to come into C# for a long time, but Microsoft felt that feature wasn’t useful for Enterprise programmers.
The language I like the most is F#, but I absolutely hate how they chose to keep OCaml naming conventions. It is just another “line noise” unnecessary distraction. Also, the F# library blows.
Preston,
Arrays have no semantics. They are not first-class collections. Do not use them in any public API, regardless of the language you use. Wrap arrays with a public type that exposes semantics.
Furthermore, a good collection library should support a meta-object protocol with features like rejecting changes. This allows collections to be passed around as references, and allows client classes to view the internal state of objects, without breaking the object’s encapsulation. For instance, Peter Sestoft’s C5 collections library for .NET is an order of magnitude easier to program with than the .NET Generics-based Collections provided by the .NET BCL. With C5, I can easily state the semantics of how client classes can capture references to internal state, and how they are allowed to walk the internal state of my object. This allows generally good OO design heuristics, like the Law of Demeter (which most programmers completely misinterpret upon hearing it).
Google and GWT have the right idea: Javascript is a scourge that should be covered up until it’s finally usurped at a later date. If Sun had moved quicker (and in the right direction) we would have real Java objects on the client side, instead of an Internet bloated with tangled heaps of an alpha-test language that we call Javascript. I think it’s time that the development community at large realises that language lock-in is the wrong answer and takes a page from Sun and MS. Go with the VM, allowing for multiple languages to exist that compile into the same byte code. Then everyone can rejoice and, most importantly, STFU 🙂
You’re right in the sense that JS is “good enough” for most basic uses, but almost useless for writing bigger software. That’s the reason there have recently been a lot of higher-level languages that generate JS code, such as Java (GWT) or haXe (http://haxe.org).
@Preston’s “elegant distributed applications” trackback
In my experience, communicating intent is nothing to be “guilty” of. You don’t have to withhold my name “to protect the innocent”, because I know what I’m talking about.
Chances are, you should either (a) ask questions to make sure you understand, instead of writing a long blog post, or (b) wait to hear my point repeated by someone else, stated differently. I find human beings often need to hear a good idea several times to understand it. I’ll say it again: arrays have no semantics.
In JavaScript, arrays are much worse because of push/pop methods that are supposed to “help” but only obfuscate intent. It makes more sense to wrap the array in a simple type whose guts you never have to look at. Moreover, upon wrapping it, YOU FULLY CONTROL ITS ACCESS AND MUTATION SEMANTICS. Any references you export are under your supervision.
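A rough sketch of the sort of wrapper I mean – the type and method names here are invented for illustration:

```javascript
// Wrap a raw array in a type whose name states the intent (a queue of
// pending jobs, say) and whose methods define the only legal mutations.
function PendingJobs() {
  var items = [];                      // hidden behind the closure

  this.enqueue = function (job) {
    items.push(job);
  };
  this.next = function () {
    return items.length ? items.shift() : null;
  };
  this.count = function () {
    return items.length;
  };
  // No method hands out the raw array, so callers cannot splice,
  // reverse, or otherwise mangle it behind our back.
}

var jobs = new PendingJobs();
jobs.enqueue("resize images");
jobs.enqueue("send mail");
console.log(jobs.count());   // 2
console.log(jobs.next());    // "resize images"
```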
Aside from that, your position is based on many faulty assumptions typical of JavaScript developers who don’t understand how complicated JavaScript makes client-side rich interactions. For instance, you don’t “put code complexity on the server”. You put complexity into the data model and process model. Where that goes in a client/server scheme depends on your app’s communication protocol. With REST, for example, state is kept on the client.
@JohnZ
Since I have found the need to respond to the same notions more than once, it made sense to write a weblog post (to which I could later refer), rather than as a comment to one particular article.
Script in the web browser is programming in the small – or should be. If script in the browser grows large … you may have a problem.
There is another characteristic I have noted of programmers. Not all climb the learning curve to generate good abstractions. Those that do often generate entire forests of abstractions. The next stage on the learning curve is to use *exactly* enough abstraction, and no more. Even fewer programmers make it that far.
Sounds very much like you have not yet learned minimalism.
Iterations are lists, conceptually. Arrays serve as lists in Javascript, and are entirely sufficient in small scripts. If a script is small enough to be read in its entirety, then you have less need of encapsulation. Semantics are not just in abstractions, but also in usage. Given small usage, array semantics inferred from usage are sufficient.
Server-side usage is relatively insensitive to “fat” code. Liberal use of abstractions and encapsulation costs little, and may be of benefit to groups of programmers. Server-side code is compiled once, and loads cheaply. The opposite is true of script in the web browser.
Of course, if you really do need encapsulation – if you have more than a simple iteration – then a simple closure can provide exactly what you need, and is more casually mutable than is possible with static-classed languages. This gives Javascript both the high and low ground in building exactly as much abstraction as the job needs.
All of which suits building small scripts to run in the client.
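For example, a plain closure gives you an iterator with exactly as much structure as the job needs – a minimal sketch, with invented names:

```javascript
// An iterator over an array built from a closure: the index is private,
// and the only operation exposed is "give me the next item".
function makeIterator(items) {
  var i = 0;
  return function () {
    return i < items.length ? items[i++] : undefined;
  };
}

var next = makeIterator(["a", "b", "c"]);
var item;
while ((item = next()) !== undefined) {
  console.log(item);          // a, b, c
}
```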
I am not criticizing. I’m simply saying, please use my name instead of making it sound like I’m not an experienced professional. You asked whether a collections library was necessary, and I provided my opinion based on years of experience.
I know what minimalism is. I just can’t feed and clothe myself writing 10-line JavaScripts, especially when every company in my industry is in an arms race to uber-rich internet apps.
Minimalism occurs locally and globally.
Creating a minimal design globally is difficult (in fact, the only way to do it is to write a Turchin Supercompiler for client/server apps in order to have an optimization engine). Most programmers screw it up horribly. Others are willing to accept that global minima are hard to plan for, and may also change over time. In response, they target a local minimum. A good example is SproutCore defining MVC strictly in terms of the client, truly putting the “Desktop in the Browser” by forcing the Desktop’s primary GUI architecture (MVC) into the browser. It’s a hack, but a smart one.
Hope this doesn’t sound “macho”. To the contrary, my opinions are formed from years of sucking at programming (too many bugs) and then turning to a better approach where my code is easier to read, UNDERSTAND, and maintain, and contains far fewer bugs. My morale is much higher. If minimalism means returning to the coding practices of my teenage years, then I pass.
“The web development community consciously chose to stick with a pathetically crippled technology to avoid having to change.”
Yes, I remember voting for that.
[Maybe you called it “not breaking the web”, which seems to have been the rallying cry.]
I think my ballot paper looked something like this:
[_] Make browser that no one will use
[X] Make browser that some people will use
Supposing we, the development community, consciously decided to break the web, how would we do that? All we can do is create something “perfect” and let it sit there, irrelevantly. The imperfect, unholy-mess web will still be there, much bigger than our perfect-but-irrelevant version.
The only way to change the web is to come up with some way to evolve it gradually enough so that people benefit without having to take on a cost.
If I wanted to make the perfect web browser with the perfect programming model, I could do it right now. The only thing that would stand between me and successful widespread adoption is that I don’t command some kind of army to visit all the homes and businesses and force everyone else to install my browser at gunpoint.
By the way, Douglas Crockford’s book is the one to read for the details of what is wrong or right with JavaScript. It’s called “JavaScript: The Good Parts” and it’s a very thin book.
[Daniel – thanks for clarifying your position. My comments were based on an alternative view. – Jonathan]