My colleague Danny Dig wants to rebut my last post, but is in the process of moving, so I will attempt to paraphrase his position. Danny is going to the Universal Parallel Computing Research Center, richly funded by Intel and Microsoft along with a sister lab at Berkeley. That fact alone argues against my impression that multicore research is burning out. Where there is funding, there is research.
Danny points out that latency can often be a more important issue than raw speed, and requires parallelism even on single-core processors. Performance tuning is often only needed in small regions of code, but latency improvement can require more global design changes. Latency is an important end-user issue, making parallelism an essential feature that must be embraced rather than an implementation detail to be hidden.
Danny also points out that we don’t necessarily need to adopt a whole new programming language semantics in order to make parallelism easier. His latest work is a good example: developing refactoring tools that assist in the tricky business of adding parallelism to existing code. Rather than force a whole new programming language on us, it helps us deploy existing parallelization techniques. Great idea.
These arguments force me to admit that parallelism is essential, that it is not just a performance obsession, and that current research on it could in the end be a good thing for programming overall. I still don’t feel like working on parallelism myself, mostly because I have burned out on it. I guess what really bothers me is the way that multicore is hyped up as the most important issue facing software. Parallelism matters, but we have way bigger problems, like figuring out how to not totally suck at programming.
Update: It finally hit me what really bugs me about the whole multicore frenzy. I have been trying to convince people of the need to radically change our programming languages in order to make them easier to use. The difficulty of programming is a fundamental issue for our entire field. Almost no one cares. Academics treat it as a disreputable topic. But give people a chance to speed up their programs, and they are willing to consider all manner of radical new programming languages. Just pointing out that performance is not a universal problem subjects me to a stream of abuse. It makes me angry and jealous.
Seems like the post didn’t come out well on the web page.
I read the whole thing in the RSS feed, but on the website I only see the first sentence.
[Hacked! – Jonathan]
The point of parallelism is not “speed” but “reliability” (aka Engineering).
Applications architected out of small, encapsulated units are more likely to produce working (aka reliable) software.
The concept of “processes” produces the best encapsulation known to man.
The *only* reason that “processes” have not been used as a fundamental unit of architecture is concern over efficiency.
Now that we have the luxury of huge amounts of CPU power, the emphasis should be on correctness. That is, we should only be using languages that describe software architecture in parallel terms, parallel down to the statement level, if not lower.
Hardware design works “better” than software design because the fundamental unit of hardware (the “chip”, or IC) is parallel in operation. The issues of parallelism, e.g. “race conditions” in asynchronous hardware, have been studied and the solutions are well documented. We (software artisans) should be stealing this knowledge from the hardware world. The only reason we can’t is that our paradigm is fundamentally non-parallel. We use call-return everywhere, and that imposes a horrible cost on our designs.
In his original article, Jonathan Edwards claims that multi-processing is hard and bug-laden. This is only due to the fact that we (software artisans) try to layer parallelism on top of call-return protocols. RTOS’s / processes / threads / preemptive multitasking are all epicycles that attempt to correct a fundamental deficiency in our manner of thinking.
Think of a software unit as a stand-alone process. The software unit can accept inputs (events) and produce outputs (events). No further protocol is implied (i.e. the sender doesn’t know when the event arrives, the sender doesn’t need to wait, and vice versa).
What could be simpler?
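As a rough sketch of that style, here is a minimal event-driven unit in Go, using buffered channels as stand-ins for event queues (the names Event and unit are purely illustrative, not any particular framework):

```go
package main

import "fmt"

// Event is a hypothetical stand-in for whatever message a unit exchanges.
type Event struct {
	Name  string
	Value int
}

// unit is a stand-alone software unit: it accepts input events and
// produces output events. No call-return protocol is implied.
func unit(in <-chan Event, out chan<- Event) {
	for ev := range in {
		// React to an input event by emitting an output event.
		out <- Event{Name: ev.Name + "-done", Value: ev.Value * 2}
	}
	close(out)
}

func main() {
	in := make(chan Event, 8)  // buffered: the sender does not wait
	out := make(chan Event, 8)

	go unit(in, out)

	in <- Event{Name: "tick", Value: 21} // fire and forget
	close(in)

	for ev := range out {
		fmt.Println(ev.Name, ev.Value) // tick-done 42
	}
}
```

The sender drops an event into the input queue and moves on; it neither knows nor cares when the unit gets around to processing it.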
The main reason that we don’t think in such simple terms is that our paradigm has been corrupted by call-return.
(email me if you want to discuss: tarvydas@visualframeworks.com)
Paul Tarvydas
Toronto
my correct email address is
tarvydas@visualframeworksinc.com
I sympathise with your distaste for the hype around multicore technology. However, I’d like to stress that from a programming/system-modelling point of view there is much more to it than gains in performance and scalability. If done right, going from objects to actors can lead to a natural partitioning of system entities, easier distribution, looser coupling, and a more robust system architecture (cf. larger Erlang systems). In the long run it might lead to truly autonomous (for your favourite definition of that term) agent-based systems. Hence, multicore technology appears much more as a means of making system designs based on thousands of concurrently executing actors feasible than as an approach to increased performance and scalability. Therefore the question of how multicore technology can be used to derive new, powerful modelling primitives (beyond actors) seems a worthwhile topic for programming language research.
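To make “thousands of concurrently executing actors” concrete, here is a small Go sketch, with goroutines and channels standing in for actors and mailboxes (the counting actors and the count of 10,000 are arbitrary, purely for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// counterActor owns its state exclusively; the only way to interact
// with it is by sending messages to its mailbox.
func counterActor(mailbox <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	total := 0
	for n := range mailbox {
		total += n
	}
	results <- total
}

func main() {
	const numActors = 10000 // lightweight actors are cheap to spawn

	var wg sync.WaitGroup
	results := make(chan int, numActors)
	mailboxes := make([]chan int, numActors)

	for i := range mailboxes {
		mailboxes[i] = make(chan int, 4)
		wg.Add(1)
		go counterActor(mailboxes[i], results, &wg)
	}

	// Senders are loosely coupled: they know only mailboxes, not actors.
	for i, mb := range mailboxes {
		mb <- i
		close(mb)
	}

	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("total:", sum) // 0 + 1 + ... + 9999 = 49995000
}
```

Each actor encapsulates its own state, and distribution or failure boundaries fall naturally along the message-passing seams.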
I fully agree that the *first* priority of software design should be on the reliability front. Most business programmers have the task of converting business rules into executable form, and the vast majority of these have no need for high performance on the first pass. Optimization should take place either automatically (via optimizing compilers, caching, etc.) or as a last resort when real-world needs show it to be necessary.
Yet clock-speed advances are simply not going to happen the way they once did: the chip makers rushed toward 4 GHz and then retreated when it was understood that they were on the path to making small suns, not processors.
What is continuing is the original promise of Moore: transistor counts keep climbing. Considering that we get those transistors “for free”, inasmuch as we can have two, four, eight, etc. cores for the same price, it seems whimsical to ignore multicore technology… we are going to get it whether we “want” it or not.
More importantly, what we perceive as “excess” processing power will eventually be harnessed for new interfaces. After all, the spreadsheet required a base level of screen interactivity and “wasted” cycles to provide the high level view that it did, yet I would consider the spreadsheet to be the reason that business adopted the microcomputer, way back in the day.
I can’t imagine that we *won’t* find some ways to extend abstractions further, on the back of all that excess computing power, to support the ability to program at what today are considered absurdly high levels. After all, people used to laugh at the Lisp community for “knowing the value of everything and the cost of nothing”, referring to the “wasteful” way that computation happened in that environment. Now we see mainstream languages, like C#, adopting the functional way (if only in fits and starts), because we *can*. I’m personally thrilled, as functional and declarative programming strip away much that can go wrong in the procedural world and allow more reliable designs… but only with the support of absurd amounts of computing power that we waste to make it work.
@John Lopez
@allow more reliable designs… but only with the support of absurd amounts of computing power that we waste to make it work.
uh, this is a contradiction, in my experience. what design becomes more reliable by turning an O(n) algorithm into O(n^2)? the truth rests with Norbert Wiener’s observation: complex systems have many layers of complexity, some accelerating structural decay and others fortifying against decay.
While you are correct on a theoretical level, my comment is about the ability of programmers to create code that is reliable. Millions upon millions of business workers write spreadsheets without realizing that they are “coding” the financial calculations. The simplicity and reliability provided by the underlying system is what makes this possible in the first place.
Yes, sometimes the underlying system has bugs, but does anyone honestly think that a spreadsheet is likely to contain *more* bugs than the same user trying to write, say, Basic code to perform the same task? We burn a lot of cycles to provide the spreadsheet UI and formula-resolution system. In return, we have a system simple enough, reliable enough and robust enough for a “non-programmer” to achieve useful results. (Yes, there are examples of the model being pushed too far… I have coded custom solutions to replace “spreadsheet hell”, but the general principle remains.)
Spreadsheets, by default, do not have a stable structure. In fact, they have implicit structure that the user unwittingly changes through menu invocations (ever wonder what that REF! means? It’s probably there because you mixed coalgebraic and algebraic operators and the software didn’t know what to do, so it just barfed REF! REF! into the cells). You have to add stability through immutable cells and “spreadsheet designs” like the “stair-step layout” (a useful spreadsheet design strategy very few people actually use), and then lock in that layout so that row and column insertions within each “stair-step” do not degrade the ad-hoc structure of the spreadsheet. However, people don’t use “stair-step”, because what they often really want to do is take a spreadsheet they worked on for a day and print it as a report. Stair-step layouts don’t look nice printed. All this does is point out how lame reporting software is. Nobody gets it right, so users spend 8 hours hacking in Excel. So right here, you’ve got Excel’s biggest selling point as its biggest weakness when it comes time for maintenance.
“Simplicity and reliability” is an unfounded characterization, given the number of heuristics required to successfully maintain spreadsheets. Excel is pretty rocking at ultra-simple tasks, like fantasy football draft lists, or maintaining a programming project task schedule (such as the one mentioned in Joel Spolsky’s Painless Software Schedules). Once the user is tasked with “re-plan the project schedule for the skyscraper in light of changes in resource availability”, they need Microsoft Project.
Also, the biggest waste in spreadsheets is now gone. In the 80s, spreadsheet programs allocated memory based on the visual dimensions of your spreadsheet. This meant that if you had only one bit of data in cell Z99, you allocated 99*26 cells of memory, a true Schlemiel the Painter algorithm for memory allocation. Eventually they moved to sparse-matrix memory models, and Schlemiel was replaced by a programmer who knew how to apply a Flyweight design.
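A rough sketch of that sparse model in Go, using a map keyed by cell coordinates so that memory scales with the number of occupied cells rather than with the visual extent of the sheet (cellKey and sheet are illustrative names, not anyone’s actual implementation):

```go
package main

import "fmt"

// cellKey identifies a cell by row and column. Only occupied cells
// consume memory, no matter how far apart they sit on the sheet.
type cellKey struct {
	Row, Col int
}

// sheet is a sparse spreadsheet: a map instead of a dense 2D array.
type sheet map[cellKey]string

func main() {
	s := sheet{}

	// One value in Z99 costs one map entry, not 26*99 allocated cells.
	s[cellKey{Row: 99, Col: 26}] = "lonely value"

	fmt.Println("occupied cells:", len(s)) // 1
	fmt.Println("Z99:", s[cellKey{Row: 99, Col: 26}])
}
```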
Any cycles spent rendering data to the screen for user manipulation are well spent. And your attitude about clock cycles being wasted on a “formula resolution system” is exactly the sort of thinking that leads to embedding application logic in static is-a hierarchies. Historically, it’s the biggest mistake that programmers refuse to admit: non-technical issues dominate technical problems, and the best business application architecture is based on non-technical issues first and foremost. It’s sort of like this question of disbelief people have: “So, you want a macro language, a way to dynamically record those macros easily, AND a wizard language that encapsulates commonly recorded macro language instructions?” Yes, yes, I do. I want to make money, and that means fast turnaround time for my customers. If your customers think you suck at programming, it is because you are not doing it right and are focusing on performance prematurely. I’ll add useful features and worry about performance when I get a “program is too slow” bug in my call-tracking system. Wasting computer clock cycles pales in comparison to wasting human clock cycles; we’re the real-world problem solvers.
Hmmm, we seem to be talking past one another, and I’m at my self-imposed third comment to a thread. (Saves quite a bit of pointless back and forth, I have found.)
The spreadsheet was just an example of using cycles to provide VALUE, which seems to have been lost when you railed against “your attitude about clock cycles being wasted”. Perhaps “wasted” was a bad word… I really meant to point out that we burn a lot of cycles to provide value to business users who otherwise would be unlikely to complete a task.
Spreadsheets are just one example though, and I pointed out that they have limits. I would hope that the disclaimers were strong enough to not lead to the thinking that they were full programming environments.
As another example, Prolog style logic languages are terribly inefficient from a cycle count point of view, but allow specification of problem spaces that are succinct and powerful. I will trade cycles for clear problem specification languages.
Our fancy GUIs likewise “waste” cycles, yet I wouldn’t want to go back to command lines for a large number of tasks.
So you and I appear to be in agreement. Wasting human cycles *is* worse than wasting CPU cycles, which leads back to my apparently poorly worded premise: we should embrace multi-cores, or any other improvement in technology, because it is those “wasted” resources that make our lives easier in the long run.
Jonathan, it’s been more than a year since you released your demo of Subtext 2. Should I still hope for a release? Subtext is the most exciting software thing (except for Wolfenstein: Enemy Territory) in my life. I played with the first version and found it interesting. It sort of sparkled my brain. Let us know what’s happening around Subtext 2. Thanks and bye
[Dercsár – I am glad you like it. See my earlier post on what’s next. Subtext is going to have to hide inside a Trojan Horse of some sort, which I think may be a modeling language for web apps. A goal is to release something that others can play with, instead of the demo-ware I’ve built so far. – Jonathan]
Updated original post
i sure do wish i had a zillion bucks to fund you guys. software sucks buttocks and ain’t no fun. -Mr. Java Day Job.
p.s. i do hope that massive concurrency can be leveraged to let us take more componentized / fail only / erlang / clojure approaches.
Re: releasing subtext. please, just try to put out what you have. open source it. find some like-minded folks on lambda-the-ultimate.org to work on it in their spare time with you. remain the benevolent dictator. it is more important to get it out and available to people, no matter how foobar’d it is right now, than to wait and wait and wait for it to somehow magically become releasable. i say this all because i honestly want it to succeed and take me away.
Re: releasing subtext.
Release early, release often, or your project will die. It takes no more than a minute to type “git init”, tar the result, and stick it somewhere. You don’t even need to understand what the command does, it’s just a release. It doesn’t matter if your code sucks, especially when it’s meant as an example of “note that code sucks, this is my alternative”.
It doesn’t matter if it’s broken, or if you think it’s useless, or if it’s hard to make it run at all, or if the whole point is that the code shouldn’t matter, because the design is what’s important.
You can’t arbitrarily declare that the world isn’t ready for your ideas and hide away your prototype until “the right moment” or because you “just need to figure out this one thing first”. That’s what people who are insane, liars, or just plain wrong, do with their mythical perpetual motion devices / cars that run on water / cure for all disease. Release now, whatever you’ve done. Let people play with it, let people talk about it, let people ignore it.
Written because the RSS feed for this place hasn’t moved in months and the idea is too interesting to let it die without some squirming.
The creators of multi-core processor chips have added additional ALUs but have not improved access to memory. In data-intensive situations the problem with additional cores is being able to feed them: the bottleneck is access to memory. Eventually, as you add more cores, you cannot keep them all busy, because some of them will be waiting on a memory fetch at any given time. As the number of cores increases, the effect of this bottleneck increases. Thus as the number of cores increases we receive less return on each core, until we get no additional benefit and most likely a performance degradation, as more overhead is required to manage the data flow.
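A rough way to see this effect, as a sketch (the workload, sizes, and worker counts here are arbitrary assumptions, purely to illustrate a memory-bound scan that stops scaling once bandwidth saturates):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// sumSlice streams through data once. With a working set far larger than
// the caches, the loop is limited by memory fetches, not by arithmetic.
func sumSlice(data []int64) int64 {
	var total int64
	for _, v := range data {
		total += v
	}
	return total
}

func main() {
	const elems = 1 << 26 // ~512 MB of int64s, far larger than any cache
	data := make([]int64, elems)

	for workers := 1; workers <= runtime.NumCPU(); workers *= 2 {
		start := time.Now()
		var wg sync.WaitGroup
		chunk := elems / workers
		for w := 0; w < workers; w++ {
			wg.Add(1)
			go func(part []int64) {
				defer wg.Done()
				sumSlice(part)
			}(data[w*chunk : (w+1)*chunk])
		}
		wg.Wait()
		// If the scan were CPU-bound, the time would halve as workers double;
		// once memory bandwidth saturates, adding cores stops helping.
		fmt.Printf("%2d workers: %v\n", workers, time.Since(start))
	}
}
```

On typical desktop hardware the timings flatten out well before the core count is exhausted, which is the feeding problem described above in miniature.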