Reflecting upon my previous post, I am wondering why LISP triumphalists like Paul Graham annoy me so much. Perhaps it is because I used to be one myself, in spirit if not in syntax. And also because I now see them as a major symptom of what ails programming.
It is an easy trap to fall into. Programming requires a certain kind of analytical intelligence. Being more intelligent in that way increases your programming ability exponentially. It is emotionally satisfying to think of yourself as a different species from the average programmer. Programming becomes a demonstration of your superior intellect. Surely such powers shouldn’t be wasted on mundane chores, but instead applied to timeless works of brilliant inspiration, to be admired by the common programmer only through protective eyewear. What a load of self-indulgent adolescent crap. In programming as in the rest of life, attitude trumps intelligence. I had to learn that the hard way.
My road to perdition was the seemingly noble cause of doing computer science research in the real world. It is in the crucible of practical problems that great new ideas can form. Unfortunately the result was a strange brew of triumph and disaster. I built some systems that were uniquely powerful and far ahead of their time. Yet they were also wildly idiosyncratic, and deeply flawed in some practical aspects. I inflicted a legacy of pain upon the programmers who had to work with my misbegotten creations for decades afterwards. I too was afflicted. I spent those decades debugging the Frankenstein. In the OS crash debugger. In hex. Because wasn’t it oh so clever and efficient to build a radically new kind of database with extreme reliability requirements as a highly multithreaded kernel extension to the OS?
What may have saved my soul is that I did not sell the company and walk away from my mess. I stuck with it for 20 years, and eventually cleaned up the mess as best I could with a massive migration to a new framework. That experience taught me a lot about what really matters in programming. It is not about solving puzzles and being the brightest kid in the class. It is about realizing that the complexity of software dwarfs even the most brilliant human; that cleverness cannot win. The only weapons we have are simplicity and convention. Tattoo that on your forehead in reverse so that you always see it reflected in the screen. What is truly decisive on the battlefield are attitudes: hard work, responsibility, and paying attention to reality instead of the voiceover in your head.
Programming is an embarrassment compared to other fields of engineering and design. Our mainstream culture is one of adolescent self-indulgence. It is like something from Gulliver’s Travels, with the curly-bracketeers vs. the indentationites vs. the parenthesesophiles. The only thing that everyone seems to agree upon is how stupid all the other programmers are. Try googling “stupid programmers”. We have met the enemy, and he is us.
Programming will not grow up until our culture grows up. We can only patiently and persistently do our part to elevate the level of discourse, and share what wisdom we have gained.
P.S. I deleted a bunch of the cursing, but I think I’ll let the rest stay as a testament to my point.
It almost sounded for a minute like you were worshipping “standards” and “conventions” as if together they form a “silver bullet”. And then it sounded like you were criticizing those who treat “intelligence” as a silver bullet, or “tools”. TANSB. You know that. So don’t fall back on anything as a single guarantor of success. If there’s a single guarantor of success, it’s not giving up. 20 years. That’s not giving up. So good for you. But there wasn’t one thing there, responsible for your success, unless that one thing is just *you*.
After 15 years in the trenches, I’m used to doing the impossible. I get asked to maintain stability in an application, while adding features. I get asked to solve problems nobody has specs for yet, by four PM today. I get asked “how long till it ships?” by people who have been living and working beside the software that I have written over the last 8 years, who can’t be bothered to learn how the software works, or what it is called, or how it helps our customers, or the business that we run. I am the version control and configuration management czar. I set up the bug tracking database, or it doesn’t get set up. I do it, or it doesn’t get done. Oh, and I write 98% of the code that powers almost all our applications.
In short, I am a software professional. If you lift the lid and look inside, you might notice I’m arrogant. Sure. But I’m awesome, too.
I’m used to taking all the $*** people hand out, and trying to build a beautiful elegant product with it. If I have a little hissy fit here and there about where to put parentheses, while in the end I ship reliable, high quality software that is better than anything you could have done yourself, then allow me to have my opinions on the little things.
It’s the least you can do. It’s nice when the guys who run companies now pontificate on the coding practices of the guys in the trenches. Oh how nice that someone noticed that I write software. Today, someone came into my office and wanted to know the answer to a particular question he had. He’s asked me the same question seventy-five times in eight years.
Sometimes I think, the problem isn’t intelligence, or standards. The problem is that nobody can keep all this stuff in their heads. Software is slippery stuff. Like oil.
And if anybody should be embarrassed, it’s the engineers at BP. Not me.
Warren
This is a good and necessary follow-up to the previous post, but it could be further honed. The valuable conclusion is written between the lines.
In the previous post, Edwards was right in criticizing the “super-tools” camp, but he inadvertently built up the “super-programmer” camp. Now, he says, the real culprit is a self-indulgent “attitude,” which impairs programmers’ judgment so that they make poor choices in tools, architectures, strategies, implementations, etc.
The problem is that there’s a false dichotomy here, between attitude and intelligence. After all, it’s perfectly possible to be both pompous and right. So, why is this often not the case in computer programming?
The real determining factor is experience. Make a lot of mistakes, and you’ll learn from them. Be a good learner, and you won’t repeat them. You’ll make good software.
Edwards gives as an example his own evolution from a pompous asshole who made shitty software to a humbler producer of stuff that people can actually use — it was due precisely to the bitter taste left over from his previous failures. It’s fitting that he entitled his piece “mea culpa.”
It’s not that being pompous leads us to make mistakes. It’s that being pompous inhibits our ability to learn from mistakes, or even to recognize them as such. It’s thus not a crisis of attitude, but a crisis of learning.
In other words, life-long-learning is the path to all the other developer-virtues.
You can be a master and have the “beginner mind”.
W
It is unfortunate, but the quantity, flavor and quality of a number of the posts to this article have given great credibility to it, even to a person blind to the industry as a whole.
Part of the problem with a comparison between this ‘discipline’ and many others is that it is so young and in such rapid flux. Programming languages, scripting languages, methodologies and philosophies are sprouting up and dying out in quick succession, almost continually. The internet adds to this by making information (and opinions) of all kinds, from nearly every geographic location, almost instantly available to all, and that information is overwhelming in volume. Go onto the web, as perhaps nearly every programmer has, and attempt to learn a programming language or understand a development concept, and see the results. A massive, un-absorbable amount of information is available, much of it so widely varying in the professed experts’ opinions that one can only be left highly confused until an irrational amount of time has been spent sifting through what was found. All the while the would-be programmer is actually learning heuristically, using example code as signposts only.
Formal education is of nearly as little use, if for differing reasons. When I went to college, I wanted to be a programmer and immediately declared it as my major. I had chosen a college with a good reputation for computer science. I quickly discovered that everything they taught was well behind what the industry and the ‘real world’ was actually using and needing. The formal structure was excessively archaic and stifling. Almost all of my programming experience is self-taught and hands-on. It continues to be that way for me as I expect it is with many, many others, if not most.
Though programming, and applications development in general, can be quite lucrative for a few, the vast swamp that is the majority of developers are as I am… self-taught, bullied from one deadline to another by those who have little understanding of, or tolerance for, the creative process involved in a discipline that, as a whole, exhibits little discipline. It is the huge demand for large amounts of cheap software that drives much of this. That demand is in large part driven by an environment in a constant state of rapid flux. How many versions of Android have already been released? On how many differing hardware platforms? How many languages, like C#, have been developed to help accommodate the rapidity of change and allow developers to at least *try* to keep up? How many of those have been learned and then lost? How can we hope to resolve such issues?
I do not know any of those answers, not fully. I am not sure I have met anyone who does. I am not sure I would believe any “experts” who say they do. Sooner or later other “experts” will come along to say the opposite. Almost all of it is merely opinion, but presented as fact. That leaves me, like so many others, with only myself to rely upon for knowledge and the sifting of it. Other fields have formalization. They have nearly immutable building blocks: assured, time-tested knowledge that few of the masters of those fields disagree with, with which to climb to higher ground and establish it as a science and a discipline. We mostly have gurus with pockets of followings, or mythic figures with schools of faithful disciples and acolytes trailing them.
That is the way I see it. I am willing to bet others feel this way as well. Perhaps it is only to be expected from something as young as programming. Perhaps there are so many adolescent minds because programming, itself, is very much an adolescent. Maybe we will all grow up together.
You write this as if every home and building is built up to the highest standards. The Twin Towers? They had a structural bug in them, didn’t they? New homes and remodeling projects are often riddled with structural bugs.
It all comes down to time and money.
RE: The Twin Towers. No, they were actually reasonably soundly constructed, given the constraints of the time. By the 1990s, trans-continental flights laden with massive amounts of fuel were commonplace, and at that time, the buildings should have been re-evaluated. But in terms of construction, they were built to withstand a hit from the typical plane of the day, as well they should be.
Either they had a flaw or they were imploded. A sound building would not have fallen the way they did. The steel cage should have stood above the rubble instead of collapsing downwards at the same time as the rest of the building. Therefore either the steel cage was of a lesser grade of steel, or the building was purposely imploded.
What are you talking about? Please provide evidence that you know what you are talking about – how many skyscrapers have you designed? How many of them were designed to stay standing when a plane literally crashed into them? EVERY expert has gone on record stating that there was no flaw in how the twin towers were built, but of course, they never asked YOU – some random guy posting like he knows something on a blog post.
Ricardo, I’m sorry, but you seem to have been trusting the Internet again. I won’t attempt to disprove the 9/11 truther paranoia here, but you’re misreading what I wrote. I was merely talking about the past. When the Twin Towers were designed, airplane crashes were taken into account, and proper planning was done to account for that. It was decades later when much larger planes with much more fuel became commonplace that a problem (regardless of truther theories about the reasons for the fall) certainly did arise. So back to the topic at hand, it wasn’t a design and construction problem, and thus a bad analogy.
The software development field attracts a lot of intelligent people. Many of those people think that intelligence makes them a good software engineer, but in reality it takes a good deal of experience, maturity, and practical-mindedness. Unfortunately, it takes a long time for many to figure that out, and some (most?) never do.
However, I’ve often seen software engineering compared to other engineering fields in a negative light, as if it should be the same as they are, and, frankly, I think it’s bunk.
People’s expectations of software aren’t the same as products of other engineering fields. Architects design buildings that stand for 50 years. Sometimes buildings are remodeled, but often their basic structure and function remain intact for very long periods of time. Software, on the other hand, is constantly evolving to meet new demands and exploit new opportunities, and it is expected by consumers that it be so. Those expectations create pressure that prevents software development projects from delivering good results consistently in the way that other engineering fields do.
The underlying issue is the inherently scalable nature of software. A programmer can create something useful that can be duplicated indefinitely at virtually no cost. Because of that, a programmer can do things that make you or others orders of magnitude more productive in the future. But that feature of the software engineering field cuts both ways: doing it the “wrong” way can cost massive amounts of productivity in comparison to the “right” approach. Implementing a database as an OS extension, making it nearly impossible to debug, is just one example. However, even small design decisions can have huge, unforeseen impacts on the future productivity of both those who develop the software and those who use it. That, combined with the complexity of software and the uncertainty of future requirements, makes determining the “right” way to implement something nearly impossible.
This unique feature of software puts Software Engineering squarely in the realm of Extremistan, while all other engineering fields belong to Mediocristan. You could make Software Engineering as reliable as other engineering fields, but you’d have to stymie the pace of progress to do so. For that reason, I don’t think the two will ever be the same, nor should they be. However, I do believe there are better approaches than the ones we have now, and I think Jonathan’s work goes a long way toward finding some of them.
PS – If you don’t know what “Extremistan” is, read The Black Swan.
100% Agree with you Kevin. Technology advances too fast to allow software to be methodically calculated and engineered like a building. By the time the software is completed, the technology world would have advanced to such a degree as to render the software obsolete.
Not really true: Smalltalk had some ideas so far ahead of its time that most programming languages and frameworks (not even mentioning IDEs) are still catching up.
Ada/SPARK provided a way to create properly engineered software, formally proven to be correct.
Now the question is why people are using Ruby, which has virtually no type safety, and whose language still doesn’t have an official formal specification, only a reference implementation.
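A minimal sketch of what “virtually no type safety” means in practice, here in Python (another dynamic language; the function and values are invented for illustration): the mistake either produces a silently wrong answer or fails only at runtime, and only on the code path that actually runs.

```python
def total_price(quantity, unit_price):
    # Nothing stops a caller from passing a string here;
    # no checker rejects it before the program runs.
    return quantity * unit_price

print(total_price(3, 2.5))   # works as intended: 7.5

# Silently wrong: string repetition instead of arithmetic.
print(total_price("3", 2))   # "33", and no error at all

# Fails, but only at runtime, and only when this path executes:
try:
    total_price("3", 2.5)
except TypeError as e:
    print("caught at runtime:", e)
```

A statically typed language in the Ada/SPARK tradition would reject all three misuses at compile time, before any of them could reach production.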
The real world connects lisp/ruby/python fanboys with managers who have no way to evaluate technology, and with VCs holding huge amounts of money counting on a huge payout (which is possible). This creates a mix that could never happen in other industries.
No one sane would invest $5 million in a skyscraper that someone promised to build using new super-uber-dynamic sapphire-enhanced marshmallows instead of steel and concrete, all for half the price in half the time; any expert you asked would call bull. But in software engineering, this happens every day!
Even if there were someone crazy enough to try that, there are safety laws you have to follow, and no one would risk a potential lawsuit for endangering humans.
In the software industry no such “brakes” on human greed/stupidity/ego exist, and that is why we have the situation we do.
Yea! You’d think those stupid VCs would learn after realizing that there have never been any successful sites/apps built with dynamic programming languages!
It appears you guys are missing the point.
To design a building to last for 50 years you need to consider future maintenance much harder than personal style. You use standard replaceable light bulbs and sockets. In software we love to use the clever light fixture that will last no longer than the inauguration day; fixed in a way that it is impossible to replace without tearing down the wall.
Simplicity and convention! Easy to maintain and built to last. Your cleverness will cost hundreds of thousands of dollars to your company in the long run, and many headaches.
True elegance resides in the art of not making things noticeable.
“To design a building to last for 50 years you need to consider future maintenance much harder than personal style.”
Completely agree. Maintainability is usually the most important design consideration, and personal style often gets in the way of that. Hence, my first paragraph.
My point is that software doesn’t last, regardless. What’s “simple and conventional” today will be obsolete in three years. In a world of ever-shifting technologies and requirements, you will always make design decisions that you will later end up regretting.
Kevin,
Along with being a 15+ year veteran of software development, I am also a small business owner (SBO). What I have come to learn as a SBO is that technology simply does NOT matter. It does to those of us that make a living by solving problems with technology, but not to the business person. Do you really think that 99.9% of the users out there care about the difference in Outlook 2000 vs Outlook 2010? They just want to send emails, if it works, great, if not they are very unhappy. If anything technology gets in their way when they want to get their real job done.
My point is that software is not the end. The ‘end’ is solving the problem at hand: with Outlook it is sending/receiving mail. Did folks have this need to send/receive email 20 years ago? Yes (a few did). Will we need to do it in 50 years? Yes. The problem we are solving today via software will still be there in 50 years.
What we need to do is design/model a solution to the problem, relatively independent of the current technology. As an example, design and model the database for a system without regard to whether it will be implemented in PostgreSQL, Microsoft SQL, Firebird, or Oracle. Today one might be the ideal implementation, tomorrow another. What is important is to have that accurate design/model. That way, 5/10 years later, when a business decision is made to redo the system because of changes in technology or in the business itself, there is a good solid blueprint from which to start making the changes.
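One common way to realize this idea in code is to program against an abstract storage interface, so the model outlives any particular backend. A minimal Python sketch, with names invented for illustration (none of these come from the databases mentioned above):

```python
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """The model: what the business needs, independent of any vendor."""
    @abstractmethod
    def save(self, order_id: str, total: float) -> None: ...
    @abstractmethod
    def get_total(self, order_id: str) -> float: ...

class InMemoryOrderStore(OrderStore):
    """One interchangeable implementation. A PostgreSQL- or
    Oracle-backed class would implement the same interface."""
    def __init__(self) -> None:
        self._rows: dict[str, float] = {}
    def save(self, order_id: str, total: float) -> None:
        self._rows[order_id] = total
    def get_total(self, order_id: str) -> float:
        return self._rows[order_id]

# Business logic depends only on the abstract model, never the backend:
def record_sale(store: OrderStore, order_id: str, total: float) -> float:
    store.save(order_id, total)
    return store.get_total(order_id)
```

When the backend is swapped 5 or 10 years later, only a new `OrderStore` implementation is written; `record_sale` and everything built on it survive unchanged, which is exactly the blueprint-outliving-the-technology point.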
Let’s look for a moment at cars. That is an industry where technology is rapidly changing and there are new models EVERY year! If you don’t think they have changed much, go look at how efficient modern engines are today compared to the past. Take it in 10-year increments; we don’t see it because we don’t look, but compare the efficiency of a 2010 car to a 1980 car. WOW!
Do you think they start from scratch each year when creating the next model year? No, they have solid designs which the engineers go in and change. Even when they are creating a whole new model, they don’t start with a blank slate; I am sure they start with a template. And they never build anything without a real design.
Software needs to be the same way, all our languages, IDE’s, databases, OS’s, etc. are simply tools to solve a problem. We love the tools; the same way a car mechanic loves his/her Craftsman or Mac tools. The person that takes in his/her car to get fixed only cares that the problem be solved; they don’t give a hoot about the tools used to solve the problem. The same is true of our customer, whether that is the end user, our boss, or the product manager, they don’t care about the tools, they care that the problem is solved.
The design/modeling is the process of solving the problem independent of the technology, or as independent as possible. Implementation is using the technology to create the actual solution. While I agree an implementation should have planned obsolescence, the design/model should not.
It all goes back to that saying: I stand on the shoulders of giants. Engineers do this by designing and building upon old designs. Techniques learned when building with stones hundreds of years ago are used today with the most modern of materials; the design hasn’t changed, the technology has.
Software development is NO different from the Engineering fields: There are problems people need solved that have existed for a while and will continue to exist. We can use one set of tools today to solve the problem, another set of tools tomorrow, but be it today or be it 2060, it will be basically the same problem.
As software creators, we need to stop looking at this short term and start looking long term!
Sam,
I appreciate the comments. I’m keenly aware that users want solutions to problems, and don’t care about the technology. I also agree with you completely about what you say about modeling processes and solutions. I am a practitioner of OOA/UML, and what you said pretty well describes my approach to software design. It can definitely help insulate you from technology changes, and even from changes to business requirements to a lesser extent, but it’s not a panacea.
What I’m trying to get at is that in software design, small decisions have a big impact on productivity. As an example, I recently made a small change to one of our internal business applications that measurably boosted the productivity of our recruiting department by 85%. The problem is that we don’t always know what decisions will have those impacts and what those impacts will be (or else I would have made that change 9 months earlier).
People are often willing to take greater risks in the software development projects than in other fields because that volatility in outcomes creates greater opportunity for reward, and the consequences of failure are often less drastic. That is not to say that some people don’t take stupid risks. But, there is a balance along the continuum of risk and certainty, and most of the time that balance lies farther on the riskier side in software development than in other engineering fields.
As a final thought, try comparing the success of other engineering fields to only that part of software engineering that involves projects where the consequences of failure are truly great (e.g., flight control software). There have still been major failures even in that limited scope of software engineering. However, I suspect that comparison would be much more favorable to the software engineering field. But to suggest that all software should be developed with the same degree of quality control is just impractical.
Sorry, Sam, but your whole comment is mistaken, because it’s built on a mistaken notion. To illustrate, let me pick on just one point:
20 years ago, there was no spam, no HTML mail, and no email malware.
That’s just another form of too-clever madness, like building a database as a kernel extension. It sure sounds sensible on paper to a lot of people, though.
The amount of fatalism in the comments here is staggering. My approach to creating software is to always try to do a better job than the last time, and that means continuously evaluating the tools and techniques you use for building it. Technology matters, and you should build systems for the long run (if only because if you don’t, you and/or your users will pay the price a few months, not years, down the line). Short-sightedness is a problem in US culture that unfortunately seems to have found its way into software engineering as well.
I have mixed feelings. Without the people who think out of the box and do things in a different way, innovation is halted. Sure, there is a need for the programmers who conform to a system and make it operational. But without the ones who think different, the rest will just be doing the same things, making software development unnecessary.
We need both the conformists, for they create stability, and the dreamers, for they move us ahead.
I don’t understand what you are talking about.
I agree, and I think you’re on to something bigger. It’s easy to be arrogant when you’re young and don’t know any better. And if you’re really good at your job, and everyone defers to you, you never learn. You think you’re always right. You never gain wisdom.
It’s tragic.
Sometimes making a big mistake is the best thing that can happen.
I think that I understand what you are saying, and it seems to fall within the realm of a generic inquiry into what results–or what the nature of intellectual transition is–between the clash of naive illusions of grandeur (or dreams of becoming a lauded tech celebrity in this particular case) and the eventual sobering–and to an extent relieving–appreciation of a mature grounded work ethic. Perhaps our collective cultural infancy is somewhat exacerbated in the case of the coding world by the (admittedly speculative) prospect that the bulk of us who pursue a sustained interest/vocation in the world of “computer science” are actually some of the more awkward and less well-adjusted members of American society (I know this is fuel for an enormous tangent discussion), and therefore may very well be more prone to seek unconventional instruments effectively functioning as niche fitness-meters. Add to this the fact that we toil in an environment where our individual intellects are regularly stretched and measured, and predictably the product of our behavior reflects that of one who has developed a shallow, virtually one-dimensional form of self-appraisal. In other words, we lose sight of our identities in the larger pragmatic context and instead become arbitrary competitors in some intractable short-sighted contest revolving around our own bizarre brand of geek ego, finished with a dollop of warped inferiority complex.
You say that “Programming will not grow up until our culture grows up.”
Well, that’s essentially the reason to love “LISP triumphalists like Paul Graham”: their community is more like an academic one, hence the fair amount of respect for anyone’s work, and less of the insulting-everyone-else-for-being-so-stupid-not-to-understand-what-you-are-saying found in your post.