The Great Software Stagnation

Software is eating the world. But progress in software technology itself largely stalled around 1996. Here’s what we had then, in chronological order:

LISP, Algol, BASIC, APL, Unix, C, SQL, Oracle, Smalltalk, Windows, C++, LabVIEW, HyperCard, Mathematica, Haskell, WWW, Python, Mosaic, Java, JavaScript, Ruby, Flash, Postgres.

Since 1996 we’ve gotten:

IntelliJ, Eclipse, ASP, Spring, Rails, Scala, AWS, Clojure, Heroku, V8, Go, Rust, React, Docker, Kubernetes, Wasm.

All of these latter technologies are useful incremental improvements on top of the foundational technologies that came before. For example, Rails was a great improvement in web application productivity, achieved by gluing together a bunch of existing technologies in a nicely structured way. But it didn’t invent anything fundamentally new. Likewise, V8 made new applications possible by speeding up JavaScript, extending techniques invented in Smalltalk and Java. Yes, there is localized progress – for example, ownership types were invented in ’98 and popularized in Rust. But since 1996 almost everything has been clever repackaging and re-engineering of prior inventions, or the addition of leaky layers to partially paper over the problems below. Nothing is obsoleted, and the teetering stack grows ever higher. Yes, there has been progress, but it is localized and tame. We seem to have lost the nerve to upset the status quo. (Except Machine Learning, which has made real progress, but is also arguably an entirely different kind of software. I am talking here about human programming.)

Those of us who worked in the ’70s–’90s surfed endless waves of revolutionary change. It felt like that was the nature of software, to be continually disrupted by new platforms and paradigms. And then it stopped. It’s as if we hit a wall in 1996. What the hell happened in 1996? I think what happened was the internet boom. Suddenly, for the first time ever, programmers could get rich quick. The smart, ambitious people flooded into Silicon Valley. But you can’t do research at a startup (I have the scars from trying). New technology takes a long time and is very risky. The sound business plan is to lever up with VC money, throw it at elite programmers who can wrangle the crappy current tech, then cash out. There is no room for technology invention in startups.

Today only megacorps like Google/Facebook/Amazon/Microsoft have the money and time horizons to create new technology. But they only seem to be interested in solving their own problems in the least disruptive way possible.

Don’t look to Computer Science for help. First of all, most of our software technology was built in companies (or corporate labs) outside of academic Computer Science. Secondly, Computer Science strongly disincentivizes risky long-range research. That’s not how you get tenure.

The risk-aversion and hyper-professionalization of Computer Science are part of a larger worrisome trend throughout Science and indeed all of Western Civilization that is the subject of much recent discussion (see The Great Stagnation, Progress Studies, It’s Time to Build). Ironically, a number of highly successful software entrepreneurs are involved in this movement, and are quite proud of the progress wrought by the commercialization of the internet, yet seem oblivious to the stagnation and rot within software itself.

But maybe I’m imagining things. Maybe the reason progress stopped in 1996 is that we invented everything. Maybe there are no more radical breakthroughs possible, and all that’s left is to tinker around the edges. This is as good as it gets: a 50-year-old OS, 30-year-old text editors, and 25-year-old languages. Bullshit. No technology has ever been permanent. We’ve just lost the will to improve.

[The discussion continues at Open source is stifling progress.]

[Jan 5: Deleted then reinstated by popular demand. The argument could certainly be improved. I should at least note that there has been substantial technical progress in scaling web performance to meet the needs of the MegaTechCorps and Unicorns. The problem is that much else has languished, or actually worsened. For example small-scale business applications as were built with Visual Basic, and small-scale creative work as with Flash, and small-scale personal applications as with HyperCard. This trend is alarmingly similar to the increasing inequality throughout our society.]

58 Replies to “The Great Software Stagnation”

  1. A related problem: we’re stuck using bad, old technologies as well. SQL is the most awful, but the fact that the filesystem is the base storage layer is also terrible. Also: JavaScript, desktop UIs, …

    Also, we (the software industry) stopped trying to enable users to solve their own computing problems. We invented the spreadsheet, the user-friendly database (eg FileMaker), and… that’s it.

    1. I would disagree with your second paragraph. There have been many other advancements that have empowered the users to solve their own computing problems. We have BI tools (Power BI, Tableau, etc.), no/low code solutions (Power Apps, etc.), and Robotic Process Automation (UiPath, etc.)

  2. HTML5, CSS3, SQLite, Android, TensorFlow, and all of the data science libs. Andrew Ng was using Octave for his 2011 course. Arguably, Windows was better in 2004, when you could use XP with Google Desktop. Since then, Google Desktop disappeared, and Windows 10 became bloated because it was redesigned to spy on customers.

    1. When I read this, all I hear is the babel in Jean Sammet’s “Tower of Babel”, her book on Programming Languages. Maybe we should look deeper into the past to see what we are missing. – Dan Friedman

  3. It’s the cycle of civilization.

    https://youtu.be/pobG397KwDw

    The opening of the New World to western European settlement provided a temporary reprieve by permitting young men to settle a homestead with which they could afford to form a family. The 1950s were the last hurrah of this.

    Since then it has been all about who could capture positive network externalities (network effects). “Web 2.0” wasn’t technological, it was a shift in business model to network effect capture.

    1. I remember when Web 2.0 came out.
      I thought and said: OK, so nothing’s really new, nothing’s really gonna change.

      I was wrong. Everything changed.

      Like nobody foresaw the internet, nobody foresaw the social networks. Gee, nobody even foresaw what the “like button” would do! (Did you watch The Social Dilemma?)

      Web 2.0 wasn’t technological (maybe it was). It was definitely a paradigm shift.

  4. Jonathan, maybe you can find inspiration in the struggles of this technology:
    https://en.wikipedia.org/wiki/3D_XPoint

    I have kept an eye on this and related tech that sprang from Stan Ovshinsky’s Energy Conversion Devices labs in the ’60s and ’70s, watching it much more intently once it was formally announced by an Intel/Micron JV in mid-2015. It is mind-boggling in its potential.

    It was my privilege to share in some of the man-years you worked on persistence problems. I wonder what we would have been able to do with a hardware technology that combined the qualities of what the industry currently thinks of as memory (bit addressability and sub-microsecond read/write latencies) with the qualities it currently thinks of as storage (low cost per bit and persistence)?

    From the earliest electronic devices that we would recognize as computers, memory and storage have been separate. First swapping, then virtual memory/paging, then ever more complex layers of caching, have been added to computer system architectures to lower the cost of memory while preserving its speed. Then along comes a technology that renders those additions superfluous. But its primary virtue – unification of memory and storage – languishes; it has been used so far only where it can be slotted into pre-existing roles as one or the other.
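
    To make that layering concrete, here is a minimal Python sketch – my own illustration, not tied to 3D XPoint – of one such software bridge: a disk file exposed as if it were ordinary memory via mmap. A true storage-class memory would make bridges like this unnecessary.

    ```python
    # Minimal sketch (CPython 3 on any OS with mmap support): one of the many software
    # layers that paper over the memory/storage divide -- a disk file mapped into the
    # address space so that memory writes are also storage writes.
    import mmap

    with open("example.bin", "wb") as f:
        f.write(b"\x00" * 4096)                # create a 4 KiB file of zeros on "storage"

    with open("example.bin", "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as mem:  # map the whole file into memory
            mem[0:5] = b"hello"                # a plain memory write that lands on disk
            print(mem[0:5])                    # b'hello'
    ```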

    This is tantalizingly similar to the conceptual breakthroughs and subsequent frustrations you chronicle here, so it is worth considering whether the potential paths forward for a hardware breakthrough could model or inspire paths forward for software breakthroughs.

    Perhaps 3D XPoint will finally come into its own as the Internet of Things demands tiny, cheap, ultra-low-power devices with persistent storage and simple software and hardware architectures.

  5. I think this is false, even in terms of the kind of list you have: Consider: Rust, Scala, Clojure, Agda, Idris, TypeScript, PyPy, Liquid Haskell. Then consider the major changes to Haskell, Racket, or Java in that time (generics! macros! dependent types!). Then consider the big new research areas in that time: gradual types, solver aided programming, and more.

    It is true that a lot of intellectual energy went from single user systems into distributed systems since 1996, but that’s hardly a failure of our discipline.

    1. You have a good point about Racket, Sam. I’m less swayed by the other things in your list, which are mostly research yet to be usable by non-PhDs (dependent types, solver-whatever) or further examples of recycling old ideas (generics, macros, FP). TypeScript is a great example of the lengths we are willing to go to live with a flawed popular technology because fixing it is seemingly beyond the capabilities of our civilization.

      Racket, though, has been valiantly fighting the trend. It has continually evolved and reformed through solid research. I don’t understand why it isn’t more influential, especially compared with Clojure. Does the JVM explain the difference? Rich Hickey’s charisma?

  6. Consider unums and posit computing. They improve on floats and doubles for representing real numbers, in accuracy, resolution, exception handling, and resource requirements. We have been stuck with floating point for over 50 years. Perhaps posits are an incremental improvement, but sometimes it takes a long time to update a long-used technology.
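
    A minimal Python sketch of the behaviour in question – this is my own illustration of ordinary IEEE 754 rounding, not an implementation of unums or posits – showing how error accumulates with plain doubles:

    ```python
    # Minimal sketch (standard CPython, IEEE 754 doubles): 0.1 is not exactly
    # representable, and naive summation piles extra rounding error on top of that --
    # the kind of behaviour alternative formats such as posits aim to improve on.
    import math

    values = [0.1] * 1_000_000

    naive = sum(values)            # plain left-to-right double-precision summation
    careful = math.fsum(values)    # correctly rounded sum of the same stored doubles

    print(naive)                   # drifts noticeably away from 100000
    print(careful)                 # much closer to the mathematically exact value
    print(abs(naive - careful))    # error introduced purely by naive accumulation
    ```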

  7. The main problem is that we use the wrong abstractions in both the computer language and the user interface; moreover, the computer language and the user interface need to be united into one single interface. About abstraction, I don’t mean the choices that are presented to us; I mean we need completely new abstractions for both.

    1. Hi Paul. I guess we are on the same path: http://bit.ly/2yJBnzX
      The subroutine (call) paradigm causes the main problems of today’s SW. We need to get rid of it – not in algorithms, but when composing systems from components.

  8. “Everything that can be invented has been invented.” – Commissioner, US Patent Office, 1899

    I was in Silicon Valley from ’96 to ’06; here is my anecdotal cultural perspective. The industry was a bit of a wild west, and many employees in different roles did not have tech backgrounds. Some of the best minds were not college educated, but self-taught. There is an old saying that “innovation comes from the factory floor”. In tech, the factory was all around the office. We didn’t have rigid hierarchies, processes, and management. Bureaucracy has since created barriers to entry, combined with outsourcing, H-1B, regular layoffs, metrics, reviews, business efficiencies, and large HR. Once production, support, engineering, etc. were separated from decision makers, progress slowed. The factory floor was gone. Personally, I can’t think of much interesting that has come out of Silicon Valley since I left. (I know, you doubt this. You have many examples to prove me wrong. Research them, and I bet the groundwork for most was created long ago. It’s just a new wrapper… but, but… cloud! lol)

    1. There is a quantity/quality misalignment here for the ‘self-taught’ ‘factory-floor’ innovation argument.
      Because we tend to allocate the ‘elite’ tag to a mere 10% of folks, we then have a 9:1 bias toward the ‘non-elite’ coming up with a focussed idea that has real benefit locally.
      It’s mainly human brain capacity limitations that stop the aspiring ‘elite’ from being that much better than the rest of the humans. Plus our hindsight bias.

      Software is moving from ‘coding’ to ‘engineering’, i.e. it’s no longer the technology that drives progress, but the environmental limitations.

  9. I think you have to decide if what you want is important new ideas (in which case Agda and Idris count) or software that does something new for lots of people using previous ideas (in which case Java generics or Go are big steps forward). Your list of pre-96 systems includes plenty of both, but you rule out new things using either one.

    It strikes me that you don’t like the innovations since 96 as much as you like the earlier ones, which is fair enough, but it doesn’t mean they aren’t there.

    1. What I want is new ideas that improve practice. That is one definition of progress. Dependent types are still at the stage of an intellectual curiosity that hasn’t made it to the shop floor.

  10. According to the antitrust case and the judgment by Judge Thomas Penfield Jackson, Bill Gates was partially responsible for setting the software industry back for decades to come.

  11. Changes in hardware and the growing complexity of software are forcing us to rethink the foundations of programming. Just like the builders of Europe’s great gothic cathedrals we’ve been honing our craft to the limits of material and structure. There is an unfinished gothic cathedral in Beauvais, France, that stands witness to this deeply human struggle with limitations. It was intended to beat all previous records of height and lightness, but it suffered a series of collapses. Ad hoc measures like iron rods and wooden supports keep it from disintegrating, but obviously a lot of things went wrong. From a modern perspective, it’s a miracle that so many gothic structures had been successfully completed without the help of modern material science, computer modelling, finite element analysis, and general math and physics. I hope future generations will be as admiring of the programming skills we’ve been displaying in building complex operating systems, web servers, and the internet infrastructure. And, frankly, they should, because we’ve done all this based on very flimsy theoretical foundations. We have to fix those foundations if we want to move forward.

    Source: https://bartoszmilewski.com/2014/10/28/category-theory-for-programmers-the-preface/

  12. I think this is at least partially right. The more blatant, and even more undeniable, slowing of computing development is in hardware, though it started maybe circa 2005, not 1996. In 2020 you can pretty easily use a 10-year-old computer, and it’ll do just about everything a 2020 computer will (with the exception of maybe some high-end games). If you try the same thing in 2000 with a 1990 computer, you won’t even be able to put a browser on the thing.

    I think you’re likely closest to the reason when you say that “everything has been invented”. Obviously this is not literally true. But fields eventually reach maturity, and innovation slows. I think this is a far more likely explanation than “the internet stole all the smart people”.

    Look at the Aerospace industry. Is there really anything drastically different about a 747, developed in the ’60s and popularized in the ’70s, compared with the 787, an airliner developed 50 years later? It’s lighter, uses less fuel, has some improved avionics… but it’s not a revolution.

    The same thing has happened with computing. We’re not at this breakneck pace of change anymore. The low hanging fruit has been picked. Now the problems are just a lot harder.

    Taking a step even FURTHER back, the rate of change of just about everything has slowed. My grandfather was born in 1902, and in just 50 years we went from not having flight at all to jet airliners flying over the ocean at 500 mph. We went from gas lanterns to fully electric lights in every home. We went from having no radio to having radio and TV. We went from transportation by horse and buggy to your own personal automobile. I could go on, but you get the point.

    In the last 50 years? Well… computers and the internet came into everyone’s life, and… cell phones. TVs are bigger and higher resolution? Everyone can record TV now. That’s… mostly it. We have some better pharmaceutical drugs, but nothing earth-shattering. Cancer is more curable now?

    The point being, things have improved, but it’s NOTHING like the massive change we had in the early 20th century. If the rate of change had continued, we’d have colonies on Mars, stopped the aging process, cured most diseases, and have Rosie from The Jetsons cleaning the house. Sadly, we have none of that.

    1. “Look at the Aerospace industry…” Bad example, as SpaceX has been shaking things up and innovating like crazy. You could call it an industry that thought it was mature, but wasn’t.

    2. So, I was born in 1957. The world looks pretty much the same now as then, and I have previously argued that not much has changed for most people. However, the technology that supports our world is a different story. We went from flying 5 miles high at 450 mph to 150 miles high at 17,500 mph. We went from thumb-sized vacuum tubes to 8nm feature sizes in our computers. We went from discovering the double-helix to inexpensive genome mapping as a diagnostic tool. You can compare the de Havilland Comet to the Falcon-9 and say that nothing’s changed because they are both aluminum tubes on the outside, I guess, but I think you’re fooling yourself.

  13. A problem I found around 1995 was that a far greater proportion of coding effort was directed at the GUI (Windows 95, anyone?) rather than at the original purpose of the software. And yes, as others have said, the technology improved and narrowed the diversity of targets as a consequence.

    When you look at the minute space afforded (and imposed) by 8-bit systems, coding (often in assembler) had to be slick and efficient. Now, not so much.

  14. I published the Pliant programming language in 1999, and it’s completely different because its core is to define how a program is to be coded at various stages, and an extension mechanism used to define the transitions between them, instead of trying to provide a new language with a fixed set of cool features.
    In other words, contrary to all other programming languages, Pliant has no fixed syntax and semantics.
    The key advantage is that you don’t need to define a new version of the language in order to get a new feature well suited to programming in a specific domain: you just provide the new feature as a library. Moreover, you don’t have to break existing programs, because each Pliant module will compile with the suitable language-level semantics.
    See https://www.fullpliant.org/
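
    As a generic illustration of the “new feature as a library” idea (plain Python operator overloading – not Pliant code, and a much weaker mechanism than Pliant’s extensible syntax), a domain feature such as unit-aware arithmetic can be supplied entirely by a library, with no new version of the language:

    ```python
    # Hypothetical illustration (plain Python, not Pliant): a domain-specific feature --
    # unit-aware arithmetic -- provided purely as a library, with no change to the language.
    class Quantity:
        def __init__(self, value, unit):
            self.value, self.unit = value, unit

        def __add__(self, other):
            if self.unit != other.unit:
                raise TypeError(f"cannot add {self.unit} to {other.unit}")
            return Quantity(self.value + other.value, self.unit)

        def __repr__(self):
            return f"{self.value} {self.unit}"

    print(Quantity(3, "km") + Quantity(2, "km"))   # 5 km
    # Quantity(3, "km") + Quantity(2, "s")         # would raise TypeError at run time,
    #                                              # enforced by the library, not the compiler
    ```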

    It took me 15 years to design it. It had absolutely no success because I just published the code and did no communication. Success is the result of both technical merit and marketing. When computing systems get more complex, marketing is favored over technical merit, because more parts of the computing system are used as black boxes that will not be seriously reviewed and understood before deciding. So I personally see the years around 2000 as the point where marketing became the sole king.

    The FullPliant operating system is a complete system (all but the kernel) that fits in roughly 200,000 lines. If you count lines in your Linux (or other) system, you will get 100 times more. What are these extra 99% of lines useful for?
    If you really want to understand it, you will have to randomly dig into some lines of the bulky standard system and find their equivalent in FullPliant. It all starts at the poor language level…

  15. I believe you have hit the nail on the head. I see that in so many aspects of our society (I live in the United States), not just Computer Science. However, as a 53-year-old former semi-professional programmer and former hobbyist programmer, I have seen many fads come and go. I’m in no way smart enough to talk about all these different things quickly, but you are so right.

    I am grateful that my son, who is going to school for Computer Science, is at least going to a college that will give him exposure to many things but will also heavily emphasize the research side that needs to go with it. Younger people tend to hate research and just want something to work the first time or throw it out.

  16. Could this, at least partly, be interpreted as a result of path dependency?

  17. Also Perl in 1996, which powered quite a bit of the early web. My first web applications in that year were based on mod_perl.

  18. In capitalism there is socio-economic stratification, which then solidifies into classes and eventual stagnation. The same dynamic happens on smaller scales within corporate, academic, and other institutions. Capitalism trends toward monopoly and stagnation despite the myth of competition. Government tries to impose regulations and stimulus to randomly shuffle the board, but the system will always settle to the least-energy state. This isn’t just happening in software; it’s happening in all industries.

  19. Oh Gawd! These comments!

    The very LAST people we should ask about the state of programming is the programmers themselves. We’ll just end up with a whiny, overly opinionated, factually dubious religious diatribe about the value of unums over floating-point, or the miracles of Rust, or why Perl is the one true solution. If you can trust software developers to do one thing, it is to get lost in a woods of their own making and fail to see the forest for the trees.

    The axis that matters is what end users are now able to do with the software programmers have created, and how that state of the art has improved since Y2K, or 1984, or pick your favorite date – and plainly it has. As much as 95% of it might be hardware improvements over the years, but the other 95% is the software created to leverage that hardware. (Dubious math intended.) We do far more on far smaller devices than anyone dreamed of back in those days. The futuristic Star Trek communicator or data pad imagined 25-40 years ago is a pale shadow of what we have now. They can do it because you did it, and all of the false whining about how nothing has improved is plainly wrong-headed.

    I’m in the trenches with you, but damn does the untutored philosophizing get tiring sometimes.

  20. And you didn’t even mention the worst: Agile and Scrum. Try to invent anything in two-week delivery cycles.

    1. Those aren’t weaknesses of Agile and Scrum. They’re weaknesses of management who, without a larger vision or expertise in a domain, can only work on small chunks of progress at a time.

  21. There’s new innovation. It just isn’t happening in academia or industry. Somehow they’ve both missed the internet revolution and what it was about. (Getting movies on demand isn’t it.)

    Find the Singularity project on github. There’s some new computer science there that will revolutionize many fields.

    1. “Find the Singularity project on github. There’s some new computer science there that will revolutionize many fields.”

      With a name as popular as “Singularity” you may want to actually provide a link. I found no fewer than 3 when searching “Singularity project on github”, including 2 flavors of “github”.

  22. Or perhaps some of us, like me, have seen what software technology has enabled and recoiled in horror. Yes there’s some good, but overwhelmingly what I see is horrific.

  23. I have developed the next generation of software. But I can’t find anyone to champion it. So I am going it alone and may die before I get it to market.

    P.S. It simulates Reality – you can create universes with it :). It has had 63,300 views on Naked Scientists.

    1. Advancements in software depend on advances in the human mind’s ability to handle more and more complex abstractions. Very few people will have lived a long enough and healthy enough life to achieve this state. I am one, which is why I have 63,400 views of my theory on Naked Scientists.

  24. Technologies can and do hit plateaus. Exponential growth must have a limit. It’s not so much “we’ve invented everything possible” as “we’ve done everything under current understanding”. There could be more to do, but it will require a huge breakthrough.

    Our culture tends to think of technology as always trending rapidly upwards, but this is only due to a unique time in human history. For most of our ancestors, the tools one generation used to do their work were the same basic design as the next.

    This attitude towards technological growth isn’t healthy. Taking a pause and being more thoughtful about our choices would be a good thing.

  25. It is difficult to gauge the magnitude of an “incremental” improvement. For instance, you could drive a car down the road at 60 mph or fly across the Atlantic Ocean in 1935, but I wouldn’t call the Tesla Roadster or the 787 incremental improvements. If you’re old enough to remember coding in vi, Eclipse is a monumental improvement in capability and helpfulness. The conceptual space between C and C++, or C++ and Java, is not “incremental,” even though the number of concepts added or changed is small.

    Perhaps the rate of change has declined simply because the existing tools are extremely productive, and the need for change is less. Wood frame houses are about 1,000 years old, but they still serve us.

  26. I tend to agree with the author. My own view is that our field, which I have spent over four decades in, reached its zenith in 2010, when Microsoft decided to stop refining WebForms and went whole-hog with their version of ASP.NET MVC.

    Despite this, there is a fatal flaw in believing that software must be ever changing, considering that what the current set of development tools has achieved is in reality merely doing the same things we have always done. And this is because, at least in the business environment, few real hardware changes have occurred that required new ways of thinking about software development.

    A web page is still a web page, whether you prefer to service it at the front-end or at the server.

    And C++, Java, C#, and VB.NET are still all very robust languages that can create just about any application one desires. Python, PHP, Ruby, and a few others do the same thing, but in interpretive mode. Yet only Python can actually be considered a general development language.

    Today, developers seem more concerned with the tools they use than with the development of high-quality applications.

    I still use .NET 4.6 for my current development efforts. Why use anything else? Because it’s cool? Because Microsoft is rewriting the .NET Framework into the Core version, and somehow this is supposed to have some type of huge impact on what I am developing, as well as what many others are developing in their own business environments? It won’t, since Core will be used in the same manner as the original framework was.

    Because so many have pursued the creation of new tools without actually considering their benefits, our profession has become a quagmire of nonsense that has only reintroduced high levels of complexity, far steeper learning curves, and even greater numbers of defects, since no one takes the time any longer to really learn the tools they are using. Instead, everyone is waiting for the next “magic bullet”. It is like waiting for Godot… He never comes…

    Until the underlying hardware and protocols actually change, the tools we are using will not really change all that much. And for most general programming tasks, we don’t need them to, as we already have just about everything we need to do our development tasks. Unfortunately, most have thrown out the rather mature technologies for the shiny new toys on the block.

    When the majority in our profession realize that most of this is marketing fluff and nothing else, our profession may return to a saner level of comprehension.

    Until then, the software blob we have created will only get larger… And worse…

  27. What happened was the internet, but not for the reason you’re saying. The internet (and the web browser) brought us a totally different vehicle to run our applications.

    Being in a browser, being ‘delivered’ by a server via the browser, is an entirely different experience, and one which is far different from the previous technology. The programming techniques changed in this endeavor and brought with them a different method, controlled by browsers. Oh… NOW we are onto something!

    Instead of ‘flipping the bits’ a different way (i.e., using a different language and compiler methods), we have been throttled by an engine that ‘makes that decision for us’, with some VERY limited exceptions.

    Of course there are lots of methods to work around inside the box of the browser, and more methods continue to be invented; however, those methods are STILL inside the box. The beauty of the languages you mentioned is the differences between them! Not JUST the language, but the way the compiler itself talked to the machine, how the code was compiled and delivered to the hardware; those differences made all the difference. And those differences brought with them the reason and validity of some of those languages.

    There are still plenty of advancements; however, many are built on the backs of the previous, and so we are still moving and growing. The languages are fractured, as you say, between ‘the machine’ and ‘the browser’, and many true advancements bring more to the language than starting over. (IMO – programming for more than 35 years!)

  28. The velocity is still there, but from a particular point of view, the apparent motion is less. When you look through your window, you see a 2-D representation of a 3-D world. Motion towards and away from the window is much less apparent.

    There is a difference between the discovery phase and maturity, where further advances see diminishing returns. The designs of hammers and jetliners are examples. New designs are likely to yield only incremental improvements.

    To pick one example: CPU architecture. Up through the 1980s we were struggling to get a 32-bit virtual memory CPU onto a chip (and thus to a jump in economy and performance). You could say we have not advanced much from the VAX since. Except, when reading the architecture manual for the ARM chip that goes in a Raspberry Pi, I started giggling. This cheap little board is more advanced than anything in the 1990s. This is velocity.

    Linux is another example. Linux is Unix… except it really is not. I’ve spent a fair amount of time in the kernel sources of late and came to realize this is in part a global research project. The amount of engineering in Linux, no company could afford. The level of complexity is again far in excess of anything before. Velocity.

  29. Interesting read. However, I don’t think it’s a fair assessment of the state of software. My own career over the last 10-15 years has benefited from several (IMO) big game changers in the industry:

    The growth of mobile / smart phones. I was part of several startups that used mobile technology (and the flood of mobile data it enabled) to build mobile-oriented online businesses that couldn’t have existed before that.
    The growth of “big data”. (E.g., Hadoop, Spark, etc.) Those same startups made heavy use of big data processing to crunch orders of magnitude more data than would have been possible in the past. (Or at least impossible for a startup that small.)
    Cloud computing … plus Terraform. Granted, I’m also a bit of an ops guy, but this was a game changer for me. Not just that we could run systems in the cloud – or even run big data systems in the cloud – but that the infrastructure needed for those systems could be programmed, using cloud vendor APIs (and Terraform). An absolute game changer in the Ops space.
    Docker and Kubernetes. I know you already mentioned these, but they were a game changer for me as well in the Ops world. There was no longer any need to run different applications differently – i.e., create different machine images for different types of workloads and run them on different fleets of hardware. Now every machine runs the same image – a Kubernetes node image – and we can deploy and run anything on that. Plus Kubernetes gives us nearly unlimited scalability and, as a bonus, self-healing capabilities. Oh yeah, and that’s all programmable via an API too (see the sketch below)! Again, another huge game changer in the Ops space. It’s fundamentally transformed the way we deploy, run, and support systems and applications.
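
    A minimal sketch of that “programmable via an API” point, using the official Kubernetes Python client (this assumes the kubernetes package is installed and a valid kubeconfig points at a reachable cluster):

    ```python
    # Minimal sketch: querying cluster infrastructure through an API, the same way
    # application code queries a database. Assumes `pip install kubernetes` and a
    # working ~/.kube/config for a reachable cluster.
    from kubernetes import client, config

    config.load_kube_config()        # load credentials from the local kubeconfig
    v1 = client.CoreV1Api()

    # Pods here; nodes, services, and deployments are scriptable in exactly the same way.
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
    ```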

    There’s innovation happening. You just need to know where to look for it! 🙂

  30. I agree with your observation of general software stagnation and offer my own observations – new languages tend to be developed by those people who are most experienced and proficient with the existing languages, so they produce languages that resemble what already exists. Sort of like Picasso’s Blue Period – every new painting, no matter how expert or innovative, was still a blue painting.

    I also believe that programming languages are problematic for the development of applications precisely because they are language-based. I believe that our human minds have difficulty processing the complexity of programming languages because our brains have trouble focusing on multiple items (Ref: George A. Miller, “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information,” Psychological Review, 1956), which limits our ability to accurately manage the inherent complexity of any program longer than 10 lines, not to mention million-line modern programs. Back in 2013, I blogged multiple times about “Physical Reasons Languages Fail” and “Programming Languages Are Not Our Friends” at https://aviancomputing.net/programming-languages-are-not-our-friends-1/ From that perspective, software stagnation is the logical result of the complexity of programming languages.

    And finally, success breeds imitators. Successful (especially financially successful) programs generate multiple imitators, because true innovation is much harder. So anyone who dreams of big rewards will try to make a better Facebook or a better first-person shooter, and most venture capitalists won’t be interested in investing in innovation, because their goal is to get a large return on their investment, not improvements.

    So I hope you continue your project and look forward to hearing more about it.

  31. Let’s face it, there is only so far you can develop Assembler. +1

    but ‘glorified Assemblers’ … !

    We are well hamstrung by the computing model (von Neumann, and variants). We’ll need the mathematical equivalent of moving from Roman numerals to Arabic, to reals, to complex numbers, to make fresh big leaps.

  32. At the same time, I could both agree and disagree that there has been no great innovation over time in software. As others have already commented on your blog, some innovation has been made, and it’s evident. What pushes those innovations is not so evident.

    The foundations of Computer Science lie in a mixture of government and oligopoly interests. Maybe what concerns us today about innovation is actually the fact that all interests have been concentrated within a few oligopolies, even without the participation of government agents. And what we have seen recently is our liberty to innovate, experiment, and create being sterilised.

    As developers, we need to understand that this is happening in order to cooperate with each other and search for a solution to Intellectual Property as an instrument of power concentration. The Open Source movement is not a problem per se, as it could serve as “common property”, but it can be used to concentrate power, removing incentives for software innovation and depressing labour costs. Every market player that already has some sort of intellectual property will advocate in favour of open source for these reasons. Or they lobby to create market regulation.

    I suggest you open the debate on your blog using these premises.

  33. “Today only megacorps like Google/Facebook/Amazon/Microsoft have the money and time horizons to create new technology.”

    More like: everyone else has to be extra careful to stay away from anything remotely like FAANG might possibly ever do, or they’ll swoop in and eat your lunch.

    Software (and any field of technology that uses it, which is all of them) is now cheaper and faster to develop than at any time in history, by a country mile. There are more people, there’s more knowledge, there are faster computers, and there are better supply chains. For the people who get into technology to create the next great thing (and not just to make a quick buck), the only reason they’d now be hesitant to work on projects with greater scope is if they have some greater fear.

  34. When I was learning C#/WPF, I thought, “It’s AMAZING what C has evolved into!”

    There’s an evolution from C to C++ to C# to C#/WPF. Each is built on its predecessors, and gives developers more power.

  35. Software stagnation may be true. But the world never stops. The mixture between software and other things will always happen. We also have a great future.
