There are a thousand computer languages to choose from. How is a scientifically minded person supposed to sort out which ones are worth studying? Given that all of the top 10 languages in use are over 20 years old, surely something superior must exist by now. You wouldn’t dream of using a 20-year-old cellphone, so why are people using computer languages that ancient? Is there no advancement in software comparable to that in hardware? Are these old languages really that beautifully designed, or is inertia the driving force behind their continued popularity? The frequency with which software projects overrun their budgets and deliver on unpredictable schedules points to some serious flaws in the popular languages.
Without measurement there is no science, and I propose we start by ranking languages by one of their most important qualities. Just as a car can be measured objectively by weight, passenger volume, and miles per gallon, there are objective measurements that can be taken of computer languages to help sort and rank them. There are objective qualities and subjective qualities. Let’s focus on the objective ones, because computer language discussions turn nasty fast when driven by personal taste.
Since computer languages vary so much, and since the tasks they perform vary just as widely, ranking them on a single chart is hard. Let’s borrow the strategy the US EPA uses to measure miles per gallon: put each car through a fixed route and see how it does. Because the route stays the same, you can track progress from year to year. That’s a sensible way to do it. Two measurements matter most for programming, and although they take some effort to obtain, they are worth measuring. The two measurements in question are:
1) ability to use interchangeable parts, and
2) the mean time to repair (MTTR) by someone other than the author (BSOTTA). That’s a military grade acronym there!
In the area of interchangeable parts, our industry reached a peak with Microsoft’s VB6. It generated a thriving ecosystem of little modules that people could buy and plug into their programs for various purposes. It had millions of users, but Microsoft abandoned it when they moved to .NET. Borland’s Delphi has a small group that shares and sells parts, but it is a mostly dormant system. The current languages and toolchains have not created an ecosystem of interchangeable parts. Including a component from GitHub, which hosts tens of millions of freely available projects, entails a nightmare of dependency and library conflicts. I would say that we are currently not in an era of interchangeable parts; as a result we have massive duplication of effort, and no marketplace where people can easily share and reuse chunks of proven code. I will address interchangeable parts in another posting.
Let’s get back to MTTR BSOTTA. To measure it, we take a series of programs of ascending complexity and run the following tests: 1) we intentionally introduce an error into the program and measure how long it takes to fix; 2) we request a change in the business logic or graphical presentation and measure how long that takes. In both cases the task must be accomplished without breaking the program’s other functions. For an accurate measurement, you will need to observe different people doing the work, and you can also roughly gauge each test programmer’s skill level. Since programmers vary in skill, this testing regime will not be perfectly accurate; however, it will immediately show why some languages of the past have been discarded. APL, LISP, and FORTH are all amazingly brief languages, but companies learned over time to avoid all three because they have some of the highest MTTR BSOTTA scores (which is bad). Programs are typically written in a few months but often run for decades, and the difficulty of maintaining products written in “tricky” languages compounds over that lifetime. You can produce good work in any language, and no language can prevent bad design. But the language does have a great influence on how people build projects, and on how much self-documentation comes out of the code.
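To make the protocol concrete, here is a minimal sketch of how the trial results might be aggregated once the timed repairs are done. The language names, repair times, and data layout are all hypothetical illustrations for the sake of the example, not real measurements:

```python
# Sketch: aggregating MTTR BSOTTA trial data.
# Each entry records how many minutes a non-author tester took to
# repair an injected bug without breaking the program's other functions.
# All names and numbers below are invented for illustration.
from statistics import mean

repair_minutes = {
    "Language A": [35, 50, 40, 55],
    "Language B": [120, 200, 150, 310],
}

def mttr(times):
    """Mean time to repair across testers, in minutes (lower is better)."""
    return mean(times)

# Rank languages from best (lowest MTTR) to worst.
for lang in sorted(repair_minutes, key=lambda k: mttr(repair_minutes[k])):
    times = repair_minutes[lang]
    print(f"{lang}: MTTR = {mttr(times):.1f} min over {len(times)} testers")
```

A fuller version would also weight results by tester skill level and track the second test (time to implement a requested change) as a separate column, but the core of the measurement is just this averaging over multiple non-author repairers.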
As the test program specifications grow in complexity, you will see how non-linear the repair time becomes. In our commercial reality, projects reach a state similar to what I call “Artichoke Mode”: the condition in which a programmer gets so tired that they start undoing correct, valuable work by mistake. On the largest commercial projects the team isn’t tired, but the codebase is so large and complex that the engineers break as many things as they fix, and the product reaches a steady state of hundreds of thousands of bugs where almost no progress is made even though expenditures are huge. Microsoft Windows and Apple’s OSX are in that condition. Their employment numbers are at an all-time high, yet hardly any new products emerge. When was the last time Microsoft created a new program of value? And why does Apple take six releases to fix obvious bugs in OSX? In OSX 10.13 and 10.14, printing stayed broken for many months. It isn’t a pretty picture at the big companies.
I have designed the Beads language for the lowest MTTR BSOTTA in history: it lowers nesting depth, avoids high abstractions that impede understanding, and avoids reliance on large numbers of APIs. Normally one achieves power in software through fancy abstractions and deep layering. Beads combats complexity by employing symmetry and by making the underlying model reversible.