Poor quality software is killing people

We have recently seen one of the deadliest failures of bad programming in the case of the Boeing 737 Max.

The Boeing company has a black eye from its recent screw-up, with two of its newest jets crashing.

Simply put, this error was caused by poor quality programming and a failure to follow standard aircraft safety software principles. It was exacerbated by Boeing's greed in trying to sell what is an essential safety feature as a paid optional extra, which low-cost carriers like Lion Air and Ethiopian Airlines didn't buy.

This is a powerful jet, and if you pull the stick back it can go into a stall pretty easily. That is not a defect; it is an inherent risk of a plane with really potent engines, which are much safer in other scenarios, so there is nothing wrong with having powerful engines. But the input to the stall-detection software was a single sensor: even though there are two sensors on the plane, it only read one of them. So when that single sensor malfunctioned, the plane thought it was pitched up when it wasn't. The second mistake, even more stupid, is in the computer code, which said:

  IF sensor > 34 degrees then push nose down 1 degree/sec

(my formula is approximate). The code had no loop counter, so it never stopped after repeating the push-down a few times, even though by then the pilot was clearly trying to override the program. In the Lion Air case, the pilot tried over and over to pull up from impending doom, and lost the battle, killing all aboard. Boeing has just updated the software, adding a few lines of code to this little program. This error will cost them billions when all the lawsuits are settled.
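
To make the fix concrete, here is a minimal sketch in Python of the missing safeguard, using the approximate threshold from my formula; the names and the activation limit are hypothetical, and real flight software would be far more involved:

MAX_ACTIVATIONS = 3  # stand down after a few pushes; the pilot is fighting us

def trim_command(angle_of_attack, activations):
    # Push the nose down 1 degree/sec only while under the activation limit.
    if angle_of_attack > 34.0 and activations < MAX_ACTIVATIONS:
        return -1.0, activations + 1
    return 0.0, activations  # stop trimming; leave control with the pilot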


I was trained in programming at JPL as a youth, and when they make space probes that will travel for 10 years without the possibility of any repair, they always put in an odd number of sensors for each critical measurement and have the sensors vote on the measurement. If it is 2 to 1, they pick the majority, and eventually turn off the bad sensor, because there is no point in listening to it. Having only one sensor, which is unfortunately all too common in automotive safety systems, is a bad practice, because that single cheap sensor can cause a serious accident. This is what worries me about all these fancy car safety systems: they are put in by cheap companies that don't have any redundancy on the critical systems.
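
That voting scheme is easy to sketch in Python (the readings and tolerance here are made up; a real probe would also track which sensor keeps disagreeing and retire it):

import statistics

def voted_reading(readings, tolerance=2.0):
    # For three sensors, the median is the 2-of-3 majority value.
    majority = statistics.median(readings)
    # Sensors far from the majority are outvoted.
    good = [r for r in readings if abs(r - majority) <= tolerance]
    return sum(good) / len(good)

print(voted_reading([12.1, 12.3, 57.0]))  # the bad sensor is outvoted -> about 12.2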

I remember a friend who sued Porsche after his high-end Cayenne Turbo changed lanes abruptly, almost killing him. This is all due to bad programming, and instead of letting car companies keep their code secret, I believe all safety systems for any device (car, bus, train, plane, nuclear power plant) should have their code openly published, so that outside programmers can inspect it and find weaknesses. There are a lot of retired and under-employed programmers who should receive a bounty for finding errors and dangers in code that is critical to the safety of the public.

Anyone with experience in military or space software systems would have raised red flags on the Boeing code, which was clearly done by rookie programmers; that Boeing would try to monetize a critical safety system indicates a serious internal malfunction. They are now spending more money on lobbying to fix it, when what they need is better engineering management, not more lobbyists.

What should a future general purpose programming language look like?

The dominant language of the future should tackle the central problem of writing software, which is that a small number of human errors consumes the vast majority of the total time and effort spent. Many programmers estimate that over 80% of their time is spent not designing and coding, but in that process euphemistically called “debugging”, where a small number of errors consumes a disproportionate amount of time. Thus the main feature should be eliminating this largest block of time, and the other features should support the use of interchangeable parts. With Modula-2, Prof. Wirth reached a high-water mark for interchangeable parts, offering separate compilation of modules and protection against a module changing without its clients realizing that the interface had changed. No subsequent language to my knowledge has this feature, except for Beads, a language in the “next-gen” race along with Elm, Red, and others.

As for other language features, it should try to be a simpler language, breaking free from the mistakes of the past like OOP, and avoiding inventing some new complex, hyper-abstract set of concepts like functors and monads as are in vogue today. Simplicity is a feature everyone can enjoy.

Inertia, the most powerful force in the universe

The great Elbert Hubbard wrote the following around 1912:

“The reason men oppose progress is not that they hate progress, but that they love inertia.”

We are in the decade of a major change in computer programming languages. The prior languages served their purposes, but the massive need for programming has reached such a degree that the cumbersome, labor-intensive, frustrating processes of today have to give way to a simpler, easier, more productive toolchain. The foundation stone of a new toolchain is a new language.

The order of the day is A) simplicity, B) clarity, and C) reusable parts

Simplicity is never easy to achieve; you have to strike a balance between a small group of general, powerful primitives and a larger set of more powerful, but less general, primitives.

Clarity is often the subject of great debates, because people who are well trained in a language find it easy to read. I occasionally get into heated, almost religious debates with proponents of extremely unclear languages like Lisp and Haskell. Their adherents simply can't remember a time when they didn't understand them, and assume that their prodigious memories are common. Try testing your tool on a 70+ year old, and then tell me how it went! A 70 year old learning to program for the first time is about the same as a 12 year old. Both will struggle with the current toolchains.

GM kills the Volt, in the process of self-destructing

GM killing the Volt is a bad idea.

But then, they have resisted every advance coming from their own internal brains, which are considerable.

GM was always far ahead of Ford in technology and reliability, but then I am referring to their peak in 1959. After that time, the energies of the founding genius, Alfred P. Sloan, were starting to dissipate. GM invented the neodymium magnet but never built motors with it.

The solar-powered car they did with MacCready of Caltech was 20 years ahead of any other vehicle, and later, when they built the EV1, they ended up crushing the few cars they made, basically out of spite. People loved those cars; there would be no need for Tesla had GM just allowed a money-losing division to build the future. You have to invest in the beginning to build a new business. No new technology, besides rare things like Genentech's insulin, is immediately profitable.
And let's not talk about Saturn, which had many innovative practices, but because it made less profit than their bad old practices, they went back to strip-mining their customers' goodwill, using the accumulated trust and confidence in their brand to ship crappy cars.

Only the recent editions of the Corvette are any good; the rest of the GM lineup is not attractive to me, and I would pick a Japanese or German or Swedish car over anything but a Corvette.

GM is basically downsizing, admitting defeat in passenger cars (like Ford). The tragedy is that there are cars from Europe, little tiny ones, that could succeed in urban environments in smaller numbers, but GM just won't allow them into the country. Which leaves the overpriced, impossible-to-work-on, BMW-owned Mini Cooper to own that city hipster market.

Cars are about emotion. Whether you are looking for a land yacht, a tiny sports car, or a soccer-mom people mover, you have to identify the emotion and deliver a purity of design. The central problem is not their propulsion units or their manufacturing practices, but their unwillingness to allow a single vision to control a car from start to finish, like the greatest designer of them all, Ferdinand Porsche. They continue to have committees design and agree on things, and the result is a craptastic, watered-down aspect to every car (except the Corvette, which after decades of mechanical incompetence finally bought a Ferrari and studied how it could go around curves).

GM is another classic American tragedy where designers are mere stylists. Great design is behind all the winning products, and brilliant design can outlive technical changes far longer than you would imagine. The Porsche 911 is one of the longest-running models in auto history.

If they had any brains, they would realize that producing smaller quantities of nicer cars at a higher price is achievable now with 3D printing and all the robotic advances. The days of having to make every car out of the same parts are over; you can 3D print final-quality metal and plastic parts, and with their engineering prowess they could make replicas of great designs. Instead of making drool-worthy concept cars that never ship, they could actually make the ones the customers indicated they really want.

Look at the money people are paying for old Jaguars and replicas of Steve McQueen's Bullitt-era Mustang. People want futuristic stuff, or they want tiny, or they want fast, or they want big, not some compromise of all of those characteristics. They need to give car-loving designers a cost constraint, but then give them the freedom to make it great. A few years back they went to Pebble Beach, which is the #1 confab of car nuts, and showed the Escala.

[photo: the Cadillac Escala concept]


People loved it, and said right there that they would buy one in a second. It was huge (17.5 feet), sleek, very luxurious. Then some numbskull goes "it will be too expensive." They are morons. They will promise something like it in a few years, but when it comes out it won't be 17.5 ft, and the temptation to use tons of previously generated parts will just be too great, and they will have lost all their credibility again. Meanwhile pickup trucks are getting close to $80k in some cases, defying all common sense. The interior of that concept car was incredibly elegant: cashmere. It was gorgeous and luxurious. You know they won't put cashmere in the production car.

People would buy this car. They showed it to the target audience, who said hell yes!, and then they ignored the feedback. This is the doom of a spineless organization that can't build a product its own staff loves. Meanwhile, as we speak, the same clean design is now shipping in the Volvo XC40, which they are renting for something like $600/month.

When America uses good design, we are unbeatable. America has had the greatest industrial designers in the history of the world, like Thomas Edison and Henry Dreyfuss. But by designer I don't mean some stylist who adds pinstriping; I mean someone who owns the product's total vision from the inside out.

McLaren and Koenigsegg are doing such a great job making supercars that they sell all they can make. The writing is on the wall: play with passion or leave the game.






Medicare for all is a dumb idea

I normally write only about technology, but Ocasio-Cortez's comments about Medicare for All, which many people have echoed, must be rebutted.

No one is entitled to the free labor of another. So the idea that we should all get unlimited free medical care doesn't pencil out. Even if we had such a thing tomorrow, it wouldn't work, because the supply of trained medical personnel is insufficient to handle the current load, much less an increased one. Insurance companies stall like hell precisely so they can stretch their resources.

A better solution, instead of price controls (or robbing Peter to pay Paul via some transfer-payment scheme), is to recognize that the curriculum of our public schools hasn't been updated in 100 years. Perhaps half of all the time in school should be devoted to bringing every high school graduate up to the level of a second-year nursing student. And instead of continuing with the ridiculous 8-plus-year medical school process, which costs hundreds of thousands of dollars per doctor, we should create super-narrow medical education tracks that finish in 2 years but only qualify you for specific procedures.

Hospital procedures are broken down very precisely now, and by changing how many medical professionals we train, and how, the costs could come down by a factor of 5. Why aren't we teaching this very valuable information to all students in our public schools? Why are we letting junior high and high school students regurgitate obsolete subjects when what we all need to know nowadays is how to take care of the human body, especially our own?

It is time we stopped leaving things to expensive professionals and shared this knowledge with a vastly wider pool of people. That will not only lower costs; when people get more medical education they don't eat as poorly, nor are they as likely to get fat, which is a big contributing factor in America's slightly declining health statistics. Also, a more informed customer base makes for better doctors, as the bad ones would be flushed out more quickly.

JetBrains MPS, and thoughts on the next big language

1) Games are very much going to determine the outcome of the "next gen language" race. Game programming is arguably the majority of all graphical interactive coding today. Not only is this borne out by the statistics from the app stores, which show that games are more than 2x larger than any other category of product:

https://www.statista.com/statistics/270291/popular-categories-in-the-app-store/

but also, when you look at all the dashboard companies popping up, the gamification of business products is well under way, and what was a stodgy, dull statistical program is now singing and dancing. Get into your brand new car, and the dashboard does a song and dance. No matter where you turn, customers are suckers for flashing lights and motion, and if your language can't draw well, it is going to be a hard sell. A language doesn't have to go full tilt into 3D complexity, but if you can't drastically simplify the pain of laying out screens in the quirky and frustrating HTML/CSS abomination, why did you bother making your tool in the first place? This, by the way, is why I consider terminal-based languages like LISP and FORTH to be near useless in this era. There is ample evidence that drawing needs to be integrated into the language.

2) This is why MPS is a non-starter for me; I don't see a drawing system. The majority of the code in every graphical interactive product I have made has been related to drawing. From a word-count perspective, drawing consumes an awful lot of program code. Numbers are easy: they have a value, period. But a piece of text has a font list, a size, optional bold and italic, justification, indenting, stroke color, background color, and on and on. So naturally text is going to dominate the code. If you are building a billing system, generating a nice-looking PDF bill for the customer is a ton of work to draw nicely, with pagination that works well. I spent decades in the word processing/desktop publishing/graphic design product space, and there is just a lot of tricky stuff relating to languages. And don't get me started on the complexities of making your product read well in Asian languages. That was my specialty.

And it isn't just about drawing, but interacting. That is why HTML/CSS/JS is such a nightmare, and why there are so many frameworks: the designers of the web did a rather poor job of anticipating interactivity, and their approach of laying out pages not with function calls but with a textual description calls forth a very complex framework system to compensate for this mistake. Complex domain-specific languages aren't a computable, readable model; if the web had had an internal model that was not textual, it would have been so much easier to build interactive graphics. A next gen language, to succeed, will at least need to let people avoid wrestling with WebKit, which has a nasty habit of scrambling your layout when a tiny error is made.

Apple has done a lot of work in their storyboard system in Xcode to make laying things out easier, although it is still evolving and I wouldn't call it settled. I don't know Android Studio well; I imagine it has tools for this as well. But I would like to see a cross-platform layout system that makes it easy for a single code base to nicely fit whatever device you are on. Making layouts fluid should be part of the language, and anyone who thinks they can just live on top of HTML/CSS is doomed, IMHO.

3) As for correctness by construction, there is ample evidence that completely untyped languages, which can accidentally mutate the type of a variable from number to string, are very dangerous. The mistake of imitating ActionScript 2's overloading of the + operator to mean both addition and string concatenation has caused countless errors in JS (for example, 1 + "2" silently yields the string "12"). If they had used PHP's separate concatenation operator, or some other punctuation, millions of man-hours would have been saved. TypeScript and other transpilers/preprocessors are clearly a great win, because JS by itself is a minefield. A successful next gen language will eliminate a lot of errors. Eve was very much inspired by SQL, which is a declarative style of language, with little instruction given as to how to do things; you just tell it what to do. However, it isn't that easy to recast a chess program into that style; there is a lot of sequential processing to do. So some compromise has to be reached, where you eliminate as many sequence-related errors as you can at compile time by letting the compiler do more work, while still retaining the ability to specify the proper sequence for things to be done in, unambiguously of course, else you have obscured the product, which is counter-productive. I believe that sequence-related errors constitute half of all debugging time, so eliminating mistakes of sequence should yield a 2x improvement.

4) There are many additional syntactical features one can add to a language, like runtime physical units, that allow the product to do more integrity checking and catch subtle errors quickly. The Wirth family of languages emphasized compile-time checks, and Modula-2, for example, had overflow, underflow, range, array-bounds, nil-pointer, and undefined-variable checks, all of which could be disabled. In my Discus product, we leave the checks on until we ship the product, which then shrinks by 30%, because the overhead of checking is substantial. Nowadays, with computers idle 98% of the time, one can argue that leaving the checks on in production products is feasible, and probably a good idea. All that Microsoft C code, with no runtime checks, is a security hazard, and Microsoft has been endlessly patching their Windows monstrosity for decades now with no end in sight. When you pass an array to a Modula-2 function, the array bounds are sent in a hidden parameter, which allows the called function not to go over the limit. This doesn't exist in C, which means that any large C program is forever doomed to be unreliable. I cannot understand why the executives chose such flabby languages to standardize on. Surely they must have known early that the sum total of all these tiny errors represented a maintenance nightmare. Java has plenty of problems of its own, and don't get me started on the flaws of OOP. Thank goodness few of the next gen projects even consider building atop OOP paradigms.
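
As a tiny illustration of toggleable checks, here is a sketch in Python rather than Modula-2: assert statements are active during development and are stripped when the interpreter runs with the -O flag, so the same source can ship with the checks on or off.

def set_heading(degrees):
    # Range check in the spirit of Modula-2's disable-able checks:
    # active during development, compiled out under "python -O".
    assert 0.0 <= degrees < 360.0, "heading out of range: %r" % degrees
    return degrees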

R.I.P. Stan Lee, the most creative writer since Jules Verne

Stan Lee died the other day.

He was one of the most creative Americans who ever lived.

We all know that George Lucas is a very creative fellow, having created a few dozen major characters, like Luke Skywalker, Han Solo, and villains like Jabba the Hutt and Boba Fett.  Stan Lee created at least 5 to 10 times more characters than Lucas. I think he was the most imaginative writer since Jules Verne. 


When I was young, we had a comic rack at the local drug store in town, and comics cost 12 cents each. At the time we were reading them, in the early days of Marvel Comics, the print run of each issue was around 10,000 copies. So if you figure the retailer got half, the total revenue to Marvel per issue was about $600. They were printed on the cheapest newsprint paper, with just a few colors, and were banged out at an incredible pace. I had several first editions, and like everyone else's mother, mine threw them out when I went to college… they are so valuable now because so few people kept them away from their mothers! Hah. Nobody took them seriously, and it took a while for their science-fiction style to catch on.


These original Marvel comics are masterpieces of the genre. They can be purchased in the “Marvel Masterworks” series at Amazon, in both hardcover and paperback. They make a wonderful gift for a young kid who doesn't realize how good the source material behind the many successful films is. They were so far ahead of the other comics of the time, it isn't funny. Batman and Superman, the two big characters of the entrenched competitor DC Comics, were crude in comparison, with stupid villains who were typically bank robbers or thugs. Marvel comics, on the other hand, created a roster of villains really worth fighting, with names like Magneto, who could control metal; Doctor Doom, who wanted to take over the planet; Kang the Conqueror, who comes from the future to take over the universe; or Galactus, who would like to consume all the life force in our solar system for breakfast. Marvel heroes had personal problems; they didn't have much money, or their girlfriends were mad at them; they might even regret their superpowers. From any dimension you look at, the output of Marvel's first decade is fantastic stuff; it is like Ian Fleming's James Bond series, a highly original body of work that will still be enjoyable for a long time to come, and imitated without attribution constantly.


The secret to Stan Lee's incredible output - and it is amazing how much stuff he cranked out - was that he worked in an unprecedented way with the artists who helped invent the characters and drew the comics. He would write a very short summary of what happens in the 16-page story and then let the artists draw whatever they wanted, adding the words in later. In this way the stories progressed nicely, the artists loved the freedom, and they did wonderful work.

Lee did not profit much from his work at Marvel. He was outmaneuvered in the boardroom many times, and when Marvel was finally sold to Disney for billions he got nothing. Frankly, the comic book business is a pathetic business compared to movies, and the technology to do his films justice did not come into existence until recently. But he did pretty well overall, and was beloved by the many millions of people who have come to know his characters like Spider-Man, mostly from the movies.

His appearances in the films always make me cry, because I loved his work so much. He was so clever and funny.


Every time the films have departed from the original material, it has been to the movie's detriment. When Marvel films have failed, it was because they didn't trust the super genius of the creators, and thought people couldn't handle it. One of Marvel's best comics was the Fantastic Four, yet they have bungled Dr. Doom, and in the last film made him into a corporate weasel pursuing money instead of a megalomaniac who is so smart he thinks the world should grovel at his feet.

The best films are Captain America (the first one), Doctor Strange, and Ant-Man. Those show the wild range of Stan Lee's imagination, and how he could move from the patriotic World War 2 era, to Eastern mysticism, to biotechnology.

With his passing, and with George Lucas' retirement, one has to wonder where the next great set of characters and stories will come from. The answer is that Overwatch (a video game from Blizzard) is the crucible of animated/superhero characters for the youngest generation, and mark my words, Overwatch will be bigger than Marvel, because it includes characters from across the globe, and thus will be relatable to everyone.



How much is programming going to change in the next few years?

Programming is going to change dramatically, probably by 2020, which is not that far away. New languages and techniques are in the lab now that will trim the fat out of the development process. Currently programmers think they spend most of their time designing and typing, and estimate that debugging is perhaps 20–25% of the total effort. In reality, design, typing, and compiling are very straightforward, and 85% or more of the total effort is spent debugging and refining the software. The debugging process, which, to be completely honest, is the programmer fixing their own mistakes, is a huge area of waste, and finally techniques and tools are in the pipeline to shortcut that process. The net result will be about a 3:1 overall improvement in productivity (because debugging will cease to be difficult), but more importantly a 10:1 reduction in the frustration one must endure to tolerate being a programmer.

The world of programming is currently populated by people with incredible, far-out-on-the-tail levels of patience; the kind of people who can do a 7000-piece jigsaw puzzle and enjoy it, where the ordinary person would give up. The lowering of this “frustration barrier” will allow millions of ordinary people to enjoy programming. Let's face it, the computer is mankind's most powerful and interesting invention, and everyone should have some fun with it. It is intensely satisfying to see a robot follow your instructions exactly, tirelessly, with no whining like your kids ;->

As for where this improvement is going to originate, it isn't going to come from academia, which for the most part refuses to build practical, useful tools; it will come from small entrepreneurial teams funded by themselves, angel investors, or crowdfunding. I can't tell you how many academics flat out refuse to talk to industry people, as they live inside a bubble whose status is based on publishing in journals that they read only among themselves. The academic world couldn't be more corrupt and dysfunctional than it is in 2018. The cost-effectiveness of conventional colleges is abysmal, and if you look at graphs like:

College Tuition and Fees vs Overall Inflation

you will see the unsustainable trajectory that they are on. Another place improvements won't be coming from is large companies like Apple and Facebook, which profit mightily from things staying exactly as they are. Besides, in large companies a disruptive technology like this would invalidate and seriously depreciate their multi-billion-dollar codebases, so even if it were invented there, it would never see the light of day.

Blockchain and Patent Medicines

In the 1800s, patent medicines were the rage, and people drove around selling magical cures.
Today, blockchain is the magical cure. Everyone, including a friend of mine starting an insurance company, puts blockchain on their website, as it makes everything better…

The de Medici family invented the double-entry bookkeeping system, which kept two simultaneous books, one using debits and the other credits, so that they could keep banking honest and detect embezzling or sloppy bookkeepers. That invention powered the family to a huge advantage in banking. The blockchain concept, distributing the ledger onto not just 2 sets of books but 1000 sets of books, makes it virtually impossible to cheat, so it is an improvement to be sure.

However, if you have an honest person running things, you can do just fine with one set of books, and I personally would derive no benefit whatsoever from having my bookkeeping spread across 1000 computers. The only people who need blockchain are people doing illegal, off-the-books transactions, where they don't want anyone to know how much business they are doing, and with whom. So all this crypto stuff is helping tyrants move their money around the world without resorting to couriers, bearer bonds, gold bullion, diamonds, etc.

But for ordinary people such as myself, blockchain is a minor change to how things are stored on computers, and means absolutely zero. As for the $150 billion that was invested last year in cryptocurrencies, half of that will disappear as the organizers cash out their “pet rock” gains, and the whole episode will be remembered in the distant future as another mass mania, like the Tulip Mania.

How I fought Amazon....to a draw

Jeff Bezos is well on his way to being the richest man in history, and by a wide margin. His final fortune will make Bill Gates, the former long-running richest man in the world, look like just another billionaire. Bezos is extremely skilled with computers, and fighting his Amazon company is a formidable challenge, but one that any company selling similar products in the marketplace has to face. Amazon has terrific customer service, a fabulous logistics team, and a great reputation. But it is their pricing advantage that built the company.

In my case, I was helping out a friend who had a packaged-software company; he sold products on his own website but also had an affiliate store on Amazon. He had a few hundred shrink-wrapped kids' educational computer games. These products sold for $10 to $40 each, and the problem he faced was that although he ran a tight ship with family doing the work, Amazon was undercutting his pricing slightly: when you did a search, Amazon was one cent cheaper than he was, and given that the two vendors were selling exactly the same thing, the consumers, in a very rational behavior pattern, selected the cheapest offer, and Amazon was getting most of the business.

After some quick analysis, I realized that he was competing against a computer algorithm, not a human strategist who varies his attack pattern. I wrote a little Python script that scanned the pricing on Amazon's store and automatically adjusted his prices so that they were 1 cent lower than Amazon's. He runs this every day; each night Amazon adjusts their prices one cent lower again, so who has the lower price alternates, but he is beating them, because Amazon only adjusts once per day at a fixed time. The price battle continues, with Amazon and him dropping a penny at a time, until the price reaches some magical threshold in the Amazon computer, below the wholesale cost of the item, whereby Amazon doesn't want to lose any more money and sets the price back to $39.95. At which point my little program sets his price to $39.94, and the battle starts over again.
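
For the curious, here is a minimal sketch of that repricing logic; fetch_amazon_price and set_my_price are hypothetical stand-ins for the scraping and store-update code, and the dollar figures are just the ones from this story:

UNDERCUT = 0.01      # stay one cent below Amazon
FLOOR = 20.00        # assumed wholesale cost; never price below this
LIST_PRICE = 39.94   # where we restart when Amazon resets to $39.95

def reprice(amazon_price):
    target = round(amazon_price - UNDERCUT, 2)
    # If the penny war has pushed the price below our cost floor,
    # jump back to list price; otherwise stay one cent ahead.
    return target if target >= FLOOR else LIST_PRICE

# Run daily, e.g. from cron:
# set_my_price(reprice(fetch_amazon_price()))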

Most of the time the product is above wholesale pricing, and he is beating Amazon, though he is always on the treadmill of running the script to stay ahead of them. Amazon is a fairly simple system, and the concept of holding 100% markup retail pricing for ordinary products is a vanished thing. Sure, in luxury goods like those sold by LVMH (owned by the richest man in France) they can control the retailers and prevent any discounting, but for the vast majority of products, every retailer is going to have to do battle with Amazon.

Amazon is a tough competitor, and its computer is not afraid to go below cost for a spell in order to get you to quit, but its internal algorithm is more attuned to making a profit than to crushing all the other retailers, so it is possible to fight them to a draw. Amazon destroyed book retailers through a scheme of discounting the bestsellers, because their statistical analysis showed that the heavy readers (the 20%) were the people who kept bookstores alive. So Bezos cut off the bookstores' air supply by eliminating that crucial income component, which he could subsidize with higher prices on lower-selling books. The other thing he did, which the government should never have allowed, was to buy the main book distribution company in the USA, Baker and Taylor; once he had control of the wholesaler, he put the squeeze on the stores by lowering their basic profit margin. The destruction of bookstores was quite an evil for the USA, because book culture is what built up the USA in the first place.

I hope this inspires more script writers to help the independent merchants who have to deal with Amazon's tough algorithmic selling system. Since Amazon sells millions of products, unless you are in an area targeted for destruction (like computer hosting, Bezos' second multi-billion dollar business), you don't have to fight an opponent with any imagination. The Amazon computer does the same thing all the time, and it has been shown with chess computers that repeating yourself too regularly is a weakness.

Wisdom from the past in Software Development

There are many classic books about software development published in the last 50 years. Some of these, like "The Mythical Man-Month", are almost completely applicable today; others are a bit more dated. In the older knowledge, like that of Robert Glass and Capers Jones, there are aspects of eternal truth; for example, the Capers Jones idea was that languages have an intrinsic, measurable power, measured by means of brevity: the shorter the program, the more powerful the language. Elm, Red, and several other new languages are, by this measure, examples of extremely powerful languages. However, Robert Glass points out in his book that 40 to 80% of the total cost of software is in maintenance of existing code, and in this area these very brief languages may not excel. The higher the power level of a language, the more difficult it can be to read, because more is happening under the hood. All evidence shows that the industry selected for the highest number of billable hours, which steered it towards COBOL, and then Java, its successor language. I think that we in the new language community should acknowledge that inertia is the most powerful force in the universe, and that mainstream programmers will never adopt an efficient, compact notation to replace Java, because it would mean the loss of too many billable hours. It would be like asking lawyers to adopt standardized forms for court cases, and end the practice of making custom documents for routine items. In the legal world, the bulk of the costs of a lawsuit are incurred during the discovery phase, and in actuality most cases are settled very near the court date, which means the discovery costs were wasted. If lawyers stopped doing this, their incomes would plummet.

The next generation of powerful but simple languages will forever be a relatively small proportion of the usage space. That doesn't mean these tools aren't worth building; definitely they are, but let's not be overly optimistic that programmers industry-wide are willing to see their incomes drop. Just as in law, where there is a relatively fixed number of disputes happening per year, there is also a relatively fixed amount of software to be built. It isn't infinite. Once one company builds a great self-driving car software suite, there won't be need for 100 others. Once you have built a bug-free stack for 5G cellular radio, it will last 10 to 20 years. There is a lot of inertia out there, and object-oriented programming, one of the greatest boondoggles ever foisted upon the business community, has only been admitted to be a boondoggle by a minority of programmers such as myself. Now that there is a mainstream alternative to OOP, in the form of functional programming, people can finally admit OOP is crap. But until they had a replacement, they didn't want to get in trouble for continuing to use such a failing technology.

By failing technology I mean chronic project overruns and horribly buggy software. We can, and will, do better.

Boy am I getting fed up with academia

Boy, am I getting fed up with academics who claim they are working in the area of advanced computer languages yet don't have the time of day to even peek at any one of a dozen great new programming language projects underway. They are all busy proving that 2+2 is 4, or some such trivial task, and their vaunted proving systems, which they have been working on for 20 years, can't even prove a tic-tac-toe game correct, because a program that draws on the screen is beyond their state of the art. The fact that you can't prove graphical interactive software correct isn't stopping billions of people from using their cellphones every day. At some point practicality has to be considered. There is nothing settled or wonderful about the current state of software development techniques, and when I went to college the universities were doing research that moved the industry forward. The universities are the natural place of origin for new languages and techniques; instead, they are becoming so conservative and hostile to new ideas that they are part of the problem. Sorry, but with the billions being poured into higher education, and computer science departments in hundreds of universities around the world, we should see more progress! The educational system in computer science is burning a lot of money with negligible contributions being made in many areas.


JavaScript vs. ActionScript 3

A lot of people disparage ActionScript 3 as a dying or dead language, and heap scorn upon it. However, this is really just a smear campaign started by the sometimes mean-spirited Steve Jobs. JavaScript is almost a perfect copy of ActionScript 2, and one by one the JS team has been adding in the missing features that ActionScript 3 added. One of the most important features added in ES2015 is the module system: now you can import from different modules and keep your namespaces separate. Interchangeable parts are one of the two super important features of the next generation of software technology, and modules are essential.

However, I just found an unbelievable JS bug that is present in both Safari and Chrome. If you are writing some JS, look out for this one:

Let's say you have a standard library module that defines some functions and constants, but FOOBAR isn't one of them.

import * as std from './stdlib.js';
let z1 = FOOBAR;      // compiler finds this error: the local name FOOBAR is not defined
let z2 = std.FOOBAR;  // valid module name, but not a known symbol inside std; the compiler lets it slide
let z3 = stx.FOOBAR;  // compiler catches this error: the module name is misspelled

If you make a typographical error and import a symbol that doesn't exist, perhaps because of a small spelling error, the compiler doesn't catch the undefined name; it treats it like a property access on an object, which it is not. They forgot that a module name prefix is not an object, and that all imported symbols must be resolvable.

This is a massive error in both Chrome and Safari; I can't believe they didn't catch it. It is most unfortunate that the folks at Google and Apple didn't spend more time with Ada or Modula-2, which not only had modules but separately compilable modules, something that JS doesn't have. Frankly, you cannot have interchangeable parts without separate compilation. I wonder how progressive web apps are going to work without addressing some of these issues.

This makes modules incredibly dangerous compared to one big glob of code. The whole point of modules is to split namespaces so you don’t accidentally use a variable that isn’t part of your region of code, so this is tantamount to sabotage of the module system.

This is yet another reason why JavaScript is such a piece of crap. The fact that they haven't figured this out in the 3 years since 2015 goes to show how poorly thought out and implemented JS is, and I and many others can't wait for the day when we bury that language!

Writing in ActionScript 3 you get a very solid compiler and toolchain, and AS3 and JS are so close now that you can convert one to the other with mostly just a series of find/replace operations in a text editor. Instead of using TypeScript, I suggest you try ActionScript 3, which retains type information at runtime: use AS3 for your mobile and desktop targets, and then convert to JS with a simple script for the web. This way you get good protection; TypeScript can't carry the checks into runtime.

Scale of sophistication of languages

There are many aspects to a programming language. One way to evaluate sophistication is to make a radar graph. Here we present the following scales; in each category, 1 is primitive and 5 is deluxe.

Type protection

  1. easy to make a mistake; turn a number into a string accidentally

  2. silent incorrect conversion

  3. some type checks

  4. strong implicit types

  5. Modula-2 type airtight range checks, etc.

Arithmetic safety

  1. + operator overloaded, can't tell what operator is actually being used

  2. overflows undetected

  3. selective control of overflow/underflow detection (Modula-2)

  4. improved arithmetic like DEC64, where 0.1 + 0.2 does equal 0.3

  5. infinite precision (Mathematica)

Primitive data types supported

  1. numbers, strings, Boolean

  2. includes one dimensional arrays

  3. includes multi-dimensional arrays

  4. includes structured types, dates, records, sounds, etc.

  5. includes the SuperTree

Graphical model sophistication

  1. none, all drawing is done via library routines

  2. console drawing built into language

  3. 2D primitives, weak structuring

  4. 2D primitives, strong structuring

  5. 3D drawing (unity)

Database sophistication

  1. external to language

  2. indexed sequential or hierarchical model

  3. relational database built in or merged into language (PHP)

  4. entity-relation database built in

  5. graph database built-in (Neo4J)

Automatic dependency calculation (also called lazy evaluation)

  1. none (C, JavaScript)

  2. evaluation when needed of quantities

  3. automatic derivation of proper order to evaluate (Excel)

  4. automatic derived virtual quantities

  5. automatic calculation of code to execute, with backtracking (PROLOG)

Automatic dependency drawing (also called auto refresh)

  1. none (C, JavaScript)

  2. simple redraw support (Win32)

  3. automatic redraw using framework (React)

  4. automatic redraw without having to use framework

  5. automatic redraw of code that references changed quantities

The area enclosed by the shape is a good approximation of the relative power of the language. As you can see from the diagram above, JavaScript is a fairly weak language, and Swift is more powerful. The Beads design is much more advanced than Swift. Fans of Dart, Go, Red, etc., are invited to send me their numbers for each of the categories and I will add them in.


Simplification is easy

Okay ready to write my program now...


Today we in the programming world are facing an explosion of code libraries. People are writing code all over the place, and to save duplication of effort, we are trying as a profession to share previously done work. Two problems immediately crop up. Let's for argument's sake imagine that we are using someone's bar chart module, and that the module was at version 1 when we adopted it. If we copy the bar chart code over and incorporate it into our project as a copy, it will continue to work as it did; but having disconnected from the update stream, if there are errors in barchart version 1, we won't get the fixes. If instead we link to the constantly evolving barchart code, and the author changes how you use the module, then our code can, and likely eventually will, break. So we are stuck with a choice between copying, creating duplicates of immediately obsolete code, or being at the mercy of breaking changes in the evolving components we reference.

This, in a nutshell, is what is driving the explosion of slightly varied code bases, which today are multiplying at a geometric rate. There are billions of lines of code written, yet every day much of it is ignored by developers, because who knows how good the code is, or what trajectory the code base is on.

The solution is that the evolution of modules must proceed in a highly upward-compatible way. To keep version 1 working while new features are added, one must change how a component is accessed. In modern languages, the only way to reference external code is to call a function with parameters. Once the parameters change in name, position, or quantity, most languages will break on that inevitable change. So the enemy of stability is function parameters, which in most languages have a rigid order. The other improvement one can make is to send a version number to all functions, declaring the intention to use the functionality as of version 1. Say version 2 of the barchart plays music: you can send version 1 as an input parameter to the module, and the designer can then guarantee that version 1 behaves the same as before.
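
Here is a minimal sketch of that idea in Python, with a hypothetical barchart module: keyword parameters remove the rigid ordering, and the version argument lets the author freeze version 1 behavior:

def draw_barchart(*, data, title="", play_music=False, api_version=1):
    # Keyword-only parameters: callers are immune to parameter reordering.
    if api_version >= 2 and play_music:
        pass  # version-2-only feature; never triggered for version-1 callers
    # ... drawing code elided ...

# A version-1 caller keeps working unchanged after the module evolves:
draw_barchart(data=[3, 1, 4], title="Sales", api_version=1)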

How you communicate large and complex amounts of data to modules is the key weakness in modern languages. In Pascal, C, Fortran, COBOL, and other languages of the 60's and 70's, you had records. Those were removed in Java, JavaScript, and many other languages of the 80's, 90's, and 2000's. If you don't have records, but amorphous objects, you have a nightmare of missing-information error states. So the tradeoff in module access is that either you force a rigid record structure, or the module code must increase in "toughness": the ability not to break if inputs are missing.

In the old days, every function had an error return. This is now considered cumbersome, and in JavaScript, for example, one hardly ever sees error results being passed back or checked. This creates really flabby code, where errors propagate silently and, like a metastasizing cancer, move to a new location. It is gratifying to see error-code returns in Go; however, that is not the best way to do error returns. Excel is the master at handling errors, and programming languages would be well advised to study Excel more closely.

The graveyard of once-promising computer languages

 

I agree that one should fix a language's flaws, even if it means making breaking changes; however, that doesn't change the fact that history shows you only seem to get one chance to get a language right. Oberon, for example, had a fatal flaw in version 1, and because of that flaw the early adopters, who are so critical to the success of a language, abandoned it and moved elsewhere. The truth is that adoption by “the herd” is driven by a small group of tastemakers. This is true in so many different fields. Take music, for example: there are critics and “hip” people who spread the word that so-and-so is “hot”. They have a huge effect on the endpoint of the popularity curve, because the tastemakers function as filters for the society at large, who are so busy in their toils that they don't experiment much, and the safety of herds relies on mass adoption and clustering. Want to be a hotshot artist in the USA? Get on the cover of Artforum magazine.

Humans are herd animals like horses; they run in packs, and stragglers are picked off by predators… so all evidence shows that your version 1 either makes you or breaks you. After all, the number of people using a language is a big factor in picking it to begin with. I have a saying: inertia is the most powerful force in the universe. Looking at the pile of dead languages, and there are over 1,000 that probably didn't even make it to 10k users, the early adopters are a picky bunch, and don't suffer glaring omissions kindly.

But being the eternal optimist that I am (and what programmer isn't an optimist?), I have made provision for the graceful evolution of the Beads language by incorporating a feature whereby code can be automatically upgraded to later versions, something that should be present in all languages but is inexplicably rare.

To make the language comparison process less religious and more objective, I am running “drag races”, where languages can be directly compared by implementing a precisely specified task. Then, to really add to the realism of the challenge, we take the perfectly operating program, modify the spec in some small way, and hand it to a new person, not the author, to update the code base to conform to the changed spec. For example, in chess you could augment the rules by allowing castling through other pieces, or giving the pawn the option to move 3 squares on the first move, not just 1 or 2. That is the acid test: can a different person improve someone else's work? This is measurable, and actually matters the most, because code re-use is terribly low. In the real world, laws and conditions are constantly changing, and a program has to evolve.

And to the point of allowing a language to evolve gracefully, I have added a feature to the syntax so that old code can be automatically updated as the language evolves. My experience shows that in business, programs last decades, and by that time the language has evolved so much that the existing code base gets cut off from the flow of the language, because upgrading the compiler would disrupt reliability. People freeze the compiler more often than they freeze the code, because a compiler upgrade can have unknown, but possibly drastic, effects. As soon as a big program stabilizes, there is a natural tendency to break it off from the “stream of constant breakage”. I know that in my day job at a telecom firm, we have downgraded almost every PC in the company to Windows 7, because it works so much more smoothly. Look at the outrageous tactics Microsoft is employing to force their user base onto Windows 10. I personally think Steve Ballmer should be in chains for his role in productivity abatement.

Programming challenge - a chess game

I present here an excellent challenge for those wishing to see how good their favorite language is. This is a project that should come in at around 2500 words, so it is not a large program by any stretch of the imagination. Although program size is often quoted in lines, some languages are more vertical than others, and that measurement highly distorts the true size of a program. Writers of English, such as journalists, are very familiar with word counts, so we are henceforth adopting the word as the basic measurement of a program's size. 2500 words translates to under 1000 lines.

The commonly used Hello World type of program tells you almost nothing about a programming language, and is a waste of time. The snake challenge is okay, but it is still a little too simple to show how good a language is; even a bad language can deliver a snake program without seeming bad at all. But this task is of sufficient complexity that the true merits of a language shine through.

You can get the chess challenge specification and sample implementation here.

Here are some screenshots from the reference implementation, which is included in the specification package. I will post the various entrants once they are submitted, so people can compare side-by-side the various language implementations.

[screenshots: reference implementation in landscape and portrait orientations]

The problem with Donald Knuth's Algorithms Books

First, let me say, before I get critical about Knuth's work, that he did an incredible amount of systematic, meticulous, highly accurate, and important work. The problems I am going to talk about relate to the format he used to encode his work. The bottom line is that Knuth was like a Shakespeare who decided to write in Pig Latin.

Knuth came up with the idea of literate programming as a sequel to structured programming. Unfortunately his approach was completely wrong. I am not the only one who has pointed this out (http://akkartik.name/post/literate-programming). I am a nobody compared to Knuth, so stating that the emperor has no clothes may seem offensive. I don't mean any personal disrespect to Knuth, but his TeX product has left me cold from day 1. Having your code comments presented in nicer typography is all well and good, but honestly, do you consider Knuth's TeX system any good? It is a disaster in my book, a bizarre, complex curiosity that he squandered a big chunk of his career on. Knuth was so ridiculous that he refused to use TrueType for encoding fonts; he invented his own font format, Metafont. He was so famous and influential that nobody pushed back on him, but who else on earth would refuse to use one of the commercial type formats? It's like someone building a motorcycle and deciding that neither English units nor metric is good enough, so you need an incompatible set of nuts and bolts. The history of computers is full of non-agreement on basic standards: ASCII vs. EBCDIC, Mac vs. PC, iOS vs. Android, etc. But why, when 99.9% of the world is in one of two camps, would you invent your own third form, which has no substantial advantages?

Knuth's choice of MIX was never a good idea at any time. At every moment from 1965 onward, a single company has owned the lion's share of the CPU market: first IBM, then DEC, and for at least 36 years since the IBM PC, the Intel instruction set has had 99% of the desktop market. Nowadays the ARM instruction set has 99% of the mobile market, but server and desktop are 99% Intel architecture. If Knuth had, for example, picked the Intel instruction set, not only would most of the code still run fine, because the Intel architecture has been phenomenally backwards compatible, but there are many commercial cross-assembler tools that efficiently convert the Intel instruction set to other chips like the Motorola 68000, MIPS, ARM, etc. By using MIX he doomed his unbelievably meticulous work to be basically unused today. What company can make a living selling and maintaining a MIX cross-assembler, when only one human in history ever used MIX? I argue that Knuth was being perversely manufacturer-neutral. I can't tell you how many programmers I have seen with his books on their shelf who never actually used them.

What creates the best products?

 

In a recent interview, one of the founding geniuses of Apple pointed out that when people bring forward a product that they themselves want, you get the best results. This is the problem with giant corporations and innovation: the individual is blended out of the picture. In fact, when you look at the track records of large companies, they usually acquire new ideas from outside.

“...when things come from yourself, knowing what you would like very much and being in control of it, that's when you get the best products.” 

Wozniak / interview from September 2017

Programming challenge - a Snake game

This is one of the benchmark programs I am using to compare one language to another. Try building this game in your language of choice and submit your program; we will analyze your code and show how it stacks up against other implementations. There is a live programming tutorial on YouTube that builds a more rudimentary version of the snake program in under 5 minutes. However, that 5-minute program has numerous flaws and covers only about half of this specification. It does show how flexible and fast JavaScript is.

The game runs at 6 frames per second, and the snake moves one cell per frame.  If the snake moves into the apple the length is increased by one, a crunch sound is emitted, and the apple is moved to a new cell. If the snake crosses over itself, then a beep is emitted and the part of the snake from that cell onward is erased. The snake wraps around the board in all four directions.

The playing board, drawn solid black, is subdivided into square cells of 42 points. At the start, the snake is set to length 1 and positioned at cell (4,4). The apple, which it is the snake's goal to eat, occupies one cell, drawn as a circle in HTML color crimson, and is placed at a random location on the board.

 

At the start of the game the snake is paused. As soon as an arrow key is pressed, the snake begins to move in that direction, leaving a trail behind. Initially the trail is limited to 5 cells total. If the snake moves into the apple, the snake's length grows by 1. Once the apple is eaten, it immediately respawns in a random new cell. The apple may occasionally end up placed inside the body of the snake; however, the snake is only considered to have eaten the apple if the head moves onto it. The snake cells are drawn as 40 pt squares with 2 pt on the right and bottom to create a separation. The snake cells are drawn in alternating colors, lime green and lawn green. The head of the snake is drawn as a rounded rectangle with a corner radius of 8 pt and a border of 2 pt dark green. The remaining cells of the snake are drawn as rounded rectangles with a 2 pt corner radius.

The only input to the game is the keyboard. The arrow keys change the direction of the snake; however, to prevent frustration, the player is not allowed to move the snake head back into itself. So if the snake is traveling east, an attempted movement west is ignored. A command to move in the direction already in motion is also ignored. To permit fast maneuvers, the direction inputs are queued, so that one can do WEST - SOUTH - EAST to perform a 180-degree turn in 3 consecutive frames. Pressing the space bar pauses or resumes the game.

As the snake grows and shrinks, the current size and high score are reported at the top of the screen, as a topmost layer at 30% opacity, centered in 28 pt black text, for example: "8 high:22".

The default starting window size is 700 x 700 interior pixels. If the user resizes the window, the game is reset back to the starting state.

Note: since the window size will not usually be an even multiple of 42 points, the cells are slightly stretched so no dead space is left over. So the program must first figure out how many whole 42-point cells fit in the X and Y directions, then divide the width and height by the number of cells in each direction.
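
A quick sketch of that computation in Python (the function and parameter names are just illustrative):

CELL = 42  # nominal cell size in points

def cell_sizes(width, height):
    cols = width // CELL   # whole 42 pt cells that fit across
    rows = height // CELL  # whole 42 pt cells that fit down
    # Stretch the cells so the board exactly fills the window.
    return width / cols, height / rows

# Example: the default 700 x 700 window yields a 16 x 16 board
# with cells of 43.75 pt each.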

To help you with your game, here is the specification in PDF format and the sound files as a zip file.