Wisdom from the past in Software Development

There are many classic books about software development published over the last 50 years. Some of these, like "The Mythical Man-Month", are almost completely applicable today; others are a bit more dated. In the older knowledge, from authors like Robert Glass and Capers Jones, there are aspects of eternal truth. For example, Capers Jones's idea was that languages have an intrinsic, measurable power, measured by means of brevity: the shorter the program, the more powerful the language. By this measure, Elm, Red, and several other new languages are extremely powerful. However, Robert Glass points out in his book that 40 to 80% of the total cost of software is in maintenance of existing code, and in this area these very brief languages may not excel. The higher the power level of a language, the more difficult it can be to read, because more is happening under the hood.

All evidence shows that the industry selected for the highest number of billable hours, which steered it toward COBOL, and then to Java, its successor. I think that we in the new-language community should acknowledge that inertia is the most powerful force in the universe, and that mainstream programmers will never adopt an efficient, compact notation to replace Java, because it would mean the loss of too many billable hours. It would be like asking lawyers to adopt standardized forms for court cases and end the practice of drafting custom documents for routine items. In the legal world, the bulk of the cost of a lawsuit is incurred during the discovery phase, yet most cases are settled very near the court date, which means the discovery costs were wasted. If lawyers stopped doing this, their incomes would plummet.

The next generation of powerful but simple languages will forever occupy a relatively small proportion of the usage space. That doesn't mean these tools aren't worth building; they definitely are, but let's not be overly optimistic that programmers industry-wide are willing to see their incomes drop. Just as in law there is a relatively fixed number of disputes happening per year, there is also a relatively fixed amount of software to be built. It isn't infinite. Once one company builds a great self-driving car software suite, there won't be a need for 100 others. Once you have built a bug-free stack for 5G cellular radio, it will last 10 to 20 years. There is a lot of inertia out there, and inferior technologies persist: object-oriented programming is one of the greatest boondoggles ever foisted upon the business community, yet only a minority of programmers, myself included, have admitted as much. Now that there is a mainstream alternative to OOP, in the form of functional programming, people can finally admit OOP is crap. But until they had a replacement, they didn't want to get in trouble for calling out such a failing technology.

By failing technology I mean chronic project overruns and horribly buggy software. We can, and will, do better.

Boy am I getting fed up with academia

Boy, am I getting fed up with academics who claim they are working in the area of advanced computer languages, yet don't have the time of day to even peek at any one of a dozen great new programming language projects underway. They are all busy proving that 2+2 is 4, or some such trivial task, and their vaunted proving systems, which they have been working on for 20 years, can't even prove a tic-tac-toe game correct, because a program that draws on the screen is beyond their state of the art. The fact that you can't prove graphical interactive software correct isn't stopping billions of people from using their cellphones every day. At some point practicality has to be considered. There is nothing settled or wonderful about the current state of software development techniques, and when I went to college the universities were doing research that moved the industry forward. The universities are the natural birthplace of new languages and techniques; instead they are becoming so conservative and hostile to new ideas that they are part of the problem. Sorry, but with the billions being poured into higher education, and computer science departments in hundreds of universities around the world, we should see more progress! The educational system in computer science is burning a lot of money with negligible contributions being made in many areas.





JavaScript vs. ActionScript 3

A lot of people disparage ActionScript 3 as a dying or dead language, and heap scorn upon it. However, this is really just a smear campaign started by the sometimes mean-spirited Steve Jobs. JavaScript is almost a perfect copy of ActionScript 2, and one by one the JS team has been adding in the missing features that ActionScript 3 introduced. One of the most important features added in ES2015 is the module system: now you can import from different modules and keep your namespaces separate. Interchangeable parts is one of the two super important features of the next generation of software technology, and modules are essential to it.

However, I just found an unbelievable JS bug that exists in both Safari and Chrome. If you are writing any JS, look out for this one:

Let's say you have a standard library module that defines some functions and constants, but FOOBAR isn't one of them.

import * as std from './stdlib.js';
let z1 = FOOBAR;     // compiler finds this error: the local name FOOBAR is not defined
let z2 = std.FOOBAR; // std is a valid module name, but FOOBAR is not a known symbol inside it; the compiler lets it slide
let z3 = stx.FOOBAR; // compiler catches this error: the module name was misspelled

If you make a typographical error and import a symbol that doesn't exist, perhaps because of a small spelling mistake, the compiler doesn't catch the undefined name; it treats the reference like an ordinary property lookup that merely returns nothing, which it should not be. They forgot that a module name prefix is not an ordinary object, and that all imported symbols must be found.

This is a massive error in both Chrome and Safari; I can't believe they didn't catch it. It is most unfortunate that the folks at Google and Apple didn't spend more time with Ada or Modula-2, which had not only modules but separately compilable modules, something JS doesn't have. Frankly, you cannot have interchangeable parts without separate compilation. I wonder how progressive web apps are going to work without addressing some of these issues.

This makes modules incredibly dangerous compared to one big glob of code. The whole point of modules is to split namespaces so you don’t accidentally use a variable that isn’t part of your region of code, so this is tantamount to sabotage of the module system.
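Until engines tighten this up, one defensive pattern is to verify at startup that the names you expect actually exist on the namespace object, so a misspelled import fails loudly instead of silently evaluating to undefined. This is a sketch: a plain object stands in for a module namespace (the `in` check behaves the same on both), and `requireExports` and the contents of `std` are invented names for illustration.

```javascript
// A plain object standing in for a module namespace; the names here are
// invented for illustration.
const std = {
  PI: 3.14159,
  clamp: (x, lo, hi) => Math.min(Math.max(x, lo), hi),
};

// Fail loudly at startup if an expected export is missing, instead of
// letting a misspelled name silently evaluate to undefined later.
function requireExports(ns, names) {
  for (const name of names) {
    if (!(name in ns)) {
      throw new Error('missing export: ' + name);
    }
  }
}

requireExports(std, ['PI', 'clamp']); // passes silently
// requireExports(std, ['FOOBAR']);   // would throw: missing export: FOOBAR
```

It is a poor substitute for a compiler that enforces the rule, but it restores the namespace-splitting guarantee the module system was supposed to give you.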

This is yet another reason why JavaScript is such a piece of crap. The fact that they haven't figured this out in the three years since 2015 goes to show how poorly thought out and implemented JS is, and I and many others can't wait for the day when we bury that language!

Writing in ActionScript 3 you get a very solid compiler and toolchain, and AS3 and JS are now so close that you can convert one to the other with mostly just a series of find/replace operations in a text editor. Instead of using TypeScript, I suggest you try ActionScript 3, which retains type information at runtime: use AS3 for your mobile and desktop targets, then convert to JS with a simple script for the web. This way you get good protection; TypeScript can't carry its checks into runtime.

Scale of sophistication of languages

There are many aspects to a programming language. One way to evaluate a language's sophistication is to make a radar graph. Here we present the following scales; in each category, 1 is primitive and 5 is deluxe.

Type protection

  1. easy to make a mistake; turn a number into a string accidentally

  2. silent incorrect conversion

  3. some type checks

  4. strong implicit types

  5. Modula-2 type airtight range checks, etc.

Arithmetic safety

  1. + operator overloaded, can't tell what operator is actually being used

  2. overflows undetected

  3. selective control of overflow/underflow detection (Modula-2)

  4. improved DEC64 arithmetic 0.1 + 0.2 does equal 0.3

  5. infinite precision (Mathematica)
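The gap between level 2 and level 4 on this scale is easy to demonstrate in JavaScript, which uses IEEE 754 binary doubles: the classic sum drifts, and the usual workaround is an epsilon comparison. A sketch (`nearlyEqual` is an invented helper, not a library function):

```javascript
// IEEE 754 binary doubles cannot represent 0.1 exactly, so the sum drifts;
// a DEC64-style decimal representation would give exactly 0.3.
const sum = 0.1 + 0.2; // 0.30000000000000004, not 0.3

// The common workaround: compare within a tolerance instead of exactly.
function nearlyEqual(a, b, eps = 1e-9) {
  return Math.abs(a - b) < eps;
}
```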

Primitive data types supported

  1. numbers, strings, Boolean

  2. includes one dimensional arrays

  3. includes multi-dimensional arrays

  4. includes structured types, dates, records, sounds, etc.

  5. includes the SuperTree

Graphical model sophistication

  1. none, all drawing is done via library routines

  2. console drawing built into language

  3. 2D primitives, weak structuring

  4. 2D primitives, strong structuring

  5. 3D drawing (Unity)

Database sophistication

  1. external to language

  2. indexed sequential or hierarchical model

  3. relational database built in or merged into language (PHP)

  4. entity-relation database built in

  5. graph database built-in (Neo4J)

Automatic dependency calculation (also called lazy evaluation)

  1. none (C, JavaScript)

  2. evaluation when needed of quantities

  3. automatic derivation of proper order to evaluate (Excel)

  4. automatic derived virtual quantities

  5. automatic calculation of code to execute, with backtracking (PROLOG)
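To make level 3 concrete, here is a toy sketch of Excel-style dependency ordering in JavaScript: each cell is a function, and the evaluation order falls out of the call graph rather than the order of definition. This is a deliberate simplification of what a spreadsheet engine actually does (no caching, no cycle detection), and the cell names are invented:

```javascript
// A tiny cell sheet: define cells as thunks, evaluate on demand.
const cells = {};
const def = (name, fn) => { cells[name] = fn; };
const val = (name) => cells[name]();

// 'total' is defined before its inputs exist; the dependency graph,
// not the source order, determines the order of evaluation.
def('total', () => val('price') * val('qty'));
def('price', () => 2.5);
def('qty', () => 4);
```

Asking for `val('total')` pulls in `price` and `qty` automatically, which is the essence of what Excel derives for you.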

Automatic dependency drawing (also called auto refresh)

  1. none (C, JavaScript)

  2. simple redraw support (Win32)

  3. automatic redraw using framework (React)

  4. automatic redraw without having to use framework

  5. automatic redraw of code that references changed quantities

The area enclosed by the shape is a good approximation of the relative power of the language. As you can see from the diagram above, JavaScript is a fairly weak language, and Swift is more powerful. The Beads design is much more advanced than Swift. Fans of Dart, Go, Red, etc., are invited to send me their numbers for each of the categories and I will add them in.


Simplification is easy

Okay ready to write my program now...


Today we in the programming world are facing an explosion of code libraries. People are writing code all over the place, and to save duplication of effort, we as a profession are trying to share previously done work. Two problems immediately crop up. Let's for argument's sake imagine that we are using someone's bar-chart module, and that module was at version 1 when we adopted it. If we copy the bar-chart code over and incorporate it into our project as a copy, it will continue to work as it did, but by disconnecting from the update stream, we won't get the fixes for any errors in bar chart version 1. If we instead link to the constantly evolving bar-chart code, and the author changes how the module is used, then our code can, and likely eventually will, break. So we are stuck between copying, which creates a duplicate of immediately obsolete code, and being at the mercy of breaking changes in the evolving components we reference.

This, in a nutshell, is what is driving the explosion of slightly varied code bases, which today is growing at a geometric rate. There are billions of lines of code written, yet every day it is ignored by many developers, because who knows how good the code is, or what trajectory the code base is on.

The solution is that the evolution of modules must proceed in a highly upward-compatible way. To keep version 1 working while allowing new features to be added, one must change how a component is accessed. In modern languages, the only way to reference external code is to call a function with parameters. Once the parameters change in name, position, or quantity, most languages will break on that inevitable change. So the enemy of stability is function parameters, which in most languages have a rigid order. The other improvement one can make is to send a version number to all functions, declaring the intention to use functionality as of version 1. Say version 2 of the bar chart plays music: you can send version 1 as an input parameter to the module, and the designer can then guarantee that version 1 behaves the same as before.
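The version-parameter idea can be sketched in a few lines: the module dispatches on the version the caller declares, so version-1 callers keep version-1 behavior forever. `barChart` and its options are invented names for illustration, not any real library:

```javascript
function barChart(opts) {
  // Callers declare which version of the interface they wrote against;
  // omitting it means version 1, the original behavior.
  const version = opts.version ?? 1;
  const result = { bars: opts.values.length };
  if (version >= 2) {
    result.music = true; // the new version-2 feature; v1 callers never see it
  }
  return result;
}
```

A caller that sends `version: 1` is guaranteed the old behavior even after version 2 ships, which is exactly the upward compatibility being argued for above.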

How you communicate large and complex amounts of data to modules is the key weakness in modern languages. In Pascal, C, Fortran, COBOL, and other languages of the 60's and 70's you had records. Those were removed in Java, JavaScript, and many other languages of the 80's, 90's, and 2000's. If you have not records but amorphous objects, you then have a nightmare of missing-information error states. So the tradeoff in module access is that either you force a rigid record structure, or the module code must increase in "toughness", the ability not to break when inputs are missing.
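One hedged way to buy that toughness in a record-less language is to validate the amorphous object at the module boundary, so a missing field either fails loudly or receives a documented default rather than a silent undefined. The field names here are invented for illustration:

```javascript
function readOptions(opts) {
  // Required field: fail loudly rather than let undefined propagate.
  if (typeof opts.title !== 'string') {
    throw new Error('title is required and must be a string');
  }
  // Optional fields: explicit, documented defaults instead of silence.
  return {
    title: opts.title,
    width: opts.width ?? 640,
    height: opts.height ?? 480,
  };
}
```

It is manual labor a record type would do for free, but it contains the missing-information error states at one boundary instead of scattering them through the module.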

In the old days, every function had an error return. This is now considered cumbersome, and in JavaScript, for example, one hardly ever sees error results being passed back or checked. This creates really flabby code, where errors propagate silently and, like a metastasizing cancer, surface in a new location. It is gratifying to see error codes return in Go; however, that is not the best way to do error returns. Excel is the master at handling errors, and programming languages would be well advised to study Excel more closely.
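Excel's model can be sketched as errors-as-values: an error like #DIV/0! is an ordinary value that flows through every later calculation, so the first failure survives to the final cell instead of vanishing. This is a toy imitation in JavaScript, not Excel's actual machinery:

```javascript
// An error is just a value that arithmetic refuses to swallow.
const DIV0 = { error: '#DIV/0!' };
const isErr = (x) => typeof x === 'object' && x !== null && 'error' in x;

const divide = (a, b) => (b === 0 ? DIV0 : a / b);

// Arithmetic propagates an error operand unchanged, the way a spreadsheet
// shows #DIV/0! in every downstream cell.
const add = (a, b) => {
  if (isErr(a)) return a;
  if (isErr(b)) return b;
  return a + b;
};
```

The payoff is that the final result either is a number or names the original failure; nothing metastasizes silently along the way.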

The graveyard of once-promising computer languages


I agree that one should fix a language's flaws, even if it means making breaking changes; however, that doesn't change the fact that history shows you only get one chance to get a language right. Oberon, for example, had a fatal flaw in version 1, and because of that flaw the early adopters, who are so critical to the success of a language, abandoned it and moved elsewhere. The truth is that adoption by "the herd" is driven by a small group of tastemakers. This is true in so many different fields. Take music, for example: there are critics and "hip" people who spread the word that so-and-so is "hot". They have a huge effect on the endpoint of the popularity curve, because the tastemakers function as filters for society at large, who are so busy in their toils that they don't experiment much, and the safety of herds relies on mass adoption and clustering. Want to be a hotshot artist in the USA? Get on the cover of Artforum magazine.

Humans are herd animals like horses; they run in packs, and stragglers are picked off by predators. So all evidence shows that your version 1 either makes you or breaks you. After all, the number of people using a language is a big factor in picking it to begin with. I have a saying: inertia is the most powerful force in the universe. Looking at the pile of dead languages, and there are over a thousand that never even reached 10,000 users, the early adopters are a picky bunch, and don't suffer glaring omissions kindly.

But being the eternal optimist that I am (and what programmer isn't an optimist), I have made provision for the graceful evolution of the Beads language by incorporating a feature whereby code can be automatically upgraded to later versions, something that should be present in all languages but is inexplicably rare.

To make the language-comparison process less religious and more objective, I am performing "drag races", where each language can be directly compared to another by implementing a precisely specified task. Then, to add to the realism of the challenge, we take the perfectly operating program, modify the spec in some small way, and hand it to a new person, not the author, to update the code base to conform to the changed spec. For example, in chess you could augment the rules by allowing castling through other pieces, or by giving the pawn the option to move 3 squares on its first move, not just 1 or 2. That is the acid test: can a different person improve someone else's work? This is measurable, and it actually matters the most, because code re-use is terribly low. In the real world, laws and conditions are constantly changing, and a program has to evolve.

And on the point of allowing a language to evolve gracefully, I have added a feature to the syntax so that old code can be automatically updated as the language evolves. My experience shows that in business, programs last decades, and by that time the language has evolved so much that the existing code base gets cut off from the flow of the language, because upgrading the compiler would disrupt reliability. People freeze the compiler more than they freeze the code, because an upgrade wreaks unknown, but possibly drastic, effects. As soon as a big program stabilizes, there is a natural tendency to break it off from the "stream of constant breakage". I know that in my day job at a telecom firm, we have downgraded almost every PC in the company to Windows 7, because it works so much more smoothly. Look at the outrageous tactics Microsoft is employing to force their user base onto Windows 10. I personally think Steve Ballmer should be in chains for his role in productivity abatement.

Programming challenge - a chess game

I present here an excellent challenge for those wishing to see how good their favorite language is. This is a project that should come in at around 2,500 words, so not a large program by any stretch of the imagination. Although program size is often quoted in lines, some languages are more vertical than others, and that measurement highly distorts the true size of a program. Writers of English, such as journalists, are very familiar with word counts, so we are henceforth adopting the word as the basic measurement of a program's size. It translates to under 1,000 lines.

The commonly used Hello World type of program tells you almost nothing about a programming language, and is a waste of time. The snake challenge is okay, but it is still a little too simple to show how good a language is; a bad language can deliver a snake program and not seem bad at all. But this task is of sufficient complexity that the true merits of a language shine through.

You can get the chess challenge specification and sample implementation here.

Here are some screenshots from the reference implementation, which is included in the specification package. I will post the various entrants once they are submitted, so people can compare side-by-side the various language implementations.


The problem with Donald Knuth's Algorithms Books

First, let me say, before I get critical about Knuth's work, that he did an incredible amount of systematic, meticulous, highly accurate, and important work. The problems I am going to talk about relate to the format he used to encode that work. The bottom line is that Knuth was like a Shakespeare who decided to write in pig Latin.

Knuth came up with the idea of literate programming as a sequel to structured programming. Unfortunately his approach was completely wrong. I am not the only one who has pointed this out (http://akkartik.name/post/literate-programming). I am a nobody compared to Knuth, so stating that the emperor has no clothes may seem offensive. I don't mean any personal disrespect to Knuth, but his TeX product has left me cold from day 1. Having your code comments presented in nicer typography is well and good, but honestly, do you consider Knuth's TeX system any good? It is a disaster in my book, a bizarre, complex curiosity that he squandered a big chunk of his career on. Knuth was so ridiculous that he refused to use TrueType for encoding fonts; he invented his own font format, Metafont. He was so famous and influential that nobody pushed back on him, but who else on earth would refuse to use one of the commercial type formats? It's like someone building a motorcycle and deciding that neither English nor metric units are good enough, so you need an incompatible set of nuts and bolts. The history of computers is full of non-agreement on basic standards: ASCII vs. EBCDIC, Mac vs. PC, iOS vs. Android, etc. But why, when 99.9% of the world is in one of two camps, do you invent your own third form with no substantial advantages?

Knuth's choice of MIX was never a good one at any time. At every moment from 1965 onward, a single company has owned the lion's share of the CPU market. It was IBM, then DEC, and for at least 36 years since the IBM PC, the Intel instruction set has had 99% of the desktop market. Nowadays the ARM instruction set has 99% of the mobile market, but server and desktop are 99% Intel architecture. If Knuth had, for example, picked the Intel instruction set, not only would most of the code still run fine, because the Intel architecture has been phenomenally backwards compatible, but there are many commercial cross-assembler tools that efficiently convert the Intel instruction set to other chips like the Motorola 68000, MIPS, ARM, etc. By using MIX he doomed his unbelievably meticulous work to be basically unused today. What company can make a living selling and maintaining a MIX cross-assembler, when only one human in history ever used MIX? I argue that Knuth was being perversely manufacturer-neutral. I can't tell you how many programmers I have seen with his books on their shelf who never actually used them.

What creates the best products?


In a recent interview, one of the founding geniuses of Apple pointed out that when people bring forward a product that they themselves want, you get the best results. This is the problem with giant corporations and innovation: the individual is blended out of the picture. In fact, when you look at the track record of large companies, they usually acquire new ideas from outside.

“...when things come from yourself, knowing what you would like very much and being in control of it, that's when you get the best products.” 

Wozniak / interview from September 2017

Programming challenge - a Snake game

This is one of the benchmark programs I am using to compare one language to another. Try building this game in your language of choice and submit your program; we will analyze your code and show how it stacks up against other implementations. There is a live programming tutorial on YouTube that builds a more rudimentary version of the snake program in under 5 minutes. However, that 5-minute program has numerous flaws and covers only about half of this specification. It does show how flexible and fast JavaScript is.

The game runs at 6 frames per second, and the snake moves one cell per frame.  If the snake moves into the apple the length is increased by one, a crunch sound is emitted, and the apple is moved to a new cell. If the snake crosses over itself, then a beep is emitted and the part of the snake from that cell onward is erased. The snake wraps around the board in all four directions.
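The wrap-around rule above is one line of arithmetic, though JavaScript's `%` needs a negative-safe adjustment since it can return negative values; a sketch:

```javascript
// Wrap a cell coordinate around a board of cellCount cells in either
// direction; the extra add handles JavaScript's negative remainders.
function wrap(coord, cellCount) {
  return ((coord % cellCount) + cellCount) % cellCount;
}
```

Moving off the left edge (coordinate -1) lands on the rightmost cell, and moving past the right edge lands back on cell 0.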

The playing board, drawn solid black, is subdivided into square cells of 42 points. At the start, the snake is set to length 1 and positioned at cell (4,4). The apple, which it is the snake's goal to eat, occupies one cell drawn as a circle in HTML color crimson, and is placed at a random location on the board.


At the start of the game the snake is paused. As soon as an arrow key is pressed, the snake begins to move in that direction, leaving a trail behind. Initially the trail is limited to 5 cells total. If the snake moves into the apple, the snake's length grows by 1. Once the apple is eaten, it is immediately respawned in a random new cell. The apple may on occasion end up placed inside the body of the snake; however, the snake is only considered to have eaten the apple if the head moves onto it. The snake cells are drawn as 40 pt squares with 2 pt on the right and bottom to create a separation. The snake cells are drawn in alternating colors of lime green and lawn green. The head of the snake is drawn as a rounded rectangle with a corner radius of 8 pt and a border of 2 pt dark green. The remaining cells of the snake are drawn as rounded rectangles with a 2 pt corner radius.

The only input to the game is the keyboard. The arrow keys change the direction of the snake, however, to prevent frustration the player is not allowed to move the snake head back into itself. So if the snake is traveling east, the attempted movement west is ignored. A command to move in the direction already in motion is ignored. To permit fast maneuvers the direction inputs are queued, so that one can do WEST - SOUTH - EAST to perform a 180 degree turn in 3 consecutive frames. Pressing the space bar pauses or resumes the game.
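The queueing rule can be sketched as follows: a direction is queued only if it is neither the same as, nor the reverse of, the direction that will be in effect when it is consumed. The direction names and `enqueueDirection` are invented for illustration; each frame would consume one queued direction.

```javascript
const OPPOSITE = { north: 'south', south: 'north', east: 'west', west: 'east' };

// Queue a direction unless it repeats or reverses the direction that will
// be in effect when it is consumed (the last queued one, else the current).
function enqueueDirection(queue, current, dir) {
  const last = queue.length > 0 ? queue[queue.length - 1] : current;
  if (dir === last || dir === OPPOSITE[last]) return; // ignored per the spec
  queue.push(dir);
}
```

Checking against the tail of the queue, rather than the snake's current heading, is what permits the fast WEST - SOUTH - EAST maneuver while still forbidding a direct reversal.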

As the snake grows and shrinks, the current size and high score are reported at the top of the screen as a topmost layer at 30% opacity, centered in 28 pt black text, for example: "8 high:22".

The default starting window size is 700 x 700 interior pixels. If the user resizes the window, the game is reset back to the starting state.

Note: since the window size will not usually be an even multiple of 42 points, the cells are slightly stretched so that no dead space is left over. So the program must first figure out how many whole 42-point cells fit in the X and Y directions, then divide the width and height by the number of cells in each direction.
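The cell-stretching note reduces to a few lines of arithmetic; a sketch (`cellLayout` is an invented name):

```javascript
// Count how many whole 42-point cells fit, then stretch each cell so the
// cells exactly fill the window with no dead space left over.
function cellLayout(widthPx, heightPx, nominal = 42) {
  const cols = Math.floor(widthPx / nominal);
  const rows = Math.floor(heightPx / nominal);
  return { cols, rows, cellW: widthPx / cols, cellH: heightPx / rows };
}
```

For the default 700 x 700 window this gives a 16 x 16 grid of 43.75-pixel cells.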

To help you with your game, here is the specification in PDF format and the sound files as a zip file.


Can I sue someone for stealing my idea?

A lawsuit is a 4-way race between two attorneys and two litigants. The lawyers come in first and second; the only real uncertainty in the contest is finding out who comes in last. A successful lawsuit, from the point of view of the lawyers, is a long, protracted battle on paper that drains both sides more or less equally. If you have the luxury of a large cash position and are in no particular hurry, a lawsuit is a wonderful way to drain the finances of your opponent and pin them down. Note that in most cases lawsuits only benefit large corporations, who can easily absorb the costs of litigation out of their immense cash flows, while for an individual a lawsuit is a great burden. So assuming you are a solo entrepreneur, litigation is out of the question.

The sad truth is that the inventors and discoverers of immensely important inventions like the laser, the transistor, radio, etc. have rarely benefited in history. It is quite rare that scientific invention and engineering breakthroughs generate riches for the actual inventor or discoverer. We see a lot of millionaire movie stars, athletes, and salespeople. They are paid fantastic sums, while the people who create the breakthroughs of society often toil away in a startup that fails, though the idea was sound, gradually gets adopted widely, and the world benefits. This is life as it is in our century. Maybe someday athletes won't be so highly prized, and idea people will get more recognition and reward. It's not happening any time soon; in fact, the recent record-setting $200 million, 5-year contract for a basketball player shows that athletics is gaining ground, and VC firms seem to get the majority of the money in startups, usually more than the founders.

In today's world, showing people your idea comes with the risk of it leaking out. The investors you visit and demo to may have already invested in a similar project that lacks some of your ideas, and you can expect that whatever materials you give to anyone will be distributed. As my friend Paul says, a secret is a piece of information you tell one person at a time. Avoid showing people your inner secrets; it is too risky. Who wants to be around that grumpy, bitter person who experienced idea theft? Some people only have one or two good ideas in their whole life, so treat them like golden inspiration, and do your best to refine each idea so that it can be commercially successful. That may mean bringing in a partner who has no technical expertise but knows how to sell. Many companies are based on a pair of complementary people, like Wozniak and Jobs, or Roy and Walt Disney.

But also keep in mind that ideas are born of their time: when Alexander Graham Bell was in the patent office in Washington, DC submitting his telephone, there were people right next to him in line with almost the same idea! Numerous times in history the same product has been invented at around the same time in different countries. The reason Edison was the greatest inventor of all time is that he was developing products at the very moment electricity and modern technology came into being, so he could put together all the combinations of sub-technologies that together created the phonograph, the movie projector, and so on. So don't wait too long on your idea. The tide waits for no man, and your idea might not be useful in 5 years. Sometimes you have to be willing to drop everything else to get your idea done faster. This is the exciting, risky world of invention we are talking about. Not for the faint of heart!

What makes a 10x programmer?

There has always been a huge range of productivity and quality amongst programmers; in the old days it was estimated at a 20:1 range. In the arts there is more than a 1000:1 range in skill; in fact, it is impractical to compare skill in the arts, as it is subjective, but at an objective level there are obvious signs of superhuman capability, especially in music composition (which is a different kind of "programming"). Take for example Donizetti's composition of "Don Pasquale" in around two weeks, which is like writing 50,000 lines of code in two weeks, an incredible feat and one of the greatest artistic achievements of all time. So 10x for programming doesn't sound unreasonable at all.

An environment where peace and quiet are available is very important for productivity, but putting a lousy programmer in a quiet environment still won't fix their code. Having studied a lot of code, I have determined that great programmers all use the same fundamental technique. The purpose of this technique is to generate fewer errors during authoring, and a more regular code structure that is also faster to write because it follows a predictable pattern. Keep in mind that approximately 85% of your time as a programmer is spent fixing your own mistakes. If you learn to write in a way that produces near zero errors, you will be nearly 7x faster than you were before, which is well on your way to 10x.

This technique is not taught in school, or even in books. It can be roughly described as "complexity reduction". As N. Wirth pointed out in his famous book "Algorithms + Data Structures = Programs", all programs break down into those two parts: algorithms and data structures. The fundamental algorithms of most commercial programming tasks are pretty simple. 99% of us programmers are not building vision systems or trying to translate English into other languages; most of us have pretty simple algorithms to follow, like "sort this table of sales results in descending order and take the top 10". Most of the data structures we use are also fairly simple, like a table of transactions for the month. So given that the majority of business programming tasks have simple data structures and simple algorithms, all the programs should be more or less identical when you remove the skin. Programmers build a bridge from the customer's problem space to the final API of the operating system or environment being used. Some programmers construct a bridge that goes nearly straight between these two endpoints, and others create a more wandering path.

The best programmers minimize the number of variables and the number of layers, and through a gradual reduction process create less code that is clearer and cleaner in purpose. This reduction process is fairly mechanical, and all the best programmers do it instinctively, while bad programmers don't see the reduction process at all. In the old days of circuit design, you would use Karnaugh maps to reduce the number of gates needed to express a truth table; the best programmers are doing the same kind of thing in their heads to minimize the total amount of logic and layers.
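To make the reduction process concrete, here is a hypothetical before-and-after in Python, using the "top 10 sales" task mentioned earlier. The function names and data shape are invented for illustration; the point is the mechanical removal of variables and layers:

```python
# Before: a wandering path with extra variables, copies, and layers.
def top_sales_before(rows):
    results = []
    temp = []
    for r in rows:                  # needless copy of the input
        temp.append(r)
    sorted_rows = sorted(temp, key=lambda r: r["sales"])
    sorted_rows.reverse()           # ascending sort, then reverse
    count = 0
    for r in sorted_rows:           # manual counting loop
        if count < 10:
            results.append(r)
            count += 1
    return results

# After: the same task with the intermediate variables reduced away.
def top_sales_after(rows):
    return sorted(rows, key=lambda r: r["sales"], reverse=True)[:10]
```

Both produce the same result; the second is the "straight bridge" between the problem and the language's own facilities.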

Could we improve the skill of programmers? Can this be taught? Absolutely. But in order to learn you have to be willing to learn first, and a lot of people think that because they got a program working, it is "good enough". Most people feel that however ugly, long-winded, and tortuous the path taken was, the path is unimportant.

Imagine if, in the arts and crafts, only execution of the raw task were considered. Barbra Streisand and Celine Dion would be underbid by sandpaper-voiced tonal disasters, because "hey, they sing the same words and get through the song ok". Thank goodness in music and art we have a lot of freedom to choose what we listen to, and talented people tend to get a large following.

Because so few people can quickly read the work of other programmers (usually because the code is so long and complex), we live in an era where code quality is invisible, and the sublime skill that exists in some people is completely unrecognized. Without recognition, how can training get anywhere? In bridge building, connoisseurs all admire the work of Robert Maillart. Look him up, that guy was incredible in the early 1900’s; a real fusion between art and science. The engineering fields don't get the recognition they deserve for the elegant, artistic work that exists. How about the Hetch Hetchy water system of San Francisco, that sends water 160 miles away almost completely powered by gravity? That is insanely clever. A modern wonder of the world if you ask me.

It is unfortunate that so few companies have the luxury of putting two or more teams on the same problem so that the best team can be identified. Usually we see only one team, and so one often has no objective way of knowing how good that team is. Interestingly enough, back in the days of Thomas J. Watson at IBM, when a critical task came up, he would typically put 3 teams on it, and whichever lab won the contest got an increase of staff and money. This approach worked very well for IBM, which was basically a monopoly at the time, and kept the internal teams at a very high level of energy. I worked on an IBM project after Watson's retirement, and by then the management of IBM had so deteriorated in integrity and guts that every team had to be a winner.

Another thing we see today is a fake 10x programmer, who copies and pastes large chunks of code from other programmers and stuffs his project with tens of thousands (if not hundreds of thousands) of lines of code that do some mysterious thing. So it is extremely foolish to measure productivity by lines of code, because people can create a monster code base in no time flat by merging modules that they get from other sources. I admire short, clear, concise programs that get the job done and are almost eternal in their purity.

Is quantum computing real?

Quantum computing is real. I would describe it as a real boondoggle. The definition of a boondoggle is “work or activity that is wasteful or pointless but gives the appearance of having value”.

The people selling quantum computing mention cryptography and other things that seem plausible. When the NSA wants to crack codes, they just fire up a million-core computer, and those don’t require any breakthroughs, just lots of hardware and a big room. Quantum machines have no practical use, nor will they for decades to come.

It is extremely fascinating to play around with superconductive materials. So I don’t blame researchers for using any excuse to obtain funding. At very low temperatures, metal loses its normal resistance to electricity, and one can produce phenomenally large magnetic fields, which then creates all sorts of exotic conditions that at room temperature don’t exist.

You have to consider quantum computing a side branch of particle physics, which is a fascinating and baffling area of study. But don’t expect to see a practical product in your lifetime. It is most unfortunate that the area of physics that holds the greatest promise of improving mankind’s life on earth, low energy nuclear reactions (also called “cold fusion”), is getting negligible research money, while we instead spend billions on things like hot fusion, which has zero likelihood of producing usable results. Hot fusion has been predicted to work "in 30 years" for the last 50 years. Thirty years is enough time to retire and die before anyone can assign blame for lying.

There is a fellow working on low energy nuclear reactions who postulates a lower state of the hydrogen atom he calls the “hydrino”, and proposes that the dark matter the astrophysicists insist exists is big quantities of hydrinos. Anyone who has studied the standard model has to admit that it is a very messy, unsatisfying theory, and doesn’t explain radioactive decay at all. There are many mysteries still to unravel, and any day I expect a major breakthrough that will cause the current standard model to be put into the wastebin.

Is global mutable state bad?

This is one of the great buzzword phrases in computers today: eliminating mutable state. Too bad these faddists fail to understand how a computer actually works. A computer fundamentally uses a global mutable state, called RAM and CPU registers. Those are constantly mutating, and are globally shared. Yes, you can get into trouble, but all programs eventually devolve into something that updates global mutable state. Otherwise no work is actually done. How one can structure this inevitable changing of global mutable state so as to have the fewest programming errors is an area of great theoretical and commercial interest. But to characterize the basic operation of what must eventually happen as “bad” is somewhat absurd.

One might more properly phrase the question: “how can I reduce the errors in my program to a minimum, while still accomplishing my goal in a reasonable time frame, and with a finished product that can be understood by other people without them scratching their heads over how it works?”

There are languages like LISP and FORTH, in the family called transformation languages, that win every contest imaginable for program brevity, but I defy you to understand a large FORTH or LISP program! Very tough.


How do below average developers survive?

How do below average workers survive? Simple: we are human beings, and pure performance has never been the benchmark for employee retention. Some people are fun to have around, some are loyal, some are owed favors, some are relatives, some bring out great qualities in others even though they themselves don’t appear to do much (the catalyst type of person)… you get the picture. Half of all workers are below average, so who cares? And who is some almighty judge deciding who is better or worse than another person? I am a terrific programmer, but no programmer knows every application area or toolchain. If you put a giant Ruby on Rails program in front of me, I would be clueless, because although I can work in a dozen languages, Ruby isn’t one of them, and it would take me months to get up to speed on it.

The real question one should ask is: how do I keep up with the constant changes without burning out? And how can I maximize the value of what I do know?

In a well run company, the systems are so well designed that an ordinary person can get the job done in an 8 hour day without heavy stress. That is the beauty of a well-designed company system. In entrepreneurial discussions you hear absolute nonsense about how you should only “hire great people”. If everyone were great, your company would be unprofitable, because you would be spending too much money on all those great people. As my dear friend J. Moon points out: Ray Kroc didn’t invent the hamburger, but he created an amazing system that delivered a consistent product at massive scale, with ordinary people.

Why isn't the Haskell language more popular?

Please note that this answer is not a criticism of Haskell, but an impartial observation of the qualities it possesses, which explains why it is not in general use. We know that some people love it, but it isn’t under serious consideration by anyone as a replacement for Java or JavaScript, the two most popular languages today (excluding MS Excel). And you could replace Haskell in this answer with any number of powerful languages that have been around for more than 20 years yet are still very obscure.

A computer language can be characterized in many ways, but the most important aspects are:

(A) power,

(B) brevity,

(C) range,

(D) efficiency in speed/size, and

(E) ease of reading and maintaining

Power can be thought of as: can a program be assembled from smaller pieces into a powerful, complex final product?

Brevity is about how many keystrokes you type to achieve a result. Often a powerful language has brevity; however, the extreme brevity that APL and LISP possess entails trade-offs.

Range is about how many different kinds of systems you can build with the language. Real-time communications systems? Video games? Car combustion computer software? Payroll? Etc.

Efficiency is about how many resources it takes to run the final work product. A combustion computer needs to start up very fast, and often has serious constraints on RAM and CPU power.

Ease of reading and maintaining is all about someone other than the original author being able to understand and modify the code without breaking the system.

In the commercial world ease of reading and maintaining is the dominant concern. If you use an obscure language, not many people can read it. So people tend to use the same languages they did 10 years ago, and language preferences move at glacial speed.

Haskell is not great in (D) efficiency, but it really suffers in (E) ease of reading and maintaining. Otherwise it is terrific, and the people who only care about properties A, B, and C can sing the praises of Haskell until they are hoarse. However, (E) is the fatal flaw in Haskell, and large Haskell programs are exceedingly difficult to understand. It will never be popular; just like LISP and dozens of other powerful languages, it will remain niche forever. There is a reason why BASIC, C, Pascal, and many other very simple languages are so influential: they are pretty easy to read. Any increase in power at the expense of ease of reading is ultimately judged by the general public as an unacceptable tradeoff. A program is written once, used for decades, and along the way passes through many hands.
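The tension between (B) brevity and (E) ease of reading shows up even within a single mainstream language. Here is the same computation written tersely and then written for the maintainer; this is a generic Python illustration, not Haskell, and the names are invented:

```python
# Terse: brief to type, but the next reader must unpack it mentally.
f = lambda xs: sum(x * x for x in xs if x % 2 == 0)

# Readable: longer, but the intent survives a decade of maintenance
# in other people's hands.
def sum_of_squares_of_evens(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:          # keep only the even values
            total += n * n      # accumulate their squares
    return total
```

Both compute the same result; a commercial shop living under the 40–80% maintenance-cost reality tends to prefer the second form.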

Please note that I am not considering the benefits to the programmer who may be seeking job security; there are many instances in the history of programming languages where the bulk of programmers selected the most verbose language available so as to increase billable hours, and rejected superior alternatives. In defense of Haskell and other advanced languages, part of the resistance is a realization that a more powerful and brief language would reduce overall billable hours.

Debugging - the subtle lie

"Debugging is like being the detective in a crime movie where you're also the murderer"  - Filipe Fortes

From the earliest times in programming, people have used the term debugging. At one point there was an actual moth in a relay, but programmers had to invent a term for what they spend most of their time doing, so it was inevitable that a word would arise to describe the process of fixing your own mistakes. It is estimated that 85% of the time spent programming is consumed fixing the mistakes put into the code by the author. I know of no other job where 85% of the time is spent fixing mistakes of your own making. Programmers imagine they spend their time typing in programs, and so people often quest after briefer, more compact notations. The language called APL was the all-time champion of brevity, but its cryptic notation ensured that programs were almost write-only; typically only the author could understand them. The reality is that the computer requires such inhuman precision that programming is a struggle for us, because the tiny mistakes we make, often just a few words, cause the program to act in baffling ways.

Any programming language that purports to be a significant improvement over current technologies must make error correction a much easier, smoother process. In fact, brevity may work against reducing error. Simplicity, regularity, and the ability to understand other people's code are paramount in the next generation language race.



Stock tip of the day - buy Ford (F)

You can profit from my research and losses.

Ford is my stock recommendation. It is near its one-year low, and pays over 5% in dividends. I have lost a lot of money on it, watching it sink as if it were a doomed company. This month’s sales are 7% down from last year; well, last year was the highest on record. Overall, 2017 is a good year for them. Today Tesla's stock market cap passed Ford's: Tesla is now worth $48 billion, and Ford is worth $45 billion.

Compare the two companies:

Sales: Tesla: $7 billion/year vs. Ford: $150 billion/year (21 times larger)

Profits: Tesla: $0.7 billion loss (10% of sales) vs. Ford: profitable

Cars shipped: Tesla: approx. 100,000 cars/year, one factory vs. Ford: 3,200,000 cars/year, factories in dozens of countries (32 times larger)

Ford sells products all over the world to the masses. It has always been a cyclical company, as it cannot escape the overall financial state of its customers. Tesla is a luxury goods supplier, and is fairly immune to the general business cycle.

There is nothing seriously wrong with Ford, other than that they are an old company with lots of pensions to pay, as automotive companies became paternalistic and rather socialist. Their product line has been steadily upgraded. They just came out with an aluminum-bodied F-250 truck, part of the F-Series, the largest selling line of commercial and utility trucks. That business alone would make them a large company.

Tesla is a company full of hot air. They have never shipped a mass-market product. They lose money every year. They pay no dividends. Since they have no previous track record, they can tease a fabulous new product without disrupting the sales of their existing products. In the software field we call this advertising of a non-existent product “vaporware”. Since Elon comes from the software industry, he has adopted the same tactic: promising a perfect new product that is supposedly only a little late, except that it turns out to be a lot behind schedule.

Tesla has no service network to speak of, and cannot possibly supply all the parts and repairs for the 500,000 cars they plan to build by the end of next year. People with luxury cars don’t put a lot of miles on them, which doesn’t tax your repair and parts capabilities: the average wealthy person has many cars, travels so much they drive each car little, and often has multiple homes, so the miles driven per car are very low. Compare that to a commuter or contractor who beats the hell out of their single vehicle. Did you know the average age of a pickup truck in the USA is close to 20 years? Ford will be a huge company for decades to come simply from selling replacement trucks.

Tesla is a software company, and that is Elon’s great strength. As cars become more software oriented, Elon’s company will do well. Ford and GM have really lousy programmers, having never paid attention to software before, so they will struggle in this transition. However, you can always buy software from a third party, and it isn’t hard to run new software. Don’t like your autopilot vendor, like Mobileye, which got fired from Tesla after the crash? Just buy a program and run it on the same computer.

Once you enter the mass market as Elon is planning to do, the cars will get dented and scraped, and the need to supply parts and service will skyrocket. Since Elon hates dealers (he refuses to have a dealer network), his antagonistic attitude towards third party vendors will bite him in the ass. You simply cannot deploy mass quantities of a product without a dealer/service network. This is how Citroën and the French totally failed in the USA: they didn’t build a network of quality dealers. I have no love for auto dealers and their ripoff attitudes, but it is a fact that the Honda parts warehouse in Reno can get any part for any car to anywhere in California within 24 hours.

There are numerous stories on the internet of people waiting months for a Tesla part. If that were your only car, you would be furious.

People are irrationally pounding on Ford, and are over-optimistic on Tesla. I doubt you can get 5% on a bond, and bonds are only going to go down as interest rates creep back up. I expect people to stay pessimistic on Ford until the Model 3 is late, as it inevitably will be. Any single missing part will stop the assembly line, and ramping up by a factor of 10 always has unforeseen challenges.

Also, I am calling peak Apple: the minute they fill up their spaceship building with employees (which should happen around September), they can shut the doors, because no new idea will get out the door; no new idea will be worthy or sufficiently important compared to the majestic building it came from.



The era of interchangeable parts

The two big challenges for the next gen computer language are reducing complexity and creating a system of usable interchangeable parts. There are many wonderful new languages, like Elixir, Julia, ParaSail, Go, Rust, Dart, Swift, etc., but none of those languages actually tackles the two big issues of complexity and interchangeable parts. In fact, none of these languages even really tries. They are instead adding features on top of their foundation language and merely improving a few areas. For example, ParaSail does a phenomenal job of making it much easier to utilize parallel processing, but the rest of the language is fairly conventional. None of these languages will likely displace the current leading languages (JavaScript, Python, etc.) because the improvement is insufficient: if something is only 10% better than the existing technology, the pressure to change is very slight. The mass of developers inevitably coalesces upon a standard that the group as a whole picks, and history shows that the programming community as a whole selects the language that generates the most billable hours. We have seen this before with COBOL and Java. JavaScript is now #1 in terms of code being written, and as you can see from the diagram in the previous blog post, it requires a bewildering array of tools to make it even tolerable. It is time for JavaScript to die, and so the burning question arises: which of the new languages has the potential to displace JavaScript (and most of the other top-10 languages)?

The current candidate languages/systems are:

1) Red,  2) Eve, 3) Elm, 4) GunDB, 5) Beads, 6) a few other languages which are in stealth and haven't shown their hands. Let's look at the first four candidates:

Red (red-lang.org) is an improved version of Rebol, a LISP-family language with a much cleaner syntax. In some ways it is reminiscent of Objective-C. The beauty of Red is that you only need a small set of tools, and because each module is its own domain-specific language, you can hook up lots of components together. However, the Red language doesn't have a database inside the language, and that means you are inevitably going to have a serious mismatch between the very flexible language and an inflexible database. If you dramatically increase the power of a car's engine and don't beef up the transmission, the car will burn out the transmission. The database is like the transmission: it has to absorb all the power the engine creates. This is a central flaw in the thinking of Red. If they don't include a database, it can't go much further ahead of the current languages.

Eve (with-eve.com) is a new database system and language. It takes inspiration from MySQL and goes way beyond it, with an interesting free-form database based on entity/attribute records. It adds a feature which allows code to depend on values in records, and to automatically redraw affected DIV blocks. The developers didn't like the problems caused by variables and loops, so they banished them from the language. But banishing one of the fundamental operations of the underlying CPU never works in the long run: if you don't have variables, people will put their variables into your database and ruin the purity of your system. The other great limitation of Eve is that it still presents the programmer with the HTML drawing model, which is terrible. Since drawing code makes up about 85% of most graphical interactive programs (in my own experience), if you don't fix the lousy graphical model of HTML you can't really fix programming. Eve is still in constant flux, and I expect they will backpedal and fix these problems due to user demand.
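The Eve idea of code that depends on values in records and automatically redraws affected blocks can be approximated with a tiny observer sketch in Python. This is my own illustrative approximation, not Eve's actual mechanism, and all names are invented:

```python
# A record that notifies subscribers whenever an attribute changes,
# roughly approximating Eve's automatic redraw of dependent blocks.
class Record:
    def __init__(self, **attrs):
        self._attrs = dict(attrs)
        self._watchers = []

    def watch(self, fn):
        self._watchers.append(fn)
        fn(self._attrs)             # "draw" once immediately on subscription

    def set(self, key, value):
        self._attrs[key] = value
        for fn in self._watchers:   # "redraw" every dependent block
            fn(self._attrs)

drawn = []
r = Record(name="widget", count=1)
r.watch(lambda attrs: drawn.append(attrs["count"]))
r.set("count", 2)                   # triggers the watcher again
```

The programmer never calls "redraw" explicitly; changing the record is what causes the dependent code to run.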

Elm (elm-lang.org) is a very interesting language. It was inspired by Haskell, but takes a much more pragmatic approach. Elm has a very simplified graphical model, and of these entries it has the most evolved toolchain. It does, however, present some difficult abstractions to the beginner, and is not that easy for a beginner to read, as you must have a tremendous memory to recall how many arguments each function needs. By eliminating the parentheses that LISP used, you rely on the programmer's memory instead. Evan Czaplicki, the author, has a prodigious memory, but I suspect that ordinary people will find the language a little difficult to learn. It doesn't really have a database system.

Gun DB (github.com/gundb) was designed as a graph database, but is evolving into a general-purpose platform. Out of the box it offers an offline-first database that can sync with other peers; this feature alone makes a very tricky thing pretty easy to program. Graph databases are the ultimate in flexibility. Currently the best graph database implementation is Neo4j; however, Neo4j has gotten sidetracked into connecting its system to older databases, and is burning up its development staff on a hopeless quest for compatibility which can never be achieved with the big database vendors. The Neo4j Cypher language is very weak, and you can't build programs in it. GunDB has not implemented the relationships part of Neo4j (its most interesting aspect), but it is a lot of fun, and the pragmatism of the author suggests it will evolve into a useful thing.

We will talk about Beads in another post. It combines a graph database with a simplified drawing model, all based on a declarative + deductive language that has a robust mathematical foundation.

One of the things built into Excel that is not present in any language other than Beads is the property of mathematical closure in the underlying arithmetic. If you divide by zero, what happens? If you multiply by an undefined variable, what happens? If you take the square root of a negative number and then multiply that by 2, what do you get? Many of these languages don't have a precise definition of what happens, and this leads to program errors. The protected arithmetic of Excel is one of the main reasons the amateur programmers of the world, who number in the hundreds of millions, don't venture into conventional languages, which don't protect the user.
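A toy sketch of what protected, closed arithmetic looks like, written in Python. The sentinel and function names are invented for illustration, and Excel's real error machinery is richer; the point is that every operation yields a value, and errors propagate rather than crash:

```python
# In plain Python, division by zero halts the program:
#   1 / 0  ->  ZeroDivisionError

ERROR = "#DIV/0!"   # a sentinel value, in the spirit of Excel's error codes

def protected_div(a, b):
    # Division is closed: every input yields a value, never a crash.
    if ERROR in (a, b):
        return ERROR            # errors flow through later arithmetic
    if b == 0:
        return ERROR
    return a / b

def protected_mul(a, b):
    if ERROR in (a, b):
        return ERROR
    return a * b
```

So `protected_mul(protected_div(1, 0), 2)` quietly yields the error value, just as a spreadsheet cell shows `#DIV/0!` instead of aborting the whole workbook.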






A clear example of the mess programming is in

The very enterprising Kamran Ahmed and associates have created a wonderful set of charts showing how deep and wide the mess in development is today. The amount of knowledge one has to have in order to be productive using conventional tools is absurd. The true next generation language will roll up all of these components into one simple tool.

(For the full diagram see https://github.com/kamranahmedse/developer-roadmap/blob/master/README.md)