R.I.P. Stan Lee, the most creative writer since Jules Verne

Stan Lee died the other day.

He was one of the most creative Americans who ever lived.

We all know that George Lucas is a very creative fellow, having created a few dozen major characters: heroes like Luke Skywalker and Han Solo, and villains like Jabba the Hutt and Boba Fett. Stan Lee created at least five to ten times more characters than Lucas. I think he was the most imaginative writer since Jules Verne.


When I was young, the local drugstore in town had a comic rack, and comics cost 12 cents each. In the early days of Marvel Comics, when we were reading them, the print run of each issue was around 10,000 copies. If you figure the retailer kept half, the total revenue to Marvel per issue was about $600. They were printed on the cheapest newsprint, with just a few colors, and were banged out at an incredible pace. I had several first editions, and like everyone else's mother, mine threw them out when I went to college… they are so valuable now because so few people kept them away from their mothers! Hah. Nobody took them seriously, and it took a while for their science-fiction style to catch on.


These original Marvel comics are masterpieces of the genre. They can be purchased as the “Marvel Masterworks” series at Amazon in both hardcover and paperback, and they make a wonderful gift for a young kid who doesn’t realize how good the source material behind the many successful films is. They were so far ahead of the other comics of the time, it isn’t funny. Batman and Superman, the two big characters of the entrenched competitor DC Comics, were crude in comparison, with stupid villains who were typically bank robbers or thugs. Marvel, on the other hand, created a roster of villains really worth fighting: Magneto, who could control metal; Doctor Doom, who wanted to take over the planet; Kang the Conqueror, who comes from the future to take over the universe; and Galactus, who would consume all the life force in our solar system for breakfast. Marvel heroes had personal problems: they didn’t have much money, or their girlfriends were mad at them; they might even regret their superpowers. From any dimension you look at it, the output of Marvel’s first decade is fantastic stuff. It is like Ian Fleming’s James Bond series: a highly original body of work that will still be enjoyable for a long time to come, and imitated without attribution constantly.


The secret to Stan Lee’s incredible output - and it is amazing how much he cranked out - was that he worked in an unprecedented way (now known as the “Marvel Method”) with the artists who helped invent the characters and drew the comics. He would write a very short summary of what happens in the 16-page story, let the artists draw whatever they wanted, and then add the words in later. This way the stories progressed nicely, the artists loved the freedom, and they did wonderful work.

Lee did not profit much from his work at Marvel. He was outmaneuvered in the boardroom many times, and when Marvel was finally sold to Disney for billions he got nothing. Frankly, the comic book business is a pathetic business compared to movies, and the technology to do his films justice did not exist until recently. But he did pretty well overall, and was beloved by the many millions of people who have come to know his characters like Spider-Man, mostly from the movies.

His appearances in the films always make me cry, because I loved his work so much. He was so clever and funny.


Every time the films have departed from the original material, it has been to the movie’s detriment. When Marvel films have failed, it is because they didn’t trust the genius of the creators and assumed audiences couldn’t handle it. One of Marvel’s best comics was the Fantastic Four, yet they have bungled Dr. Doom, and in the last film made him a corporate weasel chasing money instead of a megalomaniac so smart he thinks the world should grovel at his feet.

The best films are the first Captain America, Doctor Strange, and Ant-Man. Those show the wild range of Stan Lee’s imagination, and how he would move between the patriotic World War II era, Eastern mysticism, and biotechnology.

With his passing, and with George Lucas’ retirement, one has to wonder: where is the next great set of characters and stories to come from? The answer is that Overwatch (a video game from Blizzard) is the crucible of animated/superhero characters for the youngest generation, and mark my words, Overwatch will be bigger than Marvel, because it includes characters from across the globe, and thus will be relatable to everyone.



How much is programming going to change in the next few years?

Programming is going to change dramatically, probably by 2020, which is not that far away. New languages and techniques are in the lab now that will trim the fat out of the development process. Currently programmers think they spend most of their time designing and typing, and estimate that debugging is perhaps 20–25% of the total effort. In reality, design, typing, and compiling are very straightforward, and 85% or more of the total effort is spent debugging and refining the software. The debugging process - which, to be completely honest, is the programmer fixing their own mistakes - is a huge area of waste, and finally techniques and tools are in the pipeline to shortcut it. The net result will be about a 3:1 overall improvement in productivity (because debugging will cease to be difficult), but more importantly a 10:1 reduction in the frustration one must endure to tolerate being a programmer.

The world of programming is currently populated by people with incredible, far-out-on-the-statistical-tail levels of patience; the kind of people who can do a 7000-piece jigsaw puzzle and enjoy it, where the ordinary person would give up. Lowering this “frustration barrier” will allow millions of ordinary people to enjoy programming. Let’s face it, the computer is mankind’s most powerful and interesting invention, and everyone should have some fun with it. It is intensely satisfying to see a robot follow your instructions exactly, tirelessly, with no whining like your kids ;->

As for where this improvement is going to originate: it isn’t going to come from academia, which for the most part refuses to build practical, useful tools; it will come from small entrepreneurial teams funded by themselves, angel investors, or crowdfunding. I can’t tell you how many academics flat out refuse to talk to industry people, living inside a bubble of status based on publishing in journals that only they read. The academic world couldn’t be more corrupt and dysfunctional than it is in 2018. The cost-effectiveness of conventional colleges is abysmal, and if you look at graphs like:

College Tuition and Fees vs Overall Inflation

you will see the unsustainable trajectory they are on. Another place improvements won’t come from is large companies like Apple and Facebook, which profit mightily from things staying exactly as they are. Moreover, a disruptive technology like this would invalidate and seriously depreciate their multi-billion-dollar codebases, so even if it were invented there, it would never see the light of day.

Blockchain and Patent Medicines

In the 1800s, patent medicines were all the rage, and salesmen drove around selling magical cures.
Today, blockchain is the magical cure. Everyone, including a friend of mine starting an insurance company, puts blockchain on their website, as if it makes everything better…

The Medici family ran their banks on double-entry bookkeeping, which kept two simultaneous books, one of debits and the other of credits, so that they could keep banking honest and detect embezzling or sloppy bookkeepers. That system powered the family to a huge advantage in banking. The blockchain concept of distributing the ledger into not just two sets of books but a thousand makes it virtually impossible to cheat, so it is an improvement, to be sure.
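To make the bookkeeping invariant concrete, here is a minimal sketch in JavaScript (illustrative names, not a real accounting system): every transaction posts a debit to one account and an equal credit to another, so total debits must always equal total credits, and a forged or dropped entry breaks the balance.

// Each transaction writes two entries: a debit on one account, a credit on another.
const entries = [];

function post(fromAccount, toAccount, amount) {
  entries.push({ account: fromAccount, debit: amount, credit: 0 });
  entries.push({ account: toAccount, debit: 0, credit: amount });
}

// The invariant the Medici relied on: total debits equal total credits.
function booksBalance() {
  const debits = entries.reduce((sum, e) => sum + e.debit, 0);
  const credits = entries.reduce((sum, e) => sum + e.credit, 0);
  return debits === credits;
}

post("cash", "loans", 100);
console.log(booksBalance()); // true until someone tampers with one side of the books

Blockchain extends the same idea by replicating the ledger across many machines, so tampering would have to happen everywhere at once.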

However, if you have an honest person running things, you can do just fine with one set of books, and I personally would derive no benefit whatsoever from having my bookkeeping spread across 1000 computers. The only people who need blockchain are people doing illegal, off-the-books transactions, who don’t want anyone to know how much business they are doing and with whom. So all this crypto stuff is helping tyrants move their money around the world without resorting to couriers, bearer bonds, gold bullion, diamonds, etc.

But for ordinary people such as myself, blockchain is a minor change to how things are stored on computers, and means absolutely zero. Of the $150 billion invested in cryptocurrencies last year, half will disappear as the organizers cash out their “pet rock” gains, and in the distant future the whole episode will be remembered as another mass mania, like the Tulip Mania.

How I fought Amazon... to a draw

Jeff Bezos is well on his way to becoming the richest man in history, and by a wide margin. His final fortune will make Bill Gates, the long-running former richest man in the world, look like just another billionaire. Bezos is extremely skilled with computers, and fighting his Amazon company is a formidable challenge, but one that any company selling similar products in the marketplace has to face. Amazon has terrific customer service, a fabulous logistics team, and a great reputation. But it is their pricing advantage that built the company.

In my case I was helping out a friend who had a packaged software company; he sold products on his own website, but also had an affiliate store on Amazon. He had a few hundred shrink-wrapped kids' educational computer games, priced from $10 to $40 each. The problem he faced was that although he ran a tight ship with family doing the work, Amazon was undercutting his pricing slightly: when you did a search, Amazon was one cent cheaper. Given that the two vendors were selling exactly the same thing, consumers, behaving quite rationally, selected the cheapest offer, and Amazon was getting most of the business.

After some quick analysis, I realized that he was competing against a computer algorithm, not a human strategist who varies his attack pattern. I wrote a little Python script that scanned the pricing on Amazon’s store and automatically adjusted his prices to be one cent lower than Amazon’s. He runs it every day, and at night Amazon adjusts its prices one cent lower again, so the lower price alternates between them; but he is beating them, because Amazon only adjusts once per day at a fixed time. The price battle continues, with each side dropping a penny at a time, until the price reaches some magical threshold in the Amazon computer, below the wholesale cost of the item, where Amazon decides it doesn’t want to lose any more money and resets the price to $39.95. At which point my little program sets his price to $39.94 and the battle starts over again.
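The original was a small Python script; here is a hedged sketch of the same one-cent-undercut loop in JavaScript, where the two helper functions are hypothetical stand-ins for the scraping and store-update plumbing:

// Prices are integer cents to avoid floating-point drift.
const FLOOR_CENTS = 500; // assumed wholesale floor for the product

async function fetchAmazonPriceCents(sku) {
  return 3995; // stand-in for scraping Amazon's listing for this SKU
}

async function setMyPriceCents(sku, cents) {
  console.log(`${sku} -> $${(cents / 100).toFixed(2)}`); // stand-in for the store update
}

async function repriceOnce(sku) {
  const amazonCents = await fetchAmazonPriceCents(sku);
  // One cent under Amazon, but never below our own cost floor. When Amazon
  // resets to $39.95, the next run lands on $39.94 and the cycle restarts.
  const target = Math.max(FLOOR_CENTS, amazonCents - 1);
  await setMyPriceCents(sku, target);
}

repriceOnce("EDU-GAME-001"); // run once per day, e.g. from a cron job

Because Amazon reprices once a day at a fixed time, running your own pass any time afterward is enough to stay a penny ahead until their next cycle.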

Most of the time the product stays above wholesale pricing and he is beating Amazon, but he is forever on a treadmill of running the script to stay ahead of them. Amazon is a fairly simple system, and the idea of holding 100% markup retail pricing on ordinary products is a thing of the past. Sure, in luxury goods like those sold by LVMH (owned by the richest man in France) the brands can control the retailers and prevent any discounting, but for the vast majority of products, every retailer is going to have to do battle with Amazon.

Amazon is a tough competitor, and its computer is not afraid to go below cost for a spell in order to get you to quit, but its internal algorithm is more attuned to making a profit than to crushing all the other retailers, so it is possible to fight them to a draw. Amazon destroyed book retailers through a scheme of discounting the bestsellers, because its statistical analysis showed that the heavy readers (the 20%) were the people who kept bookstores alive. So Bezos cut off the bookstores’ air supply by eliminating that crucial income component, which he could subsidize with higher prices on lower-selling books. The other thing he did, which the government should never have allowed, was to take control of the main book distribution company in the USA, Baker and Taylor, and once he had the wholesaler he put the squeeze on the stores by lowering their basic profit margin. The destruction of bookstores was quite an evil for the USA, because book culture is what built up the USA in the first place.

I hope this inspires more script writers to help the independent merchants who have to deal with Amazon’s tough algorithmic selling system. Since Amazon sells millions of products, unless you are in an area targeted for destruction (like computer hosting, Bezos’ second multi-billion-dollar business), you don’t have to fight an opponent with any imagination. The Amazon computer does the same thing all the time, and it has been shown with chess computers that repeating yourself too regularly is a weakness.

Wisdom from the past in Software Development

There are many classic books about software development that have been published in the last 50 years. Some, like "The Mythical Man-Month", are almost completely applicable today; others are a bit more dated. In the older knowledge, like that of Robert Glass and Capers Jones, there are aspects of eternal truth. For example, Capers Jones' idea was that languages have an intrinsic, measurable power, gauged by brevity: the shorter the program, the more powerful the language. By this measure, Elm, Red, and several other new languages are extremely powerful. However, Robert Glass points out in his book that 40 to 80% of the total cost of software is in maintenance of existing code, and in this area these very brief languages may not excel. The higher the power level of a language, the more difficult it can be to read, because more is happening under the hood.

All evidence shows that the industry selected for the highest number of billable hours, which steered it toward COBOL, and then Java, its successor. I think that we in the new-language community should acknowledge that inertia is the most powerful force in the universe, and that mainstream programmers will never adopt an efficient, compact notation to replace Java, because it would mean the loss of too many billable hours. It would be like asking lawyers to adopt standardized forms for court cases and end the practice of making custom documents for routine items. In the legal world, the bulk of the costs of a lawsuit are incurred during the discovery phase, yet most cases are settled very near the court date, which means the discovery costs were wasted. If lawyers stopped doing this, their incomes would plummet.

The next generation of powerful but simple languages will forever be a relatively small proportion of the usage space. That doesn't mean these tools aren't worth building; definitely they are, but let's not be overly optimistic that programmers industry-wide are willing to see their incomes drop. Just as in law there is a relatively fixed number of disputes per year, there is also a relatively fixed amount of software to be built. It isn't infinite. Once one company builds a great self-driving car software suite, there won't be a need for 100 others. Once you have built a bug-free stack for 5G cellular radio, it will last 10 to 20 years. There is a lot of inertia out there, and object-oriented programming, one of the greatest boondoggles ever foisted upon the business community, has been admitted to be one by only a minority of programmers such as myself. Now that there is a mainstream alternative to OOP, namely functional programming, people can finally admit OOP is crap. Until they had a replacement, they didn't want to get in trouble for continuing to use such a failing technology.

By failing technology I mean chronic project overruns and horribly buggy software. We can, and will, do better.

Boy am I getting fed up with academia

Boy am I getting fed up with academics who claim they are working in the area of advanced computer languages, yet don’t have the time of day to even peek at any one of a dozen great new programming language projects underway. They are all busy proving that 2+2 is 4 or some such trivial task, and their vaunted proving systems, which they have been working on for 20 years, can’t even prove a tic-tac-toe game correct, because a program that draws on the screen is beyond their state of the art. Yet the fact that you can't prove graphical interactive software correct isn't stopping billions of people from using their cellphones every day. At some point practicality has to be considered. There is nothing settled or wonderful about the current state of software development techniques, and when I went to college the universities were doing research that moved the industry forward. The universities are the natural birthplace of new languages and techniques; instead they have become so conservative and hostile to new ideas that they are part of the problem. Sorry, but with the billions being poured into higher education, and computer science departments in hundreds of universities around the world, we should see more progress! The educational system in computer science is burning a lot of money with negligible contributions in many areas.


JavaScript vs. ActionScript 3

A lot of people disparage ActionScript 3 as a dying or dead language, and heap scorn upon it. However, this is really just a smear campaign started by the sometimes mean-spirited Steve Jobs. JavaScript is almost a perfect copy of ActionScript 2, and one by one the JS team has been adding in the missing features that ActionScript 3 introduced. One of the most important features added in ES2015 is the module system: now you can import from different modules and keep your namespaces separate. Interchangeable parts are one of the two super important features of the next generation of software technology, and modules are essential to them.

However, I just found an unbelievable JS bug that appears in both Safari and Chrome. If you are writing some JS, look out for this one:

Let's say you have a standard library module that defines some functions and constants, but FOOBAR isn’t one of them.

import * as std from './stdlib.js';
let z1 = FOOBAR; // the compiler catches this: the local name FOOBAR is not defined
let z2 = std.FOOBAR; // std is a valid module name, but FOOBAR is not a known symbol inside it; the compiler lets it slide
let z3 = stx.FOOBAR; // the compiler catches this: the module name is misspelled

If you make a typographical error and reference a symbol that doesn’t exist in the module, perhaps because of a small spelling mistake, the compiler doesn’t catch the undefined name; it treats the access like a property lookup that simply returns undefined. They forgot that a module name prefix is not an ordinary object, and that all imported symbols must be resolvable.

This is a massive error in both Chrome and Safari; I can’t believe they didn’t catch it. It is most unfortunate that the folks at Google and Apple didn’t spend more time with Ada or Modula-2, which not only had modules but separately compilable modules, something that JS doesn’t have. Frankly, you cannot have interchangeable parts without separate compilation. I wonder how progressive web apps are going to work without addressing some of these issues.

This makes modules incredibly dangerous compared to one big glob of code. The whole point of modules is to split namespaces so you don’t accidentally use a variable that isn’t part of your region of code, so this is tantamount to sabotage of the module system.
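A defensive workaround, for what it’s worth: named imports are checked at module-resolution time, so the same typo becomes a hard load-time error instead of a silent undefined (FOOBAR is the same hypothetical missing symbol as above).

// The fix: ask for the symbol by name. The engine verifies the export
// exists while loading the module graph, and fails fast with something like
// "SyntaxError: The requested module './stdlib.js' does not provide an
// export named 'FOOBAR'", instead of silently handing back undefined.
import { FOOBAR } from './stdlib.js';
let z4 = FOOBAR; // never reached if the export is missing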

This is yet another reason why JavaScript is such a piece of crap. The fact that they haven't figured this out in the three years since 2015 goes to show how poorly thought out and implemented JS is, and I and many others can't wait for the day when we bury that language!

Writing in ActionScript 3 you get a very solid compiler and toolchain, and AS3 and JS are so close now that you can convert one to the other with mostly just a series of find/replace operations in a text editor. Instead of using TypeScript, I suggest you try ActionScript 3, which retains type information at runtime: use AS3 for your mobile and desktop targets, and then convert to JS with a simple script for the web. This way you get good protection; TypeScript can't carry its checks into runtime.

Scale of sophistication of languages

There are many aspects to a programming language. One way to evaluate sophistication is to make a radar graph. Here we present the following scales; in each category, 1 is primitive and 5 is deluxe.

Type protection

  1.    easy to make a mistake; turn a number into a string accidentally
  2.    silent incorrect conversion
  3.    some type checks
  4.    strong implicit types
  5.    Modula-2 type airtight range checks, etc.

Arithmetic safety

  1.    + operator overloaded, can't tell what operator is actually being used
  2.    overflows undetected
  3.    selective control of overflow/underflow detection (Modula-2)
  4.    improved DEC64 arithmetic: 0.1 + 0.2 does equal 0.3 (see the sketch after this list)
  5.    infinite precision (Mathematica)
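To make level 4 concrete: under the IEEE-754 binary floating point that JavaScript uses, the classic check fails, which is exactly what DEC64-style decimal arithmetic is meant to fix. A quick console sketch:

console.log(0.1 + 0.2); // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
console.log(Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON); // true, the usual epsilon workaround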

Primitive data types supported

  1.    numbers, strings, Boolean
  2.    includes one dimensional arrays
  3.    includes multi-dimensional arrays
  4.    includes structured types, dates, records, sounds, etc.
  5.    includes the SuperTree

Graphical model sophistication

  1.   none, all drawing is done via library routines
  2.   console drawing built into language
  3.   2D primitives, weak structuring
  4.   2D primitives, strong structuring
  5.   3D drawing  (unity)

Database sophistication

  1.    external to language
  2.    indexed sequential or hierarchical model
  3.    relational database built in or merged into language (PHP)
  4.    entity-relation database built in
  5.    graph database built-in (Neo4J)

Automatic dependency calculation (also called lazy evaluation)

  1.    none  (C, JavaScript)
  2.    evaluation when needed of quantities
  3.    automatic derivation of proper order to evaluate (Excel)
  4.    automatic derived virtual quantities
  5.    automatic calculation of code to execute, with backtracking (PROLOG)

Automatic dependency drawing (also called auto refresh)

  1.    none (C, JavaScript)
  2.    simple redraw support (Win32)
  3.    automatic redraw using framework (React)
  4.    automatic redraw without having to use framework
  5.    automatic redraw of code that references changed quantities
The area enclosed by the shape is a good approximation of the relative power of the language. As you can see from the diagram above, JavaScript is a fairly weak language, and Swift is more powerful. The Beads design is much more advanced than Swift. Fans of Dart, Go, Red, etc., are invited to send me their numbers for each of the categories and I will add them in.

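For the curious, the enclosed area of such a radar chart is easy to compute: with k axes spaced evenly and a score on each, the polygon is a fan of triangles between adjacent spokes. A small JavaScript sketch (the example scores are hypothetical, not official ratings):

// k axes at equal angles of 2*pi/k; adjacent spokes r[i], r[i+1] enclose a
// triangle of area 0.5 * r[i] * r[i+1] * sin(2*pi/k). Summing gives the area.
function radarArea(scores) {
  const k = scores.length;
  const wedge = Math.sin((2 * Math.PI) / k) / 2;
  let area = 0;
  for (let i = 0; i < k; i++) {
    area += scores[i] * scores[(i + 1) % k] * wedge;
  }
  return area;
}

// Hypothetical scores on the seven scales above:
console.log(radarArea([1, 2, 2, 1, 1, 1, 1])); // a weak-language profile
console.log(radarArea([4, 3, 4, 3, 2, 2, 2])); // a stronger-language profile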

Simplification is easy

Okay ready to write my program now...

Today the programming world is facing an explosion of code libraries. People are writing code all over the place, and to save duplication of effort, we are trying as a profession to share previously done work. Two problems immediately crop up. Let's say, for argument's sake, that we are using someone's bar chart module, and that module was at version 1 when we adopted it. If we copy the bar chart code into our project, it will continue to work as it did, but having disconnected from the update stream, we won't receive fixes for any errors in barchart version 1. If instead we link to the constantly evolving barchart code, and the author changes how the module is used, then our code can, and eventually will, break. So we are stuck choosing between copying, which duplicates immediately obsolete code, and being at the mercy of breaking changes in the evolving components we reference.

This, in a nutshell, is what drives the explosion of slightly varied code bases, which is growing at a geometric rate. There are billions of lines of code already written, yet every day developers ignore them, because who knows how good the code is, or what trajectory the code base is on?

The solution is that the evolution of modules must proceed in a highly upward-compatible way. To keep version 1 working while new features are added, one must change how a component is accessed. In modern languages, the only way to reference external code is to call a function with parameters, and once the parameters change in name, position, or quantity, most languages break on that inevitable change. So the enemy of stability is function parameters, which in most languages have a rigid order. The other improvement one can make is to pass a version number to all functions, declaring the intention to use the functionality as of version 1. Say version 2 of the barchart plays music: you can send version 1 as an input parameter to the module, and the designer can then guarantee that version 1 behaves the same as before.
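Here is a minimal sketch of both ideas in JavaScript (the names are hypothetical): named options instead of positional parameters, plus an explicit version field whose old behavior the module promises to keep.

// Named options mean that adding new options never breaks old call sites.
function drawBarChart({ version = 1, data, title = "", playMusic = false }) {
  if (version === 1) {
    playMusic = false; // version 1 callers are guaranteed the old, silent behavior
  }
  console.log(`rendering "${title}" with ${data.length} bars` +
    (playMusic ? " (with music)" : ""));
}

// A version 1 caller keeps working even after version 2 adds music:
drawBarChart({ version: 1, data: [3, 1, 4], title: "Sales" });
drawBarChart({ version: 2, data: [3, 1, 4], title: "Sales", playMusic: true });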

How you communicate large and complex amounts of data to modules is the key weakness in modern languages. In Pascal, C, Fortran, COBOL, and other languages of the 60s and 70s you had records. Those were dropped in Java, JavaScript, and many other languages of the 80s, 90s, and 2000s. If you don't have records, but amorphous objects, you have a nightmare of missing-information error states. So the tradeoff in module access is that either you force a rigid record structure, or the module code must increase in "toughness": the ability to not break when inputs are missing.

In the old days, every function had an error return. This is now considered cumbersome, and in JavaScript, for example, one hardly ever sees error results being passed back or checked. This creates really flabby code, where errors propagate silently and, like a metastasizing cancer, move to a new location. It is gratifying to see error codes returning in Go; however, that is not the best way to do error returns. Excel is the master at handling errors, and programming languages would be well advised to study Excel more closely.
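Excel's trick is that an error is a first-class value: a #DIV/0! flows through every formula that references it instead of silently vanishing. A rough sketch of that style in JavaScript, with hypothetical names:

// An error is a value that flows through subsequent computations
// instead of being silently dropped, like an Excel cell error.
const DIV0 = { error: "#DIV/0!" };
const isErr = (v) => v !== null && typeof v === "object" && "error" in v;

function divide(a, b) {
  if (isErr(a)) return a; // propagate upstream errors untouched
  if (isErr(b)) return b;
  return b === 0 ? DIV0 : a / b;
}

function add(a, b) {
  if (isErr(a)) return a;
  if (isErr(b)) return b;
  return a + b;
}

// The error surfaces in the final result, like a cell showing #DIV/0!:
console.log(add(1, divide(10, 0))); // { error: "#DIV/0!" }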

The graveyard of once-promising computer languages

 

I agree that one should fix a language's flaws, even if it means making breaking changes; however, that doesn’t change the fact that history shows you only seem to get one chance to get a language right. Oberon, for example, had a fatal flaw in version 1, and because of that flaw the early adopters, who are so critical to the success of a language, abandoned it and moved elsewhere. The truth is that adoption by “the herd” is driven by a small group of tastemakers. This is true in many different fields. Take music, for example: there are critics and “hip” people who spread the word that so-and-so is “hot”. They have a huge effect on the endpoint of the popularity curve, because the tastemakers function as filters for society at large, who are so busy in their toils that they don’t experiment much; the safety of herds relies on mass adoption and clustering. Want to be a hotshot artist in the USA? Get on the cover of Artforum magazine.

Humans are herd animals like horses: they run in packs, and stragglers are picked off by predators… so all evidence shows that your version 1 either makes you or breaks you. After all, the number of people using a language is a big factor in picking it to begin with. I have a saying: inertia is the most powerful force in the universe. Looking at the pile of dead languages, and there are over a thousand that probably didn’t even make it to 10,000 users, the early adopters are a picky bunch, and don’t suffer glaring omissions kindly.

But being the eternal optimist that I am (and what programmer isn’t an optimist), I have made provision for graceful evolution of the Beads language by incorporating a feature whereby the code can be automatically upgraded to later versions, something that should be present in all languages but is inexplicably rare.

To make the language comparison process less religious and more objective, I am running “drag races”, where each language can be directly compared to another by implementing a precisely specified task. Then, to really add to the realism of the challenge, we take the perfectly operating program, modify the spec in some small way, and hand it to a new person, not the author, to update the code base to conform to the changed spec. For example, in chess you could augment the rules by allowing castling through other pieces, or giving the pawn the option to move 3 squares on the first move instead of just 1 or 2. That is the acid test: can a different person improve someone else’s work? This is measurable, and actually matters the most, because code re-use is terribly low. In the real world, laws and conditions are constantly changing, and a program has to evolve.

And on the point of allowing a language to evolve gracefully, I have added a feature to the syntax so that old code can be automatically updated as the language evolves. My experience is that in business, programs last decades, and by then the language has evolved so much that the existing code base gets cut off from the flow of the language, because upgrading the compiler would disrupt reliability. People freeze the compiler more than they freeze the code, because a compiler upgrade can have unknown, possibly drastic, effects. As soon as a big program stabilizes, there is a natural tendency to break it off from the “stream of constant breakage”. I know that in my day job at a telecom firm, we downgraded almost every PC in the company to Windows 7, because it works so much more smoothly. Look at the outrageous tactics Microsoft is employing to force its user base onto Windows 10. I personally think Steve Ballmer should be in chains for his role in productivity abatement.

Programming challenge - a chess game

I present here an excellent challenge for those wishing to see how good their favorite language is. This is a project that should come in at around 2500 words, so not a large program by any stretch of the imagination. Although program size is often quoted in lines, some languages are more vertical than others, and that measurement highly distorts the true size of a program. Writers of English, such as journalists, are very familiar with word counts, so we are henceforth adopting the word as the basic measurement of a program's size. 2500 words translates to under 1000 lines.

The commonly used Hello World type of program tells you almost nothing about a programming language, and is a waste of time. The snake challenge is okay, but it is still a little too simple to show how good a language is; even a bad language can deliver a snake program without seeming bad at all. This task, though, is of sufficient complexity that the true merits of a language shine through.

You can get the chess challenge specification and sample implementation here.

Here are some screenshots from the reference implementation, which is included in the specification package. I will post the various entrants once they are submitted, so people can compare the language implementations side by side.

[Screenshots from the reference implementation: landscape and portrait layouts.]

The problem with Donald Knuth's Algorithms Books

First, let me say before I get critical about Knuth's work that he did an incredible amount of systematic, meticulous, highly accurate, and important work. The problems I am going to talk about relate to the format he used to encode his work. The bottom line is that Knuth was like a Shakespeare who decided to write in Pig Latin.

Knuth came up with the idea of literate programming as a sequel to structured programming. Unfortunately his approach was completely wrong. I am not the only one who has pointed this out (http://akkartik.name/post/literate-programming). I am a nobody compared to Knuth, so stating that the emperor has no clothes may seem offensive. I don't mean any personal disrespect to Knuth, but his TeX product has left me cold from day one. Having your code comments presented in nicer typography is well and good, but honestly, do you consider Knuth's TeX system any good? It is a disaster in my book: a bizarre, complex curiosity that he squandered a big chunk of his career on. Knuth went so far as to refuse to adopt TrueType or any other commercial format for encoding fonts; he invented his own font format, Metafont. He was so famous and influential that nobody pushed back on him, but who else on earth would refuse to use one of the standard type formats? It's like someone building a motorcycle and deciding that neither English units nor metric is good enough, and that you need an incompatible set of nuts and bolts. The history of computers is full of non-agreement on basic standards: ASCII vs. EBCDIC, Mac vs. PC, iOS vs. Android, etc. But why, when 99.9% of the world is in one of two camps, would you invent your own third form, which has no substantial advantages?

Knuth's choice of MIX was never a good one. At any moment from 1965 onward, a single company has owned the lion's share of the CPU market. It was IBM, then DEC, and for at least 36 years since the IBM PC, the Intel instruction set has had 99% of the desktop market. Nowadays the ARM instruction set has 99% of the mobile market, but server and desktop remain 99% Intel architecture. If Knuth had, for example, picked the Intel instruction set, most of the code would still run fine, because the Intel architecture has been phenomenally backwards compatible, and there are many commercial cross-assembler tools that efficiently convert the Intel instruction set to other chips like the Motorola 68000, MIPS, ARM, etc. By using MIX he doomed his unbelievably meticulous work to be basically unused today. What company can make a living selling and maintaining a MIX cross-assembler, when only one human in history ever used MIX? I argue that Knuth was being perversely manufacturer-neutral. I can't tell you how many programmers I have seen with his books on their shelf who never actually used them.

What creates the best products?

 

In a recent interview, one of the founding geniuses of Apple pointed out that when people bring forward a product that they themselves want, you get the best results. This is the problem with giant corporations and innovation: the individual is blended out of the picture. In fact, when you look at the track records of large companies, they usually acquire new ideas from outside.

“...when things come from yourself, knowing what you would like very much and being in control of it, that's when you get the best products.” 

Steve Wozniak, interview, September 2017

Programming challenge - a Snake game

This is one of the benchmark programs I am using to compare one language to another. Try building this game in your language of choice and submit your program; we will analyze your code and show how it stacks up against other implementations. There is a live-coding tutorial on YouTube that builds a more rudimentary version of the snake program in under 5 minutes. That 5-minute program has numerous flaws and covers only about half of this specification, but it does show how flexible and fast JavaScript is.

The game runs at 6 frames per second, and the snake moves one cell per frame. If the snake moves into the apple, its length is increased by one, a crunch sound is emitted, and the apple is moved to a new cell. If the snake crosses over itself, a beep is emitted and the part of the snake from that cell onward is erased. The snake wraps around the board in all four directions.

The playing board, drawn solid black, is subdivided into square cells of 42 points. At the start, the snake is set to length 1 and positioned at cell (4,4). The apple, which the snake's goal is to eat, occupies one cell, drawn as a circle in HTML color crimson, and is placed at a random location on the board.

 

At the start of the game the snake is paused. As soon as an arrow key is pressed, the snake begins to move in that direction and leaves a trail behind. Initially the trail is limited to 5 cells total. If the snake moves into the apple, the snake's length grows by 1. Once the apple is eaten, it is immediately respawned in a random new cell. The apple may occasionally end up placed inside the body of the snake; however, the snake is only considered to have eaten the apple if its head moves onto it. The snake cells are drawn as 40 pt squares with 2 pt on the right and bottom to create a separation, in alternating colors lime green and lawn green. The head of the snake is drawn as a rounded rectangle with a corner radius of 8 pt and a border of 2 pt dark green. The remaining cells of the snake are drawn as rounded rectangles with a 2 pt corner radius.

The only input to the game is the keyboard. The arrow keys change the direction of the snake; however, to prevent frustration the player is not allowed to move the snake head back into itself. So if the snake is traveling east, an attempted move west is ignored. A command to move in the direction already in motion is also ignored. To permit fast maneuvers the direction inputs are queued, so that one can do WEST - SOUTH - EAST to perform a 180 degree turn in 3 consecutive frames. Pressing the space bar pauses or resumes the game.
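A sketch of that queued-input rule in JavaScript (the names are hypothetical, not part of the spec package): keydowns are pushed onto a queue, and each frame consumes at most one direction, ignoring reversals and repeats.

const OPPOSITE = { N: "S", S: "N", E: "W", W: "E" };
const inputQueue = [];
let heading = "E";

function onArrowKey(dir) { // called from the keydown handler
  inputQueue.push(dir);
}

function nextHeading() { // called once per frame, 6 times per second
  if (inputQueue.length > 0) {
    const dir = inputQueue.shift();
    // Ignore a repeat of the current heading and a direct reversal.
    if (dir !== heading && dir !== OPPOSITE[heading]) {
      heading = dir;
    }
  }
  return heading;
}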

As the snake grows and shrinks, the current size and the high score are reported at the top of the screen in a topmost layer at 30% opacity, centered in 28 pt black text, for example: "8 high:22".

The default starting window size is 700 x 700 interior pixels. If the user resizes the window, the game is reset back to the starting state.

Note: since the window size will not usually be an even multiple of 42 points, the cells are slightly stretched so there is no dead space left over. The program must first figure out how many whole 42 point cells fit in the X and Y directions, then divide the width and height by the number of cells in each direction.
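In JavaScript, that sizing rule comes out to a few lines (a hypothetical helper, not part of the official spec package):

function cellLayout(widthPx, heightPx) {
  const cols = Math.floor(widthPx / 42); // whole 42 pt cells across
  const rows = Math.floor(heightPx / 42); // whole 42 pt cells down
  return {
    cols,
    rows,
    cellWidth: widthPx / cols, // slightly over 42 unless evenly divisible
    cellHeight: heightPx / rows,
  };
}

console.log(cellLayout(700, 700)); // { cols: 16, rows: 16, cellWidth: 43.75, cellHeight: 43.75 }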

To help you with your game, here is the specification in PDF format and the sound files as a zip file.

 

Can I sue someone for stealing my idea?

A lawsuit is a 4-way race. There are two attorneys, and the two litigants. The lawyers come in first and second, and the only real uncertainty in this contest is finding out who comes in last. A successful lawsuit from the point of view of the lawyers is a long, protracted battle on paper that drains both sides more or less equally. If you have the luxury of a large cash position, and are in no particular hurry, a lawsuit is a wonderful way to drain the finances and pin down your opponent. Note that in most cases lawsuits only benefit large corporations, who can easily absorb the costs of litigation when compared to their immense cash flows, while for an individual a lawsuit is a great burden. So assuming you are the solo entrepreneur, litigation is out of the question.

The sad truth is that the inventors and discoverers of immensely important things like the laser, the transistor, and radio have rarely benefited much in history. It is quite rare that a scientific invention or engineering breakthrough generates riches for the actual inventor or discoverer. We see a lot of millionaire movie stars and athletes and salespeople. They are paid fantastic sums, while the people who create the breakthroughs of society often toil away in a startup that fails; the idea was sound, and it gradually gets adopted widely and the world benefits. This is life as it is in our century. Maybe someday athletes won’t be so highly prized, and idea people will get more recognition and reward. It is not happening any time soon; in fact, the recent record-setting $200 million, five-year contract for a basketball player shows that athletics is gaining ground, and VC firms seem to get the majority of the money in startups, usually more than the founders.

In today’s world, showing people your idea comes with the risk of it leaking out. The investors you visit and demo to may have already invested in a similar project that lacks some of your ideas, and you can expect that whatever materials you hand out will be distributed. As my friend Paul says, a secret is a piece of information you tell one person at a time. Avoid showing people your inner secrets; it is too risky. Who wants to become that grumpy, bitter person who experienced idea theft? Some people only have one or two good ideas in their whole life, so treat them like golden inspiration, and do your best to refine each idea so that it can be commercially successful. That may mean bringing in a partner who has no technical expertise but knows how to sell. Many companies are based on a pair of complementary people, like Wozniak and Jobs, or Roy and Walt Disney. But also keep in mind that ideas are born of their time: when Alexander Graham Bell was in the patent office in Washington, DC submitting his telephone, there were people right next to him in line with almost the same idea! Numerous times in history the same product has been invented at around the same time in different countries. The reason Edison was the greatest inventor of all time is that he was developing products at the very moment electricity and modern technology came into being, so he could put together all the combinations of sub-technologies that created the phonograph, the movie projector, and so on. So don’t wait too long on your idea. The tide waits for no man, and your idea might not be useful in 5 years. Sometimes you have to be willing to drop everything else to get your idea done faster. This is the exciting, risky world of invention we are talking about. Not for the faint of heart!

What makes a 10x programmer?

There has always been a huge range of productivity and quality among programmers; in the old days it was estimated as a 20:1 range. In the arts there is more than a 1000:1 range in skill; in fact, it is impractical to compare skill in the arts because it is subjective, but at an objective level there are obvious signs of superhuman capability, especially in music composition (which is a different kind of “programming”). Take, for example, Donizetti’s composition of “Don Pasquale” in around two weeks, which is like writing 50,000 lines of code in two weeks: an incredible feat, and one of the greatest artistic achievements of all time. So 10x for programming doesn’t sound unreasonable at all.

An environment with peace and quiet is very important for productivity, but putting a lousy programmer in a quiet environment still won't fix their code. Having studied a lot of code, I have determined that great programmers all use the same fundamental technique. The purpose of this technique is to generate fewer errors during authoring, and a more regular code structure that is also faster to write. Keep in mind that approximately 85% of your time as a programmer is spent fixing your own mistakes. If you learn to write in a way that produces near zero errors, you will be six to seven times faster than you were before, which is well on your way to 10x.

This technique is not taught in school or even in books. It can be roughly described as "complexity reduction". As N. Wirth pointed out in his famous book "Algorithms + Data Structures = Programs", all programs break down into those two parts: algorithms and data structures. The fundamental algorithms of most commercial programming tasks are pretty simple. 99% of us programmers are not building vision systems or translating English into other languages; most of us have pretty simple algorithms to follow, like “sort this table of sales results in descending order and take the top 10”. Most of the data structures we use are also fairly simple, like a table of transactions for the month. So given that the majority of business programming tasks have simple data structures and simple algorithms, all the programs should be more or less identical once you remove the skin. Programmers build a bridge from the customer's problem space to the final API of the operating system or environment being used. Some programmers construct a bridge that goes nearly straight between those two endpoints; others create a more wandering path.

The best programmers minimize the number of variables and the number of layers, and through a gradual reduction process create less code that is clearer and cleaner in purpose. This reduction process is fairly mechanical; all the best programmers do it instinctively, while bad programmers don't see it at all. In the old days of circuit design, you would use Karnaugh maps to reduce the number of gates needed to express a truth table; the best programmers are doing exactly the same kind of thing in their heads to minimize the total amount of logic and layers.
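A trivial illustration of that Karnaugh-map mindset applied to code (the condition names are hypothetical): the same truth table, expressed with less logic.

// Before: two variables and redundant branches, but the truth table
// only ever depends on isActive.
function canShipVerbose(isActive, isPaid) {
  if (isActive && isPaid) return true;
  if (isActive && !isPaid) return true;
  return false;
}

// After reduction: (A && B) || (A && !B) simplifies to A, exactly as a
// Karnaugh map would show. One variable, no branches.
function canShip(isActive) {
  return isActive;
}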

Could we improve the skill of programmers? Can this be taught? Absolutely. But in order to learn you have to be willing to learn in the first place, and a lot of people think that because they got a program working, it is "good enough". Most people feel that however ugly, long-winded, and tortuous the path was, that is unimportant.

Imagine if, in the arts and crafts, only raw execution of the task were considered. Barbra Streisand and Celine Dion would be underbid by sandpaper-voiced tonal disasters, because "hey, they sing the same words and get through the song okay". Thank goodness in music and art we have a lot of freedom to choose what we listen to, and talented people tend to attract a large following.

Because so few people can quickly read the work of other programmers (usually because the code is so long and complex), we live in an era where code quality is invisible, and the sublime skill that exists in some people goes completely unrecognized. Without recognition, how can training get anywhere? In bridge building, connoisseurs all admire the work of Robert Maillart. Look him up: that guy was incredible in the early 1900s, a real fusion of art and science. The engineering fields don't get the recognition they deserve for the elegant, artistic work that exists. How about the Hetch Hetchy water system of San Francisco, which sends water 160 miles almost completely powered by gravity? That is insanely clever. A modern wonder of the world, if you ask me.

It is unfortunate that so few companies have the luxury of having two or more teams working on the same problem so that the best team can be identified. Usually we see only one team, so objectively speaking one is often left guessing how good the team is. Interestingly enough, back in the days of Thomas J. Watson at IBM, when a critical task came up, he would typically put three teams on it, and whichever lab won the contest got an increase in staff and money. This approach worked very well for IBM, which was basically a monopoly at the time, and kept the internal teams at a very high level of energy. I worked on an IBM project after Watson's retirement, and by then the management of IBM had so deteriorated in integrity and guts that every team had to be a winner.

Another thing we see today is the fake 10x programmer, who copies and pastes large chunks of code from other programmers and stuffs his project with tens of thousands (if not hundreds of thousands) of lines of code that do some mysterious thing. It is extremely foolish to measure productivity by lines of code, because people can create a monster code base in no time flat by merging modules from other sources. I admire short, clear, concise programs that get the job done and are almost eternal in their purity.

Is quantum computing real?

Quantum computing is real. I would describe it as a real boondoggle. The definition of a boondoggle is “work or activity that is wasteful or pointless but gives the appearance of having value”.

The people selling quantum computing mention cryptography and other applications that seem plausible. But when the NSA wants to crack codes, they just fire up a million-core computer; that doesn’t require any breakthroughs, just lots of hardware and a big room. Quantum machines have no practical use, nor will they for decades to come.

It is extremely fascinating to play around with superconductive materials, so I don’t blame researchers for using any excuse to obtain funding. At very low temperatures, metals lose their normal resistance to electricity, and one can produce phenomenally large magnetic fields, which create all sorts of exotic conditions that don’t exist at room temperature.

You have to consider quantum computing a side branch of particle physics, which is a fascinating and baffling area of study. But don’t expect to see a practical product in your lifetime. It is most unfortunate that the area of physics that holds the greatest promise of improving mankind’s life on earth, low energy nuclear reactions (also called “cold fusion”), is getting negligible research money, while we spend billions on things like hot fusion, which has zero likelihood of producing usable results. Hot fusion has been predicted to work “in 30 years” for the last 50 years; 30 years is enough time to retire and die before anyone can assign blame for the lying.

There is a fellow working on low energy nuclear reactions who postulates a lower energy state of the hydrogen atom he calls the “hydrino”, and proposes that the dark matter the astrophysicists insist exists is large quantities of hydrinos. Anyone who has studied the standard model has to admit that it is a very messy, unsatisfying theory, and doesn’t explain radioactive decay at all. There are many mysteries still to unravel, and any day I expect a major breakthrough that will cause the current standard model to be tossed into the wastebin.

Is global mutable state bad?

This is one of the great buzzword phrases in computers today: eliminating mutable state. Too bad these faddists fail to understand how a computer actually works. A computer fundamentally runs on global mutable state, called RAM and CPU registers. Those are constantly mutating and globally shared. Yes, you can get into trouble, but all programs eventually devolve into something that updates global mutable state; otherwise no work actually gets done. How one can structure this inevitable changing of global state so as to have the fewest programming errors is an area of great theoretical and commercial interest. But to characterize the basic operation of what must eventually happen as “bad” is somewhat absurd.

One might more properly phrase the question: “How can I reduce the errors in my program to a minimum, while still accomplishing my goal in a reasonable time frame, and with a finished product that can be understood by other people without them scratching their heads about how it works?”

There are languages like LISP and FORTH, in the family called transformation languages, that win every contest imaginable for program brevity, but I defy you to understand a large FORTH or LISP program! Very tough.

 

How do below average developers survive?

How do below average workers survive? Simple: we are human beings, and pure performance has never been the benchmark for employee retention. Some people are fun to have around, some are loyal, some are owed favors, some are relatives, some bring out great qualities in others even though they themselves don’t appear to do much (the catalyst type of person)… you get the picture. Half of all workers are below average, so who cares? And who is some almighty judge deciding who is better or worse than another person? I am a terrific programmer, but no programmer knows every application area or toolchain. If you put a giant Ruby on Rails program in front of me, I would be clueless; although I can work in a dozen languages, Ruby isn’t one of them, and it would take me months to get up to speed on it.

The real questions one should ask are: how do I keep up with the constant changes without burning out? And how can I maximize the value of what I do know?

In a well-run company, the systems are so well designed that an ordinary person can get the job done in an 8-hour day without heavy stress. That is the beauty of a well-designed company system. In entrepreneurial discussions you hear absolute nonsense about how you should only “hire great people”. If everyone were great, your company would be unprofitable, because you would be spending too much money on all those great people. As my dear friend J. Moon points out: Ray Kroc didn’t invent the hamburger, but he created an amazing system that delivered a consistent product at massive scale, with ordinary people.

Why isn't the Haskell language more popular?

Please note that this answer is not a criticism of Haskell, but an impartial observation of the qualities it possesses, which explain why it is not in general use. We know that some people love it, but it isn’t under serious consideration by anyone to replace Java or JavaScript, the two most popular languages today (excluding MS Excel). And you could substitute for Haskell any number of powerful languages that have been around for more than 20 years yet are still very obscure.

A computer language can be characterized in many ways, but the most important aspects are:

(A) power,

(B) brevity,

(C) range,

(D) efficiency in speed/size, and

(E) ease of reading and maintaining

Power can be thought of as: can a program be assembled from smaller pieces into a powerful, complex final product?

Brevity is about how many keystrokes you type to achieve a result. Often a powerful language has brevity; however, extreme brevity such as APL and LISP possess entails trade-offs.

Range is about how many different kinds of systems you can build with the language. Real-time communications systems? Video games? Car combustion computer software? Payroll?

Efficiency is about how many resources it takes to run the final work product. A combustion computer needs to start up very fast, and often has serious constraints on RAM and CPU power.

Ease of reading and maintaining is all about someone other than the original author being able to understand and modify the code without breaking the system.

In the commercial world, ease of reading and maintaining is the dominant concern. If you use an obscure language, not many people can read it. So people tend to use the same languages they did 10 years ago, and language preferences move at glacial speed.

Haskell is not great at (D) efficiency, but it really suffers at (E) ease of reading and maintaining. Otherwise it is terrific, and the people who only care about properties A, B, and C can sing the praises of Haskell until they are hoarse. However, (E) is the fatal flaw in Haskell, and large Haskell programs are exceedingly difficult to understand. It will never be popular, and just like LISP and dozens of other powerful languages, it will remain niche forever. There is a reason why BASIC, C, Pascal, and many other very simple languages have been so influential: they are pretty easy to read. Any increase in power at the expense of ease of reading is ultimately judged by the general public to be an unacceptable tradeoff. A program is written once, used for decades, and along the way passes through many hands.

Please note that I am not considering the benefits to the programmer who may be seeking job security; there are many instances in the history of programming languages where the bulk of programmers selected the most verbose language available so as to increase billable hours, and rejected superior alternatives. In defense of Haskell and other advanced languages, part of the resistance to them is the realization that a more powerful and brief language would reduce overall billable hours.