Egads! Another new computer language? Why?

I started the Beads project because I was fed up with the current complexity of software development.

I was transitioning from iPhone and Android apps to web apps, and was shocked to learn that all of my code and tools had to be completely replaced for web apps, which for compatibility have to be written in HTML/CSS/JS.

HTML/CSS/JS is a crazy mixed-up mess. At no point in prior programming history did one jump around between three different languages inside one source code file. HTML can draw and style, but has no variables, looping, or if statements. CSS can style and animate, but has no variables, looping, or if statements. JavaScript can’t draw, but has variables, looping, and if statements. The three languages don’t even agree on the most fundamental syntax of all, which is how to denote comments: each sub-language has its own line comment and block comment syntax. That is just the tip of the iceberg of how crazy it is. I’ve been programming since 1970, and back in the punchcard days you had to mix a few lines of JCL into the front and rear of your card deck to get it accepted by the IBM 370, but that was trivial compared to the nutty situation of today.

And the JS frameworks come and go like storms. Trying to use a piece of code written for a different framework is either impossible or creates a hypercomplex double-framework mess. And even if you endure the framework turmoil, it only handles one target platform, the web. You can’t easily take your software from the web to iOS or Android, and the cross-platform tools like Electron are very complex. So one must conclude that software is too difficult to build, and that the giants of the industry have succeeded in keeping their developers captive, like private armies in 1500s Italy.

I started this project to make programming much simpler, like it was back in the old days, but allow you to easily draw nice graphics and use the latest devices. But even more important than making it simpler, I also wanted to make software more reliable. We are seeing the evidence of sloppy programming killing people, like the recent Boeing MCAS disaster, or the Toyota unintended-acceleration defect, which killed many. Both of these disasters were caused by programs that weren’t robust.

There is another goal I am pursuing, which is to make programming less frustrating. Right now the field of software development is restricted to people with almost infinite amounts of patience. You can measure the patience of a person by asking them the toughest jigsaw puzzle they can stand. Most programmers are in the 7000 pieces or greater category, while the general population doesn’t go beyond 1000 pieces. The tedious aspects of programming are not intrinsic to the field, but a byproduct of poor languages and toolchains. We need to open up programming to a wider range of personality types.

Humans are fallible, and it is too easy to make a small error in programming. I want the computer to help the programmer more: the design of the language should make certain kinds of mistakes impossible; when I do make a mistake, I want the computer to find it quickly; and if a user of my program encounters an error, I want to be sure I can reproduce that issue back in the lab and fix every single error reported.

To make things 100% repeatable, I built Beads from the ground up so that it is reversible in a special way: I want to be able to go back in time and see what the screen used to look like. It isn’t enough to reverse the state of my variables; I need to see what the screen looked like earlier, because debugging animations and fancy graphics requires this. That is not an easy thing to do, because the underlying CPU architectures were designed 50 years ago and made no provision for such a thing. The Intel and ARM CPU chips, which power 99.9% of all computers and phones in the world today, only go forwards. So I needed to build an emulation of a new kind of computer that can go backwards.
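The idea can be sketched in a few lines of ordinary Python (not Beads; all names here are my own illustration): checkpoint the entire world, including the list of drawing commands, at every step, so that "backwards" is just indexing into history.

```python
# A toy "reversible" machine: every state change appends a full snapshot
# of the world (variables *and* what was drawn) to a history list, so a
# debugger can step backwards as well as forwards.
import copy

class ReversibleMachine:
    def __init__(self):
        self.state = {"vars": {}, "screen": []}   # screen = list of draw commands
        self.history = [copy.deepcopy(self.state)]

    def step(self, mutate):
        """Apply a mutation, then checkpoint the whole state."""
        mutate(self.state)
        self.history.append(copy.deepcopy(self.state))

    def rewind(self, n=1):
        """Go back n ticks; returns the state as it was then."""
        self.state = copy.deepcopy(self.history[max(0, len(self.history) - 1 - n)])
        return self.state

m = ReversibleMachine()
m.step(lambda s: s["vars"].update(x=1))
m.step(lambda s: s["screen"].append("circle at (10, 10)"))
m.step(lambda s: s["screen"].append("line to (20, 20)"))
past = m.rewind(2)        # back before anything was drawn
print(past["screen"])     # → []
```

A real implementation records deltas rather than full deep copies, of course, but the principle is the same: if the history is complete, any past screen can be reconstructed.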

I wanted to fix some glaring omissions in computer languages that people have been asking for since the 70s. The NASA Mars Climate Orbiter, worth hundreds of millions, was lost because one team’s software used imperial units while another’s used metric. So I added physical units of measurement: in your Beads program you can add 3 feet + 4 meters + 2 inches and it will calculate correctly. These units are carried at run time, and the arithmetic is checked as the program runs. I guarantee you this feature will be copied into Julia and other languages once people get a taste of it, but you saw it first here.
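A minimal sketch, in Python rather than Beads, of what unit-carrying arithmetic looks like (the `Length` class and conversion table are my own illustration, not the Beads design):

```python
# A toy runtime-units value: every quantity carries its unit, and
# addition converts to a common base unit (meters) before adding.
TO_METERS = {"m": 1.0, "ft": 0.3048, "in": 0.0254}

class Length:
    def __init__(self, value, unit):
        if unit not in TO_METERS:          # checked at run time, as in Beads
            raise ValueError(f"unknown unit {unit!r}")
        self.meters = value * TO_METERS[unit]

    def __add__(self, other):
        result = Length(0, "m")
        result.meters = self.meters + other.meters
        return result

total = Length(3, "ft") + Length(4, "m") + Length(2, "in")
print(total.meters)   # 3*0.3048 + 4 + 2*0.0254 ≈ 4.9652 m
```

A full system also tracks the dimension (mass, time, energy, …) so that adding meters to kilograms is rejected outright rather than silently converted.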

Making layouts that flow into the hardware screen in a liquid manner is the big reason people use frameworks, so to eliminate the need for frameworks I have put layout planning and flowing into the language itself. This removes a whole layer of additional complexity.

Help start a revolution of simplicity in Programming! Sign up today

Are you like me, sick and tired of the current web stack and its absurd complexity? When in history did you have to write in 3 incompatible languages inside one source file? HTML can draw, but has no arithmetic or logic. CSS can style, but has no variables, looping, or IF statements. JavaScript has variables, arithmetic, and logic, but can’t draw! Then people started adding frameworks to supposedly help out, and now you have React, Vue, and many others. The frameworks change constantly, and code will barely last a few years without breaking. Because of the constant churn, using parts of other people’s projects becomes a nightmare of dependency and version conflicts. And who wants to learn multiple “make” systems, which are complex languages unto themselves?

And why does moving your web app to iOS or Android require learning entirely different languages and toolchains? The computers inside all these devices are nearly identical in structure. It is a huge waste of human effort, building the same product three different ways, because the platform owners want to create private armies of developers. The dominant companies all benefit by maintaining the status quo of hypercomplexity. Apple, Oracle, Microsoft, Facebook, and Google all stand to gain if things stay hard to build, because they are already on top, with huge staffs and fancy private tools outsiders don’t get to see.

The only way to combat this mess is to start completely from scratch. And I mean all the way down to the bottom, redesigning arithmetic so that you can add 3 feet + 2 meters and have it automatically do the unit conversions. Bad habits and poor decisions of the past need to be discarded, and forgotten good ideas from the brilliant inventions of the 70s need to be resurrected and finally used.

Forget about complex APIs; you should be able to build a Chess program with only 20 different function calls. Projects should be simple, usually a single compact file, and you should be able to target 5 platforms with a single code base (web, Mac, Windows, iOS, Android).

I have been working steadily on a new language that will be easy to learn and use for making graphical interactive software. I am looking for people to test it prior to its public release. Please send me an email letting me know your time zone and the kinds of projects you like to build, and I will get in touch with you. I am sure some will be disappointed that I don’t just post the whole thing today, but because I am giving startup training in lieu of completed documentation, I need to limit the amount of input I receive.

If you are a mac or windows platform user, and are interested in new languages, like Python or Modula-2, and would like to help start a revolution of interchangeable parts, I invite you to look deeper into my project.

Please note that for web development you only need a text editor like Notepad++ or TextWrangler, plus the compiler and runtime, which I supply. For Mac, Windows, etc., the compiler generates ActionScript3 code, so you can use the excellent Adobe AIR system, which runs on Mac/Windows/iOS/Android very nicely. Until we have our own debugger, debugging on those platforms requires Adobe Animate or Flash Builder.

I have posted some sample programs, annotated, just so you can get a flavor of the language. These are tiny examples, and do not show the many advanced features.

A very important quality in a programming language - MTTR BSOTTA

There are a thousand computer languages to choose from. How is a scientific person supposed to sort out which language is worth studying? Given that all of the top 10 languages in use are over 20 years old, surely there must be something superior by now. You wouldn’t dream of using a 20-year-old cellphone, so why are people using computer languages that are so ancient? Is there no comparable advancement in software compared to hardware? Are these old languages really that beautifully designed, or is inertia the driving force behind their continued popularity? The frequency with which software projects overrun their budgets and schedules points to some serious flaws in the popular languages.

Without measurement there is no science, and I propose we start by ranking languages by one of the most important qualities. Like weight, passenger volume, or miles per gallon in a car, there are objective measurements that can be taken on computer languages to help the process of sorting and ranking. There are objective qualities and subjective qualities. Let’s focus on the objective ones, because computer language discussions get nasty fast, since a lot of it is driven by personal taste.

Since computer languages vary so much, and since the tasks they can perform vary widely, it is hard to rank them in a single chart. Let’s try the strategy the US EPA uses for miles-per-gallon measurement: you put a car through a fixed route and see how it does. You keep the route the same, so that from year to year you can see the progress. That’s a sensible way to do it. There are 2 measurements that matter the most for programming, and although these quantities take some effort to measure, it is worth doing. The two measurements in question are:
1) ability to use interchangeable parts, and
2) the mean time to repair (MTTR) by someone other than the author (BSOTTA). That’s a military grade acronym there!

In the area of interchangeable parts, our industry reached a peak with VB6 from Microsoft. It generated a thriving ecosystem of little modules people could buy and plug into their programs for various purposes. It had millions of users, but MS abandoned it when they went to .NET. Borland’s Delphi has a small group that shares and sells parts, but it is a mostly dormant system. The current languages and toolchains have not created an ecosystem of interchangeable parts. Including a component from GitHub, which hosts tens of millions of freely available projects, entails a nightmare of dependency and library conflicts. I would say that we are currently not in an era of interchangeable parts, and thus we have massive duplication of effort, and no marketplace where people can share and use chunks of proven code easily. I will address interchangeable parts in another posting.

Let’s get back to MTTR BSOTTA. To measure this we take a series of programs of ascending complexity and do the following tests: 1) we intentionally introduce an error into the program and measure how long it takes to fix; 2) we ask for a change in the business logic or graphical presentation and see how long that takes. In both cases the task has to be accomplished without breaking the other functions of the program. For an accurate measurement, you will need to measure different people doing the work. One can also roughly gauge the skill level of the test programmers. Since programmers vary in skill, this testing regime will not be completely accurate; however, it will immediately show why some of the languages of the past have been discarded. APL, LISP, and FORTH are all amazingly brief languages, but companies over time learned to avoid those 3 languages because they have some of the highest MTTR BSOTTA scores (which is bad). Programs are typically written in a few months, then often run for decades, and the difficulty of maintaining products written in “tricky” languages compounds over that lifespan. You can produce good work in any language, and no language can prevent bad design. But the language does have a great influence on how people build projects, and how much self-documentation comes out of the code.

As the test program specifications grow in complexity, you will see how non-linear the repair time becomes. In commercial reality a project can reach a state similar to what I call “Artichoke Mode”: the condition where a programmer gets so tired that they start undoing correct, valuable work by mistake. In the case of the largest commercial projects, the team isn’t tired, but the project is so large and complex that the engineers break as many things as they fix, and the product reaches a steady state of bugs in the hundreds of thousands, where almost no progress is made even though expenditures are huge. MS Windows and Apple’s OSX are in that condition. Their employment numbers are at an all-time high, yet hardly any new products emerge. When was the last time Microsoft created a new program of value? And why does Apple take 6 releases to fix obvious bugs in OSX? In OSX 10.13 and 10.14 they broke printing for many months. It isn’t a pretty picture at the big companies.

I have designed the Beads language for the lowest MTTR BSOTTA in history: by lowering nesting depth, avoiding high abstractions which impede understanding, and avoiding large numbers of APIs. Normally one achieves power in software through fancy abstractions and deep layering. Beads combats complexity instead by employing symmetry and making the underlying model reversible.

The Beads system in 10 minutes

A Beads program is so compact in its notation that it is in essence an executable specification. I estimate half the code (as measured in words) compared to conventional languages like Java, twice the readability (as measured by how well a person other than the author understands the program), and ten times the repeatability (in terms of reproducing customer bug reports) compared to existing toolchains. Help me find out how good it is by taking it for a test drive yourself, comparing it against your favorite language in an apples-to-apples test.

The purpose of the Beads project is to drastically simplify the task of authoring and maintaining graphical interactive software. At present the complexity of the development stack is unbearably high, and there is very little code reuse, which causes similar programs to be written over and over. The most troubling part of the current software development process is the great difficulty of transferring ownership of a project to new staff members. It is quite common for minor changes from new staff members to have serious repercussions, and as a result software products are endlessly being patched. If you examine the employee counts of the major firms in the software industry, their staffs grow far faster than their number of products, indicating that maintenance costs are consuming an increasing percentage of the total spending on software.

The Beads system, which is based on the Beads language and its surrounding toolchain, is suitable for building the majority of the graphical interactive software that people need in business and industry.  It directly competes with TypeScript, JavaScript, Java, Python, Ruby, PHP, C++, C#, Go, Swift and other similar languages. It delivers a machine-independent virtual computer that can offer a very long lifespan of software, something that the frameworks and tools of today cannot do. It can do this because the language contains a drawing/event/database model inside the language, with almost no external dependencies. Many of the tools and frameworks being used today cannot even deliver 5 years of lifespan before the products built using those tools no longer function properly.

Let's examine a few key features that set Beads apart from some commonly used tools. Some people will object to including Excel, but Excel contains a programming language inside, and it is used as a programming platform far more than people realize.


Let's discuss the key features briefly:

The first major feature is that Beads programs have what some people call time-traveling debugging. 99% of all computers today use either the Intel or ARM instruction set in the primary CPU chip, which is the master brain of the computer. Intel and ARM computers can only run forwards. They have the ability to stop instantly at a specific point (they call this a "breakpoint"), but neither chip can run backwards. The Beads system creates a virtual computer of a more futuristic kind that can go backwards. Going backwards is very useful, because a computer program that malfunctions is like a train going off the rails and crashing into the woods: you have to go backwards in time to find the actual source of the problem. The task of debugging, which consumes over 80% of all programming effort, is hindered by not having a reverse gear. Debugging is the process of learning the cause from an effect, and mentally the programmer is backing up to where the instructions were incorrect.

Another major problem in software development is the issue of reproducing a problem. Once a product leaves the safe confines of the developers' lab or the company's quality assurance department, which systematically tests the software product, inevitably it malfunctions on an end user's computer, which has a slightly different state than the one in the lab. So big companies like Adobe and Apple have gigantic databases of unresolved customer bug reports; Adobe gets 25,000 bug reports a day by my estimation. A great quantity of the reported problems cannot be reproduced by the development team, which wastes their time and frustrates the user, because their problem doesn't get resolved in a timely manner. The net result is the overall impression by users that software is flaky and somewhat unreliable. The state tracking system that allows Beads to run backwards during development also runs in production software, so the user can elect to submit their history data with an error report, and the developers can run backwards from the problem state to find the cause. This will be a much-appreciated feature.

Automatic sequencing of computation, and automatic refresh of the affected areas of the screen, greatly reduce the number of errors in a graphical interactive product. Once you have hundreds of controls and pieces of data on the screen, it becomes very difficult to track the dependencies among the interacting items; the Beads language does this automatically, eliminating a whole category of common errors in software development.
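A toy version of this kind of automatic dependency tracking can be sketched in Python (the `Cell` class is purely my illustration, not the Beads mechanism): each computed value records which values it read, and a change ripples downstream automatically, the way a spreadsheet recalculates.

```python
# A toy dependency tracker: a computed Cell records which other Cells it
# reads, and when any input changes, everything downstream is recomputed.
# In a real UI, only the screen regions tied to changed cells would redraw.
class Cell:
    _current = None                       # the cell currently being evaluated

    def __init__(self, formula=None, value=None):
        self.formula, self.value = formula, value
        self.dependents = set()
        if formula is not None:
            self.refresh()

    def get(self):
        if Cell._current is not None:     # record who is reading us
            self.dependents.add(Cell._current)
        return self.value

    def set(self, value):
        self.value = value
        for d in list(self.dependents):   # ripple the change downstream
            d.refresh()

    def refresh(self):
        prev, Cell._current = Cell._current, self
        self.value = self.formula()       # reads register dependencies
        Cell._current = prev
        for d in list(self.dependents):
            d.refresh()

price = Cell(value=10)
qty = Cell(value=3)
total = Cell(formula=lambda: price.get() * qty.get())
print(total.value)   # 30
qty.set(5)
print(total.value)   # 50, recomputed automatically
```

The point of baking this into the language is that the programmer never writes the "when X changes, update Y and redraw Z" plumbing by hand, which is where a large class of UI bugs lives.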

Beads corrects a longstanding omission in computer languages, which typically lack the ability to store a physical quantity as a value plus a unit (5 kilograms). It is an incredibly common error in Excel to have a mismatch in units, because quantities carry no units, and it is up to the programmer of the spreadsheet to remember to divide or multiply by the right conversion factor. The NASA Mars Climate Orbiter probe was lost due to a units programming error. Beads includes all the common engineering and scientific units of mass, length, time, energy, etc., and also innovates in arithmetic safety so there is never any undefined behavior, which plagues many other languages. Engineering programmers will greatly appreciate this feature.

Languages like PHP include built-in hooks for connecting to an external relational database such as MySQL. Relational databases are being superseded by graph databases like Neo4j, and given that trajectory, Beads incorporates a graph database in the language so that you do not depend on an external database system. External databases cannot support reverse execution or protected arithmetic, so it was essential to include a database in the language itself. Including the database inside the language also makes interchangeable parts more feasible, because all programs share the same data structuring methods.
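To illustrate the flavor of an in-language graph store (a Python sketch of my own, not the actual Beads database): nodes carry properties, and edges are labeled triples you can query directly, with no connection strings or external server involved.

```python
# A toy in-language graph store: nodes are property dicts, edges are
# labeled (source, label, target) triples, and queries are plain functions.
class Graph:
    def __init__(self):
        self.nodes = {}     # node id -> properties
        self.edges = []     # (source_id, label, target_id) triples

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst):
        self.edges.append((src, label, dst))

    def neighbors(self, src, label):
        """All targets reachable from src via an edge with this label."""
        return [dst for (s, l, dst) in self.edges if s == src and l == label]

g = Graph()
g.add_node("alice", kind="person")
g.add_node("order1", kind="order", total=42)
g.add_edge("alice", "placed", "order1")
print(g.neighbors("alice", "placed"))   # ['order1']
```

Because the store is just language-level data, the same history mechanism that rewinds variables can rewind the database, which is exactly what an external server cannot offer.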

There are many other innovations and improvements in Beads, but these are the most significant differences.  

To author a product, you write a program in the Beads language either by typing in the code or with the assistance of various graphical helper tools, then translate your Beads code into an intermediate form using a compiler. The supported platforms are:

Platform support for version 1 (note that JS server side runs under Node.JS)


For the desktop and mobile platforms, the current version of Beads emits Adobe ActionScript3 code, which is then published into native desktop and mobile platform products. For web platforms, the Beads compiler converts Beads language code into JavaScript, which is run on server platforms via Node.JS or similar server tools that run JS.

The Beads language presents to the author a new kind of computer that can draw on the screen and print reports with the absolute minimum number of operating system functions. For the most part the operating system is not directly contacted, and you can build a wide variety of application programs without knowing any details of the operating system. The language contains a drawing model (currently 2D only), a graph database, a deductive engine, an event model, and a resource library subsystem. You are programming a computer that has been designed to eliminate most of the common errors of programming.

Beads makes programming much simpler. Other programmers' code is easier to understand, it helps usher in a new era of interchangeable parts in software, and it will open up programming to a wider variety of personality types.

If the Beads language interests you, please drop me a line. I am conducting a private beta test, and would like to have people from diverse backgrounds take it for a test run. Please include a one-sentence summary of the kinds of projects you like to make, and whether you are a Mac or Windows platform user. All experience levels are welcome.






Today's empires of copyright infringement

We are almost into 2020, and looking back over the decade we have seen the rise of a series of multi-billion-dollar corporations with tiny staffs, built in large part on inducing millions of private citizens into behavior that was considered fair use when a person shared something with a small group of friends. Multiplied across a billion personal computer users, all connected together on the World Wide Web, it has become a potent destructive force that is ruining entire industries.

Let’s start with criminal organization #1: YouTube. This company lets you open your own TV channel. They store and deliver the video at great cost (hundreds of millions per year) to however many people want to watch it, anywhere in the world, without an FCC license or spending a dime yourself. Your main goal on YouTube is to rack up as many views as possible, so the advertising YouTube sells can generate views and clicks. The majority of the content viewed on YouTube is stolen fragments of concert DVDs, films, and TV programs, all of which are owned by various copyright holders, none of whom are compensated. Under current law the copyright owners have the burden of finding their content among the billions of videos on file and reporting each one, which will then be taken down after being specifically identified. It is such an exhausting and taxing job that only a few studios bother to protect their current properties, and so the infringement rages on. When Prince was alive, for example, he had a paid staff whose sole job was to keep his old concert footage off YouTube. With his untimely death that staff was terminated, and now you can find thousands of Prince videos. So you can block things if you are diligent, but you will have to do it every day of the year, forever, which is a significant cost. YouTube makes billions in ad revenue, passes on millions to the infringers, and thus is the overseer of tens of millions of petty copyright criminals working as independent contractors in its ecosystem.

Example #2: Pinterest. This is a website (now worth over 13 billion) that lets people take a bunch of photographs and arrange them into a little art-gallery kind of scene. Hundreds of millions of people browse Pinterest to see the latest fashions, archival images of movie stars, paparazzi photos, pictures of plants, everything. You pick a theme for your collection, and they store and deliver these photos in perpetuity, for free, to however many people (and that might be millions) search for that theme. You can look up fashion or art, but a lot of it is movie-star glamour shots. The majority of the photographs are reprinted without attribution or permission, and thus Pinterest is the single most powerful force in destroying the value of photography in the history of the medium. How can one get paid for a picture of Audrey Hepburn when your coffee table book has had its key pages scanned and fed back into a website that aggressively promotes this infringing content? This isn’t a guy xeroxing a photo out of a book and pinning it on his wall; this is a guy taking a photo, scanning it, and sharing it with millions, wiping out the economic foundations of photography, not to mention destroying the coffee-table-book business once the juiciest photos have leaked out.

YouTube has its Chinese, Russian, and other national copycats, and the net effect of all these companies is the elimination of copyright as meaningful protection for the intellectual worker class. The music business is so thoroughly destroyed that only the few live acts who can fill stadiums, like Taylor Swift, make tons of money, while the other 99% of musicians live in greatly reduced circumstances.

Copyright, patent, and trademark law protected creative workers. It built our rich culture and sustained a middle class of artists, thinkers, and creative people of all kinds. We must immediately update the laws and strip these quasi-criminal organizations of their means of getting rich by inducing millions of private citizens to infringe. We could start by fining them 10 times whatever advertising revenue they get from displaying infringing works. We could make it so that all photographs are stored in a single database and monetized. Maybe 0.0001 cents per view is all you get, but multiplied by a million views that would add up. The computer has no problem tracking billions of tiny transactions; one of the basic features of cryptocurrency is the ability to subdivide a currency to a microscopic level. There are many possible solutions, some more feasible or politically acceptable than others. But nobody is even talking about fixing this disaster.

If you steal a $10 candy bar from a local 7-11 store you can be fined up to $500, but more importantly the legal fees for a criminal defense attorney can be $10k. What YouTube and Pinterest are doing is wiping out copyright. But copyright terms are also too long nowadays; many owners lose track of their works, and for the benefit of society we should cap the term after a short period. The greedy Disney company successfully lobbied Congress to extend copyright further and further because they wanted to protect their old-ass licensed properties like Mickey Mouse. What Congress should have done is allow companies with huge investments in some character or property to pay an annual fee to extend protection for that specific thing indefinitely. Then we could make copyright 20-30 years, for example, which would stop damage to newer works but let old material circulate, while allowing companies that build on various properties to continue when it is warranted.

YouTube is full of clips from Cheers, Seinfeld, and other sitcoms. They shouldn’t have a single clip. They never paid the millions of dollars that went to the hundreds of creative writers, actors, directors, stage hands, and editors.

And I haven’t even begun to discuss software piracy which has ruined my profession.

Invention vs. Improvement

If you ask people what kind of eating utensil they wish they had, they can't imagine anything but what they know; it takes a genius to invent the spork, the trong, and the splayd. If you ask people to imagine a color they haven't seen, they can't. Invention is not primarily driven by minor improvements.

Small improvements are the realm of the engineer, which is a very conservative profession. I foolishly attended an engineering school, when I would have been happier at a science-oriented university. The training of engineers is primarily negative: they see what failed and avoid it, using proven, safe-for-deployment methods. Engineers and inventors are really quite opposite in their fundamental attitudes about change. The artist/inventor boldly goes where nobody imagines, and often suffers greatly, because people frequently reject things that are too far from their current experience.

For example, I was one of a lucky few students who attended a talk by John Backus, the inventor of FORTRAN, about his new functional language. When he told the professors in attendance that you couldn't update a variable, they were shocked, and leaving the room they all shook their heads and exclaimed that he was a madman. Fast forward to now, and everyone and their brother is talking about functional languages. Panasonic used to end their ads with the slogan "slightly ahead of our time".

It is far more lucrative to be slightly ahead than way ahead! Or as they say, you can tell the pioneers by the arrows in their chest.

What new programming languages are coming soon and is it worth learning any of them?

Swift, Rust, Go, Kotlin, etc. are several years old now. Heck, Swift is up to version 5, and as far as I know its original designer has left Apple. So those don’t fit my definition of “new”. I suggest you go to the future programming group on Slack; that group maintains a spreadsheet tracking the various new languages which are still in the labs but will come out in 2019 or 2020. The new languages have names like Luna, Dark, Red, Beads, … Some are specific to a particular task like back-end services, but some are general-purpose languages angling to replace JS and Python.

Swift is still an Apple-only language; better than Objective-C, but so wedded to the OSX underpinnings that it will likely never be popular elsewhere. Rust is enjoyed by people doing systems programming, as its memory-ownership mechanism solves some tough problems relating to multi-threading. Kotlin is the baby of JetBrains (the big Czech software tool house), and is one way Google can still use the JVM while avoiding getting sued by Oracle, which owns Java. Go is a mild improvement, but I can’t get excited by its rather minor advantages.

The truly revolutionary languages coming have really exciting properties. Some have reversibility built into the system, others have a dual graphical-textual representation, some are working on the issue of interchangeable parts so that bigger projects can be assembled out of standard components, and some have automatic build systems embedded in the language so that you don’t have to learn some horrible “make” tool like Ant, Gradle, etc. A huge simplification is coming, and it will make programming so much easier, the old timers will be grumpy about it, and talk about the hard old days when they walked miles through the snow to go fix a bug, and how young programmers have it so easy.

Uber is about to go public. What a shame.

The Uber corporation is about to go public. They are going for a market cap of more than $90 billion.

This is a failure of government regulation of the highest order. For the past hundred years we have worked out things like business licenses, minimum wages, working conditions, etc.

Uber flouted every rule, and now their founders, instead of facing jail and public shame, are about to become some of the richest people on earth. They have impoverished tens of thousands of stupid young men who can’t calculate the cost of the wear and tear that driving puts on their cars, and they destroyed an existing industry which, although fossilized, was fossilized partly because of government greed.

Instead of writing their own little app, which I calculate would cost no more than $10 million to build and less than a tenth of that to operate, the governments and taxi organizations have let this very evil company come in, and, employing the most despicable business practices, dirty in the extreme, Uber has acquired massive market share across the globe, creating the world’s largest sweatshop.

Think about that: Mercedes is a 100-year-old company that works hard to make a quality product, yet Uber, which has no factories, no inventory, no products for sale, and lives on the boundless stupidity of young men, is now worth more.

I say lives on the stupidity of young men, because young men want to have a car, and they think that working for below minimum wage while they wear out their car is winning, only to eventually quit after a few years to be replaced by the next sucker.

The credit card companies take anywhere from 3 to 5% of each transaction, which is a scandal in itself, but Uber takes 20% of each transaction for itself, when it has negligible costs. This type of software should be supplied by a public utility and the cost shared across all transportation subsystems. There is no benefit to creating a worldwide organization that cares nothing for its employees.

Uber did really nasty things. When you launch their app, they show fake cars in your area, so it looks like dozens of cars are ready to pick you up. They detected when government officials were hailing rides, so as to avoid being fined. They detected whether a driver was working for another company by peeking at the apps installed on the phone, and if you were a Lyft driver they would try to sabotage the Lyft relationship. They are brilliant, but quite evil.

Object Oriented Programming is bad

In honor of the late, great Joe Armstrong, the inventor of the Erlang language (whose ideas were carried forward into Elixir), I present his argument on why OOP is bad. Unfortunately I cannot date this article.

====== from Joe Armstrong:

When I was first introduced to the idea of OOP I was sceptical but didn't know why—it just felt “wrong”. After its introduction OOP became very popular (I will explain why later) and criticising OOP was rather like “swearing in church”. OOness became something that every respectable language just had to have.

As Erlang became popular we were often asked “Is Erlang OO”—well, of course the true answer was “No of course not”—but we didn't care to say this out loud—so we invented a series of ingenious ways of answering the question that were designed to give the impression that Erlang was (sort of) OO (if you waved your hands a lot) but not really (if you listened to what we actually said, and read the small print carefully).

At this point I am reminded of the keynote speech of the then boss of IBM in France who addressed the audience at the 7th IEEE Logic programming conference in Paris. IBM Prolog had added a lot of OO extensions. When asked why he replied:

Our customers wanted OO Prolog so we made OO Prolog

I remember thinking “How simple, no qualms of conscience, no soul-searching, no asking ‘Is this the right thing to do’ …”

Why OO sucks

My principal objection to OOP goes back to the basic ideas involved, I will outline some of these ideas and my objections to them.

Objection 1.  Data structure and functions should not be bound together

Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds. Why is this?

  • Functions do things. They have inputs and outputs. The inputs and outputs are data structures, which get changed by the functions. In most languages functions are built from sequences of imperatives: “Do this and then that …”. To understand functions you have to understand the order in which things get done. (In lazy FPLs and logical languages this restriction is relaxed.)

  • Data structures just are. They don't do anything. They are intrinsically declarative. “Understanding” a data structure is a lot easier than “understanding” a function.

Functions are understood as black boxes that transform inputs to outputs. If I understand the input and the output then I have understood the function. This does not mean to say that I could have written the function.

Functions are usually “understood” by observing that they are the things in a computational system whose job is to transform data structures of type T1 into data structures of type T2.

Since functions and data structures are completely different types of animal it is fundamentally incorrect to lock them up in the same cage.

Objection 2.  Everything has to be an object.

Consider “time”. In an OO language a “time” has to be an object. (In Smalltalk, even “3” is an object.) But in a non-OO language a “time” is an instance of a data type. For example, in Erlang there are lots of different varieties of time, which can be clearly and unambiguously specified using type declarations, as follows:

-deftype day()     = 1..31.
-deftype month()   = 1..12.
-deftype year()    = int().
-deftype hour()    = 1..24.
-deftype minute()  = 1..60.
-deftype second()  = 1..60.
-deftype abstime() = {abstime,year(),month(),day(),hour(),minute(),second()}.
-deftype hms()     = {hms,hour(),minute(),second()}.

Note that these definitions do not belong to any particular object. They are ubiquitous, and data structures representing times can be manipulated by any function in the system.

There are no associated methods.

Objection 3.  In an OOPL data type definitions are spread out all over the place.

In an OOPL data type definitions belong to objects. So I can't find all the data type definitions in one place. In Erlang or C I can define all my data types in a single include file or data dictionary. In an OOPL I can't—the data type definitions are spread out all over the place.

Let me give an example of this. Suppose I want to define a ubiquitous data structure. A ubiquitous data type is a data type that occurs “all over the place” in a system.

As Lisp programmers have known for a long time, it is better to have a smallish number of ubiquitous data types and a large number of small functions that work on them, than to have a large number of data types and a small number of functions that work on them.

A ubiquitous data structure is something like a linked list, or an array or a hash table or a more advanced object like a time or date or filename.

In an OOPL I have to choose some base object in which I will define the ubiquitous data structure. All other objects that want to use this data structure must inherit this object. Suppose now I want to create some “time” object, where does this belong and in which object…

Objection 4.  Objects have private state.

State is the root of all evil. In particular functions with side effects should be avoided.

While state in programming languages is undesirable, in the real world state abounds. I am highly interested in the state of my bank account, and when I deposit or withdraw money from my bank I expect the state of my bank account to be correctly updated.

Given that state exists in the real world, what facilities should a programming language provide for dealing with it?

  • OOPLs say “hide the state from the programmer”. The state is hidden and visible only through access functions.

  • Conventional programming languages (C, Pascal) say that the visibility of state variables is controlled by the scope rules of the language.

  • Pure declarative languages say that there is no state. The global state of the system is carried into all functions and comes out from all functions. Mechanisms like monads (for FPLs) and DCGs (logic languages) are used to hide state from the programmer so they can program “as if state didn't matter” but have full access to the state of the system should this be necessary.

The “hide the state from the programmer” option chosen by OOPLs is the worst possible choice. Instead of revealing the state and trying to find ways to minimise the nuisance of state, they hide it away.

Why was OO popular?

  • Reason 1. It was thought to be easy to learn.

  • Reason 2. It was thought to make code reuse easier.

  • Reason 3. It was hyped.

  • Reason 4. It created a new software industry.

I see no evidence of 1 and 2. Reasons 3 and 4 seem to be the driving force behind the technology. If a language technology is so bad that it creates a new industry to solve problems of its own making then it must be a good idea for the guys who want to make money.

This is the real driving force behind OOP.

How did we lose the technology to go to the Moon? What exactly is the problem?

I grew up a space fanatic and fully expected to visit the moon and work in the space industry. Unfortunately for me, they ended the Apollo program as I entered college, so that dream didn’t happen.

We did not lose the technology to go to the moon; that is nonsense. America merely lost interest in going. I bought and read the books a company put together on each Apollo mission, which include all the photographs, debriefing info, transcripts, etc., and you can see from the last few missions that people were losing interest rapidly. The first moon landing was mind-blowing, but like any stimulus, if it doesn’t change, the brain starts to treat it as normal (and boring). The moon has so little variety, with its complete absence of life, that it is a dull place, and nobody will care much if we ever go back. There are more wonders to be discovered in the oceans than on the moon, so I reluctantly agree with the man in the street who thinks it would be a waste of money to send people there again. The Chinese are hell-bent on going there just so they can show they are as good as anyone technologically.

We are far better off exploring aquaculture, floating cities, and all the other futuristic things that really matter to the human race. After all, the oceans cover the majority of the earth’s surface, and we have done a piss-poor job of managing them. Our inability to be a good steward of the fish is causing population crashes for a variety of species. Let JPL send their unmanned probes through the solar system; it is far cheaper, and very effective. The moon has little to offer us. It might make an exotic vacation resort, but considering how dangerous space is, it might never be that popular. Instead, let’s fix up the planet to a much nicer state before we go off blowing wads of money pushing people around in aluminum boxes. There are no other habitable places in our solar system, and we are more than 1,000 years away from reaching another habitable place, so until warp drive arrives, we had better bite the bullet and fix up our blue pearl!

Functional Programming

There are many buzzwords in the programming profession. These terms are bandied about with great regularity and mean almost nothing: Object Oriented Programming (OOP), Functional Programming (FP), Top Down Design (TDD), etc.

Fundamentally, we have only one kind of computer, with two variations: the Intel and ARM instruction sets, which drive 99.9% of all computers used today. Everything on top of these two hardware platforms is software. The hardware's commonly used instructions are arithmetic, copy, load/store, compare, branch, and call/return; of these, the most powerful is call/return, and every language from assembler onward has striven to wring as much utility as possible out of it.

Functional programming is where you try to give functions more weight, as opposed to the move instruction that was COBOL's stock in trade. So one cannot be against functions; they are one of the only power tools we have. Every good program uses the principles espoused in FP, and every functional program has to store some mutable state somewhere, because the underlying hardware operates only on mutable state, so it is rather self-defeating to erect very hard abstractions like monads and monoids on top of it. Sometimes the priesthood of programmers creates obscurity where none need exist.

What I am against is waving these banners around (OOP, Functional Programming, Top Down Design, etc.) when what we really want is reliable software that is easy to understand. We are evolving towards better notations, but unless you change the hardware (and adding more cores does very little to help), you are pretending these terms actually mean something.

Computers are very simple at the core, and programming needn’t be that hard or frustrating. It will always be exacting, as there is no human experience where something is done over a million times in one second! The speed and accuracy of machines have always impressed humans, and computers are over a million times cheaper than when they were invented; that is incredible progress!

The crime wave of the 2000's

In US history, everyone learns about Al Capone and the Chicago gangs of the Prohibition era, that failed experiment in banning alcohol. From today’s WSJ headlines:

The FCC Has Fined Robocallers $208 Million. It’s Collected $6,790.

America’s telecommunications regulators have levied hefty financial penalties against illegal robocallers and demanded that bad actors repay millions to their victims. But years later, little money has been collected.

We are now in a new age of crime, where most of the crime is perpetrated via the internet, and the criminals are rarely caught or punished.

The criminals operating robodialing and telephone-fraud schemes, pretending to be the IRS and so on, annoy a hundred people for every one they actually defraud, in possibly one of the most destructive industries that has ever existed. If someone robs your house, he doesn’t ransack 100 neighbors that night while robbing one; but in the telephone fraud industry they annoy thousands of people before finding a sucker to steal from.
Unfortunately, we just spent $80 million investigating whether a hotel tycoon was conspiring with a foreign country. That $80 million could have been spent wiping out the robodialling industry, and saved the world the lost time of perhaps 40 billion calls times 1/6th of a minute to hang up on the assholes who ruin many a tender moment (at a minimum wage of $12/hour, that is over a billion dollars of people’s time per year in the USA alone). And where is Homeland Security in all this? We spend billions on those slackers to ride around in black Escalades, always on the prowl for nearly non-existent terrorists, and training park rangers on weekends with automatic weapons for that squirrel invasion that never happens.

The government’s super-powerful criminal justice apparatus, which has most of its employees at the city and county level, is now mostly occupied with cycling the mentally ill and near-or-at homeless through its expensive system. They make the homeless more miserable, and because we have no agreement as a society on which school of psychology has any therapeutic effectiveness, punishment-as-therapy produces poor results, and it just gets worse.

I work in an industry which is responsible for a big chunk of the crime. Robodialling not only annoys people; much of the calling is direct fraud. Let’s assume the criminal is buying phone service for 7 cents a minute. Keeping someone on the phone for an hour then costs $4.20 for the calling, plus perhaps $5.80 for the criminal’s employee. Thus for $10/hour, less than the minimum wage, you can keep someone working in a criminal business, safely operating 8,000 miles away.
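The arithmetic pencils out in a few lines. Every figure here is a rough estimate, not measured data:

```javascript
// Back-of-envelope check of the fraud economics described above.
// All figures are the author's rough estimates, not measured data.
const phoneCentsPerMinute = 7;                            // assumed wholesale calling rate
const callCostPerHour = (phoneCentsPerMinute * 60) / 100; // 4.2 dollars per hour of calling
const wagePerHour = 5.8;                                  // assumed offshore employee wage
const totalPerHour = callCostPerHour + wagePerHour;
console.log(totalPerHour.toFixed(2)); // "10.00", below the US minimum wage
```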

Once you make more than $10/hour stealing from people, which evidently is not that hard, your business will naturally grow, and you will hire more staff. So what we are seeing is the positive feedback loop intrinsic to capitalism, and the exponential growth of a successful industry doing various criminal things, breaking laws left and right.

They could stop this crap immediately if they wanted to. If you had to put a $100 deposit on hold for any phone number you wanted, and lost the deposit should you be found to be robodialling, that would stop it cold. Four companies handle 90% of all the phone calls in the country (AT&T, Verizon, T-Mobile, Sprint), and they know damn well who is doing this.

Unfortunately our government leaders are so out of touch with regular life they haven’t noticed this crime wave, and so it continues on.

Poor quality software is killing people

We have recently seen one of the largest killings by bad programming in the case of the Boeing 737 Max.

The Boeing company has a black eye from their recent screw-up with two of their newest jets crashing.

Simply put, this error was caused by poor-quality programming and a failure to follow standard aircraft-safety software principles. It was exacerbated by greed: Boeing tried to sell what is an essential safety feature as a paid option, which low-cost carriers like Lion Air and Ethiopian Airlines didn’t buy.

This is a powerful jet, and if you pull the stick up it can go into a stall pretty easily. That is not a defect; it is an inherent risk of having really potent engines, which are much safer in other scenarios, so there is nothing wrong with powerful engines. But the input to the stall-detection software was a single sensor: even though there are two sensors on the plane, it read only one of them. So when that single sensor malfunctioned, the software thought the nose was pitched up when it wasn’t. The second mistake, even more stupid, is that the computer code that said:

  IF sensor > 34 degrees then push nose down 1 degree/sec

(my formula is approximate) didn’t have a loop counter that would make it stop for a while after a few activations, since a pilot repeatedly pulling back is clearly trying to override the program. In the Lion Air case, the pilot tried over and over to pull up from impending doom, lost the battle, and everyone aboard was killed. Boeing has just updated the software, adding a few lines of code to this little program. This error will cost them billions when all the lawsuits are settled.
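The missing safeguard really is just a few lines. Here is a hypothetical sketch, not Boeing’s actual code; the threshold, trim rate, and activation limit are invented for illustration:

```javascript
// Hypothetical sketch of the missing safeguard; NOT Boeing's actual code.
// The threshold, trim rate, and activation limit are made up for illustration.
const STALL_THRESHOLD = 34; // degrees angle of attack (approximate, per the text)
const MAX_ACTIVATIONS = 3;  // assumed limit before yielding to the pilot

function makeStallProtector() {
  let activations = 0;      // the counter the real system lacked
  return function (sensorDegrees) {
    if (sensorDegrees > STALL_THRESHOLD && activations < MAX_ACTIVATIONS) {
      activations += 1;
      return "push nose down 1 degree/sec";
    }
    return "no action";     // back off: the pilot retains control
  };
}

const protect = makeStallProtector();
console.log(protect(40)); // "push nose down 1 degree/sec"
console.log(protect(40)); // "push nose down 1 degree/sec"
console.log(protect(40)); // "push nose down 1 degree/sec"
console.log(protect(40)); // "no action" (limit reached)
```

With the counter, a sensor stuck at a high reading can only trim the nose down a bounded number of times before control returns to the pilot.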

I was trained in programming at JPL as a youth. When they build space probes that will travel for 10 years without any possibility of repair, they always put in an odd number of sensors for each critical measurement and have the sensors vote on the measurement: if it is 2 to 1, they take the majority, and eventually they turn off the bad sensor, because there is no point in listening to it. Having only one sensor, which is unfortunately all too common in automotive safety systems, is bad practice, because that single cheap sensor can cause a serious accident. That is what worries me about all these fancy car safety systems: they are put in by cheap companies that don’t build any redundancy into the critical systems.
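The JPL-style voting scheme is simple to sketch. This is a toy illustration, not actual flight code: with three sensors, taking the median reading means one failed sensor is automatically outvoted by the two healthy ones.

```javascript
// Toy illustration of triple-redundant sensor voting; not actual flight code.
// With an odd number of sensors, the median reading ignores a single outlier.
function voteSensors(a, b, c) {
  return [a, b, c].sort((x, y) => x - y)[1]; // median of three
}

// Two healthy sensors read about 5 degrees; the failed one reads 70.
console.log(voteSensors(5.1, 70, 4.9)); // 5.1, the bad sensor is outvoted
```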

I remember a friend who sued Porsche after their high-end Cayenne Turbo changed lanes abruptly, almost killing them. This is all due to bad programming, and instead of letting car companies keep their code secret, I believe all safety systems for any device (car, bus, train, plane, nuclear power plant) should have their code openly published, so that outside programmers can inspect it and find weaknesses. There are a lot of retired and under-employed programmers who could receive a bounty for finding errors and dangers in code that is critical to public safety.

Anyone with experience in military or space software systems would have raised red flags on the Boeing code, which was clearly done by rookie programmers; and that Boeing would try to monetize a critical safety system indicates a serious internal malfunction. They are now spending more money on lobbying to fix it, when what they need is better engineering management, not more lobbyists.

What should a future general purpose programming language look like?

The dominant language of the future should tackle the central problem of writing software, which is that a small number of human errors consume the vast majority of the total time and effort spent. Many programmers estimate that over 80% of their time goes not to designing and coding but to that process euphemistically called “debugging”, where a small number of errors eat a disproportionate amount of time. Thus the main feature should be eliminating this largest block of time, and the other features should support the use of interchangeable parts. With Modula-2, Prof. Wirth reached a high-water mark for interchangeable parts, offering separate compilation of modules and protection against a module changing without its clients realizing the interface had changed. No subsequent language to my knowledge has this feature, except Beads, a language in the “next-gen” race along with Elm, Red, and others.

As for other language features, it should try to be a simpler language, breaking free from the mistakes of the past like OOP, and avoiding inventing some new complex, hyper-abstract set of concepts like functors and monads as are in vogue today. Simplicity is a feature everyone can enjoy.

Inertia, the most powerful force in the universe

The great Elbert Hubbard wrote around 1912 the following:

“The reason men oppose progress is not that they hate progress, but that they love inertia.”

We are in the decade of a major change in computer programming languages. The prior ones served their purposes, but the massive need for programming has reached such a degree that continuing cumbersome, labor intensive, frustrating processes has to give way to a simpler, easier, more productive toolchain. The foundation stone of a new toolchain is a new language.

The order of the day is A) simplicity, B) clarity, and C) reusable parts

Simplicity is never easy to achieve; you have to strike the balance between a small group of general, powerful primitives and a larger set of more specialized, but less general, primitives.

Clarity is often the subject of great debates, because people who are well trained in a language find it easy to read. I occasionally get into heated, almost religious debates with proponents of extremely unclear languages like Lisp and Haskell. Their adherents simply can’t remember a time when they didn’t understand, and assume that their prodigious memories are common. Try testing your tool on a 70-year-old, and then tell me how it went! A 70-year-old learning to program for the first time is about the same as a 12-year-old: both will struggle with the current toolchains.

GM kills the Volt, in the process of self-destructing

GM killing the Volt car is a bad idea.

But then they have resisted every advance proposed by their own internal brains, which are considerable.

GM was always far ahead of Ford from a technical and reliability standpoint, though I am referring to their peak in 1959. After that time, the energies of founding genius Alfred P. Sloan were starting to dissipate. GM invented the neodymium magnet but never built motors with it.

The solar-powered car they did with MacCready of CalTech was 20 years ahead of any other vehicle, and later, when they built the EV1, they ended up crushing the few cars they made, basically out of spite. People loved those cars; there would have been no need for Tesla had GM just allowed a money-losing division to build the future. You have to invest at the beginning to build a new business. No new technology, besides rare things like Genentech’s insulin, is immediately profitable.
And let’s not talk about Saturn, which had many innovative practices; but because it made less profit than their bad old practices, they went back to strip-mining their customers’ goodwill, using the accumulated trust and confidence in their brand to ship crappy cars.

Only the recent editions of the Corvette are any good; the rest of the GM lineup is not attractive to me, and I would pick a Japanese, German, or Swedish car over anything but a Corvette.

GM is basically downsizing, admitting defeat in passenger cars (like Ford). The tragedy is that there are cars from Europe, little tiny ones, that could succeed in urban environments in smaller numbers, but GM just won’t bring them into the country. Which leaves the overpriced and impossible-to-work-on, BMW-owned Mini Cooper to own that city-hipster market.

Cars are about emotion. Whether you are looking for a land yacht, a tiny sports car, or a soccer-mom people mover, you have to identify the emotion and deliver a purity of design. The central problem is not their propulsion units or their manufacturing practices, but their unwillingness to let a single vision control a car from start to finish, like the greatest designer of them all, Ferdinand Porsche. They keep having committees design and agree on things, and the result is a craptastic, watered-down aspect to every car (except the Corvette, which after decades of mechanical incompetence finally bought a Ferrari and studied how it could go around curves).

GM is another classic American tragedy in which designers are mere stylists. Great design is behind all the winning products, and brilliant design can outlive technical changes far longer than you would imagine. The Porsche 911 is one of the longest-running models in auto history.

If they had any brains, they would realize that producing smaller quantities of nicer cars at a higher price is achievable now with 3D printing and all the robotic advances. The days of having to make every car out of the same parts are over; you can 3D-print final-quality metal and plastic parts, and with their engineering prowess they could make replica cars of great designs. Instead of making drool-worthy concept cars that never ship, they could actually make the ones customers have said they really want.

Look at the money people are paying for old Jaguars and replicas of Steve McQueen’s Bullitt-era Mustang. People want futuristic stuff, or they want tiny, or they want fast, or they want big, not some compromise of all of those characteristics. They need to give car-loving designers a cost constraint, but then give them the freedom to make it great. A few years back they went to Pebble Beach, the #1 confab of car nuts, and showed the Escala concept.

People loved it, and said right there they would buy one in a second. It was huge (17.5 feet), sleek, very luxurious. Then some numbskull goes, “It will be too expensive.” They are morons. They will promise something like it in a few years, but when it comes out it won’t be 17.5 feet, and the temptation to reuse tons of previously engineered parts will just be too great, and they will have lost all their credibility again. Meanwhile, pickup trucks are getting close to $80k in some cases, defying all common sense. The interior of that concept car was incredibly elegant: cashmere. It was gorgeous and luxurious. You know they won’t put cashmere in the production car.

People would buy this car. They showed it to the target audience, who said “hell yes!”, and then GM ignored the feedback. This is the doom of a spineless organization that can’t build a product its own staff loves. Meanwhile, as we speak, the same clean design is now shipping in the Volvo XC40, which Volvo is offering for something like $600/month.

When America uses good design, we are unbeatable. America has had the greatest industrial designers in the history of the world, like Thomas Edison and Henry Dreyfuss. But by designer I don’t mean some stylist who adds pinstriping; I mean the keeper of the product’s total vision, from the inside out.

McLaren and Koenigsegg are doing such a great job making supercars, selling all they can make. The writing is on the wall: play with passion or leave the game.

Medicare for all is a dumb idea

I normally write only about technology, but Ocasio-Cortez’s comments about Medicare for all, which many people have echoed, must be rebutted.

No one is entitled to the free labor of another. So the idea that we should get unlimited free medical care doesn't pencil out. Even if we had such a thing tomorrow, it wouldn't work because the supply of trained medical personnel is insufficient to handle the current load, much less an increased load. Insurance companies stall like hell so that they can stretch their resources.

A better solution, instead of trying price controls (or robbing Peter to pay Paul via some transfer-payment scheme), is to realize that the curriculum of our public schools hasn't been updated in 100 years. Perhaps half of all time in school should be devoted to bringing every high-school graduate up to the level of a second-year nursing student, and instead of continuing with the ridiculous eight-plus-year medical-school process, which costs hundreds of thousands of dollars per doctor, we should create super-narrow medical education tracks that finish in two years but qualify you only for specific procedures.

Hospital procedures are broken down very precisely now, and by changing how many medical professionals we train, and how we train them, the costs could come down by a factor of five. Why aren't we teaching this very valuable information to all students in our public schools? Why are we letting junior-high and high-school students regurgitate obsolete subjects when what we all need to know nowadays is how to take care of the human body, especially our own?

It is time we stopped leaving things to expensive professionals and shared this knowledge with a vastly wider pool of people. That would not only lower costs; people with more medical education don't eat as poorly, nor are they as likely to get fat, which is a big contributing factor in Americans' slightly declining health statistics. Also, a more informed customer base makes for better doctors, as the bad ones would be flushed out more quickly.

JetBrains MPS, and thoughts on the next big language

1) Games are very much going to determine the outcome of the "next gen language". Game programming is arguably the majority of all graphical interactive coding today. Not only is this borne out by the statistics from the app stores, which show that games are more than 2x larger than any other category of product:

but also, when you look at all the dashboard companies popping up, the gamification of business products is well under way, and what was a stodgy, dull statistical program is now singing and dancing. Get into your brand-new car, and the dashboard does a song and dance. Anywhere you turn, customers are suckers for flashing lights and motion, and if your language can't draw well, it is going to be a hard sell. A language doesn't have to go full-tilt into 3D complexity, but if you can't drastically simplify the pain of laying out screens in the quirky and frustrating HTML/CSS abomination, why did you bother making your tool in the first place? This, by the way, is why I consider terminal-based languages like LISP and FORTH to be near useless in this era. There is ample evidence that drawing needs to be integrated into the language.

2) This is why MPS is a non-starter for me; I don't see a drawing system. The majority of the code in every graphical interactive product I have made has been related to drawing. From a word-count perspective, drawing consumes an awful lot of program code. Numbers are easy: they have a value, period. But a piece of text has a font list, a size, optional bold and italic, justification, indenting, stroke color, background color, and on and on, so naturally text is going to dominate the code. If you are building a billing system, generating a nice-looking PDF bill for the customer, with pagination that works well, is a ton of work to draw nicely. I spent decades in the word processing/desktop publishing/graphic design product space, and there is just a lot of tricky stuff relating to languages. And don't get me started on the complexities of making your product read well in Asian languages; that was my specialty.

And since it isn't just about drawing, but interacting, that is why HTML/CSS/JS is such a nightmare, and why there are so many frameworks: the designers of the web did a rather poor job of anticipating interactivity, and their approach of laying out pages not with function calls but with a textual description basically calls forth a very complex framework system to compensate for this mistake. A complex domain-specific language is not a computer-readable model; imagine if the web had an internal model that was not textual. That would have made it so much easier to build interactive graphics. A next gen language, to succeed, will at least need to let people avoid wrestling with WebKit, which has a nasty habit of scrambling your layout when a tiny error is made.
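To make the contrast concrete, here is a minimal sketch, in JavaScript, of laying out a page with function calls that build an in-memory model rather than a textual markup description. The names `h` and `render` are hypothetical illustrations, not any real framework's API:

```javascript
// Build the page as an ordinary data structure with function calls.
function h(tag, props, ...children) {
  return { tag, props: props || {}, children };
}

// Serializing to HTML becomes a final, mechanical step.
function render(node) {
  if (typeof node === "string") return node;
  const attrs = Object.entries(node.props)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const inner = node.children.map(render).join("");
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

const page = h("div", { class: "bill" },
  h("h1", null, "Invoice"),
  h("p", null, "Total: $42")
);

console.log(render(page));
// -> <div class="bill"><h1>Invoice</h1><p>Total: $42</p></div>
```

Because the page exists as a plain in-memory structure before it is ever serialized, code can inspect, validate, and transform it, which is exactly what a purely textual format makes hard.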

Apple has done a lot of work in its storyboard system in Xcode to make laying things out easier, although it is still evolving and I wouldn't call it settled. I don't know Android Studio well; I imagine it has tools for this as well. But I would like to see a cross-platform layout system that makes it easy for a single code base to fit nicely into whatever device you are on. Making layouts fluid should be part of the language, and anyone who thinks they can just live on top of HTML/CSS is doomed, IMHO.

3) As for correctness by construction, there is ample evidence that completely untyped languages, which can accidentally mutate the type of a variable from number to string, are very dangerous. Overloading the + operator to mean both addition and string concatenation (a design JavaScript shares with ActionScript 2) has caused countless errors in JS. If a distinct concatenation operator had been used, like PHP's dot operator, millions of man-hours would have been saved. TypeScript and other transpilers/preprocessors are clearly a great win, because JS by itself is a minefield. A successful next gen language will eliminate a lot of errors. Eve was very much inspired by SQL, a very declarative style of language: little instruction is given as to how to do things; you just tell it what you want. However, it isn't that easy to recast a chess program into that style; there is a lot of sequential processing to do. So some compromise has to be reached, where you eliminate as many sequence-related errors as you can at compile time by letting the compiler do more work, while still retaining the ability to specify the proper order for things to be done in; unambiguously, of course, or else you have obscured the product, which is counter-productive. I believe that sequence-related errors constitute half of all debugging time, so eliminating mistakes of sequence should yield a 2x improvement.
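A few lines of JavaScript show the hazard described above; this is standard JS behavior, not a hypothetical:

```javascript
// The same + operator means addition or concatenation depending on
// the runtime types of its operands, so a value that silently became
// a string turns arithmetic into text-pasting without any error.
let total = 100;
let fromForm = "25";           // form fields and URL params arrive as strings

console.log(total + 25);       // 125     -> numeric addition
console.log(total + fromForm); // "10025" -> silent concatenation

// An untyped variable can also mutate type mid-flight:
let x = 5;
x = x + "";                    // x is now the string "5"
console.log(x * 2);            // 10 -- because * coerces back, the bug hides
```

With a distinct concatenation operator, the second expression would be a type error instead of a silently wrong answer.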

4) There are many additional syntactic features one can add to a language, like runtime physical-unit checking, that let the product do more integrity checking and catch subtle errors quickly. The Wirth family of languages emphasized automatic checks; Modula-2, for example, had checks for overflow, underflow, range, array bounds, nil pointers, and undefined variables, all of which could be disabled. In my Discus product, we leave the checks on until we ship, and the shipping product shrinks by 30% when they come off, because the overhead of checking is substantial. Nowadays, with computers idle 98% of the time, one can argue that leaving the checks on in production is feasible, and probably a good idea. All that Microsoft C code, with no runtime checks, is a security hazard, and Microsoft has been endlessly patching its Windows monstrosity for decades now with no end in sight. When you pass an array to a Modula-2 function, the array bounds are sent in a hidden parameter, which allows the called function to stay within the limit. This doesn't exist in C, which means that any large C program is forever doomed to be unreliable. I cannot understand why the executives chose such flabby languages to standardize on. Surely they must have known early on that the sum total of all these tiny errors represented a maintenance nightmare. Java has plenty of problems of its own, and don't get me started on the flaws of OOP. Thank goodness few of the next gen projects even consider building atop OOP paradigms.
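As a rough illustration of the kind of toggleable runtime checks a Modula-2 compiler emitted automatically, here is a hand-written sketch in JavaScript; the `CHECKS_ON` flag and `checkedGet` helper are hypothetical names invented for this example:

```javascript
// In Modula-2 this was a compiler switch; turning it off yields a
// smaller, faster binary, at the cost of silent failures.
const CHECKS_ON = true;

function checkedGet(arr, i) {
  if (CHECKS_ON) {
    if (!Number.isInteger(i) || i < 0 || i >= arr.length) {
      throw new RangeError(`index ${i} out of bounds [0..${arr.length - 1}]`);
    }
  }
  return arr[i];
}

const data = [10, 20, 30];
console.log(checkedGet(data, 2)); // 30
// checkedGet(data, 3) throws a RangeError while checks are on;
// with CHECKS_ON = false it would silently return undefined,
// the moral equivalent of C reading past the end of an array.
```

The point is that the check is cheap, mechanical, and centralized; the argument in the text is that with modern idle CPU cycles there is little excuse to ever turn it off.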

R.I.P. Stan Lee, the most creative writer since Jules Verne

Stan Lee died the other day.

He was one of the most creative Americans who ever lived.

We all know that George Lucas is a very creative fellow, having created a few dozen major characters, like Luke Skywalker, Han Solo, and villains like Jabba the Hutt and Boba Fett.  Stan Lee created at least 5 to 10 times more characters than Lucas. I think he was the most imaginative writer since Jules Verne. 

When I was young, we had a comic rack at the local drug store in town, and comics cost 12 cents each. At the time we were reading them, in the early days of Marvel Comics, the print run of each issue was around 10,000 copies. So if you imagine the retailer got half, the total revenue to Marvel per issue was about $600. They were printed on the cheapest newsprint paper, with just a few colors, and were banged out at an incredible pace. I had several first editions, and like every other kid's, mine were thrown out by my mother when I went to college… they are so valuable now because so few people kept them away from their mothers! Hah. Nobody took them seriously, and it took a while for their science-fiction style to catch on.

These original Marvel comics are masterpieces of the genre. They can be purchased in the “Marvel Masterworks” series at Amazon in both hardcover and paperback, and they make a wonderful gift for a young kid who doesn’t realize how good the source material behind the many successful films is. They were so far ahead of other comics of the time, it isn’t funny. Batman and Superman, the two big characters of the entrenched competitor DC Comics, were crude in comparison, with stupid villains who were typically bank robbers or thugs. Marvel, on the other hand, created a roster of villains really worth fighting: Magneto, who could control metal; Doctor Doom, who wanted to take over the planet; Kang the Conqueror, who comes from the future to take over the universe; Galactus, who would like to consume all the life force in our solar system for breakfast. Marvel heroes had personal problems: they didn’t have much money, or their girlfriends were mad at them; they might even regret their superpowers. From any dimension you look at, the output of Marvel’s first decade is fantastic stuff; like Ian Fleming’s James Bond series, it is a highly original body of work that will still be enjoyable for a long time to come, and imitated without attribution constantly.

The secret to Stan Lee’s incredible output - and it is amazing how much stuff he cranked out - was that he worked in an unprecedented way with the artists who helped invent the characters and drew the comics. He would write a very short summary of what happens in the 16-page story, let the artists draw whatever they wanted, and then add the words in later. The stories progressed nicely, the artists loved the freedom, and they did wonderful work.

Lee did not profit much from his work at Marvel. He was outmaneuvered in the boardroom many times, and when Marvel was finally sold to Disney for billions he got nothing. Frankly, the comic book business is a pathetic business compared to movies, and the technology to do his characters justice on film did not exist until recently. But he did pretty well overall, and was beloved by the many millions of people who have come to know his characters, like Spider-Man, mostly from the movies.

His appearances in the films always make me cry, because I loved his work so much. He was so clever and funny.

Every time the films have departed from the original material, it has been to the movie’s detriment. When Marvel films have failed, it is because they didn’t trust the super genius of the creators and assumed people couldn’t handle it. One of Marvel’s best comics was the Fantastic Four, yet the films have bungled Dr. Doom; the last one made him into a corporate weasel pursuing money instead of a megalomaniac so smart he thinks the world should grovel at his feet.

The best films are Captain America (#1), Dr. Strange, and Ant-Man. Those show the wild range of Stan Lee’s imagination, and how he would move from the patriotic World War II era, to Eastern mysticism, to biotechnology.

With his passing, and with George Lucas’ retirement, one has to wonder: where is the next great set of characters and stories to come from? My answer is that Overwatch (a video game from Blizzard) is the crucible of animated/superhero characters for the youngest generation, and mark my words, Overwatch will be bigger than Marvel, because it includes characters from across the globe, and thus will be relatable to everyone.

How much is programming going to change in the next few years?

Programming is going to change dramatically, probably by 2020, which is not that far away. New languages and techniques are in the lab now that will trim the fat out of the development process. Currently programmers think they spend most of their time designing and typing, and estimate that debugging is perhaps 20–25% of the total effort. In reality, design, typing, and compiling are very straightforward, and 85% or more of the total effort is spent debugging and refining the software. The debugging process - which, to be completely honest, is the programmer fixing their own mistakes - is a huge area of waste, and finally techniques and tools are in the pipeline to shortcut that process. The net result will be about a 3:1 overall improvement in productivity (because debugging will cease to be difficult), but more importantly a 10:1 reduction in the frustration one must endure to be a programmer.

The world of programming is currently populated by people with incredible, far-out-on-the-tail levels of patience; the kind of people who can do a 7,000-piece jigsaw puzzle and enjoy it, while an ordinary person would give up. Lowering this “frustration barrier” will allow millions of ordinary people to enjoy programming. Let’s face it, the computer is mankind’s most powerful and interesting invention, and everyone should have some fun with it. It is intensely satisfying to see a robot follow your instructions exactly, tirelessly, with no whining like your kids ;->

As for where this improvement is going to originate: it isn’t going to come from academia, which for the most part refuses to build practical, useful tools; it will come from small entrepreneurial teams funded by themselves, angel investors, or crowdfunding. I can’t tell you how many academics flat out refuse to talk to industry people, as they live inside a bubble of status based on publishing in journals that only they themselves read. The academic world couldn’t be more corrupt and dysfunctional than it is in 2018. The cost-effectiveness of conventional colleges is abysmal, and if you look at graphs like:

College Tuition and Fees vs Overall Inflation

you will see the unsustainable trajectory they are on. The other place improvements won’t come from is large companies like Apple and Facebook, which profit mightily from things staying exactly as they are. Moreover, in a large company a disruptive technology like this would invalidate and seriously depreciate a multi-billion-dollar codebase, so even if it were invented there, it would not see the light of day.