The Black Liszt

How to Improve Software Productivity and Quality: Schema Enhancements

Most efforts to improve programmer productivity and software quality fail to generate lasting gains. New languages, new project management and the rest are decades-long disappointments – not that anyone admits failure, of course.

The general approach of software abstraction, i.e., moving program definition from imperative code to declarative metadata, has decades of success to prove its viability. It’s a peculiar fact of software history and Computer Science that the approach is not mainstream. So much the more competitive advantage for hungry teams that want to fight the entrenched software armies and win!

The first step – and it’s a big one! – on the journey to building better software more quickly is to migrate application functionality from lines of code to attributes in central schema (data) definitions.

Data Definitions and Schemas

Every software language has two kinds of statements: statements that define and name data, and statements that act on that data by getting, processing and storing it. Definitions are like a map of what exists. Action statements are like sets of directions for going between places on a map. The map/directions metaphor is key here.

In practice, programmers tend to first create the data definitions and then proceed to spend the vast majority of their time and effort creating and evolving the action statements. If you look at most programs, the vast majority of the lines are “action” lines.

The action lines are endlessly complex, needing books to describe all the kinds of statements, the grammar, the available libraries and frameworks, etc. The data definitions are extremely simple. They first and foremost name a piece of data, and then (usually) give its type, which is one of a small selection of things like integer, character, and floating point (a number that has decimal digits). There are often some grouping and array options that allow you to put data items into a block (like address with street, town and state) and sets (like an array for days in a year).
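To make the map/directions contrast concrete, here is a tiny, purely illustrative sketch in Python (the field names are made up) of what pure data definitions look like: names, types, a grouping and an array, with no action statements at all.

```python
from dataclasses import dataclass
from typing import List

# Pure data definitions: names, types, a grouping and an array.
# No "action" statements here -- this is the map, not the directions.

@dataclass
class Address:          # a grouping ("block") of related fields
    street: str
    town: str
    state: str

@dataclass
class Customer:
    name: str
    age: int
    balance: float
    address: Address            # a nested group
    daily_sales: List[float]    # an array/set of values
```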

One of the peculiar elements of software language evolution is whether the data used in a program is defined in a single place or multiple places. You would think – correctly! – that the sensible choice is a single definition. That was the case for early batch-oriented languages like COBOL, with its shared copybook library of data definitions. A single definition was also a key aspect of the 4GL languages and part of what fueled their high productivity.

Then the DBMS grew into a standard part of the software toolkit; each DBMS has its own set of data definitions, called a “schema.” A schema gives each piece of data a name and a data type and makes it part of a grouping (a table). That’s pretty much it! Then software began to be developed in layers, like UI, server and database, each with its own data/schema definitions and language. Next came services and distributed applications, each with its own data definitions and often written in different languages. Each of these things needs to “talk” with the others, passing and getting back data, with further definitions for the interfaces.

The result of all this was an explosion of data definitions, with what amounts to the same data being defined multiple times in multiple languages and locations in a program.

In terms of maps and directions, this is very much like having many different collections of directions, each of which has exactly and only the parts of the map those directions traverse. Insane!

The BIG First Step towards Productivity and Quality

The first big step towards sanity, with the nice side effect of productivity and quality, is to centralize all of a program’s data definitions in a single place. Eliminate the redundancy!

Yes, it may take a bit of work. The central schema would be stored in a multi-part file in a standardized format, with selectors and generators for each program that shares the schema. Each sub-program (like a UI or service) would generally use only some of the program’s data, and would name the part it uses in a header. A translator/generator would then grab the relevant subset of definitions and generate them in the format required for the language of the program – generally not a hard task, and one that in the future should be provided as a widely available toolset.
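Here's a minimal sketch of the idea in Python, with an assumed file format and made-up field names. The point is simply that a centralized schema plus a small selector and generator can emit whatever definitions each layer needs.

```python
# Hypothetical central schema: one place where every field is defined.
CENTRAL_SCHEMA = {
    "customer": {
        "id":    {"type": "integer"},
        "name":  {"type": "string", "max_length": 100},
        "email": {"type": "string", "max_length": 254},
        "zip":   {"type": "string", "max_length": 10},
    }
}

# Each sub-program declares (in its "header") which subset of the schema it uses.
UI_SUBSET = {"customer": ["name", "email"]}

SQL_TYPES = {"integer": "INTEGER", "string": "VARCHAR"}

def select(schema, subset):
    """Grab only the definitions a given sub-program asked for."""
    return {
        table: {f: schema[table][f] for f in fields}
        for table, fields in subset.items()
    }

def generate_sql(schema):
    """One of several generators: emit a CREATE TABLE per group."""
    statements = []
    for table, fields in schema.items():
        cols = []
        for name, attrs in fields.items():
            sql_type = SQL_TYPES[attrs["type"]]
            if "max_length" in attrs:
                sql_type += f"({attrs['max_length']})"
            cols.append(f"  {name} {sql_type}")
        statements.append(f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);")
    return "\n".join(statements)

print(generate_sql(CENTRAL_SCHEMA))          # database layer
print(select(CENTRAL_SCHEMA, UI_SUBSET))     # subset handed to the UI layer's generator
```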

Why bother? Make your change in ONE place, and with no further work it’s deployed in ALL relevant places. Quality (no errors, no missing a place to change) and productivity (less work). You just have to bend your head around the "radical" thought that data can be defined outside of a program.

If you're scratching your head and thinking that this approach doesn't fit into the object-oriented paradigm in which data definitions are an integral part of the code that works with them, i.e. a Class, you're right. Only by breaking this death-grip can we eliminate the horrible cancer of redundant data definitions that make bodies of O-O code so hard to write and change. That is the single biggest reason why O-O is bad -- but there are more!

The BIG Next Step towards Productivity and Quality

Depending on your situation, this can be your first step.

Data definitions, as you may know, are pretty sparse. There is a huge amount of information we know about data that we normally express in various languages, often in many places. When we put a field on a screen, we may:

  • Set permissions to make it not visible, read-only or editable
  • Mark it as required or optional if it can be entered
  • Display a label for the field
  • Control the size and format of the field to handle things like selecting from a list of choices or entering a date
  • Check the input to make sure it’s valid, and display an error message if it isn’t
  • Group fields for display and give the group a label, like an address

Here's the core move: each one of the above bullet items -- and more! -- should be defined as attributes of the data/schema definition. In other words, these things shouldn't be arguments of functions or otherwise part of procedural code. They should be attributes of the data definition, just as Type already is.

This is just in the UI layer. Why not take what’s defined there and apply it as required at the server and database layers – surely you want the same error checking there as well, right?
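As a rough illustration (the attribute names are invented; this isn't a real toolset), here's what it looks like when those bullet items live on the schema as attributes rather than in procedural code. The same attribute-driven check can then run in the UI, on the server, or at the database layer.

```python
import re

# Hypothetical field definitions carrying UI/validation metadata as attributes,
# not as arguments buried in procedural code.
FIELDS = {
    "email": {
        "type": "string",
        "label": "Email address",
        "required": True,
        "editable": True,
        "max_length": 254,
        "pattern": r"^[^@\s]+@[^@\s]+\.[^@\s]+$",
        "error_message": "Please enter a valid email address.",
    },
    "zip": {
        "type": "string",
        "label": "ZIP code",
        "required": False,
        "editable": True,
        "max_length": 10,
        "pattern": r"^\d{5}(-\d{4})?$",
        "error_message": "ZIP code must look like 12345 or 12345-6789.",
    },
}

def validate(field_name, value):
    """One attribute-driven check, usable by the UI, the server and the database layer."""
    attrs = FIELDS[field_name]
    if not value:
        return None if not attrs["required"] else attrs["error_message"]
    if len(value) > attrs["max_length"]:
        return attrs["error_message"]
    if attrs.get("pattern") and not re.match(attrs["pattern"], value):
        return attrs["error_message"]
    return None  # valid

print(validate("zip", "12345-6789"))      # -> None (valid)
print(validate("email", "not-an-email"))  # -> the field's error message
```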

Another GIANT step forward

Now we get to some fun stuff. You know all that rhetoric about “inheritance” you hear about in the object-oriented world? The stuff that sounds good but never much pans out? In schemas and data definitions, inheritance is simple and … it’s effective! It’s been implemented for a long time in the DBMS concept of domains, but it makes sense to greatly extend it and make it multi-level and multi-parent.

You’ve gone to the trouble of defining the multi-field group of address. There may be variations that have lots in common, like billing and shipping address. Why define each kind of address from scratch? Why not define the common parts once and then say what’s unique about shipping and billing?

Once you’re in the world of inheritance, you start getting some killer quality and productivity. Suppose it’s decades ago and the USPS has decided to add another 4 digits to the zip code. Bummer. If you’re in the enhanced schema world, you just go into the master definition, make the change, and voila! Every use of zip code is now updated.
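Here's a small sketch of what schema inheritance could look like, using a made-up format and a single-parent resolver for brevity (the real thing would be multi-level and multi-parent, as described above). Change the zip_code domain in one place and every address that inherits it follows.

```python
# Hypothetical domain/inheritance sketch: children state only what differs.
DOMAINS = {
    "zip_code": {"type": "string", "max_length": 5},   # change 5 -> 10 here once...
    "address": {
        "fields": {
            "street": {"type": "string", "max_length": 100},
            "town":   {"type": "string", "max_length": 50},
            "state":  {"type": "string", "max_length": 2},
            "zip":    {"inherits": "zip_code"},          # ...and every address follows
        }
    },
    "shipping_address": {"inherits": "address",
                         "fields": {"delivery_notes": {"type": "string", "max_length": 200}}},
    "billing_address":  {"inherits": "address",
                         "fields": {"attention_of": {"type": "string", "max_length": 50}}},
}

def resolve(name):
    """Flatten a definition by walking its 'inherits' chain (single parent here, for brevity)."""
    d = DOMAINS[name]
    fields = {}
    if "inherits" in d:
        fields.update(resolve(d["inherits"]).get("fields", {}))
    for fname, fdef in d.get("fields", {}).items():
        fields[fname] = resolve(fdef["inherits"]) if "inherits" in fdef else fdef
    return {"fields": fields} if fields else dict(d)

print(resolve("shipping_address"))   # street/town/state/zip plus delivery_notes
```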

Schema updating with databases

Every step you take down the road of centralized schema takes some work but delivers serious benefits. So let’s turn to database schema updates.

Everyone who works with a database knows that updating the database schema is a process. Generally you try to make updates backwards compatible. It’s nearly always the case that the database schema change has to be applied to the test version of the database first. Then you update the programs that depend on the new or changed schema elements and test with the database. When it’s OK, you do the same to the production system, updating the production database first before releasing the code that uses it.

Having a centralized schema that encompasses all programs and databases doesn’t change this, but makes it easier – fewer steps with fewer mistakes. First you make the change in the centralized schema. Then it’s a matter of generating the data definitions first for the test systems (database and programs) and then for the production system. You may have made just a couple of changes to the centralized schema, but because of inheritance and all the data definitions that are generated, you might end up with dozens of changes in your overall system – UI pages, back end services, API calls and definitions and the database schema. Made by hand, an omission or mistake in just one of those dozens of changes would mean a bug that has to be found and fixed; generated from the centralized schema, they stay consistent.
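To make the workflow concrete, here's a hedged sketch (made-up schema format, not a real migration tool) of diffing two versions of the centralized schema and emitting the database portion of the change. The same diff would drive regeneration of the UI and service definitions.

```python
# Hypothetical sketch: diff two versions of the centralized schema and emit
# the database portion of the change as ALTER TABLE statements.
OLD = {"customer": {"zip": {"type": "string", "max_length": 5}}}
NEW = {"customer": {"zip": {"type": "string", "max_length": 10},
                    "email": {"type": "string", "max_length": 254}}}

def diff_to_sql(old, new):
    statements = []
    for table, fields in new.items():
        for name, attrs in fields.items():
            col_type = f"VARCHAR({attrs['max_length']})"
            if name not in old.get(table, {}):
                statements.append(f"ALTER TABLE {table} ADD COLUMN {name} {col_type};")
            elif attrs != old[table][name]:
                statements.append(f"ALTER TABLE {table} ALTER COLUMN {name} TYPE {col_type};")
    return statements

for stmt in diff_to_sql(OLD, NEW):
    print(stmt)   # apply to the test database first, then (after testing) to production
```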

Conclusion

I’ve only scratched the surface of a huge subject in this post. But in practice, it’s a hill you can climb. Each step yields benefits, and successive steps deliver increasingly large results in terms of productivity and quality. The overall picture should be clear: you are taking a wide variety of data definitions expressed in code in different languages and parts of a system and step by step, collapsing them into a small number of declarative, meta-data attributes of a centralized schema. A simple generator (compile-time or run-time) can turn the centralized information into what’s needed to make the system work.

In doing this, you have removed a great deal of redundancy from your system. You’ve made it easier to change. Non-redundancy is rarely looked on as a key thing to strive for, but since the vast majority of what we do to software is change it, non-redundancy is the most important measure of goodness that software can have.

What I've described here are just the first steps up the mountain. Near the mountain's top, most of a program's functionality is defined by metadata!

FWIW, the concept I'm explaining here is an OLD one. It's been around and been implemented to varying extents in many successful production systems. It's the core of climbing the tree of abstraction. When and to the extent it's been implemented, the productivity and quality gains have in fact been achieved. Ever hear of the Rails framework in Ruby, with its DRY (Don't Repeat Yourself) principle? A limited version of the same idea. Apple's credit card runs on a system built on these principles today. This approach is practical and proven. But it's orthogonal to what is generally taught in Computer Science and practiced in mainstream organizations.

This means that it's a super-power that software ninjas can use to program circles around the lumbering armies of mainstream software development organizations.

Posted by David B. Black on 05/23/2022 at 03:15 PM | Permalink | Comments (0)

The Goals of Software Architecture

What goals should software architecture strive to meet? You would think that this subject would have been intensely debated in industry and academia and the issue resolved decades ago. Sadly, such is not the case. Not only can't we build good software that works in a timely and cost-effective way, we don't even have agreement or even discussion about the goals for software architecture!

Given the on-going nightmare of software building and the crisis in software that is still going strong after more than 50 years, you would think that solving the issue would be top-of-mind. As far as I can tell, not only is it not top-of-mind, it’s not even bottom-of-mind. Arguably, it’s out-of-mind.

What is Software Architecture?

A software architecture comprises the tools, languages, libraries, frameworks and overall design approach to building a body of software. While the mainstream approach is that the best architecture depends on the functional requirements of the software, wouldn’t it be nice if there were a set of architectural goals that were largely independent of the requirements for the software? Certainly such an independence would be desirable, because it would shorten and de-risk the path to success. Read on and judge for yourself whether there is a set of goals that the vast majority of software efforts could reasonably share.

The Goals

Here’s a crack at common-sense goals that all software architectures should strive to achieve and/or enable. The earlier items on the list should be very familiar. The later items may not be goals of every software effort; the greater the scope of the effort, the more important they are likely to be.

  • Fast to build
    • This is nearly universal. Given a choice, who wants to spend more time and money getting a software job done?
  • View and test as you build
    • Do you want to be surprised at the end by functionality that isn't right or deep flaws that would have been easy to fix during the process?
  • Easy to change course while building
    • No set of initial requirements is perfect. Things change, and you learn as you see early results. There should be near-zero cost of making changes as you go.
  • Minimal effort for fully automated regression testing
    • What you've built should work. When you add and change, you shouldn't break what you've already built. There should be near-zero cost for comprehensive, on-going regression testing.
  • Seconds to deploy and re-deploy
    • Whether your software is in progress or "done," deploying a new version should be near-immediate.
  • Gradual, controlled roll-out
    • When you "release" your software, who exactly sees the new version? It is usually important to control who sees new versions when.
  • Minimal translation required from requirements to implementation
    • The shortest path with the least translation from what is wanted to the details of building it yields speed and accuracy and minimizes mis-translations.
  • Likelihood of slowness, crashes or downtime near zero
    • 'Nuff said.
  • Easily deployed to all functions in an organization
    • Everything that is common among functions and departments is shared
    • Only differences between functions and departments need to be built
  • Minimal effort to support varying interfaces and roles
    • Incorporate different languages, interfaces, modes of interaction and user roles into every aspect of the system’s operation in a central way
  • Easily increase sophisticated work handling
    • Seamless incorporation of history, evolving personalization, segmentation and contextualization in all functions and each stage of every workflow
  • Easily incorporate sophisticated analytics
    • Seamless ability to integrate on and off-line Analytics, ML, and AI into workflows
  • Changes the same as building
    • Since software spends most of its life being changed, all of the above for changes

Let’s have a show of hands. Anyone who thinks these are bad or irrelevant goals for software, please raise your hand. Anyone?

I'm well aware that the later goals may not be among the early deliverables of a given project. However, it's important to acknowledge such goals and their rising importance over time so that the methods to achieve earlier goals don't increase the difficulty of meeting the later ones.

Typical Responses to the Goals

I have asked scores of top software people and managers about one or more of these goals. I detail the range of typical responses to a couple of them in my book on Software Quality.

After the blank stare, the response I've most often gotten is a strong statement about the software architecture and/or project management methods they support. These include:

  • We strictly adhere to Object-oriented principles and use language X that minimizes programmer errors
  • We practice TDD (test-driven development)
  • We practice X, Y or Z variant of Agile with squads for speed
  • We have a micro-services architecture with enterprise queuing and strictly enforced contracts between services
  • Our quality team is building a comprehensive set of regression tests and a rich sandbox environment.
  • We practice continuous release and deployment. We practice dev ops.
  • We have a data science team that is testing advanced methods for our application

I never get any discussion of the goals or their inter-relationships. Just a leap to the answer. I also rarely get "this is what I used to think/do, but experience has led me to that." I don't hear concerns or limitations of the strongly asserted approaches. After all, the people I ask are experts!

What's wrong with these responses?

In each case, the expert asserts that his/her selection of architectural element is the best way to meet the relevant goals. Yet for the typical answers listed above, the results rarely stand out from the crowd.

The key thing that's wrong is the complete lack of principles and demonstration that the approaches actually come closer to meeting the goals than anything else.

The Appropriate Response to the Goals

First and foremost, how about concentrating on the goals themselves! Are they the right goals? Do any of them work against the others?

That's a major first step. No one is likely to get excited, though. Most people think goals like the ones listed above don't merit discussion. They're just common sense, after all.

Things start to get contentious when you ask for ways to measure progress towards each goal. If you're going to the North Pole or climbing Mt. Everest, shouldn't you know where it is, how far away you are, and whether your efforts are bringing you closer?

Are the goals equally important? Is their relative importance constant, or does the importance change?

Wouldn't it be wonderful if someone, somewhere took on the job of evaluating existing practices and ... wait for it ... measured the extent they achieved the goals. Yes, you might not know what "perfect" is, but surely relative achievement can be measured.

For example, people are endlessly inventing new software languages and making strong claims about their virtues. Suppose similar claims were made about new bats in baseball. Do you think it might be possible that the batter's skill makes more of a difference than the bat? Wouldn't it be important to know? Apparently, this is one of the many highly important -- indeed, essential -- questions in software that never gets asked, let alone answered.

Along the same lines, wouldn't it be wonderful if someone took on the job of examining outliers? Projects that worked out not just in the typical dismal way, but failed spectacularly? On the other end of the spectrum, wouldn't amazingly fast jobs be interesting? This should be done for start-from-scratch projects, and it is equally important for changes to existing software.

A whole slew of PhD's should be given out for pioneering work on identifying and refining the exact methods that make progress towards the goals. It's likely that minor changes to the methods used to meet the earlier goals well would make a huge difference in meeting later goals such as seamlessly incorporating the results of analytics.

Strong Candidates for Optimal Architecture

After decades of programming and then more of examining software in the field, I have a list of candidates for optimal architecture. My list isn't secret -- it's in books and all over this blog. Here are a couple of places to start:

Speed-optimized software

Occamality

Champion Challenger QA

Microservices

The Dimensions

Abstraction progression

The Secrets

The books

Conclusion

I've seen software fashions change over the years, with things getting hot, fading away, and sometimes coming back with a new name. The fashions get hot, and all tech leaders who want to be seen as modern embrace them. No real analysis. No examination of the principles involved. Just claims. At the same time, universities hand out degrees in Computer Science from professors who are largely unscientific. In some ways they'd be better off in Art History -- except they rarely have taste and don't like studying history either.

I look forward to the day when someone writes what I hope will be an amusing history of the evolution of Computer Pseudo-Science.

Posted by David B. Black on 05/09/2022 at 05:03 PM | Permalink | Comments (0)

Making Fun of Object-Orientation in Software Languages

When a thing is held in exaltation by much of the world and its major institutions, when that thing is sure that it's the best thing ever, and when people who support the thing are convinced that they're superior to the rest of us, who are nothing but unsophisticated hackers, then you've got something that's fun to make fun of. A target-rich environment.

There are lots of things to make fun of in software. There are project managers who solemnly pronounce that, due to their expertise, the project is on track and will be delivered on time and to spec. There are the software architects who haven't met a real user or written a line of production code in years, who proudly announce their design of a project to migrate to a graph database or micro-services. There are other juicy targets. But none comes close to the exalted ridiculousness of object-oriented languages (the purer the better) and those who shill for them.

Are you a member of the O-O cult, offended by this talk of making fun of supposed imperfections of the one true approach to programming languages? My sincere sympathies to you. Check this out. It's heavy-duty cult de-programming material. It probably won't work for you, but give it a try.

Back to the fun. Here's a start from an excellent essay by someone who tried for years to make OOP work.

OOP

Here are some wonderful highlights from a collection made by a person who supports OOP but thinks most programmers don't know how to program it well.

Edsger W. Dijkstra (1989)
“TUG LINES,” Issue 32, August 1989
“Object oriented programs are offered as alternatives to correct ones” and “Object-oriented programming is an exceptionally bad idea which could only have originated in California.”

Paul Graham (2003)
The Hundred-Year Language
“Object-oriented programming offers a sustainable way to write spaghetti code.”

Here are highlights from a wonderfully rich collection.

“object-oriented design is the roman numerals of computing.” – Rob Pike

“The phrase "object-oriented” means a lot of things. Half are obvious, and the other half are mistakes.“ – Paul Graham

“The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.” – Joe Armstrong

“I used to be enamored of object-oriented programming. I’m now finding myself leaning toward believing that it is a plot designed to destroy joy.” – Eric Allman

OO is the “structured programming” snake oil of the 90's. Useful at times, but hardly the “end all” programming paradigm some like to make out of it.

From another section of the same wonderfully rich collection.

Being really good at C++ is like being really good at using rocks to sharpen sticks. – Thant Tessman

Arguing that Java is better than C++ is like arguing that grasshoppers taste better than tree bark. – Thant Tessman

There are only two things wrong with C++: The initial concept and the implementation. – Bertrand Meyer

More from a good extended essay on OOP:

“C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do, it blows away your whole leg.”

It was Bjarne Stroustrup who said that, so that’s ok, I guess.

“Actually I made up the term ‘object-oriented’, and I can tell you I did not have C++ in mind.” — Alan Kay

“There are only two things wrong with C++: The initial concept and the implementation.” — Bertrand Meyer

“Within C++, there is a much smaller and cleaner language struggling to get out.” — Bjarne Stroustrup

“C++ is history repeated as tragedy. Java is history repeated as farce.” — Scott McKay

“Java, the best argument for Smalltalk since C++.” — Frank Winkler

“If Java had true garbage collection, most programs would delete themselves upon execution.” — Robert Sewell

Object-oriented design and programming remains a useful way to think about parts of some software problems, as I've described here. As a universal approach to software, it's beyond bad. Beyond ludicrous. It is such a joke that the only thing to do is visit it briefly, make jokes, and then move on, with a tinge of regret about the science-less-ness of Computer Science.

 

Posted by David B. Black on 05/03/2022 at 09:14 AM | Permalink | Comments (0)

The forbidden question: What caused the obesity epidemic?

There is an obesity problem. Everyone knows it. Public health authorities proclaim it. Over half the population in the US is now obese. The consequences of being obese in terms of health are serious. Solutions are proposed, but they don’t seem to work. The question that’s almost never asked, the answer to which would help us understand AND FIX the problem, is pure common sense: what started the epidemic? What changed to cause the steady rise of overweight and obese people?

The reason no one wants to ask the question is because the most probable answer is something our major institutions, Experts and Authorities don’t want us to know: Their nutrition recommendations, widely promoted and visible on most food labels that you buy, are based on bad, corrupted science.

We now know how and why the science was wrong. After much study and careful trials, we know what's right. But because of the refusal of the authorities to admit and correct their error, millions of people continue to suffer and die of diseases they would not have if our Medical Health Elites would suck it up, admit error, and fix it.

There is an epidemic of obesity

The epidemic. Well known, accepted. Here's the FDA:

[FDA graphic on the obesity epidemic]

Here's the CDC:

Obesity is a serious chronic disease, and the prevalence of obesity continues to increase in the United States. Obesity is common, serious, and costly. This epidemic is putting a strain on American families, affecting overall health, health care costs, productivity, and military readiness.

Is it under control? The CDC again:

From 1999–2000 through 2017–2018, US obesity prevalence increased from 30.5% to 42.4%. During the same time, the prevalence of severe obesity increased from 4.7% to 9.2%.

What's bad about being obese?

According to the CDC, here are some of the consequences of being obese:

[CDC graphic: health consequences of obesity]

What are the medical costs resulting from obesity?

A highly detailed study was published in 2021 going into depth to determine the direct medical costs of obesity.

RESULTS: Adults with obesity in the United States compared with those with normal weight experienced higher annual medical care costs by $2,505 or 100%, with costs increasing significantly with class of obesity, from 68.4% for class 1 to 233.6% for class 3. The effects of obesity raised costs in every category of care: inpatient, outpatient, and prescription drugs. ...  In 2016, the aggregate medical cost due to obesity among adults in the United States was $260.6 billion.

In other words, obese people have more than double the medical care costs compared to those who are not obese. More important, the obese people themselves suffer the poor health resulting from their condition!

How are we told to prevent and/or fix it?

From the CDC:

Obesity is a complex health issue resulting from a combination of causes and individual factors such as behavior and genetics.

Healthy behaviors include regular physical activity and healthy eating. Balancing the number of calories consumed from foods and beverages with the number of calories the body uses for activity plays a role in preventing excess weight gain.

A healthy diet pattern follows the Dietary Guidelines for Americans which emphasizes eating whole grains, fruits, vegetables, lean protein, low-fat and fat-free dairy products, and drinking water.

In other words, exercise more, eat less, and follow the official diet guidelines which emphasize avoiding fat in meat and dairy.

The origins of the obesity epidemic

Did the obesity epidemic appear out of nowhere, for no reason? Nope. The key to understanding and responding to any epidemic is to trace its origins to the time and place of its start. Only then can you understand the problem and often get good ideas about how to mitigate the epidemic and prevent similar ones from getting started.

Look at this chart from the CDC:

[CDC chart: trends in obesity and severe obesity among US adults]

A sharp upwards turn in Obesity and Severe Obesity took place in 1976-1980 and has continued rising. From the tables in the document from which this chart was taken, Obesity was about 14% and has risen to 43%, while Severe Obesity was about 1% and has risen to over 9%. That's about a 3X and a 9X increase. Total Obesity is now over 50% of the population! During this period the Overweight share has remained about the same (about 32%), which means that a large number of people "graduated" to higher levels of weight, probably with many normal-weight people becoming Overweight while a similar number of Overweight people became Obese.

What happened when the "hockey stick" upwards trend in obesity started? It turns out something big happened, with lots of public attention. According to the government website on the history of nutritional guidelines:

A turning point for nutrition guidance in the U.S. began in the 1970s with the Senate Select Committee on Nutrition and Human Needs....

In 1977, after years of discussion, scientific review, and debate, the U.S. Senate Select Committee on Nutrition and Human Needs, led by Senator George McGovern, released Dietary Goals for the United States. ...

The recommendations included:

Increase the consumption of complex carbohydrates and “naturally occurring” sugars...

Reduce overall fat consumption...

Reduce saturated fat consumption to account for about 10 percent of total energy intake...

The widely publicized recommendations were followed up by the first in the series of official expert-approved documents:

In February 1980, USDA and HHS collaboratively issued Nutrition and Your Health: Dietary Guidelines for Americans,

It's important to note that the focus was NOT on obesity. It was on diet-related health, with a particular focus on heart disease. The consensus of Expert opinion at the time was that eating saturated fat causes heart disease. In an effort to reduce heart disease, the authorities started the drum-beat of "Stay healthy! Eat less saturated Fat!"

Obesity took off when we obeyed the Experts

The new dietary advice was shouted from the hill tops. It was pushed by government agencies. It was endorsed by every major health institution, and pushed by nutritionists and doctors everywhere. It was emblazoned on food packaging by law, each package stating how much of the evil, heart-killing saturated fat was in each serving, and how much of your "daily allowance" it used up.

The food that was offered in grocery stores and restaurants changed to reflect the "scientific" consensus. Bacon was bad. If you had to eat meat (even though you shouldn't), you should eat lean (no fat) meat. All these things are still what we see!

Here is a study based on the US National Health and Nutrition Examination Survey (NHANES) that demonstrates the strong linkage between the diet recommendations and the growth of obesity.

From a valuable study on obesity (behind a paywall):

When we put together the following…

1) Obesity is not a simplistic imbalance of energy in and energy out, but a far more complex matter of how, biochemically, the body can store or utilize fat. Carbohydrate is the best macronutrient to facilitate fat storage and prevent fat utilization.

2) Fat/protein calories have jobs to do within the body – they can be used for basal metabolic repair and maintenance. Carbohydrate is for energy alone; it needs to be burned as fuel or it will be stored as fat.

… carbohydrates can be seen as uniquely suited to weight gain and uniquely unsuited to weight loss. The macronutrient that we have been advising people to eat more of is the very macronutrient that enables fat to be stored and disables fat from being utilized.

Increasingly people ate what they were told to eat. Young people grew up eating in the new style, with vastly more packaged foods, sugar and carbohydrates than earlier generations. No surprise, they got fat early in life, and stayed fat.

Marty Makary MD, surgeon and Professor at Johns Hopkins, treats this from a different angle in his recent book.

Dr. Dariush Mozaffarian, dean of Tufts University’s Friedman School of Nutrition—the nation’s leading nutrition school ... recently wrote in the Journal of the American Medical Association, “We really need to sing it from the rooftops that the low-fat diet concept is dead, there are no health benefits to it.” As a gastrointestinal surgeon and advocate for healthful foods, I’m well aware how this low-fat teaching is based on the medical establishment’s embarrassing, outdated theory that saturated fat causes heart disease. A landmark 2016 article in the Journal of the American Medical Association found that the true science was actually being suppressed by the food industry. Highly respected medical experts like my former Johns Hopkins colleague Dr. Peter Attia are now correcting the medical establishment’s sloppy teachings. He and many other lipidologists know that the low-fat bandwagon has damaged public health. It was driven by an unscientific agenda advanced by the American Heart Association and the food industry, which sponsored the misleading food pyramid. These establishment forces spent decades promoting addictive, high-carbohydrate processed foods because the low-fat foods they endorsed require more carbohydrates to retain flavor. That 40-year trend perfectly parallels our obesity epidemic. Medical leaders like Dr. Attia have been trying to turn this aircraft carrier around, but it’s been a challenge. Despite the science, the dogma remains pervasive. In hospitals today, the first thing we do to patients when they come out of surgery, exhausted and bleary-eyed, is to hand them a can of high-sugar soda. Menus given to hospitalized patients promote low-fat options with a heart next to those menu items. And when physicians order food for patients in electronic health records, there’s a checkbox for us to order the “cardiac diet,” which hospitals define as a low-fat diet. Despite science showing that natural fats pose no increased risk of heart disease and that excess sugar is the real dietary threat to health, my hospital still hands every patient a pamphlet recommending the “low-fat diet” when they’re discharged from the cardiac surgery unit, just as we have been doing for nearly a half century. But nowhere is that now debunked low-fat recommendation propagated as much as in wellness programs.

For more study

The experts are clear on this subject. You already know this, but here are highlights of their views on fat and on cholesterol. Here is background on how saturated fat and cholesterol became menaces. Here is why you should eat lots of saturated fat and why you should not take drugs to lower your cholesterol.

With the billion-dollar-revenue American Heart Association continuing to villainize saturated fat, this insanity is unlikely to stop soon.

Conclusion

The cause of the obesity epidemic is clear. No one talks about it because the people in charge refuse to admit their role in causing it. As the evidence from RCT's continues to pile up, careful reading shows that the emphatic language about saturated fat has lightened up a bit, but we're not even close to the equivalent of acknowledging, for example, that smoking cigarettes is bad for you. We should be shouting "eat lots of natural saturated fat, the kind in meat, milk, cheese and eggs." We're not there yet. Educated people can nonetheless make their own decisions and do just that -- and improve their health as a result.

 

Posted by David B. Black on 04/25/2022 at 05:12 PM | Permalink | Comments (0)

How to Fix Software Development and Security: A Brief History

As the use of computers grew rapidly in the 1960’s, the difficulty of creating quality software that met customer needs became increasingly evident. Wave after wave of methods were created, many of them becoming standard practice – without solving the problem! This is a high-level survey of the failed attempts to solve the problem of software development and security -- all of which are now standard practice in spite of failing to fix the problem! There are now software auditing firms that will carefully examine a software organization to see in which ways it deviates from ineffective standard practice – so that it can be “corrected!”

Houston, We've Had a Problem

The astronauts of the Apollo 13 mission to the Moon radioed base control about the problem that threatened the mission and their lives.

[Photo: Apollo 13 mission control, with CAPCOM Jack Lousma, Slayton, Mattingly, Brand and Young, April 14, 1970]

The astronauts were saved after much effort and nail-biting time.

Should the people in charge of software make a similar call to mission control? Yes! They have made those calls, and continue to make them, thousands of times per day!!

Getting computers to do what you want by creating the kind of data we call software was a huge advance over the original physical method of plugs and switches. But it was still wildly difficult. Giant advances were made in the 1950’s that largely eliminated the original difficulty.

As the years went on into the 1960’s, the time, cost and trouble of creating and modifying software became hard to ignore. The problems got worse, and bad quality surfaced as a persistent issue. There were conferences of experts and widely read books, including one from a leader of the IBM 360 Operating System project, one of the largest efforts of its kind.

[Book cover: The Mythical Man-Month]

The Apollo 13 astronauts were saved, but the disaster in software development has resisted all the treatments that have been devised for it. With the invention and spread of the internet, we are now plagued with cybercrime, to the extent that ransomware attacks now take place (as of June 2021) ... get ready for it ... 149,000 times per week!! I think a few more exclamation points would have been appropriate for that astounding statistic, but I'm a laid-back kind of guy, so I figured I'd stick with a mild-mannered two. Here's my description from more than four years ago on the state of ransomware and the response of the "experts."

Following are highlights of the methods that have been devised to solve the problem of software development and security, none of which have worked, but all of which have nonetheless become standard practice.

Programming Languages

The assembled experts in 1968 decided that "structured programming" would solve the problem of writing software that worked. Here's how that turned out. A group of experts recently convened to survey 50 years of progress in programming languages. They were proud of themselves, but had actually made things worse. Highlights here. One of those early efforts led to object-oriented languages, which are now faith-based dogma in the evidence-free world of Computer Science. New languages continue to be invented that claim to reduce programmer errors and increase productivity. What always happens is that the "advances" are widely publicized while the large-scale failures are concealed; here are a couple of juicy examples. Above all, the flow of new languages provides clear demonstration that programmers don't have enough useful work to do to keep them busy.

Outsourcing

While programmers babbled among themselves about this language or that method, company managers couldn't help noticing that IT budgets continued to explode while the results were an avalanche of failures. Service companies emerged that promised better results than company management could achieve, because they claimed to be experts in software and its management.

One of the pioneers in computer outsourcing was Ross Perot's Electronic Data Systems (EDS), which had $100 million in revenue by 1975. By the mid-1980's it had over 40,000 employees and over $4 billion in revenue. It continued to grow rapidly, along with a growing number of competitors including the services branches of the major computer vendors. In a typical deal, a company would turn over some or all of its hardware and software operations to the outsourcer, who would add layers of management and process. They succeeded by lowering expectations and hiding the failures.

Off-shoring

As communications and travel grew more cost effective, basing outsourced operations in another country became practical. Off-shoring had the huge advantage that, while the results were no better than regular outsourcing, employees were paid much less than US wages, which enabled marginally lower prices and a huge infrastructure of management and reporting that further disguised the as-usual results.

US-based outsourcers started doing this fairly early, while non-US vendors providing software services grew rapidly. Tata Consultancy Services first established a center in the 1980's, now has revenue over $20 billion, has been the world's most valuable IT company, and employs over 500,000 people. Infosys is another India-based giant, along with hundreds of others.

Project Management

Deeper investment in the growing and evolving collection of project management methods has been a key part of software organizations. The larger the organization, the greater the commitment to widely accepted, mainstream project management methods. As new methods enter the mainstream, for example Agile, new managers and outsourcing organizations sell their embracing of the methods as a reason for hiring and contracting managers to feel confident in the decisions they are making. Project management has definitely succeeded in forming a thick layer of obfuscation over the reality of software development, quality and security. The only method of project management that has ever delivered "good" results for software is the classic method of wild over-estimation. See this for one of the conceptual flaws at the heart of software project management, see this for an overview and this for a comprehensive look.

Education and Certification

Nearly all professions have education and certification requirements, from plumbers and electricians to doctors. Software organizations grew to embrace imposing education and certification requirements on their people as a sure method to achieve better outcomes. This came to apply not just to the developers themselves but also to the QA people, security people and project managers. Unlike those other professions, computer education and certification has been distinctly divorced from results.

Standards, Compliance and Auditing

Standards were developed early on to assure that all implementations of a given software language were the same. As software problems grew, standards were increasingly created to solve the problem. The standards grew to include fine-grained detail of the development process itself and even standards that specified the "maturity level" of the organization.

Just as a new house is inspected to assure that it complies with the relevant building code, including specialists to inspect the plumbing and electricity, the software industry embraced a discipline of auditing and compliance certification to assure that the organization was meeting the relevant standards. A specialized set of standards grew for various aspects of computer security, with different measures applied to financial transactions and healthcare records for example. The standards and compliance audits have succeeded in consuming huge amounts of time and money while having no positive impact on software quality and security.

Suppress, Deflect and Ignore

Everything I've described above continues apace, intent on its mission of making software development successful and computer security effective. People who write code spend much of their time engaged in activities that are supposed to assure that the code they write meets the needs in a timely manner and has no errors, using languages that are supposed to help them avoid error, surrounded by tests created by certified people using certified methods to assure that it is correct. Meanwhile highly educated certified computer security specialists assure that security is designed into the code and that it will pass all required audits when released.

How is this working out? In spite of widespread information blackouts imposed on failures, enough failures are blatant enough that they can't be suppressed or ignored, and from them we know that the problems aren't getting better.

Conclusion

We're over 50 years into this farce, which only continues because the clown show that sucks in all the money is invisible to nearly everyone. The on-the-scene reporters who tell all the spectators what's happening on the field can't see it either. What people end up hearing are a series of fantasies intended to deliver no surprises, except in those cases where the reality can't be hidden. Mostly what people do is things that make themselves feel better while not making things better. The time for a major paradigm shift to address these problems has long since passed.

Posted by David B. Black on 04/19/2022 at 11:21 AM | Permalink | Comments (0)

The Facts are Clear: Don't Take Cholesterol-lowering Drugs

I have described the background and evidence of the diet-heart fiasco -- the hypothesis-turned-fake-fact that you shouldn't eat saturated fat because it raises your "bad" LDL cholesterol, which causes heart disease. Not only is it wrong -- eating saturated fat is positively good for you!

This deadly farce has generated a medical effort to lower the cholesterol of patients in order to keep them healthy. There have been over a trillion dollars in sales for cholesterol-lowering statin drugs so far. The entire medical establishment has supported this as a way to prevent heart disease. There's just one little problem, now proved by extensive, objective real-world evidence and biochemical understanding: Cholesterol, including the "bad" LDL, is NOT a cause of heart disease. Even indirectly. Lowering LDL via diet change or statins does NOT prevent heart disease. So don't avoid saturated fats or take statins!

Here's the kicker: higher cholesterol is associated pretty strongly with living longer, particularly in women! And the side effects of the drugs are widespread and serious!

Basic facts

Let's start with a few facts:

  • Eating fat will NOT make you fat. Eating sugar will make you fat.
  • The human brain is 70% fat.
  • 25% of all cholesterol in the body is found in the brain.
  • All cells in your body are made of fat and cholesterol.
  • LDL is not cholesterol! HDL isn't either! They are lipoproteins that carry cholesterol and fat-soluble vitamins. Lowering them lowers your vitamins.

To get the big picture about the diet-heart hypothesis (the reason why you're supposed to take statins in order to lower your cholesterol in order to prevent heart disease), see this post on the Whole Milk Disaster. For more detail, see the post on why you should eat lots of saturated fat.

To get lots of detail, read this extensive review of Cholesterol Con and this extensive review of The Clot Thickens -- and by all means dive into the books. Here is an excellent summary written by an MD explaining the situation and the alternative thrombogenic hypothesis.

The Bogus Hypothesis

How did this get started? Stupidity mixed with remarkably bad science. Here is a brief summary of a PhD thesis examination of the build-up to the Cholesterol-is-bad theory:

The cholesterol hypothesis originated in the early years of the twentieth century. While performing autopsies, Russian pathologists noticed build-up in the arteries of deceased people. The build-up contained cholesterol. They hypothesised that the cholesterol had caused the build-up and blocked the artery leading to a sudden death (the term “heart attacks” was not much used before the end of World War II).

An alternative hypothesis would be that cholesterol is a substance made by the body for the repair and health of every cell and thus something else had damaged the artery wall and cholesterol had gone to repair that damage. This is the hypothesis that has the memorable analogy – fire fighters are always found at the scene of a fire. They didn’t cause the fire – they went there to fix it. Ditto with cholesterol. The alternative hypothesis did not occur to the pathologists by all accounts.

The pathologists undertook experiments in rabbits to feed them cholesterol to see if they ‘clogged up’ and sure enough they did. However, rabbits are herbivores and cholesterol is only found in animal foods and thus it’s not surprising that feeding animal foods to natural vegetarians clogged them up. When rabbits were fed purified cholesterol in their normal (plant-based) food, they didn’t clog up. That should have been a red flag to the hypothesis, but it wasn’t.

Then Ancel Keys got involved, and the bad idea became gospel.

Population studies

Before taking drugs like statins to reduce cholesterol, doesn't it make sense to see if people with lower cholesterol lead longer lives? The question has been examined. Short answer: people with higher cholesterol live longer. 

Here is data from a giant WHO database of cholesterol from over 190 countries:

[Chart: average total cholesterol vs. life expectancy by country, men]

More cholesterol = longer life for men, a strong correlation. Even more so for women, who on average have HIGHER cholesterol than men:

[Chart: average total cholesterol vs. life expectancy by country, women]

When you dive into specific countries and history, the effect is even more striking. Check out the Japanese paradox:

To illustrate the Japanese paradox, he reported that, over the past 50 years, the average cholesterol level has risen in Japan from 3.9 mmol/l to 5.2 mmol/l. Deaths from heart disease have fallen by 60% and rates of stroke have fallen seven-fold in parallel. A 25% rise in cholesterol levels has thus accompanied a six-fold drop in death from CVD (Ref 6).

And the strange things going on in Europe led by those cheese-loving French:

The French paradox is well known – the French have the lowest cardiovascular Disease (CVD) rate in Europe and higher than average cholesterol levels (and the highest saturated fat consumption in Europe, by the way). Russia has over 10 times the French death rate from heart disease, despite having substantially lower cholesterol levels than France. Switzerland has one of the lowest death rates from heart disease in Europe with one of the highest cholesterol levels.

Hard-core RCT's (Randomized Controlled Trials)

RCT's are the gold standard of medical science and much else. You divide a population into a control group for which nothing changes and a test group, which is subjected to the treatment you want to test. It's hard to do this with anything like diet! But it has been done in controlled settings a few times at good scale. The results of the RCT's that have been done did NOT support the fat-cholesterol-heart-disease theory and so were kept hidden. But in a couple cases they've been recovered, studied and published.

A group of highly qualified investigators has uncovered two such studies and published the results in the British Medical Journal in 2016: "Re-evaluation of the traditional diet-heart hypothesis: analysis of recovered data from Minnesota Coronary Experiment (1968-73)." They summarize the results of their earlier study:

Our recovery and 2013 publication of previously unpublished data from the Sydney Diet Heart Study (SDHS, 1966-73) belatedly showed that replacement of saturated fat with vegetable oil rich in linoleic acid significantly increased the risks of death from coronary heart disease and all causes, despite lowering serum cholesterol.

Lower cholesterol meant greater risk of death. Clear.

The Minnesota study was pretty unique:

The Minnesota Coronary Experiment (MCE), a randomized controlled trial conducted in 1968-73, was the largest (n=9570) and perhaps the most rigorously executed dietary trial of cholesterol lowering by replacement of saturated fat with vegetable oil rich in linoleic acid. The MCE is the only such randomized controlled trial to complete postmortem assessment of coronary, aortic, and cerebrovascular atherosclerosis grade and infarct status and the only one to test the clinical effects of increasing linoleic acid in large prespecified subgroups of women and older adults.

Moreover, it was sponsored by the most famous proponent of the diet-heart hypothesis: Ancel Keys. So what happened? Here's a brief summary from an article in the Chicago Tribune after the 2016 BMJ study was published:

Second, and perhaps more important, these iconoclastic findings went unpublished until 1989 and then saw the light of day only in an obscure medical journal with few readers. One of the principal investigators told a science journalist that he sat on the results for 16 years and didn't publish because "we were just so disappointed in the way they turned out."

From the BMJ 2016 paper:

The traditional diet heart hypothesis predicts that participants with greater reduction in serum cholesterol would have a lower risk of death (fig 1⇑, line B). MCE participants with greater reduction in serum cholesterol, however, had a higher rather than a lower risk of death.

...

The number, proportion, and probability of death increased as serum cholesterol decreased

Wowza. The "better" (lower) your blood cholesterol levels, the more likely you were to die. In fact, "For each 1% fall in cholesterol there was a 1% increase in the risk of death."

Problems with Statins

Not only do statins not work to lengthen lives, taking them is a bad idea because of their side effects. This is a starting place. For example, check the side effects of a leading statin:

[Graphic: listed side effects of a leading statin]

Good effects vs. side effects

We know for a fact that lowering your blood cholesterol is a bad idea. We know the drugs that do it have side effects. It's natural to think that the drugs normally do their thing and in rare cases there are side effects. Often, this is far from the truth. Here are excerpts from an article that explains the basic medical math concept of NNT:

Most people have never heard the term NNT, which stands for Number Needed to Treat, or to put it another way, the number of people who need to take a drug for one person to see a noticeable benefit. It's a bit of a counterintuitive concept for people outside medicine, since most people probably assume the NNT for all drugs is 1, right? If I'm getting this drug, it must be because it is going to help me. Well, wrong.

What about the side effects of statins?

Many people who take the drug develop chronic aches and pains. The drug also causes noticeable cognitive impairment in a proportion of those taking it, and some even end up being diagnosed with dementia - how big the risk is unfortunately isn't known, because proper studies haven't been carried out that could answer that question. Additionally, the drug causes blood sugar levels to rise, resulting in type 2 diabetes in around 2% of those taking the drug - it is in fact one of the most common causes of type 2 diabetes.

NNT applied to statins:

Well, if you've already had a heart attack, i.e. you've already been established to be at high risk for heart attacks, then the NNT over five years of treatment is 40. In other words, 39 of 40 people taking a high dose statin for five years after a heart attack won't experience any noticeable benefit. But even if they're not the lucky one in 40 who gets to avoid a heart attack, they'll still have to contend with the side effects.
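To make the arithmetic concrete, here's a small worked example in code. The event rates are assumed purely for illustration, chosen so that the result matches the NNT of 40 quoted above.

```python
# Hypothetical numbers: NNT is the reciprocal of the absolute risk reduction.
control_event_rate = 0.100   # assumed 5-year heart-attack rate without the drug
treated_event_rate = 0.075   # assumed rate with the drug
absolute_risk_reduction = control_event_rate - treated_event_rate   # 0.025
nnt = 1 / absolute_risk_reduction
print(round(nnt))   # -> 40: treat 40 people for five years for 1 to avoid a heart attack
```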

How many patients are told about NNT? If you haven't had a heart attack, the NNT is vastly greater than 40, and yet statins are prescribed when cholesterol is "too high" no matter what. Many of the side effects happen in 10% of the cases, which is four times greater than the number of people who are "helped." Doctors who do this are indeed members of the "helping profession;" the question is, who exactly are they helping?

Conclusion

If you value science, you should not worry about lowering your cholesterol. If you value your life and health, you should be happy to have high cholesterol. Likewise, you should avoid taking cholesterol-lowering drugs because in the end they hurt you more than they help you. If you're worried about pharma companies losing profits, it's a much better idea to just send them a monthly check -- forget about their drugs!

 

Posted by David B. Black on 04/08/2022 at 06:17 PM | Permalink | Comments (0)

Software Programming Language Cancer Must be Stopped!

Human bodies can get the horrible disease of cancer. Software programming languages are frequently impacted by software cancer, which also has horrible results.

There are many kinds of cancer, impacting different parts of the body and acting in different ways. They all grow without limit and eventually kill the host. Worse, most cancers can metastasize, i.e., navigate to a different part of the body and start growing there, spreading the destruction and speeding the drive towards death.

Software cancer impacts software languages in similar ways. Once a software programming language has been created and used, enthusiasts decide that the language should have additional features, causing the language to grow and increase in complexity. The language grows and grows, like a cancer. Then some fan of the language, inspired by it in some strange way, thinks a brand-new language must be created, derived from the original but different. Thus the original language evolves into a new language, which then itself tends to have cancerous growth.

Like cancer in humans, programming language cancer leaves a trail of death and destruction in the software landscape. We must find a way to stop this cancer and deny its self-promoting lies that it’s “improving” the language it is destroying.

Programming language origins and growth

All computers have a native machine language that controls how they work. The language is in all cases extremely tedious for humans to use. Solutions for the tedium were invented in the early days of computing, which enabled programmers using the new languages to think more rapidly and naturally about the data they read, manipulated and put somewhere.

Each of the new languages was small and primitive when it was “born.” As the youthful language tried getting somewhere, it struggled to first crawl, then stand with help and finally to walk and run. Growth in the early years was natural and led to good results. Once each new language reached maturity, however, cancer in its various forms began to set in, causing the language to grow in weight and correspondingly to lose strength, agility and overall health.

I have described the giant early advances in languages and how they reached maturity with the invention of high-level languages. After early maturity, a couple of small but valuable additions were made to languages to enhance clarity of intention.

The ability to create what amounts to “habits” (frequently used routines) was an important part of the language maturation process. The more valuable of these routines were added to libraries so that any new program that needed them could use them with very little effort. There were a couple of valuable languages created that went beyond 3-GL’s, languages that were both popular and highly productive. It’s a peculiarity of programming language evolution that these languages didn’t become the next-generation mainstream.

That should have been pretty much it! You don’t need a new language to solve a new problem! Or an old problem.

Languages exhibit cancerous growth

In the early days of languages, it made sense that they didn’t emerge as full-grown, fully-capable “adults.” But after a few growth spurts, languages reached maturity and were fully capable of taking on any task – as shown, for example, by the huge amounts of COBOL performing mission-critical jobs in finance, government and elsewhere, and by the fact that the vast, vast majority of web servers run on Linux, written in plain old C. The official language definitions in each case have undergone cancerous growth, ignored by nearly everyone sensible. For example, newer versions of COBOL incorporate destructive object-oriented features. Of course it’s the fanatics that get themselves onto language standardization committees and collaborate with each other to get useless but distracting jots and tittles added that endlessly complicate the language, making it harder to read, write and maintain.

Languages metastasize

There is plain old ordinary cancer, in which language cultists get passionate about important “improvements” that need to be made to a language. Then there are the megalomaniac language would-be-gurus who decide that some existing language is too flawed to improve and needs full-scale re-creation. Those are the august new-language creators, who make up some excuse to create a “new” language, which invariably takes off from some existing language. This has led to hundreds of “major” languages and literally thousands of others that have been invented and shepherded into existence by their ever-so-proud creators. Most such language "inventors" like to ignore the origins of their language, emphasizing its creativity and newness.

Someone might say they’ve “invented” a language, but the reality is that the invention is always some variation on something that exists. In some cases the variation is made explicit, as it was with the verbose and stultifying variation of C called C++, which hog-tied the clean C language with a variety of productivity-killing object-oriented features. And then went on to grow obese with endless additions.

Purpose-driven programming language cancers

There is no unifying theme among the cancers. But high on the list is the urge to somehow improve programmer productivity and reduce error by inventing a language with features that will supposedly accomplish that and similar goals. Chief among these are the object-oriented languages, which have themselves metastasized into endless competing forms. Did you know that using a good OO language like Java results in fewer bugs? Hey, I've got this bridge to sell, real cheap! Functional languages keep striving to keep up with the OO crew in creating the most confining, crippling languages possible. It's a close race!

The genealogy of programming languages

Everyone who studies programming languages sees that there are relationships between any new language and its predecessors. When you look at the tree of language evolution, it’s tempting to compare it to the tree of biological evolution, with more advanced species evolving from earlier, less advanced ones. Humanoids can indeed do much more than their biological ancestors.

That’s what the “parents” of the new languages would have you believe. Pfahh!

I have described the explosive growth of programming languages and some of the pointless variations. But somehow programmers felt motivated to invent language after language, to no good end. Just as bad, programmers decided that existing languages needed endless new things added to them, often copying things from other languages in a crazed effort to “keep up,” I guess.

Various well-intentioned efforts were made to prove the wonderfulness of the newly invented languages by using them to re-write existing systems. These efforts have largely failed, demonstrating the pointlessness of the new languages. There was a notable success, though: a major effort that re-wrote a production credit card system from assembler language into supposedly bad, old COBOL!

How to stop language cancer

Unless we want to continue the on-going cancerous growth and metastasizing of software languages, we need to ... cure the cancer! Just STOP! Easy to say, when a tiny minority of crazed programmers around the globe, without enough useful work to keep them out of trouble, keep driving the cancer. There is a solution, though.

The first and most important part of the solution is Science. You know, that thing whose many results, along with effective engineering, created the devices on which we use software languages. Software is very much a pre-scientific discipline. There isn't even a way to examine evidence to decide whether one language is better than another. What is called "Computer Science" isn't scientific or even practical, as a comparison to medical science makes clear.

The second path to a solution is to focus on status in software. Today, software people gain status in peculiar ways; usually the person with the greatest distance between their work and the real people who use software has the highest status. A language "inventor" is about as far as you can get from real people using the results of software efforts. The sooner people contributing to software cancer are seen as the frivolous time-wasters they are, the better off everyone will be.

What's the alternative to language cancer?

The most important alternative is to cure it, as expressed above. The most productivity-enhancing effort is to focus instead on libraries and frameworks, which are the proven-in-practice way to huge programmer productivity gains. The "hard" stuff you would otherwise have to program is often available, ready to go, in libraries and frameworks. They are amazing.
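As a trivial illustration of the leverage (a sketch using only Python's standard library, not tied to any particular project): cryptographic hashing, a real SQL database and structured data parsing, each genuinely hard to build from scratch, are a few lines apiece when a library does the heavy lifting.

```python
import hashlib, json, sqlite3

# Cryptographic hashing: one call instead of implementing SHA-256 yourself.
digest = hashlib.sha256(b"some document").hexdigest()

# A real SQL database, in memory, with no server to install or administer.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO customers (name) VALUES (?)", ("Ada",))
names = [row[0] for row in db.execute("SELECT name FROM customers")]

# Structured data parsing, ready to go.
payload = json.loads('{"plan": "basic", "active": true}')

print(digest[:12], names, payload["plan"])
```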

Finally, focusing on the details of language is staying fixed at the lowest level of program abstraction, like continuing to try to make arithmetic better when you would be worlds better off moving up to algebra.

Conclusion

Software language cancer is real. It's ongoing. The drivers of software language cancer continue to fuel more cancer by honoring those who contribute to it instead of giving them the scorn they so richly deserve. Software would be vastly better off without this horrid disease.

Posted by David B. Black on 04/05/2022 at 10:02 AM | Permalink | Comments (0)

My Health Insurance Company Tries to Keep me Healthy

I am grateful to have the health insurance I have, and grateful for the payments they've made to resolve problems I've had. Nonetheless, I can't help but be astounded at the never-ending flow of expensive, incompetent, annoying and utterly useless interaction I have had with the company's computer systems. It's small potatoes in the overall scheme of things. It's also simple stuff. Why can't they (and others like them) get it right?

The answer is simple: the company's leaders, like those at most enterprise companies, want to be leaders in technology. Today, that means funding big, publicized initiatives in AI and ML. Initiatives that will, of course, transform healthcare. Soon. Getting email right? Getting paper mail right? Trivial stuff. Wins no awards, gets no attention. It's unworthy of attention, like the way the rich owners of a grand house with a dozen servants wouldn't stoop to paying attention to the brand of cleaning products they used.

The email

An email from Anthem showed up in my inbox with the subject "Schedule your checkup now -- at no extra cost." Naturally I open it. Right away there's a graphic, demonstrating that it wasn't just the software team on this job:

[Image: graphic from the Anthem email]

The message with the graphic repeats the message in the subject line, strengthening it -- don't just schedule a checkup, schedule it early. Why should I do this? "It's a good way to stay on top of current health issues and take care of any new ones early, before they become more serious."

Sounds good! Except that the very next thing in this email urging me to "schedule [my] checkup early this year. There's no extra cost." is this:

[Image: Anthem email fine print saying the plan "usually" covers the checkup]

My plan "usually" covers it?? WTF?? Right after telling me "There's no extra cost," as in There IS no extra cost??

Then comes "You may pay a copay, percentage of the cost, or deductible if you've already had your physical for the year or if the visit is to diagnose an issue and set a plan for treatment or more tests."

I'm supposed to schedule it "early." I last had an annual physical six months ago.  Is a physical I schedule now, in March, free or not? At the bottom of the email there is a nice big box that says in big type "Schedule your checkup today." It then says "To find a doctor or check what your plan covers, please use the Sydney Health mobile app or visit Anthem.com."

I've already done the Sydney trip, describing it here. Not going there again. I'll go to the main site. I'll spare you the details. They don't know who my primary care doctor is and don't let me tell them. They give me a big list of doctors I could visit, most of whom are pediatric -- oh, yeah, good suggestions, Anthem! They must think I'm young for my age ... or something.

Then I try to find out what my plan covers, as they suggest. Nothing about annual checkups being free of charge; it's all about co-pays. Maybe it's there somewhere, but I can't find it. As usual, the link Anthem provides is to the front door of the site, rather than using that cool "new" twenty-year-old technology called "deep linking," which would bring me right to the relevant page. Maybe next year. Or decade. Or century.

What could have happened

There's a concept that's been around in the industry for a couple decades called "personalization." It includes things like the following (a rough sketch in code follows the list):

  • when you send an email, address it to the person, instead of making the email read like a brochure.
  • reflect basic knowledge of the person, like whether they had an annual checkup last year -- if they did, maybe they already think it's a good thing, and the message should be to be sure to do it again
    • They've got my history -- they could praise me for getting checkups for the last X years and remind me to keep up the good work.
  • Is the checkup "no cost" or not? Anthem has my account information, name, address and the rest. They have my plan. They know whether it's free or not. They just don't bother to check.
    • Taking my history into account, they could say that, just as last year's checkup was 100% free, this one will be too.
  • As it happens, a week before getting the email I saw my primary care physician and then a specialist who submitted pre-auths for tests. Anthem has the visit claims and pre-auths. I'm doing exactly what they want me to do, as they said in the email, "take care of any new ones early, before they become more serious." Instead, what I hear from Anthem is 100% clueless -- exhorting me to do something that the slightest bit of effort on their part would tell them I'm already doing! Blanketty-blank it!
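Here's a minimal sketch of what that kind of personalization amounts to in code. Everything in it -- the member fields, the helper names, the wording -- is hypothetical, invented for illustration; it is not Anthem's system or any real API.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# All fields and names below are hypothetical, for illustration only.
@dataclass
class Member:
    first_name: str
    plan_covers_free_checkup: bool
    last_checkup: Optional[date]
    recent_preventive_claims: int  # e.g. visits or pre-auths in the last 30 days

def checkup_message(m: Member, today: date) -> str:
    # Don't nag someone who is already doing what you're asking.
    if m.recent_preventive_claims > 0:
        return f"Thanks, {m.first_name} -- we see you're already on top of your care."
    # Say what the plan actually covers instead of hedging with "usually."
    cost_line = ("Your annual checkup is covered at no cost to you."
                 if m.plan_covers_free_checkup
                 else "See your plan documents for what your checkup will cost.")
    # Use the member's own history rather than a one-size-fits-all brochure.
    if m.last_checkup and (today - m.last_checkup).days < 365:
        return (f"{m.first_name}, nice work getting your checkup last year -- "
                f"keep the streak going. {cost_line}")
    return f"{m.first_name}, it's time to schedule your annual checkup. {cost_line}"

print(checkup_message(
    Member("David", plan_covers_free_checkup=True,
           last_checkup=date(2021, 9, 15), recent_preventive_claims=2),
    today=date(2022, 3, 20)))
```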

This is Customer Interaction 101. It's also common sense. It's standard practice for companies whose tech and marketing teams have progressed past the year 2000 into the current century.

The Postcard in the mail

You might think it couldn't get worse. You'd be wrong.

After I got the email, a postcard showed up in the regular US mail. A full-color postcard from my friends at Anthem! Here's the front of it, showing a person who looks just like me having a virtual doctor visit.

[Image: front of the Anthem postcard, showing a person having a virtual doctor visit]

Anthem cares about my health and really wants me to get that checkup -- today! They care about it so much that they appear to have two whole departments, one for email and one for postal paper mail, each charged with getting me to get that checkup.

So what do they tell me on the back? Take a look:

[Image: back of the Anthem postcard]

Here's what the email said:

It's a good way to stay on top of current health issues and take care of any new ones early, before they become more serious.

Here's what the postcard said:

Having a checkup is one of the best ways to stay on top of current health issues and take care of any new ones early, before they become more serious.

Notice the similarities and the subtle differences -- it's clear that each department wanted to assert its independence and word the exhortation in the way it thought best. The email modestly said "it's a good way," while the postcard went all the way, saying it's "one of the best ways." How much time in meetings was spent getting the wording exactly right, do you think?

Last but not least is the issue of cost. As with the email, the postcard strongly asserted that the cost is completely covered. But then there's that little asterisk, hinting that you might want to look at the tiny little print at the bottom of the page, where you find maybe it's not free after all. At least there was no mention of Sydney. I guess the paper mail department is jealous, and wants to avoid promoting the thing those snotty folks in IT keep yammering on about.

Anthem Leads the way

You might think from this that Anthem is incapable of going beyond the 1-2 punch of emails AND mass paper mailing. Incapable of doing basic software of the kind I was writing in high school, software that is little but common sense. I will let the evidence speak for itself.

Whatever Anthem may or may not be doing in terms of keeping up with paper mail and adding an electronic version, a little searching reveals that Anthem is spending huge amounts of time and money on "advanced digital" whatever, fashionable things like AI, ML and the rest of the lah-de-lah.

To discover Anthem's strategy, you have to find and sift through an array of websites that aren't the Anthem.com one you would think.

Here is part of what the Anthem CEO says in the most recent annual report: "The traditional insurance company we were has given way to the digitally-enabled platform for health we are becoming. This platform strategy is grounded in data and deploys predictive analytics, artificial intelligence, machine-learning and collaboration across the value chain to produce proactive, personalized solutions for our consumers, care providers, employers, and communities." I guess that means they're working on getting AI and ML to send me an email that's "personalized" sometime soon. Maybe.

Anthem has a Chief Digital Officer. Here's what he said in that same annual report: "At Anthem, we have built the industry’s largest platform, integrating our immense data assets, proprietary AI, and machine-learning algorithms." Is this just a lab project? No! "It’s through this platform that we are able to digitize knowledge and create a more agile and seamless experience for our consumers, customers ..." I guess digitizing my name and slipping it into an "agile and seamless" email to me is right around the corner!

In May 2020 Anthem signed a major "digital transformation" deal with IBM. According to Anthem's CIO Tim Skeen, "We are seeing a dynamic change in the healthcare industry, requiring us to be more agile and responsive, utilizing advanced technology like Artificial Intelligence (AI) to drive better quality and outcomes for consumers." Sounds good! If IBM's Watson AI can beat the world champion Ken Jennings at Jeopardy, I guess it's just a matter of time until it figures out how to personalize emails.

A glowing article last year quoting the Anthem Chief Technology Officer described how Deloitte and AWS are helping Anthem deliver "measurable benefits" such as "capabilities that use AI/ML, cognitive, analytics, and data science" to implement their strategic vision, one of whose key tenets is " 'n=1' personalization through consumer-driven whole-health products and services." Is it possible that the strategic vision of "n=1" personalization will enable them to send me an email that's to me, instead of a brochure? We'll see.

At yet another website of Anthem's I discovered that they have a Product Management and Strategy Lead who talks about how Anthem is "using predictive models and machine learning to provide consumers with the unique information, programs, and services they need ..." There's a VP of AI Technology who is "harnessing machine learning and AI ..." There's a VP of Innovation who is "... implementing innovative solutions ..."

What a wealth of important people and efforts, all bringing digital transformation to Anthem! With all this industry-leading technology, it's only a matter of time before I receive something from Anthem that isn't a postcard with the added bonus of a digital brochure, do you think?

Conclusion

See this post and the summary at the end for links to other amazing achievements of the Anthem software team -- which extends from bad communications to it's-really-bad cyber-security involving massive losses of customer personal information.

It's clear that Anthem, like most companies of its kind, pays huge amounts of attention to the current "thing," whatever that is, making sure everyone knows they're leading the way. Meanwhile, they largely ignore trivial things that are "beneath" them, things like treating customers moderately well. It starts with avoiding paying attention to the foundation of everything, which is data. Then it's compounded by the perverse status hierarchy in software in general and data science in particular; the hierarchy is simple: the farther you are away from real human customers, the higher your status. I hope this will change, but I'm not betting on it. Meanwhile, I remain grateful for the payments they make for the health care services I receive.

Posted by David B. Black on 04/03/2022 at 05:51 PM | Permalink | Comments (0)

The Facts are Clear: Eat Lots of Saturated Fat

The experts and authoritative institutions are clear: you should eat a low-fat diet and take drugs to reduce your blood LDL cholesterol to safe levels in order to make your heart healthy. Here is their advice about saturated fat and about blood cholesterol. The capital-E Experts are wrong. They were wrong from the beginning. There was never any valid evidence in favor of their views, in spite of what you might read. The quantitative and biochemical evidence is now overwhelming. Here is my summary of the situation. In this post I’ll cover more of the evidence.

Origins and growth of the saturated fat – cholesterol – heart hypothesis

How did such a bogus theory get started? An experiment with intriguing results was one start. Here's a summary:

The hypothesis harks back to the early part of the twentieth century, when a Russian researcher named Nikolai Anitschkow fed a cholesterol [animal fat] rich diet to rabbits and found that they developed atherosclerosis (hardening of the arteries, the process which in the long run leads to cardiovascular disease). … Rabbits, being herbivores, normally have very little cholesterol in their diets, while humans, being omnivores, generally consume quite a bit of cholesterol. Regardless, the data was suggestive, and led to the hypothesis being formulated.

A paper titled “How the Ideology of Low Fat Conquered America” was published in the Journal of the History of Medicine and Allied Sciences in 2008. Here is the abstract:

This article examines how faith in science led physicians and patients to embrace the low-fat diet for heart disease prevention and weight loss. Scientific studies dating from the late 1940s showed a correlation between high-fat diets and high-cholesterol levels, suggesting that a low-fat diet might prevent heart disease in high-risk patients. By the 1960s, the low-fat diet began to be touted not just for high-risk heart patients, but as good for the whole nation. After 1980, the low-fat approach became an overarching ideology, promoted by physicians, the federal government, the food industry, and the popular health media. Many Americans subscribed to the ideology of low fat, even though there was no clear evidence that it prevented heart disease or promoted weight loss. Ironically, in the same decades that the low-fat approach assumed ideological status, Americans in the aggregate were getting fatter, leading to what many called an obesity epidemic. Nevertheless, the low-fat ideology had such a hold on Americans that skeptics were dismissed. Only recently has evidence of a paradigm shift begun to surface, first with the challenge of the low-carbohydrate diet and then, with a more moderate approach, reflecting recent scientific knowledge about fats.

The early chapters of The Big Fat Surprise book provide a good summary with details of the rise to dominance of the low-fat & cholesterol-is-bad theory.

Strong Data Showing that Saturated Fat is Good

There were problems with the diet-heart hypothesis from the beginning.

The first chapters of The Big Fat Surprise have summaries of studies that were made on peoples around the world who subsisted almost exclusively by eating animals and/or dairy, all of them strongly preferring fatty organs over lean muscle.

A Harvard-trained anthropologist lived with the Inuit in the Canadian Arctic in 1906, living exactly like his hosts, eating almost exclusively meat and fish. “In 1928, he and a colleague, under the supervision of a highly qualified team of scientists, checked into Bellevue Hospital … to eat nothing but meat and water for an entire year.” Half a dozen papers published by the scientific oversight committee reported that the scientists “could find nothing wrong with them.”

George Mann, a doctor and professor of biochemistry, took a mobile lab to Kenya with a team from Vanderbilt University in the 1960’s to study the Masai. They ate nothing but animal parts and milk. Their blood pressure and body weight were 50% lower than Americans. Electrocardiograms of 400 men showed no evidence of heart disease, and autopsies of 50 showed only one case of heart disease.

Similar studies and results came from people in northern India living mostly on dairy products, and native Americans in the southwest. There were many such studies, all of them showing that the native peoples, eating mostly saturated fat, were not only heart-healthy, but free of most other modern afflictions such as cancer, diabetes, obesity and the rest.

Of course the question was raised whether other factors might explain these results. The questions have been answered by further intensive studies. For example, formerly meat-eating Masai who moved to the city lost their health, and Inuit who changed their diet to include lots of government-supplied carbohydrates were found by the doctors who studied them to have lost their health as well.

From the book:

In 1964, F. W. Lowenstein, a medical officer for the World Health Organization in Geneva, collected every study he could find on men who were virtually free of heart disease, and concluded that their fat consumption varied wildly, from about 7 percent of total calories among Benedictine monks and the Japanese to 65 percent among Somalis. And there was every number in between: Mayans checked in with 26 percent, Filipinos with 14 percent, the Gabonese with 18 percent, and black slaves on the island of St. Kitts with 17 percent. The type of fat also varied dramatically, from cottonseed and sesame oil (vegetable fats) eaten by Buddhist monks to the gallons of milk (all animal fat) drunk by the Masai. Most other groups ate some kind of mixture of vegetable and animal fats. One could only conclude from these findings that any link between dietary fat and heart disease was, at best, weak and unreliable.

One of the foundational studies in the field is the Framingham Heart Study, started in 1948 and still going on.

In 1961, after six years of study, the Framingham investigators announced their first big discovery: that high total cholesterol was a reliable predictor for heart disease.

This cemented things. Anything that raised cholesterol would lead to heart disease. The trouble came thirty years later, after many of the participants in the study had died, which made it possible to see the real relationship between cholesterol and mortality due to heart disease. Cholesterol did NOT predict heart disease!

The Framingham data also failed to show that lowering one's cholesterol over time was even remotely helpful. In the thirty-year follow-up report, the authors state, "For each 1 mg/dL drop of cholesterol there was an 11% increase in coronary and total mortality."

Only in 1992 did William P. Castelli, a Framingham study leader, announce, in an editorial in the Archives of Internal Medicine:

In Framingham, Mass, the more saturated fat one ate ... the lower the person's serum cholesterol ... and [they] weighed the least.

Game over! No wonder they've kept it quiet. And not just about heart health -- about weight loss too!

Here is an excellent article with references to and quotes from many journals. Here is the introduction:

Many large, government-funded RCTs (randomized, controlled clinical trials, which are considered the ‘gold-standard’ of science) were conducted all over the world in the 1960s and 70s in order to test the diet-heart hypothesis. Some 75,000 people were tested, in trials that on the whole followed subjects long enough to obtain “hard endpoints,” which are considered more definitive than LDL-C, HDL-C, etc. However, the results of these trials did not support the hypothesis, and consequently, they were largely ignored or dismissed for decades—until scientists began rediscovering them in the late 2000s. The first comprehensive review of these trials was published in 2010 and since then, there have been nearly 20 such review papers, by separate teams of scientists all over the world.

Far from believing that saturated fat causes heart disease, we can be quite certain that it's positively healthy on multiple dimensions to eat it -- it's people who don't eat enough saturated fat who end up overweight and sickly!

Sadly, there are still Pompous Authorities who assure us with fancy-sounding studies that we really should avoid eating fat. This paper from 2021 dives into just such a fake study -- an RCT (randomized controlled trial) -- that purported to show that eating fat remains a bad idea. Wrong. Here's the summary:

Hiding unhealthy heart outcomes in a low-fat diet trial: the Women’s Health Initiative Randomized Controlled Dietary Modification Trial finds that postmenopausal women with established coronary heart disease were at increased risk of an adverse outcome if they consumed a low-fat ‘heart-healthy’ diet.

These books by Dr. Malcolm Kendrick dive in more deeply and are moreover a pleasure to read. Among other things, The Clot Thickens explains the underlying mechanisms of arteriosclerosis (blood clots, heart disease) and what actually causes them.

Here are several articles with evidence from many scientists on the subject of saturated fat.

Conclusion

This is an incredibly important issue regarding the health of people. It's also an in-progress example of the difficulty of shifting a paradigm, even when the evidence against the dominant paradigm (avoid eating saturated fat, use drugs to keep your cholesterol low) is overwhelming. Could it be possible that billions of dollars a year of statins and related cholesterol-lowering drug sales has something to do with it? Then again, when was the last time you heard a prestigious Expert or institution say "Sorry, we were wrong, we'll try hard not to blow it again; we won't blame you if you never trust us again."

Posted by David B. Black on 03/15/2022 at 10:51 AM | Permalink | Comments (0)

What is Behind the DCash Central Bank Digital Currency Disaster?

DCash, the digital currency issued by the ECCB (Eastern Caribbean Central Bank), is a pioneering effort with good intentions. Here is the background, covering how it was studied carefully, piloted in March 2019, had its first live transaction in February 2021, rolled out in March 2021, expanded in July 2021 and then, on January 14, 2022, went dead. Not just down for a few hours ... or days ... or weeks ... but long enough for any sensible person to completely give up on it. Then the ECCB announced that DCash would be back soon, and then announced that it was alive and well. The ECCB is lah-dee-dah, yes we had an "interruption" in service, but we're back better than ever!

What if someone stole your wallet and kept it from you for nearly two months? Why would any sane person convert real money to DCash if it can suddenly be stolen and held hostage for months? And not by criminals, but by the bank!

The ECCB is keeping the facts of this disaster largely hidden. I've quoted and analyzed what little they said at the time of the crash here.

Pre-announcing the Resumption

A couple days before they resumed service, ECCB announced that DCash was coming back. To regain trust and for the sake of transparency, you would think they would tell us what actually happened. Nope.

Here's their explanation:

In January 2022, the DCash system experienced its first interruption since its launch in March 2021. As a result, the processing of new transactions on the DCash network was halted. This interruption was not caused by any external intervention. The security and integrity of all DCash data, applications and architecture, including all central bank, financial institutions,  merchant and wallet apps remain secure and intact.   

Following the interruption, the ECCB took the opportunity to undertake several upgrades to the DCash platform including enhancing the system’s certificate management processes – the initial cause of the interruption, and updating the version of Hyperledger Fabric, the foundation of the DCash platform.  These upgrades have further strengthened the robust security mechanisms, which ultimately underpin the DCash technology, resulting in a more resilient product.

It "experienced its first interruption." Passive voice. Where did the "interruption" come from? Who did it? Why?

"As a result, the processing of new transactions on the DCash network was halted." As a result of what?? The processing "was halted" by whom?? The ECCB?

"This interruption was not caused by any external intervention." This implies no hacking. It was internal. Either a bad insider or something awful with the software that had (presumably) been running for months.

So they went about several "upgrades" -- not bug fixes or corrections. Then we get to "enhancing the system's certificate management process." Certificates are NOT about digital currency, they are standard web things, as I explained. And they "updated the version of Hyperledger Fabric," a standard library for blockchain. Updating to latest versions should be part of normal systems maintenance. It's not something that takes weeks! You do the upgrade, test it, run it in parallel with your current production system to assure it works, and then you seamlessly switch over. Groups large and small do this all the time. It's standard practice. Only creaky old organizations firmly anchored in the past would take a system down for hours to perform maintenance. Even they wouldn't dare take a system down for even a week!
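The parallel-run idea isn't exotic. Here's a minimal sketch of the shape of it in Python; the endpoints and the use of the requests library are assumptions for illustration, not anything from the ECCB's actual setup.

```python
import requests  # assumed available; any HTTP client would do

# Hypothetical endpoints: the live system and the upgraded candidate.
OLD = "https://old.example.com/api/balance"
NEW = "https://new.example.com/api/balance"

def shadow_check(account_ids):
    """Send the same read-only request to both systems and report mismatches."""
    mismatches = []
    for acct in account_ids:
        old_resp = requests.get(OLD, params={"account": acct}, timeout=5).json()
        new_resp = requests.get(NEW, params={"account": acct}, timeout=5).json()
        if old_resp != new_resp:
            mismatches.append((acct, old_resp, new_resp))
    return mismatches

# Run this against real (read-only) traffic until mismatches stay empty,
# then cut over -- no weeks-long outage required.
```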

What's the result? ECCB has now "further strengthened the robust security mechanisms ... resulting in a more resilient product." Wow. The security mechanisms either had a fault or they didn't. The claim is that it took nearly two months to create a "more resilient product." A product that had been running live for nearly a year.

Announcing the Resumption

Next ECCB declared as promised that DCash was back. They provided no further explanation:

As part of the restoration, the platform now benefits from several upgrades including an enhanced certificate management process and an updated version of the software which provides the foundation for the DCash system. Extensive testing and assurance exercises were conducted prior to restoration of the platform to ensure full functionality of the service in accordance with quality assurance specifications.

Certificate management is standard internet stuff. It has nothing to do with crypto. Why wouldn’t they already have had the latest version working as part of their system? No excuse! If they just needed to upgrade, why not do it the way everyone does? They claim to “enhance” the certificate management process. Something unique for ECCB? Bad idea.

Hyperledger Fabric: similar claims, same response.

They claim DCash is now “more resilient.” But there were no crashes during many months of operation. Therefore (according to them) DCash was already perfectly resilient.

They're hiding something. What is it??

Apps for Digital Transfer

You don't need a CBDC like DCash to quickly, easily, safely, cheaply and electronically move money around. In fact, we're all better off if central banks just ignored the whole issue. Here's my analysis of the situation, talking about a potential CBDC for the US that no one needs and describing how Venmo and CashApp work and are broadly accepted.

The ECCB made strong claims about the benefits DCash was going to bring. All benefits that are already in production and use by over 100 million people, operated by private companies without a CBDC. Nonetheless they went ahead. And crashed. And clearly lied about it. What's going on??

The DCash App

As a brand-new currency, DCash needs an app. It's something the ECCB largely ignores on their self-promotional website. I wonder if there's anything to learn by digging into the DCash app? It turns out there is! Following is what I discovered.

I figured they must have a wallet app for Android. I went to the Google Play store and found the app:

[Image: DCash wallet listing in the Google Play store, showing 40 reviews and a 2-star rating]

Sure enough, that's the wallet. But look over there on the upper right. 40 reviews, 2 stars out of 5. That's awful!

Let's look at some of them. Sadly, Google won't give them in time order.

The first review wasn't until March 27, 5 stars.

On Aug 15 we get 1 star with the comment "Bad." No response from ECCB. On Aug 31 there is a 3-star review with "*yu" as the comment. No response from ECCB. Mostly it's 1-star reviews, one after the other, many with thumbs-up votes from other users endorsing the bad review.

Months later, Dec 12, we get 2 stars and "Efgy." And a response from ECCB!

[Image: the Dec 12 review and the ECCB's response in the Play store]

Look more closely. The review was posted Dec 12 and the response was posted nearly a month later!! Really staying on top of things, aren't they?

I see they've got a special domain for feedback. This is the first I've seen of it. You would think it would be on the main site, wouldn't you? Let's check it out. I put the support site URL in my browser and this is the result:

[Image: browser error page -- the DCash support domain cannot be found]

No, I didn't type it wrong. Even though DCash is supposedly up and running just fine, the support site isn't just broken -- it's not there! The domain doesn't exist!!

Things are clearly just awful for the Android app. I wonder how it is for iPhone -- maybe it's wonderful? Here's the preview of the DCash app on the Apple App store:

[Image: DCash app listing in the Apple App Store, showing 5 ratings]

Only 5 ratings vs. the 40 ratings for Android. What's clear is that Apple users are MUCH more generous than Android users. The review by Waps7777 in Dec 2021 gave it 3 stars even though "DCrash not DCash. The app crashes every time is send a payment."

Conclusion

We still have no idea what happened with DCash. But it's pretty clear from the App store comments that the currency should be called DCrash. The announcements of ECCB say nothing about the apps. The people in charge are, as usual with people in charge, going to great lengths to hide problems and declare wonderfulness. But with the evidence on the table to date, DCrash is a disaster and should be shut down. If the authorities cared about real human beings other than themselves, they would apologize, shut down DCash, and make a deal with Zelle, Venmo, CashApp or someone who has a track record of real success to improve the lives of the people in the EC nations.

Posted by David B. Black on 03/11/2022 at 11:33 AM | Permalink | Comments (0)

DCash Government Cryptocurrency Shows Why Fedcoin Would Be a Disaster

The United States is seriously planning to issue FedCoin, a CBDC (Central Bank Digital Currency), following the lead of the Chinese government and others around the world. I have previously spelled out why we don’t need Fedcoin, basically because the currency of the United States is already largely digital. In this article I argue that not only don’t we need FedCoin, but that issuing such a CBDC has a strong potential for disaster. For a perspective that is broad and deep on this subject, see Oonagh McDonald’s recent book Cryptocurrencies: Money, Trust and Regulation.

The Eastern Caribbean Central Bank

Did you know that in 1983 eight countries in the eastern Caribbean banded together to create a central bank with a common currency? The ECCB plays the role the Federal Reserve plays in the US for Anguilla, Antigua and Barbuda, Commonwealth of Dominica, Grenada, Montserrat, St Kitts and Nevis, Saint Lucia, and St Vincent and the Grenadines.

The ECCB’s experiment with a Digital Currency

After considerable planning, the ECCB kicked off a pilot for a digital currency in 2019. According to their website:

The Eastern Caribbean Central Bank (ECCB) launched its historic DXCDCaribe pilot, on 12 March 2019. ‘D’, representing digital, is prefixed to ‘XCD’ - the international currency code for the EC dollar.

The pilot involves a securely minted and issued digital version of the EC dollar - DCash. The objective of this pilot is to assess the potential efficiency and welfare gains that could be achieved: deeper financial inclusion, economic growth, resilience and competitiveness in the ECCU - from the introduction of a digital sovereign currency.

DCash will be issued by the ECCB, and distributed by licensed bank and non-bank financial institutions in the Eastern Caribbean Currency Union (ECCU). It will be used for financial transactions between consumers and merchants, people-to-people (P2P) transactions, all using smart devices.

The pilot was declared a success. The phase 2 rollout of DCash started March 31, 2021.

The ECCB provides a detailed description of the excellence of the implementation and security of the DCash system. For example:

The DCash platform is being developed through security-by-design principles. Applications are subject to rigorous quality assurance, and independent security testing, prior to live deployment.  Hyperledger Fabric is being utilized to create an enterprise-grade, private-permissioned, distributed ledger (blockchain).  Modular and configurable architecture is used to facilitate DCash transfer, payment processing, and settlement across authenticated and authorized API’s. Additionally, all DCash users must be authenticated and authorized.

The application framework was designed with built-in mitigations against common web application vulnerabilities, and goes through a quality assurance process that includes rigorous security testing. Multi-factor authentication is required for financial institutions, all APIs are authenticated and authorized, and all participants are vetted. In addition, secure hardware elements are being used on mobile devices.

More details were provided to demonstrate the security and high quality of the system. In addition to unspecified data centers, the website states:

Google Cloud is the current service provider. With the exception of the minting system, all system services are hosted in Google Cloud. Connections between different system layers is secure (SSL/HTTPS) and permissioned (IP Address restrictions, username/ passwords, and JWT tokens).

There’s a Problem

So what happened to this wonderful, highly secure digital currency? It went down!

The ECCB announced on January 14, 2022 that there was a system-wide outage.

This break in service has been caused by a technical issue and the subsequent necessity for additional upgrades. Therefore, DCash transactions are not being processed at this time.

There were lots of words about how things would be OK.

Did it go down for an hour? Bad. A day? REALLY bad. A week or more? A complete, unmitigated, no-excuses disaster.

What if you were a user of DCash and you couldn’t use it? It would be like having money in your bank account, but the bank claims it’s unable to give you any! What are you supposed to do? To whom can you appeal? No one!

It’s worse than that. As of this writing at the end of February, a full six weeks after DCash D-Crashed, it’s still down.

Why did DCash go down?

We don’t know much. In early February it was reported:

The Eastern Caribbean Central Bank has revealed that an expired certificate caused its pilot central bank digital currency (CBDC), DCash, to go offline from January 14. Karina Johnson, the ECCB project manager for the DCash pilot, told Central Banking that “the version of Hyperledger Fabric (HLF)”, the network that hosts DCash’s distributed ledger, “had a certificate expire”. To install an up-to-date certificate, the currency’s operators are undertaking “a version change of HLF and associated...

This is really strange. If the language used is correct, a “certificate expiration” has nothing to do with digital currency or blockchain. A certificate is something that is issued by a “certificate authority” and is used all over the web. For example, most web addresses start with https://. The “s” means secure, which means that the traffic between your browser and the website is encrypted. When a browser sees the https, it goes to the site, which sends a certificate issued by a CA (certificate authority) attesting that the public/private key pair used by the site is legit.

There are NO certificate authorities in Bitcoin or other cryptos! There are just public/private key pairs, with the private key being used to “sign” a transaction sending Bitcoin from the corresponding public key – which assures that it really is the owner of the public key sending the BTC.
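For contrast, here's roughly what the crypto side looks like: a private key signs a transaction-like message and the matching public key verifies it. This is a generic sketch using the widely used Python cryptography package, not Bitcoin's or DCash's actual code.

```python
# pip install cryptography -- generic illustration, not Bitcoin's actual code
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

private_key = ec.generate_private_key(ec.SECP256K1())  # the curve Bitcoin uses
public_key = private_key.public_key()

message = b"send 1.00 to <recipient public key>"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Anyone holding only the public key can verify; no certificate authority involved.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))  # raises if invalid
print("signature verified")
```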

So what's going on and how could a "certificate expiration" have caused this? No one is saying. By the way, an expiration of this kind can normally be fixed very quickly, in less than a day.
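Checking when a site's certificate expires is the kind of thing any operations team scripts in a few lines and monitors continuously. A sketch using only Python's standard library (the hostname is just an example):

```python
import socket, ssl, time

def cert_days_remaining(hostname: str, port: int = 443) -> float:
    """Fetch a server's TLS certificate and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])  # e.g. 'Jun  1 12:00:00 2022 GMT'
    return (expires - time.time()) / 86400

print(cert_days_remaining("example.com"))  # alert long before this reaches zero
```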

The next (and most recent as of this writing) thing that was publicly announced was this on Facebook on February 14:

[Image: ECCB Facebook post of February 14]

Why did DCash go down? Why is it still down after all this time? How are the consumers and merchants being helped with their funds being locked and inaccessible? No one is talking.

Conclusion

ECCB seems to have done everything right. They studied carefully. They worked with a vendor that had experience building CBDCs. They used the leading blockchain fabric. They used Google for hosting. They did a limited trial, released it in one of their regions, and then made it more widely available. And then something went wrong. Very wrong. What it could possibly be that involves "certificates expiring" is mysterious. And building something that can stay dead for over six weeks is extremely rare in software.

CBDC's are a terrible idea. We don't need them. They add nothing in terms of cost or speed over the digital fiat currency and associated software that we already have. How can any government guarantee that it won't have a DCash disaster when its own CBDC rolls out? Are governments suddenly wonderful at bringing out great software that works? I've got this bridge, by the way, and I can let you have it for a limited-time-only bargain price...

Note: this was originally posted at Forbes.

 

Posted by David B. Black on 03/09/2022 at 10:49 AM | Permalink | Comments (0)

Why Object-Orientation in Software is Bad

What?? Object-oriented programming (OOP) is practically the standard in software! It’s taught everywhere and dominates thinking on the subject. Most languages are O-O these days, and OO features have even been added to COBOL! How can such a dominant, mainstream thing be bad?

The sad truth is that the badness of OOP isn’t some fringe conspiracy theory. An amazing line-up of astute, brilliant people agree that it’s bad. A huge collection of tools and techniques have been developed and taught to help people overcome its difficulties, which nonetheless persist. Its claims of virtue are laughable – anyone with experience knows the benefits simply aren’t there.

Object-oriented languages and novels

Object-orientation is one of those abstruse concepts that makes no sense to outsiders and is a challenge for people learning to program to understand and apply. To make the OOP monstrosity clear, let’s apply OOP thinking to writing a novel.

There are lots of ways of writing novels, each of them suitable for different purposes. There are novels dominated by the omniscient voice of the author. There are others that are highly action-based. Others have loads of dialog. Of course most novels mix these methods as appropriate.

Some novels feature short chapters, each of which describes events from a particular character’s point of view. There aren’t many novels like this, but when you want to strongly convey the contrast between the characters’ experiences, it’s a reasonable technique to use, at least for a few chapters.

What if this were the ONLY way you were allowed to write a novel??!! What if a wide variety of work-arounds were developed to enable a writer to write  -- exclusively! -- with this sometimes-effective but horribly constricting set of rules?

What if ... the Word Processors (like Microsoft Word) from major vendors were modified so that they literally wouldn't allow you to write in any other way, instead of giving you the freedom to construct your chapters any way you wanted, with single-person-point-of-view as one of many options? What if each single small deviation from that discipline that you tried to include were literally not allowed by the Word Processor itself!? All this because the powerful authorities of novel creation had decided that single-person chapters were the only good way to write novels, and that novelists couldn't be trusted with tools that would allow them to "make mistakes," i.e., deviate from the standard.

There would be a revolution. Alternative publishing houses would spring up to publish the great novels that didn’t conform to the object-novel constraints. The unconstrained books would sell like crazy, the OO-only publishing houses would try to get legislation passed outlawing the unconstrained style of writing, and after some hub-bub, things would go back to normal. Authors would exercise their creative powers to express stories in the most effective ways, using one or several techniques as made sense. The language itself would not be limited or limiting in any way.

Sadly, the world of software works in a very different way. No one sees the byzantine mess under the “hood” of the software you use. No one knows that it could have been built in a tiny fraction of the time and money that was spent. Industry insiders just accept the systematized dysfunction as the way things are.

This is objects – a special programming technique with narrow sensible application that has been exalted as the only way to do good programming, and whose rules are enforced by specialized  languages only capable of working in that constrained way.

Is there nothing good about OOP?

It isn’t that OOP is useless. The concept makes sense for certain software problems – just as completely other, non-OOP concepts make sense for other software problems! A good programmer has broad knowledge and flexible concepts about data, instructions and how they can be arranged. You fit the solution to the problem and evolve as your understanding of the problem grows, rather than starting with a one-size-fits-all template and jamming it on. You would almost never have good reason to write a whole program in OO mode. Only the parts of it for which the paradigm made sense.

For example, it makes sense to store all the login and security information about a body of software in a single place and to have a dedicated set of procedures that are the only ones to access and make changes. This is pure object-orientation – only the object’s methods access the data. But writing the whole program in this way? You're doing nothing but conforming to an ideology that makes work and helps nothing.
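To make that concrete, here's a minimal sketch (hypothetical names, Python used purely for illustration) of that sensible corner of encapsulation: credentials kept in one place, touched only through a small set of methods.

```python
import hashlib, os

class CredentialStore:
    """All login/security data lives here; only these methods touch it."""

    def __init__(self):
        self._users = {}  # username -> (salt, password_hash); private by convention

    def add_user(self, username: str, password: str) -> None:
        salt = os.urandom(16)
        self._users[username] = (salt, self._hash(password, salt))

    def check_password(self, username: str, password: str) -> bool:
        if username not in self._users:
            return False
        salt, stored = self._users[username]
        return self._hash(password, salt) == stored

    @staticmethod
    def _hash(password: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

store = CredentialStore()
store.add_user("alice", "correct horse battery staple")
print(store.check_password("alice", "wrong password"))                 # False
print(store.check_password("alice", "correct horse battery staple"))   # True
```

This is a fine pattern for this corner of a program; you don't need a language that forces every other corner into the same mold to get it.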

However. When you embody OOP in a language as the exclusive way of relating data and code, you’re screwed.

In this post I describe the sensible origins of object-orientation for describing physical simulation, for example for ships in a harbor. Having a whole language to do it was overkill – I describe in the post how hard-coding the simulation in language statements made it hard to extend and modify, instead of moving the model description into easily editable metadata -- and then into a provably best optimization model.

That is the core problem with object-oriented languages – they are a hard-coded solution to part of a programming problem, rather than one which creates the most efficient and effective relationships between instructions and data and then increasingly moves up the mountain of abstraction, each step making the metadata model more powerful and easier to change. Object-oriented concepts are highly valuable in most metadata models, with things like inheritance (even multiple inheritance, children able to override an inherited value, etc.) playing a valuable role. Keeping all the knowledge you have about a thing in one place and using inheritance to eliminate all redundancy from the expression of that knowledge is incredibly valuable, and has none of the makes-things-harder side effects you suffer when the object-orientation is hard-coded in a language. In the case of simulation, for example, the ultimate solution is optimization – getting to optimization from object-oriented simulation is a loooong path, and the OOP hard-coding will most likely prevent you from even making progress, much less getting there.
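As a sketch of what "object-like concepts in the metadata, not the language" can look like (field names invented for illustration): the knowledge lives in plain data, children override only what differs, and a few lines of ordinary procedural code resolve the inheritance.

```python
# Metadata: each entry names its parent and overrides only what differs.
SCHEMA = {
    "field":       {"parent": None,          "type": "string",  "required": False, "max_len": 64},
    "money_field": {"parent": "field",       "type": "decimal", "scale": 2},
    "balance":     {"parent": "money_field", "required": True},
}

def resolve(name: str) -> dict:
    """Walk up the parent chain, letting children override inherited values."""
    entry = SCHEMA[name]
    inherited = resolve(entry["parent"]) if entry["parent"] else {}
    merged = {**inherited, **entry}
    merged.pop("parent", None)
    return merged

print(resolve("balance"))
# -> {'type': 'decimal', 'required': True, 'max_len': 64, 'scale': 2}
```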

Conclusion

Any reasonable programmer should be familiar with the concepts of encapsulation, inheritance and the other features of object-oriented languages. Any reasonable programmer can use those concepts and implement them to the extent that it makes sense, using any powerful procedural language, whether in the program itself or (usually better) the associated metadata. But to enforce that all programs be written exclusively according to those concepts by embedding the concepts in the programming language itself is insanity. It's as bad as requiring that people wear ice skates, all the time and every day, because ice skates help you move well on ice when you know how to use them. If everything were ice, maybe. But when you try to run a marathon or even climb a hill with ice skates on, maybe you can do it, but everyone knows that trading the skates for running shoes or hiking boots would be better. Except in the esoteric world of software, where Experts with blinders on declare that ice skates are the universal best solution.

Posted by David B. Black on 03/08/2022 at 09:56 AM | Permalink | Comments (0)

Hurray Up! It's Almost Two Late Two Celebrate Two's Day

The magnificent, once-in-a-century Two's Day itself has already come and gone. But while it's still fairly large in the rear-view mirror, there is still time to celebrate the nine-day wrapper around the day (and hour and minute) of Two's Day itself. Because we're still early in the amazing nine-day-long celebration of Palindromic Two's Week.

Today is February 23, 2022. It's a pretty two-y day, right? Particularly when you toss out boring and easily-misspelled "February" and replace it with a nice proper two, as in 2/23/2022.

We all know we're in the middle of the 2000's, I trust. Just as the Y2K bug happened because nearly everyone left off the repetitive and largely useless 19 from dates in that fast-fading-into-the-past century, most people leave off the ubiquitous 20 from the year now. What does that make today? Let me spell it out for you:

2/23/22

Oh, boy. I can tell from your silence that you don't get it yet. Let me make it easier:

2 2 3 2 2

It's a PALINDROME!

OK, let me spell it out for you. Literally. Here are a couple of examples. Hint: try reading each word or phrase backwards and see if you notice something.

racecar

repaper

top spot

never odd or even

Now go back to today's date. It's not quite as cool as yesterday's:

2 2 2 2 2

But it's still awesome in its own way. Let's reach back into the past and go fearlessly forward in the land of dates (but not nuts ... uh, OK, maybe numerical nuts...):

2 2 0 2 2

2 2 1 2 2

2 2 2 2 2

2 2 3 2 2

2 2 4 2 2

2 2 5 2 2

2 2 6 2 2

2 2 7 2 2

2 2 8 2 2

Before the palindromic nine day celebration, there was the pathetic:

2 1 9 2 2

and then there will be the huge let-down of

3 0 1 2 2
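For the skeptics, here's a tiny sketch (Python, just for fun) that confirms the nine-day palindromic run when dates are written as M/DD/YY digits:

```python
from datetime import date, timedelta

def is_palindrome_date(d: date) -> bool:
    digits = f"{d.month}{d.day:02d}{d.year % 100:02d}"  # e.g. 2/23/22 -> "22322"
    return digits == digits[::-1]

day = date(2022, 2, 18)
while day <= date(2022, 3, 2):
    if is_palindrome_date(day):
        print(f"{day.month}/{day.day:02d}/{day.year % 100:02d}")
    day += timedelta(days=1)
# Prints 2/20/22 through 2/28/22 -- nine days, no more, no less.
```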

Yet another disappointment is that the perverse designers of the month system, in addition to making the spelling of Feb-ru-ary weird, gave it a pathetic 28 days most of the time, dissing it yet again.

All I can say is, let's CELEBRATE the nine days of Palindromic Two's Week while we're in it!

 

Posted by David B. Black on 02/23/2022 at 09:38 AM | Permalink | Comments (0)

The Experts are Clear: Keep your Cholesterol Low

Everyone knows it’s important to maintain a healthy diet, things like avoiding fatty meat and fish and whole-fat dairy products. All the experts tell us it’s so, and the nutrition guides on food products help us choose food wisely. Everyone knows what “fat” is. Most of us have also heard of “cholesterol,” but it’s not so clear just what that is. It gets clear when you visit a doctor, have your blood tested, and hear the doctor tell you that your cholesterol levels are dangerously high. The doctor says you’ve got to get your cholesterol under control, or else your odds of getting heart disease and dying early go way up.

The doctor will probably tell you that you can help yourself by eating less saturated fat, which causes cholesterol to rise. Depending on how high your numbers are, the doctor may also put you on statin drugs, which lower your cholesterol levels the same way other drugs help lower dangerously high blood pressure. It’s just something you have to do in order to lead a long and healthy life. Are you ready for an incapacitating heart attack, or are you going to take a couple of pills every day? Is that so bad?

The CDC

Let’s make sure this is really true. Let's go to the federal CDC, the Centers for Disease Control and Prevention.

[Screenshot: the CDC's cholesterol page]

Hey, they've got a whole section on cholesterol! Fortunately the CDC makes clear that it’s a myth that all cholesterol is bad for you. There’s HDL, which is good for you. And then there’s…

LDL (low-density lipoprotein), sometimes called “bad” cholesterol, makes up most of your body’s cholesterol. High levels of LDL cholesterol raise your risk for heart disease and stroke.

They go on to explain exactly why LDL is bad for you:

When your body has too much LDL cholesterol, it can build up in the walls of your blood vessels. This buildup is called plaque. As your blood vessels build up plaque over time, the insides of the vessels narrow. This narrowing can restrict and eventually block blood flow to and from your heart and other organs. When blood flow to the heart is blocked, it can cause angina (chest pain) or a heart attack.

There is something you can do with your diet to help things:

Saturated fats can make your cholesterol numbers higher, so it’s best to choose foods that are lower in saturated fats. Foods made from animals, including red meat, butter, and cheese, have a lot of saturated fats.

But then, in the end, the important thing is to avoid getting a heart attack or stroke. The good news is that there are drugs to help:

Although many people can achieve good cholesterol levels by making healthy food choices and getting enough physical activity, some people may also need medicines called statins to lower their cholesterol levels.

Department of Health and Human Services (HHS)

Is the government united in the effort to reduce bad cholesterol? Let’s make another check, with the appropriately named Department of Health and Human Services (HHS).

Apparently the whole world, according to WHO, is sure that heart disease is a huge killer:

Cardiovascular diseases—all diseases that affect the heart or blood vessels—are the number one cause of death globally, according to the World Health Organization (WHO).

They’re also sure that, in addition to diet, cholesterol has a firm place on the list of heart-harming things:

Your health care provider can assess your risk for cardiovascular disease through preventative screenings, including weight, cholesterol, triglycerides, blood pressure, and blood sugar.

The American Heart Association (AHA)

How about the professional organization of heart doctors – what’s their position on cholesterol? It’s pretty clear:

LDL cholesterol is considered the “bad” cholesterol, because it contributes to fatty buildups in arteries (atherosclerosis). This narrows the arteries and increases the risk for heart attack, stroke and peripheral artery disease (PAD).

Harvard Medical School

Better check with the people who train the best doctors. Let's make sure this is really up to date.

[Screenshot: Harvard Medical School article on cholesterol]

Here's what they have to say:

Too much LDL in the bloodstream helps create the harmful cholesterol-filled plaques that grow inside arteries. Such plaques are responsible for angina (chest pain with exertion or stress), heart attacks, and most types of stroke.

What causes a person's LDL level to be high? Most of the time diet is the key culprit. Eating foods rich in saturated fats, trans fats, and easily digested carbohydrates boost LDL

OK, but what if for various reasons diet doesn't get things under control?

Several types of medication, notably the family of drugs known as statins, can powerfully lower LDL. Depending on your cardiovascular health, your doctor may recommend taking a statin.

Conclusion

The science has spoken. The leading authorities in the field of heart health speak it clearly, without reservation and without qualification. Heart attacks are a leading cause of death everywhere. Plaques in the blood vessels cause heart attacks. Those plaques are caused by having too much LDL, the bad cholesterol, in the blood. Your LDL is raised by eating too much saturated fat. You can reduce your chances of getting a heart attack by strictly limiting the amount of saturated fat you eat and by taking drugs, primarily statins, that reduce the amount of LDL.

Why wouldn’t any sane person at minimum switch to low-fat dairy and lean meats, if not go altogether vegan? And then, to be sure, get their blood checked to make sure their LDL level is under control.  The only one who can keep you healthy is YOU, blankity-blank-it! And if you by chance run into some crank telling you otherwise, you shouldn’t waste your time.

Posted by David B. Black on 02/21/2022 at 01:56 PM | Permalink | Comments (0)

Medicine as a Business: Medical Testing 6: Another Test

When you have a tumor that's supposed to be vanquished by radiation therapy but refuses to go away, you're supposed to check on it periodically to see if it's resumed rapid-growth mode. While experience hasn't made my heart grow fond of MRI's, I reluctantly decided to give it another go, since I still have lumps I shouldn't have.

Here's what happened last time. I'm reluctant to dive into MRI-world again because even simple medical scheduling like for a covid test is a big problem -- but small compared to the nightmare of scheduling something like an MRI. Why don't I just go somewhere else where it's done well? Hah! Fat chance. And even then the burden would be on me to pry what are supposedly MY records from the iron grip of the multiple EMR's of my current system.

This time was an adventure -- a new kind of screw-up!

Scheduling the test

You would like to think that a doctor would keep on top of his/her patients and notify them when they're supposed to do something. Like my vet does for my cat! The rhetoric is that they do. It's possible that some of them do -- though how they manage it when having to spend nearly half their time entering an ever-growing amount of stuff into EMR's that are supposed to make things better is a testament to dedication and likely early burn-out.

The burden was on me to remember to schedule this standard-protocol follow-on test for my cancer. Clearly the big-institution medical center wasn’t up to the job. Neither was my insurance company, which is glad to pepper me with reminders to get my blood pressure tested by a doctor, something I regularly do myself at home. Test for cancer? It’s beyond them. A straightforward workflow software system would handle it all automatically.
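Here's a hedged sketch of the kind of follow-up workflow I mean -- the sort of thing my vet's system apparently manages. The intervals, names and structure are invented for illustration; no actual EMR that I know of exposes anything like this.

```python
# Illustrative sketch only: basic follow-up reminder logic of the kind a simple
# workflow system could run nightly. The protocol interval and patient data are
# invented examples, not any real EMR's data model.
from dataclasses import dataclass
from datetime import date, timedelta

FOLLOW_UP_INTERVAL = timedelta(days=365)   # e.g. "repeat the MRI one year after the last one"
REMINDER_LEAD_TIME = timedelta(days=30)    # start nagging a month ahead

@dataclass
class FollowUp:
    patient: str
    study: str
    last_done: date

    def due_date(self):
        return self.last_done + FOLLOW_UP_INTERVAL

    def reminders_due(self, today):
        """Return the reminder messages that should go out today."""
        msgs = []
        if today >= self.due_date() - REMINDER_LEAD_TIME:
            msgs.append(f"Reminder: {self.patient} is due for a {self.study} by {self.due_date()}.")
        if today > self.due_date():
            msgs.append(f"OVERDUE: {self.patient} missed the {self.study} due {self.due_date()}.")
        return msgs

if __name__ == "__main__":
    f = FollowUp("J. Doe", "follow-up MRI", last_done=date(2020, 1, 8))
    for msg in f.reminders_due(date(2022, 1, 15)):
        print(msg)
```

Nothing exotic -- which is exactly the point.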

I was supposed to get the next test a year after the prior one. I let it slip. No one reached out to me, of course. It's now nearly two years. Sigh.

I reached out via email on Nov 16, because I know calling is pointless. After many interactions, on Dec 22 I was told I had an appointment -- ignoring of course my request to make it myself. At least I got one and could arrange things to be there on January 8.

Two days before the appointment I got a brief reminder voicemail and an email. The email didn't happen to mention a time or place. I guess they trusted me to know -- unlike any normal scheduling reminder system. But it did give me a ton of words about covid and safety, and requested that I spend time filling out forms online, which I did. Including uploading my driver's license and insurance card.

Taking the test

I arrived on time. After 45 minutes of claustrophobic, rigid motionlessness to ensure a good-quality MRI while being bombarded by loud noises, the tech stopped things and asked me about my tumor and its location, which is under my shoulder blade. His response to my answer: "We have to stop the MRI test. I'm following the test order, but I just looked at the prior scans and they're different! This order says 'shoulder,' which means around the joint. What the past scans did was the scapula, including all the way to near the backbone. This machine can't capture that. We'll have to restart you on the other machine here that can."

There was more conversation, all polite on my side, since the tech took initiative and was saving me from thinking everything was fine and having to come back to get the scan done correctly.

Even better, the facility wasn't busy, and the tech took the initiative to get me scanned at the correct machine. I was delayed by an hour and had extra practice at remaining immobile under aural bombardment, but OK. I warmly thanked both techs for their initiative and flexibility and went on my way.

Simply copying and sending in the same order as before was apparently beyond the esteemed radiation center director and/or his staff. I guess I should have gone elsewhere after the time I was in for an appointment after a scan had been done and he carefully examined … the wrong shoulder blade. He only switched after the second time I politely mentioned that he was looking at the wrong side.

Seeking the test results

What you're supposed to do is make a follow-up appointment with the director of the radiation oncology center to get your results. As in the past, I want to see the results myself. I have previously made an account on the system's patient access portal to do this. I entered the login information and got told this:

[Screenshot: patient portal message saying the account has been deactivated]

Less than a year after my prior access, they de-activated me. Do banks inactivate accounts for lack of use? How about email accounts? Or anything else? Exactly what horrible consequence is being averted by prompt de-activation? Right.

I read through all the material. Only by downloading a PDF file was I able to get the phone number I had to call, which was the only path back to activation. I called and after much of the usual nonsense I got through to a person who, after learning everything about me except my favorite flavor of ice cream, gave me a code to enable me to enter a new miraculously complex password and have access to ... my own data, blankity-blank it!

The Surprise Appointment

Remember when I asked to make my own MRI test appointment so I could be sure it was at a time I could make? And one was made on my behalf? Imagine making a reservation at a restaurant and having them TELL YOU when your reservation will be -- because they're nice; after all, they don't have to let you come at all, so they'll fit you in when it suits them. This is what the MRI appointment was like.

Now, having logged into MyChart -- finally -- I discovered I had an appointment to see the director of radiation oncology! Surprise! When were you going to tell me, guys? I was nice and called the number, wove my way through the phone maze and found someone who claimed he would "tell the director." Not cancel the appointment; tell the director. Is some form of after-school detention coming my way to punish me for this refusal of an appointment? We'll see.

Why are they so insistent on my having an appointment with the director to "go over my results"? Simple: they want to be able to generate a claim for a visit.

Trying to get my test results

Given that they made an appointment for me to see the doctor two days after the test, it’s a fair assumption that the test results have been filed. I’ve been on the health system’s patient portal and the radiation center’s separate (of course) system every day. They clearly have the results. They refuse to let me have them.

Refusing to provide patients timely access to their test results should be a crime. Why? In the most basic way, they are my property. Suppose I go to a tailor and get measured for a custom suit. I pay for the suit. Then the tailor refuses to give me the suit, and ignores my requests. If I go back to the tailor shop, the tailor says “I don’t deliver the suits. I just take measurements, make a suit, and give it to my team.” How do I get my suit then? “Go to MyTailor.com, sign in and it will be there.” What if it’s not? “Sorry, it’s not under my control.” Is the tailor shop committing a crime, taking money and refusing to deliver what was paid for? Of course! But in the wonderful world of medical business, this is standard practice.

Beyond the crime issue, those test results sometimes concern health issues that patients can be incredibly anxious about! Like me. I’m writing this making liberal use of my right arm and fingers, which the cancer could kill. Could it be worse? Yes. But am I anxious to see those results? You betcha!

All the rhetoric is that patients have the "right" to have full access to their own records. Wonderful modern medical record systems crow about how they support this full access. Lies. Blatant and pernicious. And no one does anything about it! Not only isn't it a scandal, it isn't even news.

Read here about the break-through in hospital EMR electronic data exchange. Read here, here, here and here about prior adventures on the same subject. Summary: Compared to past experiences, this was pretty good!

Getting the pre-auth

I’d kind of like to see the actual pre-auth so I can see the test order, a thing that the insurance company should have denied because it was wrong. I went to their website, which is of course down.

[Screenshot: Anthem website]

They say I can get what I need from their wonderful app Sydney, but it doesn’t have the information. Of course. Forget it. I have better things to do. I already know that the armies of highly paid IT professionals at Anthem can't build software, so there's no point beating a dead horse.

Getting my test results

After finally gaining access to MyMountSinai I log in. Of course the test isn't there. Given that they made an appointment for me to see the doctor two days after the test, I'm pretty sure they have it. They're just taking their sweet time to let me see it. Because patient satisfaction is important to them, you know.

I check the next day. The next. Next. Next. A couple more. I finally email the doctor who ordered the test, politely asking if he would send it to me. A couple days later I got an email from Mount Sinai:

[Screenshot: Mount Sinai email announcing test results for "David A. Black"]

Amazing! The results for "David A. Black" are in! I wonder who that is? A long-lost relative? I'm David B. Black. No wonder matching patient health records is a problem.

I carefully read through the report. Here's the punch line:

[Screenshot: MRI report conclusion: "No convincing evidence of progression..."]

"No convincing evidence of progression..." is definitely "appreciated" by me! While I'd much rather that it was gone, sullenly sitting in my body not growing I'll gladly take.

The doctor later responded to my email saying he would forward the test results, which arrived. The substance was the same, but Mount Sinai had gone to the trouble of omitting information from the officially released report that was present in the one forwarded from their own internal system. For example, the name of the doctor who wrote the report. Instead of simply copying the information they already have so I can access it, they've taken the trouble to create software that picks and chooses exactly which -- of my possessions! -- they will deign to allow me to have. When they feel like it.

Conclusion

In the overall scheme of things, everything I experienced was small potatoes. I'm healthy and alive. This doesn't come close to being in the ball park of the deaths and serious issues resulting from medical error and the costly, health-harming impact of standard medical practices that have been proven to be wrong, but which the authorities refuse to change because it would mean admitting error.

My experience is nonetheless a good example of the business-as-usual gross inefficiencies of the medical system that drive up costs, cause endless patient trouble and generally make things far worse than they should be. This isn't about exotic new biomedical discoveries. It's about plain, ordinary, common-sense processes and software of the kind widely used in fields like veterinary medicine, which should be the standard in human medicine. But aren't. One is tempted to think in terms of self-absorbed, heads-in-the-clouds elites, but all I've got is mountains and mountains of anecdotal evidence and no serious RCT's (randomized controlled trials, the gold standard of medical studies) in favor of that hypothesis, so I'll just put it aside.

Posted by David B. Black on 02/15/2022 at 09:15 AM | Permalink | Comments (0)

Two’s-day February 22 2022 an EXTREMELY Rare Day

What will happen on Tuesday, February 22, 2022 is something remarkably rare in history. 2/22/2022 has SIX two’s and a zero. The recently-passed 2/2/2022 was also pretty amazing, well worth making a big deal out of were it not for its grander cousin following just 20 days later. 2/2/2022 has only FIVE two’s and even worse, it fell on that ignominious day of the week Wednesday.

Wednesday is a terrible day. It’s the low point of the week, just as far from the last weekend as it is to the next one. It’s a contradiction in terms, a Wednesday trying to pass itself off as a Two’s-day. And the spelling! It’s pronounced “Wen’s day.” So why in the world does it spell itself “Wed ness day?” To trip up third graders on spelling tests?

You think it’s not such a rare thing? How about Feb 22, 1922, you might say. Well, here’s the dirt:

2/22/1922 had only 5 two’s.

The upcoming AMAZING day not only has 6 of those cool two’s, but the only non-two on the day is “nothing” to be worried about. Literally zero.

2/22/1922 had LOTS of non-two’s to upset anyone looking for beauty and consistency. There was a one. And, even worse, a nine. There is no rational come-back to those glaring errors.

If you’re still hanging on to the imagined glory of 2/22/1922, consider this: it was a Wednesday! How can a legitimate Two’s-day fall on a Wed-nes-day, I ask you?

You might dig in your heels and say “that’s fine about the past. But what about the future? Isn’t it obvious that 2/22/2222, a date that’s only 200 years from now would be even better?  That’s seven two’s! Beat you!”

Ummmm, no you didn’t. Yes, it’s got an extra two. Good for it. But it doesn’t fall on a Tuesday. And to answer your desperate comeback, neither does 2/2/2222.

The glorious Two’s-day, 2/22/2022

When should we celebrate? On the day, of course, but I mean exactly when?

Think about it. Tick, tick, tick.

The fireworks should go off and the bottles should be popped exactly when the 24 hour clock hits twenty-two seconds of twenty-two minutes of ten o’clock in the evening, in other words:

22:22:22

At that exact second it will be:

22:22:22 on Tuesday, 2/22/2022

Two's-Day!!!

What could possibly be better than that??

Get ready, folks. Make your preparations. This isn’t just a once-in-a-lifetime event, it’s a once-practically-EVER event!

Here's how one teacher is helping her elementary school class celebrate:

[Photo: an elementary school classroom decorated for Two's Day]

 

Posted by David B. Black on 02/06/2022 at 04:59 PM | Permalink | Comments (0)

The Experts are clear: Don’t Eat Much Saturated Fat

Any reasonably aware person knows that it’s important to maintain a healthy diet. High on the list of what “healthy eating” means is limiting the amount of saturated fat in your diet. This impacts all the meat and dairy products you consume. You should only drink reduced-fat milk for example. If you must eat meat, make sure it’s lean, and never eat something obviously fatty like bacon. This isn’t just something experts say at their conferences. It’s the official recommendation of all government bodies, and brought to the attention of ordinary people by nutrition labels on food products. Warning: there are contrarian views on this subject.

Cheese

Here’s a nice goat cheese I bought:

[Photo: goat cheese package, front]

When you turn it over, here’s most of the nutrition label:

[Photo: goat cheese nutrition label]

Wow, calories must be important – they’re first and in big type. Right after calories comes Fat.  It must be really important, because I’m told not just how much fat there is, but how much of the fat I’m allowed to eat a day is in each serving.

This is interesting. There are 6 grams of Total Fat, which is only 8% of my daily allowance, but 4 grams of the Fat is Saturated Fat, 2/3 of the total, and that’s 20% of my daily allowance. Couldn’t be clearer: I can eat a fair amount of fat, but I’d better make sure that only a tiny part of it is Saturated. Doing the arithmetic, they only want me to eat 20 grams of Saturated Fat, while I’m allowed about 75 grams of Total Fat.

I wonder if I’m getting this right, because some of the items on those labels seem like things you should get lots of, like vitamins and potassium. I’d better check.

FDA

Oh, good, the FDA’s food label page links right to a whole initiative they sponsor, the Healthy People initiative! How great is that? They’re concentrating on the big picture, keeping us all healthy. What a great government we have!

Here’s what they have to say about diet at a high level:

[Screenshot: Healthy People dietary guidance]

Pretty clear, huh? Just like I said above: eat only lean meat, and low fat dairy. Saturated fats are bad for you. Everyone knows it. The importance is so great, it’s on the label of nearly every food product.

American Heart Association (AHA)

Let’s admit it, though, sometimes the government lags behind the latest science. Let’s make sure that’s not the case here.

What about the major medical organization that concentrates on heart, the American Heart Association? Their position seems very clear:

[Screenshot: American Heart Association statement on saturated fat]

They sound pretty sure of themselves. Why are they so certain? Here's what they say as of November 2021: "Decades of sound science has proven it can raise your 'bad' cholesterol and put you at higher risk of heart disease."

OK, there are decades of science backing them up. Still, it's pretty broad, talking about not eating "too much" saturated fat. Do they have something more specific to say? Here it is:

[Screenshot: AHA recommended daily limit for saturated fat]

Hmm, how does that relate to the FDA's food label? On the cheese label above, the Saturated Fat was 4g, which is 20% of the recommended total. Arithmetic: if 4g is 20%, then 20g is the limit imposed by the FDA, which is roughly 50% more than the professional organization of medical cardiologists recommends! I thought our government was looking out for our health -- the FDA should get with it!
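Here's the back-of-the-envelope arithmetic in code form -- a throwaway sketch. The 13-gram AHA figure is my reading of their widely quoted guidance (roughly 5 to 6 percent of a 2,000-calorie diet), so treat that number as an assumption rather than gospel.

```python
# Back-of-the-envelope sketch of the label arithmetic above. The 13 g AHA figure
# is an assumption based on their widely quoted guidance.

def implied_daily_limit(grams_per_serving, percent_daily_value):
    """If X grams is Y% of the daily value, the full daily value is X / (Y/100)."""
    return grams_per_serving / (percent_daily_value / 100)

sat_fat_dv   = implied_daily_limit(4, 20)  # cheese label: 4 g = 20% -> 20 g/day
total_fat_dv = implied_daily_limit(6, 8)   # cheese label: 6 g = 8%  -> 75 g/day
aha_sat_fat_limit = 13                     # assumed AHA figure, grams/day

print(f"FDA implied saturated fat limit: {sat_fat_dv:.0f} g/day")
print(f"FDA implied total fat limit:     {total_fat_dv:.0f} g/day")
print(f"FDA vs AHA on saturated fat: {100 * (sat_fat_dv - aha_sat_fat_limit) / aha_sat_fat_limit:.0f}% higher")
```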

Harvard

Hold on here, let's not jump to conclusions. Let's check in with that incredibly prestigious medical school, Harvard Medical School.

Here’s what they have to say in an article from November 2021:

[Screenshot: Harvard Medical School article on saturated fat]

Isn't it wonderful that they make it clear that it isn't just bacon and fatty cheese we need to be careful about? Reading a bit further,

[Screenshot: Harvard's recommended daily limit for saturated fat]

Higher than the AHA, but lower than the FDA. I guess they don't all read the same scientific studies, or something. But at least they all agree that Saturated Fat is bad for you. Reading a bit farther in the article, they say plainly that eating too much Saturated Fat "can raise the amount of harmful LDL cholesterol in your blood. That, in turn, promotes the buildup of fatty plaque inside arteries — the process that underlies most heart disease."

Couldn’t be clearer.

Mayo Clinic

Just to be absolutely, double-plus positive, maybe it's worth checking one of the best hospital medical systems in the world, the Mayo Clinic. They're doctors, after all, not researchers or institutional employees. Let's see what they say. OMG! Look at what I found in the section on nutrition myths!

Eating fat will make you fat. The fat-free and low-fat diet trend is a thing of the past (80s and 90s, to be exact). Yet, some individuals are still scared of fat.

Isn't that what all this focus on fat avoidance is all about? Let's read on:

Be aware that fats aren’t created equal. Choose heart-healthy unsaturated fats, such as olive and canola oil, nuts, nut butters and avocados over those that are high in saturated and trans fats, including fatty meats and high-fat dairy products.

Now I get it. The FDA nutrition food label had a high limit for fats in general (which are OK), but a low limit for saturated fats, the bad kind. So the Mayo Clinic is on board too. All the experts agree!

Conclusion

There are crazy people out there who ignore the clear message of the government, the Experts and leading authorities in the field of health and nutrition. Some of these crazy people even write books, the obvious intent of which is to make more of the population lead crappier lives and die sooner. Here's a brief summary. Why the FDA, the agency supposedly charged with keeping us healthy, permits these health-destroying, misinformation-filled books to be published, I have no idea.

Regardless of the distractions: government and the big authorities in the field are united in the effort to keep us all more healthy by encouraging us all to strictly limit the amount of Saturated Fat we eat.

Posted by David B. Black on 02/01/2022 at 09:48 AM | Permalink | Comments (0)

Object-Oriented Software Languages: The Experts Speak

On the subject of Object-Oriented Programming (OOP), there are capital-E Experts, most of academia and the mainstream institutions, and there are small-e experts, which include people with amazing credentials and accomplishments. They give remarkably contrasting views on the subject of OOP. Follow the links for an overview, analysis and humor on the subject.

The Exalted Experts on OOP

Here is the start of the description of Brown's intro course to Computer Science, making it clear that "object-oriented design and programming" is the foundational programming method, and Java the best representation language:

[Screenshot: Brown University intro Computer Science course description]

Here's their description of OOP, making it clear that there are other ways to program, specifically the nearly-useless functional style, never used in serious production systems.

[Screenshot: Brown University description of object-oriented programming]

See below to see what Dr. Alan Kay has to say about Java.

Here is what the major recruiting agency Robert Half has to say on the subject:

Object-oriented programming is such a fundamental part of software development that it’s hard to remember a time when people used any other approach. However, when object-oriented programming, or OOP, first appeared in the 1980s, it was a radical leap forward from the traditional top-down method.

These days, most major software development is performed using OOP. Thanks to the widespread use of languages like Java and C++, you can’t develop software for mobile unless you understand the object-oriented approach. The same goes for web development, given the popularity of OOP languages like Python, PHP and Ruby.

It's clear: OOP IS modern programming. Except maybe for some people who like functional languages.

The mere experts on OOP

We get some wonderful little-e expert witness from here.

“Implementation inheritance causes the same intertwining and brittleness that have been observed when goto statements are overused. As a result, OO systems often suffer from complexity and lack of reuse.” – John Ousterhout Scripting, IEEE Computer, March 1998

“Sometimes, the elegant implementation is just a function. Not a method. Not a class. Not a framework. Just a function.” – John Carmack

OO is the “structured programming” snake oil of the 90's. Useful at times, but hardly the “end all” programming paradigm some like to make out of it.

And, at least in its most popular forms, it can be extremely harmful and dramatically increase complexity.

Inheritance is more trouble than it’s worth. Under the doubtful disguise of the holy “code reuse” an insane amount of gratuitous complexity is added to our environment, which makes necessary industrial quantities of syntactical sugar to make the ensuing mess minimally manageable.

More little-e expert commentary from here.

Alan Kay (1997)
The Computer Revolution hasn’t happened yet
“I invented the term object-oriented, and I can tell you I did not have C++ in mind.” and “Java and C++ make you think that the new ideas are like the old ones. Java is the most distressing thing to happen to computing since MS-DOS.” (proof)


Paul Graham (2003)
The Hundred-Year Language
“Object-oriented programming offers a sustainable way to write spaghetti code.”


Richard Mansfield (2005)
Has OOP Failed?
“With OOP-inflected programming languages, computer software becomes more verbose, less readable, less descriptive, and harder to modify and maintain.”


Eric Raymond (2005)
The Art of UNIX Programming
“The OO design concept initially proved valuable in the design of graphics systems, graphical user interfaces, and certain kinds of simulation. To the surprise and gradual disillusionment of many, it has proven difficult to demonstrate significant benefits of OO outside those areas.”


Jeff Atwood (2007)
Your Code: OOP or POO?
“OO seems to bring at least as many problems to the table as it solves.”


Linus Torvalds (2007)
this email
“C++ is a horrible language. … C++ leads to really, really bad design choices. … In other words, the only way to do good, efficient, and system-level and portable C++ ends up to limit yourself to all the things that are basically available in C. And limiting your project to C means that people don’t screw that up, and also means that you get a lot of programmers that do actually understand low-level issues and don’t screw things up with any idiotic “object model” crap.”


Oscar Nierstrasz (2010)
Ten Things I Hate About Object-Oriented Programming
“OOP is about taming complexity through modeling, but we have not mastered this yet, possibly because we have difficulty distinguishing real and accidental complexity.”


Rich Hickey (2010)
SE Radio, Episode 158
“I think that large object-oriented programs struggle with increasing complexity as you build this large object graph of mutable objects. You know, trying to understand and keep in your mind what will happen when you call a method and what will the side effects be.”


Eric Allman (2011)
Programming Isn’t Fun Any More
“I used to be enamored of object-oriented programming. I’m now finding myself leaning toward believing that it is a plot designed to destroy joy. The methodology looks clean and elegant at first, but when you actually get into real programs they rapidly turn into horrid messes.”


Joe Armstrong (2011)
Why OO Sucks
“Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds.”


Rob Pike (2012)
here
“Object-oriented programming, whose essence is nothing more than programming using data with associated behaviors, is a powerful idea. It truly is. But it’s not always the best idea. … Sometimes data is just data and functions are just functions.”


John Barker (2013)
All evidence points to OOP being bullshit
“What OOP introduces are abstractions that attempt to improve code sharing and security. In many ways, it is still essentially procedural code.”


Lawrence Krubner (2014)
Object Oriented Programming is an expensive disaster which must end
“We now know that OOP is an experiment that failed. It is time to move on. It is time that we, as a community, admit that this idea has failed us, and we must give up on it.”


Asaf Shelly (2015)
Flaws of Object Oriented Modeling
“Reading an object oriented code you can’t see the big picture and it is often impossible to review all the small functions that call the one function that you modified.”

Here is Wiki's take on issues with OOP. It goes into detail.

Here is Linus Torvalds' take on object-oriented C++. Linus is merely the creator and leader of Linux, the open-source software that fuels the vast majority of the web.

More details:

Essay by Joe Armstrong. "After its introduction OOP became very popular (I will explain why later) and criticising OOP was rather like “swearing in church”. OOness became something that every respectable language just had to have."

A talk given at an OOP conference by an OOP supporter who lists 10 things he hates.

A Stanford guy describing his evolution into OOP and then out of it. Lots of detail.

A professional who gradually realized there were issues with objects.

I have therefore been moving away from the object-oriented development principles that have made up the bulk of my 17 year career to date. More and more I am beginning to feel that objects have been a diversion away from building concise, well structured and reusable software.

As I pondered on this topic, I realised that this isn’t a sudden switch in my thinking. The benefits of objects have been gradually declining over a long period of time.

A detailed explanation of how the noun-centricity of OO languages perverts everything. Here is an extended quote from the start of this brilliant essay:

All Java people love "use cases", so let's begin with a use case: namely, taking out the garbage. As in, "Johnny, take out that garbage! It's overflowing!"

If you're a normal, everyday, garden-variety, English-speaking person, and you're asked to describe the act of taking out the garbage, you probably think about it roughly along these lines:

get the garbage bag from under the sink
carry it out to the garage
dump it in the garbage can
walk back inside
wash your hands
plop back down on the couch
resume playing your video game (or whatever you were doing)


Even if you don't think in English, you still probably thought of a similar set of actions, except in your favorite language. Regardless of the language you chose, or the exact steps you took, taking out the garbage is a series of actions that terminates in the garbage being outside, and you being back inside, because of the actions you took.

Our thoughts are filled with brave, fierce, passionate actions: we live, we breathe, we walk, we talk, we laugh, we cry, we hope, we fear, we eat, we drink, we stop, we go, we take out the garbage. Above all else, we are free to do and to act. If we were all just rocks sitting in the sun, life might still be OK, but we wouldn't be free. Our freedom comes precisely from our ability to do things.

Of course our thoughts are also filled with nouns. We eat nouns, and buy nouns from the store, and we sit on nouns, and sleep on them. Nouns can fall on your head, creating a big noun on your noun. Nouns are things, and where would we be without things? But they're just things, that's all: the means to an end, or the ends themselves, or precious possessions, or names for the objects we observe around us. There's a building. Here's a rock. Any child can point out the nouns. It's the changes happening to those nouns that make them interesting.

Change requires action. Action is what gives life its spice. Action even gives spices their spice! After all, they're not spicy until you eat them. Nouns may be everywhere, but life's constant change, and constant interest, is all in the verbs.

And of course in addition to verbs and nouns, we also have our adjectives, our prepositions, our pronouns, our articles, the inevitable conjunctions, the yummy expletives, and all the other lovely parts of speech that let us think and say interesting things. I think we can all agree that the parts of speech each play a role, and all of them are important. It would be a shame to lose any of them.

Wouldn't it be strange if we suddenly decided that we could no longer use verbs?

Let me tell you a story about a place that did exactly that...

The Kingdom of Nouns

In the Kingdom of Javaland, where King Java rules with a silicon fist, people aren't allowed to think the way you and I do. In Javaland, you see, nouns are very important, by order of the King himself. Nouns are the most important citizens in the Kingdom. They parade around looking distinguished in their showy finery, which is provided by the Adjectives, who are quite relieved at their lot in life. The Adjectives are nowhere near as high-class as the Nouns, but they consider themselves quite lucky that they weren't born Verbs.

Conclusion

No big surprise, the experts beat the Experts hands-down. But you'd never know it if you go through a typical Computer Science "education," absorb the way that object-orientation is the "dominant" paradigm of computing and read the job requirements that talk about how serious the hiring group is about their object-hood. Programmers who are serious about what they do and try to understand it soon see the lack of clothing on King Object and move on.

Posted by David B. Black on 01/24/2022 at 02:28 PM | Permalink | Comments (0)

Data Humor Book by Rupa Mahanti

There's a new book out about Data Humor.

[Image: cover of the Data Humor book]

If you like data, you will be amused by this book. If you feel tortured by data, join the crowd -- and read this book, it will relieve some of the stress. If you were wondering what nerd humor was all about, read this book -- better to learn about nerd humor by getting the giggles.

The author searched far and wide for data humor. She stumbled upon a blog that had some bits she thought were funny -- this blog, yes, the one you're reading now!

She contacted me to ask permission to quote me. After thinking hard about whether I should grant permission -- for about a microsecond -- I gave it. She asked me to check out a draft of the book. I guess she liked what I said because my quote went on the back cover and was the first of the quotes on Amazon.

"This is a brilliant book. The title says it's humorous. It's hilarious! But even more valuable is the sad-but-true insights it conveys about humans, lost and wandering in uncharted forests of data, anxious to escape."—David B. Black, Technology Partner, Oak HC/FT Partners

It's quite amazing how widely she searched for quotes, from places I never would have thought to look:

... book containing a collection of more than 400 funny and quirky quotes, puns, and punchlines related to data, big data, statistics, and data science, from different sources and a wide array of cultural figures, thought leaders and key influencers across the world -- William Edwards Deming, Charles Wheelan, Brené Brown, David B. Black, Tim O’Reilly, Jill Dyché, Evan Levy, Gwen Thomas, George Mount, David Shenk, James Gleick, Jim Barksdale, Vincent Granville, Cathy O’Neill, Dale Carnegie, Martyn Richard Jones, Timo Elliott, Mark Twain, Phil Simon, Lewis Carroll, Oscar Wilde, Thomas H. Davenport, DJ Patil, Damian Mingle, Thomas C. Redman, Cassie Kozyrkov, Brent Dykes, Guy Bradshaw, Scott Taylor, Susan Walsh, Winston Churchill, Ronald Reagan, Arthur Conan Doyle, and many more.

Here's just one:

Data isn't information, any more than fifty tons of cement is a skyscraper.
Clifford Stoll (Stoll 1996)

Don't you need some light in your life? I promise, it's lighter in every way than fifty tons of concrete...

Posted by David B. Black on 01/18/2022 at 09:49 AM | Permalink | Comments (0)

Software NEVER needs to be “Maintained”

We maintain our cars, homes and devices. Heating and cooling systems need regular maintenance. So do our bodies! If we don’t care for our bodies properly, they break down! Software, by sharp contrast, never needs to be maintained. NEVER! Using the word “maintenance” to describe applying a “maintenance update” to software is beyond misleading. More accurate would be to say “a new version of the software that was crippled by a horrible design error that our perpetually broken quality processes failed to catch.” That’s not “maintenance.” It’s an urgent “factory recall” to fix a design error that infects every car (copy of the software) that was built using the flawed design.

Software is different than almost everything

Software is unlike nearly everything in our experience. It is literally invisible. Even “experts” have trouble understanding a given body of code, much less the vast continent of code it interacts with. Naturally, we apply real-world metaphors to give us a chance of understanding it. While sometimes helpful, the metaphors often prove to be seriously misleading, giving nearly everyone a deeply inaccurate view of the underlying invisible reality. The notion of “software maintenance” is a classic example. The flaw is similar to the words “software factory.”

Maintaining anything physical centers around either preventing or repairing things that break due to simple wear-and-tear or an adverse event. We change the oil in a car because it degrades with use. We change the filters in heating and cooling units because they get clogged up with the gunk from the air that passes through them. We sharpen knives that have dulled as a result of use. We maintain our homes and yards. It’s the physical world and things happen.

In the invisible, non-physical world of software, by contrast, a body of software is the same after years of use as it was the moment it was created. Nothing gets worn down. Nothing gets clogged. An inspection after years of heavy use would show that every single bit, every one and zero, was the same as it was when it was created. Of course there are memory crashes, hacker changes, etc. It’s not that software is impervious to being changed; it’s just that software is unchanged as a result of being used – unlike everything in the normal physical world, which one way or another, is changed by its environment – everything from clothes getting wrinkled or dirty from wear to seats being worn down by being sat upon.

The meaning of software maintenance

When a car is proven to have a design flaw, auto manufacturers are reluctant to ship everyone a new car in which the original design flaw has been corrected. Instead, they issue a recall notice to each affected owner, urging them to bring their car to the nearest dealership to have a repair done that corrects the design flaw. It’s inconvenient for the owner, but far less expensive for the manufacturer. With software, by contrast, all the software vendor has to do is make a corrected version of the software available for download and installation, the software equivalent of shipping everyone a new car! It’s no more expensive to “ship” hundreds of megabytes of “brand-new” code than it is a tiny bit. Such are the wonders of software.

Software factory recalls are part of everyday life. Software creators are maddeningly unable to create error-free software that is also cyber-secure. See this.

We’ve all become accustomed to the Three Stooges model of building software.

[Image: The Three Stooges]

There are highly paid hordes of cosseted employees enjoying free lunches and lounging on bean bags on luxurious campuses, “hard at work” creating leading edge software whose only consistent feature is that it’s late, expensive and chock full of bugs and security flaws.

While the Three Stooges and their loyal armies of followers are busily at work creating standards, regulations and academic departments devoted to churning out well-indoctrinated new members of the Stooge brigades, rebels are quietly at work creating software that is needed to meet the needs of under-served customers, using tools and methods that … gasp! … actually work. What an idea!

The good news is that the rebels are often richly rewarded for their apostasy by customers who eagerly use the results of their work. It’s a good thing for the customers that the totalitarian masters of the Three Stooges software status quo are no better at enforcing their standards than they are at building software that, you know, works.

Posted by David B. Black on 01/10/2022 at 11:30 AM | Permalink | Comments (0)

Cryptocurrency: Money, Trust and Regulation Book

A book has been published about cryptocurrency that stands out from the many books available on the market: it's written by a person with experience and true expertise in financial markets, institutions and regulation both in government and the private sector, Oonagh McDonald. Disclosure: I was her technical advisor for the book. We connected as a result of my article in Forbes on Central Bank Digital Currencies.

[Image: cover of Cryptocurrency: Money, Trust and Regulation]

Dr. McDonald's prior books are impressive because of her amazing perspective and knowledge. Here's her background:

Dr. Oonagh McDonald CBE is an international expert in financial regulation, having advised regulatory authorities in a wide range of countries, including Indonesia, Sri Lanka and Ukraine. She was formerly a British Member of Parliament, then a board member of the Financial Services Authority, the Investors Compensation Scheme, the General Insurance Standards Council, the Board for Actuarial Standards and the Gibraltar Financial Services Commission. She was also a director of Scottish Provident and the international board of Skandia Insurance Company and the British Portfolio Trust. She is currently Senior Adviser to Crito Capital LLC. She was awarded a CBE in 1998 for services to financial regulation and business. Her books include Fannie Mae and Freddie Mac: Turning the American Dream into a Nightmare (2013), Lehman Brothers: A Crisis of Value (2015) and Holding Bankers to Account (2019). She now lives in Washington DC, having been granted permanent residence on the grounds of "exceptional ability".

Read the comments at the link about her books on Lehman Brothers, Fannie Mae, bankers and markets and others.

Here are examples of what others have said:

Oonagh McDonald has done it again. In this ambitious book, she helps the rest of the world catch up with her on the opportunities and risks associated with stable coins. Even if one may disagree with her about the future of stable coins (and I do a bit), this book is an invaluable resource, especially as a teaching tool, because of McDonald’s ability to synthesize and interpret a vast amount of information about complex and novel practices. -- Charles Calomiris, Henry Kaufman Professor of Financial Institutions, Columbia Business School

McDonald’s rigorously researched analysis of the development of cryptocurrencies is a must-read for anyone who has a stake in the future of money. It is an historical tour de force that painstakingly teases out of every corner of the cryptocurrency world the critical issues that governments, policy makers, and consumers must consider before abandoning government fiat money. -- Thomas P. Vartanian, executive director and professor of law, Program on Financial Regulation and Technology, George Mason University

Everyone fascinated by how the cryptocurrency phenomenon has created a whole sector of ventures to furnish ‘alternative currencies’, while the dollar price of a Bitcoin boomed from 8 cents to a high of more than $60,000, must wonder whether all this will really bring about a revolution in the nature of money. Will Bitcoin’s libertarian dream to displace central bank fiat currency be achieved? Or ironically, will central banks take over digital currencies and make themselves even more dominant monetary monopolies than before? Oonagh McDonald, always a voice of financial reason, provides a thorough consideration of these questions and of cryptocurrency ideas and reality in general, with the intertwined issues of technology, regulation, trust, and government monetary power. This is a very insightful and instructive guide for the intrigued. -- Alex J. Pollock, Distinguished Senior Fellow Emeritus, R Street Institute, and former Principal Deputy Director, Office of Financial Research, US Treasury

It's a different perspective from the many books on the subject of cryptocurrencies that have been published. Whether or not you agree with her conclusions, you will read facts and perspective here that are not available elsewhere.

Posted by David B. Black on 01/04/2022 at 04:18 PM | Permalink | Comments (0)

The Nightmare of Covid Test Scheduling

Oh, you want to get a Covid test, do you? Little did you know that the clever people who do these things also give you endurance, patience and intelligence tests at the same time! Our wonderful healthcare people and helpful governments have somehow arranged a diverse set of ways to make you fill out varying forms in varying orders, only to find out that there are no available appointments.

Don’t you think the highly paid experts who created these services could have done something simple, like following the model of dimensional search used at little places like Amazon, travel sites and other places that care about customers? I guess that would have been too easy or something. And besides, medical scheduling in general is a nightmare, why should this be different?

Looking for a test: CVS

Search told me that my local CVS has testing. I clicked to the website of my local store. I clicked “schedule a test.” Although I had come from the local store, I guess the people who built covid testing didn’t manage to get the local site to pass on its location, so I entered my location again as requested.

Now I have to “answer a few questions” for no-cost testing. Eight questions. Then when I say yes to recent symptoms, 12 more questions plus the date my symptoms began. Then clicking that I was truthful.

Next, pick a test type, look at a map of local stores and see a list of dates starting today. I pick today. There’s a list of each store, with a button under each to “Check for available times.” Click on the first store. Here’s what appears:

There are no available times at this location for Tue., 12/21. Try searching for availability on another date.

Wow. I go up and pick the next day. Click. No times. Pick the next day. Click. No times.

CVS has pioneered a whole new way to help customers pick a time! You pick a date, pick a store, click and hope you get lucky. Then pick a different store and/or a different time and click again. And keep rolling until you hit the jackpot! Assuming there’s one there…

Since there was no end in sight, I tried something different.

Looking for a test: Walgreens

No questions first. Hooray! Just put in a location, pick a test type and see a list of locations. … Almost all of which had “No appointments available.” Let’s check out the one nearest to me, which said “Few appointments available.” I click. First I have to agree to lots of things. Now I have to enter my full patient information: name, gender, DOB, race, ethnicity, full address, phone and email. Then review and click that it’s correct.

Then, it’s the covid questions: my symptoms, contacts, medical conditions, pregnancy. Have I had vaccines? For each, which one and the date given. Have I tested positive in the past?

Now, after all that, I can pick an appointment. Back to that bait-and-switch first screen with test types and locations. I pick the location. Now a calendar shows up. Today’s date is highlighted. This message in red is below: “No time slots available within the selected date. Try a different date for more options.” The next 7 days are in normal type, beyond that they’re greyed out. Do any of them work? I try each day individually. They each give the same message! Why couldn’t you have told me that NO DATES WERE AVAILABLE!?!? Maybe even … BEFORE I filled all that stuff out??
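Here's a hedged sketch of what availability-first search could look like -- the kind of thing the dimensional search at places like Amazon does as a matter of course. The data structures are invented for illustration; no real pharmacy scheduling system works exactly this way.

```python
# Illustrative sketch: show only the locations and dates that actually have open
# slots, BEFORE asking the patient to fill out screen after screen of forms.
# The slot data is invented; a real system would query its appointment inventory.
from collections import defaultdict

OPEN_SLOTS = [  # (location, date, time) rows a real system would pull from its database
    ("Main St Pharmacy", "2021-12-23", "09:20"),
    ("Main St Pharmacy", "2021-12-23", "14:40"),
    ("Elm Ave Clinic",   "2021-12-26", "11:00"),
]

def availability_by_location(slots):
    """Collapse raw slots into: location -> dates with at least one opening."""
    summary = defaultdict(set)
    for location, day, _time in slots:
        summary[location].add(day)
    return summary

def show_choices(slots):
    summary = availability_by_location(slots)
    if not summary:
        print("No appointments available nearby -- and you're told up front, before any forms.")
        return
    for location, days in sorted(summary.items()):
        print(f"{location}: openings on {', '.join(sorted(days))}")

show_choices(OPEN_SLOTS)
```

Filter first, collect the patient's life story second. Apparently that's asking a lot.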

Looking for a test: The state of NJ

Since I live in NJ, I get regular dispatches about how the state government cares about my health in general and covid in particular. So I went to the state site.

[Screenshot: New Jersey covid testing site]

Which it turns out is operated by a private company, Castlight.

[Screenshot: Castlight covid test site finder]

I put in my zip code. They list places that offer testing, one of which is the Walgreens I just tried. But I click on it anyway, and they link me to Walgreens testing … in a town 10 miles away instead of my town, which was explicitly the one I clicked on. Good job!

They got my hopes up by listing Quest Diagnostics, which has a location in my town. I answer a long list of questions and am told that I qualify for a test! Hooray! But then …

[Screenshot: MyQuest sign-up page]

I have to sign up and provide loads of personal information before even knowing I can get a test. That’s it for Quest.

Looking for a test: The local county

Maybe my local county would have done it better? Let’s check it out.

I get a long list of testing places. How do I find one near me? After a few minutes of confusion, I discover that the sites are listed alphabetically! Now that’s helpful!

CVS of course is near the top, with a line per location. My town isn’t listed even though I already know that the local CVS claims they do tests. Crap.

Looking for a test: Digging deep

I found a private place, Solv, that claims to link you right to testing places. I tried. They had a clinic not too far from me. Clicked. I’m still on Solv, which is potentially good. After more clicking, it turned out that no appointments were available today or tomorrow, the only choices. Gee, Solv, maybe in the next release of your software you could possibly only show choices that were actually, you know, available??

I finally tried a little pharmacy that is local and has remained independent. They offer tests. I clicked and got to a page dedicated to the pharmacy under a place I’d never heard of, Resil Health. Right away they list dates and times available. Just a few days out.

[Screenshot: the local pharmacy's scheduling page, run by Resil Health]

I pick a date and enter the usual information on a clean form, but also my insurance information and a photo of the front & back of my card. Click. The time is no longer available! But at least picking another time was easy. I was disappointed that it was a couple days out. They sent an email with a calendar invite. I accepted. There was a link to reschedule. I tried it. To make a long story short, sometimes when I clicked reschedule the dates available changed, and earlier ones appeared. After some effort I snagged one the same day! Then I went. All I had to do was show my driver’s license – since they had everything else, neither I nor anyone at the pharmacy had to do paperwork – Resil health did it all, including the reporting.

It was a pain, but by far the best. Hooray for small-group entrepreneurs, getting a service up and running that makes things easier and better than any of the giant private companies and certainly any of the pathetic ever-so-helpful governments.

Looking for a test: Is it just me?

I had to wonder: is New Jersey particularly bad, as snotty New Yorkers like to joke about, or is it just the way things are? It turns out that, even in high-rise Manhattan, covid testing is tough. This article spells out the issues.

Mayor Bill de Blasio keeps telling New Yorkers frustrated with long waits and delayed results at privately-run COVID testing sites to use the city’s public options — but his administration’s incomplete and bulky websites make that exceedingly difficult.

It’s not just me.

Conclusion

I got my test. I’ll get the results soon. Let's hope getting those results is better than it often is in medicine. What’s the big deal? I’m only writing about it because it’s a representative story in the life-in-the-slow-lane of typical software development. It’s possible to write good software. Thankfully there are small groups of motivated programmers who ignore the mountain of Expert-sanctioned regulations, standards and processes that are supposed to produce good software. These software ninjas have a different set of methods – ones that actually work! For example, in New York City:

The complaints echo the problems New Yorkers encountered when city officials first rolled out their vaccine appointment registration systems this spring — prompting one big-hearted New Yorker with computer skills to create TurboVax to work around the mess.

“We don’t have a single source of truth for all testing sites in NYC,” tweeted the programmer, Huge Ma, who was endearingly dubbed ‘Vax Daddy’ by grateful Gothamites. “Tech can’t solve all problems but it shouldn’t itself be a problem on its own.”

One guy – but a guy who actually knows how to produce effective, working software in less time than the usual software bureaucracy would take to produce a first draft requirements document. This is one of the on-going stream of anomalies that demonstrate that a paradigm shift in software is long overdue.

Posted by David B. Black on 12/22/2021 at 03:01 PM | Permalink | Comments (0)

The Dimension of Automation Depth in Information Access

I have described the concept of automation depth, which goes through natural stages starting with the computer playing a completely supportive role to the person (the recorder stage) and ending with the robot stage in which the person plays a secondary role. I have illustrated these stages with a couple of examples that show the surprising pain and trouble of going from one stage to the next.

Unlike the progression of software applications from custom through parameterized to workbench, customers tend to resist moving to the next stage of automation for various reasons including the fear of loss of control and power.

Automation depth in Information Access

Each of the patterns of software evolution I've described is general in nature. I’ve tried to give examples to show how the principles are applied. In this section, I’ll show how the entire pattern played out in “information access,” which is the set of facilities for enabling people to find and use computer-based information for business decision making.

Built-in Reporting

“Recorder” is the first stage of the automation depth pattern of software evolution. In the case of information access, early programs were written to record the basic transactions that took place; as part of the recording operation, reports were typically produced, summarizing the operations just performed. For example, all the checks written and deposits made at a bank would be recorded during the day; then, at night, all the daily activity would be posted to the accounts. The posting program would perform all the updates and create reports. The reports would include the changes made, the new status of all the accounts, and whatever else was needed to run the bank.

At this initial stage, the program that does the recording also does the reporting. Reporting is usually thought to be an integral part of the recording process – you do it, and then report on what you did. Why would you have one program doing things, and a whole separate program figuring out and reporting on what the first program did? It makes no sense.

What if you need reports for different purposes? You enhance the core program and the associated reports. What if lots of people want the reports? You build (in the early days) or acquire (as the market matured) a report distribution system, to file the reports and provide them to authorized people as required.

Efficiency was a key consideration. The core transaction processing was “touching” the transactions and the master files; while it was doing this, it could be updating counters and adding to reports as it went along, so that you wouldn’t have to re-process the same data multiple times.
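
To make the single-pass idea concrete, here's a minimal sketch in Python rather than the COBOL or assembler of the era; the accounts, transactions and report layout are all invented for illustration:

    # Single-pass posting with built-in reporting (illustrative sketch only).
    accounts = {"1001": 500.00, "1002": 1200.00}
    transactions = [
        ("1001", "deposit", 250.00),
        ("1002", "check", -75.00),
        ("1001", "check", -40.00),
    ]

    report_lines = []
    totals = {"deposit": 0.0, "check": 0.0}

    for acct, kind, amount in transactions:
        accounts[acct] += amount   # the "recording" step: post to the master file
        totals[kind] += amount     # report counters updated in the same pass
        report_lines.append(f"{acct} {kind:8} {amount:10.2f} balance {accounts[acct]:10.2f}")

    report_lines.append(f"TOTALS  deposits {totals['deposit']:.2f}  checks {totals['check']:.2f}")
    print("\n".join(report_lines))

The point is that recording and reporting are one program touching the data once, which is exactly why early shops saw no reason to separate them.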

Report Writers

The “power tool” stage of automation depth had two major sub-stages. The first of these was the separation of reporting from transaction processing. Information access was now a key goal in itself, and was so important and done so frequently that specialized tools were built to make it easy, which is always the sign that you’re into the “power tool” phase.

This first generation of power tools was a set of specialized software packages generally called “report writers.” The power tool was directed at the programmer who had to create the report. Originally, the language that was used for transaction processing was also used for generating the report. The most frequent such language was COBOL. COBOL was cumbersome for this purpose, which is why specialized syntax was eventually added to it to ease the task of writing reports. But various clever people saw that by creating a whole new language and software environment, the process of writing reports could be tremendously enhanced and simplified. These people began to think in terms of reporting itself, so naturally they broke the problem into natural pieces: accessing the data you want to report on, processing it (select, sort, sum, etc.), and formatting it for output.
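
As a rough sketch of what those report-writer languages separated out, here's a toy version in Python in which the report itself is a small declarative spec and a generic engine does the access, processing and formatting; all the names and data are invented:

    # A toy "report writer": the report is described declaratively and a small
    # engine performs the access / process / format steps.
    rows = [
        {"region": "East", "product": "A", "sales": 120},
        {"region": "West", "product": "A", "sales": 80},
        {"region": "East", "product": "B", "sales": 200},
    ]

    report_spec = {
        "select": lambda r: r["sales"] > 50,   # which rows to include
        "group_by": "region",                  # how to summarize
        "sum": "sales",                        # what to total
    }

    def run_report(rows, spec):
        selected = [r for r in rows if spec["select"](r)]   # access + select
        totals = {}
        for r in selected:                                  # process: group and sum
            key = r[spec["group_by"]]
            totals[key] = totals.get(key, 0) + r[spec["sum"]]
        return "\n".join(f"{k:10} {v:8}" for k, v in sorted(totals.items()))  # format

    print(run_report(rows, report_spec))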

The result of this thinking was a whole industry that itself evolved over time, and played out in multiple environments and took multiple forms. The common denominator was that they were all software tools to enable programmers to produce reports more quickly and effectively than before, and were completely separate from the recorder or transaction processing function.

At the same time, data storage was evolving. The database management system emerged through several generations. This is not the place for that story, which is tangential to the automation depth of information access. What is relevant is that, as the industry generally recognized that information access had moved to the report writer stage of automation, effort was made to create a clean interface between data and the programs that accessed the data for various purposes.

Data Warehouse and OLAP

Report writers were (and are) important power tools – but they’re basically directed at programmers. But programmers are not the ultimate audience for most reports; most reports are for people charged with comprehending the business implications of what is on the report and taking appropriate action in response. And the business users proved to be perennially dissatisfied with the reports they were getting. There was too much information (making it hard to find the important things), not enough information, information organized in confusing ways (so that users would need to walk through multiple reports side-by-side), or information presented in boring ways that made it difficult to grasp the significance of what was on the page. And anytime you wanted something different, it was a big megillah – you’d have to get resources authorized, a programmer assigned, suffer through the work eventually getting done, and by then you’d have twice as many new things that needed getting done.

As a result of these problems, a second wave of power tools emerged, directed at the business user. These eventually were called OLAP tools. The business user (with varying levels of help from those annoying programmers) had his own power tool, giving him direct access to the information. Instead of static reports, you could click on something and find out more about it – right away! But with business users clicking, the underlying data management systems were getting killed, so before long the business users got their own copy of the data, a data warehouse system.

In a sign of things to come, the business users noticed that sometimes, they were just scanning the reports for items of significance, and that it wasn’t hard to spell out exactly what they cared about. So OLAP tools were enhanced to find and highlight items of special significance, for example sales regions where the latest sales trends were lower than projections by a certain margin. This evolved into a whole system of alerts.
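
A toy version of such an alert rule, with invented numbers, might look like this – flag any region whose latest sales trail projection by more than a chosen margin:

    projections = {"East": 100, "West": 150, "South": 90}
    latest_sales = {"East": 97, "West": 120, "South": 95}
    THRESHOLD = 0.10   # alert when sales trail projection by more than 10%

    for region, projected in projections.items():
        shortfall = (projected - latest_sales[region]) / projected
        if shortfall > THRESHOLD:
            print(f"ALERT: {region} is {shortfall:.0%} below projection")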

Predictive Analytics

OLAP tools are certainly power tools, but the trouble with power tools is that you need power users – people who know the business, can learn to use a versatile tool like OLAP effectively, and can generate actions from the information that help the business. So information access advanced to the final stage in our general pattern, the “robot” stage, in which human decision making is replaced by an automated system. In information access, that stage is often called “predictive analytics,” which is a kind of math modeling.

As areas of business management are better understood, it usually turns out that predictive analytics can do a better, quicker job of analyzing the data, finding the patterns, and generating the actionable decisions than a person ever could. A good example is home mortgage lending, where the vast majority of the decisions today are made using predictive analytics. Many years ago, a person who wanted a home mortgage would make an appointment with a loan officer at a local savings bank and request the loan. The officer would look at the applicant’s information and make a human judgment about their loan worthiness.

That “power user” system has long since been supplanted by the “robot” system of predictive analytics, where all the known data about any potential borrower is constantly tracked, and credit decisions about that person are made on the basis of the math whenever needed. No human judgment is involved, and in fact would only make the system worse.
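
As a deliberately tiny stand-in for that robot stage – not any real lender's model, just invented weights and a cutoff – the shape of it is a scoring function plus an automated decision with no human in the loop:

    def credit_score(borrower):
        return (
            0.5 * borrower["on_time_payment_rate"] * 100
            - 0.3 * borrower["debt_to_income"] * 100
            + 0.2 * min(borrower["years_of_history"], 20)
        )

    def decide(borrower, cutoff=30.0):
        return "approve" if credit_score(borrower) >= cutoff else "decline"

    applicant = {"on_time_payment_rate": 0.98, "debt_to_income": 0.35, "years_of_history": 12}
    print(decide(applicant))   # scores 40.9, so: approve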

Predictive analytics is the same in terms of information utilization as the prior stages, but the emphasis on presenting a powerful, flexible user interface to enable a power user to drive his way to information discovery is replaced by math models that are constantly tuned and updated by the new information that becomes available.

Sometimes the predictive analytics stage is held back because of a lack of vision or initiative on the part of the relevant industry leaders. However, a pre-condition for this approach really working is the availability of all the relevant data in suitable format. For example, while we tend to focus on the math for the automated mortgage loan processing, the math only works because it has access to a nationwide database containing everyone’s financial transactions over a period of many years. A power user with lots of experience, data and human judgment will beat any form of math with inadequate data; however, good math fueled with a comprehensive, relevant data set will beat the best human any time.

Conclusion

All these stages of automation co-exist today. One of the key rules of computing is that old programs rarely die; they just get layered on top of, given new names, and gradually fade into obscurity. There are still posting programs written in assembler language that have built-in reporting. In spite of years of market hype from the OLAP folks, report writing hasn’t gone away; in fact, some older report writers have interesting new interactive capabilities; OLAP and data warehouses are things that some organizations aspire to, while others couldn’t live without them; finally, there are important and growing pockets of business where the decisions are made by predictive analytics, and to produce pretty reports for decision-making purposes (as opposed to bragging about how well the predictive analytics are doing) would be malpractice.

Even though all these stages of automation co-exist in society as a whole, they rarely co-exist in a functional segment of business. Each stage of automation is much more powerful than the prior stage, and it provides tangible, overwhelming advantages to the groups that use it. Therefore, once a business function has advanced to use a new stage of information access automation, there is a “tipping point,” and it tends to become the new standard for doing things among organizations performing that function.

Posted by David B. Black on 12/13/2021 at 02:19 PM | Permalink | Comments (0)

Trusting Science: the Whole Milk Disaster

I trust science. The gradual emergence of science has led to a revolution in human existence that has happened so quickly and with such impact that it is hard to gain perspective on it.

Trusting science is not the same as trusting the pronouncements of people who are designated scientific experts. Establishing the truth of a scientific theory is an entirely different process than the social dynamics of rising to a position of leadership in a group of any kind. Official experts, whether government, corporate or academic, nearly always defend the current version of received truth against challenge of all kinds; most of those challenges are stupidity and ignorance. My go-to expert on this subject, as so many others, is Dilbert:

(Image: Dilbert cartoon on experts)

Sadly, those same establishment experts tend to be the strongest opponents of genuine innovation and scientific advances of all kinds. As I explain here, with examples from Feynman and the history of flight, one of the core elements of successful innovation is ignoring the official experts.

My skepticism is well proven in the case of so-called Computer Science, which doesn't even rise to the level of "useful computer practices" much less science. As I have shown extensively, Computer Science and Engineering is largely a collection of elaborate faith-based assertions without empirical foundation. And computers are all numbers and math! If it's so pathetic in such an objective field, imagine how bad it can get when complex biological systems are involved.

This brings us to the subject of saturated fat (solid fat of the kind that's in meat), whole milk and human nutrition. This ongoing scandal -- for which no one has been imprisoned, sued or even demoted, in spite of its leading to widespread obesity and other health-damaging conditions -- is still rolling along. The hard science concerning the supposed connection between saturated fat, cholesterol and heart disease is in. The results are clear. It is positively healthy for people to eat saturated fat. Period. The scandal is that the "expert" people and organizations that have declared saturated fat and cholesterol to be dangerously unhealthy for many decades refuse to admit their errors and continue to waffle on the subject.

This is relevant to computer science because of the stark differences between the two fields. Software is esoteric and invisible to nearly everyone, while the results of eating are tangible to everyone, and the statistics about the effects are visible and measurable. The common factor is ... people. In both cases there is a wide consensus of expert opinion about the right way to build and maintain software, and the right way to eat and live in order to be healthy. Experts! From blood-letting to flying machines, they lead the way!

Usually the Science-challengers are wrong

It has taken me a great deal of time to dig into this scandal, in part because there are so many cases of "the experts are all wrong -- me and my fringe group have the truth." I wanted to make absolutely sure "it's good to eat saturated fat" wasn't another of these. After all, the simple notion that eating fat makes you fat makes common sense!

An example of a harmfully bogus claim is the anti-vax movement, which has been supported by a number of famous people. The idea is that vaccinations in general and childhood vaccinations in particular have horrible consequences -- for example, causing autism in children. A study led by Dr. Andrew Wakefield was published in the British journal Lancet that claimed to prove the association. After years of growing fear and resistance to childhood MMR vaccines, the study was shown to be fatally flawed and corrupt, funded by trial attorneys who wanted to sue drug makers. Later claims about mercury-containing thimerosal in some vaccines continued to fuel the anti-vax cause. Also wrong. Here's a brief history.

The Scandal

Just as vaccinations are provably good things, surely the diet recommendations of the major medical and government institutions in favor of limiting fat must also be! Sadly, this is not the case. Rather, it's a wonderful example of how hard paradigm shifts are to accomplish, particularly when the prestige of major institutions is involved. And, sadly, of how prestige and baseless assertions have substituted for science, shockingly similar to bloodletting and other universally-accepted-on-no-objective-basis practices.

A basic, understandable summary of the subject may be found in The Big Fat Surprise, which is loaded with appropriate detail. Here is a summary:

"the past sixty years of low-fat nutrition advice has amounted to a vast uncontrolled experiment on the entire population, with disastrous consequences for our health.

For decades, we have been told that the best possible diet involves cutting back on fat, especially saturated fat, and that if we are not getting healthier or thinner it must be because we are not trying hard enough. But what if the low-fat diet is itself the problem? What if those exact foods we’ve been denying ourselves — the creamy cheeses, the sizzling steaks — are themselves the key to reversing the epidemics of obesity, diabetes, and heart disease?"

Yes, this sounds like what an anti-science crank would say. All I can say is, dig in. You'll find the shoddy beginnings of the fat-cholesterol-heart hypothesis; the biased studies that seemed to support it; the massive, multi-decade Framingham study which was trumpeted as supporting the anti-fat theory, but whose thoroughly confirmed and vetted results were actively suppressed for many years; the uncontested studies that disprove the anti-fat recommendations; and the improved understanding of the biological systems that thoroughly debunks the widely promoted campaign against saturated fat and LDL, the "bad" cholesterol.

More detail

If you want a start on more detail, I recommend Dr. Sebastian Rushworth at a high level and the recent book by long-term cardiac doctor Malcolm Kendrick that gives the details of the studies and biology that explain what really happens.

Here are a couple explanations from Dr. Rushworth:

"the LDL hypothesis basically says that heart disease happens because LDL somewhow ends up in the arterial wall, after which it is oxidized, which starts an inflammatory reaction that gradually leads to the hardening of arteries and eventually to bad things like heart attacks and strokes."

"... the LDL hypothesis is bunk. There is by now a wealth of evidence showing that LDL has little to do with heart disease, such as this systematic review from BMJ Evidence Based Medicine, which showed that there is no correlation whatsoever between the amount of LDL lowering induced by statins and other LDL lowering drugs, and the benefit seen on cardiovascular disease risk (if indeed any benefit is seen – it often isn’t)."

Rushworth's summary of the Kendrick book is:

"The ultra-short elevator pitch version of what he argues in the book is that heart disease is what happens when damage to the arterial wall occurs at a faster rate than repair can happen. That’s why everything from sickle cell disease to diabetes to high blood pressure to smoking to rheumatoid arthritis to cortisone treatment to the cancer drug Avastin increases the risk of cardiovascular disease – they all either increase the speed at which the arterial wall gets damaged or slow down its repair. It’s why heart disease (more correctly called “cardiovascular disease”) only affects arteries (which are high pressure systems) and not veins (which are low pressure systems), and why atherosclerosis (the hardening of the arteries that characterizes heart disease) primarily happens at locations where blood flow is extra turbulent, such as at bifurcations.

This alternative to the LDL hypothesis is known as the “thrombogenic hypothesis” of heart disease. It’s actually been around for a long time, first having been proposed by German pathologist Carl von Rokitansky in the 19th century. Von Rokitansky noted that atherosclerotic plaques bear a remarkable similarity to blood clots when analyzed in a microscope, and proposed that they were in fact blood clots in various stages of repair.

Unfortunately, at the time, von Rokitansky wasn’t able to explain how blood clots ended up inside the artery wall, and so the hypothesis floundered for a century and a half (which is a little bit ironic when you consider that no-one knows how LDL ends up inside the artery wall either, yet that hasn’t hindered the LDL hypothesis from becoming the dominant explanation for how heart disease happens). We now know the mechanism by which this happens: cells formed in the bone marrow, known as “endothelial progenitor cells”, circulate in the blood stream and form a new layer of endothelium on top of any clots that form on the artery wall after damage – thus the clot is incorporated in to the arterial wall.

In spite of the fact that probably at least 99% of cardiologists still believe in the LDL hypothesis, the thrombogenic hypothesis is actually supported far better by all the available evidence. While the LDL hypothesis cannot explain why any of the risk factors listed above increases the risk of heart disease, the thrombogenic hypothesis easily explains all of them."

Conclusion

Many major institutions have dialed down their fervent condemnation of the low-fat and LDL-is-bad myths, but haven't done what they should do, which is to reverse their positions and issue a mea culpa. They should at minimum take partial responsibility for the explosion of obesity, the useless pharma mega-dollars wasted, and the attendant health disasters for countless humans. The fact that they haven't helps us understand the resistance to correction of the similarly powerful mainstream myths about software. It's not about the LDL or the software; it's about people, pride, institutions, bureaucracy and entrenched practices and beliefs that fight change.

 

Posted by David B. Black on 12/06/2021 at 02:24 PM | Permalink | Comments (0)

Computer Science and Kuhn's Structure of Scientific Revolutions

If bridges fell down at anywhere close to the rate that software systems break and become unavailable, there would be mass revolt. Drivers would demand that bridge engineers make radical changes and improvements in bridge design and building. If criminals seized bridges and held vehicles for ransom anywhere near as often as criminals steal organizations' data or lock up their systems until a ransom is paid, there would be mass revolt. In the world of software, this indefensible state of affairs is what passes for normal! Isn't it time for change? Has something like this ever happened in other fields that we can learn from?

Yes. It's happened often enough that it's been studied -- including the process of resistance to change that persists until the overwhelming force of a new paradigm breaks through.

Thomas Kuhn was the author of a highly influential book published in 1962 called The Structure of Scientific Revolutions. He introduced the term “paradigm shift,” which is now a general idiom. Examining the history of science, he found that there were abrupt breaks. There would be a universally accepted approach to a scientific field that was challenged and then replaced with a revolutionary new approach. He made it clear that a paradigm shift wasn’t an important new discovery or addition – it was a whole conceptual framework that first challenged and then replaced the incumbent. An example is Ptolemaic astronomy in which the planets and stars revolved around the earth, replaced after long resistance by the Copernican revolution.

Computer Science is an established framework that reigns supreme in academia, government and corporations, including Big Tech. There are clear signs that it is as ready for a revolution as the Ptolemaic earth-centric paradigm was. Many aspects of the new paradigm have been established and proven in practice. Following the pattern of all scientific revolutions, there is massive establishment resistance, led by a combination of ignoring the issues and denying the problems.

The Structure of Scientific Revolutions

Thomas Kuhn received degrees in physics, up to a PhD from Harvard in 1949. He was into serious stuff, with a thesis called “The Cohesive Energy of Monovalent Metals as a Function of Their Atomic Quantum Defects.” Then he began exploring. As Wiki summarizes:

As he states in the first few pages of the preface to the second edition of The Structure of Scientific Revolutions, his three years of total academic freedom as a Harvard Junior Fellow were crucial in allowing him to switch from physics to the history and philosophy of science. He later taught a course in the history of science at Harvard from 1948 until 1956, at the suggestion of university president James Conant.

(Image: cover of the first edition of The Structure of Scientific Revolutions)
His path for coming to his realization is fascinating. I recommend reading the book to anyone interested in how science works and the history of science.

After studying the history of science, he realized that it isn't just incremental progress.

Kuhn challenged the then prevailing view of progress in science in which scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of conceptual continuity where there is cumulative progress, which Kuhn referred to as periods of "normal science", were interrupted by periods of revolutionary science. The discovery of "anomalies" during revolutions in science leads to new paradigms. New paradigms then ask new questions of old data, move beyond the mere "puzzle-solving" of the previous paradigm, change the rules of the game and the "map" directing new research.[1]

Real-life examples of this are fascinating. The example often given is the shift from "everything revolves around the earth" to "planets revolve around the sun." What's interesting here is that the planetary predictions of the Ptolemaic method were quite accurate. The shift to Copernicus (Sun-centric) didn't increase accuracy, and the calculations grew even more complicated. The world was not convinced! Kepler made a huge step forward with elliptical orbits instead of circles with epicycles and got better results that made more sense. The scientific community was coming around. Then, when Newton showed that Kepler's laws could be derived from his core laws of motion and gravity, the revolution won.

While the book doesn't emphasize this, it's worth pointing out that the Newtonian scientific paradigm "won" among a select group of numbers-oriented people. The public at large? No change.

Anomalies that drive change

One of the interesting things Kuhn describes is the set of factors that drive a paradigm shift in science -- anomalies, results that don't fit the existing theory. In most cases, anomalies are resolved within the paradigm and drive incremental change. When anomalies resist resolution, something else happens.

During the period of normal science, the failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher, contra Popper's falsifiability criterion. As anomalous results build up, science reaches a crisis, at which point a new paradigm, which subsumes the old results along with the anomalous results into one framework, is accepted. This is termed revolutionary science.

The strength of the existing paradigm is shown by the strong tendency to blame things on mistakes of the researcher -- or in the case of software, on failure to follow the proper procedures or to write the code well.

The Ruling Paradigm of Software and Computer Science

There is a reigning paradigm in software and Computer Science. As you would expect, the paradigm is almost never explicitly discussed. It has undergone some evolution over the last 50 years or so, but not as radical as some would have it.

At the beginning, computers were amazing new devices and people programmed them as best they could. Starting over 50 years ago, people began to notice that software took a long time and lots of effort to build and was frequently riddled with bugs. That's when the foundational aspects of the current paradigm were born and started to grow, continuing to this day:

  1. Languages should be designed and used to help programmers avoid making mistakes. Programs should be written in small pieces (objects, components, services, layers) that can be individually made bug-free.
  2. Best-in-class detailed procedures should be adapted from other fields to assure that the process from requirements through design, programming, quality assurance and release is standardized and delivers predictable results.

The ruling paradigm of software and computer science is embodied in textbooks, extensive highly detailed regulations, courses, certifications and an ever-evolving collection of organizational structures. Nearly everyone in the field unconsciously accepts it as reality.

Are There Anomalies that Threaten the Reigning Paradigm?

Yes. There are two kinds.

The first kind is the failures of delivery and quality that continue to plague by-the-book software development, in spite of decades of piling up the rules, regulations, methods and languages that are supposed to make software development reliable and predictable. The failures are mostly attributed to errors and omissions by the people doing the work -- if they had truly done things the right way, the problems would not have happened. At the same time, there is a regular flow of incremental "advances" in procedure and technology designed to prevent such problems. This is textbook Kuhn -- the defenders of the status quo attributing issues to human error.

The second kind of anomaly is the bodies of new software created by small teams of people who ignore the universally taught and prescribed methods and get things done that teams 100's of times larger couldn't do. Things like this shouldn't be possible. Teams that ignore the rules should fail -- but instead most of the winning teams are ones that did things the "wrong" way. This is shown by the frequency of new software products being created by such rule-ignoring small groups, rocketing to success and then being bought by the rule-following organizations, including Big Tech, who can't do it -- in spite of their giant budgets and paradigm-conforming methods. See this and this.

When will this never-ending stream of paradigm-breaking anomalies make a paradigm-shifting revolution take place in Computer Science? There is no way of knowing. I don't see it taking place any time soon.

Conclusion

The good news about the resistance of the current consensus in Computer Science and software practice to a paradigm shift is that it provides room for creative entrepreneurs to build new things that meet the unmet needs of the market. The entrepreneurs don't even have to go all-in on the new software paradigm! They just need to ignore enough of the bad old stuff and use enough of the good new stuff to get things done that the rule-followers are incapable of. Sadly, the good news doesn't apply to fields that are so outrageously highly regulated that the buyer insists on being able to audit compliance during the build process. Nonetheless, there is lots of open space for creative people to build and grow.

Posted by David B. Black on 11/29/2021 at 11:14 AM | Permalink | Comments (0)

The Dimension of Software Automation Breadth

Computers and software are all about automation. See this for the general principles of automation. When you dive into details and look at lots of examples, patterns emerge. The patterns amount to sequences or stages of automation that emerge over time, with remarkable consistency. When you apply your knowledge of the pattern to the software used in an industry at any given time, you can identify where the software is in the sequence. You can confirm the earlier stages in the sequence and predict with accuracy the stage that will follow. This gives you the ability to say what will happen, though not when or by whom.

The patterns of automation play out in multiple dimensions. I have described what may be called the depth of automation, in which software evolves from recording what people do through helping them and eventually to replacing them.

In this post I will describe another dimension, which may be thought of as the breadth of automation. The greater the automation breadth the more functions are incorporated into the automation software and the more highly integrated the functions are with each other.

The Dimension of automation breadth

Automation breadth has these basic levels, independent of the automation depth of the products involved:

Component

Not a stand-alone product, but a component that could be incorporated into many custom applications and/or products

Point product

Implements a single function

Product collection

A group of point products from a single vendor

Product suite

An integrated set of separate products, with meaningful benefits from the integration

Integrated product

A single body of source code that performs a variety of related functions that would otherwise require separate products, with meaningful benefits from the unification

Integrated product with selective outsourcing

A product that is written and delivered in such a way that the using organization can choose to have the vendor staff a number of functions off-site.

Components rarely appear first in historical terms, but they are the beginning of this sequence. Typically, functionality that is very difficult to write or whose requirements change rapidly is separated out and delivered in the form of a component. The quality of the component may become so important that it becomes an industry standard.

Example: The Ocrolus service (disclosure: Oak HC/FT is an investor) is a classic example of a valuable, narrow component. It takes images of documents of nearly any kind, recognizes them, extracts their data and returns the data to the component user. This functionality is so challenging to write, and the consequence of errors so great, that most applications that need the functionality are likely to use the component, which is delivered as a service, rather than write their own.
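
For a sense of how an application consumes such a component delivered as a service, here is a generic sketch. The endpoint, field names and key below are hypothetical placeholders of my own -- this is not the actual Ocrolus API, just the general shape such an integration often takes:

    # Generic sketch of calling a document-extraction component delivered as a service.
    # URL, fields and key are placeholders, NOT a real vendor's API.
    import requests

    EXTRACTION_URL = "https://extraction.example.com/v1/documents"   # placeholder
    API_KEY = "YOUR_API_KEY"                                         # placeholder

    def extract_document(path):
        with open(path, "rb") as f:
            resp = requests.post(
                EXTRACTION_URL,
                headers={"Authorization": f"Bearer {API_KEY}"},
                files={"document": f},
            )
        resp.raise_for_status()
        return resp.json()   # recognized document type plus extracted fields

    # fields = extract_document("statement.pdf")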

In a time of rapidly emerging functionality, point products are usually first, because they can be gotten to market quickly. Buyers typically talk in terms of “best of breed,” but have the problem of negotiating and maintaining relationships with multiple vendors, who frequently have conflicting interests. The buyer also has to take responsibility for integrating the various products he has bought. Buyers reasonably worry about whether their typically small vendor will stay in business and continue to invest in their product.

Example: Captura (now part of Concur) provided a point product to enable a company to automate the process of entering, approving and paying expense reports.

Product collections solve some of these problems. There is now a single vendor; the vendor is probably much larger and more likely to stay in business; the products will be maintained and there is no conflict of interest. Once a product collection becomes available in a category, buyers will typically prefer an adequate product from a collection to a superior point product. Product collections can be formed by acquisition.

Example: CA, Computer Associates, is a typical vendor that acquires products in a category and puts them together into a product collection.

It is often desirable to share data among the products in a collection. Frequently, they maintain copies of essentially the same information, have essentially the same security roles defined, etc. Users have to go through considerable time and effort to accomplish this on their own, and then again when there is a new release, and desirable integrations are sometimes not even possible. Product suites solve these problems to a large extent, since a single vendor performs the integration and sells the integration as part of the product collection. Once a product suite becomes available in a category, buyers will typically prefer an adequate product suite to a product collection whose products are superior, because the cost of installation and maintenance is lower and the benefits of integration often outweigh the benefits of individual product features. Building a product suite typically requires source-code level coding and functionality design changes, but the code bases of the products can be separate.

Classic example: Microsoft Office was one of the first product suites for office products. While the benefits of integration are not overwhelming in this functional area, users clearly benefit from having a suite rather than individual products. Once the benefits of a suite became accepted, buyers no longer wanted individual office products.

Recent example: The market for business formation has long been dominated by the point product Legalzoom. A new, rapidly growing product suite called Zenbusiness (disclosure: Oak HC/FT is an investor) is taking classic advantage of moving to the next stage of automation breadth, by offering small business customers not only business formation services, but also services for websites, banking and accounting.

Not all functionality areas benefit from having an integrated product, but for those that do, integrated products are better for vendors (reduced costs due to elimination of redundancy) and for buyers (a simpler, more unified product that is easier to learn and operate, and has deep, fully-automated function integration), and typically win in the marketplace.

Example: Everyone thinks of SAP’s R/3 as being the first client/server ERP product, but its real breakthrough was being the first truly integrated product on which a large enterprise could run its business. Using a single body of code and a common shared DBMS, its many modules each ran different parts of the enterprise business. In spite of the high cost of implementation and operation, the advantages of running your business on a single integrated product were overwhelming. Like most efforts to build a unified product, it required deep domain knowledge from the vendor, took a long time to get right, and had painful early installations.

Once functionality markets reach a level of maturity, the competition becomes intense, and organizations often have to choose areas of distinction, outsourcing the functions they choose not to compete in to a low-cost provider. Products that enable organizations to selectively outsource in this way typically take business from products that must be fully staffed by the buyer.

Example: FDC is the leading processor for credit cards in the US. With a billion cards outstanding, the market is huge and highly competitive. While the technology base of the FDC product is decades old, the functionality it delivers is highly sophisticated. In addition to delivering a completely integrated, multi-department product, FDC offers off-site staffing for the various departmental functions, where the staff sound on the telephone as though they work for you.

Conclusion

The dimension of automation breadth helps us understand the evolution of software and the progression that naturally takes place in a given market space. The companies that win are most often the ones that dominate a narrow market segment with a particular product and then broaden the range of software they can sell to a given organization. They typically move step by step according to the progression I have described here.

Posted by David B. Black on 11/23/2021 at 03:55 PM | Permalink | Comments (0)

The Destructive Impact of Software Standards and Regulations

In many fields of life, standards and regulations are a good thing. Standards are why you can get into a new car and be comfortable driving it; without standard steering wheels and brake pedals, no one would be able to drive a rental car. Software is different. The misguided effort to impose standards and regulations on software development has played a key role in the nonstop cybersecurity disasters and software failures that most organizations try to minimize and ignore.

Do software standards and regulations literally cause software and cybersecurity disasters? Yes, in much the same way as looking at your cell phone while driving causes auto accidents. Everyone agrees that distracted driving is bad, but somehow distracted programming is ignored. It’s a classic case of ignorance and good intentions that have horrible unintended consequences. Seeing the bad consequences, the community of standards writers believes the problem is that their standards aren’t deep, broad and detailed enough -- let's distract the programmers even more!

Software and Bicycles

Suppose that writing a software program were like riding a bicycle, and that finishing writing the program was like reaching your destination on the bicycle.

Even if you're not a bike rider, you’ve likely seen lots of bikes being ridden. You've probably seen kids learning on training wheels:

(Image: kids riding bikes with training wheels)

Kids eventually learn to balance and learn how to handle curves, hills and the rest. Then there are serious riders whose bodies are tuned to the task. They ride with focus and concentration, assuring that they handle every detail of the road they're on to maximum advantage.

(Image: serious cyclists riding with focus)

Because people who create standards and regulations for software are appallingly ignorant of what it takes to create code with maximum speed and quality, they impose all sorts of strange requirements on programmers. It's as though the bicycles were invisible to everyone but the programmers, but the leaders have it on best authority that the bicycles they demand their programmers use are up to the most modern standards.

(Image)

They create increasingly elaborate processes and methods that are supposed to assure that bicycles reach their destinations quickly and safely, but in fact assure the opposite. The programmers are required to juggle a myriad of meetings, planning discussions, reviews and other activities while still making great progress.

(Image: a rider juggling on a unicycle)

The oh-so-careful regulators go to great lengths to assure that at each stage of the bicycle’s journey the rider does his riding flawlessly and without so much as a swerve from the prescribed path. When developers are forced to follow regulations and standards, they don’t just pedal, quickly and smoothly, to the finish line. They constantly stop and engage in myriad non-programming activities. No sooner do they start to pick up speed after being allowed back on the bike than ring! ring! It's time for the security review meeting!

Riding a bicycle competitively is not the most intellectual of activities. Neither is being a batter in baseball. But both require exquisitely deep focus and concentration. The batter's head can't be swimming with advice about what to do when the pitch is a sinker; the batter has to be present in the moment and respond to the pitch in real time with the evolving information he gets as the pitch approaches the plate.

In the same way and arguably even more so, the programmer has to be immersed in the invisible evolving "world" of the software around him, seeing what should be added or changed, and where in that world. The total immersion in that world isn't something that can be flicked on with a switch -- though it can be brought crashing down in an instant by an interruption. In the same way, a biker doesn't get to maximum speed and focus in seconds. It takes time to sink into the flow.

Meanwhile, all the managers judge programmers primarily by how well they juggle, like the skilled fellow in the picture above, blissfully unaware and seemingly uncaring that the unicycle is better for stopping, starting and going in circles than it is for making forward progress.

Conclusion

This is the ABP (Anything But Programming) factor in software -- make sure the programmers spend their time and energy on things other than driving the program to its working, useful, high-quality goal. The managers feel great about themselves. They are following the latest standards, complying with all the regulations, and assuring that the programmers under their charge are doing exactly and only the right thing -- doing it right the first time. When such managers get together with their peers, they exchange notes about which aspects of the thousands and thousands of pages of standards they have forced their programmers to distract themselves with recently. Because that's what modern software managers do.

Posted by David B. Black on 11/17/2021 at 09:52 AM | Permalink | Comments (0)

The promising origins of object-oriented programming

The creation of the original system that evolved into the object-oriented programming (OOP) paradigm in design and languages was smart. It was a creative, effective way to think about what was at the time a hard problem, simulating systems. It’s important to appreciate good thinking, even if the later evolution of that good thinking and the huge increase in the scope of application of OOP has been problematic.

The evolution of good software ideas

Lots of good ideas pop up in software, though most of them amount to little but variations on a small number of themes.

One amazingly brilliant idea was Bitcoin. It solved a really hard problem in creative ways, shown in part by its incredible growth and widespread acceptance. See this for my appreciation of the virtues of Bitcoin, which remains high in spite of the various things that have come after it.

The early development of software languages was also smart, though not as creative IMHO as Bitcoin. The creation of assembler language made programming practical, and the creation of the early 3-GL’s led to a huge productivity boost. A couple of later language developments led to further boosts in productivity, though the use of those systems has gradually faded away for various reasons.

The origin of Object-Oriented Programming

Wikipedia has a reasonable description of the origins of OOP:

In 1962, Kristen Nygaard initiated a project for a simulation language at the Norwegian Computing Center, based on his previous use of the Monte Carlo simulation and his work to conceptualise real-world systems. Ole-Johan Dahl formally joined the project and the Simula programming language was designed to run on the Universal Automatic Computer (UNIVAC) 1107. Simula introduced important concepts that are today an essential part of object-oriented programming, such as class and object, inheritance, and dynamic binding.

Originally it was built as a pre-processor for Algol, but they built a compiler for it in 1966. It kept undergoing change.

They became preoccupied with putting into practice Tony Hoare's record class concept, which had been implemented in the free-form, English-like general-purpose simulation language SIMSCRIPT. They settled for a generalised process concept with record class properties, and a second layer of prefixes. Through prefixing a process could reference its predecessor and have additional properties. Simula thus introduced the class and subclass hierarchy, and the possibility of generating objects from these classes.  

Nygaard was well-rewarded for the invention of OOP, for which he is given most of the credit.

What was new about Nygaard’s OOP? Mostly, like Simscript, it provided a natural way to think about simulating real-world events and translating the simulation into software. As Wikipedia says:

The object-oriented Simula programming language was used mainly by researchers involved with physical modelling, such as models to study and improve the movement of ships and their content through cargo ports

In some (not all) simulation environments, thinking in terms of a set of separate actors that interact with each other is natural, and an object system fits it well. Making software that eases the path to expressing the solution to a problem is a clear winner. Simula was an advance in software for that class of applications, and deserves the credit it gets.
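
A tiny sketch in Python of that object style -- separate actors holding their own state and interacting -- using invented classes and numbers in the cargo-port spirit:

    class Ship:
        def __init__(self, name, containers):
            self.name = name
            self.containers = containers

    class Berth:
        def __init__(self, unload_rate):
            self.unload_rate = unload_rate   # containers per hour

        def unload(self, ship):
            hours = ship.containers / self.unload_rate
            ship.containers = 0
            return hours

    berth = Berth(unload_rate=30)
    for ship in [Ship("Alpha", 240), Ship("Beta", 90)]:
        print(f"{ship.name} unloaded in {berth.unload(ship):.1f} hours")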

What should have happened next

Simula was an effective way to hard-code a computer simulation of a physical system. The natural next step an experienced programmer would take would be to extract the description of the system being modeled and express it in easily editable metadata. This makes creating the simulation and making changes to it as easy as making a change to an Excel spreadsheet and clicking recalc. Here's the general idea of moving up the tree of abstraction. Here's an explanation and illustration of the power of moving concepts out of procedural code and into declarative metadata.

This is what good programmers do. They see all the hard-coded variations of general concepts in the early hard-coded simulation programs. They might start out by creating subroutines or classes to express the common things, but that still hard-codes the simulation. So smart programmers take the parameters out of the code, put them into editable files and make sure they're declarative. Metadata! There are always exceptions where you need a calculation or an if-then-else -- no problem, you just enhance the metadata so it can include "rules" and you're set.
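
Here's the cargo-port toy from above with the model pulled out of the code into declarative metadata (a plain dict here; it could just as well be a JSON or YAML file). Changing the simulation now means editing data, not rewriting the program. The structure is my own invention for illustration:

    model = {
        "berths": [{"name": "B1", "unload_rate": 30}],
        "ships":  [{"name": "Alpha", "containers": 240},
                   {"name": "Beta",  "containers": 90}],
    }

    def simulate(model):
        rate = model["berths"][0]["unload_rate"]   # single-berth sketch
        clock = 0.0
        for ship in model["ships"]:
            clock += ship["containers"] / rate
            print(f'{ship["name"]} done at hour {clock:.1f}')

    simulate(model)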

The Ultimate Solution

As you take your simulation problem and migrate it from hard-coded simulation to descriptive, declarative metadata, the notion of "objects" with inheritance begins to be a value-adding practicality ... in the metadata. You understand the system and its constraints increasingly well -- for example, ships in cargo ports.

Then you can take the crucial next step, which is worlds away from object-orientation but actually solves the underlying problem in a way simulation never can -- you build a constraint-based optimization model and solve it to make your cargo flows the best they could theoretically be!
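
A minimal stand-in for that next step, with invented numbers: instead of simulating one schedule, enumerate every ship-to-berth assignment and keep the one with the lowest total unloading time. A real model would hand this to an LP or constraint solver, but even brute force shows the shift from "simulate what we do" to "compute the best we could do":

    from itertools import permutations

    ships  = {"Alpha": 240, "Beta": 90, "Gamma": 150}   # containers to unload
    berths = {"B1": 30, "B2": 20, "B3": 25}             # containers per hour

    best = None
    for order in permutations(berths):                  # each ship gets one berth
        total = sum(containers / berths[b]
                    for (ship, containers), b in zip(ships.items(), order))
        if best is None or total < best[0]:
            best = (total, dict(zip(ships, order)))

    print(f"best assignment: {best[1]}  total hours: {best[0]:.1f}")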

People thinking simulation and programmers thinking objects will NEVER get there. They're locked in a paradigm that won't let them! This is the ultimate reason we should never be locked into OOP, and particularly locked into it by having the object-orientation concepts in the language itself, instead of employed when and as needed in a fully unconstrained language for instructions and data.

The later evolution of OOP

In spite of the power of moving from hard-coding to metadata, the language-obsessed elite of the computing world focused on the language itself and decided that its object orientation could be taken further. Simula’s object orientation ended up inspiring other languages and being one of the thought patterns that dominate software thinking. It led directly to Smalltalk, which was highly influential:

Smalltalk is also one of the most influential programming languages. Virtually all of the object-oriented languages that came after—Flavors, CLOS, Objective-C, Java, Python, Ruby, and many others—were influenced by Smalltalk. Smalltalk was also one of the most popular languages for agile software development methods, rapid application development (RAD) or prototyping, and software design patterns. The highly productive environment provided by Smalltalk platforms made them ideal for rapid, iterative development.

Lots of the promotional ideas associated with OOP grew along with Smalltalk, including the idea that objects were like Lego blocks, easily assembled into new programs with little work. It didn’t work out for Smalltalk, in spite of all the promotion and backing from influential players. The companies and the language itself dwindled and died.

It was another story altogether for the next generation of O-O languages, as Java became the language most associated with the internet as it grew in the 1990’s. I will tell more of the history later. For now I’ll just say that OOP is the paradigm most often taught in academia and generally in the industry.

Conclusion

OOP was invented by smart people who had a new problem to solve. Simula made things easier for modeling physical systems and was widely used for that purpose. As it was expanded and applied beyond the small category of problems for which it was invented, serious problems began to emerge that have never been solved. The problems are inherent in the very idea of OOP. The problems are deep and broad; the Pyramid of Doom is one of seemingly endless examples.

When OOP principles and languages were applied to databases and GUI’s, they failed utterly, to the extent that even the weight of elite experts couldn’t force their use instead of simple, effective approaches like RDBMS for database and javascript for GUI’s. OOP has evolved into a fascinating case study of the issues with Computer Science and the practice of software development, with complete acceptance in the mainstream along with widespread quiet dissent resulting from fraudulent claims of virtue.

Posted by David B. Black on 11/01/2021 at 10:36 AM | Permalink | Comments (0)

Deep-Seated Resistance to Software Innovation

Everyone says they're in favor of innovation. Some organizations promote innovation with lots of publicity. Many organizations even have CIO's (Chief Innovation Officers) to make sure it gets done. But the reality is that resistance to innovation runs strong and deep in organizations; the larger the organization, the greater the resistance usually is. The reason is simple: innovation threatens the power and position of people who have it. They feel they have nothing to gain and much to lose.

It's not just psychology. Innovation resistance throws up barriers that are thick and high. See this for examples.

A good way to understand the resistance is to look at sound technologies that have been proven in practice and could be more widely applied, but are ignored and/or actively resisted by the organizations that could benefit from them. I have called these In-Old-vations. Here is an innovation that is still waiting for its time in the sun, and here's one that's over 50 years old and is still being rolled out VERY slowly.

In this post I will illustrate the resistance to technology innovation with a little-known extreme example: the people in charge of Britain's war effort resisted innovations that could help them win the war. They were literally at war and losing, and decided, in effect, that they'd rather lose. Sounds ridiculous, I know, but this is normal behavior of people in organizations of all kinds.

The Battle of Britain

Britain was at war, facing Hitler’s much larger, better-prepared military, which had already rolled over its adversaries. Life and death. Literally. The established departments did all they could to defend against attacks. The so-called Battle of Britain is well-known. What is not as widely known is the battle in the seas. German submarines were sinking British ships at an alarming rate. The Navy had no answer other than to do what they were already doing, harder.

The situation was desperate. If there was ever a time to "think outside the box" it would seem this was it. The response of the Navy to new things? NO WAY. Amazing new weapons developed by uncertified people outside the normal departmental structures? NO WAY. Once those weapons are built and proven, use them to stop the submarines that were destroying boats and killing men by the thousands? NO WAY!!

Of course, you might think that someone would have known that the fairly recent great innovation in flying machines was achieved by "amateurs" flying in the face of the establishment and the acknowledged expert in flying as I describe here. You might think that Navy men would remember that perhaps the greatest innovation in naval history was invented by Navy-related people. But no. Protecting our power and the authority of our experts is FAR more important than a little thing like losing a war!

The story of the new way to fight submarines is told in this book:

(Image: book cover)

Someone who was not part of the Navy establishment invented a whole new approach to fighting submarines. The person wasn't a certified, official expert. He was rejected by all relevant authorities and experts. Fortunately for the survival of England, Churchill made sure the concept was implemented and tested. The new devices were delivered to a ship.

This all took time and it was not until the spring of 1943 that the first Hedgehogs were being installed on Royal Navy vessels. When Commander Reginald Whinney took command of the HMS Wanderer, he was told to expect the arrival of a highly secret piece of equipment. ‘At more or less the last minute, the bits and pieces for an ahead-throwing anti-submarine mortar codenamed “hedgehog” arrived.’ As Whinney watched it being unpacked on the Devonport quayside, he was struck by its bizarre shape. ‘How does this thing work, sir?’ he asked, ‘and when are we supposed to use it?’ He was met with a shrug. ‘You’ll get full instructions.' Whinney glanced over the Hedgehog’s twenty-four mortars and was ‘mildly suspicious’ of this contraption that had been delivered in an unmarked van coming from an anonymous country house in Buckinghamshire. He was not alone in his scepticism. Many Royal Navy captains were ‘used to weapons which fired with a resounding bang’, as one put it, and were ‘not readily impressed with the performance of a contact bomb which exploded only on striking an unseen target’. They preferred to stick with the tried and tested depth charge when attacking U-boats, even though it had a hit rate of less than one in ten. Jefferis’s technology was too smart to be believed.

Here's what the new mortars looked like:

(Image: Hedgehog anti-submarine mortar)

What happened? It was transformative:

Over the course of the next twelve days, Williamson achieved a record unbeaten in the history of naval warfare. He and his men sank a further five submarines, all destroyed by Hedgehogs. Each time.

If resistance to true technological innovation is so strong when you’re desperate, literally at death’s door, how do you think it’s going to be in everyday life? The rhetoric is that we all love innovation! The reality is that anything that threatens anybody or anything about the status quo is to be ignored, shoved to the side and left to die. Anyone who makes noise about it obviously isn’t a team player and should find someplace to work where they’ll be happier. And so on.

Conclusion

Innovation happens. Often nothing "new" needs to be invented -- "just" a pathway through the resistance to make it happen. Here is a description of the main patterns followed by successful innovations. If you have an innovation or want to innovate, you should be aware of the deep-seated resistance to innovation and rather than meeting it head-on, craft a way to make it happen without head-on war. Go for it!

Posted by David B. Black on 10/26/2021 at 08:46 AM | Permalink | Comments (0)

Franz Liszt: 210

This week marks the 210th anniversary of the birth of Franz Liszt on Oct 22, 1811. I wrote a couple of highlights about Liszt for his 200th anniversary. Aside from my love of his life's achievements, he was the inspiration for this blog.

[Image: Franz Liszt, 1870]

Liszt was one of those rare people who was world-class brilliant and also appreciated the valuable work of others, supporting that work in word and deed.

One way he demonstrated this was by championing the work of other composers: transcribing their works and performing them for audiences who might otherwise never have a chance to hear them. He made amazing transcriptions of operas and many other kinds of works. Probably his most impressive feat of transcription was making piano versions of all nine Beethoven symphonies!

Oddly enough, the symphony transcriptions were largely ignored in the world of music, even by Liszt's pupils. It wasn't until Glenn Gould recorded a couple of them in 1967 that they began to appear on the scene. But they remain obscure, even among classical music lovers.

I thought I knew all the Beethoven symphonies, but hearing a Liszt piano transcription performed was like hearing the music for the first time. After a great deal of struggle, Liszt finally managed to transcribe the fourth (choral) movement of the Ninth Symphony, and then did it again for two pianos.

Frederic Chiu has recorded the 5th and 7th Symphonies for Centaur Records. In his program notes he suggests that "Liszt's piano scores must therefore be taken as a sort of gospel in regards to Beethoven's intentions with the Symphonies" because of Liszt's unique perspective: he met Beethoven in person, heard collaborators and contemporaries of Beethoven perform the Symphonies, and studied and performed the works both as a pianist/transcriber and as a conductor in Weimar. No one in history could claim to have as much exposure, insight and journalistic integrity regarding Beethoven's intentions for the Symphonies.

I personally recommend the recordings by Konstantin Scherbakov. You will be listening to the unique inspiration of Beethoven, further enhanced by the incredible genius of Liszt.

Listening to these transcriptions can pull you out of normal physical space into a dimension filled with drama, beauty and inspiration, one that, unlike the austere, timeless beauty of math, drives through time from a start to a conclusion.

Posted by David B. Black on 10/18/2021 at 06:15 PM
