Why Trying to Future-Proof Anything Will Fail
I recently decided to retire the paper copies of the hundreds of pages I wrote for trade magazines and academic publications over the last few decades. As I read through these articles on Computer Integrated Manufacturing, AI, mobile computing, and other topics, it became clear that any attempt to future-proof anything will fail.
In most of these articles, I was working with bleeding-edge technology from start-ups trying to change the world. Some of them did change the world, though not in the way their founders expected. But list after list, almost all of the products I reviewed no longer exist, and most of the companies that created them either went out of business or were acquired.
And then there are the advertisements in those old magazines, ranging from expensive pages in Byte Magazine to more niche ads in PC AI: most of the products touted in print don’t even warrant a Wikipedia entry. Some survive only in Internet archives like the Macintosh Repository.
The underlying technology also confounds memory, or more likely looks obsolete to those who never saw it. Pascal, Lisp, and Prolog as languages (along with Fortran on the Mac!). Articles about the second coming of DOS through Windows 3.0. And myriads of expert system shells.
People spent millions of dollars advertising implementations of now obsolete technologies. Writers and analysts like me spent hours exploring products and reviewing them. And IT departments and individuals spent millions of hours creating and deploying systems based on these tools.
My closest relationship with expert systems came through Neuron Data’s Nexpert Object. At Western Digital, I built the Surface Mount Assembly Reasoning Tool, or SMART, along with the now-defunct Bechtel AI Institute.
Nexpert Object ran $5,000 a copy and required a hardware dongle that bound the developer copy to a single computer. SMART programmed pick-and-place machines in a mixed-vendor environment, and it could write a program in an hour that would have taken a human manufacturing engineer more than a week. We developed SMART because manufacturing engineers wanted to engineer, not program the last thing they engineered.
SMART was very successful, running at Western Digital for a number of years. But eventually, after I left the company, our combination of C and Nexpert Object stopped being supportable. Neuron Data changed its development direction, abandoning much of its previous programming paradigm and adopting Java for development. Nexpert Object became Blaze. Blaze was acquired by Brokat, which was acquired by HNC, which then merged with Fair Isaac Corporation, which anyone with a credit score now knows as FICO.
I have no idea how much of Neuron Data’s Blaze still sits in the fraud detection code used by FICO, but its inference engine demonstrated a powerful way to program for detecting fraud. Neuron Data changed the world in a small way, but the tools built on its Nexpert Object platform no longer run.
Embedding intelligence in systems was supposed to be a way to future-proof work. As people left a company, their codified knowledge was retained in an expert system. As new knowledge was acquired, the expert system would be updated. That worked for a while in some cases; eventually, though, such systems became myopic, or the technology or process they were designed to support was upended to the point that the knowledge was no longer relevant.
Using the latest technology, designed specifically for a modern problem, failed to live up to its promises. Expert systems were supposed to reason like humans. Knowledge-based businesses were the next frontier.
Today, knowledge-based businesses are still at the forefront, but rather than meticulously crafted expert systems, modern systems use data to inform learning algorithms, replacing inference engines with pattern recognition. Both are incomplete models of human thinking, and machine learning, even deep learning, will find limits because it isn’t general enough to encompass the human experience. Even back in the day, inference engines, as taught in college computer science courses, consisted only of forward, backward, and mixed-mode chaining. Those models competed with others, like generalized blackboard systems and, yes, even neural networks, to offer a broader range of reasoning and representation.
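For readers who never used an expert system shell, a minimal sketch of forward chaining may help. The rules, fact names, and Python implementation below are invented for illustration; they are not drawn from Nexpert Object, SMART, or any other product mentioned here.

```python
# Minimal forward-chaining sketch. Rules and fact names are invented for
# illustration; this does not represent Nexpert Object, SMART, or any
# product discussed in this article.

RULES = [
    ({"board_has_fine_pitch_parts"}, "use_vision_alignment"),
    ({"use_vision_alignment", "mixed_vendor_line"}, "generate_per_machine_program"),
    ({"generate_per_machine_program"}, "schedule_engineer_review"),
]

def forward_chain(facts, rules):
    """Fire any rule whose conditions are all known facts; repeat until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"board_has_fine_pitch_parts", "mixed_vendor_line"}, RULES))
```

Backward chaining runs the same kind of rules in reverse, starting from a goal and searching for facts that would support it; mixed-mode combines the two.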
The elephant in the expert systems room was Cyc. Cyc, developed at the Microelectronics and Computer Technology Corporation (MCC), offered over twenty ways to reason over its logical assertions. The team even experimented with neural networks as a way for Cyc to “dream” about what it knew. Cyc continues on, but it has not found wide adoption.
Cyc’s approach, couched in the technology of expert systems and logic, appears out of touch with current theory. And despite trying to represent all common-sense knowledge, Cyc never reached that aim; the portion of it that still proves useful does so in limited domains. So far at least, Cyc is decidedly not future-proof.
Machine learning-based pattern recognition will persist, as will some of the logical foundations of expert systems. Because machine learning can develop knowledge from data, solutions require less human intervention, though data cleansing, bias control, and other factors keep ML from being a turnkey solution. Even if large chunks of future systems remain machine learning-based, some new, better models of machine-based learning will eventually evolve. Like logic, pattern recognition is only an aspect of intelligence; it is not even a major proportion of what brains do.
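To make the contrast with hand-built rule bases concrete, the sketch below, assuming scikit-learn is installed and using made-up features, labels, and data, induces the same sort of decision from examples rather than from rules an engineer wrote down.

```python
# The same kind of decision induced from labeled examples instead of hand-written rules.
# Features, labels, and data are made up for illustration; assumes scikit-learn is installed.
from sklearn.tree import DecisionTreeClassifier

# Each row: [has_fine_pitch_parts, mixed_vendor_line]; label 1 = needs a per-machine program.
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 0, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[1, 1], [0, 1]]))  # the "rules" now come from the data, not the engineer
```

The trade is the one described above: less hand-crafted knowledge, but new dependencies on the quality and coverage of the data.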
So, if we look back 30 years from today, some remnants will seem familiar. Kingston, for instance, in the advertising images carousel below, still sells memory, albeit very different memory than it sold in the ’80s and ’90s. That said, Kingston may still sell memory in 2052. But much of the language of computing, and the technology of computers, will have transformed in semantics, features, and functions.
Computer Ads from the 1990s
Only a couple of the products, technologies, or companies shown above still exist. Which do you remember? Which are still with us?
Apple now touts Swift development. Some of my articles cover expert systems in HyperCard. Virtual reality and augmented reality are starting to give way to the metaverse. RPA, or robotic process automation, feels kind of expert system-like. Blockchain struggles to find long-term relevancy. NFTs look like a passing fad while trust in cryptocurrency erodes.
By definition, 5G will be replaced by 2052, and any device using it will be unable to connect to the wireless standard of the time. The same is true of Wi-Fi 6. Cybersecurity will likely still be around, but the attacks and defenses will be very different. Object-oriented programming remains, but it isn’t the topic of the day. On-premises systems running COBOL, RPG, and Fortran gave way to cloud computing with Python, Ruby, and PHP. Quantum computing, if successful, will disrupt areas where it proves useful, perhaps killing off blockchain’s claims about safety through encryption.
Perhaps least future-proof of all were the print publications in which my words appeared, most of which no longer primarily publish on paper, if they publish at all.
If you read through my old articles, you will see I always wanted an iPad. I wrote about the Zaurus and the HP 95LX. My original 16GB iPad still sits on a shelf in a leather TwelveSouth case. My current 12.9″ M1-based iPad Pro remains in regular service, though as powerful as it is, it is not future-proof. I have already owned and retired six iPads. I live on my iPad more than I do my phone.
Many of these articles were written on a Macintosh Plus, upgraded by a third party to 512K of RAM. Later I wrote on a Mac II or a Mac IIci. Each device was more powerful than its predecessor. None of them were future-proof. The apps they ran, the languages used for development, and the underlying hardware: all obsolete, regardless of how flexible or powerful they were back in the day.
The only way to future-proof is to accept change. Maybe even wallow in it. Be willing to discard the obsolete and replace it. Want to appear prescient? Implement new things faster than others and keep doing so. Keep repeating. Costly, yes, but that is the price of prescience.
The “steady” firms will lock in for some time, running a set of apps on a set of hardware platforms for a few years, until even they realize that the “latest and greatest” has been disrupted by technological innovations and new business models.
Know that any technology investment, be it large or small, made today carries an expiration date. The best technologists update tech before it becomes a liability. And in one more nod to future-proofing’s false mythology: even the best anticipators of change can only work their magic for a few cycles before they find their future-proofing no longer effective.
That is why I love scenario planning so much. It is the only discipline that purposefully explores the future with the knowledge that most of the speculation and almost all of the near-term choices will prove irrelevant when placed in the context of deep time.
Despite its glorious images, the Hubble Space Telescope is falling prey to its own frailties and to NASA’s failure to adapt to the future of spaceflight fast enough to replace the shuttle. The James Webb Space Telescope is poised to give us new glorious images of its own, but even before it started its work, a micrometeoroid strike reminded its fans that it isn’t future-proof either.
For more serious insights on strategy, click here.
Cover photo: NASA, from a 1990s-era clip art CD.