Alone With a Meta 4

This one is more personal essay than story. It is autobiographical and therefore unavoidably geeky in some respects. So with that forewarning, here it is:

Alone with a Meta 4
Don Thompson

In 1971 there were only two things that could get me out of bed at 3:30 in the morning: a surf trip up the coast and a chance to be alone with a Meta 4. What’s a surf trip, you may ask? Just kidding.

I was a Computer Science undergraduate at the University of California, San Diego, and, as shocking as it may seem, that kind of study wasn’t considered the coolest thing to do at the time. The UCSD campus was alive with Vietnam war protests, debates about the evils of capitalism, the threat of over-population, civil rights, and student revolt. Herbert Marcuse’s graduate student, Angela Davis, held rallies in the quad and my Sociology professor, with hair down to his waist and appropriately named Dr. Wild, established a commune in the surrounding eucalyptus groves. While I was aware of all this and in sympathy with some of it, I wasn’t politically mature enough or brave enough to be much of a participant. I stayed on the edge of the action, except to the extent that my sociology and psychology classes brought me into contact with some of the people and ideas of the day.

But whatever it might imply about my idealism (or lack of it), I found myself drifting away from the so-called soft sciences and into the hard ones. I made a move toward Electrical Engineering but was immediately distracted by an introductory Computer Science class using a language called ALGOL running on a massive Burroughs 6700 mainframe in a glass-enclosed, raised-floor, 3000 square foot air-conditioned room. Distracted, then completely and utterly captivated, I volunteered as a teaching assistant for the same class soon after that first exposure. There was something about the idea of personally injecting a little piece of intelligence into the soul of a machine that eventually made me trade mornings in the surf for mornings with a Meta 4.

But before I can explain that particular insanity, I need to fill in a bit of background for you. In the early 70s, there were still very few people in the world who actually interacted with computers, and at that point in the evolution of the field, the majority of this small population only did so at arm’s length. Software was created by writing cryptic statements on paper coding sheets which were handed to a keypunch operator who created cards with punched holes representing the characters on the original sheets. The resulting deck of cards was then handed back to the programmer who carefully checked them, filled out an accounting form, and submitted the cards across a counter to a computer operator who was a minor deity in the local computing pantheon. Then, sometime in the next several minutes, hours, or perhaps next Thursday, depending upon the priority of the programmer’s account, the behind-the-counter demigod loaded the deck into a card reader about the size of a refrigerator laid on its side and attempted to rescue the cards in their original order when they spewed out the other end.

Deck size mattered. It correlated strongly with the status of the programmer in this odd little society. Size implied complexity and complexity implied intelligence (often falsely). Soon it became more fashionable, more reliable, and a lot easier to carry around a 14-inch reel of 9-track magnetic tape instead of a large metal tray of cards. The tape could be updated with changes introduced by smaller decks of cards as the need arose. Even though a reel of tape made program size invisible to the casual onlooker, it lent an air of sophistication and maturity to its owner. It seemed to say, “Here is something important, maybe even novel, and definitely complex – I’ve just gotten to the point where I don’t need to flaunt it anymore.”

But back to the software development process, such as it was. Without going into the details of software tools like compilers, assemblers, and interpreters, let’s just say that programs must be automatically translated from human-readable form into more machine-specific forms before they can be executed by a computer. And the programmer’s original code must be completely unambiguous and correct. It must be syntactically correct to run at all, and it must be semantically correct to do what the programmer intended.
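
To make that distinction concrete, here is a toy illustration in a modern language (Python, rather than the ALGOL or assembler of that era); the function names and the little averaging task are invented purely for the example. Both versions are syntactically correct and will run, but only one is semantically correct.

```python
# A syntactically correct program is not necessarily a semantically correct one.
# Both functions below run without complaint; only one computes what was intended.

def average_correct(values):
    # Intended behavior: sum the values and divide by how many there are.
    return sum(values) / len(values)

def average_wrong(values):
    # Syntactically flawless, semantically wrong: off-by-one in the divisor.
    return sum(values) / (len(values) + 1)

if __name__ == "__main__":
    data = [2, 4, 6, 8]
    print(average_correct(data))  # 5.0 -- what the programmer meant
    print(average_wrong(data))    # 4.0 -- runs fine, silently wrong
```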

Novice programmers learning a new language often had to endure several embarrassing passes across the demigod’s counter, sometimes burning hours or days before their syntax became acceptable (i.e., perfect). An omitted semicolon or parenthesis could result in several cryptic error messages later in the code, none of which pointed directly to the original cause of the problem.

Having finally ironed out the syntax errors, programmers then faced the much deeper challenge of validating their code against their intentions. Did they really understand the problem they wanted to solve, and had they created something that actually solved it? Did the program have any unintended side-effects? Was it efficient enough? Did it work against all forms of data that the programmer expected to give it? Had all possible paths through the code been tested? In short, did this thing do what the programmer intended? Needless to say, this whole process could take a long time when a demigod stood between you and the machine you ultimately hoped to control.

So you might be able to imagine my elation (okay, please just try) when I learned that I would be gaining sole, uninterrupted access to a computer system, albeit a much smaller one occupying only about 150 square feet of floor space, for several hours each week. I had been lucky to serve as a teaching assistant for Jef Raskin, who years later would go on to lead the first Macintosh development team at Apple. At UCSD he had recently invented a simple programming language called FLOW for use in his computer-based visual arts classes. As a project for another class, I proposed to implement Raskin’s language from scratch, on a computer for which it wasn’t originally designed.

This computer was built by a local San Diego company called Digital Scientific and was dubbed the “Meta 4.” It was housed in a small lab on the fifth floor of the Applied Physics and Information Science building on the UCSD Revelle College campus, and I was given a key to the lab. Now this might not sound like a big deal, but from four to eight, three mornings a week, it freed me from the tyranny of the demigod, letting me directly control a machine and, to some extent, my own destiny. Alone with a Meta 4, I could create, test, fix, and re-create several iterations of my software each morning.

In a loose sense, the Meta 4 served as a metaphor for other machines because, as its name directly indicated, it was a meta-machine: a computer describing another computer. It was one of the first commercial computers specifically designed to emulate the instruction sets of other more widely used machines, and this particular one was configured to emulate an IBM 1130. So the code I wrote to build the FLOW interpreter was not the native Meta 4 microcode; it was IBM 1130 assembler code. This was a low-level language in which each symbolic instruction translated directly into a set of binary digits, ones and zeros, which represented the native machine language of the 1130.
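
For readers who have never met an emulator, the sketch below shows the general idea in miniature, assuming a made-up three-instruction machine rather than the IBM 1130’s real instruction set: host software fetches, decodes, and executes each instruction, much as the Meta 4’s microcode did for 1130 code.

```python
# A minimal sketch of instruction-set emulation, assuming a made-up
# three-instruction machine (not the IBM 1130's actual instruction set).
# Each "instruction" is fetched, decoded, and executed by host software.

def emulate(program):
    registers = {"ACC": 0}      # a single hypothetical accumulator
    pc = 0                      # program counter
    while pc < len(program):
        opcode, operand = program[pc]          # fetch
        if opcode == "LOAD":                   # decode and execute
            registers["ACC"] = operand
        elif opcode == "ADD":
            registers["ACC"] += operand
        elif opcode == "PRINT":
            print(registers["ACC"])
        else:
            raise ValueError(f"unknown opcode {opcode!r}")
        pc += 1                                # advance to the next instruction

if __name__ == "__main__":
    # "Machine code" for the toy machine: load 2, add 3, print the result (5).
    emulate([("LOAD", 2), ("ADD", 3), ("PRINT", None)])
```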

My goal was to allow people to interactively create programs, not in this cryptic code but in the much simpler, English-like FLOW language. They would type FLOW statements on the Meta 4’s console and my software would help them with their syntax as they went along, and then would interpret a collection of those statements to produce results. So, in effect, FLOW was being interpreted by my IBM 1130 assembler code which was being interpreted by the Meta 4’s own microcode, and finally executed by its unique hardware deeply hidden under three layers of digital metaphors.
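
The sketch below gives a rough sense of what such a line-at-a-time interpreter does, though in Python and with an invented toy syntax (numbered lines, PRINT, JUMP, STOP) that only loosely gestures at FLOW rather than reproducing Raskin’s actual language.

```python
# A minimal sketch of a line-at-a-time interpreter for a tiny FLOW-like
# toy language. The statement forms used here are illustrative only and
# are not Raskin's actual FLOW syntax.

import re

STATEMENT = re.compile(r'^(\d+)\s+(PRINT\s+".*"|JUMP\s+\d+|STOP)$')

def check_syntax(line):
    """Accept or reject a statement immediately, as an interactive system can."""
    return STATEMENT.match(line.strip()) is not None

def run(lines):
    """Interpret a collection of accepted statements."""
    program = {}
    for line in lines:
        number, body = line.strip().split(None, 1)
        program[int(number)] = body
    order = sorted(program)
    i = 0
    while i < len(order):
        body = program[order[i]]
        if body == "STOP":
            break
        elif body.startswith("PRINT"):
            print(body.split(None, 1)[1].strip('"'))
            i += 1
        elif body.startswith("JUMP"):
            i = order.index(int(body.split()[1]))

if __name__ == "__main__":
    source = ['10 PRINT "HELLO"', '20 JUMP 40', '30 PRINT "SKIPPED"', '40 STOP']
    assert all(check_syntax(s) for s in source)   # syntax help as you go
    run(source)                                   # prints HELLO, skips line 30, stops
```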

Today, the Apple Watch on my wrist is at least 10,000 times more powerful than that old Meta 4, and infinitely more useful. The languages used to implement the watch’s features are immeasurably more complex and capable than the forgotten little language called FLOW. The now-familiar metaphor of colorful icons on a desktop (or watch face), the illusion of multiple overlapping windows on a screen, the annoying advertising video embedded in a web page, the data analysis used to encourage your next online purchase, and the budding artificial intellects of Alexa, Siri and Cortana are each created by millions of unseen computer instructions – some operating on our personal devices and others quietly doing their real-time magic on machines located elsewhere in the world. The development, testing and debugging tools available to software engineers creating these wonders today would have seemed dream-like, if not flat-out miraculous, to anyone working in the early 70s.

Even so, in the decades since my mornings with the Meta 4, there have only been a handful of times when I’ve felt anything close to the empowerment and awe that greeted me each time I unlocked that lab at four in the morning. I had been alone with a quasi-intelligence pretending to be an IBM 1130 pretending to be a Jef Raskin FLOW machine. I could slog through all the raw, grungy, behind-the-scenes details but still wholeheartedly agree with Arthur C. Clarke’s statement that “any sufficiently advanced technology is indistinguishable from magic.”

One summer day in the year following my Meta 4 mornings, I found myself paddling out at Black’s Beach, gazing at the rippled surface of the sandy ocean floor as a clear, glassy swell passed beneath me. Looking up, I could see the next set of waves building just outside the last break and could imagine my next ride; but I could not foresee how computer science would utterly transform society in the years to come. I tried, but like most of us, my ponderings were much too linear and my extrapolations embarrassingly derivative. And they still are. We delude ourselves by thinking we’re in complete control of our creations when, more often than we might want to admit, we’re just along for the exhilarating but unpredictable ride.

For the foreseeable future (note major caveat above), we will still be the highest-level entities in this pile of abstractions we’re creating and will therefore retain at least some degree of control. But since the beginning of the digital revolution, we’ve been obsessed with the metaphorical aspects of this. As early as the 1950s, the popular press loved to refer to our new machines as “electronic brains,” and we’ve relentlessly anthropomorphized the evolving technology ever since.

The “easy” parts are coming along nicely. We’re creating truly useful robots to handle tasks we either don’t like or cannot do, and in some restricted forms of conversation, Alexa can now arguably pass the Turing Test. This test was suggested by Alan Turing in 1950 to provide one simple criterion for deciding whether a machine could “think.” If a person exchanging written messages with an entity hidden in another room could not reliably tell whether that entity was another human or a machine, then Turing declared that entity to be intelligent. More recently, several researchers have proposed various upgrades to the Turing Test, implying that AI technology has progressed enough to warrant more demanding criteria.

Certainly progress has been astounding. But the hard part, the holy grail, isn’t just artificial intelligence; it is artificial consciousness or machine self-awareness, and we seem driven toward that ultimate goal. Some say we’ll never get there, arguing from either a scientific or a spiritual standpoint. Some say we shouldn’t get there, again arguing from one or the other perspective. And underlying all arguments are centuries of debate about the true nature of consciousness and the mind.

Even if it turns out – as some theorists suggest – that consciousness is just an “emergent property” that naturally arises out of sufficient complexity, linear thinking tells us that our current technology isn’t likely to get us very far down that road. After all, despite phenomenal advances in the last few decades, there are physical limits to the number of transistors we can cram onto a chip as well as practical constraints on the kinds of algorithms that can take advantage of massive parallelism (multiple cores on a chip, multiple chips in a system, multiple systems in a network). But this is linear thinking.
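
One classic way to put a number on that constraint is Amdahl’s law: if a fraction p of a program can be parallelized across n processors, the best possible speedup is 1 / ((1 - p) + p / n). A back-of-the-envelope sketch:

```python
# Amdahl's law: with parallel fraction p and n processors,
# speedup = 1 / ((1 - p) + p / n). The serial portion quickly becomes
# the bottleneck no matter how many cores you add.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # A program that is 95% parallelizable tops out near 20x,
    # even with a thousand cores.
    for n in (4, 64, 1000):
        print(f"{n:>5} cores -> {amdahl_speedup(0.95, n):.1f}x speedup")
```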

If recent breakthroughs in quantum computing are any indication, we might be in for some seriously non-linear advancements in our ability to create enormously complex systems sometime in the next ten to twenty years. If this happens, we will be compelled to push the human-machine metaphor even further. It will be fascinating, exhilarating, and existentially risky.

When that time comes, will humanity still be the best metaphor to employ? Are humans really the appropriate model? Are we “good” enough in all the important ways? And if not, will our creations work to form a deeper synthesis with us, or will they simply think us unworthy of the effort?

In either case, we will no longer be alone when we unlock our labs at four in the morning. But how will it feel, or will we even take note, the first time we answer that door from the inside?