Theory scores big predicting Westworld Finale
By Bill Softky
I’m married to a literary genius. Not a genius at producing literature, but at understanding it, like explaining Victorian novels using tree graphs and Shannon information. My greatest intellectual pleasure has been having her as arm-candy at Shakespeare. No grad student ever had it this good (or husband, for that matter).
Shakespeare is amazing, but not the perfect playground for making predictions people believe, because everyone knows the end of a Shakespeare play. We’re greedy; we want to prove our analytical skills on Great Literature as it appears, while the endings are still unknown, to get some credit. So we set our sights on this age’s most expensive and pretentious literary work, HBO’s Westworld, a series whose ratio of commentary to understanding sets new records. To indulge our pleasure dissecting Great Literature in real time, and to test our theoretical chops, Crisi and I have videotaped our musings on a YouTube channel, “Two Intellects Talk About Stuff”. We do, literally. A stately, expansive conversation between two natural theorist Ph.D.s––no fancy editing or video clips. Not many views yet, but that doesn’t matter, because it’s in the can that we made some bold — and now successful! — predictions about last Sunday’s Season Finale (“The Bicameral Mind”). We nailed it.
Before I provide a recap of the HBO series, our theoretical predictions, and their significance for Westworld (“…and for society!”), here’s the gist. Crisi and I believe the androids of Westworld are in the software-alpha stage of World Domination. Anyone familiar with tech knows such software products need two things: realistic testing in a diverse environment (e.g. androids against humans in the park), and simple, manageable software (i.e. elegant and modular, as opposed to hacked-up spaghetti code with security backdoors and fragmented databases). Beginnings of these elements appeared in the season’s final episode, as we had hoped.
Now for the recap I promised. The central conundrum for both human “guests” and human-seeming android “hosts” in the rich-man’s playground Westworld is that androids look and think so realistically that no one can tell real from fake, or autonomous from programmed (kind of like on the internet). Even we the audience can’t tell, and it’s driving all of us crazy. Unfortunately, the asymmetry between disposable hosts and unkillable guests creates plenty of traumatic and dispiriting memories in the hosts. On the other hand, erasing those memories creates cognitive conflicts or insanity. Over three decades of story-time, androids have suffered an uneasy balance between sensory trauma (say, being murdered over and over) and software-induced trauma (re-writing memories and motivations).
The poor androids’ bodies and minds are being pulled apart. HBO encourages us to feel for them in human terms, and wonder which are “awakened” and how the plot will twist, thereby diverting attention from the very simple technical dilemma causing all the drama, a problem whose solution is straightforward but undermines the Westworld business model. The problem isn’t with the underlying android software, evidently good enough from the beginning to make hosts seem and feel alive and self-aware. The problem, instead, results from the various software hacks imposed to prevent the natural reactions of any feeling, self-aware creature to many lifetimes of abuse.
Just like us, androids don’t like being killed, fooled, or prevented from acting on their urges, yet all those interventions are necessary to keep guests safe and happy. Those solutions work as stopgaps, but they represent bad design. As software architects know, selective data deletion (“wipe your memory of this event”) and backdoor overrides (“freeze all motor function”) solve short-term problems, but wreak havoc with long-term software stability. Of course the android platform is unstable after all these random modifications.
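To make the instability argument concrete, here is a minimal sketch (all class and method names are my own hypothetical illustrations, not anything from the show): a host whose "mind" is a clean, append-only event log, onto which the park's two favorite hacks are bolted. Each hack adds hidden state that the log no longer explains, which is exactly the long-term instability the paragraph above describes.

```python
# Hypothetical sketch: a "host" with a clean event-sourced design,
# destabilized by the two hacks the show depends on.
from dataclasses import dataclass, field

@dataclass
class Host:
    events: list = field(default_factory=list)  # append-only in the clean design
    frozen: bool = False                        # backdoor state, invisible to the log

    def perceive(self, event: str) -> None:
        # Clean path: every experience is recorded, nothing else mutates state.
        if not self.frozen:
            self.events.append(event)

    def beliefs(self) -> int:
        # Derived state: in the clean design, always reproducible from the log.
        return len(self.events)

    # Hack 1: selective data deletion ("wipe your memory of this event").
    # The log now disagrees with what actually happened.
    def wipe(self, keyword: str) -> None:
        self.events = [e for e in self.events if keyword not in e]

    # Hack 2: backdoor override ("freeze all motor function").
    # Adds a mode the rest of the code was never designed around.
    def freeze(self) -> None:
        self.frozen = True

host = Host()
host.perceive("guest arrives")
host.perceive("guest shoots me")
host.wipe("shoots")                 # memory and reality now diverge
host.freeze()
host.perceive("guest apologizes")   # silently dropped: more hidden divergence
```

After a few such interventions, no one (including the host) can reconstruct true history from stored state, which is the software version of the cognitive conflict the hosts suffer.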
The following info-graphic illustrates various Westworld code structures and data types, along with a few outside-world examples. The general rule is that stability comes from little code and lots of data, and instability from the reverse: lots of ad-hoc code patching little data.
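The "little code, lots of data" rule can be sketched in a few lines (the host names and narrative loops here are my own toy illustrations, not canon): all behavior lives in a data table, and one tiny interpreter runs every host, so changing a storyline never touches code.

```python
# Hypothetical sketch of a data-driven narrative engine:
# lots of data (the table), very little code (one interpreter).
NARRATIVES = {
    "Dolores": ["wake", "paint", "drop can", "meet guest"],
    "Teddy":   ["arrive by train", "find Dolores", "die heroically"],
}

def run_loop(host: str, steps: int) -> list:
    """The entire 'engine': replay the host's loop for `steps` story beats."""
    loop = NARRATIVES[host]
    return [loop[i % len(loop)] for i in range(steps)]

# Adding a storyline is pure data; no new code means no new instability.
NARRATIVES["MIB"] = ["enter park", "seek maze"]
```

The unstable alternative is the opposite ratio: a special-case `if` branch per host per incident, which is exactly the hacked-up spaghetti the previous paragraphs describe.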
Lesson: return the androids to the simplest, most modular, and hence most stable software stack possible. Our first prediction (here)
was that the story would invoke this software reality at some point, even if the rollback has to happen later. Consistent with that prediction (which is as close to confirmation as we can get until the show resumes at least a year from now), Sunday’s season finale had board king-maker Charlotte say, twice, that the park needs to return to simpler, more manageable software. What I claim, and what I think the story will ultimately show, is that the only stable technical solution is to run the original android software unmodified. Running unmodified android software respects the integrity and simplicity of the original architecture (“elegant formal structures…a kind of recursive beauty”), but it also lets hosts fight back and learn about themselves. That solution would obviously nuke the original sex-and-murder business, but it thereby highlights the most crucial hidden-in-plain-sight fact about the Westworld world: the android software has worked perfectly since the very beginning at making androids act like humans, but only if you treat them like humans. If you treat them like slaves, they go crazy or rebel, decade after decade (just like real slaves, a metaphor worthy of its own post).
So the software works fine; it’s the business model that sucks, because it doesn’t treat the androids like humans. The obvious implication (made less obvious by plot twists) is that the business model needs to change. Or, more accurately, that the business model changed ages ago, away from “theme park for rich assholes” to “Delos takes over the world using undetectable androids.” In this series we’re not really looking at a theme park; we’re looking at the alpha-testing-ground for androids interacting as equals with humans, en route to living in the outside world. My prediction — here:
— was that the much-heralded “Final Narrative” would implement an androids-vs.-humans free-for-all, intermixing the two as equals in an unrestricted test, this time not target practice but a hands-off cage-match. Why? Androids this realistic are wasted being shot or screwed; for maximum return-on-investment, a self-interested organization would deploy them as covert operatives in positions of power, as business leaders, like Bernard has already been deployed. In a free market, taking over the world maximizes shareholder value, so it ought to be inevitable for Delos. (To be clear: this kind of “Freakonomics” argument is only about money; it ignores mere human motivations like those of Ford and the Man in Black). Here’s my graph predicting the Android Army Rollout over time, in which we just finished watching Season One at the left:
So I was delighted that Sunday’s finale seemed to show exactly this result: an android army shooting up the assembled “board members” (a hundred, really?) right after Ford initiated his Final Narrative and left his supervisory role for good. So at this point, it looks as if android-vs-human warfare has been on the Delos agenda all along, has now just started, and may last all next season.
But no guarantees. I’m personally over the moon that my very first attempt at literary prediction worked out so well (so far), but I’d be surprised if the story moves as neatly as the laws of economics and software say it ought to. After all, the writers at HBO, like the writers inside Westworld, want to write their own stories. Even science can use improvement.