“The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.”
On the face of it, nothing in this paragraph is particularly shocking. With the rise of discussion and debate about the increasing capability of AI, the possibility of AGI (Artificial General Intelligence), and existential risk, it reads like one of the many opening paragraphs we’d see on social media.
Plot twist - this essay is from 1993.
Vernor Vinge, who is not only a prolific science fiction writer but a retired professor of computer science and mathematics, seems a perfect candidate to write this speculative essay: a future thinker grounded in the practicalities and reasoning that come with a STEM background. I had another look through the essay - I did a stream on Twitch discussing it and am planning to make a YouTube video on the topic in the future - and many of its points still resonate today.

This also needs to be made abundantly clear: when Vinge states that we are on the “edge of change”, he clarifies it by declaring “I believe that the creation of greater than human intelligence will occur during the next thirty years”. The abstract to the paper is similarly shocking: “within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” I’m taking the “human era” to mean the period in which humans actively shape the world with the power and control we wield now - so its end would mark the end of the Anthropocene. The most striking detail, however, is when this was written. 1993. In other words, he predicted this would occur around 2023. This year.
Your mileage, of course, may vary on this. With conspiracy theories that GPT-5 has already been achieved, claims that even GPT-4 shows sparks of sentience, and the “meme” posted by Sam Altman himself (CEO of OpenAI) that AGI had been achieved internally, it seems to be a matter of opinion whether the sparks of possibility are present, whether it is already here but lurking in the shadows, or whether it is simply a pipe dream. However, Vinge goes on to say “I’ll be surprised if this event occurs before 2005 or after 2030.” From his vintage vantage point in the early 90s, this seems like a scarily prescient statement.
So how does he describe this event? He lists three ways in which it could happen:
“There may be developed computers that are “awake” and superhumanly intelligent […] Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.”
This resonates with recent public discourse. Lately, arguments seem to skew towards the first point - developed computers “awakening” with a spark of sentience and/or superhuman intelligence - thanks to the rise of LLMs with increased compute and capabilities. Neuralink’s recent controversies with its animal test subjects could be one reason why BCIs (Brain-Computer Interfaces) have been discussed less as a viable option - but the plan of assimilating with AI in order to guarantee a foot in the existential door is still a compelling idea, I think. Human trials are apparently underway, so we may hear a lot more about this in the near future. At the moment, LLMs and the concept of AI agents appear to have taken the spotlight.
He also states: “Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace.” I think anyone from 1993 would be shocked but not surprised by how social media has shaped our minds: wireheaded with dopamine, instant gratification and extreme reactions. Even though I was only five years old at the time, I can imagine how much longer it took an idea to make a mass impact then compared with today. This statement of his bears that out: “when I began writing science fiction in the middle 60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months.” Cut to 2023, where in internet time days are equivalent to Earth years in terms of the shifts in collective awareness and consensus (for good or ill).
So, with this in mind, how did Vinge justify the speed of the Singularity back in 1993? He uses an analogy from our evolutionary past: natural selection is the way animals adapt to problems, whereas humans can internalise the world and run thought experiments - “what-ifs” - that allow for much faster problem solving. Once an intelligence can run such simulations at far higher speeds than we can, the distance between us and it will be similar to the distance between animals and us. We definitely see this argument today in defence of pausing AI: our track record with the natural world could be replicated in AI’s relationship with ourselves.
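To make the shape of that argument concrete, here is a toy sketch - entirely my own, not from Vinge’s essay - of the difference between testing one candidate per real-world step (blind trial-and-error) and simulating thousands of “what-ifs” per step. The objective function and mutation size are invented stand-ins.

```python
# Toy sketch (mine, not Vinge's): trial-and-error vs. an internal simulator.
# Both searchers get the same number of "real-world" steps; the simulator
# gets to test a thousand imagined candidates before committing to each one.
import random

def fitness(x: float) -> float:
    # An invented objective: a simple hill with its peak at x = 30.
    return -(x - 30) ** 2

def hill_climb(candidates_per_step: int, steps: int = 20) -> float:
    """Keep the best of `candidates_per_step` random mutations each step."""
    best = 0.0
    for _ in range(steps):
        trials = [best + random.gauss(0, 0.5) for _ in range(candidates_per_step)]
        best = max(trials + [best], key=fitness)
    return best

random.seed(0)
print(hill_climb(candidates_per_step=1))     # "natural selection": one trial per step
print(hill_climb(candidates_per_step=1000))  # "what-ifs": 1000 simulated trials per step
```

With the same twenty “real-world” steps, the one-trial searcher has barely left the starting point while the simulating searcher is sitting on the peak. Scale that differential up and you get Vinge’s gap between animals, us, and whatever comes next.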
I feel Vinge has our current situation spot on when he posits: “if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the ‘threat’ and be in deadly fear of it, progress toward the goal would continue […] in fact, the competitive advantage - economic, military, even artistic - of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first.” We see this now between countries and corporations as they race against the clock to produce more powerful LLMs and more potent agents, despite widespread public unease about this rate of acceleration. At the end of the day, no one wants to be left behind. It’s not a particularly surprising prediction; although I’m a little loath to use the term, it’s the well-known pattern of the arms race that we’ve come to expect - a pattern the toy example below makes explicit.
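Since that arms-race pattern is essentially a prisoner’s dilemma, here is a minimal sketch of it - my framing, with arbitrary payoff numbers, not anything from the essay:

```python
# Toy payoff matrix (my framing, not Vinge's) for why "pass laws forbidding
# it" fails: whatever the rival does, racing ahead looks better, so both
# race. Payoffs are (us, them); the numbers are arbitrary illustrations.
payoffs = {
    ("pause", "pause"): (3, 3),   # coordinated restraint
    ("pause", "race"):  (0, 5),   # we pause, they get there first
    ("race",  "pause"): (5, 0),   # we get there first
    ("race",  "race"):  (1, 1),   # mutual acceleration, shared risk
}

for their_move in ("pause", "race"):
    best = max(("pause", "race"), key=lambda ours: payoffs[(ours, their_move)][0])
    print(f"If they {their_move}, our best response is to {best}.")
# Prints "race" both times: racing is the dominant strategy, even though
# (pause, pause) leaves both sides better off than (race, race).
```

Racing dominates no matter what the rival chooses - exactly the “someone else will get them first” logic Vinge describes.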
Expanding on his inclusion of artistic advantage, he mentions that automation will replace higher and higher level jobs - work that is truly productive will become “the domain of a steadily smaller and more elite fraction of humanity.” He uses the example of comic book writers worrying about creating “spectacular events when everything visible can be produced by the technically commonplace” - which seems rather appropriate amidst the debates for and against generative AI art.
So, since this essay was longer than I expected, I’ll continue with a part two covering how he imagines a Singularity might not occur, how superhuman intelligences could be aligned, the consensus on whether human brain emulation is feasible, and steps towards the future. Stay tuned!