So a little while ago, I wrote a blog post about the 30th anniversary of the essay that popularised the term “singularity”, which proposed that within around 30 years we might experience an intelligence explosion of technology beyond the realms of human control and understanding. It turned out so long that I decided to cut it into two parts - part one can be found here.
The second half of this essay deals with alternatives to the singularity in case it cannot happen. Vinge reports on a workshop held by Thinking Machines Corporation as early as 1992 that investigated the question “How We Will Build a Machine that Thinks”. Perhaps surprisingly, there was general agreement that “minds could exist on nonbiological substrates and that algorithms are of central importance to the existence of minds”. This could be read both optimistically and pessimistically: since people have been seriously considering this question since the early 90s, we may be closer than we think - or the fact that we are yet to find a solution after all this time might make it a fool’s errand. He cites Moravec’s 1988 estimate “that we are 10 to 40 years away from hardware parity”, and notes that others (such as Roger Penrose and John Searle) were more skeptical of the idea. From there, he considers what would happen if the Singularity could not be achieved and hardware performance curves were to level off in the early 00s.
As of 2023 there are still hardware limitations, such as heat dissipation and quantum effects, which have been worked around by shifting towards parallel, multi-core processor designs. There has also been a move towards specialisation over generalisation in hardware - GPUs, originally designed for graphics rendering, have been repurposed for AI and machine learning thanks to their efficiency at parallel processing, and TPUs (tensor processing units) have been developed specifically for neural network computations. Quantum computing is still in its nascent stage, and the industry’s focus has broadened, especially towards energy efficiency and sustainability.
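To make the parallel-processing point concrete, here is a minimal sketch (assuming PyTorch and an optional CUDA-capable GPU; the matrix sizes are arbitrary) of the kind of workload GPUs were repurposed for:

```python
import torch

# A large matrix multiplication - the core operation of neural networks -
# decomposes into millions of independent multiply-accumulate operations,
# exactly the kind of work a GPU's thousands of cores can run in parallel.
a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)

# On the CPU, the multiplication is spread across a handful of cores.
c_cpu = a @ b

# If a CUDA GPU is available, moving the tensors onto it runs the same
# operation across thousands of simpler cores at once.
if torch.cuda.is_available():
    c_gpu = (a.cuda() @ b.cuda()).cpu()
    # The two results agree within floating-point tolerance.
    print(torch.allclose(c_cpu, c_gpu, rtol=1e-2, atol=1e-2))
```

The same pattern - huge numbers of independent arithmetic operations with little branching - is also why TPUs outperform general-purpose CPUs on neural network workloads.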
He accurately predicts that if a singularity were possible, then despite the warnings and threats that would arise, progress towards the goal would continue - much like the arms race we see today towards gaining more compute and building more capable LLMs:
Even if all the governments of the world were to understand the "threat" and be in deadly fear of it, progress toward the goal would continue. In fiction, there have been stories of laws passed forbidding the construction of "a machine in the likeness of a human mind" [12]. In fact, the competitive advantage -- economic, military, even artistic -- of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first.
We’re definitely seeing this play out, but also instances of co-operation and a willingness to build bridges between companies and countries in order to keep AI safe, aligned and contained. As LLMs grow within major corporations with access to huge amounts of assets, data and compute, I wonder if the world will come to be populated with distinct Minds like those in Iain M Banks’ Culture series.
Vinge also touches upon the concept of superhuman intelligence, specifically arguing against Eric Drexler’s idea that, if such entities were created, confining them physically or with rules would be sufficient. Vinge argues that confinement is “intrinsically impractical”: there is little reason to believe an entity that thinks a million times faster than us would stay confined, rather than find a way of escaping. For a fleshed-out example of how this could happen, Max Tegmark’s Life 3.0 outlines an in-depth case study of a hypothetical superintelligence and the multiple scenarios that could unfold. Vinge notes that a mind sped up in this way would only be a form of “weak superhumanity”; true superhumanity would most likely involve more than simply cranking up the clock speed of a human-equivalent brain.
He then references Isaac Asimov’s famous Three Laws of Robotics in the context of aligning robots or AI to protect humans, saying that the “Asimov dream is a wonderful one” - and one I would agree with, of course, if it could be done properly. The system is simple enough to work within a fictional framework, but in the complex, nuanced world and societies we live in today, there would be many scenarios that escape its safety net.
Vinge’s concept of an unrestrained Singularity is a rather harrowing one: even though the physical extinction of the human race is one possibility, it may not be the scariest. In fact, the analogy he uses has been echoed many times in the present day - that a superhuman intelligence could treat us in a similar way to how we have treated animals and nature in general: “again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet....” He then lists possible ways in which human intelligence or equivalent automation would still be relevant - “embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients”. He also posits the idea of a Society of Mind, similar to a hivemind - a large-scale extrapolation of how neurons fire and contribute to the running of a single human brain, except that each component would be as complex as a human mind while performing a specific task for the larger entity.
He mentions other paths to the Singularity, explaining that even if we cannot prevent it, we have the freedom to establish initial conditions - still an important topic now. He proposes that Intelligence Amplification (IA) would be a more straightforward route to the Singularity, and the logic behind this seems sound: "Building up from within ourselves ought to be easier than figuring out first what we really are and then building machines that are all of that.”
Human/computer teams are the main focus of this segment, pairing the intuition of humans with the efficiency of machines. Vinge gives the examples of chess and art, the latter of which we could argue is happening with generative art: the AI generates images in response to human prompts and subsequent feedback, allowing a collaboration of sorts towards a finished piece. Some of his proposals have been realised, although many are still ongoing. For example, his proposition of interfaces that humans can use from anywhere has been addressed by the plethora of wearables and connected devices, and human/computer networks have arrived in the form of collaboration tools like Skype and Zoom.
He also proposes developments in limb prosthetics, nerve-to-silicon transducers and advanced neural experiments, the latter including experimentation on animal embryos. Although we have made significant progress in limb prosthetics controlled by neural signals and BCIs (brain-computer interfaces), we still have not achieved direct nerve-to-silicon interfaces. As for the animal embryo experiments, ethical considerations mean this is an area that advances only cautiously.
Vinge sums up the essay by considering the long-lasting effects of the Singularity, not only if it were to happen, but if we were able to tailor it to our wishes. Immortality may be available, or at least a lifetime that rivals the universe’s. How could our minds, still harbouring generations of emotional baggage, handle such an expanding stretch of time? How might this increased bandwidth of networking affect our communication, our self-consciousness, our concept of ego? He finishes with this paragraph:
Which is the valid viewpoint? In fact, I think the new era is simply too different to fit into the classical frame of good and evil. That frame is based on the idea of isolated, immutable minds connected by tenuous, low-bandwidth links. But the post-Singularity world _does_ fit with the larger tradition of change and cooperation that started long ago (perhaps even before the rise of biological life). I think there _are_ notions of ethics that would apply in such an era. Research into IA and high-bandwidth communications should improve this understanding. I see just the glimmerings of this now, in Good's Meta-Golden Rule, perhaps in rules for distinguishing self from others on the basis of bandwidth of connection. And while mind and self will be vastly more labile than in the past, much of what we value (knowledge, memory, thought) need never be lost. I think Freeman Dyson has it right when he says [8]: "God is what mind becomes when it has passed beyond the scale of our comprehension."
In other words, the structures we have upheld in society could dissolve with these new stages of intelligence evolution; the boundaries of self, identity and world would be profoundly changed. I feel as though this is happening now - maybe not entirely in the way he imagined, but many of the points he makes (especially in terms of alignment and the race to control superhuman intelligence) still ring true today. Although it is not specifically about AI, this speech from Arthur C Clarke’s Childhood’s End always speaks profoundly to me about the scenario we may find ourselves in:
For what you have brought into the world may be utterly alien, it may share none of your desires or hopes, it may look upon your greatest achievements as childish toys - yet it is something wonderful, and you will have created it.