In this post I discuss three related concepts that are often treated jointly: (i) transhumanism; (ii) the singularity; and (iii) artificial general intelligence (AGI). I first introduce other people’s definitions in order to stipulate how I will use the first two of these terms in what follows. (If you are curious about my sources, feel free to follow the underlying links.)
Transhumanism is the position that human beings should be permitted to use technology to modify and enhance human cognition and bodily function, expanding abilities and capacities beyond current biological constraints. [emphasis added]
The idea of the singularity is that if the trajectory of artificial intelligence reaches systems with a human level of intelligence, then these systems would themselves be able to develop AI systems that surpass the human level of intelligence.
First, it should be immediately clear that one can pursue transhumanism without pursuing or reaching the singularity and vice versa. One could, for example, pursue enhanced bodily functions that are wholly irrelevant to intelligence. And one could reach the singularity through (so-called) AGI without achieving the enhancement of any human cognition.
However, many theorists have assumed that once the singularity is achieved via AI or AGI, one could then generate a form of transhumanism. Vernor Vinge (1993) recognized, however, that “there are other paths to superhumanity.” Among these, Vinge focused on Intelligence Amplification (IA), which involves all kinds of computer-human interfaces.
Second, if one spends some time reading the literature devoted to the singularity, one soon realizes that what counts as human ‘intelligence’ is often nothing like wisdom, but rather either a species of instrumental rationality or something like IQ. In addition, one often finds a slide from cognitive skills (plural) to a singular notion of intelligence (if not IQ then, say, something correlated with computational speed). In what follows, I pretend that we all know what ‘intelligence’ in all its beautiful heterogeneous variety is, and that we are not too in love with any single operationalization of it.
Third, in what follows, I stipulate without argument that the only transhumanism possibly worth defending is one rooted in the free choices of individuals. (This is sometimes known as ‘liberal eugenics.’) To be sure, whenever I read actual defenses of transhumanism — often these would-be-Nietzschean legislators of the future badly disguise their self-affirming elitism and/or racism — I tend to turn against the practice. But in the abstract, I am the kind of liberal who welcomes individuals and couples shaping an unknown, uncertain, and often unintended future for humanity through worldmaking agency. (Keep this last sentence in mind.)
So much for set-up.
The way I understand the singularity entails a kind of Kuhnian incommensurability between the before and after times. This idea goes back to a famous argument by Vernor Vinge (1993), who in describing the singularity wrote “we are entering a regime as radically different from our human past as we humans are from the lower animals.” Let’s call this radical alterity. As Vinge puts it, “From the human point of view this change will be a throwing away of all the previous rules.”
Not everyone understands or treats the singularity as involving a form of Kuhnian incommensurability. For example, in commenting on Vinge, Nick Bostrom suggests there are three features to what Vinge means by ‘singularity.’ First, what Bostrom calls ‘verticality’: the point at which “the speed of technological development becomes extremely great.” Second, “the creation of superhuman artificial intelligence,” or so-called “superintelligence.” And third, “a point in time beyond which we can predict nothing, except maybe what we can deduce directly from physics. (Unpredictability, aka ‘prediction horizon.’)” (The Transhumanist Reader, edited by Max More & Natasha Vita-More, p. 399) Crucially, while Bostrom endorses verticality and superintelligence as properties of the singularity, he argues against the likelihood of unpredictability.
Now, in the context of defending predictability, Bostrom grants what I have been calling radical alterity:
[W]e might still be incapable of intuitively understanding what it will be like to be living in this future. (What would the hyper-blissful experiences feel like that would exist if the universe were pleasure-maximized?) This is one sense in which posthumans could be as incomprehensible to us as we are to goldfish. (The Transhumanist Reader, edited by Max More & Natasha Vita-More, p. 401; see also p. 404)
So far so good.
Now, let’s stipulate that the radical alterity/incommensurability thesis that one ought to associate with the singularity is prospective in character. Let’s allow that the posthumans — be they augmented humans or super-intelligent machines — do get/understand the what-it’s-like of (ahh) un-augmented humans. So, the incommensurability is one-directional; it’s prospective only. (On some interpretations of Kuhnian incommensurability, this is all Kuhn had a right to claim.)
What I find odd about the transhumanist/singularity literature is that despite the diagnosis of prospective incommensurability/radical alterity immanent in it, so many theorists seem to assume that the singularity is worth achieving. But this assumption is wholly unearned if the singularity involves prospective incommensurability. It’s a feature of the singularity that we who aim to achieve it simply can’t endorse, from our present vantage point, the goodness of what might follow it. There is, thus, something wholly irrational about pursuing it as an aim; and something wholly bat-shit-crazy about pursuing it as an overriding goal in an arms-race environment.
As Carlo Cordasco helpfully pointed out to me in discussion last week, my argument echoes L.A. Paul’s framework of transformative experience. (For those familiar with my engagement with Paul’s material, this should come as no surprise.) In fact, the singularity involves an extreme form of transformative experience. What’s peculiar, then, about the varieties of transhumanism of which the singularity is the limit-case is that they involve aspirations whose goodness and worth are wholly unknowable to us absent experience of them or reliable testimony. This is the main point of today’s digression.
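One can put this schematically in a broadly Paulian, decision-theoretic idiom (the notation is my gloss, not Paul’s). Let p be the probability that pursuing the singularity brings it about, and let U(S) be the value, by our present lights, of the post-singularity world:

EU(pursue) = p · U(S) + (1 − p) · U(status quo)

Prospective incommensurability entails that U(S) is not merely uncertain but undefined from our present vantage point. So EU(pursue) cannot be evaluated at all, let alone compared favorably with the alternatives.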
The full significance of the radical alterity of the singularity is often self-masked: those engaged in discussions about the singularity or longtermist transhumanism hide it from themselves. For, rather than dwelling on the possibility of radical alterity, the discussion is frequently focused on so-called ‘value-alignment.’ In defending his post-singularity predictability thesis, Bostrom himself suggests this: since “the super-intelligences or posthumans that will govern the post-singularity world will be created by us, or might even be us, it seems that we should be in a position to influence what values they will have.” (The Transhumanist Reader, edited by Max More & Natasha Vita-More, p. 401.)*
Readers of my never-ending criticisms of MacAskill’s What We Owe the Future (first here; second here; third here; fourth here; fifth here; this post on a passage in Parfit here; and sixth here) may recall my lampooning of MacAskill’s framework. For, MacAskill assumes (i) that society is currently relatively plastic; (ii) that the future values of society can be shaped; (iii) that history exhibits a dynamic of “early plasticity, later rigidity” (p. 43). Such rigidity is what MacAskill calls "lock-in," and he is especially interested in (iv) avoiding premature "value lock-in." (See especially my third and sixth posts.)
By contrast, when it comes to the singularity, Bostrom does want to secure relative value lock-in. For, ideally, “the superintelligence will have the values of its creators.” (The Transhumanist Reader, edited by Max More & Natasha Vita-More, p. 401.) What’s attractive about this position is that if it were true, then we could know at least something important about the post-singularity world prospectively. And, perhaps, this would be sufficient to endorse the singularity as worth having from our perspective (and to undermine the argument of the present digression).
In practice, Bostrom endorses both radical alterity when it suits him (“a superintelligence would not necessarily have a human psychology”) and the commensurability between our values now and the ones that ought to be locked in at the arrival of the singularity: “of course it would not change its most fundamental values; it is a mistake to think that as soon as a being is sufficiently intelligent it will “revolt” and decide to pursue some “worthy” goal.” (p. 401)** This purported ‘mistake’ is (recall) known as an ‘existential risk.’ One person’s modus ponens is another’s modus tollens. So, if you deny radical alterity, I raise you one Yudkowsky.
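To make the dialectic explicit (the schema is mine, not Bostrom’s or Yudkowsky’s), both sides can be read as accepting the same conditional:

If radical alterity holds, then our fundamental values cannot be reliably locked in across the singularity.
Bostrom runs the tollens: value lock-in is feasible; therefore, no radical alterity (at least about values).
The Yudkowsky-style worry runs the ponens: radical alterity; therefore, no value lock-in (and hence existential risk).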
I am not the first to note that if one spends considerable time reading on the singularity, transhumanism, and existential risk, one can’t help but be struck by the strain of millenarianism, and the sense of being among the elect, that runs through this literature. I don’t mean this as criticism. Sometimes I quietly envy the participants’ sense of purpose, of living in highly meaningful times. Nevertheless, one is tempted to whisper, “What, if some day or night, a demon were to steal after you into your loneliest loneliness and say to you: This life, as you now live it and have lived it, you will have to live once more and innumerable times more…”
*Notice that Bostrom really can’t fathom the radical alterity thesis because he insists that posthumans “might even be us.”
**The strain of Nietzschean elitism is always present: “The plausible values are those that it seems fairly probable that many of the most influential people will have at about the time when the first superintelligence is created.” (p. 401) Why think — unless one is gripped by the gospel of success — that the influential people have the right values? Why think influence correlates with authority?
The theoretical issues surrounding transhumanism seem simpler if you assume that the feasible modifications aren’t genetic but are chosen by adults. As experience has already shown, if a pill or injection can improve performance in some direction (or just provide a desired experience), lots of people will take it, even if there are adverse side effects. And if there were a pill that reliably raised intelligence, I’d take it. But there isn’t.
This points to one of the big problems with this whole discussion: it is about hypotheticals that aren’t happening, at least not obviously. Recent AI is impressive, but partly by contrast with the stagnation/deterioration/ensh*ttification of existing tech, most notably Google.