Dennett and Harari, The Intentional Stance, and the Road to Serfdom through AI faking Personhood
Part I
More than a half century ago, Dan Dennett re-introduced a kind of (as-if) teleological explanation into natural philosophy by coining, and articulating over the course of a few decades of refinement, the 'intentional stance' and its role in identifying so-called 'intentional systems,' which just are those entities to which ascription of the intentional stance is successful. (If you don't like my use of 'teleological' nothing hinges on it in what follows; it was just a hook.) Along the way, he gave different definitions of the intentional stance (and of what counts as success). But here I adopt the following (1985) one:
It is a familiar fact from the philosophy of science that prediction and explanation can come apart. Here's an example: before Newton offered a physics that showed how Kepler's laws hung together, lots of astronomers could marvelously predict eclipses of planetary moons based on inductive generalizations alone. How good were these predictions? They were so good that they generated the first really reliable measure or estimate of the speed of light. I mention this because it's important to see that the intentional stance isn't mere or brute instrumentalism. The stance presupposes both prediction and explanation as necessary conditions (that is, neither is sufficient without the other).
So far, I have treated the intentional stance as (i) an explanatory or epistemic tool that describes a set of strategies for analyzing other entities (including humans and other kinds of agents) in cognitive science and economics (one of Dennett's original examples).* But as the language of 'stance' suggests, and as Dennett's examples often reveal, the intentional stance also describes (ii) our own ordinary cognitive practice even when we are not doing science. In his 1971 article, Dennett reminds the reader that this is "easily overlooked." (p. 93) For Dennett, the difference between (i) and (ii) is actually one of degree (this is his Quineanism), but I find it useful to keep them clearly distinct (and when I do so, I will use 'intentional stance (i)' vs 'intentional stance (ii)').
Now, as Dennett already remarked in his original article, but as I only noticed after reading Rovane's (1994) "The Personal Stance" back in the day, there is something normative about the intentional stance because of the role of rationality in it (and, as Dennett describes, the nature of belief). And, in particular, it seems natural that when we adopt the intentional stance in our ordinary cognitive practice we tacitly or explicitly ascribe personhood to the intentional system. As Dennett puts it back in 1971, "Whatever else a person might be -- embodied mind or soul, self-conscious moral agent, "emergent" form of intelligence -- he is an Intentional system, and whatever follows just from being an Intentional system thus is true of a person." Let me dwell on a complication here.
That, in ordinary life, we are right to adopt the intentional stance(ii) toward others is due to the fact that we recognize them as persons, which is a moral and/or legal status. In fact, we sometimes adopt the intentional stance in virtue of this recognition even in high-stakes contexts (e.g., 'what would the comatose patient wish in this situation?'). That we do so may be the effect of Darwinian natural selection, and that it is generally a successful practice may also be the effect of such selection, as Dennett implies. But it does not automatically follow that when some entity is treated successfully as an intentional system it thereby is, or even should be, a person. Thus, whatever follows just from being an intentional system is true of a person, but (and this is the complication) it need not be the case that what is true of a person is true of an intentional system. So far so good.
A few weeks ago, Dennett published an alarmist essay ("Creating counterfeit digital people risks destroying our civilization") in The Atlantic that amplified concerns Yuval Noah Harari expressed in the Economist.** (If you are in a rush, feel free to skip to the next paragraph because what follows are three quasi-sociological remarks.) First, Dennett's piece is (sociologically) notable because in it he is scathing about the "AI community" (many of whom are his fanbase) and its leading corporations ("Google, OpenAI, and others"). Dennett's philosophy has not been known for leading one to a left-critical political economy, and neither has Harari's. Second, Dennett's piece is psychologically notable because it goes against his rather sunny disposition -- he is a former teacher of mine and a sufficiently regular acquaintance -- and the rather optimistic persona he has sketched of himself in his writings (recall this recent post); alarmism just isn't Dennett's shtick. Third, despite their prominence, neither Harari's nor Dennett's piece really reshaped the public discussion (in so far as there (still) is a public). And that's because they compete with the 'AGI induced extinction' meme, which, despite being a lot more far-fetched, is scarier (human extinction > fall of our civilization) and is much better funded and supported by powerful (rent-seeking) interests.
Here's Dennett's core claim(s):
You may ask, 'What does this have to do with the intentional stance?' Dennett writes, "Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future." This is a kind of (or at least partial) road to serfdom thesis.
For, at a high level of generality, a road to serfdom thesis holds (this is a definition I use in my work in political theory) that an outcome unintended by social decision-makers [here profit-making corporations and ambitious scientists] is foreseeable to the right kind of observer [e.g., Dennett, Harari], and that the outcome leads to a loss of political and economic freedom over the medium term. I use 'medium' here because the consequences tend to follow in a time frame within an ordinary human life, but generally longer than one or two years (which is the short run), and shorter than the centuries-long process covered by (say) the rise and fall of a previous civilization. (I call it a partial road to serfdom thesis because a crucial plank is missing--see below.)
Before I comment on Dennett's implied social theory, it is worth noting two things (and the second is rather more important): first, adopting the intentional stance(ii) is so (to borrow from Bill Wimsatt) entrenched in our cognitive practices that even those who could know better ("experts") will do so in cases where they may have grounds to avoid doing so. Second, Dennett recognizes that when we adopt the intentional stance(ii) we have a tendency to confer personhood on the other (recall the complication).
Of course, a student of history, or a reader of science fiction, will immediately recognize that this tendency to confer personhood on intentional systems can be highly attenuated. People and animals have been regularly treated as things and instruments. So, what Dennett really means, or ought to mean, is that we will encounter (or are already encountering) intentional systems designed (by corporations) to make it likely that we will automatically treat them as persons. Since Dennett is literally the expert on this, and has little incentive to mislead the rest of us on this very issue, it's worth taking him seriously, and it is rather unsettling that even powerful interests with a manifest self-interest in doing so are not.
Interestingly enough, in this sense the corporations who try to fool us are mimicking Darwinian natural selection because, as Dennett himself has emphasized about encounters with the robot Cog in the lab, we have a disposition to treat, say, even very rudimentary eyes following or staring at us as exhibiting agency and as inducing the intentional stance in us. Software and human-factors engineers have been taking advantage of this tendency all along to make our gadgets and tools 'user friendly.'
Now, it is worth pointing out that while digital environments are important to our civilization, they are not the whole of it. So, even in the worst-case scenario -- our digital environment is already polluted in the way Dennett worries -- you may think we still have some time to avoid conferring personhood on intentional systems in our physical environment and, thereby, also have time to partially cleanse our digital environment. Politicians still have to vote in person, and many other social transactions (marriage, winning the NBA) still require in-person attendance. This is not to deny that a striking number of transactions can be done virtually or digitally (not least in the financial sector), but in many of these cases we also have elaborate procedures (and sanctions) to prevent fraud, developed both by commercial parties and by civil society and government. This is a known arms race between identity thieves and societies.
This known arms race actually builds on the more fundamental fact that society itself is the original identity thief: generally, for all of us, its conventions and laws fix an identity where there previously was none, displace other (possible) identities, and, sometimes, take away or unsettle the identity 'we' wish to have kept. (And, here, too, there is a complex memetic arms race in which any token of a society is simultaneously the emergent property, while society (understood as a type) is the cause. [See David Haig's book, From Darwin to Derrida, for more on this insight.]) And, of course, identity-fluidity also has many social benefits (as we can learn from our students or from gender studies).
Now, at this point it is worth returning to the counterfeit money example that frames Dennett's argument. It is not obvious that counterfeit money harmed society. It did harm the sovereign, because it undermined a very important lever of power (and of sovereignty), namely the ability to insist that taxes are paid/levied in the very same currency/unit-system in which he paid salaries (and wrote IOUs) and other expenses. I don't mean to suggest there are no other harms (inflation and rewarding ingenious counterfeiters), but these were neither that big a deal nor the grounds for making counterfeiting a capital crime.
And, in fact, as sovereignty shifted to parliaments and the people at the start of the nineteenth century, the death penalty for forgery and counterfeiting currency was abolished (and the penalties reduced over time). I suspect this is also due to the realization that where systematic forgeries are successful they do meet a social need, and that a pluralist mass society itself is more robust than a sovereign who insists on full control over the mint. Dennett himself implicitly recognizes this, too, when he suggests that "strict liability laws, removing the need to prove either negligence or evil intent, would keep them on their toes." (This is already quite common in product liability and other areas of tort law around the world.)
I am not suggesting complacency about the risk identified by Harari and Dennett. As individuals, associations, corporations, and governments we do need to commit to developing tools that prevent and mitigate the risk from our own tendency to ascribe personhood to intentional systems designed to fool us. We are already partially habituated to do so with all our passwords, two-factor verification, ID cards, passport controls, etc.
In many ways, another real risk here, which is why I introduced the road to serfdom language, is that our fear will make us overshoot in risk mitigation; this, too, can undermine trust and the many other benefits of relatively open and (so partially) vulnerable networks and practices. So, it would be good if regulators and governments started the ordinary practice of eliciting expert testimony to craft well-designed laws right now, carefully calibrating them by attending both to the immediate risk from the profit-hungry AI community and to the long-term risk of creating a surveillance society in order to prevent ascribing personhood to the wrong intentional systems (think Blade Runner). For, crucially, in a (full) road to serfdom thesis, in order to ward off some unintended and undesirable consequences, decisions are taken along the way that tend to lock in a worse-than-intended and de facto bad political outcome.
I could stop here, because this is my main point. But Dennett's own alarmism is due to the fact that he thinks the public sphere (which ultimately has to support lawmakers) may already be so polluted that no action is possible. I quote again from The Atlantic:
As my regular readers know, I don't think our liberal democracy depends on the informed consent of the governed. That view conflates a highly idealized and normative view of democracy (one that may be associated with deliberative or republican theories) with reality. It's probably an impossible ideal in relatively large societies with a complex cognitive division of labor, including the (rather demanding) sciences. So, we have all kinds of imperfect, overlapping institutions and practices that correct for this (parties, press, interest groups, consumer associations, academics, and even government bureaucracies, etc.).
It doesn't follow that we should be complacent about the fact that many of the most economically and politically powerful people, corporations, and governments control our attention, which they already do a lot of the time. But this situation is not new; Lippmann and Stebbing diagnosed it about a century ago, and it is probably an intrinsic feature of many societies. Part of the hope is that a sufficient number of the most economically and politically powerful people, corporations, governments, and the rest of us are spooked into action and social mobilization by Harari and Dennett to create countervailing mechanisms (including laws) to mitigate our tendency to ascribe personhood to intentional systems. (Hence this post.)
There is an alternative approach: maybe we should treat all intentional systems as persons and redesign our political and social lives accordingly. Arguably some of the Oxford transhumanists and their financial and intellectual allies are betting on this, even if it leads to human extirpation in a successor civilization. Modern longtermism seems to be committed to the inference from intentional stance(i) to the ascription of personhood or moral worth. From their perspective, Dennett and Harari are fighting a rear-guard battle.
*Here's a fun exercise: read Dennett's 1971 "Intentional Systems" after you read Milton Friedman's "The Methodology of Positive Economics" (1953) and/or Armen Alchian's "Uncertainty, Evolution, and Economic Theory" (1950). (No, I am not saying that Dennett is the Chicago economist of philosophy!)
**Full disclosure, I read and modestly commented on Dennett's essay in draft.