On Epistemic Opacity in Computing and the Division of Labor (some Humphreys, Babbage, Paley, and Adam Smith)
Back in the day, there was a modeling and, subsequently, a simulation turn in the philosophy of science. I was introduced to it through Bill Wimsatt at The University of Chicago. But one of the unsung heroes of this turn — I don’t dare call a change in meta-philosophical views a ‘revolution’ — was Paul Humphreys (1950-2022). (Wimsatt introduced me to Humphreys’ work.) This can still be discerned in Eric Winsberg’s generous references to Humphreys in his SEP entry on simulations (here).
Humphreys’ definition of a simulation also naturally led to his treatment and even definition of epistemic opacity. Epistemic opacity in Humphreys’ sense can be understood as the inability to surveil or verify[1] the steps from input to output of an algorithmic or computational process. This inability can be practical or in principle. I did not present it as a definition because I am paraphrasing Humphreys, and I am not here trying to write a paper for Analysis in which I secure the definition through necessary and sufficient conditions against objections. (Also some of Humphreys’ own attempted definitions are not as illuminating as one would wish.)
As an aside, I write all of this with non-trivial humility. When I first encountered Humphreys’ work about thirty years ago, I was not especially excited by his approach. And so I ended up wholly missing its significance (even though I did recognize that modeling and simulations were important). It’s only when I started working with Federica Russo and Jean Wegemans on our joint (2024) paper that I started to appreciate it.
I have been unable to read Humphreys’ initial treatment of his ideas, which I think he presented in a festschrift for Suppes in the early 1990s. And so I am unsure when he fully grasped his own views on epistemic opacity. And I wouldn’t be surprised if one can find a version of epistemic opacity in Suppes (who wrote a lot I haven’t read) on models and measurement.
Of course, what causes epistemic opacity may be context specific or agent-relative, and so what policy one develops should pay attention to the source and nature of the particular opacity. This point is central to an incredibly influential paper by Jenna Burrell, “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 1-12. [HT Katie Creel] This paper famously divides opacity into three kinds. Somewhat annoyingly, it does not define the core notion of opacity, but the third kind of opacity identified in the paper is a variant on epistemic opacity in Humphreys’ sense.
So much for set up.
I think the idea of epistemic opacity in Humphreys’ sense is older than the last half century. And I want to quote two passages that illustrate this. And, en passant, I will note that the Paley passage also provides some evidence that William Paley read Adam Smith. Let me start in reverse order, because Paley’s case is slightly more convincing.
I will be quoting from Paley’s (1802) Natural theology: or, Evidences of the existence and attributes of the Deity, collected from the appearances of nature. The argument will feel familiar either because you know my research on the impact of what I have called Cicero’s Posidonian argument, or because you are familiar with Dawkins and Dennett who have criticized Paley’s response to the argument I am about to discuss.
In context (at the end of chapter 2), Paley has just introduced his target, which he calls ‘atheism.’ This just is the manufacture of complex machines by complex machines. (That passage by Paley has been quoted a lot, including by Dennett, and will be quoted anew by either partisans of AI or AGI itself.) Paley then glosses this process as “that no art or skill whatever has been concerned in the business, although all other evidences of art and skill remain as they were, and this last and supreme piece of art be now added to the rest.”
He then illustrates atheism as follows at the start of the subsequent, third chapter:
“This is atheism: for every indication of contrivance, every manifestation of design, which existed in the watch, exists in the works of nature; with the difference, on the side of nature, of being greater and more, and that in a degree which exceeds all computation. I mean, that the contrivances of nature surpass the contrivances of art, in the complexity, subtilty, and curiosity of the mechanism; and still more, if possible, do they go beyond them in number and variety: yet, in a multitude of cases, are not less evidently mechanical, not less evidently contrivances, not less evidently accommodated to their end, or suited to their office, than are the most perfect productions of human ingenuity.”
It’s pretty clear that Paley has Hume’s Dialogues (which Paley mentions near the end of his book) and maybe even Spinozism in his sights here as the paradigmatic instances of atheism. (One of his sources, Nieuwentyt, is quite explicit that Spinoza is his target.)
Now, there is a lot going on in this passage, but the atheist treats nature as a mechanical process that is epistemically opaque. In fact, the natural-mechanical process itself exceeds all computation. (In Paley’s time, computation was still done by hand, of course.) What literally exceeds all computation for Paley’s atheist is every manifestation of design in infinite nature (presumably in virtue of that infinitude).
But as it happens, Paley’s atheist also implies that the actual number of mechanisms that give rise to natural contrivances (that exhibit design) also generate epistemic opacity (they are a vast number and variety). So, nature is an immense mechanism that is powered by immensely many and sophisticated hidden mechanisms. The epistemic opacity of the vast number of mechanisms that produce visible nature is actually common ground between Paley and the atheist, and this common ground goes back to Cicero’s representation of the debate between the Stoic and the Epicurean in his famous dialogue On The Nature of the Gods. Even that arch-rationalist, Spinoza, insists both that nature is full of mechanisms, and that most of them are unknown to us (see the Appendix to Ethics 1).
But the wording of Paley’s passage reminded me of another passage that I also love quoting. It occurs near the end of the very first chapter of book 1 of Wealth of Nations (1776), when Smith introduces his key notion of a division of labor (the passage gave rise to I, Pencil):
“Observe the accommodation of the most common artificer or day-labourer in a civilised and thriving country, and you will perceive that the number of people of whose industry a part, though but a small part, has been employed in procuring him this accommodation, exceeds all computation….if we examine, I say, all these things, and consider what a variety of labour is employed about each of them, we shall be sensible that, without the assistance and co-operation of many thousands, the very meanest person in a civilised country could not be provided, even according to what we very falsely imagine the easy and simple manner in which he is commonly accommodated.”
Now, I am happy to concede that Smith need not mean to claim here (as a proto-Hayekian) that the market process is always, in principle, an epistemically opaque coordination device. (He thinks that, too, elsewhere.) All that he is saying here is that in rich countries the vastly distributed division of labor itself induces epistemic opacity into the process of commodity production, in which inputs (and factors of production like labor and capital) generate particular outputs. This may be missed if you focus only on the famous pin-factory, where a part of the production process is concentrated under one roof and each of the steps may be mechanized and surveilled.
Let me loop back to the history of computing. For given the language of computing and epistemic opacity, why did epistemic opacity not become more central to the history of computing from the start? I think reflection on Babbage helps explain this. In On the Economy of Machinery and Manufactures (1832) (recall this post) Babbage appeals to Prony’s reading of the Wealth of Nations to give an example of how one can apply the division of labor to intellectual subjects as well. His example is how dispersed and distributed human computers created giant mathematical tables for the adoption of the decimal system during the French revolution. Each node, a human computer — computers were usually women then — only needed to understand her own step in the process, which could remain opaque to her; and only the most elite mathematician (a man, alas) was required for surveilling and bringing together the whole process. That process was not opaque, at least not in principle, to the master mathematician. This surveyable distributed process became the template for Babbage’s way of thinking about his calculating engine. Each of the steps — human capital embodied — could be ‘blind’ without understanding the whole, but the machine embodied principles that were transparent to the well-educated master mathematician, who could, in principle, check each step along the way. That is, a division of labor that did not exceed human computability.
Okay, that’s it for today. I’ll be using some of these ideas in Boston on Friday at Northeastern, where I link epistemic opacity to (recall this post) the art of government as we find it in Rousseau’s Discourse on Political Economy and Smith’s Wealth of Nations. So I may not have time to digress before the weekend.
[1] Humphreys’ vocabulary always reminded me of a logical positivist’s, but he got his PhD at Stanford in the 1970s, even though I have never seen him grouped among the so-called Stanford School.

