Scientific Models and Political Decision-Making (on Winsberg & Harvard)
Early in the Corona pandemic (March 2020), I co-authored a piece with Eric Winsberg, “Climate and coronavirus: the science is not the same,” in New Statesman, in a series edited by Aaron James Wendland. My motive for writing it was that I found there was a huge mismatch among public policy, the public justifications/rhetoric of that policy, and the apparent public health threat we faced. I suspect Winsberg, who is one of the leading philosophers of scientific modeling as such and of climate science in particular, felt the same, although our inclinations were quite opposite. It was my sense then that Winsberg would have naturally gravitated toward what became the Swedish policy, whereas I was sympathetic to what became the Australian/New Zealand approaches.
Anyway, our piece has held up remarkably well, and, if I may say so, was also rather wise given much else that was being written at the time. A few months later, teaming up with Neil Levy, we couldn’t get a major outlet to publish our follow-up (see here in The Conversation) despite an inside connection at the New York Times, a helpful introduction from a Nobel laureate economist at the Wall Street Journal, and a number of other high-profile venues. Our error: we didn’t play along with the entrenched, politicized framing that the moment seemed to demand. This piece, too, has held up remarkably well. In addition, Winsberg’s particular skepticism about the direction of policy has been amply vindicated.
I remembered all of this when I was reading Winsberg’s and Stephanie Harvard’s stimulating Cambridge Element, Scientific Models and Decision-Making. I am less familiar with Harvard (UBC), but since the pandemic started the two have co-authored a number of papers on how to think about public health modeling and values. I had bought their book because I want to write a follow-up to my piece (published with Nick Cowen) on novel externalities, in which we engage with political decisions in the context of fast science (see Stegenga). I assumed that the Element would reflect Winsberg’s evolving reactions to the pandemic.
It would be unfair to say that the work represents a score-settling on Covid-19 policy, but it does that, too. It gives a fantastic and crisp introduction to the philosophy of modeling (and the evolving terminology for analyzing it), and illustrates it with incredibly helpful examples (many drawn from climate science). It then applies the framework it develops to criticize Neil Ferguson’s ICL model and the rather disastrous role it played in driving pandemic policy. It very nicely shows — inter alia — how representational choices in modeling and the endorsement of facts involve non-trivial value commitments. It is, thus, also a lovely contribution to the values in science literature.
So much for set-up.
In the concluding sections Winsberg and Harvard discuss how to think about the responsibility of policy-salient modelers when it comes to the values internalized in their modeling choices. They discuss three approaches: (i) being transparent about the values that shape their modeling decisions; (ii) using ethically correct values; or (iii) appealing to publicly held values. They end up leaning toward the third. Given the space constraints of an Element, it should not surprise that the discussion is brief. But the challenge I want to point to in their analysis is not, I think, a mere effect of the brevity of their discussion.
For, throughout, they assume that there is a ‘public’ and that it has shared values. Even granting that there is a ‘public,’ why think it would agree on values? In a pluralist and polarized society this seems like a very optimistic (ahh) modeling decision. We can discern the role this commitment plays in their criticism of the first option. (Writing with Federica Russo and Jean Wagemans, I have expressed skepticism about the utility of transparency in AI ethics (here), so I am not defending the first option.) Here’s what they write (I have removed their references to the literature):
How can they ensure they are not imposing their idiosyncratic values on the public?
One proposal that we find in the general ‘values in science’ literature is that scientists should strive to make their own reasonable methodological decisions and then be transparent about what values guided those decisions… There are two considerations here that suggest this is unlikely to do the work it needs to do – to avoid imposing idiosyncratic values on the public. For example, the ICL group…are relatively ‘transparent’ about the fact that they chose the Hubei data set over the DP data set….But for ‘transparency’ to mitigate the problems discussed here, it should enable members of the public to figure out whether the choice the ICL group made is or isn’t the one they would have made, given their values. If the public can see that they would have made the same choice, then no idiosyncratic values risk being involved. If they can’t tell that, then the strong possibility exists that the ICL group is being value-laden in a way that the public would fundamentally object to, and that this fact remains hidden. Therefore, it is a criterion of success for the transparency proposal, that transparency leads to members of the public being able to tell if modellers are making choices that fail to accord with their values. We can call this the ‘congruence’ criterion. Winsberg & Harvard, pp. 59-60 [emphasis in original]
Notice that congruence presupposes not just a fairly unified public (the pluralism problem), but also a fairly skilled one (the competence problem). Even absent the fear and panic of a public health emergency, this is a tall order. One need not be an epistemocrat to see that ‘congruence’ is highly demanding. (Winsberg & Harvard agree, which is one reason why they reject the transparency approach.)
Now, in discussing why they reject the second approach, Winsberg & Harvard relax their assumptions a bit, which might make it seem that they can tackle the pluralism problem. They write, “If that’s right, then having scientists limit themselves to ethically permissible representational choices will underdetermine those choices and leave them open to making choices that do not reflect the values of the majority of the people on whose behalf decision-makers will be acting when they make use of the model.” (p. 61) Here the public is merely the majority of the people on whose behalf decision-makers will be acting.
Of course, in a genuinely pluralist society it is not obvious that even half the people really agree on values and their ranking. Moreover, even if the values of decision-makers are (miraculously) genuinely representative of those on whose behalf they will be acting, many more moral agents may be affected by their decisions. In epidemics, especially, political boundaries need not track the boundaries of those impacted by decisions. And this leaves aside the fact that elected representatives tend to balance not just values but also interests.*
Their own view, drawing on Anna Alexandrova and Mark Fabian (2021), is that “In modelling projects that aim to directly inform public policy, it seems to us that scientists have an obligation to make the ‘right’ choices in the sense of ‘right’ that means ‘in accord with publicly held values’.” (p. 61) Now, it’s important to recognize that the approach of Alexandrova and Fabian is focused on thick concepts and on models that have distinct stakeholders. In fact, they emphasize that they are primarily focused on “the production of a measure for a specific context.” And this allows them to tackle the pluralism, competence, and affected-parties problems at once. Alexandrova and Fabian emphasize this in their conclusion: “The conception of thriving we were able to articulate is more detailed and in line with what Alexandrova (2017) calls mid-level theories of wellbeing: theories geared to a specific group of people in a specific context, rather than the general homo sapiens.” One may add that there is also a shared goal here, and that time constraints are not especially limiting.
Let’s stipulate that Alexandrova and Fabian have given an existence proof for the co-production model. (I am myself not too worried by the fact that co-production must involve compromises at all levels, as they acknowledge; I do worry that the people guiding the consultation process have agenda-setting influence on the range of acceptable outcomes. This is a worry I also have about the many pilot studies of citizen panels in the study of deliberative democracy.) It doesn’t follow that their template is suitable for what Winsberg and Harvard want: “in fact, we think their basic idea needs to be extended far more widely to include representational decisions in modelling generally.” (p. 61)
Winsberg and Harvard recognize that this is challenging (including in the closing sentence of the book). But in explaining why, they focus on variants of the competence problem (p. 62). They don’t really offer the resources to tackle the pluralism problem. In my view this problem is inevitable once we recognize (and they offer multiple examples throughout the book) that a policy can affect multiple ends and heterogeneous populations at once. Drawing on their own work, they suggest: “What seems to be required are normative guidelines for public modelling projects that articulate how representational decisions should be made collaboratively between modellers and stakeholders.” (p. 62) Why think there will be agreement or reflective equilibrium on those guidelines in a pluralist and great society?
Let me wrap up with a wider diagnosis. Led by Heather Douglas, the values in science literature has been incredibly salutary in diagnosing all the ways values enter into the production (and even supply) of modeling in policy-salient sciences. It has been informed by a fine-grained understanding of scientific practice. This has made it highly salient in diagnosing where the authority of ‘science’ is being abused in real policy contexts (say, because it is not so disinterested, or is captured by particular world-views). However, it has drawn back from what we may call the precipice of the political. It has done so by either explicitly drawing on, or tacitly relying on, commitments common in democratic theory. As its critics — and these include feminists, Marxists, agonists, realists, Conservatives, and even some liberals — have noted, it works with a view of democracy that is de-politicized and/or unrealistically time-consuming.
It is an interesting sociological fact that ideas going back to Dewey and Habermas are so influential in the science and values literature. Without irony: this reflects the nobility of the enterprise. But there is no evidence that their conceptions of democracy and legitimacy are endorsed by the wider public as such. Even in climate policy (where the science is robustly well-attested), it has been impossible to avoid genuine politics. How to do values in science while being attentive to politics, but without letting political desiderata corrupt either science or values, is no small challenge.
*Early in the book, Winsberg and Harvard toy with the idea of unifying decision-making through formal decision theory, by representing decisions in terms of exact “probabilities and utilities.” But, drawing on the inarticulability thesis, they (quite rightly) reject this (p. 48 and section 4.7).