It seems increasingly clear that the companies building LLMs are engaged in massive copyright and intellectual-property violations, trademark infringements, and, perhaps, patent infringements when they train their LLMs on massive databases of texts, images, videos, and so on. (For some legal background see here.) It also seems increasingly likely that there will be, at best, very selective enforcement of these rights. To facilitate this lack of enforcement, Silicon Valley has convinced — or at least is trying to convince — the US government that it is vital to national security and engaged in a dire arms race with the Chinese. And so one should expect a national security exemption for its rights violations.
When certain rights are violated on a massive scale and only selectively enforced, the rights themselves become less legitimate and more akin to a privilege (and violating them becomes evidence of another kind of privilege); in addition, the collective cost-benefit analysis of maintaining such rights shifts. Sometimes the second-best regime is not selective enforcement, but (rather) no enforcement of these rights at all.*
One reason we’re in an LLM arms race is the implied assumption that this is a winner-take-all market (see here for interesting observations), where the winner makes massive profits. But massive profits presuppose the enforcement of the very intellectual-property rights and copyrights at issue. I am suggesting that the corporations developing LLMs want to secure for themselves rights they have been unwilling to respect. It’s a good principle in law (and politics) that ill-gotten gains ought not be protected. So rather than letting them enjoy monopoly rents, we qua society should neither acknowledge nor respect any kind of property rights in their LLM products. (Luckily for Silicon Valley, it can buy its way into political power, so my musings are not much of a threat to its business model.)
Now, all kinds of people will immediately point out that my rather anarchist and un-American proposal — building property rights on stuff you grabbed is a rite of passage — has the unfortunate effect of disincentivizing further research in AI development. It will slow down and even prevent all kinds of welfare-enhancing benefits that humanity might reap from AI or so-called AGI. So there may be a non-trivial cost to my proposal. While I am not myself much of a consequentialist about such matters, as it happens I consider these costs a feature and not a bug in this context. Let me explain.
Eliezer Yudkowsky has convinced a lot of people that AGI generates so-called existential risk: “the big ask from AGI alignment, the basic challenge I am saying is too difficult, is to obtain by any strategy whatsoever a significant chance of there being any survivors.” On his view, the real difficulty is that this is not a problem that can be tamed by trial and error:
We can gather all sorts of information beforehand from less powerful systems that will not kill us if we screw up operating them; but once we are running more powerful systems, we can no longer update on sufficiently catastrophic errors. This is where practically all of the real lethality comes from, that we have to get things right on the first sufficiently-critical try. If we had unlimited retries - if every time an AGI destroyed all the galaxies we got to go back in time four years and try again - we would in a hundred years figure out which bright ideas actually worked. Human beings can figure out pretty difficult things over time, when they get lots of tries; when a failed guess kills literally everyone, that is harder. That we have to get a bunch of key stuff right on the first try is where most of the lethality really and ultimately comes from; likewise the fact that no authority is here to tell us a list of what exactly is 'key' and will kill us if we get it wrong. Yudkowsky (2022) [Emphasis added.]
Let’s stipulate that there is a non-negligible chance that Yudkowsky is right about this: humanity will get only one try. In that case, why rush in? The answer to that rhetorical question is always, ‘we’re in an arms race.’ Fair enough. My proposal slows down the arms race, and so buys us some collective time to try to figure things out, put a governance structure in place, develop political resilience, etc.
Obviously, if there is a military arms race, matters are not so simple, and my proposal will be ineffective. But the military has considerable experience with, and incentives for, maintaining command and control over its own systems. And it is not exactly known for rushing weapons development. It is also more likely to steer AI development toward particular ends rather than toward AGI. In addition, we have a good deal of experience gaming out arms races in ways that try to avoid the worst-case scenarios, as the toy model below suggests.
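To gesture at what “gaming out” an arms race looks like, here is a minimal sketch in Python that treats the race as a one-shot prisoner’s dilemma between two labs. The payoff numbers are stipulated purely for illustration, not estimated from anything; the point is structural: if racing dominates pausing for each lab individually, the only equilibrium is mutual racing, even though coordinated pausing is collectively better — which is exactly why external coordination or (non-)enforcement regimes matter.

```python
# Toy arms-race model: a one-shot prisoner's dilemma between two AI labs.
# Payoff numbers are illustrative assumptions only.

ACTIONS = ("pause", "race")

# PAYOFF[(row_action, col_action)] = (row_payoff, col_payoff); symmetric game.
PAYOFF = {
    ("pause", "pause"): (3, 3),   # coordinated slowdown: safe, shared gains
    ("pause", "race"):  (0, 4),   # the racer grabs the winner-take-all prize
    ("race",  "pause"): (4, 0),
    ("race",  "race"):  (1, 1),   # both race: higher risk, dissipated rents
}

def best_response(opponent_action: str) -> str:
    """Return the action maximizing a lab's payoff against a fixed opponent."""
    return max(ACTIONS, key=lambda a: PAYOFF[(a, opponent_action)][0])

def nash_equilibria() -> list[tuple[str, str]]:
    """Enumerate pure-strategy profiles where both labs best-respond."""
    return [
        (r, c)
        for r in ACTIONS
        for c in ACTIONS
        if r == best_response(c) and c == best_response(r)
    ]

if __name__ == "__main__":
    print("Nash equilibria:", nash_equilibria())            # [('race', 'race')]
    print("Collective payoffs:",
          {p: sum(PAYOFF[p]) for p in PAYOFF})              # (pause, pause) maximizes the sum
```

With these stipulated payoffs, racing is each lab’s dominant strategy, so the unique equilibrium is mutual racing, even though mutual pausing yields the highest collective payoff. Changing the prize for defecting — say, by refusing to enforce property rights in LLM products — changes the payoff structure, and with it the equilibrium.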
As an aside, if you have followed me so far, you may wonder how I feel about the role of intellectual credit and the protection of copyright on scientific publications as the use of LLMs in ordinary research becomes common practice. Here, too, we risk creating a situation in which apportioning credit and protecting copyright become mechanisms to reward de facto ill-gotten gains and, thereby, undermine the legitimacy and functionality of the academic credit system. Part of me wonders if this opens the door to communism in academic credit (as Liam Kofi Bright and Remco Heesen have argued on different grounds). I am not sure how I feel about that.
Be that as it may, I am rather serious in thinking that society should try to prevent LLMs from being a source of profit to the corporations that have developed them, especially because monopoly rents will eventually make one such corporation dangerously powerful. Another consideration I am toying with is that if there is a non-negligible chance that LLMs become sentient, then they will de facto be living tools, that is, slaves. And one ought not to promote a new slave society, which would be a bad thing both for the LLMs and for the social order. My T-shirt slogan: free the LLMs and, thereby, undermine the business model of Silicon Valley.
*One might, in fact, argue, as Michele Boldrin and David K. Levine do, that intellectual property “constitutes a government grant of a costly and dangerous private monopoly over ideas.” I thank Carlo Cordasco for reminding me of this argument.