Philosophy Bear recently published a post with the same title as above, an argument he makes in 23 points. I was disappointed to find that in his first point he specifies:
Moral realism has many senses, but call “practical moral realism” the view that there are moral consensuses for humans we can figure out and come to agree upon
This is the boring kind of moral realism! I wanted to know if we can test real moral realism. In point 22 he goes on to claim:
Practical moral realism, though it is a thesis in metaethics, has implications for normative ethics as well.
This is not true. If “practical moral realism” is true, you can still coherently hold any moral anti-realist position. You can say that morals are stance-dependent, but that everyone coincidentally happens to have the same “stance”. You can be an emotivist but believe humans are emotionally similar enough to each other to have the same emotional response to the same ethical dilemma. “Practical moral realism” is not a metaethical position. So let me write the post Mr. Bear did not:
Real moral realism is not an empirical question. Or is it?
Suppose tomorrow Sam Altman discovers the secret sauce that makes superintelligence (it was hidden behind the soy bean paste he bought once but never uses). He hails an Uber to the nearest data center and starts sprinkling the sauce over the GPUs. The OpenAI alignment team happens to be there with their measurement tapes to check that none of the GPUs are out of alignment. They yell “Sam! Not yet! We haven’t solved alignment yet!”. Sam answers them calmly: “Don’t worry guys/gals/gexs, I read some substack blog, turns out Moral Realism Is True!”. He empties the whole bottle and creates GPT-Tree(3).
Sam’s response might puzzle the alignment team, but only if they failed to think through the implications of moral realism.
Morality is about what we have reason to do — impartial reason, to be specific. These reasons are not dependent on our desires.
Anyone, or anything, that can deduce moral facts has a “reason” to comply with them. Even if they don’t want to. Humans are kinda dumb. But even they are able to figure out moral facts like “murder is wrong” and “don’t torture people”. Any superintelligent AI is by definition much smarter than we are. So it would figure out any moral fact we have, and more! Altman’s superintelligence might have very stupid desires, like predicting tokens. But we need not worry about it killing us. “Murder is wrong” is a moral fact. GPT-Tree(3) knows this fact. And it has a “reason” to act on that fact, even if it doesn’t desire to! After the AI figures out these facts, we can count on it acting morally going forward. It may occasionally still predict tokens, but only when morality allows for it.
If Altman’s scheme works, moral anti-realists will have to eat crow. But they also have a stance-independent reason not to eat animals (including crows), so they are in a real bind! Until they resolve that paradox they will at the very least have to admit they were wrong. A being with a completely different set of emotions and desires than they themselves have converged on the same moral facts as they did. This proves pretty thoroughly that moral facts aren’t dependent on any emotions or desires.
But what if the scheme fails? That means there really are no stance-independent moral facts to discover. There is simply no way that humans have the rational capacity to discover moral facts while GPT-Tree(3), which is much more intelligent than we are, does not. I would say that it’s the moral realists’ turn to eat crow. But there are two problems with that:
1. Some moral realists have an (evidently stance-dependent) moral objection to eating animals.
2. Everyone would be dead.
The latter point is also a reason not to copy this particular experimental design. But as a thought experiment it shows that moral realism is at least testable in theory, and our intuitions about how this experiment would play out shine a light on the plausibility of moral realism. Most people working in AI alignment have long subscribed to the Orthogonality Thesis, which says that intelligence and final goals are independent: an arbitrarily smart GPT-Tree(3) would not thereby figure out any stance-independent reason to act morally. The thesis hasn’t been tested yet. My own intuition is that it is true, and therefore that moral realism is false. But in the future we might test it, hopefully in a more controlled manner than the scenario above.
There's a philosopher at Sydney University, David Braddon-Mitchell, working on an argument against moral realism in a paper in progress (which he has talked about publicly) called "Immanuel Kant and the killer robots", where he argues that if superintelligence needs alignment, this is an argument against some (perhaps most) forms of strong moral realism, and inasmuch as moral realists accept that alignment isn't a done deal along with superintelligence, they implicitly reject their own moral realism.
I do think practical moral realism can be regarded as a realism for the reasons I outline in response to your comment.