Wednesday, October 15, 2025

The task of self-descriptive metaethics

Part I: Descriptive metaethics
 
The task of descriptive metaethics is to describe what it is that most people think they are doing when they make claims about what is good or bad, better or worse, right or wrong, good or evil, okay or not okay, fair or unfair, deserved or undeserved, justified or unjustified, reasonable or unreasonable, obligatory or prohibited, blameless or blameworthy, supererogatory or non-supererogatory, a duty, a right, a responsibility, a moral failure or success, a moral mistake, faulty or sound moral reasoning, a moral truth or falsity, a moral fact or opinion, a virtue or vice, or what someone should do or not do.
 
Humans often perform acts of moral protest, in some cases by literally protesting with picket signs, and in others by expressing disapproval through speech, behavior, or writing. The question is, what do folks take themselves to be doing when they engage in these acts of moral protest? What do folks take themselves to mean when they claim, for example, that Israeli forces were wrong—profoundly wrong—to kill Hind Rajab, her family, and the paramedics who tried to help her?
 
When making claims of moral protest, do folks in general see themselves as voicing a mere preference they have, similar to food or aesthetic preferences? Or do they see themselves as making truth claims about the way things really are? Or something else, or one in one context and the other in a different context?
 
Part II: Reacting to Lance Bush 
 
Lance Bush left a comment on a YouTube video (youtube.com/watch?v=vhhIzTM9yIY) in response to commenter Nathan. I will put his comments in red and insert my reactions in brackets:
 
Original comment by Nathan:
 
While of course most people have never explicitly considered claims that amount to moral realism, they do tend to think (1) moral mistakes are possible: "I used to think doing X was wrong, but I was mistaken about that," (2) that collective mistakes are possible--there's moral progress and decline, not just change, (3) that just because someone, or some group, disapproves of people doing some action X, that doesn't necessarily mean doing X is wrong, and (4) that there can be disagreements in ethics, with one side being mistaken, and with no good reasons for their [views]. Those are just a few things off the top of my head that at least suggest such people tend to be accepting something like realism, although of course not in those terms. That's the most "natural" interpretation of these observations.
 
Hi Nathan. I’m consistently frustrated when philosophers make claims about human psychology without appealing to or conducting the appropriate empirical research[Agreed.] . . . To my knowledge, you're not a social scientist and have not conducted or engaged in empirical research on how nonphilosophers think about moral realism, nor have you presented or appealed to any empirical evidence to support any of your claims. If you want to know how nonphilosophers think you should consult appropriate research rather than relying on speculation, personal anecdote, or whatever armchair rationale analytic philosophers typically rely on. [Agreed.]

At present, there is no compelling body of empirical evidence to support any of your claims. Worse still, they are all described in a way that would be insufficient as an operationalization of an indication of support for moral realism.

“(1) moral mistakes are possible: "I used to think doing X was wrong, but I was mistaken about that”

A belief in the possibility of moral mistakes is consistent with many forms of moral antirealism, so this would not be especially diagnostic of people being moral realists. 
[I would want to hear more on this. Spencer Case mentions in the video an example of attitudes of disgust. Eating grubs is gross to me, but I would not consider it a mistake for an aboriginal to eat grubs.] There’s also no compelling body of evidence showing most people think moral mistakes are possible. “Mistake” would also have to be operationalized. For instance, a relativist could believe it’s wrong to steal, and steal anyway, acting against their own moral standards. [I doubt it's possible for someone to act against their own moral standards. Acting, it seems to me, is an admission of acceptance of what you are doing. Even in the case of addiction, where people may do things they explicitly hate because they are addicted, the addiction itself is what makes the addictive act worth it to the addicted person, despite all the negatives that come with it.] Presumably they’d think they were making a moral mistake. [Presumably not, as evidenced by the fact that they acted in the way they did.] But that’s different than being mistaken about whether or not stealing is morally right or wrong in the first place. It’d be challenging to conduct research on this, and I don’t know of any such research. Even if there is some, there won’t be much. There’s no robust cross-cultural body of data showing people think moral mistakes of the relevant kind are possible. [The concept of 'mistake' seems objective to me, and nothing said here would convince me otherwise. I can imagine someone defining anti-realism, or one layer of anti-realism, as the view that there are only hypothetical imperatives of the form "If you want X, then you should Y." Failing to do Y would be a mistake with respect to the goal of X, and that mistake would be objective (in line with the objectivity of the concept of mistake), but this position would be anti-realist in the sense that it would not admit of a categorical imperative.]

“(2) that collective mistakes are possible--there's moral progress and decline, not just change”

First, progress and decline is consistent with many forms of antirealism. 
[Agreed, in the goal-relative sense noted above.] It’s even consistent with relativism: there is progress or decline relative to the moral standard in question. The concepts of progress and decline do not require, presuppose, or even hint at realism. People make progress on personal goals, or at becoming better artists, for instance. People can and do already employ the notion of progress and decline relative to self-chosen goals all the time. [Perhaps buried within the intuition that Nathan is getting at is the idea that some goals are obviously better than others. From an Islamic standpoint, more countries being ruled under Sharia would be progress, but from other standpoints it would be a decline. So you might think that progress with respect to an objectively better goal is objective progress, while progress with respect to an objectively lesser goal is objective decline. But this requires one to believe that some goals are objectively better than others, and I don't know of any empirical research suggesting that people in general believe that. That would be a great question to include in a metaethical survey.]

Second, there is no established body of empirical evidence that supports the claim that most people think moral progress in such a way that is most consistent with realism is possible. On the contrary, the best available study that specifically addresses the question of objective moral progress suggests most people reject this idea. Here is an excerpt from the abstract:

“Our results suggest that, neither abstractly nor concretely, people dominantly believe in the possibility of objective moral progress, knowledge and error. They attribute less objectivity to these phenomena than in the case of science and no more, or only slightly more, than in the cases of social conventions and personal preferences. This finding was obtained for a regular sample as well as for a sample of people who are particularly likely to be reflective and informed (philosophers and philosophy students). Our paper hence contributes to recent empirical challenges to the thesis that people believe in moral objectivity.”

Pölzler, T., Zijlstra, L., & Dijkstra, J. (2024). Moral progress, knowledge and error: Do people believe in moral objectivity?. Philosophical Psychology, 37(8), 2073-2109.

“(3) that just because someone, or some group, disapproves of people doing some action X, that doesn't necessarily mean doing X is wrong,”

This would at best only indicate whether people endorsed agent relativism, so this would not be a good way to ensure that people are not appraiser relativists nor any other form of antirealist. As such, rejecting this notion would not be a strong indicator that a person is a moral realist. I reject this, and I’m the most prolific antirealist there is.
[This is confusing. I think Lance is saying that he accepts the idea that just because someone, or some group, disapproves of people doing some action, that doesn't mean doing that action is wrong. But Lance is still an anti-realist. Probably, a more charitable interpretation of what Nathan is saying is that most people believe that individuals and groups can be wrong about what is wrong. But that's the same as point (1).] As such, this is not diagnostic of or strongly indicative of being a moral realist. Furthermore, this once again is an empirical question and at least some studies that use wording consistent with agent relativism not only find that many participants select these response options, but that variations of relativism were the most common response even after participants underwent training in familiarizing themselves with metaethics and were given comprehension checks and detailed response options. And this held up across seven different paradigms. Does this mean most people are moral relativists? No, but it is better than anecdotes and the self-reports of philosophers. See here:

Pölzler, T., & Wright, J. C. (2020). Anti-realist pluralism: A new approach to folk metaethics. Review of Philosophy and Psychology, 11(1), 53-82.

“(4) that there can be disagreements in ethics, with one side being mistaken, and with no good reasons for their reviews” 
[Again, a repeat of point (1).]

Again, you present no evidence that most people think this. The main paradigm in experimental metaethics is, incidentally, the disagreement paradigm. And it in fact finds very high rates of antirealist responses from participants, especially in newer, better-designed studies. Furthermore, disagreement is consistent with antirealism.

In addition, there is some empirical evidence suggesting folk notions of disagreement differ from the kind that would support the notion that they are implicit realists. See: 

Khoo, J., & Knobe, J. (2018). Moral disagreement and moral semantics. Noûs, 52(1), 109-143.

From the abstract: “We show that there are moral conflict cases in which people are inclined to say both (a) that the two speakers disagree and (b) that it is not the case [that] at least one of them must be saying something incorrect.”
[The grubs illustration comes to mind again. Consider the claim, "Grubs are delicious." I disagree with this claim. There's an aboriginal out there who agrees with this claim. Do I think the aboriginal is mistaken in his affirmation of this claim? Nope. But when we remove ambiguity, things become clearer: "Grubs are delicious to everyone." Okay, now this claim is clearly false, and now affirming it would be to make a mistake. So consider: "Killing Hind Rajab, her family, and the paramedics who tried to help her is wrong to everyone." Again, we arrive at a clearly false claim, as the killing was not wrong according to the Israeli soldiers. So we must remove enough ambiguity to see what makes the claim true or false. Something like: the goal of maximizing goodness is the best goal, and the killing of Hind Rajab, her family, and the paramedics who tried to help her fails to further this goal. So the Israeli soldiers furthered an objectively worse goal, which demonstrates a failure to understand the goodness and badness of things.]

In short, not a single one of your four examples, even if it was true, would be good evidence that people are realists, since all such responses are consistent with antirealism. Second, whether most people think anything in particular is an empirical question and you present no empirical evidence to support your claims. At present, there is no good evidence most people are moral realists. 
 
Part III: Self-descriptive metaethics  
 
The Pölzler and Wright (2020) study says the following:
 
"To begin with, we explain the difference between what we call “normative” and “meta-ethical” sentences about morality:
 
Normative sentences about morality express moral judgments. In uttering these sentences we evaluate something morally; we indicate that we regard something as morally right or wrong, good or bad, virtuous or vicious, and so on.
[…]
Meta-ethical sentences about morality do not express moral judgments. In uttering them we remain evaluatively neutral. Instead, we are making claims about the nature of morality itself." (59)
 
"Following these explanations, we test and improve participants’ understanding of the normative/metaethical distinction by two comprehension checks: (1) a theoretical question about what they have just read (right answer: “Normative sentences express moral judgments and meta-ethical sentences make claims about the nature of morality itself”) . . ." (59) 
 
Taking 'judgment' to mean something like a conclusion reached after a process of reasoning, I don't believe that when I describe something as good or bad I am making a normative judgment.
 
I am not making a judgment at all, much less a normative judgment, when reporting on the goodness or badness of my directly experienced intrinsic goods or evils. The goodness or badness of these goods and evils is directly observed, and so statements that amount to pure reports of this goodness or badness do not involve any kind of reasoning process or conclusion; they are just reports of directly accessed data.
 
When it comes to beliefs about extrinsic and depriving evils, there is a judgment involved, but I don't take it to be a normative one. I don't know what work the word 'normative' is meant to be doing here. I don't believe in any kind of categorical imperative or sui generis, irreducible normative property because (a) I have no idea what that would look like and (b) it seems to me that positing such a property is poorly motivated, stemming from a misunderstanding of the essence of 'should.'
 
Okay, but the study specifies that normative judgments occur when we evaluate something as morally good or bad, and not (presumably) axiologically good or bad. Fine, but it's not obvious what the distinction between moral goodness and axiological goodness is meant to amount to, and it's easy to imagine a view where there is no such distinction, or where moral goodness is just a subset of axiological goodness. And if moral goodness is just a subset of axiological goodness, and axiological goodness is non-normative, then moral goodness is non-normative as well.
 
Some philosophers (Steven Mitchell, Against Metaethical Descriptivism (2011); David Copp, Ethical Naturalism and the Problem of Normativity (2024)) say that normativity is the core question of metaethics. So it's no surprise that the question of what normativity is, what 'normativity' means, and what normativity is about is a matter of significant debate.
 
Here's where I am at the moment: Normativity is about the concepts referred to by the words 'ought' and 'should'. What does it mean if I ought to do something or ought not believe something? To analyze normativity just is to analyze what it is we are doing when we make claims about what we ought to do or believe.
 
Okay. If that's right, then I do not consider judgments or apprehensions about what is good or bad to be normative. Something can be bad in some sense and yet you ought to do it, and something can be good in some sense and yet you ought not do it. So goodness and badness come apart from ought and ought not.
 
You might think that this is true – that something can be bad and yet you ought to do it – only in cases where the badness is outweighed by goodness in other areas. So if something is pro toto (in total) good, then it ought to be. But if that's right, then something's being pro tanto (to that extent) good is for that thing to be something that ought, pro tanto, to be.
 
But I reject both the idea that something ought to be (simpliciter) just because it is good pro toto and the idea that oughtness can come in degrees. Something either ought to be done or not; it's a binary. This is because 'should' is conceptually dependent on the concept of goals; "you should do xyz" is always true or false relative to a goal.
 
Note 1: Degrees of normativity might be like degrees of truth. The belief that the moon is the size of a pea is, in a sense, more wrong than the belief that the moon is the size of Pluto. But in classical logic truth and falsity are binaries; there aren't degrees of truth. Maybe in a similar way some actions are better or worse at furthering a goal, suggesting degrees of what you ought to do (you ought to do the best action over the second best, the second best over the third best, the third over the fourth, and so on), but strictly speaking what you should do relative to a goal simply comes down to what is in fact the action that best furthers that goal.
 
Note 2: In the case of a Buridan's Ass (two or more equally good actions), you should select a member of the set of equally best actions. The set is the best action, not any particular member of it. But (1) it's impossible to know whether a true Buridan's Ass is at hand or whether the tie is merely epistemic (you can't tell which option is better), except perhaps for extremely simple goals; and (2) a true Buridan's Ass might not even be possible, as there may always be an action that is technically ever-so-slightly better than the alternatives, even if the difference is minute enough that no human could ever have figured it out.
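To make the goal-relative picture concrete, here is a minimal toy sketch in Python. It is purely illustrative (the action names and scores are made up, not drawn from any of the surveys or papers discussed): "you should do x" relative to a goal is modeled as x belonging to the set of actions that best further that goal, and a Buridan's Ass shows up as that set having more than one member.

def best_actions(options):
    """Return the set of actions that maximally further the goal.

    `options` maps each available action to a (hypothetical) score for how
    well it furthers the goal; higher is better. If several actions tie for
    the top score (a Buridan's Ass), the whole tied set is returned; the
    'should' attaches to the set, not to any one member.
    """
    top = max(options.values())
    return {action for action, score in options.items() if score == top}

def should(action, options):
    """'You should do `action`' relative to this goal: a yes/no matter, not a degree."""
    return action in best_actions(options)

# Hypothetical example: two haystacks further the donkey's goal equally well.
options = {"eat_left_haystack": 1.0, "eat_right_haystack": 1.0, "starve": 0.0}
print(best_actions(options))                 # {'eat_left_haystack', 'eat_right_haystack'} (set order may vary)
print(should("starve", options))             # False
print(should("eat_left_haystack", options))  # True

On this sketch, "better than the second-best" shows up only in the scores, not in the 'should' itself, which matches the claim above that oughtness does not come in degrees.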
 
This is why it doesn't make sense to say something ought to be simpliciter, as that would presuppose an ought divorced from any goal. The closest you could get to that, as already hinted at, is the idea of a best goal. But even there, someone might fail to hold the best goal in mind or to recognize it, and since they don't have that goal, it doesn't make sense to say that they ought to do something relative to a goal they don't have. If someone fails to have the best goal in mind, then all there is to say at that point is that this person failed to understand the goodness and badness of things. And this is why I reject, in some sense, the concept of normativity and go for some kind of descriptivism.
 
The point in all this is that if someone comes to me and asks me to fill out a survey describing my metaethical commitments, I might disagree with the way terminology is used by the survey and I might use key terms under a different understanding. If answers are constrained to multiple choice, I might not be able to honestly select any answer. If answers allow for fill in the blank, then my response might be misinterpreted by the study, and the data I provide would be useless. If the premise of the study is to see whether the subject is a realist or anti-realist, and if I have a problem with this dichotomy, because there are multiple layers of the concepts 'realist' and 'anti-realist' and I'm convinced that it's possible to mix and match layers, then I will disagree with the premise of the survey itself. So if the survey takes my answer and puts me into the oversimplified 'realist' or 'anti-realist' box, then I will have given the survey bad data.
 
This is why the task of descriptive metaethics is less interesting to me compared to the task of self-descriptive metaethics. 
 
In the following section I will try to peel back some of those layers of realism and anti-realism. 
 
Part IV: Realism and Anti-realism
 
On the question of normativity, someone could say that normativity is about prescription in contrast to description (i.e. what you should do is what is prescribed to you), and that prescription is categorically separate from description. Prescription is conceptually connected to ought, so this isn't necessarily a separate view from the view that says normativity is about the concept of oughtness. The point here is that the default view in philosophy is that description is categorically separate from prescription and that one cannot be reduced to the other, and so the question of normativity is the question of the nature of this irreducible normative category. Anti-realism is the view that denies the existence of this irreducible normative category while realism is the view that accepts the existence of this irreducible normative category – so says one way to characterize the anti-realist / realist divide.
 
I mentioned that I go for some form of descriptivism. That means I depart from the default view and believe that prescriptions can be reduced to descriptions; there are no irreducible oughts, only is's, and oughts are a species of is.
 
So I'm convinced of the following anti-realist views:
 
(a) There is no irreducible normative property. There is a normative property, but it reduces to properties related to goal-furthering, which are descriptive and non-normative.
 
(b) There is no categorical imperative, only hypothetical imperatives.
 
(c) There is no irreducible prescription. Oughts reduce to is's. To say someone ought to do xyz is just to describe xyz as the best and/or necessary actions to further a goal, and thus to describe whether someone succeeded or failed to further that goal. 
 
But I'm convinced of the following realist views:
 
(d) There are true and false 'should' and 'ought' statements, and the truth or falsity of these statements is judgment-independent.
 
(e) There are intrinsically good and intrinsically bad things and the goodness and badness of these things is there regardless of anyone's judgment (but not regardless of anyone's experience; goodness and badness are experience-dependent, but judgment-independent). Therefore, I believe in judgment-independent value facts, i.e. axiological realism.
 
(f) Because there are true value facts about some things being good, bad, better, or worse, there are true facts about some goals being better or worse than other goals.
 
(g) So there are true facts about which actions further better or worse goals, and true facts about which actions further or fail to further the best goal of maximizing goodness.
  
(h) A person who was all-knowing, privy to all knowable facts about all things, who only believes what's true and acts according to those true beliefs, has perfect understanding, is perfectly rational, is maximally intelligent, is perfectly intellectually virtuous, and is perfectly morally virtuous – such a person would not be capable of [insert horribly evil action here].
 
(i) Failing to have a goal of maximizing goodness is to fail to have the best goal.
 
(j) Actions that depend on objectively false (wrong) beliefs are themselves, in a sense, objectively false (wrong). That is, if you have a goal to believe what's true, then you will, if consistent, have a goal to not perform those actions that depend on false beliefs.
 
(k) There is objectively good and bad reasoning. There is objectively good and bad reasoning over issues of morality. There are mistakes and false beliefs. There are mistakes with respect to moral goals (i.e. the goal to make the world better, to be virtuous, to avoid committing agent-restricted actions, or to act with a good will) and false beliefs about morality, e.g. about which actions are or are not mistakes with respect to moral goals. 
 
(l) There are true facts about whether an action, attitude, habit, state of mind, or person is virtuous or vicious.
 
(m) Normative pressure is objective because mistakes are objective.  
 
I agree that descriptive metaethics is an empirical study requiring empirical work best suited for experimental philosophy. My point in all of this is that if someone comes to me and asks me to fill out a survey about my metaethical beliefs, I won't be able to do it, because I don't have my beliefs fleshed out. And if I have to fill out the survey under a premise I disagree with, like, "Claims about axiological goodness and badness are normative claims", or if I'm working with a different understanding of key terms and my answers are misinterpreted, then my answers will be bad data. Depending on how the survey defines anti-realism and realism, I might be crudely placed under either the 'realist' or 'anti-realist' category while holding to metaethical beliefs that some would consider to belong to the opposite category. In theory, two separate, independent surveys could place my views under opposing categories!
 
In response to being prompted by the survey, I would want to first spend time figuring out what I believe and why. And so the task of self-descriptive metaethics comes before descriptive metaethics. Certainly, the task of self-descriptive metaethics is dialectical like the rest of philosophy, so the 'self' in self-descriptive metaethics does not imply that this is a solo task. The point is that if you ask someone what their metaethical views are, and they haven't yet engaged in the task of self-descriptive metaethics (which includes dialogue) to any degree, then they won't have an answer, or their answer will be contradictory, confused, or too vague. But after engaging in the task of self-descriptive metaethics for some time, not only might the subject's answer be well-articulated, consistent, systematic, and so on, but the subject might also disagree with assumptions of the survey such that their answer cannot be used to answer the question the survey was trying to answer (e.g. if the survey is trying to answer the question of whether this subject is a realist or anti-realist, the subject might disagree with this simplistic dichotomy and find themselves having both realist and anti-realist beliefs).

But if philosophy requires me to go to others and inquire about their beliefs, what if they too must, at my prompting, go away to first figure out what they believe and why? The difference is that dialectic is done through reading and writing, so when reading you are reading the thought-out, comprehensive, and articulated views of someone who has themselves been reading, writing, and thinking about that particular topic for years. That's very different from asking a random person their metaethical views.
 
Part V: Ordinary Language Objection to my view
 
Someone might say that because morality is constructed, like language, the idea of someone going off on their own to figure out what moral terms mean is nonsense. That would be like someone going off on their own to figure out the meaning of other English words. Just as language in general is a construct and fundamentally dependent on the social group, moral language in particular is a construct and fundamentally dependent on the understanding of the social group. So if you want to know what moral terms mean, descriptive metaethics is the only viable approach to answer those questions, and "self-descriptive" metaethics is a non-starter.

Reply 1: The value of systems
 
Basic words have clear and easy meanings, and it would be silly to go off on one's own to try to understand the meaning of those basic words. But words like good, bad, right, wrong, normative, should, etc. – these are far more polysemous, abstract, and ambiguous, and when we try to disambiguate these terms and situate them in a consistent, comprehensive, and coherent framework that shows how all these terms relate to each other and to other philosophical terms like 'truth' and 'fact', it's a lot of work to build out that system. Ordinary folks will not have done this work, and even for folks who have attempted to create such a system, there will be objections to their system.
 
Reply 2: Expert opinion

This leads to the additional point that because there is skill involved in systematizing one's moral language, experts who are especially trained and gifted in this skill will have better answers to questions about the meaning of metaethical terms than non-experts.
 
Just as we shouldn't put much stock into the typical non-philosopher's views about knowledge or metaphysics, we shouldn't put much stock into the typical non-philosopher's views about metaethics either.
 
There is something to be said about the value of defending "common sense" beliefs. I tend to think of common sense as something like the ease with which you can get folks to understand the appeal of your view. The more commonsensical your views, the easier it is to get folks to understand your views and to understand why someone would find them likely to be true.
 
But the common sense view is that common sense has its limitations. Common sense-ness is a theoretical virtue, but it's not the end-all-be-all theoretical virtue. Other theoretical virtues take precedence. Historically, germ theory was not commonsensical (with folks of the past believing in things like "balancing the humors" and miasma), and neither was the evilness of slavery and racism.

A point in favor of the importance of common sense-ness comes out of the theoretical virtue of simplicity. All else being equal, simpler theories are more likely to be true, and simpler theories tend to be easier to get the common person to understand and see the appeal of. Indeed, perhaps common sense-ness is a theoretical virtue just because simplicity is a theoretical virtue. But problems like the Monty Hall problem show that even simple truths can be counterintuitive.
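As a side illustration of that last point (not part of the metaethics discussion itself), here is a quick, self-contained simulation of the Monty Hall problem; the simple truth that switching wins about two-thirds of the time is notoriously counterintuitive.

import random

def play(switch):
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and isn't the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened, un-picked door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # about 0.333
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # about 0.667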

So even if it's difficult to get the common person to understand your metaethical views or understand their intellectual appeal, that wouldn't by itself count for much against your views.
