Friday, July 11, 2025

Reacting to William Lane Craig's views on hell


God pours out His wrath against sinners, as we see in the Flood and on the day of judgment as described by Jesus and Revelation (and Paul mentions God's wrath in Romans 1). But why? What's so bad about sin? Because it alienates us from God? How does sin do that? When a child makes a mistake, does their loving parent disown them, or correct their behavior? 

If God is in control, then it's up to God whether we are alienated from Him or not. Even the New Testament depicts Jesus as eating and drinking with sinners, because just as the doctor comes for the sick, not the healthy, so does God come for the unrighteous, not the righteous (Mark 2:17). Far from pushing God away, our sin should draw God all the closer to us! That's when we need God the most, and a loving God would recognize that.

Maybe God pours out His wrath against sin because of how bad sin is, because of the pain and death that sin causes. But that raises the question: what greater source of pain is there than hell? If causing pain is so bad, then hell, the greatest pains for the greatest duration, is the worst evil (see David Lewis, "Divine Evil").

Jesus and Revelation paint a picture in which forgiveness is impossible for the damned. Why? Either the damned are incapable of asking for forgiveness, in which case their free will has been taken from them (and so the Christian cannot appeal to free will to defend the justice of hell or God's hiddenness), or they can ask for forgiveness but God refuses to grant it, in which case God is not perfectly forgiving.

Thursday, July 10, 2025

An objectively true value fact that is certainly true

Sometimes I see anti-realist YouTube comments say something like: "Give me ONE example of an objectively true moral fact. I'll wait 😏" 

I don't know about a moral fact, but here's an objectively true value fact:

AGONY: If life were nothing but a series of agonizing moments, then it would be better to not be alive.

If someone says "It's better to be alive and experience an endless number of agonizing moments than to be dead" or if they say "It's a toss-up between experiencing an endless number of agonizing moments and being dead", then they have said something false. AGONY is true independently of anyone's judgment of its truth. That's an objective truth.

AGONY is not, however, mind-independently true. Value depends on consciousness, so of course value facts depend on minds. But, as Sharon Rawlette points out in her book The Feeling of Value, what we care about when discussing objectivity is not mind-dependence, but judgment- (aka stance-) dependence. Something can be mind-dependently true while being stance-independently true. Consider the following exchange:

P1: "Wow, I had a great time last night!"

P2: "No you didn't."

End scene. That P1 had a great time last night is mind-dependently true, and because it is in fact true that P1 had a great time last night, P2 says something false about P1's experience. P2's stance has no bearing whatsoever on the truth of P1's experience.

So for every phenomenal fact—a fact of experience—that fact is mind-dependently yet stance-independently true. The facts a) that the experience took place when it did and b) that the experience had the quality it had do not depend on judgment.

The anti-realist might try to argue that AGONY is judgment-dependent. To judge one thing as better than another always involves... a judgment! The reason why, the anti-realist might say, it would sound so strange to hear someone say "It's better to be alive and experience an endless number of agonizing moments than to be dead" is because that's a judgment no one (or virtually no one) would make. But just because AGONY is an "inevitable" judgment doesn't change the fact that it is in fact a judgment. In theory, if someone were to judge "It's better to be alive and experience an endless number of agonizing moments than to be dead", then that judgment would be true for the person who judges as such. So AGONY is a judgment-dependent claim. (An anti-realist could argue that AGONY is not true in a judgment-dependent or judgment-independent way because it's not a real proposition, but an expression of emotion. But I take emotivism to be obviously false so I'll move on.)

But this fails to explain why AGONY is an inevitable judgment. It's inevitable because it's not a judgment but a conjunction of phenomenal facts; pain has as part of its essence a not-worth-it quality while happiness has as part of its essence a worth-it quality. Pain, by virtue of what it is, makes life less worth living while happiness, by virtue of what it is, makes life more worth living. (Some pains are worth it exactly because they are necessary for certain kinds of happiness.) This not-worth-it-ness (or, badness) and worth-it-ness (or, goodness) of pain and happiness respectively is found in the phenomenal experience itself of pain and happiness. So goodness and badness are directly accessible via immediate experience. 

So we directly access the badness of pain, we directly access the goodness of happiness, and therefore we directly access the phenomenal preferableness of happiness over pain. Therefore, happiness being better than pain is not grounded in a judgment, but is grounded in immediate experience. The concept 'better' is itself not grounded in judgment, but in experience.

I could not possibly deny AGONY because the badness of pain is basic and self-evident, the goodness of happiness is basic and self-evident, and the preferableness of happiness to pain is basic and self-evident. The self-evident, experience-based truth of AGONY, that a life that is nothing but a series of agonizing moments is not worth living, is why it's impossible to deny.

I think this gets at what might be at bottom the most fundamental disagreement between realists and anti-realists, which is whether pain is bad independently of judgment and whether happiness is good independently of judgment. The realist claims there is no judgment here, only immediate experience, while the anti-realist claims that while there might be immediate experience of some of the qualities of pain and happiness, you cannot immediately experience badness or goodness itself, and in fact what is taken to be an experience of badness or goodness is really an interpretation of one's immediate experience.

But, quoting Rawlette:

"In sum, normative phenomenology often comes to be associated with other properties or objects. This can lead us to assume that a feeling of goodness or badness must always be about something further. But normative phenomenology can stand all on its own, and it does not lose its intrinsic normative character in doing so. When normative phenomenology is isolated, as seems to happen in cases of electrical stimulation of certain limbic areas, its positive or negative nature stands out clearly as a property of the phenomenology itself and not of any intentional object. This means that, even if it is clear that our normative phenomenology cannot be taken as evidence of the objective goodness or badness of other objects, our normative phenomenology itself may nevertheless be objectively good or bad." (pg 98)

Response 1: I don't agree that the goodness of happiness and the badness of pain are intrinsically normative like Rawlette claims. I do agree that they are intrinsically worth having and not worth having respectively. I say this because normativity is attached to the concepts of 'ought' and 'should', and indeed Rawlette describes happiness as having an intrinsic ought-to-be-ness, which I take to basically mean the same thing as having an intrinsic normativity. But 'oughts' and 'shoulds' I think are not irreducible properties but can be analyzed in terms of mistakes, failures, successes, and goals. It doesn't seem true to me that my happiness, or anyone's happiness, ought to be, even though it is good. So I feel free to translate this notion of 'ought-to-be-ness' as 'worth-having-ness' or 'worthiness'. Joys (instances of happiness) make life more worth living while pains make life less worth living in virtue of what they are. I don't see how joys ought to be in virtue of what they are or how pains ought not be in virtue of what they are. Indeed, if 'oughts' must be analyzed in terms of goals, and if one's goal is to maximize happiness in an idealized way, then certain joys ought not be promoted (e.g. entertainment at the cost of productivity) and certain pains ought to be embraced (e.g. the pain of serious effort). While Rawlette speaks of pro tanto ought-to-be-ness, I cannot make sense of this idea. I can make sense of the idea of pro tanto worthiness. But given a goal, one action is either the best at furthering that goal / is necessary to further that goal, or not, and so you either ought to perform that action or not relative to that goal. I don't see oughts as coming in degrees.

Response 2: In context, Rawlette is discussing the fact that experiences of happiness and pain are so strongly correlated with objects or events that we can easily make the mistake of thinking that the goodness of our happiness or the badness of our pain is about those objects or events. But that's not true. There are studies where patients have their brains stimulated and report feelings of euphoria. Per Rawlette, this shows that at least in principle there are experiences of happiness that aren't about anything or directed at any object or event causing the happiness.

But 1) That seems debatable, given that the euphoria the patients experience is arguably about a) their brain being in the state that it's in, and/or b) the event of the brain stimulation.

But this is too quick, because you could ask the patient, "What are you so happy about?" and they might say "I don't know!", showing that the patient is happy about nothing in particular.

But I can imagine someone who defends representationalism about the mind pushing back against this and saying that just because a person doesn't know what their qualia are representing, that doesn't mean their qualia aren't representing something, in this case a certain internal state or event.

So I'm not convinced that these experiments show that our happiness/pain isn't about something.   

And 2) Rawlette doesn't cite any studies. A quick search reveals at least one study: Okun et al 2004, published online 2010, in Neurocase. In that study a single patient reported euphoria during deep brain stimulation. So on just a first glance, apparently euphoria can be induced through brain stimulation.

But 3) If happiness/pain lack aboutness, shouldn't we be able to see that directly? If the goodness of happiness is phenomenal, then doesn't that settle the question? Don't all raw feelings lack aboutness? When reflecting on the appearance of redness, the appearance of the appearance and the reality of the appearance are one and the same. The appearance-reality distinction breaks down when it comes to appearances themselves. You cannot have an appearance of redness without having a phenomenal experience of redness, as the two are the same. That appearance is not about anything; there's no propositional content to it, it's just an experience. The same applies to pain: You cannot appear, from a first-person perspective, to be in pain without actually being in pain. The internal appearance of pain and pain are the same thing. And the badness (the not-worth-it quality) of pain is part of that experience, and so the badness is likewise not about anything.

This generalizes to all "phenomenally defined" terms. If you asked me to define what 'red' means to a person blind from birth, there are no words I can use to communicate phenomenal red. Same with 'pain'. I cannot define phenomenal pain to a robot that doesn't know what it is. This is why 'good' and 'bad' are so hard to define; first, these words are used in different ways, requiring careful analysis in breaking them down, and second, at the core they are phenomenal terms which cannot be defined with words, only with experience.

So an even simpler claim than AGONY would be:

BAD: This pain is bad. Or: That pain was bad.

Laurence BonJour describes what we allegedly have immediate access to (Epistemology, 2nd ed., pg 100-101):

"What things are we supposed to be immediately aware of or 'acquainted' with . . .? . . . Descartes' view is apparently that we are immediately aware of the existence and contents of all of our conscious states of mind, a view that has been adopted by many others. These would include, first, sensory experiences of the sort that we have just been discussing . . . Included also would be, second, bodily sensations, such as itches, pains, tingles, and the like. These are naturally regarded as experiences of various events and processes in the physical body, but Descartes' point again would be that there is in each of these cases something directly or immediately present to consciousness, something that cannot be doubted, even though the more remote bodily cause certainly can be. The third main category of states of whose existence and content we are allegedly immediately aware are conscious instances of what are sometimes referred to as 'propositional attitudes': conscious beliefs or acceptances of propositions, together with conscious wonderings, fearings, doubtings, desirings, intendings, and so forth, also having propositional content. In these cases, the view would be that I am immediately aware both of the propositional content . . . and of the distinctive attitude toward that content that such a state involves . . ."

So, if this view about direct access is right, not only do I have direct access to conscious states, including experiences, but I would even have direct access to the propositional content of BAD along with direct access to my acceptance (or rejection) of BAD (i.e. whether a particular mental state of mine can be expressed as BAD).

And as Rawlette brings to mind, where could the very idea of badness come from in the first place? Rawlette suggests that it is exactly phenomenal badness that explains where we get the concept.

"When we ask ourselves what it is we mean by 'goodness,' we can turn to this basic phenomenal experience for the answer." (100) 

Continuing:

"While a particular person's judgments or attitudes will determine whether he or she feels a positive or a negative quale, the positive or negative nature of the phenomenology is intrinsic to the phenomenology itself. If a particular person feels a negative quale, no one can experience that same quale and have it be positive rather than negative." (99) 

And:

"When we stop trying to project the qualities of normative phenomenology onto perceptions with which they are merely associated, we realize that, far from being an illusion, judgment-independent value exists in the realm most immediate to us. Judgment-independent value exists as part of the very fabric of our mental life." (100) 

So not only is the badness of pain not a judgment, interpretation, or theoretical posit, but the badness of pain is pre-judgment, pre-interpretation, and pre-theory. The badness of pain is not a posit to explain data, but is itself data, and data of an incorrigible kind (directly accessible). 

Whenever there is judgment, there is a gap that allows for error. But when you have a phenomenal-noumenal collapse, something pre-judgment, there is no possibility of error. Compare "I remember that I had lentils and rice for breakfast" with "I had lentils and rice for breakfast." What is certainly true is that I have a memory or a memory experience. But when I interpret that memory of having had lentils and rice for breakfast as entailing that I actually had lentils and rice for breakfast, that's where memory is not perfectly reliable. It may have been the day before yesterday that I had lentils and rice, and I'm misremembering and thinking it was today. This is generally true of introspection: Introspective beliefs constituted by phenomenal experience are incorrigible, but when I use those beliefs to draw a further conclusion, that inferred belief is not incorrigible. This is why Descartes' cogito "I think, therefore I am" contains an error (a mistake I don't think Descartes himself made, but is found in the common Latin and English translations); it should say I see that I am. There is no 'therefore', there is no inference.

Thus, Josh Rasmussen (Who Are You, Really? 27 & 29 fn8): 

"My analysis of eliminativism, then, is fundamentally this: by the light of introspection, I think it is possible to know something about your experiences directly. In particular, you can know some thoughts, feelings, and intentions. On this analysis, the subjective aspects of consciousness are not theoretical posits that explain some data . . . Rather, conscious states are part of your data—which I think you can access directly."

". . . in my analysis, a belief based directly and solely on a direct experience is the most secure a belief could possibly be, for it has the fewest sources of possible error. . . . necessarily, if one directly experiences x . . . and on that basis alone believes that x exists, then that belief is true."

Tuesday, July 8, 2025

Isn't the Lockean Thesis clearly false?

Liz Jackson writes ("The relationship between belief and credence", Philosophy Compass, 2020): 
Lockean Thesis (traditional): S ought to believe p iff S has a rational high credence in p. 
    On this version of the Lockean thesis, a rational high credence is necessary and sufficient for rational belief. This does justice to the intuition that rational agents are confident in the propositions that they believe. 
    Nonetheless, worries loom. The first is the lottery paradox (Chandler, 2010; Douven, 2012; Douven & Williamson, 2006; Kyburg, 1961; Leitgeb, 2014b; Smith, 2010a; Weintraub, 2001; Weisberg, 2015, Section 5). Suppose you have a lottery ticket; uncontroversially, you should have a high credence your ticket will lose (a fair 100-ticket lottery puts your credence at 0.99, and we can increase the number of tickets). If the threshold for belief is any value short of one, then, according to the traditional Lockean thesis, there are lotteries in which you ought to believe your ticket will lose. But your ticket is not special: by the same reasoning, you should believe, of each ticket, that it will lose. Further, many think that if you should believe p and you should believe q then you should believe p and q; this is an example of a closure principle (Kvanvig, 2006; Luper, 2016). Given certain closure principles, you should believe a large conjunction: ticket 1 will lose and ticket 2 will lose and...ticket n will lose. Nevertheless, you know one ticket will win, and thus you should also believe the negation of this conjunction. But intuitively, you should not believe contradictions. In response, some have suggested that the traditional Lockean thesis is false, and you ought not believe lottery propositions, such as my ticket will lose, even though you have a rational high credence in them (Douven, 2002; Jackson, Forthcoming; Kelp, 2017; Ryan, 1996; Smith, Forthcoming; Staffel, 2015). 
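
To make the lottery arithmetic in that passage concrete, here is a minimal sketch in Python (the 1,000-ticket lottery and the 0.99 threshold are my own illustrative numbers, not Jackson's):

# Hypothetical Lockean threshold for belief, short of 1.
threshold = 0.99

# A fair lottery with 1,000 tickets and exactly one winner.
n_tickets = 1000
credence_ticket_loses = (n_tickets - 1) / n_tickets   # 0.999 for each ticket

# Traditional Lockean thesis: believe "ticket i loses" for each i,
# since each credence (0.999) clears the threshold (0.99).
ought_to_believe_each = credence_ticket_loses >= threshold      # True

# But the conjunction "every ticket loses" has credence 0,
# since you know exactly one ticket wins.
credence_all_lose = 0.0
ought_to_believe_conjunction = credence_all_lose >= threshold   # False

# Closure says: if you ought to believe each conjunct, you ought to
# believe the conjunction -- yet you also ought to believe its negation.
print(ought_to_believe_each, ought_to_believe_conjunction)
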
Let's disambiguate the thesis:
 
*Lockean Thesis (traditional): S ought to believe necessarily p iff S has a rational high credence in p.
 
Then I should believe that each ticket will necessarily lose. But that's clearly a false belief. Each ticket will not necessarily lose, only very probably. I shouldn't believe that my ticket will lose, only that it very probably will.
 
**Lockean Thesis (traditional): S ought to believe very probably p iff S has a rational high credence in p.
 
That's better, though a) terms like 'ought' and 'rational' need to be unpacked before I understand what this thesis is saying and b) arguably it's impossible to not believe very probably p while having a high credence in p. 
 
Though that implies one cannot have credences without also having beliefs. It does seem to me that credences are by far the more complex mental state of the two, with belief being a simple and common representational mental state. Anecdotally, I see folks admit to believing something (say, God's existence) much more quickly than they admit to a specific confidence level in that belief.
 
Kevin Harris: Here is another question, Dr. Craig:

Greetings. In your debate with Christopher DiCarlo, he asked if you could estimate your level of confidence in your belief in God, and that if your belief was as likely as a universe with no God, or if it was higher. You said your belief was higher than 50/50 but that you had no way of measuring whether it was highly probable or not. Wouldn’t this seem to be of utmost importance to the question and what you have spent your life debating? How on Earth are we to accept that you find the existence of a God more probable when you have no way of knowing what more probable means?

Dr. Craig: Well, wait a minute. I never said I don’t have any idea of what more probable means. For something to be more probable than not means you have a greater than 50% chance of being true. Right? That’s what it means to be “more probable.” What I’ve refused to do is to quantify it and to say I am 75% certain or it is 83% certain that God exists or 99% certain. I think anybody who does try to put those kind of numbers on it is being disingenuous, and you ought to be very suspicious. The fact is we can talk, I think, in only rough terms about this, and saying “I think it is very probable that God exists” or “more probable than not.” Things of that sort. I think that is quite acceptable.

So Craig is happy to say that he believes in God and is happy with vague language like "It's more likely than not that God exists" or "It is almost certain that God exists", but Craig doesn't like giving specific numbers to it, because of how arbitrary that would be.

KEVIN HARRIS: . . . The first thing that kind of struck me that I wanted to run past you is he said, “We should be formulating theism and atheism in terms of confidence levels from zero percent to 100 percent, not in terms of belief.” Kevin really complimented you on building systems.[3] He wants to build a system here it seems – a philosophical system, systematic theology, systematic philosophy. But the argument itself is going to require confidence. Are you confident in the confidence?

DR. CRAIG: I'm not! And I made that point over and over again. I have no confidence whatsoever in his claim that in order to believe in God I need to have a probability significantly higher than 51%. He gave no argument for that other than just an example. He gave an illustration that he thinks it is 51% probable that Hillary Clinton will win the presidential election. But that is not enough for him to believe that she will win. Well, fine, that is just an illustration. But in many other cases, 51% might be enough for a person to believe in something, especially (as William James pointed out) if this is a pressing conclusion that imposes itself upon you and is a matter of great urgency and importance, William James argued that in a case like that you are perfectly rational to believe in something even without evidence. This is James' famous essay, “The Will to Believe.” This idea that in order to believe something you have to be significantly more than 51% confident in it, I think is person-relative and subjective. For some people that might be true, for other people it might not. For some beliefs it might be true, but for other beliefs (as James said) it may not. 
Rejecting the Lockean Thesis has the consequence that even if something is incredibly likely to happen, say a 99% chance of rain, you can't believe that it will rain. But isn't that crazy? You totally can plan as if it will rain, act as if it will rain (toss an umbrella into your car before you drive off, etc.).
 
But of course it's silly to believe both that there is a less than 100% chance of something happening and that it will necessarily happen. Instead, you believe that there is a 99% chance of rain or that it's very likely to rain. And such probabilistic beliefs perfectly justify the same actions, or at least extremely similar actions, as the actions justified by full belief (belief in the necessity or certainty of the thing).
 
I'm not sure Craig is correct when he says the threshold for belief is context-dependent. But I think acting as if something will happen need not require belief. Craig mentions William James' idea of the leap of faith. But the person who leaps the gap to escape the avalanche does not believe that they will make the jump. All they need to believe to make sense of jumping is that jumping is better than doing nothing. It's not that the bar of belief is lowered based on context, it's just that what you believe changes based on context. The desperate skier who makes the jump doesn't jump because he or she believes they will make it or will very likely make it (like in ordinary circumstances), but, again, only that jumping is better than doing nothing.
 
A standard definition of belief is given by Liz Jackson in the article:
 
"To believe something is to regard it as true or take it to be the case . . ."
 
But this can sound like: "To regard something as true is to regard it as true" or "To believe something to be the case is to believe it to be the case", both tautologies. The point being that defining 'believe' in terms of 'regarding' or 'taking' (or 'accepting' or 'assenting to') doesn't seem helpful. (I'm not saying Liz Jackson's definition of belief is not helpful; I'm sure she could give a much better gloss on 'belief'; it's the standard definition that is not helpful.) 
 
So I lean toward the view that 1) Propositions are representational (i.e. propositions claim a match between external properties and "internal" properties; if there is a match then the proposition is true, if there is a mismatch then the proposition is false); 
 
2) Non-propositional beliefs are representational via mental states (i.e. the internal properties are internal to a mental state, not to a proposition);
 
3) Propositional beliefs are representational via propositional attitude (the internal properties are internal to the proposition that corresponds to the belief; OR the internal properties are internal to a mental state which corresponds [contains? represents? constitutes?] to a proposition);
 
4) Truth itself is representational, connecting truth and belief;
 
5) Beliefs do not come in degrees in the sense that you can partially believe something, but beliefs do come in degrees in the sense that you can believe in stronger or weaker claims;
 
6) In the case of believing p, not only do we believe p in the sense that our mental states represent (or attempt to represent) reality as described by p (details to be specified), but we also believe whether p is (epistemically) necessarily true or probably true (or lack such a belief, or only believe dispositionally that p is necessary, etc). Credences are levels of confidence, and are beliefs about how likely p is true. Beliefs and credences appear to come apart because the belief that p and the belief (or dispositional belief) that p is necessary (or 10% probable or highly probable, etc.) come apart, and because the additional modal belief or probability belief is not necessary to hold the belief (which is why Craig can believe in God without believing that God's existence is X% likely; granted, Craig probably believes it's not epistemically necessary that God exists).
 
Jordan Peterson infamously struggles to answer the question of whether God exists, and complains about vagueness over terms 'God' and 'exists'. (Which is fine as long as you proceed to offer some help in defining those terms, as otherwise you announce that you don't have anything thoughtful to say.) I suspect Craig similarly struggles to give God's existence an explicit percentage because of problems of vagueness. Precise attitudes don't fit imprecise propositions. There's also the problem of the complexity of the question. The more complicated a debate is, the harder it is to feel confident not only that one position is right, but it's harder to feel confident that you even understand what the debate really amounts to (is the debate itself based on false assumptions? e.g. like when 'nature' is falsely pitted against 'nurture' for explaining human behavior) and what it would mean to take a stand on one position versus another.

Wednesday, July 2, 2025

It's okay to be a bully bully bully bully

The paradox of intolerance goes like this: Do not tolerate the intolerant, only judge those who are judgmental, only kill killers, only bully bullies, and only war with warmongers. It's a superficial paradox; a contradiction only arises if you are using a strict rule: Intolerance of all kinds is wrong, judgment of all kinds is wrong, killing of all kinds is wrong, bullying of all kinds is wrong, and war of all kinds is wrong. But this strict rule is clearly false; defensive intolerance is fine, defensive judgment is fine, killing in defense is fine, bullying in defense is fine, and defensive war is just war. So it's perfectly consistent to be intolerant of intolerance, judgmental of judging, to kill killers, to bully bullies, to war with warmongers, and to hate hatred.

If everyone followed the rule of "only bully bullies", then no bullying would be needed, as no bullying would get started. Same with all the other defensive rules. As long as you are not the aggressor, you are in the clear, because the alternative is to fail to stand up for yourself, to fail to stand up for others, to allow victimization to go unchecked, and to allow tyranny to trample over the innocent.

A complication arises when escalation is involved. Is the escalator the bully, or the bully bully? If escalation bounces back and forth, you could argue that both sides are bullies, and so a third party that bullies either side would be a bully bully, which is fine. But, because of the muddiness of the situation, a fourth party might mistake the bully bully for a bully and proceed to bully the bully bully, making them a bully bully bully, which is not fine. If you understand who is who you might try to correct things as a fifth party by bullying the bully bully bully. In short, if you are an odd-numbered aggressor, you are in the wrong, but if you are an even-numbered aggressor, you're fine. Odd is evil, even is okay.

But what if you are mistaken about where you are in the chain, or where you would be if you intervened and began to withdraw toleration, to judge, kill, bully, war, or hate? Our inability to ever know whether we are odd or even is exactly why we can do none of those things ever, and must take on a position of absolute pacifism. If aggressors walk all over us and crush us, so be it. We can guarantee we are even-numbered in the chain of aggression by being aggressor zero.

Tuesday, July 1, 2025

Pedagogy and AI

I remember the first day of my first, and only, public speaking class, a class I was taking only because it was a general ed. requirement. (I doubt I would have signed up for a public speaking class on my own initiative – too anxious for that at that stage in life.) I remember going through the school cafe on the way to the class, stopping to use the restroom, and feeling this wave of dread and anxiety. I steeled myself and told myself that I just have to do it. Rip it off like a band-aid. I knew it was healthy, but I hated it all the same.
 
It was awkward and awful, and it was good and necessary, and I improved a lot through the course. I came away thinking that college should require more than just one public speaking class. The skills are too important.
 
After becoming a tutor I discovered how powerful explaining things out loud was for increasing my own understanding. Before I would have felt too weird talking to myself out loud, but after tutoring it felt natural; speaking out loud is a kind of information processing, and explaining things out loud is, I think, an essential component of learning.
 
While I only had the one class that fell under the subject of 'Speech', technically I had a number of classes after that I would consider public speaking classes in a broad sense. These were management and business classes that required public presentations. Again, these assignments were agonizing, but I tackled them head-on and felt that they were healthy and necessary. I came away thinking that every class ever should have a public speaking component. The more you do it, the better you get, and the better you get, the more manageable the anxiety becomes.
 
For the first time ever I had oral exams for one of my philosophy classes in Spring 2025 and Fall 2024. They were intense, and I prepared hard for them. Despite my preparations, I still made errors; I could have prepared better. (I did well overall and received As for both oral exams I took.) Again, healthy and challenging.
 
Oral exams teach the value and necessity of preparation and they force you to explain things in your own words on the spot. There's no relying on search engines or AI; you either know it or you don't in the moment. It's like an interview. You can easily bullsh*t written assignments (generally – depends on how they are designed), but there's really no way to bullsh*t an oral exam.
 
With AI tools like ChatGPT calling into question the legitimacy of student written work, and thereby calling into question the student's understanding of class material, oral exams are quickly becoming an essential tool for assessing a student's understanding. (I have no idea whether high schools or universities have started moving in this direction or not. It's not like you can use AI during a proctored test, so I imagine written tests aren't going away. It's written assignments that are losing relevance, though even in that case, you can generally tell whether something is legit or AI-generated, as AI work, especially in philosophy, is poor in quality, milquetoast, uninspired, shallow, and often just straight up incorrect or misleading – for now.)
 
But given the massive pedagogical value of explaining ideas out loud, and the sheer importance of public speaking skills, oral exams are already highly valuable and underused. (I certainly wish I had more of them. They force a higher standard of preparation and understanding.)
 
Putting this all together, my thought is this: Schools K-12, and college, should focus on building the speaking skills of students. Students despise public speaking. That's understandable; they have no confidence in their speaking skills, and humiliation is just about the most painful thing there is. An example of a speaking assignment would be to record yourself on video explaining an idea, such as how to solve systems of equations. This process reveals gaps in understanding when you go silent and realize there's something you don't understand, can't explain, or have a question about. Students could be encouraged to leave in those silences in the recording and to write down then and there what their hangup is. These recordings are then sent to the teacher who watches them and makes notes on what students are either explaining incorrectly or what they struggle to explain. By sending the recordings to the teacher, the student doesn't have to worry about presenting in front of a group of people. But recording yourself is still a performance, one that can be critiqued by your teacher. It's much easier when the risk of embarrassment is limited to one person rather than a group. The teacher gives 1) Feedback on speaking skills and 2) as mentioned, prepares teaching notes based on recordings.
 
This has the additional advantage of making lessons highly relevant to each student. Often students feel disconnected from lessons, making it harder to care, harder to pay attention, and harder to be motivated to show up at all. But when the lessons cover the mistakes students made in their explanation videos, and answer questions students have, each student suddenly has a clear personal stake: if I don't show up to class, I may never fill those gaps in understanding that I am aware of. That's a painful thought.
 
Students could then be required to make a follow-up video re-explaining the same topic to show their improvement from before and after. By the time the student is required to speak in front of the class, they are well-prepared, having gone several rounds on speaking, explaining, clarifying, and re-explaining a topic (students could be required to pick a unique presentation topic for their class presentation, one they must prepare for on their own using what they've learned from the class-based speaking assignments).
 
Students would be far more confident going into these class presentations because 1) They have prepared well for them and 2) Because of teacher feedback, they have an idea of where they fall in terms of speaking skills. They know what to expect in terms of how others will react to their speaking.

Friday, June 27, 2025

Lance Bush on justification, truth, and intuitions

 
27:15–29:15
 
"I don't believe in analytic accounts of justification either, I just completely reject them. I think what philosophers tend to be talking about is nonsense; I don't need justification for beliefs. I build a system on pragmatic grounds; I act based on what I expect to yield consequences that are conducive to my goals. I don't need any sort of extraneous permission. So I can give a pragmatic account of justification . . . but I'm talking about something that probably functionally and very much so philosophically is quite different from their accounts of justification. . . . It looks to me like a lot of analytic philosophers want some sort of permission to hold a view. I don't need reality's permission to hold a view. Let's say I'm a complete instrumentalist about my beliefs and I just go around believing things that are useful to me, and someone comes along and says, 'Yeah, but that belief isn't justified.' Okay. Well, what happens if I ignore it? Nothing. If you act like a pragmatist and ignore non-pragmatic conceptions of justification, there are no consequences to this. There's none! There aren't consequences. So I don't care, because I care about the consequences of my actions. So these non-pragmatic conceptions of justification are practically irrelevant and I don't care about them. Someone could say, 'Ah, but they're true!', okay well your truths don't matter to me. And if someone says 'Yeah but it doesn't matter if it doesn't matter because our quest is to figure out what's true', great, you're operating on a non-pragmatic conception of truth. I reject that as well, so I don't care about that either. . . . I don't believe in correspondence theory . . . So the whole thing is this system that they're operating within where I reject the whole system."
 
Continuing (29:38–30:29): 
 
"But for philosophers that take non-pragmatic approaches, I'm not obligated to abide by their metaphilosophy anymore than they're obligated to abide by mine. What you won't see me doing, at least I don't think so, is going around insisting that if you're not a pragmatist, like you're doing it wrong and you could only do things correctly if you're doing them the way I do. Now, there may be a sense in which I think that that's true, again pragmatically true—I mean it's almost trivially pragmatically true—but I try to be self-aware enough to realize when people are approaching philosophy from a different metaphilosophical perspective and be mindful of that fact and pivot to a discussion about metaphilosophy when it becomes appropriate. But a lot of people that work within conventional mainstream metaphilosophies, they don't see it as metaphilosophy, they're just doing philosophy and if you're not doing what they're doing, you're doing it wrong, you're not doing it at all."
 
I'm on board with the consequentialist aspects of what Lance is saying. And maybe a hard consequentialist position like the one I take leads to a pragmatic theory of truth and justification. I'm aware of Shamik Dasgupta's defense of a pragmatic theory of truth in his paper "Undoing the Truth Fetish." I have yet to analyze his arguments in that paper. So I don't know where I will land on the issue of truth and justification ultimately (or would land given enough time, research, thought, etc.).
 
Where I am at the moment though is that saying "My beliefs aren't justified and I don't care" is exactly as crazy as it sounds. I'm sure Lance can appreciate how it sounds to say "I don't need justification for my beliefs." It sounds, well, crazy. Saying "I don't need justification for beliefs" sounds like saying "I cannot be wrong" or "I don't need reasons to think that something is true to be convinced that it is true or is probably true." Again, that sounds crazy. But if I learned more about Lance’s views then maybe what he's saying wouldn't sound crazy at all.
 
It seems to me that at the heart of justification is this worry of arbitrariness: Imagine philosophers saying "I believe in a..." 
 
Philosopher 1: "...Correspondence theory of truth."
 
Philosopher 2: "...Pragmatic theory of truth." 
 
Philosopher 3: "...Deflationary theory of truth."
 
Philosopher 4: "...Primitive theory of truth." 
 
Philosopher 5: "...Semantic theory of truth."
 
Philosopher 6: "...Coherence theory of truth."
 
Philosopher 7: "...Performative theory of truth." 
 
Philosopher 8: "...Constructivist theory of truth."
 
Philosopher 9: "...Pluralist theory of truth." 

My goal is to believe what’s true about truth. Given that goal, which view of these should I take? Or should I take none of them? 

Here’s an idea: I will assign a number 1–9 to these views and use a random number generator to select a view randomly and I will believe whichever view is selected. You might complain that such a view would not be justified, but I don’t care. My beliefs can be totally arbitrary and that’s fine by me.
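
Spelled out in code, the procedure I have in mind is as trivial as this (a sketch; the labels are just my shorthand for the nine views above):

import random

# The nine theories of truth listed above, in order.
theories = [
    "correspondence", "pragmatic", "deflationary",
    "primitive", "semantic", "coherence",
    "performative", "constructivist", "pluralist",
]

# Pick one at random and "adopt" it as my view of truth.
selected = random.choice(theories)
print(f"I now believe the {selected} theory of truth.")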

Not only would it be crazy to do this, it would be impossible. I can’t believe a philosophical view unless it makes sense to me. The "making sense" part is why reasons are needed. Reasons explain someone’s belief in x rather than y. Again, reasons are answers to 'why' questions, which makes them a kind of explanation. (So in cases where internal explanations aren't needed, like in non-propositional beliefs, reasons aren't needed. But those beliefs still have explanations, say in evolutionary terms.)

I think the problem of evil shows that a perfect being does not exist. Imagine if my true answer to someone asking why I think that is "I don’t care." That would be a bad answer. It would be so bad in fact that it would call into question whether I really believe what I claimed to believe, because, really, it’s not possible to have the answer "I don’t care" if I have reasons to believe my claim; the reasons are the answer! That's why, and how, I believe.
 
Being a bit tongue-in-cheek, imagine I said: I am converting to Nazism. Why? Well, haven't you heard? Justification is not needed! I don't need an answer. 
 
This would just be nonsense, because this is not how belief works. You can't convert to an intellectual position (like a philosophical or political position) without having an answer to the question of why you are convinced that that position is better than alternatives. (I’m not talking about social conversion, but doxastic conversion.) Whether the answer is justifying depends on whether the answer is any good. Does Lance think the answers moral realists give to challenges to moral realism are any good? I would guess not. So doesn't he accept the notion of good answers?

P.S. Before the above discussion, Lance talks about and denies the reality of intuitions, or, as Huemer calls them, "intellectual seemings."
 
Curiously, within the quote at top Lance uses the phrase "it looks to me", which looks to me like an intuition marker. So it seems to me that an intuition is a seeming ("intellectual seeming" is redundant), which is something you are inclined to believe, agree with, or act as if you believe, but if asked why you believe that thing you wouldn't be able to articulate a clear answer, at least not without doing some serious work first.
 
In this episode, https://www.youtube.com/watch?v=yVFuRH--n2o, roughly around the 1h:30m mark, Alex Malpass says that intuitions are unreliable and count for very little, with seemings acting as something of a practical tool for moving on from intractable problems of skepticism. I'm inclined to agree with that, though I think Huemer would accuse Malpass of self-defeat, because Malpass is relying on his seemings when downplaying seemings.
 
In any case, if intuitions are beliefs you hold but can't quite articulate why, then they are in a sense unjustified beliefs (using a reasons-based sense of justification). But if you hold the belief only very lightly, then you're not making the mistake of believing in a way that's disproportionate to the evidence or reasons to believe.
 
It can be worth holding onto beliefs you can't articulate reasons for because 1) you can't help but hold the belief, even if only very lightly, and 2) there may be reasons within the vicinity that do justify that belief, reasons that explain why it was that the belief seemed true to you to begin with.
 
So with intuitions there's this idea of subconscious belief or subconscious understanding involved; to have an intuition is to be subconsciously aware of certain reasons to believe something, but those reasons are not explicit in your mind. (Haven't you had the experience of reading a philosopher who articulates something you already agreed with, but couldn't articulate?) 
 
Back in school I would sometimes answer a math question intuitively. If you had asked me "Why is that your answer?", I would have said "I don't know, but it feels right", and often I would get math questions right when operating by this feeling. Similarly we hear of "intuition-based" chess players who don't calculate captures or board-states but instead play moves that feel strong and avoid moves that feel weak. It's possible to be subconsciously attuned to a truth without being able to consciously explain it, which is why intuitions are worth exploring to bring out the understanding (or misunderstanding) that was lying underneath.
 
But I'd agree with Malpass (or with what I take his view to be) that until that exploration has been done, and the reasons for the belief are uncovered, the intuition by itself is not worth anything other than as a jumping-off point.

Wednesday, June 25, 2025

The Absurdity of Life Without God (June 2025)

 
1) Is Craig saying that all humans prior to Christianity should have thrown themselves off cliffs in despair over the meaninglessness of their lives? Because that would be a clearly false thing to say.
 
2) In response, Craig might say something to the effect of "Humans prior to the revelation to the Jews or prior to the Gospels could have believed in God as revealed by creation."

But this would be a bad response for the following reasons:

A) Craig himself says that you need God and an afterlife for reality to be ultimately meaningful.[*1] But no clear teaching of an afterlife can be found in the Old Testament, and ancient Judaism had no settled doctrine of one. So really, even belief in the God of the Old Testament, the true God, is not enough on Craig's view for life to be meaningful. I'll say that again: Even believing in the true God is not enough to live a meaningful life on Craig's view! So all the figures of the Old Testament were living meaningless lives, including Joseph, Abraham, and Moses? Really?
 
B) The God revealed by creation is, as Hume points out, neither clearly good nor clearly evil, as life is rife with both good and evil. But if the God of creation is neither clearly good nor evil, and is apparently indifferent to both our suffering and our joy, then there's no reason to think there will be an afterlife for our sakes.

More importantly, if there is no report of an afterlife revealed by God in a credible way, then there is no reason to think there is an afterlife. It would be pure speculation. So...
 
3) ...would Craig say it's okay to live a life that might be meaningful only as a matter of speculation? Does life have to be known with absolute certainty to be meaningful (i.e. do God's existence and an afterlife have to be known with certainty) for a person to rationally live it?

Surely Craig would not set the standard that high. Indeed Craig is infamous for saying that when it comes to pragmatic reasoning, the standard can be extremely low.[*2] As long as there is some chance that there is a god out there and an afterlife out there, then it's reasonable to live according to that chance. So as long as atheists aren't absolutely certain that there is no god and no universalist afterlife, as long as they believe there is a non-zero chance of these things, then they can live on the same kind of leap of faith that Craig champions when he says Christianity is worth believing in even if there were only a one in a million chance of it being true. And many atheists would admit that there is some chance of such a universal salvation, however small.

4) Is a naturalistic worldview unlivable? No! And shouldn't Craig know better, having debated however many naturalists at this point, who all clearly do not find naturalism to be unlivable? Is Graham Oppy, a “scary smart” atheist scholar, somehow irrational for living as a naturalist?

Craig claims that naturalists live as if their lives have meaning, but have no basis for this meaning. That's not only mistaken; I'd claim it's certainly mistaken. The basis for meaning that naturalists have is the same basis that all humans prior to Christianity had, and it's the same basis that, ironically, even Christians have for meaning.
 
Why do Christians live their lives? Do Christians receive a letter from God in the mail that details their purpose on earth? Or do Christians receive a vision or a dream, or an auditory message from God that details their purpose on earth? Nope. Christians are left to figure things out on their own, the same as everyone else.
 
And so inevitably Christians end up living for the same things as everyone else: their jobs, hobbies, entertainment, family, friends, exploring the world, the internal pressure to survive that their biology generates, and so on. These are exactly the reasons humans have always had for living.
 
You might say that Christians have uniquely Christian jobs like pastor, missionary, and Christian philosopher, but that describes only a small minority of Christians. Most Christians are like everyone else: working some miscellaneous job that puts food on the table for the family. (By the way, why God is so okay with his followers working mundane jobs for decades and decades, knowing what that does to the soul, I have no idea.)
 
We are told in Revelation that in heaven there will be no more death or suffering. We are told that hell is a place of torment. This is a strikingly hedonistic system of value. If Christianity were anti-hedonism, then we could imagine heaven being filled with pain and hell being filled with happiness. After all, if it's things other than pain and happiness that form the basis of value, then heaven could be filled with those intrinsically good things (whatever they are) along with pain, and hell could be filled with those intrinsically bad things (whatever they are) along with happiness.  
 
As it stands, we don't find that description of heaven and hell in the Bible or in Christian tradition. So we're left with a hedonistic picture of Christian value. But if hedonism is true, then happiness forms the basis of meaning. That which is meaningful is that which makes life worth living, and vice versa (this is the meaning in life sense of meaning). Happiness makes life worth living. So that which imparts happiness imparts meaning. And this is exactly what we find in the world: all human motivation can be understood in terms of pursuing certain kinds of happiness and avoiding certain kinds of pain.
 
The reason I say it is certainly the case that life is meaningful is that it is certain both that we experience happiness and that our actions make a difference (this is the difference-making sense of meaning). Craig claims that our lives make no ultimate difference if death is the permanent end. But there is a difference, and a certain one at that, between happiness and pain. The real moments of real flourishing that real people really have—that's the difference-maker, and it's a difference that we have direct access to via immediate experience.[*3]

5) So a naturalistic worldview is certainly livable for the same reasons that life in general is livable, including Christian life. But is a Christian worldview livable? A person can find Christian worldviews to be unlivable for the following reasons:

A) To be a Christian you must select a denomination but it can seem like there are no good reasons to select one over the others.
 
B) When you try to do theology to discover which denomination is best, you discover that there are severe challenges to the coherence of all Christian doctrines, including the doctrines of God, the Trinity, the Incarnation, the Atonement, Salvation, Sin, Eschatology, Heaven, Hell, Creation, Faith, Ecclesiology, etc. 
 
C) There are moral horrors in the Old Testament, horrors that Craig happily defends.[*4]
 
D) Jesus says things that are arguably straight up false, including His teachings on divorce (Mt. 19:9), that the "meek shall inherit the earth" (Mt. 5:5), that "whoever is not with me is against me" (Luke 11:23), "how much more will your Father in heaven give good things to those who ask him" (Mt. 7:11), and "the one who believes in me will also do the works that I do and, in fact, will do greater works than these, because I am going to the Father" (John 14:12).
 
On that last one you might try to wriggle out of it by saying that it refers only to the disciples, but a) it specifies "the one who believes in me", not just the disciples, and b) the disciples did not go on to perform greater works than those of Jesus.
 
You might respond by saying "works" refers to spreading the gospel, and "greater works" refers to how greater numbers of people will be reached by the gospel than what Jesus reached during His earthly ministry. But the term for works, τὰ ἔργα, found in John 14:11, refers to miracles. 
 
E) If someone is committed to a pragmatic theory of truth, and if being a Christian is pragmatically false, then Christianity is false for that person. Some people have found being a Christian to be detrimental to their mental health (religious trauma, hell anxiety, etc.) and success in life. (Why God would give us this heuristic of looking to what works for guidance on what to believe and what to do with one's life, and then allow Christianity to not work for so many people, I have no idea.)
 
F) Christianity commits one to unbelievable supernatural elements, including:
 
F.1 - Figures in the Old Testament living to hundreds of years;
F.2 - The Nephilim;
F.3 - The Divine Council;
F.4 - Angels;
F.5 - Demons;
F.6 - Satan as a demonic ruler of the world;
F.7 - Hell as a literal, physical place;
F.8 - Heaven as a literal, physical place;
F.9 - The bread and wine of the Eucharist being the literal body and blood of Jesus, if the Catholics are right about transubstantiation;
F.10 - Various miracle stories like talking animals, the flood, the plagues, Jonah and the Fish, pillars of fire, the miracles of Jesus, John's Revelation, etc.
 
G) Many folks who are LGBT report experiencing Christianity to be unlivable, and even non-LGBT folks find Christian ethics to be impractical and naive.
 
H) Infernalistic versions of Christianity are seen as unlivable because a) it's impossible to socialize with people you believe are going to hell; b) it's impossible to believe that the large majority of humanity is going to hell, including close family and friends; and c) it's impossible to envision oneself being happy in heaven knowing that so many people are in hell, or even that a single person is.
 
6) Speaking of hell, life in hell is, if the accounts of the Bible and tradition are accurate, not worth living. A life not worth living is a meaningless life. So life in hell is a never-ending meaningless life! So Infernalistic Christianity is far more guilty—infinitely more guilty—than naturalism of producing years of meaningless existence. If the existence of God as a perfect being entails that all existence is meaningful, and if life in hell is not meaningful, then God's existence entails that infernalism is false and there is no hell, an indictment of orthodox Christianity.
 
7) Not only does naturalism not entail a meaningless life, but Christianity does not entail a meaningful life. Christian belief can cause the believer to develop a sense that this life is pointless, because life doesn't truly begin until the end of the world and the new heaven and new earth come. This can cause the Christian to develop a lifestyle of passivity, of waiting for the end of the world. This is exacerbated by the pain of effort and risk of injury. Ambition and achievement go out the window, and the Christian lives an empty life waiting for God to do something or waiting for the Rapture – for life to really start. How tragic!
 
If Christianity ends up false, then these Christians will have wasted their one chance to fight for meaningful experiences in this life. Even if Christianity ends up true, it's still the case that these Christians failed to live well in this life. God, sensitive to these things, should encourage Christians to live for this world, say, by granting special protections to the Christian, allowing them to live more fearlessly. Instead, God allows Christians to be persecuted, martyred, and ridiculed for their beliefs. (And allows them to end up stuck in mundane jobs, as mentioned – an important point, considering how meaningless working a mundane job is, at least for those of us who dream bigger. Let me put it this way: Personally, if God revealed himself to me and gave me a dream job, I would believe in Him and devote myself to Him in a heartbeat, because that would be a fast way to know that God is real. Ironically though, such a dream job would involve doing philosophy research full time, research that happens to have led me to the conclusion that Christianity is almost certainly false).

8) Craig worries about atheists living an inauthentic life. Surely Craig can appreciate the fact that there are Christians and even pastors who live inauthentically? There are testimonies of pastors and seminary teachers who lose faith but keep the Christian mask on to keep their employment. That’s an inauthentic life, not because it’s inwardly atheistic, but because it’s inwardly atheistic in combination with being outwardly Christian!
 
If the concern is strictly with authentic living, then you would have to demand that many people leave Christianity, because their Christian life is not authentic. It is exactly because of problems of authenticity (both in terms of one's own cognitive dissonance and the cognitive dissonance observed in other nominal Christians) that many people stop going to church and leave religion altogether. How many of us have experienced the inauthenticity of Christians who preach one thing but practice another? I suspect it is exactly the problems with the livability of Christianity mentioned above that produce these hypocrisies. The falsity of Christianity causes a clash between reality and the Christian's beliefs, and these clashes often manifest as contradictions in the Christian's attitudes and behaviors.
 
Living authentically includes living according to what you believe to be true, and not according to what you want to be true or according to what people around you pressure you into believing. The echo chambers of church life can produce intellectual dishonesty. The desire to be with God, for heaven to be real, for there to be ultimate justice, and the desire for one's devotion to a religion to have not been a waste, can each produce bias that encourages someone to dishonestly stay in their worldview. If Christians were honest, they would realize that 1) They want Christianity to be true more than anything, and 2) This produces a deep bias that seriously compromises the intellectual honesty of the Christian. Do you truly actually believe that Christianity is true, or are you just in it out of fear of death, or desire to see a loved one again? (I don't mean this in a patronizing way in the least. When I was a Christian I wanted to be with God more than anything, and in some sense I still feel that way, although I also grapple with Graham Oppy's remarks on the irrationality of wanting God to exist. See "Naturalistic Axiology", Chapter 5 of Four Views on the Axiology of Theism.)

 *1 - See Reasonable Faith, Third Edition, pg. 74: "So it's not just immortality man needs if life is to be ultimately significant; he needs God and immortality. And if God does not exist, then he has neither."
 
 
*3 - You might think of flourishing and suffering as more sophisticated and involved notions of happiness and pain, and it's really flourishing, not happiness per se, that we should maximize, and it's really suffering, not pain per se, that we should minimize. A very rough approximation of flourishing might be like the following:
 
A human is flourishing when:
 
1) Their basic needs are met, such as food, clothing, shelter, and medical care.
 
2) Their more advanced psychological needs are met, including feeling accepted by and well-integrated into a community.
 
3) They experience happiness on a regular basis.
 
4) They do not experience pain on a regular basis.
 
5) The pains they do experience are instrumentally good, such as the natural pains that accompany self-improvement and the establishing and maintaining of a eudaimonic system. The instrumental goodness easily outweighs the intrinsic badness of these pains. In other words, they do not experience higher-order pain, only lower-order pain.

6) The happiness experienced is instrumentally good and not instrumentally bad. In other words, they experience higher-order happiness, not just lower-order happiness.
 
Ditto, mutatis mutandis, for suffering. Note: I'd claim that we have direct access to whether we are happy or in pain, but we don't necessarily have direct access to whether we are flourishing or suffering.
 
*4 - See "W.L. Craig Defends the Slaughter of Canaanite Children" on the @CosmicSkeptic YouTube channel.