I’m back for some more analysis of points brought up in the Sean Carroll episodes. Spoiler alert: Sean Carroll was right about Boltzmann brains. Who would have thought! It took a commenter with a background in physics, one who actually addressed the problem I raised on its own terms, to convince me. I got even more comments on objective morality, so I spend some time clarifying that. Finally, I tried listening to a conservative podcast for about two weeks! Tune in and I’ll tell you how that went.
Your discussion of Boltzmann brains is remarkably well informed, especially for a non-physicist. Your contention seems to be about how a certain integral works out: a fluctuation that generates one mind is more likely, but the lower-likelihood scenarios generate many minds, so how does the probability distribution over minds work out? This seems like a totally legitimate question that can only be answered empirically or by a theory-based calculation. TL;DR: not bad.
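P.S. For what it’s worth, here is the back-of-the-envelope version of how that calculation is usually argued to go, assuming standard Boltzmann fluctuation statistics (the notation and setup here are generic, not from the episode). The probability of a thermal fluctuation that lowers the entropy of an equilibrium system by ΔS scales as

P(ΔS) ∝ exp(−ΔS / k_B)

If a single brain requires an entropy dip of ΔS₁, then a fluctuation producing N brains – or one brain plus the entire evolutionary history leading up to it – requires roughly N·ΔS₁, so

P(N brains) ≈ exp(−N·ΔS₁ / k_B) = [P(1 brain)]^N

The expected number of minds is then something like

E[minds] ≈ Σ over N ≥ 1 of N · exp(−N·ΔS₁ / k_B)

and because ΔS₁/k_B is astronomically large, the N = 1 term utterly dominates: many-mind fluctuations are exponentially rarer than the extra minds they contribute, so lone freak brains win the integral.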
I don’t honestly know of a non-berserker Republican talk show host (I am careful to make a distinction between conservatives and Republicans). Alex Jones is worse than Michael Savage. Jones’ current rant is that Michelle Obama is actually a man—no kidding—which makes Barack Obama a Muslim communist Kenyan homosexual traitor. Just when you thought he couldn’t get any worse.
David Brooks, who writes for the NY Times, is a conservative with a brain. George Will is another. These guys write regular columns but don’t have talk shows or podcasts that I’m aware of.
The Boltzmann Brain concept sounds so much to me like the surface-to-air missiles in the Hitchhiker’s Guide to the Galaxy that were suddenly transformed into a sperm whale and a bowl of petunias by the spaceship’s infinite improbability drive. I wonder if that’s where Douglas Adams got the idea.
Regarding objective morality, I’m always somewhat surprised that the theory of Alonzo Fyfe, the Atheist Ethicist (http://atheistethicist.blogspot.jp/), is never considered, as he has a model of objective morality that makes more sense (as far as I can tell, not being a philosopher myself) than any of the alternatives.
If I had to pick a core idea of that theory (which might differ from the one Alonzo would pick), it’s that morality is not about what one individual should do, but about how a group of people (we are social animals, after all) should use praise and condemnation to achieve our goals. It’s rather subtle, and my short presentation cannot do it justice.
Regarding the use of the words objective and subjective, I believe Alonzo uses the example of somebody’s height, which is clearly an objective measure, despite the fact that it depends on that specific person and is therefore not general. In the same way, given how we behave as a species and what motivates us, we can certainly consider that there would be objective ways to bring about the results we desire, including in the domain of morality, even if the specific answer is more elusive.
Well, to call it a theory of “objective morality” requires specifying that there are two different definitions of “objective”.
There is the type of “objective morality” that identifies moral principles that are entirely independent of psychological states. Of course, that type of objective morality really does not exist.
However, there is a type of moral objectivity that says that the truth of a proposition is independent of whether or not a person or people believe (or desire) that it is true. This type of moral objectivity exists.
Take the proposition, “Jim prefers butterscotch over chocolate.”
If there were no mental states – no preferences – then this statement would be nonsense. It would not even be false – it simply would not apply.
On the other hand, the real world has preferences, and it has a person named Jim. This is a fact. It is as much a part of reality as, say, the distance of the Earth from the Sun, or the fact that a water molecule is made up of two hydrogen atoms and one oxygen atom. It is the type of fact that explains and predicts the movement of real objects in the real world. Anybody who denies that Jim prefers butterscotch to chocolate would be wrong.
Now, the fact that Jim prefers butterscotch to chocolate does not imply that anybody else does, or even should, prefer butterscotch to chocolate. But then again, the fact that Jim is right-handed does not imply that anybody else is or should be right-handed, and the fact that Jim is X years old does not imply that everybody else is or should be X years old. Yet the propositions “Jim is X years old” and “Jim is right-handed” are objectively true. So is the proposition that Jim prefers butterscotch to chocolate.
If Jim were to say, “Butterscotch is better than chocolate,” in a context where that phrase reduces to (says exactly the same thing as) “I prefer butterscotch to chocolate,” then “Butterscotch is better than chocolate” is just as objectively true as “Jim prefers butterscotch to chocolate.”
Of course, I do not have time to go into it here, but the question is: can moral statements be reduced to statements of this type? If they can, then they are objectively true or false.
These objective values are not independent of psychological states. However, psychological states exist. They are a part of the world. You cannot explain and predict many of the things that happen in the world without using mental states to explain them. And relationships between objects of evaluation and mental states are just as real – just as objective – just as much a part of the world as anything else.
I think you are using “objective” and “subjective” in the same way Thomas is: you have “the results we desire” where Thomas has “well-being,” but both carry the same philosophical implications, I think. What I think you are both getting at is that if we can all agree on some basic starting point, then we can shake an objective theory of morality out of those basic assumptions. This is where I disagree with Thomas: I think his theory is far too heavily front-loaded. His concept of well-being, or in your words “desired results,” is doing most of the moral heavy lifting. Coming to a consensus on what constitutes well-being is, I think, the hardest part, although I will grant that should such an unlikely event come to pass, we could move forward toward Thomas’s quasi-objective morality.
The concept of well-being is so broad that any and all moral systems could be supported by variations on it. Even granting the broadest definition (which I still think presents hurdles), that of minimizing suffering, we are left to determine what suffering is. Is it purely physical pain, or does mental anguish count? How do we weigh the difference between the two? Is toiling away at a menial job suffering, or does it instill character? You would need endless debate on endlessly finer points, and we still would not have arrived at any moral prescriptions – we would just be defining well-being.
My own personal preference is to view morality as a combination of social and biological forces that interact to create our basic moral intuitions. How those intuitions play out in the world depends on our skill at convincing people and on the power dynamics inherited from history, which I guess could seem unsatisfying and merely contingent on chance.
I actually don’t think broad ethical agreement is that elusive, if people get clear about what they mean and are honest. I view the objection to notions of well being as a foundation for morality the same way I view consciousness denial. Yes, it’s possible to articulate these positions in a logically coherent way. But all of our discussion on these issues is predicated on a basic understanding that we’ve all had experiences that make specific positions certain. Abstract arguments on paper can make sense of consciousness denial and moral relativism, but, by being conscious creatures, we all have first hand data that refute them.
First, your objection about character building is a tacitly utilitarian one, so it hardly flies as an objection to consequentialist ethics.
Second, we all have first hand experience of states of being that we can effortlessly rank ethically. Imagine two circumstances. In the first, your life proceeds as it has. In the second, you were flayed alive and then magically repaired at age 10, then had your memory of this event erased so as to prevent any “character building.” If you see no difference between these two scenarios, then I would argue you don’t have any place in a conversation about ethics. You’re just playing an uninteresting and unhelpful game with words.
On the question of deriving “ought” from “is,” note that nobody considers this a problem for hypothetical imperatives based on desires: “If you want to increase your chances of surviving an auto accident, you should wear a seat belt.” It is only categorical imperatives that generate an is-ought problem. If morality is a system of hypothetical imperatives (i.e., “Given the set of desires that exist as a matter of fact within a community, they ought to adopt a social rule against lying”), there is no is-ought problem.
Oughts, in this case, are objectively true or false.
By the way . . . the objectivity of hypothetical imperatives applies to the game of chess.
“If you want to win this game of chess, then you ought to make move X and not move Y” is an objectively true statement. It only applies to people who want to win a game of chess, but this does not prevent it from being objectively true.
Similarly, the statement “Given human desires, people ought to establish social rules against breaking promises, lying, and taking the property of others without their consent” is objectively true. It is not a fact that exists independent of desires – where desires do not exist, reasons to establish these social rules do not exist. However, desires DO exist – they are a part of the real world. Consequently, the reasons to establish these social rules exist – they, too, are a part of the real world.
There is a significant difference between the concepts of desire and well-being.
The proposition, “Agent desires that P” is a purely descriptive statement. It is neither good nor bad in itself that Agent desires that P. It is simply a fact.
Yet, this fact reports a reason for action. An agent who has a desire that P has a motivating reason to realize a state of affairs where P is true. This is what desires do. Desires select the ends or goals of intentional action, while beliefs select the means.
As I mentioned above, we use desires and beliefs to predict and explain events in the real world. Why did Japan attack Pearl Harbor? Why did Jim refuse that promotion?
However, “well-being” is a value-laden term. What is a part of well-being is good by definition. Thus, it does not answer questions of value; it only shifts the question to whether a given state counts as well-being.
(Note: I have been investigating the thesis that well-being is found in the fulfillment of one’s self-regarding desires, thus reducing statements of well-being to statements about descriptive facts that generate reasons for action.)
On the idea that we must “agree on a basic starting point” . . . actually, we do not.
Imagine two families living in close proximity. Family 1 is a middle-aged couple that gets its drinking water from a conventional well. Family 2 has three children and has a truck come by once a week from the city to fill a local water tank.
One day, one of the children discovers a big hole in the field. It turns out that farmers decades ago had built a conventional well, then covered it up. Over the years, the cover rotted and parts of it have collapsed, revealing a deep hole.
Family 1 wants to bring in somebody to remove this old rotted covering and put on something that is more sturdy. This is because they do not want anything to fall (or be thrown) down the well that might contaminate the water supply.
Family 2 wants to bring in somebody to remove the old rotted covering and put on something that is more sturdy to prevent their children from accidentally falling into the well.
These two families have not agreed on any set of common principles. They do not even have common interests. Yet, their different and distinct interests both recommend the same course of action – bringing somebody in to put a new and sturdier cover on the well.
If you take any community in which the following is true: (1) the community is made up of beings that have desires, and (2) some desires are malleable, then it is virtually axiomatic that there will be some desires that people generally will have many and strong reasons to promote and others they will have reason to inhibit. The former are desires that tend to fulfill the most and the strongest of other desires, while the latter are desires that tend to thwart the most and strongest of other desires.
We do not need to introduce any type of presupposed agreement for this to be the case. The motivation to promote the “good” desires and inhibit the “bad” ones comes entirely from the desires fulfilled by the “good” desires and thwarted by the “bad” ones.
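To make “fulfill the most and the strongest of other desires” concrete, here is a deliberately crude toy tally (the numbers are invented just for this comment). Imagine a three-person community in which each person has a desire for true beliefs of strength 3, and one person also has a desire to lie for personal gain of strength 2. Acting on the desire to lie fulfills that one desire but tends to thwart the other two people’s desires for true beliefs:

value(desire to lie) ≈ (strength of desires fulfilled) − (strength of desires thwarted) = 2 − (3 + 3) = −4

An aversion to lying scores the mirror image, +4. On this tally, people generally have many and strong reasons to inhibit the former and promote the latter – and the motivation comes entirely from the desires themselves, with no prior agreement needed.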
Now, we add one more contingent biological fact. The creatures in this community have a brain with a “reward system”. That is to say, desires are molded or modified through a system of rewards and punishment. Let us also add that praise operates on the brain as a reward and condemnation as a punishment.
Now, we have a community where there are desires that people generally have many and strong reasons to promote through rewards and praise, and desires that people have reason to inhibit through punishment and condemnation.
I do not think it is at all difficult to support the conclusion that people generally have reason to use these tools of reward, punishment, praise, and condemnation to support such things as: (1) a desire to help those in dire need, (2) a desire to repay debts, (3) an aversion to taking the property of others without their consent, (4) a desire to keep promises, (5) a preference for true beliefs over false beliefs, (6) an aversion to the use of violence except in defense of the innocent.
This does not require any type of intrinsic prescriptivity. These desires obtain their value precisely because they are useful. The motivation to reward, punish, praise, and condemn comes entirely from the desires that these practices help to fulfill.
Nor does it require any type of prior agreement. It requires a community of creatures with desires, some of which are malleable – nothing more.
Mark Steyn is a conservative voice that you may find palatable. He operates at a high level of intellectual honesty and his sense of irony is outstanding.
And for British conservatives, you can try Douglas Murray from the Henry Jackson Society and James Delingpole of the Radio Free Delingpole podcast.
Sigh. I’m going to troll and wonder whether an apologist could use the Boltzmann brain to argue that a god is also highly probable, because maybe he is a brain, or a series of brains… Noooo.
For conservative economics, try the podcast EconTalk with Russ Roberts.
Hey Thomas,
So I’ve seen and heard a lot of discussion on objective morality, and I think, much as with the comments you were receiving on the Boltzmann brain problem, most of it is missing the point.
First, I want to concede the starting point: let’s assume that we have all agreed on the desired outcome of morality, which, it seems, is equivalent to agreeing on the rules of chess. I would like to argue that even if this were the case, as it is in the example I’ll give below, there would still be vast disagreement on how to accomplish that outcome.
The example I’ll be using is that of consequentialist ethics, a term used to broadly describe any moral theory concerned primarily with calculating the well-being of all agents. Within this group of ethicists there is disagreement about how and what to calculate, who counts as an agent, and, importantly, how much weight we should give to well-being over other considerations (such as duty and rational self-interest).
This is where I believe the analogy with chess breaks down and where we can pry apart the idea of even an objective starting point. An agreement on moral objectives leads inevitably to an agreement on outcomes, but from those agreements it does not follow that there must be an agreement on the methods or “moves.”
None of this is to say that atheistic ethics needs to concede anything to the theist critic in terms of objective morality. Even if the atheist cannot point to an objectively derived set of rules concerning morality, they can certainly respond by saying that as rational, empathetic agents the majority of humanity agrees on loose codes of conduct, ones that far predate that of any conception of an objective adjudicator in the sky. And that from those codes we can then derive logically consistent and well-being enhancing ethics.
I hope I’ve helped to clarify the anti-objectivist position a little bit, and I would be happy to have a conversation if anything I said made any sense whatsoever.
Last thing I’ll say is that as a recent listener of the show, I’ve been enjoying myself tremendously and hope you keep up the good work.
Like others have said, it’s hard to come up with a conservative podcast or show that’s not super-crazytown, but Michael Savage is DEFINITELY NOT the way to go for thoughtful conservative perspective. He’s in between Limbaugh and Jones on the crazy meter…nuttier than Limbaugh, but not a complete conspiracy loon like Jones.
Look for anything with Andrew Sullivan in it, although I don’t think he has his own podcast. He was one of the first political bloggers back in the early aughts, and he stopped last year or the year before because he got burnt out on it. But he’s a thoughtful conservative, and he wrote an extensive, wondrous, harrowing article on the dangers of a Trump presidency.
Hi Tom,
I listened to episodes AS244 and AS245, where you talked to Sean Carroll & where you talked about receiving mail that attempts to explain the Boltzmann Brain stuff to you. I hope I am not one of those who misunderstood you & offered useless, irrelevant information.
The chance of a physicist’s brain alone is greater than the chance of a physicist’s brain AND the entire course of Darwinian evolution as well. The more contributing factors you add to an event, the LESS likely it is to occur. Believing the opposite is the logical fallacy of conjunction.
The “You Are Not So Smart” podcast, episode 077, “The Conjunction Fallacy,” will explain it better than I can & may help you see the error in thinking a brain is less likely to occur than a brain AND the evolutionary processes that produce the brain. Yes, there is little to no common sense to that, but the math works out: fewer contributing factors = more likelihood of occurring.
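To put the arithmetic behind that in one line (this is a generic probability identity, not something specific to the episode): for any two events A and B,

P(A and B) = P(A) × P(B | A) ≤ P(A)

because P(B | A) can never exceed 1. So whatever the probability that “a physicist’s brain exists,” the probability that “a physicist’s brain exists AND a full evolutionary history produced it” cannot be any larger – adding a conjunct can only shrink the probability, usually drastically.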
Minutes 11:05 – 14:00 give a good explanation of the conjunction fallacy. You can skip to that; you don’t have to listen to the whole podcast. Minutes 3:33 – 5:55 & 8:20 are good examples too.
I hope this helps.
Raphael