AS44: The Moral Landscape, with Ryan Born, Part 2

We continue our talk with the winner of Sam Harris’s essay contest, Ryan Born! We have a very enlightening discussion of philosophy, specifically moral philosophy. Is Sam Harris right when he says that science can determine moral values? Is wellbeing really the bottom line objective in terms of morality? These are some of the many questions we discuss!

Ryan’s blog is http://pointofcontroversy.com

The Moral Landscape Challenge is here: http://www.samharris.org/blog/item/the-moral-landscape-challenge1 and you can find several more posts about it on Sam Harris’s blog.

9 thoughts on “AS44: The Moral Landscape, with Ryan Born, Part 2”

  1. Towards the end Ryan Born says something like this:

    “foundationally, if all that can be morally good is well being, and well being can be investigated, then acquisition of moral truths is indeed a science”

    Ryan seems to assume that the ability to investigate/measure well-being would produce one-dimensional values: measure well-being (wb) and compare the results. The first action scores 3 wb, the second 7 wb, therefore the second action is more moral.

    What about multidimensional measurements of well-being? Let’s say we measure well-being by absence of pain (ap), positive sensations (ps), intellectual fulfillment (if) and social fulfillment (sf). How does one compare (10ap + 100ps + 5if + 1sf) with (5ap + 10ps + 500if + 10sf)? (A small sketch at the end of this comment makes the problem concrete.)

    Did I miss Sam Harris or Ryan Born addressing these topics? Do they acknowledge the possibility of multiple conflicting correct answers? How does Sam Harris propose to resolve conflicts between individual preferences for well-being?
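
    To make that worry concrete, here is a minimal sketch in Python (the dimension names, numbers and weighting schemes are all hypothetical, not taken from Harris or Born). Neither bundle Pareto-dominates the other, and which one “wins” depends entirely on the weights chosen, and the measurements themselves don’t choose the weights for us:

      # A purely hypothetical sketch: two "well-being bundles" scored on four axes.
      # ap = absence of pain, ps = positive sensations,
      # if_ = intellectual fulfillment, sf = social fulfillment.
      bundle_a = {"ap": 10, "ps": 100, "if_": 5, "sf": 1}
      bundle_b = {"ap": 5, "ps": 10, "if_": 500, "sf": 10}

      def pareto_dominates(x, y):
          """x dominates y if it is at least as good on every axis and strictly better on one."""
          return all(x[k] >= y[k] for k in x) and any(x[k] > y[k] for k in x)

      def weighted_score(bundle, weights):
          """Collapse a bundle to a single number; the weights encode a value judgement."""
          return sum(weights[k] * bundle[k] for k in bundle)

      print(pareto_dominates(bundle_a, bundle_b))  # False
      print(pareto_dominates(bundle_b, bundle_a))  # False -> the bundles are incomparable as measured

      hedonic = {"ap": 1.0, "ps": 1.0, "if_": 0.1, "sf": 0.1}
      eudaimonic = {"ap": 1.0, "ps": 0.1, "if_": 1.0, "sf": 1.0}

      # Which bundle counts as "more moral" flips with the weighting scheme:
      print(weighted_score(bundle_a, hedonic) > weighted_score(bundle_b, hedonic))        # True: A wins
      print(weighted_score(bundle_a, eudaimonic) > weighted_score(bundle_b, eudaimonic))  # False: B wins

    Measurement tells us the numbers on each axis; it doesn’t tell us how to weigh the axes against one another, and that is where the real disagreement lives.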

  2. A very interesting and highly detailed discussion. Ryan is a very entertaining, knowledgeable and absorbing guest!

    To me, your discussion reinforces my earlier suggestion that Harris does not engage sufficiently with the philosophical considerations of ethics in TML. This detracts from the overall impact of the book, in my view.

    The main problem is not that he risks alienating the target readership; it’s that he is still making a massive assumption that utilitarianism (or the notion of the wellbeing of conscious creatures) can adequately cover all (or even most) moral situations and provide answers to all (or even most) moral problems. As Ryan points out, this is not yet a settled issue in philosophy.

    In discussing Ryan’s examples, you go to great lengths to adjust utilitarianism to cover any situation by modifying it for the problem at hand. The issue is that when this is done, it tends to borrow heavily from other moral systems. You may see the adjustment as just “obvious”, but in fact you are taking something directly from deontology to make it. This may be subconscious, but unless I’m making a big error here, you should be able to see it, if you haven’t already, when I identify it below.

    For example, in AS33, between about 48 and 51 minutes on the timestamp, you and Ryan are talking about the “optimum population dilemma” with regard to consequentialist thinking, and you make the point that realistically we would not be able to reach a higher average wellbeing per person by reducing the population, because “getting there would be bad”. While this is fine, and I agree with you about it, we must acknowledge that “getting there would be bad” is not actually part of consequentialist considerations per se. As the responses to the trolley problem demonstrate, killing may sometimes be justified in consequentialism; we must consider the circumstances to determine whether killing is justified (already done here: average wellbeing is being raised, so tick that box).

    However, “getting there would be bad” is very much a part of deontology (for example, the rule that says “killing is never justified”). Therefore you are using deontology to argue for consequentialism. As Ryan says, “Actually that’s just the problem, consequentially, it (reducing the population down) would be… (better)”.

    Let’s make no mistake: consequentialism is the most commonly used and best-suited ethical approach to most circumstances. But it doesn’t always work. What if someone doesn’t have the capacity, time or correct information to make a moral judgement based on consequences? Many people can lead good lives by following the simple rule of “do no harm”, or by looking to their role models as living examples of how they should behave, so it’s clear that there are other valid approaches, which have advantages consequentialism doesn’t provide. Sure, these people also make consequentialist decisions a lot, and I’m not saying we shouldn’t be chiefly consequentialist (I think we should), but one ultimately can’t rule out the possible effectiveness of either deontology or virtue ethics in certain situations.

    1. Thanks for your reply! I didn’t imagine my comment was worthy of one… To be honest, I’m just playing devil’s advocate, as I didn’t realise Thomas was so pro-Harris. Not that I’m anti-Harris: I agree with him most of the time.

      As you say, yes, we could add to the intended consequences of the population problem, but wasn’t it already supposed to have been determined, for the purposes of the example, that overall average wellbeing was in fact being increased by lowering the population?

      So we can argue that this is not the case. Having also to consider what has to happen in the act of moving from one level of wellbeing to another, within a consequentialist framework, makes it even more complicated; any “suffering in the implementation of a population reduction plan” needs to be taken into account in the initial determination of the change in wellbeing. So the initial conditions have changed and the initial assumption was wrong. We can change the equation, and this is fine, but there is no consequentialist “barrier”. Otherwise you are in fact using deontology.

      The reason not to reduce the population is not that it involves killing people; it’s that we determined that the wellbeing of those remaining would be adversely affected, meaning that overall wellbeing would not be raised. It’s complicated. In fact, a big criticism is that it can become too unwieldy a system: an initial look can suggest X, but if we study it more deeply, Y emerges.

      Just as in the trolley problem, can you really blame people for making a different decision from yours? Many people are appalled that someone would actually throw the switch to save the five people, condemning the one (I know Tracie Harris has expressed an opinion on this).

      If we conclude, using consequentialism, as you seem to do (and I agree), that increasing average wellbeing in this way is not a desirable result, and we also say that increasing total wellbeing by increasing population is not a desirable result either (as with the 100 billion people whose lives are barely worth living, which Ryan discussed; see the small numerical sketch at the end of this comment), then I question what real-world use wellbeing actually has, at least from a strategic standpoint, and also how wellbeing could even be increased at all (besides technological advances etc. that are already happening). The definition of wellbeing can, it seems to me, become easily muddied.

      It will always be possible to argue against certain actions that may or may not increase wellbeing for different people (is increasing the health of X people in Africa worth increasing the debt of your country by Y, for example?). Whose wellbeing is more important, and moreover, who gets to decide that?

      It could very well be that we find it exceptionally difficult to increase overall wellbeing at all, with all these extra considerations. Seeing as the theory is supposed to rely on maximising wellbeing, doesn’t the system of calculating that need to be refined? To me, it reinforces the idea that the philosophy is not quite there yet, and this still seems problematic for Harris’s approach in TML.

      For many people, following good role models and clear rules of conduct is a lot simpler, and can go a long way towards an ethical life. It’s just that religious rules and religious role models always seem to be very bad examples to use!
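
      Just to make the “total versus average” tension explicit, here is a tiny numerical sketch in Python; the figures are invented purely for illustration and are not taken from Ryan’s example or from TML:

        # Invented numbers, purely to illustrate the average-vs-total tension.
        small_happy_world = {"population": 1_000_000, "avg_wellbeing": 90.0}
        huge_marginal_world = {"population": 100_000_000_000, "avg_wellbeing": 0.01}

        def total_wellbeing(world):
            # Total wellbeing = population * average, by definition of "average".
            return world["population"] * world["avg_wellbeing"]

        print(total_wellbeing(small_happy_world))    # 90 million
        print(total_wellbeing(huge_marginal_world))  # 1 billion -> higher total, far lower average

        # A total-maximiser prefers the huge, barely-worth-living world;
        # an average-maximiser prefers the small, happy one. Choosing between
        # the two aggregation rules is itself a moral judgement, not a measurement.

      Before wellbeing can be “maximised”, someone has to decide which aggregation rule counts, and that decision is not delivered by the measurements themselves.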

  3. I am still struggling to grasp the distinction between deontology and consequentialism.

    Arguments for deontology seem to say: we need rules because following rules produces better consequences than case-by-case consequentialist calculation does. Isn’t that itself a form of consequentialism?

  4. Thanks for the reply James! Interesting read.

    I have little use for Ryan’s example of hundreds of billions of people living lives barely worth living, because a definition of well-being that doesn’t include a reduction of suffering makes no sense to me. That example would be a reduction of well-being, not an increase in it.

    I’m not really a moral realist myself, but of the various means of doing ethics, I find consequentialism most convincing because of its connection to real-world results. Perhaps the discussion of these matters without that connection isn’t that fruitful. Maybe these sweeping examples of global population changes and the like are just as divorced from reality as doctors doing at-will organ harvesting or people so fat they can safely stop runaway trolleys.

    What I’d really like to see would be challenges to consequentialism that use real case studies and real current situations, rather than made-up thought experiments that isolate variables to the point of isolating themselves from reality.

    Perhaps this idea of “well being” ends up being useful when we’re not talking about fake things and instead talk about individual cases involving real people.

  5. Some thoughts on the “Trolley” and “Bad Doctor” scenarios:

    Disclaimer – I’m not a philosopher, I know very little of philosophy, and what I’m adding may be obvious to everyone.

    Trolley Problem 1: You are the only person who can hit the switch, moving the trolley from the track with the five people on it to the track with one person on it, thereby creating a net saving of four lives.

    Trolley Problem 2: You are the only person who can push the only sufficiently-heavy, trolley-stopping person in front of the trolley on the five-person-occupied track, thereby creating a net saving of four lives.

    Bad Doctor: You are a doctor who can sacrifice a healthy patient to provide organs for five transplant recipients, thereby creating a net saving of four lives.

    These are just stories, and I believe we consider them as such, and place ourselves into the narrative. Let’s start with the easy one.

    Trolley Problem 1: Here we have a runaway trolley, heading towards either five victims or one. However, all six of these people have, carelessly or recklessly, wandered onto the tracks. They have become “participants” in the scenario. You are the person with the finger on the button. Perhaps you are an employee of the trolley company, trained for this, and it is your responsibility. Or perhaps you are just the only person next to a switch, with a large sign that says “Citizen – in case of runaway trolley there (arrow pointing at trolley), YOU are RESPONSIBLE for using this switch to decide which of these tracks (arrow pointing at tracks) the trolley will follow. Choose wisely.”

    In either case, you are also a participant. You have a choice to make, and in this case I believe the logic is fairly simple. Unless the five are Nazis, the only universal victims of choice.

    Trolley Problem 2: The trolley and the victims remain the same. However, at this point you are not a participant, only a “bystander”. So is the heavy person. You can choose to become a participant by throwing yourself in front of the trolley. The heavy individual can also choose the same. However, we live in a society where you cannot make that choice for the heavy person; we call that murder. If we were to say the moral action is to push, then the consequence is a society where it’s okay to go about making these decisions for other people without their consent.

    Bad Doctor: The doctor is a participant by virtue of his skills. The recipients are participants by virtue of their illnesses. However, the healthy patient is merely a bystander. We as a society have agreed to accept only brain-dead people who have previously volunteered as appropriate participants for donation. To do otherwise is again considered murder. Approval of this action results in the consequence of a society where nobody trusts doctors.

    So, to sum up: could it be our sense of our place in a narrative structure that allows us to make moral decisions of these types quickly, and in doing so to express deeper forms of consequentialism than just adding up lives? I don’t know; perhaps we should think up some other narratives that explore this hypothesis. Some science person should test this! Of course, given my ignorance, perhaps they already have. I would welcome any input anyone could give me on this.

  6. You’re oblivious to the big picture. We truly have control over population growth: licenses for kids, or (and I may be wrong) with our world power and innovations and new technology and science, changing small genes for a short amount of time so that a person is unable to reproduce until the genes change back to normal, thereby in some way or another going through a test to prove they’re worthwhile as a person or parent. Diets have drastically lowered it more and more, I’ve noticed, and the way these things and people are going, eventually we will get back to the way we were: small colonies in certain places, definitely segregated.

    But you cannot morally be okay with taking a human life; keeping one from coming in bothers me, but at the same time you’re not stopping the potential of something great for every one of us, including myself, with your ability to change the world in such beautiful and amazing ways that we can never comprehend or understand. In fact, no matter what, we’re not perfect, but the parts of us that are not perfect are different now. Everyone and everything – art, speech, business, life, people, animals – has a true beauty and a purpose. Unless somehow, someway, I’m speaking to f****** god (by God I mean the overall spirit and essence of everything), you have no right to play this role. And here I am, thinking logically, smartly, soundly, beautifully, and at the same time having an actual place to live, just waking up to my consciousness of reality and existence. First step: understanding me, understanding what your morals are and what’s behind your understanding. What is it okay to teach a person?
