AS46: Moral Philosophy

Is consequentialism the obvious choice for moral theory? Well, not according to most philosophers. But then again, no one theory is the obvious choice for philosophers; they are split three ways. What are we to make of this? Hear Thomas’s thoughts on moral philosophy, and consequentialism in particular, in light of last week’s episodes featuring Ryan Born, the winner of the Sam Harris Moral Landscape Challenge!

11 thoughts on “AS46: Moral Philosophy”

  1. As grating as it must be for philosophers to hear science-types misappropriating basic concepts in philosophy, it’s equally hard to hear philosophy-types talk about utility optimization in such a basic way. My work is in optimization and distributed resource allocation. There is a whole field of mathematics devoted to optimization.

    The optimization problem is usually formulated as:
    maximize: f(x)
    subject to: some constraints on x

    x is an optimization variable that represents the different possible states of the system. It could be multidimensional: one dimension for each agent (conscious creature) in the system, and others for any other metric we want: x_i = {socialwellbeing_i, intellectualfulfillment_i, …} and x = {x_i | i \in Universe}. Basically, the space of the variable x is Harris’s landscape.

    The point of contention is f(), the utility function that maps any state of the network to a scalar measure of utility. Ryan talked about two functions, sum and average, found some objections, and seemed to think that he was done. IMO Sam Harris’s claim is that there exists some function that would capture utility in a way that jibes with our notions of morality. If Ryan wants to disprove that claim, he would have to find a single case that cannot be handled by ANY utility function, or else show that FOR ALL utility functions some such case exists.
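    To make this concrete, here is a minimal Python sketch (all names and numbers are mine, purely illustrative) of a population state x and the two aggregation functions Ryan discussed:

    # Illustrative only: a population "state" x, where x[i] is agent i's
    # well-being score, and two candidate utility functions f().
    def f_sum(x):
        """Total well-being: add everyone's score."""
        return sum(x)

    def f_average(x):
        """Average well-being: total divided by population size."""
        return sum(x) / len(x)

    x = [0.8, 0.3, 0.95, 0.05]     # made-up scores for a four-person world
    print(f_sum(x), f_average(x))  # same state, two different measures of utility

    Objections to these two particular choices only show that these two are inadequate; refuting the existence claim needs the for-all quantifier.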

    FYI, other functions are available.
    http://www.cs.helsinki.fi/u/ldaniel/mm_cn/lec1.2_cc_resource_allocation.pdf
    In fact, a general class of utility functions called alpha-fair includes a tunable parameter “alpha”, between 0 and infinity, to adjust the degree of fairness we want. As for different dimensions of utility, we can use weights to aggregate them; in fact, almost certainly different sets of weights for different people!
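    For reference, the alpha-fair utility from that resource-allocation literature, with per-dimension weights added, might look like this sketch (the dimension names, scores, and weights are made up for illustration):

    import math

    def alpha_fair(u, alpha):
        """Alpha-fair utility of a single positive quantity u.
        alpha = 0 recovers a plain sum when aggregated, alpha = 1 gives log
        utility (proportional fairness), large alpha approaches max-min fairness."""
        if alpha == 1:
            return math.log(u)
        return u ** (1 - alpha) / (1 - alpha)

    def aggregate(agents, alpha, weights):
        """Weighted aggregate utility over per-agent, per-dimension scores."""
        total = 0.0
        for scores in agents:  # one tuple of dimension scores per agent
            weighted = sum(w * s for w, s in zip(weights, scores))
            total += alpha_fair(weighted, alpha)
        return total

    # Hypothetical two-dimensional scores: (social well-being, intellectual fulfillment)
    agents = [(0.8, 0.4), (0.3, 0.9), (0.05, 0.1)]
    weights = (0.6, 0.4)  # made-up relative importance of the two dimensions
    for a in (0.0, 1.0, 2.0):
        print("alpha =", a, "->", aggregate(agents, a, weights))

    The higher alpha is, the more the worst-off agent dominates the aggregate, which is one way to tune how much “fairness” the utility function cares about.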

    For resource allocation problems in networking and economics, there are typically constraints due to physical limits (network connectivity and link capacity for networks, limits on capital and supply for a company). How do we find these constraints in the moral landscape? Certainly many constraints on the state of the world are physical and are within the domain of science. Maybe some other constraints will turn out to be things like rights, and that’s where philosophers would stake their claim (debatable).
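    To show what “optimizing under constraints” looks like mechanically, here is a hedged sketch using scipy; the budget constraint is just a stand-in for whatever physical limits actually apply, and nothing here comes from the episode:

    import numpy as np
    from scipy.optimize import minimize

    N = 4          # a toy population of four agents
    BUDGET = 10.0  # hypothetical total resource available (the physical limit)

    def neg_utility(x):
        """Negative of a log-utility sum; scipy minimizes, so we negate to maximize."""
        return -np.sum(np.log(x))

    result = minimize(
        neg_utility,
        x0=np.full(N, BUDGET / N),     # start from an equal split
        method="SLSQP",
        bounds=[(1e-6, None)] * N,     # each allocation must stay positive
        constraints=[{"type": "ineq",  # "ineq" means fun(x) >= 0
                      "fun": lambda x: BUDGET - np.sum(x)}],
    )
    print(result.x)  # with identical agents, the optimum is the equal split

    Change the constraint or the utility function and the optimum moves, which is the sense in which the landscape is shaped by both f() and the physical limits.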

    I’m not sure what Sam Harris’s view is on the time-varying nature of the moral landscape, but I think that it does change over time. The limits (constraints) of what’s possible have changed and will continue to change over time.

    The fact that we don’t know the current correct form of the aggregating function (alpha, weights?), the constraints, or even all the relevant dimensions of the landscape x doesn’t matter. The claim is merely that the moral landscape exists and that our best hope for discovering its contours is through a better understanding of the physical world and ourselves through science.

    Actually, even if you insist that there are aspects of the landscape that cannot be accessed by science, you’re still not really taking down the idea of the moral landscape.

    TL;DR: You can use constraints and more sophisticated utility aggregation methods and still be optimizing.

  2. Great stuff. I certainly do not count myself as an expert, but here are a few thoughts on this.

    This really boils down to the idea that we (and indeed many of our great ape cousins) can act with selfless generosity while being completely self-serving at the same time.

    What we are missing here is both classical psychology and neuropsychology. Whether or not this is consequentialism, I honestly have no clue.

    The latter shows that a great number of our personal actions are driven by neurochemicals pushing us to “feel good” or “feel bad”, and, as explained by smarter people than me (such as Dr. Steve N of the Skeptics Guide to the Universe), often (read: all?) our decisions are made before we are conscious of them. The former shows that many of our behaviors are modified by our current context.

    Many psychology studies could completely alter the trolley analogy by simply stating one of the following: You know no one is watching you when you are ready to pull the lever. You know someone is watching you while you pull the lever. An authority figure makes it clear they will punish you for pulling/not pulling the lever. Part of your “in-group” is on the track, and an “out-group” is on the train. Mommy and daddy taught you not to pull any levers. If you don’t pull the lever, you get $10 million, scot-free. Etc., etc.

    We (i.e. me in my cozy western life) are quite privileged to look at these things as we do. Perhaps the world is a better place because of ideals of “freedom”, “equality”, “fair justice” and the popping of egocentric bubbles. But this is not a universal truth, nor is it some special morality shared by humans; it’s contextually based on our current position in history. Not long ago, historically speaking, it was morally right (and accepted by the majority of our western society) that different humans should be segregated, and it was immoral to keep half our population from voting or working outside the home.

    As for why one starving kid is better at getting cash than many starving kids, it’s pretty simple: either (A) you think (or unconsciously decide) “wow, I can’t help that, it’s too much for me to handle”, or (B) it is made more personal with targeted messaging: “YOU can help this ONE child, can’t YOU?”. Now, whether the feeling of guilt this pushes triggers our “moral being” or is just a trick of neurology is probably something Freakonomics, or someone else, has covered better than I could.

    In the context of the meteor scenario, I think it hitting a large populated area would certainly dive into “it’s too big for us to handle, so it’s time to hush up and hope it goes well” territory. Certainly Canada’s military experience in the Rwandan genocide makes a horrible small-scale example of this thought process.

    1. Sigh. Correction: it was moral to keep half the population from voting and working outside the home. (Second paragraph of the ramble.)

  3. Two thoughts from a non-expert.
    First, Thomas, I am not sure why you are getting hung up on the physics of the trolley thought experiment. I find myself able to accept the scenario, suspending disbelief perhaps, in order to understand the point it is trying to illustrate. I am not consciously aware of any influence the faulty physics has on my moral evaluation.
    Second, I think I agree with you when you talk about moral intuition and how it’s not necessarily “right” just because we feel it. I think it’s analogous to our instinct to have sex. It’s a motivation that emerged through evolution because it is beneficial. There is no reasoning behind it; it is just a blind, unsophisticated motivation. Notice how the instinct to have sex does not decrease when we know that conception is not possible. Empathy, I believe, evolved in a similar way. It is a blind motivation that does not turn off when utilitarian reasoning conflicts with it, but it is generally beneficial to our species.

    1. I agree with Thomas about the trolley example. Since we cannot know the future with certainty, a real-life practical consequentialist would make moral decisions based on his/her best guess as to the likely consequences. If I claim that in day-to-day life you make moral decisions based on likely consequences, and your counter-example involves superhuman strength, momentum-defying fat guys, an alternate universe with only one person, or time traveling to kill baby Hitler, then it’s somewhat of a non sequitur. Even if you can picture the scenario, your intuition would be tied to your ability to extrapolate the future based on how things work in real life.
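      In decision-theoretic terms, that “best guess as to the likely consequences” is just an expected value over possible outcomes. A tiny, purely illustrative Python sketch, with made-up probabilities and values:

      # Choose the action with the best expected outcome; the numbers are rough
      # guesses about a trolley-like situation, not facts about any real case.
      actions = {
          "pull_lever": [(0.9, -1), (0.1, -5)],   # (probability, lives lost)
          "do_nothing": [(0.95, -5), (0.05, 0)],
      }

      def expected_value(outcomes):
          return sum(p * v for p, v in outcomes)

      best = max(actions, key=lambda a: expected_value(actions[a]))
      print(best, {a: expected_value(o) for a, o in actions.items()})

      Scenarios that stipulate physically impossible certainties ask our intuitions to operate outside the conditions they were formed under.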

  4. I am enjoying your discussions on morality. Here’s a couple more thoughts on consequentialism.

    1) According to Gödel, even our formal systems of mathematics are necessarily incomplete. How much more unlikely is it, then, that we could create a consistent moral system? Perhaps consequentialism is like democracy: the least bad system available.
    2) Rather than thought experiments, try applying consequentialism to larger social issues. Here are a couple to see how it would fare:
    a. Should all recreational drugs be banned? What about alcohol and caffeine? Should all recreational drugs be legalized? What about crystal meth and heroin? Who decides?
    b. Should there be limits on free speech? Who decides on the limits?

    Keep up the good work Thomas.

  5. Hi Thomas.

    First let me say that I’ve recently discovered your podcast and am thoroughly enjoying catching up on past episodes and hearing your thoughts.

    With regard to this last episode, I just wanted to point you and your other listeners to a fantastic book on this very subject called “Moral Tribes” written by psychologist Joshua Greene. Greene runs the Moral Cognition lab at Harvard and also has a background in philosophy. His book is a fascinating blend of moral philosophy (Greene is an unabashed utilitarian) and cognitive science.

    I think you’ll find his conclusions as to how our cognitive limitations lead to our moral intuitions quite interesting. Some of the issues he talks about include the use of people as a means; our cognitive ability to keep track of causal chains; and our apparent dual-process system of morality. As well, he presents the findings of research done on other Trolley Problem variants, including “The Loop Case”.

    I remember you saying you needed to find someone other than Sam Harris to quote on this area. I think you’ll find a wealth of quotable material from Greene.

  6. Hi Thomas.

    Some thoughts on your recent conversation with Ryan Born and your subsequent analysis.

    You seemed to use the terms “well-being” and “happiness” interchangeably, and I think that is a mistake. Certainly happiness is a factor in the overall well-being of a conscious creature, but it cannot possibly be the only factor. What of the happy slave? Or the happy child with terminal cancer? Or the happy woman denied an education with no right to leave home without being covered head to toe and with a male chaperone? These seem to be realistic examples where happiness and well-being do not equate. I doubt I could produce an exhaustive list of factors that would be part of the “well-being calculus”, but things like freedom, security, education, justice, health, suffering, family and other relationships, etc. all contribute to overall well-being. I think this is consistent with how Sam Harris uses the term.

    The aggregation problem – how to consider the total well-being of a population – is very interesting but I do take issue with some of what Ryan said on this topic. He spoke of a “pile” of well-being that could theoretically be added to, and therefore a very happy individual or small population (10? 100,000?) would have less total well-being than a population of 100 trillion living barely tolerable lives. He seems to be interpreting Sam’s position as saying that every individual has a greater-than-zero “well-being score” and we can just add them up, so a small population of high scores would be easily outscored by a large population of low scores. I think this is fundamentally wrong. I have no idea how well-being should be measured or aggregated, but to me it is obvious that massive amounts of injustice and suffering decrease the overall population’s well-being. I found myself in complete agreement with you that a population of 100,000 very happy and fulfilled individuals would definitely be “better” than 100 trillion miserable people. “Better” in this case must surely mean a higher overall well-being score.
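    To put rough numbers on that disagreement (the per-person scores are invented): a plain “add up the pile” rule does rank the 100 trillion ahead, but an average, or any rule on which lives below a “barely tolerable” floor count against the total, ranks the 100,000 ahead:

    happy_few     = (100_000, 0.95)              # (population, per-person score)
    wretched_many = (100_000_000_000_000, 0.01)  # the 100-trillion case

    def plain_sum(pop):
        n, score = pop
        return n * score             # 95,000 vs 1,000,000,000,000

    def average(pop):
        n, score = pop
        return score                 # 0.95 vs 0.01

    def sum_above_floor(pop, floor=0.2):
        n, score = pop
        return n * (score - floor)   # positive vs hugely negative

    Which suggests the intuition does not refute aggregation as such; it only constrains which aggregation rules remain live candidates.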

    When thinking about these things I often find myself returning to Sam’s comparison of well-being with health. Many times during your conversation with Ryan I thought it would have been appropriate to reiterate that point. We don’t have to have a mathematically or philosophically watertight understanding of the term “health”, or a well-defined “health calculus”, but we recognise the benefits to us individually and collectively when our health score increases (e.g. we are cured of a disease, or we get access to medicines that allow us to manage a disease; an injury is healed; we get fitter; etc.). As interesting as a thought experiment might be on whether it is right to eliminate some mental disease that increases our ability to create music while severely hindering our ability to communicate and interact socially (borrowed from “House”), that theoretical case should not be something that stops us from getting on with the job of healing broken arms and a million other things that increase our collective health score.

    Anyway, thank you Thomas for this very interesting discussion.

    Rod

    1. Hey Rod,

      I really did not intend to use well-being and happiness interchangeably. If I did, it was certainly misspoken, because I don’t at all think of them as one and the same. I think you’re right that it could have been a good idea to bring up the health analogy when discussing aggregating well-being. We don’t need to know whether 100 moderately healthy people are better than 10 super-healthy people in order to have a science of medicine. But then maybe the difference is that health is usually personal whereas moral decisions can be more zero-sum? Possibly worth thinking about.
      Thanks for the comment!

      Thomas.
