Pooperscootian Utilitarianism Part 3: further explanations


Post by Clinton »


Please read the threads on Pooperscootian Utilitarianism parts 1 and 2 before reading this.



What is Pooperscootian Utilitarianism?:
Answer: Pooperscootian Utilitarianism is a universal brand of utilitarianism that strives to maximize the pleasure and minimize the suffering for all feeling life in all universes for all of time.


Why should I adopt Pooperscootian Utilitarianism as my moral code, and/or under what contexts should I do so?:
Answer: So far as I can tell you already want it to be your moral code. You just may not realize it yet. See the thread on Pooperscootian Utilitarianism part 1 for more details.


You claim that I already want Pooperscootian Utilitarianism to be my moral code. What madness is this? That makes absolutely no sense. I don’t even know what it is yet. Things can’t be my moral code if I’ve not heard of them before now.
Answer: I think that belief stems from an inaccurate definition of what it means for us to want things. Think about a dog who habitually chases cars. Because of the chasing, the dog's owners choose not to allow the dog to run around outside freely. The dog may believe it is protecting its territory and pack from a ferocious monster. It has no way of knowing that it is instead pointlessly irritating its owners, and decreasing its own freedom, since the chasing is exactly why they keep it confined. In this way, I'd argue it's the dog's human owners who truly understand what the dog's free will desires, better than the dog does. So what the dog really wants, I'd argue, although it doesn't realize it, is likely NOT to chase cars, in order to gain more freedom: with that freedom it could chase lots of other things (rabbits, bugs, moles, frisbees, etc.) and get to be outside far more often unsupervised. In this way, what our "free will" wants is quite often determined more by the decisions we'd make if we had more knowledge than by the decisions we think we currently want to make with our limited knowledge.

The above is still not a full explanation of what the bloody hell you're talking about, dumbass.
Answer: So, the idea is that there are different degrees of free will, and the more informed you are about the decision you're about to make, the more free will you have. It's also possible for other people, if they have more knowledge than you do about the decision you're about to make, to understand what your free will desires better than you do.

Still not a full explanation
Answer: Think about what it feels like to be some animal; let's say a giraffe. You could have been born a giraffe, or a serial-killing human, or Nelson Mandela. Whichever of these figures you might have been, had you been them, all your actions and thoughts would have been predestined by your genetics and environment, and so you would have made exactly the same decisions Nelson Mandela, or the serial killer, or the giraffe did. For this reason, nobody and nothing can accurately be described as "deserving" anything better or worse than anyone or anything else. With that in mind, and given that you could have been born as any organism in the universe, I'd argue the most enlightened way to think about morality is to imagine how you'd behave towards an organism if you were going to experience everything it experiences. Therefore, I'd argue that what we all really want, though we typically don't realize it, is to behave as if we could feel the ramifications of all our actions and inaction on every life form. With that in mind, I'd say it makes a lot of sense to have a moral code that strives to maximize pleasure and minimize suffering for all life in all universes throughout all of time, because I don't know how you'd better achieve that goal than through such a system.


Why is this system that strives to assist all life a utilitarian system, necessarily? Why not have some other moral code instead?
Answer: I don’t see how it could be possible to assist a life form in any relevant way that doesn’t involve increasing their pleasure or reducing their suffering…and I definitely don’t see how it would be possible to assist an organism in any way they care about that doesn’t involve increasing their pleasure or reducing their suffering. That’s what utilitarianism strives to accomplish…typically just for a defined group, but there’s nothing about utilitarianism that prevents it from striving to do so universally.


Does Pooperscootian Utilitarianism strive to determine objective morality?
Answer: I don’t know what objective morality actually is, and I’m not sure anyone else does either. It might. It definitely has the goal of creating a moral code that everyone in the universe wants to exist though, based on my previous description of “want” and “free will” in this thread, and thread #1.


How does Pooperscootian Utilitarianism deal with the various obstacles utilitarianism inevitably involves? Is it act-based or rule-based or both? Does it seek to maximize pleasure and minimize suffering through Total Consequentialism, or Average Consequentialism, or neither? How does it deal with future events? Have you considered the problems of concepts like "utility monsters," such as single beings who might gain infinite joy from other beings' suffering?

Answer: I am less confident about the answers to those questions than about the answers to the other questions I've addressed.


I suspect Pooperscootian Utilitarianism will be primarily act-based utilitarianism, but rule-based in circumstances in which following act-based utilitarianism would lead to more suffering for society than some variant of rule-based utilitarianism would. The best example I can think of concerns sex crimes, in the following way: someone following act-based utilitarianism might note that sex crimes against a sleeping person that don't damage the sleeping person, and that will never be caught, would lead to no suffering for the victim and only pleasure for the perpetrator, and therefore you could argue that act-based utilitarianism would encourage such sex crimes. However, if a moral code encourages engaging in such sex crimes, I'd suspect that would greatly frighten society, and therefore cause more suffering than if we had a rule forbidding those types of sex crimes. So I'd argue that in that type of instance we'd temporarily use rule-based utilitarianism rather than act-based utilitarianism, with the rule forbidding those types of sex crimes. I see rule-based utilitarianism as a last resort though, and act-based utilitarianism as generally preferable, because act-based utilitarianism focuses on specific actions and will therefore, I'd argue, be much more nuanced and accurate in maximizing pleasure and minimizing suffering in general.


Regarding utility monsters: a "utility monster" is a hypothetical creature that gains vastly more pleasure, or suffers vastly more, or both, from actions than most other life forms would. An example might be a being who gains one billion units of pleasure from eating me, whereas being eaten costs me only ten units of suffering, which we'll say are worth negative ten units of pleasure for me. I don't perceive utility monsters as major challenges for any form of utilitarianism, but rather as conclusions to be accepted, though they tend to be at odds with our instincts. The solution to the brand of monster I've described, for example, in this isolated circumstance, would seem to be simply for me to be eaten by the utility monster. It's of course worth keeping in mind that if I'm eaten, or if one being is given massively more resources than all other beings, that could lead to all sorts of massive reductions in pleasure as side effects: jealousy, ensuing revolts against the creature, fear, hatred, and so on. For accuracy, all of that must be taken into account too, and so there will often be reasons to treat these utility monsters not so differently from anyone else, even with their greater emotional needs. Another solution, if their emotional needs result in massive suffering for them, could simply be destroying them, to avoid having black holes for resources, and perhaps for their own benefit as well. Whether that is right of course depends on the context and situation, like everything else.
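
To make the arithmetic concrete, here's a toy Python sketch of the tally I just described. Every figure in it is an assumption I've invented purely for illustration, not a claim about real magnitudes:

[code]
# Toy act-utilitarian tally for the "utility monster" eating scenario.
# All figures are invented assumptions, not measurements.

monster_pleasure = 1_000_000_000   # units the monster gains from eating me
my_suffering = -10                 # units I lose from being eaten

# Side effects the isolated calculation ignores (assumed magnitude):
societal_fallout = -2_000_000_000  # jealousy, fear, revolts against the creature

naive_total = monster_pleasure + my_suffering
full_total = naive_total + societal_fallout

print(f"isolated circumstance: {naive_total:+}")  # +999999990 -> feed me to it
print(f"with side effects:     {full_total:+}")   # -1000000010 -> treat it like anyone else
[/code]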

-------------------------------------------------------------------------------------------------------

Does Pooperscootian Utilitarianism use Total Consequentialism or Average Consequentialism or something else to contemplate how to maximize pleasure and minimize suffering?

Total Consequentialism = that which is best is whatever decision increases the total amount of pleasure (with suffering functioning as negative pleasure) the most.
Average Consequentialism = that which is best is whatever decision increases the average pleasure (with suffering functioning as negative pleasure) the most.
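
As a minimal code sketch of the difference, with invented pleasure scores standing in for real welfare, the two rules differ only in their aggregation step:

[code]
# Minimal sketch: each person is a signed pleasure score
# (suffering counts as negative pleasure).

def total_utility(pleasures: list[float]) -> float:
    return sum(pleasures)

def average_utility(pleasures: list[float]) -> float:
    return sum(pleasures) / len(pleasures) if pleasures else 0.0

world = [5.0, 2.0, -1.0]
print(total_utility(world))    # 6.0
print(average_utility(world))  # 2.0

# Adding a mildly happy person raises the total (7.0) but lowers the
# average (1.75): the root of Ron's and Sue's problems below.
world.append(1.0)
print(total_utility(world), average_utility(world))
[/code]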

Answer:
So, I’d argue that Total Consequentialism and Average Consequentialism each have their obstacles to overcome. Some of the obstacles with the basic forms of each can be seen in the following scenarios:

Ron is a Total Consequentialist and a utilitarian. Ron's goal is to increase the total amount of pleasure produced by the people of Earth. Through his innovativeness, Ron is able to produce more than enough resources for an unlimited number of humans to live happy lives. Ron concludes that the more people there are, the more people exist to produce happiness. Therefore Ron begins encouraging women to have as many babies as possible. There's a problem for Ron, though. Most women don't want to have more than a few children, usually well under ten. Some women don't want to have any children at all. So, what's Ron to do?
And here's the main problem for Ron, which I'd say refutes the idea that Total Consequentialism in its basic form could be rational: the goal of any sensible moral code is to help life forms that do or will exist. There is no point in trying to help organisms that will never exist. Therefore it makes no sense under any circumstances to produce new life forms, unless those life forms assist previously existing life forms, or life forms that will exist in the future, more than they harm them.

So, in other words, we should never be thinking “I should have more babies because that will produce more pleasure for society through the babies’ pleasure.” That’s because the would-be babies benefit in no way from coming into existence. You can simply not have the baby you’re considering having, and then that would-have-been-person automatically never exists, and is not factored into our equations.
The question of whether or not to have children should be totally rooted in whether or not those children would improve society more than harm it. Only after it’s clear that they would is the question of whether or not their lives would be ideal to them factored into the question of whether or not to produce new life.

So, I’d definitely argue that Total Consequentialism in its basic form is unusable...because it argues for endlessly producing new life forms regardless of whether they assist previously existing beings or beings who will exist in the future. What about average consequentialism?

----------------------------------------------------------------------------------------------

Sue is an Average Consequentialist and a utilitarian. Sue's goal is to engage in actions that increase the average happiness level of the people on Earth. Sue decides to do this through a series of light-speed mass euthanizations that all occur before anyone notices. First, she zips around the world euthanizing the least happy people. Then she zips around the world euthanizing the next least happy people. Then, a billionth of a second later, she zips around again to euthanize the next least happy people. She continues this process because each time she has euthanized the least happy group on Earth, there is a new least happy group whose euthanization would raise average happiness further. Now, ordinarily we could argue that such deaths would cause massive terror and suffering and perhaps couldn't be worth the costs, but Sue is moving so quickly that nobody can react in time to feel any alarm. Nobody has time to realize they've lost their relatives, or their own life, or that anything is out of the ordinary.
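
Sue's procedure is essentially "repeatedly delete the least happy person." A toy sketch with invented happiness numbers shows the average climbing toward the single happiest survivor:

[code]
# Toy version of Sue's procedure: repeatedly euthanize the least happy
# person and watch the average happiness climb. Numbers are invented.

population = [9.0, 7.0, 4.0, 2.0, -3.0]

while len(population) > 1:
    population.remove(min(population))
    average = sum(population) / len(population)
    print(f"{len(population)} remaining, average happiness = {average:.2f}")

# Ends with one survivor at 9.0: the average-maximizing endpoint is a
# single maximally happy being (or, taken one step further, nobody at all).
[/code]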

I'd actually argue that Sue's method is less clearly flawed than Ron's. I could see benefit to all life merging into some kind of singular, euphoric organism. The ultimate survivors, or survivor, would all likely feel fear of being euthanized themselves, and of having all their life's efforts torn out from under them if they die unexpectedly, but you might be able to deal with those dilemmas through personality changes of some kind.

So, I'd say with Sue's method the end result would presumably be some kind of singular, perpetually euphoric organism, or else the extinction of all life. Death is not inherently negative, and life cannot be better than death, because once a life form dies there is no life form left that could benefit from being alive. So death, in all circumstances, is just as ideal as eternal euphoria, provided no pain or suffering is required to experience the death.

Here's the major flaw I see with Sue's method, though: whether it's worth it for me to live or not is in no way dependent on how happy people on the other side of the planet are.

Sue's method in its basic state would only allow for the destruction of a life if that life leads an inferior existence to something else. However, it would seemingly not allow for the destruction of the best-off existing lives if all existing life were leading a horrifyingly miserable existence and the most ideal lives were only a little less horrifyingly miserable. Sue's method in its basic state would not allow for the deaths of people living down in hell in equal amounts of misery, if everyone were down in hell, but would encourage the deaths of cheerful medium-wage workers with loving families, who merely aren't as happy as most billionaires.


So, here’s my proposed solution, given that both Average and Total Consequentialism appear flawed in their basic forms (keeping in mind that I could be wrong about this being the best system): I think, unless someone thinks up a better idea, we should use a modified version of Total Consequentialism that is modified in the following way:

Your goal, as with the basic form of Total Consequentialism, is to increase the total pleasure produced. Don't factor in the pleasure that would have been produced by organisms that will never exist; only count organisms that do or will exist.

So, how does the above deal with organisms that might exist? The same way it deals with any potential future increase or decrease in pleasure. We'd calculate the pleasure or suffering their actions and experiences would produce over their lives, and I'd argue we should multiply that by the probability we believe the outcome has of coming to pass. So, if we believe there is a 25% chance of a person being born, we'd multiply the pleasure or suffering they'd add to the world through their experiences and actions by 0.25.
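
In other words, it's just an expected-value calculation. A minimal sketch, with invented numbers:

[code]
# Expected-utility weighting for merely-possible organisms.
# lifetime_net: the net pleasure (pleasure minus suffering) the life
#               would add to the world through its experiences and actions.
# p_exists:     our credence that the life actually comes to pass.

def expected_contribution(lifetime_net: float, p_exists: float) -> float:
    return lifetime_net * p_exists

# A possible person expected to add 400 net units, with a 25% chance
# of being born, counts for 100 units in today's calculus.
print(expected_contribution(400.0, 0.25))  # 100.0
[/code]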

------------------------------------------------------------------------------------------

I do think a modified version of Average Consequentialism still has its place as well though…so long as you avoid the flaws of Average Consequentialism in its basic form. Here’s how it would work:

I'd argue that, in any circumstance, it is equally valid to perceive life forms as individuals as it is to perceive all life as merely sensory appendages of the same super organism.

The modified system of Total Consequentialism I’ve described is the system that treats life forms as individuals who simply have an interest in assisting each other.

The modified system of Average Consequentialism I’m about to describe actually treats all life more as if we’re sensory appendages of the same multi-universe-and-all-of-time-spanning super organism.

As I see it, we're both individuals and sensory appendages of a single super organism composed of all life. So the question of which path to choose is largely subjective, and which is better, I'd argue, will probably depend simply on which you find the most neat, or the simplest to figure out.

That's keeping in mind that Average and Total Consequentialism will, in many circumstances, work pretty similarly. The goal either way is more happiness and less suffering. It's just that Average Consequentialism in its basic form carries more risk of irrationally destroying people, and Total Consequentialism in its basic form carries more risk of irrationally creating people.

So how might we modify Average Consequentialism to be rid of its flaws?

Well, Average Consequentialism wants to devour things. So we essentially bop it on the nose with a metaphorical rolled-up newspaper and tell it, "No! Bad Average Consequentialism! No devouring that utopian civilization just because they live slightly less euphoric lives than their neighboring utopian civilization!" We just don't let it do that, and instead use the following system for determining whether or not a life form's existence is good for it:

In both our Average and Total Consequentialist strategies, we base whether or not it is good for a being to be alive on whether or not its happiness level is above some kind of consistent threshold, one that would work in ways I'm less than confident about at this time. We will not be destroying civilizations just because their members lead less ideal lives than the members of other civilizations.
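
A minimal sketch of that threshold test; the threshold value itself is a placeholder, since as I said I'm not confident how it would be set:

[code]
# A life counts as worth continuing iff its happiness clears a
# consistent threshold, regardless of how its neighbors are doing.

WORTH_LIVING_THRESHOLD = 0.0  # placeholder; the right value is an open question

def existence_is_good(happiness: float) -> bool:
    return happiness > WORTH_LIVING_THRESHOLD

# The cheerful medium-wage worker (+3.0) passes even though billionaires
# (+9.0) exist; the uniformly hellish population (-8.0) fails even though
# everyone there is equally miserable.
print(existence_is_good(3.0))   # True
print(existence_is_good(-8.0))  # False
[/code]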

Our modified Average Consequentialist system can lead to some conclusions our modified Total Consequentialist system will not though, and I find these conclusions especially interesting. I’ll describe how they can work in the following thought experiment:


Thought Experiment: Superhumans – some odd ramifications of our modified Average Consequentialist form of Pooperscootian Utilitarianism

In the following scenario, we will be focused purely on benefitting humanity, defined to include humanity's descendants, to simplify the thought experiment.
Imagine the people of Earth have an option before them. We have the option of building a species of superhumans. The superhumans would be smarter than we are, stronger, with better immune systems and fewer allergies. They wouldn't physically age past young adulthood. They'd have better impulse control than us. They'd have drastically higher emotional intelligence. They'd be more likely to lead better lives than we do in every conceivable way due to these traits. However, if we create them, a wizard will immediately teleport all of them into another universe, and we'll never be able to contact them again. This other universe is very similar to our own. The only difference is that its Earth is unpopulated by human beings, so far, but it has similar natural resources and life.
The downside of creating this new group of superhumans is that the cost would put the previously existing humanity in poverty for several generations.
Should we create the superhumans?
Well, if you've read the reasoning before this thought experiment, you'll have seen my statements about how there is no reason to create new life unless that life would assist previously existing life or life that will exist. With that in mind, it would certainly seem that the superhumans would have no means of helping their parent civilization more than they harm it. Their creation might be something humanity likes the idea of, but I suspect whatever meagre increase in positive emotion stems from that would not make up for all the years of poverty and resulting suffering.
So, it would seem the creation of the superhumans would harm existing life forms while not assisting them, or any life forms known to be coming into existence. So, according to our modified Total Consequentialist brand of thought, which treats people more like individuals, we should not go this route.
Our modified Average Consequentialist option does not treat people like individuals though, but rather as sensory appendages of a single super organism.
So, our modified Average Consequentialist moral code says that all of humanity (and all life, but we're focusing on humanity now) is one super organism, and the creation of the superhumans will have improved humanity's average quality of life. It will have, in other words, assisted the super organism that humanity constitutes, and therefore there's an argument that perhaps it should be done.
So, which is the better route? Creating the super humans or not?
I’d say that’s up to you. Maybe humanity could vote on it.
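
For what it's worth, here's a toy sketch of how the two modified systems reach their different verdicts on the superhumans. All the magnitudes are invented for illustration:

[code]
# Toy comparison of the two modified systems on the superhuman case.
# All magnitudes are invented.

existing = [3.0] * 8          # current humans' quality of life
poverty_penalty = -2.0        # generations-of-poverty cost per existing person
superhumans = [9.0] * 8       # the emigrant superhumans' better lives

# Modified Total Consequentialism: only effects on existing life count,
# and the superhumans can never help us from their universe.
total_verdict = poverty_penalty * len(existing)   # -16.0 -> don't create them

# Modified Average Consequentialism: humanity is one super organism,
# so the superhumans' lives enter the average alongside our poorer ones.
before = sum(existing) / len(existing)                          # 3.0
after_pop = [q + poverty_penalty for q in existing] + superhumans
after = sum(after_pop) / len(after_pop)                         # 5.0
average_verdict = after - before                                # +2.0 -> create them

print(total_verdict, average_verdict)
[/code]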

Note that, if you're wondering why human life might be better thought of as sensory appendages of a super organism than as individuals, refer to Pooperscootian Utilitarianism thread part 2, which emphasizes the fuzziness of individuality.

Re: Pooperscootian Utilitarianism Part 3: further explanations

Post by FlashDangerpants »

Clinton wrote: Thu Jun 08, 2023 10:55 am I suspect Pooperscootian Utilitarianism will be primarily act-based utilitarianism, but rule-based in circumstances in which following act-based utilitarianism would lead to more suffering for society than some variant of rule-based utilitarianism would. The best example I can think of concerns sex crimes, in the following way: someone following act-based utilitarianism might note that sex crimes against a sleeping person that don't damage the sleeping person, and that will never be caught, would lead to no suffering for the victim and only pleasure for the perpetrator, and therefore you could argue that act-based utilitarianism would encourage such sex crimes. However, if a moral code encourages engaging in such sex crimes, I'd suspect that would greatly frighten society, and therefore cause more suffering than if we had a rule forbidding those types of sex crimes. So I'd argue that in that type of instance we'd temporarily use rule-based utilitarianism rather than act-based utilitarianism, with the rule forbidding those types of sex crimes. I see rule-based utilitarianism as a last resort though, and act-based utilitarianism as generally preferable, because act-based utilitarianism focuses on specific actions and will therefore, I'd argue, be much more nuanced and accurate in maximizing pleasure and minimizing suffering in general.
I don't want to go too far into this until we have heard Vestibule Aquaduct the Official Master[philosophical and magical] of all moral philosophy[proper] and absolutely the only person with the gravitas and authority to kick off 4 whole threads in one day...

But this little bit I've selected here is a mistake and you should excise it from your argument as it betrays a weakness and shows that you don't actually agree with your own argument. The reasons why are thusly so...
1. You've already told us that we are confused animals that don't know what we really want, so why suddenly reverse-ferret for this one universal hangup regarding sexual assault and alter the whole rationale of your argument to do so? The flimsy thing about causing public concern can't be limited to this specific example; there are all sorts of non-sexual-battery-related issues it would apply to.
2. You already bit the bullet with the gorilla thing, so you've already ruled out conspicuous cruelty, up to the point of casual murder, as the sort of thing where you would apply this exception. That suggests you have a wide moral landscape that encompasses everything except this one walled garden for the one thing you have to handle differently.

I get it, obviously you understand that the simplest way to break any moral theory is to argue it to an absurd conclusion. So if you want to find out what's wrong with Kant's deontological stylings, all you have to do is look up his explanation for why you shouldn't shoot your dog. Kant's deontology looks absurd if you don't bend the rules to incorporate a little bit of what consequentialism brings to the table. But vice versa, consequentialism has obvious absurd outcomes when it argues that torturing a child is actively virtuous if it brings about greater aggregate happiness than not torturing the child would have. So you build in your sex crimes exception because you don't want to die on the third rail.

But consider this. Bartholomew is a janitor who cleans the toilets at a gym. He is offered money by a pervert to install a webcam in the ladies' toilets and to upload to the dark web for sale any video in which a lady does a super super ultra noisy shit. The videos will never be linked to the janitor or the gym and he is instructed to never upload pictures of their faces, only close ups of their butt holes while they do noisy shits.

Morally Bartholomew is obligated to take the money (which will bring him happiness) to share the videos (that will make many weird perverts super happy) of the women who will never be harmed unless they somehow see the site and are able (quite impossibly) to identify their own arsehole in close up from an angle they could never possibly have directly seen it at... or if perhaps they can identify their own farts by noise alone.

Re: Pooperscootian Utilitarianism Part 3: further explanations

Post by Clinton »

Methinks you've got to end the sarcasm or whatever if you want to be understood better, my dude or dudette...if it is sarcasm.
It's entertaining...but I'd say philosophical theory is baffling enough as it is when people discussing it attempt to be crystal clear.

The reason I did 4 threads in 1 day is that, if I'd posted only one at a time, people would likely have been left with many more unanswered questions, so I waited until I had all four threads completed, then submitted them all at once. I think this way the basics of my moral code are all covered.

I'm not sure what this means (particularly the phrase reverse-ferret :lol: ):
1. You've already told us that we are confused animals that don't know what we really want, so why suddenly reverse-ferret for this one universal hangup regarding sexual assault and alter the whole rationale of your argument to do so? The flimsy thing about causing public concern can't be limited to this specific example; there are all sorts of non-sexual-battery-related issues it would apply to.

I'm not sure what you mean there. I might understand though.

I will note that in this thread I didn't limit it to sexual assault. Rather, I stated: I suspect Pooperscootian Utilitarianism will be primarily act-based utilitarianism, but rule-based in circumstances in which following act-based utilitarianism would lead to more suffering for society than some variant of rule-based utilitarianism would…

Sex crimes are just the purest example I can think of: something with no direct harm if the criminal isn't caught and leaves no damage or signs of a crime. Most crimes don't work that way. Take the question of whether or not a hospital should forcefully harvest organs from people in the waiting room to save ten lives while only inconveniencing the people in the waiting room. I can envision no likely reality in which there wouldn't be a significant possibility of that being caught, unless our criminal investigation abilities were so garbage that everybody was living in constant nihilistic and shortsighted fear; we'd want to avoid living in that kind of society in the first place, and ideally we'd make a world in which that kind of behavior will get caught. With sex crimes, though, I don't see any way to have a world, short of the existence of a watchful God or aliens or something, in which sex crimes are consistently likely to be caught. So we need more than just the potential to be caught to discourage them, if we're to avoid large amounts of societal fear and distrust, I'd say.

I guess another, non-sex-crime example would be this: a friend who follows a religion you don't, and whom you'll never see again, gives you a holy relic he believes is loved by his God. You know his religion is false, and it would feel really good to smash the holy relic after he leaves. But if smashing it were logical according to my moral code, that would render people who follow my moral code less trustworthy in general, and probably cause lots of harm as a result, so I'd say smashing the holy relic would be a bad thing. Most of the time, these trust-based issues seem pretty soundly solvable through purely act-based utilitarianism, in my experience.

I'm still not sure there isn't some act-based utilitarian system I've not thought of yet that would work better.

Regarding your argument that "torturing a child brings about greater aggregate happiness than not doing so" exemplifies the flawed nature of utilitarianism: actually, to me that sounds like something act utilitarianism could handle more easily than sex crimes. It seems like it'd be a lot more straightforward utilitarian calculus.

So, let's say we can cure AIDS by abducting and torturing a child:
We'd think about how much suffering the child is going to experience.
We'd compare that to how much suffering people with AIDS would experience without the cure.
We'd also think about how much harm the resulting distrust would cause, if people come to believe children might be abducted and used in other cures.
In that way, I'd say the only way act-based utilitarianism would point to abducting and torturing the child being good is if the pros from curing AIDS outweigh society's fears. In our universe, there's a very good chance someone would figure out what had occurred, and if there isn't, we'd probably want to alter society so that there is, so as to avoid such fears. And I'd argue that medical professionals who witness such acts should report them, in order to avoid greater societal fear when the acts are eventually caught, and to set a standard that encourages their fellow medical workers to do the same, because trust, in general, is a good thing.
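
As a toy ledger version of that weighing (every figure invented for the sketch):

[code]
# Toy act-utilitarian ledger for the abduct-and-torture-to-cure-AIDS case.
# Every figure is an invented assumption.

ledger = {
    "child's suffering": -5_000,
    "suffering averted for AIDS patients": +9_000_000,
    "societal fear and distrust once discovered": -20_000_000,
}

net = sum(ledger.values())
print(f"net: {net:+}")  # negative under these assumptions -> don't do it
[/code]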

Can't say that about sex crimes, though. For it to be pretty likely for all sex crimes to be caught, we'd have to sleep beneath cameras or something, so the only harm from many such sex crimes might stem from its being perceived as logical to engage in them.

With the child-torturing example, I'd think we'd want to build the type of society in which, if a child is going to be tortured for a medical cure, we'd know about it. We'd probably not want to live in a society in which all sex crimes were known about, though, because that would involve sleeping beneath security cameras, constant caution, and all sorts of uncomfortable strategies, I'd think.

Furthermore, there would, I'd argue, be environments in which it would genuinely be best if society tortured the child to cure AIDS: if the bonds of trust holding society together were already so severed that they weren't worth maintaining. We'd be saving tons of lives and sparing tons of people the problems of AIDS. So I definitely could imagine post-apocalyptic environments in which a shadowy cabal of Machiavellian, lying rulers who steal people to experiment on them could be desirable.

Non-physically-damaging, uncaught sex crimes being encouraged, on the other hand, I'm not sure leads to much, if any, more joy than reading a good book, with no long-term gains, and perhaps considerably less joy than reading a good book. Even in those post-apocalyptic environments, calling the sex crimes logical would be trading the much more beneficial long-term capacity to build trust for a few moments of pleasure. So even then, I'd say those forms of sex crimes are best described as just bad, and a good place for rule utilitarianism to step in.

Now, what if everybody knows the child would be tortured to cure AIDS?
Well, then there are other societal ramifications that come into question. Could this lead to a decreased sense of protectiveness towards children in society? That would have negative consequences. Could this lead to children living in terror? That'd be another negative. We'd just think about the consequences and weigh them against the pros.

----------------------------------------------------------------------------------

Regarding your example:

But consider this. Bartholomew is a janitor who cleans the toilets at a gym. He is offered money by a pervert to install a webcam in the ladies' toilets and to upload to the dark web for sale any video in which a lady does a super super ultra noisy shit. The videos will never be linked to the janitor or the gym and he is instructed to never upload pictures of their faces, only close ups of their butt holes while they do noisy shits.

Morally Bartholomew is obligated to take the money (which will bring him happiness) to share the videos (that will make many weird perverts super happy) of the women who will never be harmed unless they somehow see the site and are able (quite impossibly) to identify their own arsehole in close up from an angle they could never possibly have directly seen it at... or if perhaps they can identify their own farts by noise alone.


The main difference between that type of sex crime and the types I was talking about is that this one would result in money, which can be used in ways besides just personal satisfaction. So I'd definitely argue that whether or not this is okay has a lot to do with what he spends the money on. If he just spends it on his personal pleasure, I'd call that a bad thing.

What if, on the other hand, he spends it saving the lives of starving Bangladeshi children? I'd say that's definitely worth contemplating.

I do think the starving Bangladeshi children are an example of how, ordinarily, it's best not to just obey flat rules, but rather to dive in and do the utilitarian calculus.

Also, there could be a chance he'd get caught. I don't know exactly how spy cams work, though, so I'm not sure how great that chance is. If he gets labeled a sex offender, that's a lifelong change that will most likely lead to his being monitored more carefully and will hinder his life goals. And if the spy cam gets found, the company he works for will most likely try hard to ensure that doesn't happen again, which means greater risk for him.

On that note, we might want to consider making spy cams less easy to hide, or maybe making wireless, easily hidden spy cams illegal for the general public.

So, I'm literally not necessarily opposed to that janitor putting in the spy cams. I do think society should make that difficult for him to do, though, so that we can maintain societal trust. I don't think he necessarily has an obligation to maintain that trust, though, if the alternative is saving children's lives. I think maintaining that security should be more society's job than his, because laws generally work imperfectly, and in their areas of imperfection I think it's good to be willing to break laws that don't lead to good moral solutions.

To move away from sex crimes a bit: there are definitely circumstances in which I'd rob a corpse. All of them depend largely on what I'd spend the money on. Now, to prevent chaos from ensuing, that's also why I don't think we can ever have a perfectly trusting society without punishments. Sometimes it'll literally just be the likelihood of being punished that renders something right or wrong.
---------------------------------------------------------------------------------

I think one of the biggest mistakes I've seen people who don't like utilitarianism make when engaging in utilitarian calculus is that they tend not to think enough about long-term consequences, so they don't end up thinking about questions the way a more dedicated utilitarian might. So, with slavery, I've heard people say, "Well, slavery results in lots of pleasure, so that might be a great thing for a utilitarian."

When I think of slavery, though, I think about how it results in some people having excess pleasure, which likely reduces its worth, and some people experiencing massive amounts of suffering, which likely compounds to increase the suffering; and about both groups dehumanizing and hating each other through not understanding each other's circumstances (especially the slave owners); and about the ensuing risks of revolutions, and the losses stemming from the security measures required to prevent revolutions.

Then I compare that to a world without slavery, in which everybody is much closer to understanding the trials we each go through, because we go through more similar trials, so we have more people seeking solutions and less distrust. And I think about how much better a little pleasure feels after a hard day's work than pleasure heaped on top of massive amounts of pleasure.

I could see benefit to slavery if it allowed for an intellectual caste of inventors and great thinkers and that kind of thing, but realistically, I don't think that describes most slave owners. Also, more capitalistic means allow whoever's smartest to be more likely to rise to the top of the inventor pile, and government assistance for economically less well-off families could further that. Slavery might have kept a lot of Einsteins and great creators under its boot. If we did have some system of forced labor, I'd think we'd, at bare minimum, need some system that accurately determined who would be the best thinkers, who would be the best workers in some caste, and so on. So we'd probably end up with something more like a philosopher king ruling over people who feel they're working hard for a nation they care for and that cares for them, with everybody doing their duties based on their skills, rather than traditional slavery.
--------------------------------------------------------------------------------------------------------------------------

Thanks for the comments.

Re: Pooperscootian Utilitarianism Part 3: further explanations

Post by FlashDangerpants »

Clinton wrote: Fri Jun 09, 2023 11:23 am Regarding your example:

But consider this. Bartholomew is a janitor who cleans the toilets at a gym. He is offered money by a pervert to install a webcam in the ladies' toilets and to upload to the dark web for sale any video in which a lady does a super super ultra noisy shit. The videos will never be linked to the janitor or the gym and he is instructed to never upload pictures of their faces, only close ups of their butt holes while they do noisy shits.

Morally Bartholomew is obligated to take the money (which will bring him happiness) to share the videos (that will make many weird perverts super happy) of the women who will never be harmed unless they somehow see the site and are able (quite impossibly) to identify their own arsehole in close up from an angle they could never possibly have directly seen it at... or if perhaps they can identify their own farts by noise alone.


The main difference between that type of sex crime and the types I was talking about is that this one would result in money, which can be used in ways besides just personal satisfaction. So I'd definitely argue that whether or not this is okay has a lot to do with what he spends the money on. If he just spends it on his personal pleasure, I'd call that a bad thing.
Bartholomew spends 47% of his additional income to sponsor all the meerkats at his nearest zoo. 22% goes on sexual services from a toothless crack whore behind the school bike sheds but only well outside school hours. The remainder is not presently accounted for.

Bartholomew's brother Shmartholomew is also a janitor and gets the same offer. There are two altered variables: Shmartholomew spends most of his spare cash feeding the hungry, because that is what brings him immense joy, and of course he works in a school, not a gym.