Happy Atheist Forum

General => Philosophy => Topic started by: billy rubin on April 23, 2020, 08:48:08 PM

Title: utilitarianism
Post by: billy rubin on April 23, 2020, 08:48:08 PM
utilitarianism is the philosophy that a lot of people assert is optimum:

what society should do is seek the greatest good for the greatest number.

so we work towards maximum net happiness, maximum net prosperity, maximum net health, maximum net human potential, and so on.

sounds good.

but there's a catch.

maximum good for the maximum number means that some people's good will be sacrificed for the greater good of more people.

in practice, this absolutely and specifically means that if five sick people can be saved by killing one healthy person and harvesting his organs to distribute to the sick, then that is what the decision should be.

ursula k. le guin explored this once, long ago:

https://en.wikipedia.org/wiki/The_Ones_Who_Walk_Away_from_Omelas

it has relevance now, in a time and place where we are sacrificing aspects of some people's lives to benefit aspects of other people's lives. for example, we are driving people's businesses into financial ruin in order to reduce the health risks to other people.

there are other tradeoffs that come to mind.

where should we place the balance point in this? how many financially ruined families balance one family saved from death by the virus? does physical health outweigh economic health? after all, you can't be prosperous if you're dead.

but most people will not be seriously impaired by the disease, although they can transmit it. how many of those people should be economically isolated to protect how many who are physiologically vulnerable?

how many of these gun-toting morons clamoring to free the country from tyranny should we take seriously, if any?

just wondering here

Title: Re: utilitarianism
Post by: Old Seer on April 24, 2020, 12:59:39 AM
Quote from: billy rubin on April 23, 2020, 08:48:08 PM
utilitarianism is the philosophy that a lot of people assert is optimum:

what society should do is seek the greatest good for the greatest number.

so we work towards maximum net happiness, maximum net prosperity, maximum net health, maximum net human potential, and so on.

sounds good.

but there's a catch.

maximum good for the maximum number means that some people's good will be sacrificed for the greater good of more people.

in practice, this absolutely and specifically means that if five sick people can be saved by killing one healthy person and harvesting his organs to distribute to the sick, then that is what the decision should be.

ursula k. le guin explored this once, long ago:

https://en.wikipedia.org/wiki/The_Ones_Who_Walk_Away_from_Omelas

it has relevance now, in a time and place where we are sacrificing aspects of some people's lives to benefit aspects of other people's lives. for example, we are driving people's businesses into financial ruin in order to reduce the health risks to other people.

there are other tradeoffs that come to mind.

where should we place the balance point in this? how many financially ruined families balance one family saved from death by the virus? does physical health outweigh economic health? after all, you can't be prosperous if you're dead.

but most people will not be seriously impaired by the disease, although they can transmit it. how many of those people should be economically isolated to protect how many who are physiologically vulnerable?

how many of these gun-toting morons clamoring to free the country from tyranny should we take seriously, if any?

just wondering here
What you're describing is, or is akin to, Marxism. That process is already underway. On the right (conservatives) they refer to the media (MSM, mainstream media) as cultural Marxist media. This process has been underway for the last 40 or more years. It looks as though they chose this time to give it a hard push. Marxism relies on the attachment to fascism, where corporations become the government and what was the government becomes representative of corporate operators. In Europe this idea has already failed. The EU is on the way out as countries are reclaiming their sovereignty. A "one size fits all" system cannot work, as such a system requires all to be put in a mental straitjacket. The EU is run by billionaires who are never elected but are represented by those in the EU parliament. It seems to be another attempt at utopianism, but that has to fail because the very idea of utopianism is counter to nature, and there are no provisions in nature for such an idea. The natural forces existing on and within the planet can instantly destroy any utopia. In a little more time we'll see.

Be aware, an evil person will work harder doing evil because he has something to gain. A good person has little or nothing to gain by doing good.
Title: Re: utilitarianism
Post by: billy rubin on April 24, 2020, 11:26:26 AM
maybe so, but utilitarianism is way older than marxism.

jeremy bentham, who is currently mummified somewhere in britain, was an early proponent.
Title: Re: utilitarianism
Post by: Asmodean on April 24, 2020, 11:28:42 AM
Quote from: billy rubin on April 23, 2020, 08:48:08 PM
utilitarianism is the philosophy that a lot of people assert is optimum:
This depends on where you are. In some areas, people largely put the individual before the collective. Those are the "free societies."

Quotebut there's a catch.

maximum good for the maximum number means that some people's good will be sacrificed for the greater good of more people.
That's not even a deal-breaker to me. The top of the "catch" pile, as I see it, is the level of collectivism required.

Quoteit has relevance now, in a time and place where we are sacrificing aspects of some people's lives to benefit aspects of other people's lives. for example, we are driving people's businesses into financial ruin in order to reduce the health risks to other people.
Yep. That's what collectivist processes do. "Calculated" (or otherwise) sacrifices for that mythical "Greater Good." That said, it is common, and not necessarily "wrong" to turn more collectivist in times of crisis.

Quotebut most people will not be seriously impaired by the disease, although they can transmit it. how many of those people should be economically isolated to protect how many who are physiologically vulnerable?
I find it interesting that at least in my country, the rights of special interest groups often seem to supersede the rights of the general population or the individual. I can often understand it, even see how it can be necessary for a "highly civilized" society to remain as such, but as a matter of principle, I disagree with that philosophy, and my voting history thus far mirrors that fact to the point of pride.

Quotehow many of these gun-toting morons clamoring to free the country from tyranny should we take seriously, if any?
Ignore a man with a gun at your own peril, but I think a greater danger to your own person than the "redneck freedom fighter" comes from people with Molotovs, bricks and suicide vests, and even that is small compared to people with cars.

Still, I take them all seriously - be they Antifa, terrorists for Allah, "from my cold dead fingers" types, or drivers coming towards me on a high-speed road.
Title: Re: utilitarianism
Post by: Old Seer on April 24, 2020, 01:18:09 PM
Hey, I finally found the reply button. I'll be goda heck! That's what happens when one is highly logical: simple things get overlooked.  ::)
It seems utopian ideas end up killing a lot of people to get such a society established. I can't imagine how any anthill society can work with cognizant beings. Anyone (at least over time) can recognize being in a slave society. There's always reason in one's life to be free of coercion. If the idea is to create universal happiness at the expense of others, then the other side has to make it possible, as there would have to be acceptance of universal sadness on one side to create the other. Most people on the happy side over time would see the inhumane process and be disturbed by it. Universal law dictates that there must be both sides that all are subject to.
Sargon of Akkad on YouTube has a very applicable analysis of Demolition Man that I found interesting.
Title: Re: utilitarianism
Post by: Asmodean on April 24, 2020, 01:28:32 PM
Quote from: Old Seer on April 24, 2020, 01:18:09 PM
It seems utopian ideas end up killing a lot of people to get such a society established.
...While failing to establish said society and broadly oppressing the survivors.

QuoteI can't imagine how any anthill society can work with cognizant beings. Anyone (at least over time) can recognize being in a slave society. There's always reason in one's life to be free of coercion. If the idea is to create universal happiness at the expense of others, then the other side has to make it possible, as there would have to be acceptance of universal sadness on one side to create the other. Most people on the happy side over time would see the inhumane process and be disturbed by it. Universal law dictates that there must be both sides that all are subject to.
Thus, I advocate maximizing freedom instead, and letting people work out their own happiness. (I am neither a Libertarian nor an Anarchist. The maximization of freedom I speak of applies within the confines of the trends of the (significant) majority. Basically, if killing people makes you happy, but 2/3 of the population don't want to let you, then you don't get to do it.)

By the way, and in the spirit of learning: it's a little finicky, but if you want to respond the way I just did, it works as follows:
[quote author=Old Seer link=topic=16554.msg400494#msg400494 date=1587730689]
Your original message with link. This appears automatically if you use the "quote" button or "insert quote" function when replying.
[/quote]
My response to you

[quote]
Your next point
[/quote]
My next response to you

[quote author=Asmodean Prime]
Asmodean Prime's point without link to post
[/quote]
My response to Prime


For ease of use, I usually put some spaces in the text where I want to insert a point (in the reply window, after having quoted the post), then start at the top and insert the quote tags as needed. The one without the slash starts the quote, the one with the slash ends it. You can think of them as code equivalents to opening and closing quotation marks.
Title: Re: utilitarianism
Post by: Old Seer on April 24, 2020, 04:36:54 PM
I'm going to have to practice the process, only without actually posting. I'm not highly forum literate. (no jokes now).  :)

The proponents of such societies don't take into account that "they" may be the ones dropping grapes into another's mouth while lying on a couch. It takes servitude for such to exist---no one wants to be the servant. What we have now is a mutual servitude system, and I don't say anything's wrong with it. One is always going to be a slave in one way or another. (Just a religious note: an apostle points out that serving your fellow man is serving God.)
Title: Re: utilitarianism
Post by: Asmodean on April 24, 2020, 05:40:19 PM
Mmh... I'm not convinced.

Who/what am I a slave to? I have a job, going to which is a voluntary exercise (Although money is a good incentive, I largely don't just quit because I like doing what I do where I do it), I rent a home, which is also a voluntary exercise. I can pack my bags and fuck off, though again, I like it here. The bank doesn't own anything I have. I don't subscribe to "taxation is theft," because I see paying taxes as my lease for living in a nice country. I do have some obligations here and there, which I did not bring upon myself or necessarily agree to, but they don't amount to servitude, let alone slavery.

While probably true of many people, how does your point apply to someone like my own sweet self?

Title: Re: utilitarianism
Post by: Old Seer on April 24, 2020, 06:02:03 PM
On the first count we are all enslaved to each other via money. You must serve someone to acquire it. I have no quarrel with that. In our (Old Seers') studies we asked: if times change to a personal life (on your own land, in your own house), where does my auto come from? So, it's back to horses. Having horses on the farm, I don't want horses. They have to go to work for me----------------I have to go to work for them. They help me grow my food for me, I have to help them grow their food for them. Inter-enslavement, can't get around it. The universe enslaves me to do its ways. I have to walk (work) over to the apple tree to get the apple.

Self-sufficiency is a very sparse material existence. The body type we have requires too much maintenance. We can't sleep out in the snowbank like the deer. So, figuring all that out, why not come up with a hamlet economy of about 100 persons or families. Everyone takes on something, so I volunteer to grow all the corn for everyone in the hamlet. To cut it short--- I agree to be enslaved for my own good. I cannot (knowing from experience) have enough time to make a comfortable existence on my own. So, you take it from here.  :D
Title: Re: utilitarianism
Post by: billy rubin on April 25, 2020, 10:29:18 PM
not ignoring you, folks. just waitin for the weekend to reply here.

i can't think straight enough during the work week. what i come up with is often shallower than usual
Title: Re: utilitarianism
Post by: Davin on April 27, 2020, 05:19:47 PM
I always feel like these "do the maximum" "cause the minimum" types of philosophies sound entirely tiring.

Also, I think they are incredibly unrealistic and unreasonable. "But," I've heard defenders say, "you're only supposed to try for it, no one is expected to actually achieve it." Great, add that to the actual philosophy then. No? Then this criticism stands.

It's like all the great object-oriented philosophies in software development: if you try to follow all of them 100%, they start to get in the way of each other and tend to make things kind of bad. But if you follow them 90-95%, they all work together beautifully and make development and maintenance easier and faster, as well as making things easy for other developers to get into. Which is why I tend to not follow many philosophical concepts 100%. Also part of why I find most self-described philosophers tedious to deal with.

Anyway, what I like about utilitarianism is the focus on utility: morality isn't merely navel-gazing mental masturbation, but something meant to provide utility to decisions and actions. That is something I can get behind.

I guess that's why I can't follow any of these fancy philosophical moral frameworks and have none of my own. Because I think that, while they are nice and most of them offer a thing or two worthy of consideration, the real world that we live in isn't simple enough to be handled by any of them.

Edit: cleaning up some bad grammar that made things unclear.
Title: Re: utilitarianism
Post by: Ecurb Noselrub on April 27, 2020, 07:44:45 PM
The general tension is between utilitarianism (a teleological approach to ethics/philosophy/politics) and a deontological approach (focusing on duty and obligation).  Bentham v. Kant.  The US Constitution, in the political realm, took a bit of both: utilitarianism in the sense of "majority rules", but tempered with a deontological approach in the Bill of Rights (majority rules, but it can't remove these rights from the minority, like free speech, etc.).  In ethics, a deontological approach looks at principles and deduces action from these without respect to outcomes, while utilitarianism looks at the greatest good.

It's rarely one or the other - like most things, it's a combination of both. Which in and of itself is a bit utilitarian - whatever works best.
Title: Re: utilitarianism
Post by: Asmodean on April 28, 2020, 07:55:41 AM
Quote from: Old Seer on April 24, 2020, 06:02:03 PM
On the first count we are all enslaved to each other via money.
But "we" are not. I'm in a voluntary, consensual relationship with my employer, who exchanges money for my services. Money here is a means to and end, as are my services. My and my employer's ends may differ, but there is no threat of force involved at any stage of this relationship.

QuoteYou must serve someone to acquire it.
To serve is not the same as to be a slave.

QuoteInter-enslavement, can't get around it.
Ah, but you can. Call it symbiosis - mutual reliance, one you can unilaterally break if you so choose.

QuoteThe universe enslaves me to do its ways. I have to walk (work) over to the apple tree to get the apple.
This just thins out the term "slavery" to mean practically any interaction. I do not accept the scope of your definition.

QuoteI agree to be enslaved for my own good.
Then you are not enslaved.
Title: Re: utilitarianism
Post by: xSilverPhinx on April 28, 2020, 06:04:56 PM
These moral questions result in moral judgments, and it makes more sense to think that moral judgments come from the brain and not some objective framework created by a divine external force. Therefore, you can't divorce these moral questions from the psychological and/or social contexts in which they find themselves. And people are more emotionally than rationally driven in their decisions.

For instance, you could resort to a more or less utilitarian solution to a moral problem depending on how emotionally invested you are in the outcome. To make my point clear, if you found yourself faced with the Trolley Problem and you had to decide which person or people have to die, your answer could vary if a loved one was in either group. Because emotions are not rational, if a loved one was on one track by him or herself, the odds are way greater that you would sacrifice 5 strangers on the other track in order to save him or her.
Title: Re: utilitarianism
Post by: billy rubin on April 28, 2020, 06:52:06 PM
hey asmo

Quote from: Asmodean on April 24, 2020, 11:28:42 AM
Quote from: billy rubin on April 23, 2020, 08:48:08 PM
utilitarianism is the philosophy that a lot of people assert is optimum:
This depends on where you are. In some areas, people largely put the individual before the collective. Those are the "free societies."

i live in a so-called free society, maybe one of the most outspoken, and the general belief here is cooperation for the general good, even at individual cost. at least at first. but as individuals become more and more disadvantaged, they are beginning to clamor for a balance point more in their direction.

this implies that collectivism isn't very deep. but is it very deep anywhere? societies like china and india have adopted collectivist responses to the pandemic, but they have generally been forced by an authoritarian government, maybe the marxist system old seer has pointed out. im not sure what to make of new zealand.

Quote
Quotebut there's a catch.

maximum good for the maximum number means that some people's good will be sacrificed for the greater good of more people.
That's not even a deal-breaker to me. The top of the "catch" pile, as I see it, is the level of collectivism required.

so you're saying that there is no general philosophy that you regard as better or worse than another?

what is the measure of value that you use to determine "the level of collectivism required" in a pandemic?

required for what?

- maximum number of lives saved?
- maximum number of healthy people?
- maximum number of people saved who will save other people?
- maximum value of people saved-- artists, politicians, virologists?
Title: Re: utilitarianism
Post by: billy rubin on April 28, 2020, 07:02:06 PM
Quote from: xSilverPhinx on April 28, 2020, 06:04:56 PM
These moral questions result in moral judgments, and it makes more sense to think that moral judgments come from the brain and not some objective framework created by a divine external force. Therefore, you can't divorce these moral questions from the psychological and/or social contexts in which they find themselves. And people are more emotionally than rationally driven in their decisions.

For instance, you could resort to a more or less utilitarian solution to a moral problem depending on how emotionally invested you are in the outcome. To make my point clear, if you found yourself faced with the Trolley Problem and you had to decide which person or people have to die, your answer could vary if a loved one was in either group. Because emotions are not rational, if a loved one was on one track by him or herself, the odds are way greater that you would sacrifice 5 strangers on the other track in order to save him or her.

uh oh

trolley problem.

that asks the utilitarianism question in its most basic terms.

did you ever hear the version with the drawbridge keeper who had to decide whether to crush his little boy, who had fallen into the gear train, in order to lower the bridge for the runaway passenger train?

but the trolley problem is usually phrased in order to catch you on the horns of the dilemma: kill this set of people, or that set of people . . . decide.

i used to believe that there was another solution to the trolley problem, which was to refuse to participate, and not to touch the switch lever. i was a theist back then, and my reasoning was that there was going to be death either way, and the most important decision was whether or not i was willing to be part of a dilemma that required me to kill. the justification for this line of thinking is that allowing death by inaction does not carry the same responsibility as causing death by action, even when the number who die is greater.

an extreme version is the dilemma as to whether to torture a terrorist in order to determine the deactivation code for his bomb that will kill many people. is torture acceptable in this instance? if it is, when is it ever wrong?

i don't think that way anymore, but neither do i have a general solution to the problem. i don't think there is a general solution.
Title: Re: utilitarianism
Post by: xSilverPhinx on April 28, 2020, 08:44:39 PM
Quote from: billy rubin on April 28, 2020, 07:02:06 PM
Quote from: xSilverPhinx on April 28, 2020, 06:04:56 PM
These moral questions result in moral judgments, and it makes more sense to think that moral judgments come from the brain and not some objective framework created by a divine external force. Therefore, you can't divorce these moral questions from the psychological and/or social contexts in which they find themselves. And people are more emotionally than rationally driven in their decisions.

For instance, you could resort to a more or less utilitarian solution to a moral problem depending on how emotionally invested you are in the outcome. To make my point clear, if you found yourself faced with the Trolley Problem and you had to decide which person or people have to die, your answer could vary if a loved one was in either group. Because emotions are not rational, if a loved one was on one track by him or herself, the odds are way greater that you would sacrifice 5 strangers on the other track in order to save him or her.

uh oh

trolley problem.

that asks the utilitarianism question in its most basic terms.

did you ever hear the version with the drawbridge keeper who had to decide whether to crush his little boy, who had fallen into the gear train, in order to lower the bridge for the runaway passenger train?

but the trolley problem is usually phrased in order to catch you on the horns of the dilemma: kill this set of people, or that set of people . . . decide.

i used to believe that there was another solution to the trolley problem, which was to refuse to participate, and not to touch the switch lever. i was a theist back then, and my reasoning was that there was going to be death either way, and the most important decision was whether or not i was willing to be part of a dilemma that required me to kill. the justification for this line of thinking is that allowing death by inaction does not carry the same responsibility as causing death by action, even when the number who die is greater.

an extreme version is the dilemma as to whether to torture a terrorist in order to determine the deactivation code for his bomb that will kill many people. is torture acceptable in this instance? if it is, when is it ever wrong?

i don't think that way anymore, but neither do i have a general solution to the problem. i don't think there is a general solution.

I think the really interesting thing we learn from the Trolley Problem is that there is no universal answer, though most people when asked will sway toward one outcome or another depending on how you ask the question and frame the problem. And that's where psychological differences or contexts come in.

For instance, most people will say they would pull the lever that switches tracks and results in the train killing 1 instead of 5 strangers. That is possibly the most rational utilitarian approach. But if you ask them if they would do the same if that 1 person was a loved one, most say they would opt to save their loved one and murder the 5 strangers. No longer rational.

See I used the word 'murder' there? Some might think it's a bit of a strong choice of word. The way the above Trolley Problem is framed, pulling a lever gives the person making the choice some psychological distance. They're just making a choice and pulling a lever, not actively killing the 1 person or 5 people. If you add the second part of the Trolley problem, this becomes clear.

For those who are not familiar, the second part goes like this:

Say you are on a bridge standing behind a really heavy person and you see a train approaching. On the tracks below just a little further ahead there are 5 people tied up who will die if the train is not stopped. The extremely heavyset man is just above the tracks, and if you push him, he will fall and stop the train, saving the other 5. All people involved are of the same 'emotional value' (strangers) to you.     

Supposing you only had these two choices, do you push him or not?   

Most people will say they would not push him. But this is odd, you may think. The outcome is exactly the same as the first part of the Trolley problem! Save 5 strangers by murdering 1 stranger. So, what's going on here?

Turns out the psychological distancing from the act of killing a person is no longer there. From the moment you push someone, it becomes murder. You are at the center of that action and are forced to take ownership of the choice and of acting on that choice, besides being keenly aware of it.
Title: Re: utilitarianism
Post by: xSilverPhinx on April 28, 2020, 08:57:38 PM
Sorry for rambling there without really answering your questions. :blahblah:

:grin:

I just love catching tiny glimpses of the gears working inside the complex mind.  :P The Trolley Problem was a pretty well designed thought experiment, IMO.
Title: Re: utilitarianism
Post by: Davin on April 28, 2020, 09:16:01 PM
Quote from: xSilverPhinx on April 28, 2020, 08:57:38 PM
Sorry for rambling there without really answering your questions. :blahblah:

:grin:

I just love catching tiny glimpses of the gears working inside the complex mind.  :P The Trolley Problem was a pretty well designed thought experiment, IMO.
I have issues with the trolley problem. I guess more about how it's used. I feel like people take such an extreme (and highly unlikely) event, and then try to pull the reasons behind the judgment and use them in less extreme situations that are more likely to happen. It's like the opposite of everything looking like a hammer. As if someone asks you how you would solve:
2x^2 + 5x + 3 = 0
and once you do, asks you to use that same method on:
2 + 5 = ?

It's a bit crazy to me.
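To make the analogy concrete, here's a minimal Python sketch (purely illustrative): the general machinery solves the quadratic just fine, but hauling it out for "2 + 5" is the kind of mismatch I mean.

import math

def solve_quadratic(a: float, b: float, c: float) -> tuple:
    # Quadratic formula for a*x^2 + b*x + c = 0 (assumes real roots).
    d = b * b - 4 * a * c  # discriminant
    return ((-b + math.sqrt(d)) / (2 * a), (-b - math.sqrt(d)) / (2 * a))

print(solve_quadratic(2, 5, 3))  # -> (-1.0, -1.5)
print(2 + 5)                     # -> 7; no discriminant required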
Title: Re: utilitarianism
Post by: billy rubin on April 28, 2020, 09:22:40 PM
Quote from: xSilverPhinx on April 28, 2020, 08:44:39 PM
For instance, you could resort to a more or less utilitarian solution to a moral problem depending on how emotionally invested you are in the outcome. To make my point clear, if you found yourself faced with the Trolley Problem and you had to decide which person or people have to die, your answer could vary if a loved one was in either group. Because emotions are not rational, if a loved one was on one track by him or herself, the odds are way greater that you would sacrifice 5 strangers on the other track in order to save him or her.

i suggest that the choice is entirely rational, because loved ones are likely kin, and by saving known kin you are saving a portion of your genotype. this is orthodox sociobiology, using w. d. hamilton's model of inclusive fitness.

assuming a nearby stranger might share perhaps 1/128 of your chromosomes, that's the same as you share with a pretty distant cousin some seven divisions removed. but your child shares 1/2 of your genes. so your child contains 64 times as much of you as the stranger does. you would have to save 64 strangers to do your genotype as much good as saving your child.

if the unit of selection is the gene, and if there is a gene for altruism, it will impose a selective force 64 times as strong regarding your child as it does regarding a stranger.

so you save the loved one, no emotions required. cold hard natural selection will cause you to evolve a tendency to favor kin over strangers. obviously this has implications for motherly love, parental investment and care for offspring, and so on. it's why the stepchild is neglected compared to the natural siblings.

this has been documented pretty clearly in a number of social animals. i'm thinking of alarm calls in belding's ground squirrels, for example. they warn siblings of danger, but let cousins get eaten, when sounding alarm calls might endanger themselves.

https://science.sciencemag.org/content/197/4310/1246
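the arithmetic above is hamilton's rule in disguise: a gene for altruism spreads when r * B > C, where r is relatedness, B the benefit to the recipient, and C the cost to the altruist. a minimal python sketch using the numbers from this post (the 0.4 "cost" is an invented illustration, not from the linked paper):

def altruism_favored(r: float, benefit: float, cost: float) -> bool:
    # Hamilton's rule: altruism is selected for when r * B > C.
    return r * benefit > cost

r_child = 1 / 2       # parent-offspring relatedness
r_stranger = 1 / 128  # assumed relatedness to a random neighbor

print(r_child / r_stranger)                    # -> 64.0 strangers per child
print(altruism_favored(r_child, 1.0, 0.4))     # risk 0.4 to save your child: True
print(altruism_favored(r_stranger, 1.0, 0.4))  # same risk for a stranger: False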

Quote

I think the really interesting thing we learn from the Trolley Problem is that there is no universal answer, though most people when asked will sway toward one outcome or another depending on how you ask the question and frame the problem. And that's where psychological differences or contexts come in.

For instance, most people will say they would pull the lever that switches tracks and results in the train killing 1 instead of 5 strangers. That is possibly the most rational utilitarian approach. But if you ask them if they would do the same if that 1 person was a loved one, most say they would opt to save their loved one and murder the 5 strangers. No longer rational.

See I used the word 'murder' there? Some might think it's a bit of a strong choice of word. The way the above Trolley Problem is framed, pulling a lever gives the person making the choice some psychological distance. They're just making a choice and pulling a lever, not actively killing the 1 person or 5 people. If you add the second part of the Trolley problem, this becomes clear.

For those who are not familiar, the second part goes like this:

Say you are on a bridge standing behind a really heavy person and you see a train approaching. On the tracks below just a little further ahead there are 5 people tied up who will die if the train is not stopped. The extremely heavyset man is just above the tracks, and if you push him, he will fall and stop the train, saving the other 5. All people involved are of the same 'emotional value' (strangers) to you.     

Supposing you only had these two choices, do you push him or not?   

Most people will say they would not push him. But this is odd, you may think. The outcome is exactly the same as the first part of the Trolley problem! Save 5 strangers by murdering 1 stranger. So, what's going on here?

Turns out the psychological distancing from the act of killing a person is no longer there. From the moment you push someone, it becomes murder. You are at the center of that action and are forced to take ownership of the choice and of acting on that choice, besides being keenly aware of it.


^^^this was my dilemma regarding being the agent in a trolley scenario, versus standing by and watching.
Title: Re: utilitarianism
Post by: xSilverPhinx on April 28, 2020, 09:23:32 PM
Quote from: Davin on April 28, 2020, 09:16:01 PM
Quote from: xSilverPhinx on April 28, 2020, 08:57:38 PM
Sorry for rambling there without really answering your questions. :blahblah:

:grin:

I just love catching tiny glimpses of the gears working inside the complex mind.  :P The Trolley Problem was a pretty well designed thought experiment, IMO.
I have issues with the trolley problem. I guess more about how it's used. I feel like people take such an extreme (and highly unlikely) event, and then try to pull the reasons behind the judgment and use them in less extreme situations that are more likely to happen. It's like the opposite of everything looking like a hammer. As if someone asks you how you would solve:
2x^2 + 5x + 3 = 0
and once you do, asks you to use that same method on:
2 + 5 = ?

It's a bit crazy to me.

I think those people are misusing the Trolley Problem when they generalise it in that way. I don't think the point of it all is to predict or model what people will do in so and so circumstances, but rather it's a psychological tool to shed light on moral decision making and judgements.

I think the reason it has to be so extreme in its examples is that there are way more variables in the moral grey areas, so it makes sense not to want to tread in those areas. But of course, most moral decisions are made taking into account a huge number of factors, and people will decide things differently based on their biology, experiences, beliefs, etc.
Title: Re: utilitarianism
Post by: xSilverPhinx on April 28, 2020, 09:52:09 PM
Quote from: billy rubin on April 28, 2020, 09:22:40 PM
Quote from: xSilverPhinx on April 28, 2020, 08:44:39 PM
For instance, you could resort to a more or less utilitarian solution to a moral problem depending on how emotionally invested you are in the outcome. To make my point clear, if you found yourself faced with the Trolley Problem and you had to decide which person or people have to die, your answer could vary if a loved one was in either group. Because emotions are not rational, if a loved one was on one track by him or herself, the odds are way greater that you would sacrifice 5 strangers on the other track in order to save him or her.

i suggest that the choice is entirely rational, because loved ones are likely kin, and by saving known kin you are saving a portion of your genotype. this is orthodox sociobiology, using w. d. hamilton's model of inclusive fitness.

assuming a nearby stranger might share perhaps 1/128 of your chromosomes, that's the same as you share with a pretty distant cousin some seven divisions removed. but your child shares 1/2 of your genes. so your child contains 64 times as much of you as the stranger does. you would have to save 64 strangers to do your genotype as much good as saving your child.

if the unit of selection is the gene, and if there is a gene for altruism, it will have a selective force 64 times as strong in your child as it does in a stranger.

so you save the loved one, no emotions required. cold hard natural selection will cause you to evolve a tendency to favor kin over strangers.

Emotions have a genetic basis and evolved as well. ;) And if you research a bit on the neuroscience of decision-making you will see just how important emotions are in everyday decisions. I'd bet they take part in ALL decisions. Marketing has had this figured out since the time of Edward Bernays (Freud's nephew). There's even a neurological condition (I'm racking my brain trying to remember the name of the disorder, but can't...maybe it'll come to me soon) in which sufferers have a really hard time deciding basic stuff because they have impaired emotion, even though their ability to rationalise and IQ are normal.

I ask you: if you had to make a choice to save your child or a stranger, would you stop and think, "I need to save my kid because they share 50% of my genes," or would you not think at all and save them because you love them? Act first, think later?

I suggest you read this short essay on Jonathan Haidt's view of the rational-emotional mind and moral reasoning, just for fun ;) :

https://cct.biola.edu/riding-moral-elephant-review-jonathan-haidts-righteous-mind/

I'll just add a little snippet here, to whet your appetite.  ;D

QuoteHe [Haidt] shows us that morality is neither the result of rational reflection (a learned exercise in determining values like fairness, justice, or prevention of harm) nor merely of innate, inherited assumptions. Haidt gives us the "first rule of moral psychology": "Intuitions come first, strategic reasoning second" (367). Human morality is largely the result of internal predispositions, which Haidt calls "intuitions." These intuitions predict which way we lean on various issues, questions, or decisions. The rational mind—which the Greek philosophers so valorized and even idolized—has far less control over our moral frameworks than we might think. Intuition is much more basic and determinative than reasoning.

If you're really interested in these two systems, the fast, intuitive thinking versus the slower, rational thinking, then I suggest this book by Nobel laureate Daniel Kahneman:

[cover image: Thinking, Fast and Slow by Daniel Kahneman]
Title: Re: utilitarianism
Post by: Davin on April 28, 2020, 10:10:44 PM
Quote from: xSilverPhinx on April 28, 2020, 08:44:39 PM
Most people will say they would not push him. But this is odd, you may think. The outcome is exactly the same as the first part of the Trolley problem! Save 5 strangers by murdering 1 stranger. So, what's going on here?
That's because the result is only the same if you only consider the number of people killed vs. saved. We instinctively understand that things are more complicated than that, even in an oversimplified thought experiment. Not many are able to explain it, though. So most will say not much more than something like, "because it's different."
Title: Re: utilitarianism
Post by: xSilverPhinx on April 28, 2020, 10:49:04 PM
Quote from: Davin on April 28, 2020, 10:10:44 PM
Quote from: xSilverPhinx on April 28, 2020, 08:44:39 PM
Most people will say they would not push him. But this is odd, you may think. The outcome is exactly the same as the first part of the Trolley problem! Save 5 strangers by murdering 1 stranger. So, what's going on here?
That's because the result is only the same if you only consider the number of people killed vs. saved. We instinctively understand that things are more complicated than that, even in an oversimplified thought experiment. Not many are able to explain it, though. So most will say not much more than something like, "because it's different."

Yes, the number of people killed vs. saved is the same, but the mental paths people take to reach a course of action are different. And that's the point I think this problem is trying to show.

It takes a reductionist approach in that it removes a lot of the complexity, the extra variables that would make for a "dirty" experiment. This approach has its pros and cons, of course, and I think in the cognitive sciences results such as these can rarely be generalised.
Title: Re: utilitarianism
Post by: Davin on April 28, 2020, 10:56:32 PM
Quote from: xSilverPhinx on April 28, 2020, 10:49:04 PM
Quote from: Davin on April 28, 2020, 10:10:44 PM
Quote from: xSilverPhinx on April 28, 2020, 08:44:39 PM
Most people will say they would not push him. But this is odd, you may think. The outcome is exactly the same as the first part of the Trolley problem! Save 5 strangers by murdering 1 stranger. So, what's going on here?
That's because the result is only the same if you only consider the number of people killed vs. saved. We instinctively understand that things are more complicated than that, even in an oversimplified thought experiment. Not many are able to explain it, though. So most will say not much more than something like, "because it's different."

Yes, the number of people killed vs. saved is the same, but the mental paths people take to reach a course of action are different. And that's the point I think this problem is trying to show.

It takes a reductionist approach in that it removes a lot of the complexity, the extra variables that would make for a "dirty" experiment. This approach has its pros and cons, of course, and I think in the cognitive sciences results such as these can rarely be generalised.
I think it's more than the mental paths it takes to get to the conclusion. For instance, one might consider taking someone entirely out of danger and putting them in a position to be harmed (to death) different from (and in most cases worse than) changing the direction of a train from five people already in a position of danger to one person already in a position of danger.
Title: Re: utilitarianism
Post by: xSilverPhinx on April 28, 2020, 11:41:02 PM
Quote from: Davin on April 28, 2020, 10:56:32 PM
Quote from: xSilverPhinx on April 28, 2020, 10:49:04 PM
Quote from: Davin on April 28, 2020, 10:10:44 PM
Quote from: xSilverPhinx on April 28, 2020, 08:44:39 PM
Most people will say they would not push him. But this is odd, you may think. The outcome is exactly the same as the first part of the Trolley problem! Save 5 strangers by murdering 1 stranger. So, what's going on here?
That's because the result is only the same if you only consider the number of people killed vs. saved. We instinctively understand that things are more complicated than that, even in an oversimplified thought experiment. Not many are able to explain it, though. So most will say not much more than something like, "because it's different."

Yes, the number of people killed vs. saved is the same, but the mental paths people take to reach a course of action are different. And that's the point I think this problem is trying to show.

It takes a reductionist approach in that it removes a lot of the complexity, the extra variables that would make for a "dirty" experiment. This approach has its pros and cons, of course, and I think in the cognitive sciences results such as these can rarely be generalised.
I think it's more than the mental paths it takes to get to the conclusion. For instance, one might consider taking someone entirely out of danger and putting them in a position to be harmed (to death) different from (and in most cases worse than) changing the direction of a train from five people already in a position of danger to one person already in a position of danger.

Ah ok, I think I understand the point you're making now. Yes, I think you're right. There are other decisions involved. I think it has to do in part with the psychological distancing I mentioned earlier. Even empathy is involved.

It's interesting that you put it that way. Just as an addendum to my point about emotions driving decisions in a reply to billy rubin: in brain scans evaluating moral decision-making there is higher activation in prefrontal regions (such as the orbitofrontal cortex and ventromedial cortex, both more or less just behind the forehead and above the nasal cavity), which are typically not very activated in some psychopaths, for example. These two regions are both very important in these kinds of decisions and are linked to the emotional centers in the brain.
Title: Re: utilitarianism
Post by: Asmodean on April 29, 2020, 12:32:25 AM
There is a thing other than the trolley problem that I think merits more of a... well, think, in this day and age: self-driving cars. There's talk about programming them with some sort of ethical protocols or what have you. Personally, I oppose the idea entirely - a vehicle's priority, if it can be thusly called, should in my opinion always be the safety of its own occupants.

Come to think of it, that's no more than I demand of myself when driving with passengers, so if for some reason I AM barreling down the motorway at 110 km/h, carrying another person, surrounded by sheer cliffs, and suddenly there IS a kindergarten full of babies, each with a puppy and a kitty besides, well within my maneuvering distance... Requiescat in pace.

I actually did a survey about this, and they analyzed my data wrongly. In the result, they claimed that so-and-so many people would save a pregnant lady before an old guy, or an animal before a human... No. From the point of view of a car, placed in a bad situation through no fault of its own, I always sided with my occupants. Simple as that.

This deserves its own thread though, I think, so I digress before we go into more realistic examples and this shit gets well and truly dark.
Title: Re: utilitarianism
Post by: billy rubin on April 29, 2020, 12:39:15 AM
Quote from: xSilverPhinx on April 28, 2020, 09:52:09 PM
Emotions have a genetic basis and evolved as well. ;) And if you research a bit on the neuroscience of decision-making you will see just how important emotions are in everyday decisions. I'd bet they take part in ALL decisions. Marketing has had this figured out since the time of Edward Bernays (Freud's nephew). There's even a neurological condition (I'm racking my brain trying to remember the name of the disorder, but can't...maybe it'll come to me soon) in which sufferers have a really hard time deciding basic stuff because they have impaired emotion, even though their ability to rationalise and IQ are normal.

I ask you: if you had to make a choice to save your child or a stranger, would you stop and think, "I need to save my kid because they share 50% of my genes," or would you not think at all and save them because you love them? Act first, think later?

no, no . . . no reasoning required in this model. it's mechanical, like any example of natural selection. animals recognize relatedness for close kin by lifelong association, and then make intuitive decisions based on similarity. think of phenotypic associative mating. if you can distinguish between your child and a stranger--to any degree--then natural selection over time will mindlessly select for behaviors that favor survival of genes that recognize kin and encourage nepotism. im asserting that love is a mechanical motivator, like fear, pain, cold, and hunger.

but i'm not at all saying that emotions don't enter into the equation. i personally think that the experience of emotion is the heritable physical trait that is subject to selection, and is an important part of the model im suggesting. emotion is the brain's immediate motivation for action, and it's the lizard brain that sees the offspring in danger and screams Save It! or sees the stranger, and coolly concludes, Not Worth The Risk.

Quote
I suggest you read this short essay on Jonathan Haidt's view of the rational-emotional mind and moral reasoning, just for fun ;) :

https://cct.biola.edu/riding-moral-elephant-review-jonathan-haidts-righteous-mind/

I'll just add a little snippet here, to whet your appetite.  ;D

QuoteHe [Haidt] shows us that morality is neither the result of rational reflection (a learned exercise in determining values like fairness, justice, or prevention of harm) nor merely of innate, inherited assumptions. Haidt gives us the "first rule of moral psychology": "Intuitions come first, strategic reasoning second" (367). Human morality is largely the result of internal predispositions, which Haidt calls "intuitions." These intuitions predict which way we lean on various issues, questions, or decisions. The rational mind—which the Greek philosophers so valorized and even idolized—has far less control over our moral frameworks than we might think. Intuition is much more basic and determinative than reasoning.

i will read the rest of it after i dig my stinking culvert. just finished teaching the number three son that you can loosen corrosion-frozen hose connections enough to remove them by hand by banging the joint on the cellar steps, if you don't want to walk 1000 feet to the warehouse to get a wrench.

Title: Re: utilitarianism
Post by: billy rubin on April 29, 2020, 12:48:53 AM
Quote from: Asmodean on April 29, 2020, 12:32:25 AM
There is a thing other than the trolley problem that I think merits more of a... well, think, in this day and age: self-driving cars. There's talk about programming them with some sort of ethical protocols or what have you. Personally, I oppose the idea entirely - a vehicle's priority, if it can be thusly called, should in my opinion always be the safety of its own occupants.

well that's interesting, because it's a real-world trolley problem. i think there's going to have to be some soul-searching on the part of the programmers in this field.

take a self-driving vehicle--no occupants--that is programmed to recognize pedestrians, slow down and stop when they are in its path, and steer around them when it can't. it pulls off a high-speed motorway and the brakes fail. ahead is an unavoidable crowd of people, a coarse-grained obstacle: clumps of people, large and small, with narrow gaps.

the vehicle cannot stop. where should it steer?


^^this question is going to have to be addressed, in the real world, pretty soon. i'd never thought of it before.
Title: Re: utilitarianism
Post by: Asmodean on April 29, 2020, 07:26:57 AM
Quote from: billy rubin on April 29, 2020, 12:48:53 AM
take a self-driving vehicle--no occupants--that is programmed to recognize pedestrians, slow down and stop when they are in its path, and steer around them when it can't. it pulls off a high-speed motorway and the brakes fail. ahead is an unavoidable crowd of people, a coarse-grained obstacle: clumps of people, large and small, with narrow gaps.
See? It's already getting dark in a hurry.

I think it should emergency-maneuver like a sensible "AI." Avoid frontal impact, avoid oncoming lanes, use side barriers to slow down (If there be a ditch in lieu of side barriers - go in it at an angle conducive to dumping maximum speed for a minimum of damage. If there be a field - even better)

A thing one also ought to factor in is the "AI's" ability to get on that horn faster than any human would while maneuvering for their lives. Then, one does have to take into account the pedestrians' ability to look towards the sound, do a quick risk assessment and dive for the nearest "not here."
Title: Re: utilitarianism
Post by: billy rubin on April 29, 2020, 08:37:49 AM
well, avoiding the difficult decision is a very human thing to program into an AI, and i agree the algorithm should run down a list of strategies to attempt before concluding that the unavoidable clumps of people are genuinely unavoidable.

but that sidesteps the decision of interest. sooner or later the manufacturers are going to have to decide:

- hit the car or hit the motorcycle?

- hit the bus or hit the pedestrian?

- hop the kerb into the crowd or plow into the jaywalkers?

- kill the child or kill the adult?

im curious as to what's actually being programmed in right now. heading for the ditch is always the better strategy, but what should we program in for the situations without one? what shall be the general rules?

once i drove a semi into a high-speed tunnel in pennsylvania, a long one, a mile and a half, narrow, two lanes with no pull-off space.

as i went in i blew a radiator hose and the tunnel disappeared in steam. when it cleared i was still going, 60 mph. nowhere to stop.

then the temperature gauge began to rise. the check engine light went on. the alarms began to sound, the dash display began to flash warnings, ENGINE DAMAGE IMMINENT. SHUT DOWN SHUT DOWN.

. . . and then, in a two-lane high-speed tunnel three quarters of a mile from either end, the computer turned my engine off.

as i slowed down, i thought over the situation. if i stopped the truck in the tunnel, i would block one lane. if i was then hit from behind, the wreck might block the entire tunnel. more and more cars might pile up. there might be a fire. if there were a fire, i was a long way from emergency crews and a blocked tunnel would keep it that way. many people could die. it had happened before. nasty business.

all this took about five seconds to process, and then i reached down and pushed a button the engineers had thoughtfully provided for any available human being to consider:

SHUTDOWN OVERRIDE

the alarms still sounded, but i started the truck in motion, drove the distance to the end of the tunnel, pulled over, shut the hot motor off, and called for help. end of problem.

that override button might have saved 20 lives, but it was the last truck i drove that had one. i asked my company afterwards why the new trucks didn't come with overrides, and told them my story.

they said, well, after all, how often does that happen. . .

their human algorithm had made the decision that the cost of the override circuit was too high, considering the rarity of needing one. of course, i had a different point of view.

what should we program into a self-driving truck to solve this problem?
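the logic at issue is tiny. here's a minimal python sketch of it (the function name and the 120-degree threshold are invented for illustration, not any real engine computer's interface):

def engine_command(coolant_temp_c: float, override_pressed: bool) -> str:
    # The computer wants to shut down on overheat to protect the engine.
    SHUTDOWN_THRESHOLD_C = 120.0  # hypothetical overheat limit
    if coolant_temp_c < SHUTDOWN_THRESHOLD_C:
        return "RUN"
    if override_pressed:
        # a human judged that stopping here is worse than engine damage
        return "RUN WITH ALARMS"
    return "SHUTDOWN"

the expensive part was never the button or the extra branch; it was deciding that a human's situational judgment should be allowed to outrank the engine-protection rule.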


Title: Re: utilitarianism
Post by: Asmodean on April 29, 2020, 09:26:01 AM
Quote from: billy rubin on April 29, 2020, 08:37:49 AM
but that sidesteps the decision of interest. sooner or later the manufacturers are going to have to decide:

- hit the car or hit the motorcycle?

- hit the bus or hit the pedestrian?

- hop the kerb into the crowd or plow into the jaywalkers?

- kill the child or kill the adult?
Do let us not forget, "will the potential customer buy my product, which might 'willingly' kill him, or that brand over there *point,* which will always try to save its occupants?"

If it's just me, that's one thing, but let's do a "for instance:" what if I had a kid? Would I want to put said kid in the car in question alongside me, knowing full well that it would sacrifice him/her under certain extreme circumstances? That ought to churn through the head of every self-driving-car-buying parent, no?

This really ought to have its own thread. I may or may not see to that after work, as I actually have more to respond to here.
Title: Re: utilitarianism
Post by: billy rubin on April 29, 2020, 01:32:19 PM
sure.

don't forget that the vehicle is empty of occupants. that's the commercial business model right now.

we have those being tested over here now, including little automated delivery vehicles.

plus full-size semi trucks that go coast to coast without a driver. just a tester in them right now.

Title: Re: utilitarianism
Post by: Asmodean on April 29, 2020, 02:43:39 PM
I think automated HGVs will have an additional potential weakness in cargo safety. One could abuse their ethical algorithms (or just their machine common sense) to force them into a stop, raid the trailer and be away long before anyone even suspects anything.

When it comes to whom to run over, and how, when the "AI" in question is not carrying passengers, I'd say lowest human cost when that can be clearly determined. When not - continue on course (course here may also involve emergency maneuvers - I don't necessarily mean "plough on ahead").
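That rule is simple enough to state in code; the hard part is the perception feeding it. A minimal Python sketch, with invented names and a made-up confidence threshold:

def choose_maneuver(casualty_estimates: dict, confidence: float,
                    planned: str = "stay_course", threshold: float = 0.9) -> str:
    # casualty_estimates maps each feasible maneuver to estimated human cost.
    # Act on the estimates only when perception is trustworthy enough;
    # otherwise keep the present (pre-planned) course.
    if confidence >= threshold:
        return min(casualty_estimates, key=casualty_estimates.get)
    return planned

options = {"stay_course": 3, "swerve_left": 1, "swerve_right": 5}
print(choose_maneuver(options, confidence=0.95))  # -> swerve_left
print(choose_maneuver(options, confidence=0.40))  # -> stay_course

Under uncertainty it defaults to the planned course rather than gambling on noisy estimates - which is exactly the "when not - continue on course" clause.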
Title: Re: utilitarianism
Post by: Davin on April 29, 2020, 03:02:08 PM
Quote from: xSilverPhinx on April 28, 2020, 11:41:02 PM
Quote from: Davin on April 28, 2020, 10:56:32 PM
Quote from: xSilverPhinx on April 28, 2020, 10:49:04 PM
Quote from: Davin on April 28, 2020, 10:10:44 PM
Quote from: xSilverPhinx on April 28, 2020, 08:44:39 PM
Most people will say they would not push him. But this is odd, you may think. The outcome is exactly the same as the first part of the Trolley problem! Save 5 strangers by murdering 1 stranger. So, what's going on here?
That's because the result is only the exact same if you only consider the amount of people killed vs. saved. We instinctively understand that things are more complicated than that, even in an oversimplified thought experiment. Not many are able to explain it though. So most will say not much more than something like, "because it's different."

Yes, the amount of people killed vs. saved is the same, but the mental paths people take to reach a course of action are different. And that's the point I think this problem is trying to show.

It takes a reductionist approach in that it removes a lot of the complexity, which are extra variables and therefore make a "dirty" experiment. This approach has its pros and cons, of course and I think in the cognitive sciences results such as these can rarely be generalised.
I think it's more than the mental paths it takes to get to the conclusion. For instance, one might consider taking someone entirely out of danger and putting them in a position to be harmed (to death), different (and in most cases worse), from changing the direction of a train from five people already in a position of danger to one person already in a position of danger.

Ah ok, I think I understand the point you're making now. Yes, I think you're right. There are other decisions involved. I think it has to do in part with the psychological distancing I mentioned earlier. Even empathy is involved.

It's interesting that you put it that way. Just as an addendum to my point about emotions driving decisions in a reply to billy rubin, in brain scans evaluating moral decision-making there is higher activation in prefrontal regions (such as the orbitofrontal cortex and ventromedial cortex, both more or less just behind the forehead and above the nasal cavity) which are typically not very activated in some psychopaths, for example. These two regions are both very important in these kinds of decisions and are linked to the emotional centers in the brain.
I don't think that many decisions matter without emotions. Emotions help drive us and without them, without caring about anything, there isn't any reason at all to decide one way or the other. That doesn't mean that reasoning doesn't factor into things either. And we're all on some level between purely emotional reactions and well thought out choices. And we're not even the same from day to day. Most of us are fairly stable from day to day or even decision to decision, but we are always fluctuating. Even psychopaths become emotional about many things.

Anyway, good luck with billy, your side is enlightening and interesting at least.
Title: Re: utilitarianism
Post by: billy rubin on April 29, 2020, 03:22:33 PM
Quote from: Asmodean on April 29, 2020, 02:43:39 PM
When it comes to whom to run over, and how, when the "AI" in question is not carrying passengers, I'd say lowest human cost when that can be clearly determined. When not - continue on course (Course here may also involve emergency maneuvers - I don't necessarily mean "plough on ahead")

"lowest human cost" iz utilitarianism, the greatest good for the greatest number. so killing one to save five iz the guideline. or trade two for twenty.

the "plough ahead" choice iz intdresting. if the vehicle must strike one of two equal zized groups of people, doez it use a random number table to pick which one?

AI is sophisticated enough now to make value judgements based on instantaneous data acquisition. should the random number table be replaced by a hierarchy of values? preserve children over adults, or preserve uniformed emergency personnel over those not so dressed?
Title: Re: utilitarianism
Post by: Asmodean on April 29, 2020, 03:33:24 PM
Quote from: billy rubin on April 29, 2020, 03:22:33 PM
"lowest human cost" iz utilitarianism, the greatest good for the greatest number. so killing one to save five iz the guideline. or trade two for twenty.
It's no more Utilitarianism than attending a public school is Socialism. Besides, I don't care about their "good," only their relative numbers. Keep in mind, less meat in the way equals less damage to Asmo the Delivery Truck as well as to the unfortunate smaller crowd.

Quotethe "plough ahead" choice iz intdresting. if the vehicle must strike one of two equal zized groups of people, doez it use a random number table to pick which one?
To what end would one use a random number here? Continue on your present course (pre-planned maneuvers included)

QuoteAI is sophisticated enough now to make value judgements based on instantaneous data acquisition. should the random number table be replaced by a hierarchy of values? preserve children over adults, or preserve uniformed emergency personnel over those not so dressed?
Actually, the reason I put "AI" in quotation marks throughout this thing is that yes, computers are fast at logical data analysis. They are also remarkably stupid at pretty much anything else.
Title: Re: utilitarianism
Post by: billy rubin on April 29, 2020, 04:42:29 PM
Quote from: Asmodean on April 29, 2020, 03:33:24 PM
Quote from: billy rubin on April 29, 2020, 03:22:33 PM
"lowest human cost" iz utilitarianism, the greatest good for the greatest number. so killing one to save five iz the guideline. or trade two for twenty.
It's no more Utilitarianism than attending a public school is Socialism. Besides, I don't care about their "good," only their relative numbers. Keep in mind, less meat in the way equals less damage to Asmo the Delivery Truck as well as to the unfortunate smaller crowd.

public school IS socialism. i have no problem with that. i'd like more of it.

but "relative numbers" is precisely what utilitarianism is all about. "more or less meat" segues into the trolley problem's fat man. do we push him onto the tracks to save a million people? if no, why not? if yes, then how about two people?


Quote
To what end would one use a random number here? Continue on your present course (pre-planned maneuvers included)

good point. i was wondering how to decide what to do if the vehicle has lost control and can't stay on the road. in that case a decision would have to be made. but if we're talking about jaywalking pedestrians, then defaulting to the original legal path is as good as any other.

Quote
Actually, the reason I put "AI" in quotation marks throughout this thing is that yes, computers are fast at logical data analysis. They are also remarkably stupid at pretty much anything else.

but they do what they're told. so we're talking about what the human beings who programmed their software decided that they would do. if the programmers decide that police, firemen, and emergency medical technicians are worth more, then they can program the vehicle to run over ordinary people until a certain utilitarian tipping point is reached.

perhaps one fireman is worth two bicyclists. and the value of a fireman goes up by, say, ten times, if the vehicle has detected more than usual activity on the emergency radio channel.

in the end, some sort of algorithm is going to be put into place. if we can program an autonomous vehicle to avoid high-cost accidents, then relative values are inevitably going to be assigned. even not to assign is to assign. i'm very interested in that discussion, because i see it right now, in my own society.

my president has been able to obtain multiple covid19 tests, in order to catch his illness early. his vice president doesn't wear a mask in hospitals, "because he is tested regularly and is known not to be infectious." athletes and celebrities have been able to obtain covid19 tests as well in my country. high-value people get the attention their value merits.

but not me. even though i drive a truck and am therefore essential, i can't get a test to find out whether i've been infected. i am not of sufficiently high value.

seems to me, if my cell phone has a chinese-style "social value" score, then that might someday be available to the software to decide whether to run me over or not.
Title: Re: utilitarianism
Post by: Asmodean on April 30, 2020, 07:30:47 AM
Quote from: billy rubin on April 29, 2020, 04:42:29 PM
public school IS socialism. i have no problem with that. i'd like more of it.

Quote from: Scandinavia, in response to a certain US senatorNope.
It's a little nit-picky, but the distinction is not without importance. What it is in our case of self-driving cars is coincidentally utilitarian.

Quotebut "relative numbers" is precisely what utilitarianism is all about. "more or less meat" segues into the trolley problem's fat man. do we push him onto the tracks to save a million people? if no, why not? if yes, then how about two people?
Not at all. There is no "we" pushing a man here. There is Asmo, the self-driving freight truck, spinning wildly out of control - too wildly to stop, but at the same time not wildly enough to be incapable of making trajectory-related choices. To it, limiting potential damage is a multivariate analysis, purely mathematical in nature. Even if its builders cared about such things as saving lives, it doesn't. I do propose preserving human life to be a paramount variable in emergency calculations, thus making the problem of "which life is worth more" unavoidable, but I also propose the solution; priority one: the legal occupants of the vehicle. If none present, or already accounted for, priority two: the highest number. Here, no distinction is made whether the less-fortunate lower number is fat people or pregnant ladies or kindergarten classes on an outing.

In the special case of an unmanned vehicle, these processes would be Utilitarian, which is unsurprising and probably unavoidable, but without any adherence to Utilitarianism. What I propose here is a much simpler philosophy.
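For concreteness, here is a minimal sketch of that priority ordering in Python. Every name and input below is mine and purely hypothetical - a sketch of the proposal, not any real vehicle's software:

Code:
# a minimal sketch of the proposed priorities; all inputs hypothetical
def pick_trajectory(options):
    # options: list of (label, occupants_at_risk, bystanders_struck)
    # priority one: the legal occupants of the vehicle
    safe = [o for o in options if o[1] == 0] or options
    # priority two: the highest number - strike the fewest people,
    # with no weighting by who they happen to be
    return min(safe, key=lambda o: o[2])[0]

# swerving endangers nobody inside and strikes 1 rather than 3:
print(pick_trajectory([("stay course", 0, 3), ("swerve", 0, 1)]))  # swerve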

Quote
but they do what they're told. so we're talking about what the human beings who programmed their software decided that they would do. if th eprogrammers decide that police, firemen, and emergency medical technicians are worth more, then they can program the vehicle to run over ordinary people until a certain utilitarian tipping point is reached.
They can, and probably will. I'm doing something similar here - proposing how I would approach programming Asmo, the self-driving freight truck.

Quoteperhaps one fireman is worth two bicyclists.
Respectful cyclists, or the kind who don't ever use hand-signals, hold up traffic and run red lights? Because in the latter case, a fireman, who is not also one of them, is worth a hell of a lot more.

There is a serious point to be made here, and a good reason to avoid such assignment of value entirely; how does Asmo, the self-driving freight truck, know whether or not the fireman is also a cyclist, or a cyclist is also a fireman?

Quotein the end, some sort of algorithm is going to be put into place. if we can program an autonomous vehicle to avoid high-cost accidents, then relative values are inevitably going to be assigned. even not to assign is to assign. i'm very interested in that discussion, because i see it right now, in my own society.
As you have probably gathered, I too find this stuff way too interesting. I agree, them algorithms are coming. What's more, people will do their utmost to hack, disable and modify them in their own vehicle fleets. When it well and truly hits the open road, it will be a bit of a shitstorm, I think.

Quotebut not me. even though i drive a truck and am therefore essential, i can't get a test to find out whether i've been infected. i am not of sufficiently high value.
I believe the excuse my country was using is that either you are sick, in which case you may get tested, or you are not, in which case you won't. I'm not certain of whether or not they grouped potential candidates to be tested by how essential their job is, or for that matter, whether they work alone or not, but my point is this; don't take it too personally - when the Zs are a-coming, we are "all" just numbers in a spreadsheet.
Title: Re: utilitarianism
Post by: billy rubin on April 30, 2020, 03:41:00 PM
i'm looking for common terms, asmo

here is jeremy bentham's definition of utility, which has been called utilitarianism:

By the principle of utility is meant that principle which approves or disapproves of every action whatsoever according to the tendency it appears to have to augment or diminish the happiness of the party whose interest is in question: or, what is the same thing in other words, to promote or to oppose that happiness.

i would suggest that death would be an unhappy state.

is ^^^this how you are using the term?

Title: Re: utilitarianism
Post by: billy rubin on April 30, 2020, 04:58:25 PM
silver, looking over robert's review of haidt's book, it appears to me that what haidt calls "intuition" is interchangeable with what i might call "evolved responses." specifically i agree with his assertion that much of human morality is post hoc rationalization for intuited decisions, which i would call evolved behaviour patterns. i'm going to have to order his book.

i'm not clear on whether haidt proposes a mechanism for the creation of intuited behaviour. i believe it is selection, and that the selective pressure is quantified by an overall heritable tendency to pass the "intuited" behaviour on to subsequent generations.

still reading but my legally mandated half hour break is up.
Title: Re: utilitarianism
Post by: Asmodean on April 30, 2020, 08:18:42 PM
Quote from: billy rubin on April 30, 2020, 03:41:00 PM
is ^^^this how you are using the term?
More or less. I'm also fine with expanding the scope to encompass related political systems and philosophies.

So if something is utilitarian, it's not necessarily utilitarianist. The first describes the applicable philosophy - the second prescribes it.
Title: Re: utilitarianism
Post by: billy rubin on April 30, 2020, 09:17:09 PM
Quote from: Asmodean on April 30, 2020, 07:30:47 AM
Quotebut "relative numbers" is precisely what utilitarianism is all about. "more or less meat" segues into the trolley problem's fat man. do we push him onto the tracks to save a million people? if no, why not? if yes, then how about two people?
Not at all. There is no "we" pushing a man here. There is Asmo, the self-driving freight truck, spinning wildly out of control - too wildly to stop, but at the same time not wildly enough to be incapable of making trajectory-related choices. To it, limiting potential damage is a multivariate analysis, purely mathematical in nature. Even if its builders cared about such things as saving lives, it doesn't. I do propose preserving human life to be a paramount variable in emergency calculations, thus making the problem of "which life is worth more" unavoidable, but I also propose the solution; priority one: the legal occupants of the vehicle. If none present, or already accounted for, priority two: the highest number. Here, no distinction is made whether the less-fortunate lower number is fat people or pregnant ladies or kindergarten classes on an outing.


i would say that asmo the self-driving freight truck is a tool in the hands of the engineers who wrote the driving program and somehow created the n-space that would be consulted to retrieve the correct decision. sure, the AI makes decisions case by case, but the choices available to it are only the ones foreseen by the programmers. if they left a variable out of the multivariate model, it would not be considered in calculating the response. as in this tesla fatality in japan:

QuoteThe driver of the Tesla had dozed off shortly before the crash, and when another vehicle ahead of him changed lanes to avoid the group, the Model X accelerated and ran into them, according to the complaint filed Tuesday in federal court in San Jose, California. Tesla is based in nearby Palo Alto.


The accident was the result of flaws in Tesla's autopilot system, including inadequate monitoring of whether the driver is alert and a lack of safeguards against unforeseen traffic situations, according to the complaint. Tesla's autopilot system has been involved in other fatal accidents, such as a 2018 crash in Mountain View, California, when a Model X driven on autopilot slammed into a concrete barrier.

the truck does care about saving lives, because it is only an extension of the mind of the programmer. it is the programmer who will or will not push the fat man onto the track, because the programmer makes the decision in advance, and writes the code that the AI will use to perform the action, based on the foreseen situation. unless i misunderstand AI. i am not a programmer.

so the asmo truck programmed to save the most lives is reflecting a utilitarianist mindset on the part of the programmer. so far as i know, there are no regulations on how to make these decisions, just ethics seminars on the part of the industry.

what sorts of ethics are important to industrialists? are they the same ethics as those held by the people asmo squashes?
Title: Re: utilitarianism
Post by: billy rubin on May 01, 2020, 09:08:46 AM
well shoot

on the way in to work i started wondering about the multivariate model for which group of humans to kill, if a self-driving truck had to decide which of three groups of people to kill.

what variables would go in?

number of people

likelihood of injury (could be predicted by vehicle speed at impact)

color of uniform? firemen over policemen over postmen?

race of victim (in a race-based oppressive society)


gender predicted by clothing?

. . . and the interesting one, total social value score, as measured by a phone app that keeps track of your misdeeds and accomplishments. the chinese already do this.

also proportion of children. children are inexperienced at decision making, prone to more serious injury, and not physically able to dodge as well, so an accident with children might score as more severe. predict children by the proportion of humans more than one standard deviation under the mean height for the area. a rough sketch of how these variables might get wired together follows.
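in rough python, this is the kind of weighting table the engineers would have to write down somewhere. every variable and weight below is invented for illustration - nobody's real system, least of all mine, since i am not a programmer:

Code:
# an invented illustration of the multivariate "who to hit" model
# sketched above - every weight is hypothetical and deliberately ugly
def group_cost(group, speed_kmh, emergency_traffic=False):
    cost = 0.0
    for person in group:
        value = 1.0                          # baseline: one human
        if person.get("uniform") == "fire":  # color of uniform
            value *= 10 if emergency_traffic else 2
        if person.get("child"):              # children score as more severe
            value *= 3
        value += person.get("social_score", 0) / 100  # phone-app score
        cost += value
    return cost * (speed_kmh / 50) ** 2      # injury risk scales with speed

# the truck then picks the cheapest group to hit:
groups = [[{"child": True}], [{}]]
print(min(range(len(groups)), key=lambda i: group_cost(groups[i], 60)))
# prints 1 - it hits the lone adult rather than the child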

this is indeed getting pretty dark

but very possible. i can imagine elon musk's engineers thinking this over right now.

and maybe code elon himself by facial recognition. any group with elon in it gets a free pass. or maybe you might purchase a pass.
Title: Re: utilitarianism
Post by: Davin on May 01, 2020, 05:03:37 PM
I've got to say, there is a lot of bad talk here and a lot of misunderstanding going on about programming and AI in particular. To be clear, the Asmo has it right. And billy rubin has it embarrassingly wrong. Like almost all of it. It's very clear that billy rubin knows almost nothing about programming.

I would attempt to explain to billy rubin, but they do not accept emergent properties, which is something modern software design attempts to control, but definitely takes into consideration when creating software. Also, play a fucking video game, those are full of emergent properties.

billy rubin says that the programs are in the hands of the developers; let's not get into the problems that statement creates when trying to consider how many hundreds of thousands of developers that puts responsibility on just yet, and focus on the idea that programs do what the developers intend. Any developer that has five minutes of development experience would instantly laugh at that statement. But let's get into an example. I really like the game Super Metroid (1994); I beat that game easily over a thousand times. Part of the fun of that game is doing things the developers did not intend to allow. But there are emergent properties in video games, specifically designed for, and Super Metroid is jam packed with them. The developers did not have to program specifically for each and every choice a player can make, that would be super fucking stupid and extremely insane. What they develop is how the system reacts to what is going on in the game. Games and enemy AI have gotten more complex since then (sometimes to silly effect). So the point is, the developers are not in complete control of their software. Anyone who has played a video game and any developer with more than five minutes of programming experience can attest to that.

Now let's get into laying the entire responsibility on "the intent of the developers." Which ones are you talking about? Because modern software development is built on the work of thousands of developers over the decades. The ones that built the programming language, the ones that built the language that inspired it and was used to create its first compiler... and let's not forget the libraries that the developers use, which come from multiple companies, each with their own team of developers building tools and libraries based on the work of thousands of developers before them. That's a lot of people with their own intents, and a lot of opportunity for things to be introduced that were not intended.

And then there's the problem of noise. Anyone who has developed for very delicate machines knows that there is noise involved and it needs to be dealt with. If you think that computers are perfect creations that are separate from the chaos of the universe, you're seeing the illusion that the engineers and developers have created to handle the noise. Sometimes there's a tiny surge and a 0 becomes a 1. A lot of times when transferring data, garbage gets introduced and needs to be verified and sorted out (check out internet packet protocols). We've gotten good at handling these things, and making sure the end users never see them, but it's all still there, under the surface. Computers are not perfect logic machines; they are still subject to the laws of physics, which do interfere with their processing.
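A concrete toy of that, if it helps (the CRC trick is genuinely what packet protocols do; the framing and names are just mine):

Code:
# toy sketch: catching a flipped bit with a checksum, the same idea
# internet packet protocols use at a much larger scale
import zlib

def make_frame(payload: bytes) -> bytes:
    # sender appends a CRC32 checksum of the payload
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive_frame(frame: bytes):
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    # any flipped bit changes the CRC, so corruption is detected
    return payload if zlib.crc32(payload) == crc else None

frame = bytearray(make_frame(b"hello"))
frame[1] ^= 0x04                            # a tiny surge flips one bit
assert receive_frame(bytes(frame)) is None  # garbage caught; resend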

To wrap things up before I keep on ranting:

And all of that is before we get into the added complexity of AI. If we can't agree on the above realities of software development, then trying to move on to AI would be a fool's errand.

Edit: I'll also add that many tools and libraries are open source, which means they could be created, improved, fixed, and maintained by any number of developers from one to thousands.
Title: Re: utilitarianism
Post by: Davin on May 01, 2020, 05:36:15 PM
Part two on modern software development.

Developing software in modern companies is not something simple like billy rubin is implying ("developers program the software"). In reality, there is a huge complex chain involved. For instance, in my company... let's choose one simple path for a new feature to keep things simple.

To start things off, an end user mentions to their supervisor that it would help them out if the software could do X.
Their supervisor (often the product owner) talks to one of our business analysts and they hash out what X is and how they would like it to work, and make sure that it falls in line with the business rules.
The BA talks to some of us software engineers and clarifies what X is and some general ideas of how to incorporate that into the software.
Then some software engineers get together and estimate the complexity of the task of implementing X.
It gets put into a stack of thousands of other tasks that is ranked by the BAs and some product owners.
It eventually makes its way into the small pile of things that one of us developers is going to develop in our three-week period of software iteration time.
One of us software engineers will pick up that X task.
We look into it, design the solution, and then implement the solution in a branch of code (code based on the software but held separate for a time for complex reasons not important to this story).
Then the software engineer will get a few other software engineers to look through the new code to make sure it's good and clean looking and meets our coding standards.
Then the software engineer will create tests, both manual and automatic, to ensure that the new code is working as intended.
The new code is run with the old code through thousands of other automatic tests to make sure it didn't break any other functionality.
Then it gets merged into the code base.
Then it gets passed on to be tested by the BAs and product owners.
Then after the three-week period of time is over, the software is released and the end users can now use feature X like they wanted.

The point of the story: there is a lot of responsibility to go around in the process of developing software that is not only on the developers. And this is a simple example, when there are electrical and hardware engineers involved it gets more complex and the responsibility spreads over even more people.
Title: Re: utilitarianism
Post by: Tom62 on May 01, 2020, 10:15:35 PM
Yes, Davin is absolutely right.
Title: Re: utilitarianism
Post by: xSilverPhinx on May 02, 2020, 02:25:52 AM
Quote from: Davin on May 01, 2020, 05:03:37 PM
And then there's the problem of noise. Anyone who has developed for very delicate machines knows that there is noise involved and it needs to be dealt with. If you think that computers are perfect creations that are separate from the chaos of the universe, you're seeing the illusion that the engineers and developers have created to handle the noise. Sometimes there's a tiny surge and a 0 becomes a 1. A lot of times when transferring data, garbage gets introduced and needs to be verified and sorted out (check out internet packet protocols). We've gotten good at handling these things, and making sure the end users never see them, but it's all still there, under the surface. Computers are not perfect logic machines; they are still subject to the laws of physics, which do interfere with their processing.

This intrigued me. I don't know a thing about programming computers but I wonder how quantum computing fits into this. Would the software be the same as in a non-quantum computer even if the way the computers work is different?
Title: Re: utilitarianism
Post by: Asmodean on May 04, 2020, 10:30:47 AM
Quote from: billy rubin on April 30, 2020, 09:17:09 PM
i would say that asmo the self-driving freight truck is a tool in the hands of the engineers who wrote the driving program and somehow created the n-space that would be consulted to retrieve the correct decision.
I disagree. It is no more a tool in the hands of its designers than your house is a tool in the hands of its architect. Otherwise, at which precise point, if ever, does the product become the consumer's tool, rather than the creator's?

Quotethe truck does care about saving lives, because it is only an extension of the mind of the programmer. it is the programmer who will or will not push the fat man onto the track, because the programmer makes the decision in advance, and writes the code that the AI will use to perform the action, based on the foreseen situation. unless i misunderstand AI. i am not a programmer.
I think you give the programmer too much freedom. Davin did a very good job describing the process, so I shall pick no nits here, and read "programmer" as "creator," for example "Volkswagen AG" rather than "Günther the VW coder."

Quoteso the asmo truck programmed to save the most lives is reflecting a utilitarianist mindset on the part of the programmer. so far as i know, there are no regulations on how to make these decisions, just ethics seminars on the part of the industry.
Asmo is not programmed to save lives - it is programmed to deliver cucumbers and bicycle parts safely and on time. As a part of that function, it is programmed to handle situations in which it may cause human and/or animal casualties. Then, it is programmed to minimize those. I know the distinction may be difficult to see if you measure it by result alone, but it is a matter of your approach vector, which is important when discussing applicable philosophies. Two plus one equals three; so does the square root of nine. A three is a three, but the ways of getting to it are very different indeed.

Quotewhat sorts of ethics are important to industrialists? are they the same ethics as those held by the people asmo squashes?
They want to sell their product.


EDIT: this is not too far out of my alley, so I'll try to respond. If any-one wants to do better, please do  ;)
Quote from: xSilverPhinx on May 02, 2020, 02:25:52 AM
This intrigued me. I don't know a thing about programming computers but I wonder how quantum computing fits into this. Would the software be the same as in a non-quantum computer even if the way the computers work is different?
Yes. But also, no, and it's actually a bit of a problem. Low-level software architecture required for quantum computing is different to that required for digital computing.

It may well be that you can run a program on a quantum computer, written in a common programming language, like C++ or .NET. However, what you write will have to be compiled (Translated to lower order instructions - basically, your quantum equivalent of ones and zeros, although a quantum bit is not limited to those two states) to a "language" a quantum processor can use, preferably with some degree of efficacy.

So theoretically, you can write your high-level program in whatever framework suits you. It's the compiling down to assembly level (physical gates/signal processing) that's more challenging. How do you represent the states and such like.
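A toy illustration of that last point about states, with ordinary numpy standing in for the hardware (so nothing quantum actually runs here - it is only the arithmetic):

Code:
# a qubit as two complex amplitudes; "both 0 and 1 at once" just means
# both amplitudes are non-zero until a measurement picks one outcome
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # classical-style |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

qubit = H @ ket0            # now a superposition of |0> and |1>
probs = np.abs(qubit) ** 2  # Born rule: probabilities of each outcome
print(probs)                # [0.5 0.5] - an even coin flip per readout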
Title: Re: utilitarianism
Post by: xSilverPhinx on May 04, 2020, 04:23:12 PM
Quote from: Asmodean on May 04, 2020, 10:30:47 AM
Quote from: xSilverPhinx on May 02, 2020, 02:25:52 AM
This intrigued me. I don't know a thing about programming computers but I wonder how quantum computing fits into this. Would the software be the same as in a non-quantum computer even if the way the computers work is different?
Yes. But also, no, and it's actually a bit of a problem. Low-level software architecture required for quantum computing is different to that required for digital computing.

It may well be that you can run a program on a quantum computer, written in a common programming language, like C++ or .NET. However, what you write will have to be compiled (Translated to lower order instructions - basically, your quantum equivalent of ones and zeros, although a quantum bit is not limited to those two states) to a "language" a quantum processor can use, preferably with some degree of efficacy.

So theoretically, you can write your high-level program in whatever framework suits you. It's the compiling down to assembly level (physical gates/signal processing) that's more challenging. How do you represent the states and such like.

Heh, I like that answer: yes but also no.  :P

"Translated to lower order instructions - basically, your quantum equivalent of ones and zeros, although a quantum bit is not limited to those two states"

Yes, this is what I was wondering, about the 1s and 0s. I could be wrong (and most likely am), but a quantum computer can also be both 1 and 0 at the same time? If that's true, it would require some sort of software-level workaround, wouldn't it? :notsure:
Title: Re: utilitarianism
Post by: billy rubin on May 04, 2020, 09:38:16 PM
Quote from: Asmodean on May 04, 2020, 10:30:47 AM

I think you give the programmer too much freedom. Davin did a very good job describing the process, so I shall pick no nits here, and read "programmer" as "creator," for example "Volkswagen AG" rather than "Günther the VW coder."

maybe so, but the minutiae of a developer's workday don't address the relevant points about how or whether to construct an ethical model for a self-driving vehicle. whatever the process, in the end a product is delivered that reflects the decisions of the people planning it. they may choose one strategy, which reflects one set of values, or they may choose another strategy that reflects something else. or they may choose no strategy, in which case not to decide is to have decided, as i mentioned earlier.

Quote from: asmo
Quoteso the asmo truck programmed to save the most lives is reflecting a utilitarianist mindset on the part of the programmer. so far as i know, there are no regulations on how to make these decisions, just ethics seminars on the part of the industry.
Asmo is not programmed to save lives - it is programmed to deliver cucumbers and bicycle parts safely and on time. As a part of that function, it is programmed to handle situations in which it may cause human and/or animal casualties. Then, it is programmed to minimize those. I know the distinction may be difficult to see if you measure it by result alone, but it is a matter of your approach vector, which is important when discussing applicable philosophies. Two plus one equals three; so does the square root of nine. A three is a three, but the ways of getting to it are very different indeed.

so what do you think the approach philosophies are, how do they differ, and what do you see in the future? after all, the future is here.

Quote from: asmo
Quotewhat sorts of ethics are important to industrialists? are they the same ethics as those held by the people asmo squashes?
They want to sell their product.

they won't if they violate local ethics with it:

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

in 2018 a young woman in arizona was killed by a self driving uber car. she was jaywalking with a bicycle, and the human in the car had given up control to the computer.

long story short, it was the car's fault:

QuoteThe recorded telemetry showed the system had detected Herzberg six seconds before the crash, and classified her first as an unknown object, then as a vehicle, and finally as a bicycle, each of which had a different predicted path according to the autonomy logic. 1.3 seconds prior to the impact, the system determined that emergency braking was required, which is normally performed by the vehicle operator. However, the system was not designed to alert the operator, and did not make an emergency stop on its own accord, as "emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior", according to NTSB.[11]

the death wasn't the result of an ethical decision, it was just ordinary stupidity on the part of the programmers, who designed the program to recognize pedestrians, bicyclists, and vehicles, but who never considered that sometimes humans push their bicycles instead of riding them. the result of it all was that uber ceased testing self-driving vehicles in several cities, was banned from doing so in arizona, and had to settle a wrongful-death lawsuit with herzberg's family. so the industrialists who wanted to sell their product got a black eye, at the time. i don't know what has happened since.

the point is that if the industrialists want to sell the cars, and the cars kill too many people because of failures, systems or ethical, then the industrialists will not sell too many more cars. hence the need for a decision framework that will make enough acceptable decisions to keep the industrialists in business.

watching the behavior of rich industrialists over history, i think i can generally predict the focus.

Title: Re: utilitarianism
Post by: Davin on May 04, 2020, 10:09:36 PM
Quote from: xSilverPhinx on May 04, 2020, 04:23:12 PM
Quote from: Asmodean on May 04, 2020, 10:30:47 AM
Quote from: xSilverPhinx on May 02, 2020, 02:25:52 AM
This intrigued me. I don't know a thing about programming computers but I wonder how quantum computing fits into this. Would the software be the same as in a non-quantum computer even if the way the computers work is different?
Yes. But also, no, and it's actually a bit of a problem. Low-level software architecture required for quantum computing is different to that required for digital computing.

It may well be that you can run a program on a quantum computer, written in a common programming language, like C++ or .NET. However, what you write will have to be compiled (Translated to lower order instructions - basically, your quantum equivalent of ones and zeros, although a quantum bit is not limited to those two states) to a "language" a quantum processor can use, preferably with some degree of efficacy.

So theoretically, you can write your high-level program in whatever framework suits you. It's the compiling down to assembly level (physical gates/signal processing) that's more challenging. How do you represent the states and such like.

Heh, I like that answer: yes but also no.  :P

"Translated to lower order instructions - basically, your quantum equivalent of ones and zeros, although a quantum bit is not limited to those two states"

Yes, this is what I was wondering, about the 1s and 0s. I could be wrong (and most likely am), but a quantum computer can also be both 1 and 0 at the same time? If that's true, it would require some sort of software-level workaround, wouldn't it? :notsure:
Like the Asmo said, they'd have to substitute the lower level machine code with the language. Any parts of a piece of software would have to include an interface library. Possibly a new boolean variable type, as opposed to trying to stuff it into the existing nullable boolean type, because I think we'd still want the functionality of that.

But the major bulk of a program wouldn't need to have any consideration for the quantum states. Much like as a developer I don't need to worry about packet loss when streaming or transferring a file, because those libraries handle that for me.
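Purely as a hypothetical sketch of what that new variable type might look like from the application developer's side (none of this is a real library, and the collapse here is faked with a plain random number):

Code:
import random

# hypothetical "qbool": holds a probability until first read, then
# collapses to a plain boolean - unlike a nullable bool, which is just
# True/False/unknown with no collapse behavior
class QBool:
    def __init__(self, p_true: float):
        self.p_true = p_true  # chance the first read comes up True
        self.value = None     # unmeasured until first read

    def measure(self) -> bool:
        if self.value is None:  # first read collapses the state
            self.value = random.random() < self.p_true
        return self.value

flag = QBool(0.5)
print(flag.measure(), flag.measure())  # second read matches the first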
Title: Re: utilitarianism
Post by: Davin on May 04, 2020, 10:21:30 PM
Quote from: billy rubin on May 04, 2020, 09:38:16 PM
Quote from: Asmodean on May 04, 2020, 10:30:47 AM

I think you give the programmer too much freedom. Davin did a very good job describing the process, so I shall pick no nits here, and read "programmer" as "creator," for example "Volkswagen AG" rather than "Günther the VW coder."

maybe so, but the minutiae of a developer's workday don't address the relevant points about how or whether to construct an ethical model for a self-driving vehicle. whatever the process, in the end a product is delivered that reflects the decisions of the people planning it. they may choose one strategy, which reflects one set of values, or they may choose another strategy that reflects something else. or they may choose no strategy, in which case not to decide is to have decided, as i mentioned earlier.
I don't think that's true at all. Also, not a very good deflection from what I said; straw men seem to be your go-to. I brought it up because a lot of what you were saying doesn't make sense in the context of real-life programming. You can, if you want, proceed down an entirely abstract discussion, but at that point it doesn't matter whether it's a self-driving car or a person driving a car, because you're simply talking about how to decide between certain options.

Quote from: billy rubin
Quote
The recorded telemetry showed the system had detected Herzberg six seconds before the crash, and classified her first as an unknown object, then as a vehicle, and finally as a bicycle, each of which had a different predicted path according to the autonomy logic. 1.3 seconds prior to the impact, the system determined that emergency braking was required, which is normally performed by the vehicle operator. However, the system was not designed to alert the operator, and did not make an emergency stop on its own accord, as "emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior", according to NTSB.[11]
the death wasn't the result of an ethical decision, it was just ordinary stupidity on the part of the programmers, who designed the program to recognize pedestrians, bicyclists, and vehicles, but who never considered that sometimes humans push their bicycles instead of riding them. the result of it all was that uber ceased testing self-driving vehicles in several cities, was banned from doing so in arizona, and had to settle a wrongful-death lawsuit with herzberg's family. so the industrialists who wanted to sell their product got a black eye, at the time. i don't know what has happened since.
This is not an accurate summation of what happened. It looks like you're trying to bend the facts to meet your narrative.

The detection AI messed up, but there are a lot of reasons for why that might have happened, and you've decided to claim one way without any supporting evidence. That's not very honest.
Title: Re: utilitarianism
Post by: xSilverPhinx on May 05, 2020, 02:09:21 AM
Quote from: Davin on May 04, 2020, 10:09:36 PM
Quote from: xSilverPhinx on May 04, 2020, 04:23:12 PM
Quote from: Asmodean on May 04, 2020, 10:30:47 AM
Quote from: xSilverPhinx on May 02, 2020, 02:25:52 AM
This intrigued me. I don't know a thing about programming computers but I wonder how quantum computing fits into this. Would the software be the same as in a non-quantum computer even if the way the computers work is different?
Yes. But also, no, and it's actually a bit of a problem. Low-level software architecture required for quantum computing is different to that required for digital computing.

It may well be that you can run a program on a quantum computer, written in a common programming language, like C++ or .NET. However, what you write will have to be compiled (Translated to lower order instructions - basically, your quantum equivalent of ones and zeros, although a quantum bit is not limited to those two states) to a "language" a quantum processor can use, preferably with some degree of efficacy.

So theoretically, you can write your high-level program in whatever framework suits you. It's the compiling down to assembly level (physical gates/signal processing) that's more challenging. How do you represent the states and such like.

Heh, I like that answer: yes but also no.  :P

"Translated to lower order instructions - basically, your quantum equivalent of ones and zeros, although a quantum bit is not limited to those two states"

Yes, this is what I was wondering, about the 1s and 0s. I could be wrong (and most likely am), but a quantum computer can also be both 1 and 0 at the same time? If that's true, it would require some sort of software-level workaround, wouldn't it? :notsure:
Like the Asmo said, they'd have to substitute the lower level machine code with the language. Any parts of a piece of software would have to include an interface library. Possibly a new boolean variable type, as opposed to trying to stuff it into the existing nullable boolean type, because I think we'd still want the functionality of that.

But the major bulk of a program wouldn't need to have any consideration for the quantum states. Much like as a developer I don't need to worry about packet loss when streaming or transferring a file, because those libraries handle that for me.

Ok, I think I understand it a little better now.  :thumbsup: Grazie!
Title: Re: utilitarianism
Post by: Asmodean on May 05, 2020, 11:49:24 AM
Quote from: billy rubin on May 04, 2020, 09:38:16 PM
maybe so, but the minutiae of a developer's workday don't address the relevant points about how or whether to construct an ethical model for a self-driving vehicle. whatever the process, in the end a product is delivered that reflects the decisions of the people planning it. they may choose one strategy, which reflects one set of values, or they may choose another strategy that reflects something else. or they may choose no strategy, in which case not to decide is to have decided, as i mentioned earlier.
As I am not the one pushing the developer angle, I'll happily drop it. My point is this; to overlook the consumer in this is a disservice to the "larger picture." Consumers have a lot of power when it comes to how a product performs. In Capitalist societies, those products are all made for a market, most of them - for a competitive one. You can try using legislation to push one agenda or another. We are also overlooking state nannying here. Ought there be a law? Probably. If and when it comes, however, I'd rather not have it be informed by a top-down, prescriptive philosophical system. Bottom-up reacting-to-circumstances works for me.

Quote
so what do you think the approach philosophies are, how do they differ, and what do you see in the future? after all, the future is here.
here are some applicable examples at their extremes;

Fatalism: if it happens - it happens. Follow the GPS. Period.
Nihilism: Protect cargo before people.
Capitalism: Do whatever sells your product in any given locale.
Solipsism: The truck and the people it is to run over may not be real.
Consequentialism (herein, Utilitarianism): The ends justify the means.
Asmoism: Seek to deliver your goods on time while minimizing damage.

[Must. Resist...]
...Intersectionalism: Asmo the Huwhite Self-driving Freight Truck is oppressing that Truck of Color over yonder. That racist, muh-sojinist, Nazi piece of shit! Ban it from Twitter immediately!
[.../Failure]

Quote
they won't if they violate local ethics with it:
Will they not indeed..? I think this is only partly true; otherwise, how are the sweatshop industries thriving by selling product to countries where the ethical norm is to be loudly (and often, sincerely) appalled at the working conditions of the less fortunate?
Title: Re: utilitarianism
Post by: billy rubin on May 05, 2020, 01:59:27 PM
Quote from: Asmodean on May 05, 2020, 11:49:24 AM

here are some applicable examples at their extremes;

Fatalism: if it happens - it happens. Follow the GPS. Period.
Nihilism: Protect cargo before people.
Capitalism: Do whatever sells your product in any given locale.
Solipsism: The truck and the people it is to run over may not be real.
Consequentialism (herein, Utilitarianism): The ends justify the means.
Asmoism: Seek to deliver your goods on time while minimizing damage.

[Must. Resist...]
...Intersectionalism: Asmo the Huwhite Self-driving Freight Truck is oppressing that Truck of Color over yonder. That racist, muh-sojinist, Nazi piece of shit! Ban it from Twitter immediately!
[.../Failure]


interesting summaries. some of them are obviously operating right now, others obviously persist only within certain cultures. you left out another, which is the guiding philosophy of hinduism. i don't know whether there is a word in english for it, but it consists of an acknowledgment that suffering and joy are conditions that apply to all living things, and that how one experiences life is a thing that transcends individual life spans and is part of a cosmic scoreboard of just deserts. so the orthodox hindu can look dispassionately at a starving beggar, reasoning that the beggar is paying a price for transgressions in a previous life, and can look forward to eventual elevation in a future one. in the meantime, the beggar starves. this is the current situation in india today, a nuclear power with a space program that has citizens scavenging the garbage dumps for survival.

the hindu approach might be to avoid accidents with self-driving vehicles if possible, but accept them as inevitable and not worry too much about them when they occur.


Quote
Quote
they won't if they violate local ethics with it:
Will they not indeed..? I think this is only partly true; otherwise, how are the sweatshop industries thriving by selling product to countries where the ethical norm is to be loudly (and often, sincerely) appalled at the working conditions of the less fortunate?

in america we don't pay attention to other countries. most of us don't know that other countries exist.

the key within societies like america is to oppress the citizens to the benefit of the rich and simultaneously give them a false target for their anger. that's what's being done now, for example. it's very clear what our problems are, but we have a carefully cultivated false narrative that benefits those in power at the expense of the remaining 90 percent.

this pandemic has illustrated the situation very clearly. i don't like to substitute memes for thinking, in general, but i came across this today, and it pretty neatly summarizes the current ethical crisis in my country. some hyperbole, but mostly spot on:

(https://i.ibb.co/WtpcWNX/Screenshot-20200504-221443-Chrome.jpg) (https://ibb.co/ySB8nmC)

if the pandemic disappears quickly, we will return to business as normal just as quickly. if it persists longer, there is a chance that social change will result, at a high price in lives.

we haven't got to the guillotine yet, if we ever do