utilitarianism

Started by billy rubin, April 23, 2020, 08:48:08 PM


billy rubin

#30
well, avoiding the difficult decision is a very human thing to program into an AI, and i agree the algorithm should run down a list of strategies to attempt before concluding that the unavoidable clumps of people are genuinely unavoidable.

but that sidesteps the decision of interest. sooner or later the manufacturers are going to have to decide:

- hit the car or hit the motorcycle?

- hit the bus or hit the pedestrian?

- hop the kerb into the crowd or plow into the jaywalkers?

- kill the child or kill the adult?

im curious as to whats actually being programmed in right now. heading for the ditch is always the better strategy, but what should we program in for the situations without one? what shall be the general rules?

once i drove a semi into a high speed tunnel in pennsylvania, a long one, a mile and a half, narrow, two lanes with no pull off space.

as i went in i blew a radiator hose and the tunnel disappeared in steam. when it cleared i was still going, 60mph. nowhere to stop.

then the temperature gauge began to rise. the check engine light went on. the alarms began to sound, the dash display began to flash warnings, ENGINE DAMAGE IMMINENT. SHUT DOWN SHUT DOWN.

. . . and then, in a two-lane high speed tunnel three quarters of a mile from either end, the computer turned my engine off.

as i slowed down, i thought over the situation. if i stopped the truck in the tunnel, i would block one lane. if i was then hit from behind, the wreck might block the entire tunnel. more and more cars might pile up. there might be a fire. if there were a fire, i was a long way from emergency crews and a blocked tunnel would keep it that way. many people could die. it had happened before. nasty business.

all this took about five seconds to process, and then i reached down and pushed a button the engineers had thoughtfully provided for any available human beings to consider:

SHUTDOWN OVERRIDE

the alarms still sounded, but i started the truck in motion, drove the distance to the end of the tunnel, pulled over, shut the hot motor off, and called for help. end of problem.

that override button might have saved 20 lives, but it was the last truck i drove that had one. i asked my company afterwards why the new trucks didnt come with overrides and told them my story.

they said, well, after all, how often does that happen. . .

their human algorithm had made the decision that the cost of the override circuit was too high, considering the rarity of needing one. of course, i had a different point of view.

what should we program into a self-driving truck to solve this problem?
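For what it's worth, the tunnel story boils down to a tiny sketch: a protection routine that only asks "is the engine at risk?" shuts down in the worst possible place, while adding one more situational input reproduces what the override button let the human do. All names and thresholds below are invented for illustration.

```python
# A hypothetical sketch of the tunnel problem: an engine-protection routine
# that weighs "is the engine at risk?" against "is this a safe place to stop?".
# The second input is what the SHUTDOWN OVERRIDE button supplied through a
# human. Names and thresholds are invented for illustration only.

def choose_engine_action(coolant_temp_c, location_is_safe_to_stop):
    """Return "continue", "shutdown", or "limp_to_safe_stop"."""
    DAMAGE_THRESHOLD_C = 110  # invented figure

    if coolant_temp_c < DAMAGE_THRESHOLD_C:
        return "continue"          # nothing wrong yet
    if location_is_safe_to_stop:
        return "shutdown"          # protect the engine, nobody endangered
    # Engine damage is likely, but stopping here risks lives: accept the
    # mechanical cost and keep moving -- the override case.
    return "limp_to_safe_stop"
```

The one-variable version the company shipped returns "shutdown" regardless of location; the two-variable version trades a motor for the people behind it.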




"I cannot understand the popularity of that kind of music, which is based on repetition. In a civilized society, things don't need to be said more than three times."

Asmodean

Quote from: billy rubin on April 29, 2020, 08:37:49 AM
but that sidesteps the decision of interest. sooner or later the manufacturers are going to have to decide:

- hit the car or hit the motorcycle?

- hit the bus or hit the pedestrian?

- hop the kerb into the crowd or plow into the jaywalkers?

- kill the child or kill the adult?
Do let us not forget, "will the potential customer buy my product, which might 'willingly' kill him, or that brand over there *point,* which will always try to save its occupants?"

If it's just me, that's one thing, but let's do a "for instance": what if I had a kid? Would I want to put said kid in the car in question alongside me, knowing full well that it would sacrifice him/her under certain extreme circumstances? That ought to churn through the head of every self-driving-car-buying parent, no?

This really ought to have its own thread. I may or may not see to that after work, as I actually have more to respond to here.
Quote from: Ecurb Noselrub on July 25, 2013, 08:18:52 PM
In Asmo's grey lump,
wrath and dark clouds gather force.
Luxembourg trembles.

billy rubin

sure.

dont forget that the vehicle is empty of occupants. thats the commercial business model right now.

we have those being tested over here now, including little automated delivery vehicles.

plus full size semi trucks that go coast to coast without a driver. just a tester in them right now.




Asmodean

I think automated HGVs will have an additional potential weakness in cargo safety. One could abuse its ethical algorithms (Or just its machine common sense) to force it into a stop, raid the trailer and be away long before any-one even suspects anything.

When it comes to whom to run over, and how, when the "AI" in question is not carrying passengers, I'd say lowest human cost when that can be clearly determined. When not - continue on course. (Course here may also involve emergency maneuvers - I don't necessarily mean "plough on ahead.")

Davin

Quote from: xSilverPhinx on April 28, 2020, 11:41:02 PM
Quote from: Davin on April 28, 2020, 10:56:32 PM
Quote from: xSilverPhinx on April 28, 2020, 10:49:04 PM
Quote from: Davin on April 28, 2020, 10:10:44 PM
Quote from: xSilverPhinx on April 28, 2020, 08:44:39 PM
Most people will say they would not push him. But this is odd, you may think. The outcome is exactly the same as the first part of the Trolley problem! Save 5 strangers by murdering 1 stranger. So, what's going on here?
That's because the result is only the exact same if you only consider the amount of people killed vs. saved. We instinctively understand that things are more complicated than that, even in an oversimplified thought experiment. Not many are able to explain it though. So most will say not much more than something like, "because it's different."

Yes, the amount of people killed vs. saved is the same, but the mental paths people take to reach a course of action are different. And that's the point I think this problem is trying to show.

It takes a reductionist approach in that it removes a lot of the complexity - the extra variables that would make for a "dirty" experiment. This approach has its pros and cons, of course, and I think in the cognitive sciences results such as these can rarely be generalised.
I think it's more than the mental paths it takes to get to the conclusion. For instance, one might consider taking someone entirely out of danger and putting them in a position to be harmed (to death), different (and in most cases worse), from changing the direction of a train from five people already in a position of danger to one person already in a position of danger.

Ah ok, I think I understand the point you're making now. Yes, I think you're right. There are other decisions involved. I think it has to do in part with the psychological distancing I mentioned earlier. Even empathy is involved.

It's interesting that you put it that way. Just as an addendum to my point about emotions driving decisions in a reply to billy rubin, in brain scans evaluating moral decision-making there is higher activation in prefrontal regions (such as the orbitofrontal cortex and ventromedial cortex, both more or less just behind the forehead and above the nasal cavity) which are typically not very activated in some psychopaths, for example. These two regions are both very important in these kinds of decisions and are linked to the emotional centers in the brain.
I don't think that many decisions matter without emotions. Emotions help drive us; without them, without caring about anything, there isn't any reason at all to decide one way or the other. That doesn't mean that reasoning doesn't factor into things either. We're all on some level between purely emotional reactions and well thought out choices, and we're not even the same from day to day. Most of us are fairly stable from day to day, or even decision to decision, but we are always fluctuating. Even psychopaths become emotional about many things.

Anyway, good luck with billy, your side is enlightening and interesting at least.
Always question all authorities because the authority you don't question is the most dangerous... except me, never question me.

billy rubin

Quote from: Asmodean on April 29, 2020, 02:43:39 PM
when it comes to who to run over and how when the "AI" in question is not carrying passengers, I'd say lowest human cost when that can be clearly determined. When not - continue on course (Course here may also involve emergency maneuvers - I don't necessarily mean "plough on ahead")

"lowest human cost" is utilitarianism, the greatest good for the greatest number. so killing one to save five is the guideline. or trade two for twenty.

the "plough ahead" choice is interesting. if the vehicle must strike one of two equal sized groups of people, does it use a random number table to pick which one?

AI is sophisticated enough now to make value judgements based on instantaneous data acquisition. should the random number table be replaced by a hierarchy of values? preserve children over adults, or preserve uniformed emergency personnel over those not so dressed?
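The two options in that question can be sketched side by side. The categories and weights below are invented purely for illustration; nothing here reflects any real vehicle's code.

```python
import random

# Two hypothetical tie-breaking policies for the case above: the vehicle must
# strike one of N groups and nothing else distinguishes them. The weight
# table is invented for illustration only.

VALUE_WEIGHTS = {"child": 3, "emergency_worker": 2, "adult": 1}  # invented

def pick_group_random(groups, rng=random):
    """Policy 1: the 'random number table' -- pick a group index at random."""
    return rng.choice(range(len(groups)))

def pick_group_by_value(groups):
    """Policy 2: a value hierarchy -- strike the group whose summed 'value'
    is lowest, thereby preserving the higher-valued people."""
    scores = [sum(VALUE_WEIGHTS.get(person, 1) for person in group)
              for group in groups]
    return scores.index(min(scores))
```

For instance, `pick_group_by_value([["child", "adult"], ["adult", "adult"]])` picks the second group, because a child outweighs an adult in the table. That a one-line weight dictionary encodes the entire moral question is rather the point.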



Asmodean

Quote from: billy rubin on April 29, 2020, 03:22:33 PM
"lowest human cost" is utilitarianism, the greatest good for the greatest number. so killing one to save five is the guideline. or trade two for twenty.
It's no more Utilitarianism than attending a public school is Socialism. Besides, it doesn't care about their "good," only their relative numbers. Keep in mind, less meat in the way equals less damage to Asmo the Delivery Truck as well as to the unfortunate smaller crowd.

Quote
the "plough ahead" choice is interesting. if the vehicle must strike one of two equal sized groups of people, does it use a random number table to pick which one?
To what end would one use a random number here? Continue on your present course (pre-planned maneuvers included).

Quote
AI is sophisticated enough now to make value judgements based on instantaneous data acquisition. should the random number table be replaced by a hierarchy of values? preserve children over adults, or preserve uniformed emergency personnel over those not so dressed?
Actually, the reason I put "AI" in quotation marks throughout this thing is that yes, computers are fast at logical data analysis. They are also remarkably stupid at pretty much anything else.

billy rubin

Quote from: Asmodean on April 29, 2020, 03:33:24 PM
Quote from: billy rubin on April 29, 2020, 03:22:33 PM
"lowest human cost" is utilitarianism, the greatest good for the greatest number. so killing one to save five is the guideline. or trade two for twenty.
It's no more Utilitarianism than attending a public school is Socialism. Besides, it doesn't care about their "good," only their relative numbers. Keep in mind, less meat in the way equals less damage to Asmo the Delivery Truck as well as to the unfortunate smaller crowd.

public school IS socialism. i have no problem with that. i'd like more of it.

but "relative numbers" is precisely what utilitarianism is all about. "more or less meat" segues into the trolley problem's fat man. do we push him onto the tracks to save a million people? if no, why not? if yes, then how about two people?


Quote
To what end would one use a random number here? Continue on your present course (pre-planned maneuvers including)

good point. i was wondering how to decide what to do if the vehicle has lost control and couldn't stay on the road. in that case a decision would have to be made. but if we're talking about jaywalking pedestrians, then defaulting to the original legal path is as good as any other.

Quote
Actually, the reason I put "AI" in quotation marks throughout this thing is that yes, computers are fast at logical data analysis. They are also remarkably stupid at pretty much anything else.

but they do what they're told. so we're talking about what the human beings who programmed their software decided that they would do. if the programmers decide that police, firemen, and emergency medical technicians are worth more, then they can program the vehicle to run over ordinary people until a certain utilitarian tipping point is reached.

perhaps one fireman is worth two bicyclists. and the value of a fireman goes up by say, ten times, if the vehicle has detected more than usual activity on the emergency radio channel.

in the end, some sort of algorithm is going to be put into place. if we can program an autonomous vehicle to avoid high cost accidents, then relative values are inevitably going to be assigned. even not to assign is to assign. im very interested in that discussion, because i see it right now, in my own society.

my president has been able to obtain multiple covid19 tests, in order to catch his illness early. his vice president doesn't wear a mask in hospitals, "because he is tested regularly and is known not to be infectious." athletes and celebrities have been able to obtain covid19 tests as well in my country. high-value people get the attention their value merits.

but not me. even though i drive a truck and am therefore essential, i can't get a test to find out whether ive been infected. i am not of sufficiently high value.

seems to me, if my cell phone has a chinese-style "social value" score, then that might someday be available to the software to decide whether to run me over or not.



Asmodean

#38
Quote from: billy rubin on April 29, 2020, 04:42:29 PM
public school IS socialism. i have no problem with that. i'd like more of it.

Quote from: Scandinavia, in response to a certain US senator
Nope.
It's a little nit-picky, but the distinction is not without importance. What it is, in our case of self-driving cars, is coincidentally utilitarian.

Quote
but "relative numbers" is precisely what utilitarianism is all about. "more or less meat" segues into the trolley problem's fat man. do we push him onto the tracks to save a million people? if no, why not? if yes, then how about two people?
Not at all. There is no "we" pushing a man here. There is Asmo, the self-driving freight truck, spinning wildly out of control - too wildly to stop, but at the same time not wildly enough to be incapable of making trajectory-related choices. To it, limiting potential damage is a multivariate analysis, purely mathematical in nature. Even if its builders cared about such things as saving lives, it doesn't. I do propose preserving human life to be a paramount variable in emergency calculations, thus making the problem of "which life is worth more" unavoidable, but I also propose the solution; priority one: the legal occupants of the vehicle. If none present, or already accounted for, priority two: the highest number. Here, no distinction is made whether the less-fortunate lower number is fat people or pregnant ladies or kindergarten classes on an outing.

In the special case of an unmanned vehicle, these processes would be Utilitarian, which is unsurprising and probably unavoidable, but without any adherence to Utilitarianism. What I propose here is a much simpler philosophy.
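That two-tier priority is simple enough to sketch. The data layout and field names below are assumptions for illustration, not any real system's interface.

```python
# A sketch of the two-tier priority described above: protect the vehicle's
# legal occupants first; failing that, minimise the raw number of other
# people harmed; if still tied, stay on the planned course.

def choose_trajectory(options):
    """options: list of dicts, e.g.
    {"name": "stay", "occupants_at_risk": 0, "others_at_risk": 2, "on_course": True}
    Returns the name of the trajectory the policy selects."""
    # Priority one: the legal occupants of the vehicle.
    fewest = min(o["occupants_at_risk"] for o in options)
    candidates = [o for o in options if o["occupants_at_risk"] == fewest]
    # Priority two: the highest number preserved (fewest others harmed).
    fewest = min(o["others_at_risk"] for o in candidates)
    candidates = [o for o in candidates if o["others_at_risk"] == fewest]
    # Still tied: continue on the present course if it remains an option.
    for o in candidates:
        if o["on_course"]:
            return o["name"]
    return candidates[0]["name"]
```

Note that no candidate is ever inspected for *who* the people are - fat men, pregnant ladies and kindergarten classes each count as 1.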

Quote
but they do what they're told. so we're talking about what the human beings who programmed their software decided that they would do. if the programmers decide that police, firemen, and emergency medical technicians are worth more, then they can program the vehicle to run over ordinary people until a certain utilitarian tipping point is reached.
They can, and probably will. I'm doing something similar here - proposing how I would approach programming Asmo, the self-driving freight truck.

Quote
perhaps one fireman is worth two bicyclists.
Respectful cyclists, or the kind who don't ever use hand-signals, hold up traffic and run red lights? Because in the latter case, a fireman, who is not also one of them, is worth a hell of a lot more.

There is a serious point to be made here, and a good reason to avoid such assignment of value entirely; how does Asmo, the self-driving freight truck, know whether or not the fireman is also a cyclist, or a cyclist is also a fireman?

Quote
in the end, some sort of algorithm is going to be put into place. if we can program an autonomous vehicle to avoid high cost accidents, then relative values are inevitably going to be assigned. even not to assign is to assign. im very interested in that discussion, because i see it right now, in my own society.
As you have probably gathered, I too find this stuff way too interesting. I agree, them algorithms are coming. What's more, people will do their utmost to hack, disable and modify them in their own vehicle fleets. When it well and truly hits the open road, it will be a bit of a shitstorm, I think.

Quote
but not me. even though i drive a truck and am therefore essential, i can't get a test to find out whether ive been infected. i am not of sufficiently high value.
I believe the excuse my country was using is that either you are sick, in which case you may get tested, or you are not, in which case you won't. I'm not certain whether they grouped potential candidates to be tested by how essential their jobs are, or for that matter, whether they work alone or not, but my point is this; don't take it too personally - when the Zs are a-coming, we are "all" just numbers in a spreadsheet.

billy rubin

im looking for common terms, asmo

here is jeremy benthams definition of utility, which has been called utilitarianism

By the principle of utility is meant that principle which approves or disapproves of every action whatsoever, according to the tendency which it appears to have to augment or diminish the happiness of the party whose interest is in question: or, what is the same thing in other words, to promote or to oppose that happiness.

i would suggest that death would be an unhappy state.

is ^^^this how you are using the term?




billy rubin

silver, looking over roberts review of haidts book, it appears to me that what haidt calls "intuition" is interchangeable with what i might call "evolved responses." specifically, i agree with his assertion that much of human morality is post hoc rationalization of intuited decisions, which i would call evolved behaviour patterns. im going to have to order his book.

im not clear on whether haidt proposes a mechanism for the creation of intuited behaviour. i believe it is selection, and that the selective pressure is quantified by an overall heritable tendency to pass the "intuited" behavior on to subsequent generations.

still reading but my legally mandated half hour break is up.



Asmodean

Quote from: billy rubin on April 30, 2020, 03:41:00 PM
is ^^^this how you are using the term?
More or less. I'm also fine with expanding the scope to encompass related political systems and philosophies.

So if something is utilitarian, it's not necessarily utilitarianist. The first describes the applicable philosophy - the second prescribes it.

billy rubin

Quote from: Asmodean on April 30, 2020, 07:30:47 AM
Quote
but "relative numbers" is precisely what utilitarianism is all about. "more or less meat" segues into the trolley problem's fat man. do we push him onto the tracks to save a million people? if no, why not? if yes, then how about two people?
Not at all. There is no "we" pushing a man here. There is Asmo, the self-driving freight truck, spinning wildly out of control - too wildly to stop, but at the same time not wildly enough to be incapable of making trajectory-related choices. To it, limiting potential damage is a multivariate analysis, purely mathematical in nature. Even if its builders cared about such things as saving lives, it doesn't. I do propose preserving human life to be a paramount variable in emergency calculations, thus making the problem of "which life is worth more" unavoidable, but I also propose the solution; priority one: the legal occupants of the vehicle. If none present, or already accounted for, priority two: the highest number. Here, no distinction is made whether the less-fortunate lower number is fat people or pregnant ladies or kindergarten classes on an outing.


i would say that asmo the self-driving freight truck is a tool in the hands of the engineers who wrote the driving program and somehow created the n-space that would be consulted to retrieve the correct decision. sure, the AI makes decisions case by case, but the choices available to it are only the ones foreseen by the programmers. if they left a variable out of the multivariate model, it would not be considered in calculating the response. as in this tesla fatality in japan:

Quote
The driver of the Tesla had dozed off shortly before the crash, and when another vehicle ahead of him changed lanes to avoid the group, the Model X accelerated and ran into them, according to the complaint filed Tuesday in federal court in San Jose, California. Tesla is based in nearby Palo Alto.


The accident was the result of flaws in Tesla's autopilot system, including inadequate monitoring of whether the driver is alert and a lack of safeguards against unforeseen traffic situations, according to the complaint. Tesla's autopilot system has been involved in other fatal accidents, such as a 2018 crash in Mountain View, California, when a Model X driven on autopilot slammed into a concrete barrier.

the truck does care about saving lives, because it is only an extension of the mind of the programmer. it is the programmer who will or will not push the fat man onto the track, because the programmer makes the decision in advance, and writes the code that the AI will use to perform the action, based on the foreseen situation. unless i misunderstand AI. i am not a programmer.

so the asmo truck programmed to save the most lives is reflecting a utilitarian mindset on the part of the programmer. so far as i know, there are no regulations on how to make these decisions, just ethics seminars on the part of the industry.

what sorts of ethics are important to industrialists? are they the same ethics as those held by the people asmo squashes?



billy rubin

#43
well shoot

on the way in to work i started wondering about the multivariate model for which group of humans to kill, if a self-driving truck had to decide between killing one of three groups of people.

what variables would go in?

number of people

likelihood of injury (could be predicted by vehicle speed at impact)

color of uniform? firemen over policemen over postmen?

race of victim (in a race based oppressive society)


gender predicted by clothing?

. . . and the interesting one, total social value score, as measured by a phone app that keeps track of your misdeeds and accomplishments. the chinese already do this.

also proportion of children. children are inexperienced at decision making, prone to more serious injury, and not physically able to dodge as well, so an accident with children might score as more severe. predict children by the proportion of humans more than one standard deviation under mean height for the area.
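The variable list above could be wired into a scoring function like the following. To be clear, this is a deliberately uncomfortable illustration of how such a model might be assembled, not a proposal; every weight is invented.

```python
# The variables above, assembled into a hypothetical severity score.
# Higher score = the algorithm treats hitting this group as worse.
# Every weight here is invented; that is rather the point.

def severity_score(group, impact_speed_kmh):
    """group: list of dicts per person, e.g.
    {"is_child": False, "social_score": 0.5}  # social_score in 0..1
    """
    # likelihood of injury, crudely proxied by impact speed
    injury_factor = min(impact_speed_kmh / 50.0, 2.0)
    total = 0.0
    for person in group:
        value = 1.0
        if person.get("is_child"):
            value *= 1.5  # children scored as more severe, as argued above
        value *= 0.5 + person.get("social_score", 0.5)  # the dark part
        total += value
    return total * injury_factor
```

A plain adult at 50 km/h scores 1.0; make them a child and the score rises to 1.5; raise their "social value" app rating and the truck swerves for someone else instead.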

this is indeed getting pretty dark

but very possible. i can imagine elon musks engineers thinking this over right now.

and maybe code elon himself by facial recognition. any group with elon in it gets a free pass. or maybe you might purchase a pass.



Davin

#44
I've got to say, there is a lot of bad talk here and a lot of misunderstanding going on about programming and AI in particular. To be clear, the Asmo has it right. And billy rubin has it embarrassingly wrong. Like almost all of it. It's very clear that billy rubin knows almost nothing about programming.

I would attempt to explain to billy rubin, but they do not accept emergent properties, which is something modern software design attempts to control, and often deliberately creates. Also, play a fucking video game; those are full of emergent properties.

billy rubin says that the programs are in the hands of the developers. Let's not get into the problems that statement creates when trying to consider how many hundreds of thousands of developers that puts responsibility on just yet, and focus on the idea that programs do what the developers intend. Any developer with five minutes of development experience would instantly laugh at that statement. But let's get into an example. I really like the game Super Metroid (1994); I've beaten that game easily over a thousand times. Part of the fun of that game is doing things the developers did not intend to allow. But there are emergent properties in video games, specifically designed for, and Super Metroid is jam packed with them. The developers did not have to program specifically for each and every choice a player can make; that would be super fucking stupid and extremely insane. What they develop is how the system reacts to what is going on in the game. Games and enemy AI have gotten more complex since then (sometimes to silly effect). So the point is, the developers are not in complete control of their software. Anyone who has played a video game, and any developer with more than five minutes of programming experience, can attest to that.

Now let's get into laying the entire responsibility on "the intent of the developers." Which ones are you talking about? Because modern software development is built on the work of thousands of developers over the decades. The ones that built the programming language, the ones that built the language that that language was inspired by and used to create the first compiler of that language... and let's not forget the libraries that the developers use that come from multiple companies each with their own team of developers developing the tools and libraries based off of the work of thousands of developers before them. That's a lot of people with their own intents and possibility for things to be introduced that were not intended.

And then there's the problem of noise. Anyone who has developed for very delicate machines knows that there is noise involved that needs to be dealt with. If you think that computers are perfect creations separate from the chaos of the universe, you're seeing the illusion that the engineers and developers have created to handle the noise. Sometimes there's a tiny surge and a 0 becomes a 1. A lot of times when transferring data, garbage gets introduced and needs to be verified and sorted out (check out internet packet protocols). We've gotten good at handling these things, and at making sure the end users never see them, but it's still there, under the surface. Computers are not perfect logic machines; they are still subject to the laws of physics, which do interfere with their processing.
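To make the noise point concrete: transfer protocols ship a checksum alongside each payload and verify it on arrival, which is how a flipped bit gets caught (TCP does this per segment). A minimal sketch using CRC32:

```python
import zlib

# Minimal sketch of checksum verification: attach a CRC32 to outgoing bytes,
# recompute on receipt, and compare. A single flipped bit changes the CRC.

def send(payload):
    """Attach a CRC32 checksum to the outgoing bytes."""
    return payload, zlib.crc32(payload)

def receive(payload, checksum):
    """True if the payload arrived intact, False if noise changed it."""
    return zlib.crc32(payload) == checksum

payload, crc = send(b"SHUTDOWN OVERRIDE")
assert receive(payload, crc)                          # arrived intact
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]  # one bit flipped en route
assert not receive(corrupted, crc)                    # corruption detected
```

Real protocols then retransmit or correct; the point here is only that the "perfect logic machine" is an illusion maintained by machinery like this.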

To wrap things up before I keep on ranting:

  • Things happen outside of the intention of developers
  • The developers do not program for each and every single thing (fuck that would make programming terrible and take a million years)
  • Developers use software tools and libraries developed and engineered by thousands of other developers from a bunch of other companies (who the fuck thinks every program is created from scratch, let alone that a program must be made by the same developers from start to finish?)
  • There is natural noise that interferes with computer processing

And all of that is before we get into the added complexity of AI. If we can't agree on the above realities of software development, then trying to move on to AI would be a fool's errand.

Edit: I'll also add that many tools and libraries are open source which means they could be created, improved, fixed, and maintained by any number of developers from one to thousands.