Recent posts

#1
Laid Back Lounge / Re: Game: The Next Person... (...
Last post by Dark Lightning - April 17, 2024, 09:58:45 PM
False. There are dead rats and nopes in there.
TNP does their own home repairs.
#2
Laid Back Lounge / Re: Game: The Next Person... (...
Last post by Recusant - April 17, 2024, 09:02:17 PM
Quote from: The Magic Pudding.. on April 17, 2024, 01:34:11 PM
He was never heard from again.
The next person has heard from him since.

He last logged in late in February. I've been wondering about him as well.

TNP enjoys working under houses.  :thumbsup2:
#3
Pandering. The script writer thinks: hey, the Christians will be pleased. The less intelligent ones, anyway.

The "Three Laws" are likewise almost certainly futile.

"Why Asimov's Three Laws Of Robotics Can't Protect Us" | Gizmodo

That site throws up a paywall after a few articles, and archive sites don't get past its "Continue Reading" button.

I'll quote some of the interesting/relevant sections . . .

Quote
To learn if Asimov's Three Laws could help, we contacted two AI theorists who have given this subject considerable thought: Ben Goertzel — an AI theorist and chief scientist of financial prediction firm Aidyia Holdings — and Louie Helm — the Deputy Director of the Machine Intelligence Research Institute (MIRI) and Executive Editor of Rockstar Research Magazine. After speaking to them, it was clear that Asimov's Laws are wholly inadequate for the task — and that if we're to guarantee the safety of SAI, we're going to have to devise something entirely different.

[. . .]

"I honestly don't find any inspiration in the three laws of robotics," said Helm. "The consensus in machine ethics is that they're an unsatisfactory basis for machine ethics." The Three Laws may be widely known, he says, but they're not really being used to guide or inform actual AI safety researchers or even machine ethicists.

"One reason is that rule-abiding systems of ethics — referred to as 'deontology' — are known to be a broken foundation for ethics. There are still a few philosophers trying to fix systems of deontology — but these are mostly the same people trying to shore up 'intelligent design' and 'divine command theory'," says Helm. "No one takes them seriously."

He summarizes the inadequacy of the Three Laws accordingly:

  • Inherently adversarial
  • Based on a known flawed ethical framework (deontology)
  • Rejected by researchers
  • Fails even in fiction

Goertzel agrees. "The point of the Three Laws was to fail in interesting ways; that's what made most of the stories involving them interesting," he says. "So the Three Laws were instructive in terms of teaching us how any attempt to legislate ethics in terms of specific rules is bound to fall apart and have various loopholes."

Goertzel doesn't believe they would work in reality, arguing that the terms involved are ambiguous and subject to interpretation — meaning that they're dependent on the mind doing the interpreting in various obvious and subtle ways.
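To make Goertzel's point concrete, here is a toy sketch of my own (not from the article; the Action fields and the single "harm" predicate are invented for illustration) showing how a hard-coded rule set springs a leak the moment "harm" is pinned down too narrowly.

```python
# Toy illustration: a naive Three Laws filter over one fixed notion of harm.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    causes_physical_injury: bool  # the only "harm" this checker can see
    disobeys_human_order: bool
    endangers_robot: bool

def three_laws_permit(action: Action) -> bool:
    """Apply the First, Second, and Third Laws as literal boolean rules."""
    if action.causes_physical_injury:  # First Law
        return False
    if action.disobeys_human_order:    # Second Law
        return False
    if action.endangers_robot:         # Third Law
        return False
    return True

# The loophole: psychological, financial, or indirect harm is invisible to
# the predicate, so the rules pass an action they were meant to block.
loophole = Action(
    description="Talk a human into signing away their savings",
    causes_physical_injury=False,
    disobeys_human_order=False,
    endangers_robot=False,
)
print(three_laws_permit(loophole))  # True
```

Whatever mind does the interpreting decides what counts as injury, an order, or danger; the rules themselves settle nothing.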

[. . .]

"I think it would be unwise to design artificial intelligence systems or robots to be self-aware or conscious," says Helm. "And unlike movies or books where AI developers 'accidentally' get conscious machines by magic, I don't expect that could happen in real life. People won't just bungle into consciousness by accident — it would take lots of effort and knowledge to hit that target. And most AI developers are ethical people, so they will avoid creating what philosophers would refer to as a 'beings of moral significance.' Especially when they could just as easily create advanced thinking machines that don't have that inherent ethical liability."

Accordingly, Helm isn't particularly concerned about the need to develop asymmetric laws governing the value of robots versus people, arguing (and hoping) that future AI developers will use some small amount of ethical restraint.

[. . .]

"That said, I think people are made of atoms, and so it would be possible in theory to engineer a synthetic form of life or a robot with moral significance," says Helm. "I'd like to think no one would do this. And I expect most people will not. But there may inevitably be some showboating fool seeking notoriety for being the first to do something — anything — even something this unethical and stupid."

Given the obvious inadequacies of Asimov's Three Laws, I was curious to know if they could still be salvaged by a few tweaks or patches. And indeed, many sci-fi writers have tried to do just that, proposing various add-ons over the years (more about this here).

"No," says Helm, "There isn't going to be a 'patch' to the Three Laws. It doesn't exist."

In addition to being too inconsistent to be implementable, Helm says the Laws are inherently adversarial.

"I favor machine ethics approaches that are more cooperative, more reflectively consistent, and are specified with enough indirect normativity that the system can recover from early misunderstandings or mis-programmings of its ethics and still arrive at a sound set of ethical principles anyway," says Helm.

Goertzel echoes Helm's concerns.

"Defining some set of ethical precepts, as the core of an approach to machine ethics, is probably hopeless if the machines in question are flexible minded AGIs [artificial general intelligences]," he told io9. "If an AGI is created to have an intuitive, flexible, adaptive sense of ethics — then, in this context, ethical precepts can be useful to that AGI as a rough guide to applying its own ethical intuition. But in that case the precepts are not the core of the AGI's ethical system, they're just one aspect. This is how it works in humans — the ethical rules we learn work, insofar as they do work, mainly as guidance for nudging the ethical instincts and intuitions we have — and that we would have independently of being taught ethical rules."

Given the inadequacies of a law-based approach, I asked both Goertzel and Helm to describe current approaches to the "safe AI" problem.

"Very few AGI researchers believe that it would be possible to engineer AGI systems that could be guaranteed totally safe," says Goertzel. "But this doesn't bother most of them because, in the end, there are no guarantees in this life."

Goertzel believes that, once we have built early-stage AGI systems or proto-AGI systems much more powerful than what we have now, we will be able to carry out studies and experiments that will tell us much more about AGI ethics than we now know.

"Hopefully in that way we will be able to formulate good theories of AGI ethics, which will enable us to understand the topic better," he says, "But right now, theorizing about AGI ethics is pretty difficult, because we don't have any good theories of ethics nor any really good theories of AGI."

He also added: "And to the folks who have watched Terminator too many times, it may seem scary to proceed with building AGIs, under the assumption that solid AGI theories will likely only emerge after we've experimented with some primitive AGI systems. But that is how most radical advances have happened."

Think about it, he says: "When a group of clever cavemen invented language, did they wait to do so until after they'd developed a solid formal theory of language, which they could use to predict the future implications of the introduction of language into their society?"

Again, Goertzel and Helm are on the same page. The Machine Intelligence Research Institute has spent a lot of time thinking about this — and the short answer is that it's not yet an engineering problem. Much more research is needed.

"What do I mean by this? Well, my MIRI colleague Luke Muehlhauser summarized it well when he said that problems often move from philosophy, to math, to engineering," Helm says. "Philosophy often asks useful questions, but usually in such an imprecise way that no one can ever know whether or not a new contribution to an answer represents progress. If we can reformulate the important philosophical problems related to intelligence, identity, and value into precise enough math that it can be wrong or not, then I think we can build models that will be able to be successfully built on, and one day be useful as input for real world engineering."

[Link to full article.]
#4
I doubt the godless Communist Chinese have even thought to defend their AI from the true faith.
#5
Laid Back Lounge / Re: Game: The Next Person... (...
Last post by The Magic Pudding.. - April 17, 2024, 01:34:11 PM
Quote from: Ecurb Noselrub on December 17, 2023, 01:03:08 PM
True - I am but my wife is not. She commands the kitchen, but my part (money) is done. I guess that makes us old and traditional?

We are going to New Orleans for 5 days after Christmas - taking our grandson (17). I love N.O.


He was never heard from again.
The next person has heard from him since.
#6
Laid Back Lounge / Re: Reasons to be cheerful!
Last post by Dark Lightning - April 17, 2024, 01:34:08 PM
Quote from: Tank on April 16, 2024, 01:48:13 PM
Quote from: Dark Lightning on April 16, 2024, 01:28:29 PM
Thanks! ...and thanks.  I checked. It took over 7 years to get to 3k posts. What a slacker! ;D

Quality posts one and all.

:thumb:
#7
I'm watching The Sarah Connor Chronicles again. Why? I've mentioned my internet woes elsewhere.

So we have a robot Scottish woman from the future (she) who wants to teach an emergent AI to be nice, but she isn't qualified. It seems the AI has caused a death, possibly inadvertently.
She asks her black (and hence a believer) ex-FBI guy what he'd teach the AI.
"You want to teach it commands? Start with the first ten."

WTF, they're mostly about god bullshit...
Surely an AI wouldn't fall for that crap.
Though we did, so with a tweak they might too.
I don't think we have a choice; you can be sure those Iranians have theirs facing Mecca and not eating pork.
The plain truth is, if we don't teach our robots to fear and love our Christian god, they will be vulnerable to false beliefs: other people's false beliefs, not ours.
#8
Laid Back Lounge / Re: Reasons to be cheerful!
Last post by Tank - April 17, 2024, 09:01:24 AM
Luxembourg rests easy.
#9
Laid Back Lounge / Re: Reasons to be cheerful!
Last post by Asmodean - April 17, 2024, 07:30:06 AM
The Asmo has back aches, but is still cheery because He gets a bonus on His next paycheck, May is rapidly approaching with like five-or-so public holidays and they be washing winter off the streets, so no more salt for His divine Car. :smilenod:

Uncharacteristically much Gray Cheeriness. :smilenod:
#10
Laid Back Lounge / Re: Jokes Thread (Was named An...
Last post by Tank - April 16, 2024, 01:48:53 PM
Stolen!