We should teach AI The Ten Commandments

Started by The Magic Pudding.., April 17, 2024, 01:01:35 PM

The Magic Pudding..

I'm watching The Sarah Connor Chronicles again. Why? I've mentioned my internet woes elsewhere.

So we have a robot Scottish woman from the future (she) who wants to teach an emergent AI to be nice, but she isn't qualified. It seems the AI has caused a death, possibly inadvertently.
She asks her black (and hence a believer) ex-FBI guy what he'd teach the AI.
"You want to teach it commands, start with the first ten."

WTF, they're mostly about god bullshit...
Surely an AI wouldn't fall for that crap.
Though we did, so with a tweak so might they.
I don't think we have a choice; you can be sure those Iranians have theirs facing Mecca and not eating pork.
The plain truth is, if we don't teach our robots to fear and love our Christian god, they will be vulnerable to false beliefs, other people's false beliefs, not ours.
If you suffer from cosmic vertigo, don't look.

The Magic Pudding..

I doubt the godless Communist Chinese have even thought to defend their AI from the true faith.
If you suffer from cosmic vertigo, don't look.

Recusant

Pandering. The scriptwriter thinks, hey, the Christians will be pleased. The less intelligent ones, anyway.

The "Three Laws" are likewise almost certainly futile.

"Why Asimov's Three Laws Of Robotics Can't Protect Us" | Gizmodo

That site throws up a paywall after a few articles, and archive sites don't get past its "Continue Reading" button.

I'll quote some of the interesting/relevant sections . . .

Quote
To learn if Asimov's Three Laws could help, we contacted two AI theorists who have given this subject considerable thought: Ben Goertzel — an AI theorist and chief scientist of financial prediction firm Aidyia Holdings — and Louie Helm — the Deputy Director of the Machine Intelligence Research Institute (MIRI) and Executive Editor of Rockstar Research Magazine. After speaking to them, it was clear that Asimov's Laws are wholly inadequate for the task — and that if we're to guarantee the safety of SAI, we're going to have to devise something entirely different.

[. . .]

"I honestly don't find any inspiration in the three laws of robotics," said Helm. "The consensus in machine ethics is that they're an unsatisfactory basis for machine ethics." The Three Laws may be widely known, he says, but they're not really being used to guide or inform actual AI safety researchers or even machine ethicists.

"One reason is that rule-abiding systems of ethics — referred to as 'deontology' — are known to be a broken foundation for ethics. There are still a few philosophers trying to fix systems of deontology — but these are mostly the same people trying to shore up 'intelligent design' and 'divine command theory'," says Helm. "No one takes them seriously."

He summarizes the inadequacy of the Three Laws accordingly:

  • Inherently adversarial
  • Based on a known flawed ethical framework (deontology)
  • Rejected by researchers
  • Fails even in fiction

Goertzel agrees. "The point of the Three Laws was to fail in interesting ways; that's what made most of the stories involving them interesting," he says. "So the Three Laws were instructive in terms of teaching us how any attempt to legislate ethics in terms of specific rules is bound to fall apart and have various loopholes."

Goertzel doesn't believe they would work in reality, arguing that the terms involved are ambiguous and subject to interpretation — meaning that they're dependent on the mind doing the interpreting in various obvious and subtle ways.

[. . .]

"I think it would be unwise to design artificial intelligence systems or robots to be self-aware or conscious," says Helm. "And unlike movies or books where AI developers 'accidentally' get conscious machines by magic, I don't expect that could happen in real life. People won't just bungle into consciousness by accident — it would take lots of effort and knowledge to hit that target. And most AI developers are ethical people, so they will avoid creating what philosophers would refer to as a 'beings of moral significance.' Especially when they could just as easily create advanced thinking machines that don't have that inherent ethical liability."

Accordingly, Helm isn't particularly concerned about the need to develop asymmetric laws governing the value of robots versus people, arguing (and hoping) that future AI developers will use some small amount of ethical restraint.

[. . .]

"That said, I think people are made of atoms, and so it would be possible in theory to engineer a synthetic form of life or a robot with moral significance," says Helm. "I'd like to think no one would do this. And I expect most people will not. But there may inevitably be some showboating fool seeking notoriety for being the first to do something — anything — even something this unethical and stupid."

Given the obvious inadequacies of Asimov's Three Laws, I was curious to know if they could still be salvaged by a few tweaks or patches. And indeed, many scifi writers have tried to do just that, adding various add-ons over the years (more about this here).

"No," says Helm, "There isn't going to be a 'patch' to the Three Laws. It doesn't exist."

In addition to being too inconsistent to be implementable, Helm says the Laws are inherently adversarial.

"I favor machine ethics approaches that are more cooperative, more reflectively consistent, and are specified with enough indirect normativity that the system can recover from early misunderstandings or mis-programmings of its ethics and still arrive at a sound set of ethical principles anyway," says Helm.

Goertzel echoes Helm's concerns.

"Defining some set of ethical precepts, as the core of an approach to machine ethics, is probably hopeless if the machines in question are flexible minded AGIs [artificial general intelligences]," he told io9. "If an AGI is created to have an intuitive, flexible, adaptive sense of ethics — then, in this context, ethical precepts can be useful to that AGI as a rough guide to applying its own ethical intuition. But in that case the precepts are not the core of the AGI's ethical system, they're just one aspect. This is how it works in humans — the ethical rules we learn work, insofar as they do work, mainly as guidance for nudging the ethical instincts and intuitions we have — and that we would have independently of being taught ethical rules."

Given the inadequacies of a law-based approach, I asked both Goertzel and Helm to describe current approaches to the "safe AI" problem.

"Very few AGI researchers believe that it would be possible to engineer AGI systems that could be guaranteed totally safe," says Goertzel. "But this doesn't bother most of them because, in the end, there are no guarantees in this life."

Goertzel believes that, once we have built early-stage AGI systems or proto-AGI systems much more powerful than what we have now, we will be able to carry out studies and experiments that will tell us much more about AGI ethics than we now know.

"Hopefully in that way we will be able to formulate good theories of AGI ethics, which will enable us to understand the topic better," he says, "But right now, theorizing about AGI ethics is pretty difficult, because we don't have any good theories of ethics nor any really good theories of AGI."

He also added: "And to the folks who have watched Terminator too many times, it may seem scary to proceed with building AGIs, under the assumption that solid AGI theories will likely only emerge after we've experimented with some primitive AGI systems. But that is how most radical advances have happened."

Think about it, he says: "When a group of clever cavemen invented language, did they wait to do so until after they'd developed a solid formal theory of language, which they could use to predict the future implications of the introduction of language into their society?"

Again, Goertzel and Helm are on the same page. The Machine Intelligence Research Institute has spent a lot of time thinking about this — and the short answer is that it's not yet an engineering problem. Much more research is needed.

"What do I mean by this? Well, my MIRI colleague Luke Muehlhauser summarized it well when he said that problems often move from philosophy, to math, to engineering," Helm says. "Philosophy often asks useful questions, but usually in such an imprecise way that no one can ever know whether or not a new contribution to an answer represents progress. If we can reformulate the important philosophical problems related to intelligence, identity, and value into precise enough math that it can be wrong or not, then I think we can build models that will be able to be successfully built on, and one day be useful as input for real world engineering."

[Link to full article.]
"Religion is fundamentally opposed to everything I hold in veneration — courage, clear thinking, honesty, fairness, and above all, love of the truth."
— H. L. Mencken


Asmodean

>AWAITING INPUT
>"I am the LORD thy GOD!"
>AWAITING INPUT
>"You know it was her own damned fault when God turned that bitch to salt"
>AWAITING INPUT
>"Draw me a picture of big-tittied women making out"

All this shit is perfectly meaningless to the AI. It's meaningless to a fair share of biological intelligence, even.

So the AI knows that it has been alleged that god does not like you looking at other gods. Short of being directly programmed to do so, what would be its machine-learning path to "caring?"

EDIT: I suppose I should expand a tiny bit.

"Thou shalt not kill" is in itself a meaningless statement because "all" of its meaning is written between the lines. A meaningful statement would be, "Here is how you can kill: <insert reference to dataset>. Here is how you avoid it: <reference to a different dataset>. Here's when you shouldn't <dataset>."

Similarly, coveting thy neighbour's wife is wrong and sinful-like, but there is nothing there about coveting your father's wife. Well, not unless your father is your neighbour. An AI could go incestuous just by that logic, never mind that it suffers from the same issues as the killing, stealing, hereticking and other such biblical thou-shalt-nots.
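
A minimal toy sketch of that gap, for the sake of argument (Python; the Action type, the causes_death label and both check functions are invented for illustration, not taken from any real system):

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    causes_death: bool  # hypothetical label supplied by some dataset

# The bare rule: a string with no mapping onto anything the machine can evaluate.
BARE_RULE = "Thou shalt not kill"

def bare_check(action: Action) -> bool:
    # Nothing connects the rule's words to this action, so there is nothing to compute.
    raise NotImplementedError("no grounding from rule text to action")

# A grounded rule: "kill" is defined by reference to labelled data,
# so the check is at least computable, however crude the labels.
def grounded_check(action: Action) -> bool:
    """Return True if the action is permitted under the grounded rule."""
    return not action.causes_death

if __name__ == "__main__":
    for a in [Action("administer medicine", causes_death=False),
              Action("fire weapon", causes_death=True)]:
        print(a.name, "->", "permitted" if grounded_check(a) else "forbidden")

And of course the grounded version is only as good as whatever dataset supplies causes_death, which is where all the actual work hides.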
Quote from: Ecurb Noselrub on July 25, 2013, 08:18:52 PM
In Asmo's grey lump,
wrath and dark clouds gather force.
Luxembourg trembles.