
All things AI

Started by Dave, May 21, 2018, 04:03:59 PM


Arturo



I posted this elsewhere, but this is an AI bot that's been on the web for a while. The bot is free to use and publicly available. The bot's posts are in blue... and it's savage.
It's Okay To Say You're Welcome
     Just let people be themselves.
     Arturo The1  リ壱

Bad Penny II

QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.[2] If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.
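
Boiled down, the protocol is a blind, text-only guessing game. Here is a minimal sketch of it in Python; the responder functions are hypothetical stand-ins, not any real bot:

Code:
# Minimal sketch of the imitation game's text-only protocol.
# Hypothetical stand-ins throughout; no real chatbot is being called.
import random

def reply_as_human(prompt: str) -> str:
    return "I'd have to think about that."   # stand-in human answer

def reply_as_machine(prompt: str) -> str:
    return "I'd have to think about that."   # a machine mimicking it

def imitation_game(questions, evaluator_guess):
    # Hide the two responders behind labels A/B at random.
    labels = {"A": reply_as_human, "B": reply_as_machine}
    if random.random() < 0.5:
        labels = {"A": reply_as_machine, "B": reply_as_human}
    # The evaluator only ever sees text, never who produced it.
    transcript = {lab: [f(q) for q in questions] for lab, f in labels.items()}
    guess = evaluator_guess(transcript)
    truth = "A" if labels["A"] is reply_as_machine else "B"
    return guess == truth   # did the evaluator catch the machine?

# An evaluator who can't tell the two apart is reduced to guessing:
rounds = [imitation_game(["What is a sonnet?"],
                         lambda t: random.choice(["A", "B"]))
          for _ in range(1000)]
print(sum(rounds) / len(rounds))   # ~0.5, i.e. the machine "passes"

If the evaluator's guesses hover around chance over many rounds, the machine has passed.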

I pay homage to the most excellent Alan, but I don't think much of this test.
Why bother making a human-like machine when you can bypass our human foibles and make something better?
I'm glad the early aviators gave up on the flapping, bird-like wing and went with fixed wings; cars would be crap with legs.
Take my advice, don't listen to me.

Dave

Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.[2] If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

I pay homage to the most excellent Alan, but I don't think much of this test.
Why bother making a human-like machine when you can bypass our human foibles and make something better?
I'm glad the early aviators gave up on the flapping, bird-like wing and went with fixed wings; cars would be crap with legs.

Can you further explain why you dislike the actual test, Bad Penny?
Tomorrow is precious, don't ruin it by fouling up today.
Passed Monday 10th Dec 2018 age 74

Bad Penny II

Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human-centric testing for intelligence.
When testing dolphins or corvids for intelligence, is it about what they can do or how human they seem?
Take my advice, don't listen to me.

Dave

Quote from: Bad Penny II on May 30, 2018, 12:24:16 PM
Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human-centric testing for intelligence.
When testing dolphins or corvids for intelligence, is it about what they can do or how human they seem?
I think it depends on what you want an AI to do. If you want a pseudo-human with a reasonable IQ, perhaps human-centric intelligence is what you need. If you based the model on corvidae or delphinidae you would get a limited level of intelligence. If you then upped the very useful (to us) abilities, you would increasingly, and automatically, create a pseudo-human intelligence anyway.

The current voice-operated systems are less intelligent than dogs but, I think, similar to them. They acquire a set of "tricks" that they then perform on command (sometimes). When they can watch what you do and predict, independently and from experience, that you need a cuppa and a biscuit after doing the gardening - and ask you, fill the kettle, switch it on, open the biscuit jar . . . The dog might bring you your slippers without asking, once trained.

It's horses for courses, mate: if you want human-like responses and actions, you want a human-like AI. If you want a learned stimulus/response system, automation with a touch of nous, a self-learning ability, then the lesser-intelligence model might suffice (toy sketch below).
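
A toy sketch of that lesser, stimulus/response sort of learner, assuming the simplest possible design; all names are hypothetical and nothing here resembles a real assistant's internals:

Code:
# A toy "learned stimulus/response" agent: it counts which action the
# owner takes after each situation and replays the most frequent one.
from collections import Counter, defaultdict

class TrickLearner:
    def __init__(self):
        self.memory = defaultdict(Counter)   # stimulus -> action counts

    def observe(self, stimulus: str, action: str) -> None:
        """Watch what the owner does in a given situation."""
        self.memory[stimulus][action] += 1

    def act(self, stimulus: str):
        """Offer the action most often seen after this stimulus."""
        seen = self.memory.get(stimulus)
        return seen.most_common(1)[0][0] if seen else None

agent = TrickLearner()
for _ in range(3):
    agent.observe("finished gardening", "make tea, open biscuit jar")
agent.observe("finished gardening", "fetch slippers")

print(agent.act("finished gardening"))   # -> "make tea, open biscuit jar"
print(agent.act("doorbell"))             # -> None: no trick learned yet

Pure learned association, no analysis of the problem: exactly the dog-with-tricks level of nous, not the pseudo-human one.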

Self-driving cars are, sort of, faster mechanical guide dogs. I am wondering when the first "AI" mechanical guide dog will hit the market (if it hasn't already!).
Tomorrow is precious, don't ruin it by fouling up today.
Passed Monday 10th Dec 2018 age 74

Bad Penny II

Quote from: Dave on May 30, 2018, 01:18:28 PM
Quote from: Bad Penny II on May 30, 2018, 12:24:16 PM
Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human-centric testing for intelligence.
When testing dolphins or corvids for intelligence, is it about what they can do or how human they seem?
I think it depends on what you want an AI to do. If you want a pseudo-human with a reasonable IQ, perhaps human-centric intelligence is what you need. If you based the model on corvidae or delphinidae you would get a limited level of intelligence.

I wasn't suggesting a model based on crows or Flipper.
An artificial intelligence would exceed humans in some ways and be deficient in others.
I don't like us using ourselves as the unit of measure of intelligence.
I think you've sold the crows and Flipper a bit short too.
Take my advice, don't listen to me.

Davin

Quote from: Bad Penny II on May 30, 2018, 12:24:16 PM
Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human-centric testing for intelligence.
When testing dolphins or corvids for intelligence, is it about what they can do or how human they seem?
Then you'd like Frans de Waal's approach to testing animals for intelligence. I think it could also be applied to AI once AI becomes a little more advanced.
Always question all authorities because the authority you don't question is the most dangerous... except me, never question me.

Dave

Quote from: Bad Penny II on May 30, 2018, 01:43:18 PM
Quote from: Dave on May 30, 2018, 01:18:28 PM
Quote from: Bad Penny II on May 30, 2018, 12:24:16 PM
Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human-centric testing for intelligence.
When testing dolphins or corvids for intelligence, is it about what they can do or how human they seem?
I think it depends on what you want an AI to do. If you want a pseudo-human with a reasonable IQ, perhaps human-centric intelligence is what you need. If you based the model on corvidae or delphinidae you would get a limited level of intelligence.

I wasn't suggesting a model based on crows or Flipper.
An artificial intelligence would exceed humans in some ways and be deficient in others.
I don't like us using ourselves as the unit of measure of intelligence.
I think you've sold the crows and Flipper a bit short too.

Maybe I am short-changing them a bit, but if I remember correctly, most of the solutions they find to "novel" problems are based either on variations of standard behaviour or on trying everything they can until something they want happens.

Now, I will agree that covers a fair lump of human behaviour as well, but I am going to suggest that analysis of the problem, and especially of failed solution attempts, is more of a human trait. Hmm, though I seem to remember crows looking all round the problem before trying out solutions . . .

It annoys me that the videos of crows dropping blocks into water to raise the level and reach the peanut never include the hours of futile and repeated attempts. It is probably somewhere in the scholarly stuff, as might be the results of offering the same problem to kids of various ages.
Tomorrow is precious, don't ruin it by fouling up today.
Passed Monday 10th Dec 2018 age 74

Arturo

I think the demonstration of Google's Duplex was more bang and flash than what we will actually get. Most of the responses are probably the same, and after a short while you will probably see the pattern in them. But actually picking up the phone and talking to one while you are at work will probably give a different result.

That said, it has demonstrated the ability to hear difficult accents much better than anyone could have predicted. And I would say much better than most people I know.

So as long as you saw them side by side, Duplex would, in my estimation, fail the Turing test. But in practical use it would likely pass, unless someone was specifically listening for it and paying close attention.
It's Okay To Say You're Welcome
     Just let people be themselves.
     Arturo The1  リ壱

Dave

QuoteMan 1, machine 1: landmark debate between AI and humans ends in draw

It was man 1, machine 1 in the first live, public debate between an artificial intelligence system developed by IBM and two human debaters.

The AI, called Project Debater, appeared on stage in a packed conference room at IBM's San Francisco office embodied in a 6ft tall black panel with a blue, animated "mouth". It was a looming presence alongside the human debaters Noa Ovadia and Dan Zafrir, who stood behind a podium nearby.

Although the machine stumbled at many points, the unprecedented event offered a glimpse into how computers are learning to grapple with the messy, unstructured world of human decision-making.

https://www.theguardian.com/technology/2018/jun/18/artificial-intelligence-ibm-debate-project-debater

Quote from the radio, "It will take emotion out of decision making."

Robojudge is on its way, folks!
Tomorrow is precious, don't ruin it by fouling up today.
Passed Monday 10th Dec 2018 age 74

Arturo

AND THEN MAN CREATED MACHINE IN HIS OWN IMAGE
It's Okay To Say You're Welcome
     Just let people be themselves.
     Arturo The1  リ壱

Recusant

#26
This one perhaps belongs in the Brain thread, but I reckoned we shouldn't let Dave's AI thread languish.

Some interesting ideas in the piece, and bonus marks for name-checking Philip K. Dick (can almost overlook "sci-fi"). I think we'll get something like real artificial intelligence sooner or later but what the Hel do I know?

Quote for the post (with my modifications to establish context): "Consciousness is an emergent property born from the nested frequencies of synchronized spontaneous fluctuations in neuron activity levels." Deep, man.  :toke:


"Artificial intelligence research may have hit a dead end" | Salon

QuotePhilip K. Dick's iconic 1968 sci-fi novel, "Do Androids Dream of Electric Sheep?" posed an intriguing question in its title: would an intelligent robot dream?

In the 53 years since publication, artificial intelligence research has matured significantly. And yet, despite Dick being prophetic about technology in other ways, the question posed in the title is not something AI researchers are that interested in; no one is trying to invent an android that dreams of electric sheep.

Why? Mainly, it's that most artificial intelligence researchers and scientists are busy trying to design "intelligent" software programmed to do specific tasks. There is no time for daydreaming.

Or is there? What if reason and logic are not the source of intelligence, but its product? What if the source of intelligence is more akin to dreaming and play?

Recent research into the "neuroscience of spontaneous fluctuations" points in this direction. If true, it would be a paradigm shift in our understanding of human consciousness. It would also mean that just about all artificial intelligence research is heading in the wrong direction.

[Continues . . .]

I can't subscribe to the thesis that consciousness requires a body with senses that moves through the environment. The gathering of information about the environment through sensory apparatus, yes. However, it seems presumptuous to say that it must be in a single package, as it is in biological entities.
"Religion is fundamentally opposed to everything I hold in veneration — courage, clear thinking, honesty, fairness, and above all, love of the truth."
— H. L. Mencken


Recusant

Progress, perhaps. Cue the theremin.  :smokin cool:

"These Synthetic Neurons Use Ions to Hold Onto 'Memories', Just Like Our Brains Do" | Science Alert

QuoteScientists have created key parts of synthetic brain cells that can hold cellular "memories" for milliseconds. The achievement could one day lead to computers that work like the human brain.

These parts, which were used to model an artificial brain cell, use charged particles called ions to produce an electrical signal, in the same way that information gets transferred between neurons in your brain.

Current computers can do incredible things, but this processing power comes at a high energy cost. In contrast, the human brain is remarkably efficient, using roughly the energy contained in two bananas to do an entire day's work.

While the reasons for this efficiency aren't entirely clear, scientists have reasoned that if they could make a computer more like the human brain, it would require way less energy.

One way that scientists try to replicate the brain's biological machinery is by utilizing the power of ions, the charged particles that the brain relies on to produce electricity.

In the new study, published in the journal Science on Aug. 6, researchers at the Centre national de la recherche scientifique in Paris, France, created a computer model of artificial neurons that could produce the same sort of electrical signals neurons use to transfer information in the brain; by sending ions through thin channels of water to mimic real ion channels, the researchers could produce these electrical spikes.

And now, they have even created a physical model incorporating these channels as part of unpublished, ongoing research.

[Continues . . .]

The Science page for this paper (linked in the article) doesn't even show hoi polloi the abstract. I found that on another site.

QuoteAbstract:

Recent advances in nanofluidics have enabled the confinement of water down to a single molecular layer. Such monolayer electrolytes show promise in achieving bioinspired functionalities through molecular control of ion transport. However, the understanding of ion dynamics in these systems is still scarce. Here, we develop an analytical theory, backed up by molecular dynamics simulations, that predicts strongly nonlinear effects in ion transport across quasi–two-dimensional slits. We show that under an electric field, ions assemble into elongated clusters, whose slow dynamics result in hysteretic conduction. This phenomenon, known as the memristor effect, can be harnessed to build an elementary neuron. As a proof of concept, we carry out molecular simulations of two nanofluidic slits that reproduce the Hodgkin-Huxley model and observe spontaneous emission of voltage spikes characteristic of neuromorphic activity.
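
Since the abstract name-checks Hodgkin-Huxley, here is a minimal sketch of those classic membrane equations for anyone wondering what "voltage spikes" means concretely. Standard textbook squid-axon parameters; nothing below comes from the paper itself:

Code:
# Minimal forward-Euler integration of the classic Hodgkin-Huxley
# equations (textbook squid-axon parameters, mV / ms units).
import math

C_M = 1.0                               # membrane capacitance (uF/cm^2)
G_NA, G_K, G_L = 120.0, 36.0, 0.3       # peak conductances (mS/cm^2)
E_NA, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials (mV)

def alpha_n(v): return 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * math.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * math.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * math.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + math.exp(-(v + 35) / 10))

def count_spikes(i_ext=10.0, t_max=50.0, dt=0.01):
    """Drive the membrane with a constant current and count spikes."""
    v, n, m, h = -65.0, 0.317, 0.053, 0.596   # resting state
    spikes, above, t = 0, False, 0.0
    while t < t_max:
        i_na = G_NA * m**3 * h * (v - E_NA)   # sodium current
        i_k  = G_K * n**4 * (v - E_K)         # potassium current
        i_l  = G_L * (v - E_L)                # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        if v > 0 and not above:
            spikes += 1                       # upward zero-crossing
        above = v > 0
        t += dt
    return spikes

print(count_spikes())   # a steady drive yields a regular spike train

The claim in the paper, as I read it, is that clusters of ions in the nanofluidic slits can stand in for the sodium and potassium channel variables above and produce the same sort of spontaneous spiking.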

"Religion is fundamentally opposed to everything I hold in veneration — courage, clear thinking, honesty, fairness, and above all, love of the truth."
— H. L. Mencken


Recusant

Here's to Keith Laumer.  :cheers: 

"A.I. Joe: The Dangers of Artificial Intelligence and the Military" | Public Citizen

QuoteThe U.S. Department of Defense (DOD) and the military-industrial complex are rushing to embrace an artificial intelligence (AI)-driven future.

There's nothing particularly surprising or inherently worrisome about this trend. AI is already in widespread use and evolving generative AI technologies are likely to suffuse society, remaking jobs, organizational arrangements and machinery.

At the same time, AI poses manifold risks to society, and military applications present novel problems and concerns, as the Pentagon itself recognizes.

This report outlines some of the primary concerns around military applications of AI. It begins with a brief overview of the Pentagon's AI policy. Then it reviews:

  • The grave dangers of autonomous weapons – "killer robots" programmed to make their own decisions about use of lethal force.
  • The imperative of ensuring that decisions to use nuclear weapons can be made only by humans, not automated systems.
  • How AI intelligence processing can increase, not diminish, the use of violence.
  • The risks of using deepfakes on the battlefield.

The report then reviews how military AI start-ups are crusading for Pentagon contracts, including by following the tried-and-true tactic of relying on revolving door relationships.

The report concludes with a series of recommendations:

  • The United States should pledge not to develop or deploy autonomous weapons, and should support a global treaty banning such weapons.
  • The United States should codify the commitment that only humans can launch nuclear weapons.
  • Deepfakes should be banned from the battlefield.
  • Spending for AI technologies should come from the already bloated and wasteful Pentagon budget, not additional appropriations.


[Continues . . .]
"Religion is fundamentally opposed to everything I hold in veneration — courage, clear thinking, honesty, fairness, and above all, love of the truth."
— H. L. Mencken