Happy Atheist Forum

General => Science => Topic started by: Dave on May 21, 2018, 04:03:59 PM

Title: All things AI
Post by: Dave on May 21, 2018, 04:03:59 PM
Getting enough stuff in the media and elsewhere to think this needs a dedicated, but generalised, thread of its own - the digital version of Silver's brain thread!

Here are a few links to start it off:

Quote
Artificial Intelligence.
Should we beware the machines? Professor Stephen Hawking has warned the rise of Artificial Intelligence could mean the end of the human race. He's joined other renowned scientists urging computer programmers to focus not just on making machines smarter, but also ensuring they promote the good and not the bad. How seriously should we take the warnings that super-intelligent machines could turn on us? And what does AI teach us about what it means to be human? Helena Merriman examines the risks, the opportunities and how we might avoid being turned into paperclips.
https://www.bbc.co.uk/programmes/b05372sx

QuoteUsing artificial intelligence to fight terrorism

At the end of a week in which we have been looking at the development of artificial intelligence (AI), we examine how effective computers can be in tracking suspected sex offenders - or terrorists.

The head of MI5 told us this week that the terrorist threat is at its highest level since 9/11. Our security correspondent, Gordon Corera, has been hearing about one British company - founded by people with a background in government and intelligence - to see what its AI technology can do.
https://www.bbc.co.uk/programmes/p032y5cj

QuoteArtificial Intelligence
In Our Time

Melvyn Bragg and guests discuss artificial intelligence. Can we create a machine that creates? Some argue so. And is consciousness, as we are, with headaches and tiffs and moods and small pleasures and sore feet - often all at the same time - capable of taking place in a machine? Artificial intelligence machines have been growing much more intelligent since Alan Turing's pioneering days at Bletchley in World War Two. Its claims are now very grand indeed. It is 31 years since Stanley Kubrick and Arthur C Clarke gave us HAL - the archetypal thinking computer of the film 2001: A Space Odyssey. But are we any nearer to achieving the thinking, feeling computer? Or is it just a dream - and should it remain as one? With Igor Aleksander, Professor, Imperial College London and inventor of Magnus - a neural computer which he says is an artificially conscious machine; John Searle, Professor of Philosophy, University of California and one of only two people in the world to invent an argument, the Chinese Room Argument, which destroys the plausibility of the idea of conscious machines.
https://www.bbc.co.uk/programmes/p00545h7

The above are for discussion only.
Title: Re: All things AI
Post by: Dave on May 21, 2018, 04:36:10 PM
Only watch this if you like cringing:

https://youtu.be/rO-89oBeBbQ
Title: Re: All things AI
Post by: Dave on May 21, 2018, 10:11:11 PM
Either the BBC are using computers to read news items or they have found a reader with intonations and cadences that sound very like a better, female, version of Stephen Hawking's voder. It was a description of sentences handed out to rebel soldiers in Turkey.

https://www.bbc.co.uk/programmes/w172w4drhtr0f8b

Title: Re: All things AI
Post by: Dave on May 28, 2018, 08:49:46 AM
Discrimination in recognition systems:

QuoteThe Observer
Interview
'A white mask worked better': why algorithms are not colour blind

Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. She grew up in Mississippi, gained a Rhodes scholarship, and she is also a Fulbright fellow, an Astronaut scholar and a Google Anita Borg scholar. Earlier this year she won a $50,000 scholarship funded by the makers of the film Hidden Figures for her work fighting coded discrimination.
https://www.theguardian.com/technology/2017/may/28/joy-buolamwini-when-algorithms-are-racist-facial-recognition-bias

The problem goes beyond that into the automatic recognition of skin cancers etc.

Also on BBC:
QuoteRacist AI
Business Daily

Can artificial intelligence and face recognition technology be racist? AI is increasingly being used in all aspects of our lives but there is a problem with it. It often can't see people because of the colour of their skin. Zoe Kleinman speaks to Joy Buolamwini founder of the Algorithmic Justice League, Suresh Venkatasubramanian from the School of Computing at the University of Utah and Calum Chase, an AI expert and author about what is being done to overcome this problem.
(Photo: Facial recognition system, Credit: Getty Images)

https://www.bbc.co.uk/programmes/w3cswgkg

Some of this is because the big data collectors, Google etc., statistically get more images of white people to train their AI systems and sell to others.
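A toy sketch of how that happens, with entirely invented numbers: a crude one-dimensional "face detector" tuned on a 90/10 skewed training set ends up far less accurate for the under-represented group, with no malicious intent anywhere in the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_faces(n, brightness):
    # 1-D stand-in for face images: pixel intensity clustered
    # around a group-specific mean (hypothetical values)
    return rng.normal(brightness, 1.0, size=n)

# Training set skewed 90/10 toward group A - the situation described
# above, where the collected data over-represents one group
train_a = make_faces(900, brightness=5.0)
train_b = make_faces(100, brightness=1.0)

# "Detector": call it a face if intensity is near the training mean
train = np.concatenate([train_a, train_b])
mean, tol = train.mean(), 2.0 * train.std()

def detects(x):
    return abs(x - mean) <= tol

# Evaluate on balanced test sets
test_a = make_faces(1000, brightness=5.0)
test_b = make_faces(1000, brightness=1.0)
acc_a = np.mean([detects(x) for x in test_a])
acc_b = np.mean([detects(x) for x in test_b])
print(f"group A detection rate: {acc_a:.2f}")  # close to 1.0
print(f"group B detection rate: {acc_b:.2f}")  # far lower
```

The detector was built "blind", yet the skew in the training data alone produces the accuracy gap.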
Title: Re: All things AI
Post by: Arturo on May 28, 2018, 03:54:54 PM
I think that is a far cry from racism though. It's not like they intentionally picked out white people only to recognize faces from. I think it's good that this is being brought forth but I don't think it's racism. I think it would be racism if they were intentionally trying to keep people of color out of their market and eliminate them from doing so. And it would only cement their racism if they said that their reason for doing so is if they were irreversibly, fundamentally, and/or inherently flawed in some way.

I wear glasses and often times camera filters have facial recognition and they don't recognize my face correctly. Do I go out and say that people are discriminating against me because I wear glasses? No I think that would be silly.

Because of the things I've seen change when going back and watching old 90's wrestling, I think it's gotten a lot better. The 90's wrestlers of color were always booed. And anyone who would fight them, if they were not a person of color, would always be cheered. Even if they were a bad guy. A white person beating up a person of color was always cheered. Now when I see white people act (namely people on this forum and elsewhere), I see racism being shot down and people of color being stuck up for when they can't do it themselves. And I think that's a good thing because it gives us a bigger voice.

"I'm interested in one thing Neo, the future. And believe me, I know, the only way to get there is together."

Title: Re: All things AI
Post by: Dave on May 28, 2018, 04:15:36 PM
I don't think they ever said it was "racism", Arturo, but "statistical bias". Take any bunch of people at random out of a crowd, using "blind" techniques (every third person, say), where the crowd has not congregated for a specific purpose (an anti-racism rally, for example), and the chances are you will not get a representative spread of the national racial mix. In some areas you might get more of one racial type than another. That forms a statistical bias and may not be useful for working out marketing strategy for goods that might have a cultural (food, clothing etc.) influence.

There are other methods, of course, and the above is simplistic. But it seems "bias" does creep in without any sinister motive, and it still makes the findings untrustworthy.
Title: Re: All things AI
Post by: Arturo on May 28, 2018, 04:52:09 PM
No no, you're right. They didn't say it was racist. But they asked the question. I was just giving my piece on it.
Title: Re: All things AI
Post by: Tank on May 28, 2018, 04:53:38 PM
" I think it would be racism if they were intentionally trying to keep people of color out of their market and eliminate them from doing so."

Does that make any difference to the person who is subject to elimination? No, of course it doesn't. Unconscious racism and conscious racism can have the same result. AI and selection algorithms can and will have significant impact on our future and if those algorithms have unconscious biases how will we know if we don't have free and ready to them?
Title: Re: All things AI
Post by: Arturo on May 28, 2018, 06:14:07 PM
Quote from: Tank on May 28, 2018, 04:53:38 PM
" I think it would be racism if they were intentionally trying to keep people of color out of their market and eliminate them from doing so."

Does that make any difference to the person who is subject to elimination? No, of course it doesn't. Unconscious racism and conscious racism can have the same result. AI and selection algorithms can and will have significant impact on our future and if those algorithms have unconscious biases how will we know if we don't have free and ready to them?

I don't understand your last question there. I think you made a typo or omitted something by mistake.

I'm not a programmer or an expert on AI, but from what I can tell AI can only be programmed to know what we tell it to. It might take that off into other directions, like Google's DeepMind did with the game Go in Korea, ultimately beating the champion instead of breaking like they thought it would when it went in a direction they saw as flawed.

But yeah, for me it's tomato, tomato. Corporations are in the business of making money. And in order to make the most money, they have to appeal to everyone. So it's in their best interest to correct this imperfection. That's why I say it's good that this is brought up. Because the implications can lead to positive outcomes. The same as when the same thing happens in social situations.
Title: Re: All things AI
Post by: Dave on May 28, 2018, 06:45:40 PM
Quote from: Arturo on May 28, 2018, 06:14:07 PM
Quote from: Tank on May 28, 2018, 04:53:38 PM
" I think it would be racism if they were intentionally trying to keep people of color out of their market and eliminate them from doing so."

Does that make any difference to the person who is subject to elimination? No, of course it doesn't. Unconscious racism and conscious racism can have the same result. AI and selection algorithms can and will have significant impact on our future and if those algorithms have unconscious biases how will we know if we don't have free and ready to them?

I don't understand your last question there. I think you made a typo or omitted something by mistake.

I'm not a programmer or an expert on AI, but from what I can tell AI can only be programmed to know what we tell it to. It might take that off into other directions, like Google's DeepMind did with the game Go in Korea, ultimately beating the champion instead of breaking like they thought it would when it went in a direction they saw as flawed.

But yeah, for me it's tomato, tomato. Corporations are in the business of making money. And in order to make the most money, they have to appeal to everyone. So it's in their best interest to correct this imperfection. That's why I say it's good that this is brought up. Because the implications can lead to positive outcomes. The same as when the same thing happens in social situations.

Yeah, I was wondering about Tank's last line as well!  :grin:

With regards to them pesky algorithms, true, they are a list of instructions which the computer cannot deviate from. But, to get the right result you have to compose exactly the right instruction, with no room at all for unexpected factors to cause a single hitch in thousands of lines of code. Misspell a critical word, forget to zero some variable . . .

The world is full of tiny errors that have big results. Ask a stupid question and you will get a stupid answer. Oh, just remembered good old GIGO - Garbage In Garbage Out!

Where is Davin when you need him?
Title: Re: All things AI
Post by: Tank on May 28, 2018, 07:01:23 PM
Algorithms are usually proprietary and a) not available for public inspection and b) probably too complex for the average person to understand. So we won't have access to them 'physically' or intellectually. We will have to trust the commercial entity that created them and, as we know, that never goes wrong :D
Title: Re: All things AI
Post by: Tank on May 28, 2018, 07:08:29 PM
There is also the issue of AI learning being uninterpretable and chaotic in nature. Two AI systems subjected to identical stimuli from the point of birth (switch on) will not develop identically because of quantum fluctuations during the learning process. They are by their very nature unpredictable.
Title: Re: All things AI
Post by: Dave on May 28, 2018, 07:53:44 PM
Quote from: Tank on May 28, 2018, 07:08:29 PM
There is also the issue of AI learning being uninterpretable and chaotic in nature. Two AI systems subjected to identical stimuli from the point of birth (switch on) will not develop identically because of quantum fluctuations during the learning process. They are by their very nature unpredictable.

Bit like some people, eh!

True, not sure how much in the way of a personality, ethics etc. can be programmed in. Such things probably need vast volumes of memory that have no other functional use - expensive. Asimov's "positronic brain" had, IIRC, a similar structure to the organic brain; paths were established during learning and remained as firmware. The Three Laws being similarly "burned in".

It has to get cheap enough for potential "rogues" to be recognised and culled at an early stage. Then you need clever, and expensive, testing to spot those potential rogues!

The whole field is in its infancy; limited function, energy hungry, still pretty unsafe except for basic functions like route following and collision avoidance. Though even the latter will be picked over until there are enough stats to show that, on average, it is safer to let a high-end self-drive car loose in city traffic than have a human driver.

I can see pseudo AI systems, stimulus-reaction systems as we have now, acting as visitor recognisers, house control systems, security etc. becoming cheaper in the next ten years. Maybe able enough to recognise that little Jimmy has slipped his leash, turned left out of the door and is headed for the big bad road. Especially if little Jimmy is wearing his GPS tag . . .

True AI systems with truly independent decision making and action selection (even if within parameters) are a whole other bundle of fun!

A lot of this will fall off the back of the Chinese systems I think. I doubt that they will be able to resist marketing the techniques and scooping up even more foreign capital. But some are already concerned about back-doors and logic bombs built into their products. We truly live in interesting times.

Correction: the "positronic brain" was a matrix made of metal alloys and was volatile, thus requiring a constant supply of energy to function. Thought I had better check my memory. We can do better than that, can't we? Gotta get the energy requirements and heat losses down though.
Title: Re: All things AI
Post by: Arturo on May 28, 2018, 10:31:20 PM
Quote from: Tank on May 28, 2018, 07:01:23 PM
Algorithms are usually proprietary and a) not available for public inspection and b) probably too complex for the average person to understand. So we won't have access to them 'physically' or intellectually. We will have to trust the commercial entity that created them and, as we know, that never goes wrong :D

It also is not impervious to being corrected. Like Dave said, the field is still in its infancy and many of these things are still getting bugs worked out. When a person makes a new product, the goal is to impress the consumer, not create a flop. My stance is to give the customer the best product possible before release.
Title: Re: All things AI
Post by: Dave on May 29, 2018, 02:37:03 AM
Quote from: Arturo on May 28, 2018, 10:31:20 PM
Quote from: Tank on May 28, 2018, 07:01:23 PM
Algorithms are usually proprietary and a) not available for public inspection and b) probably too complex for the average person to understand. So we won't have access to them 'physically' or intellectually. We will have to trust the commercial entity that created them and, as we know, that never goes wrong :D

It also is not impervious to being corrected. Like Dave said, the field is still in its infancy and many of these things are still getting bugs worked out. When a person makes a new product, the goal is to impress the consumer, not create a flop. My stance is to give the customer the best product possible before release.

My guess is that certain algorithms will acquire "class names", like functions in a washing machine. Sales patter will go along the lines of, "The Able 5 model House Control and Command system has grade A recognition routines in both the audio and visual functions, at least 99% accuracy once trained. It can differentiate between real command phrases and accidental ones from media due to its enhanced voice spectrum analysis and source location determination abilities. Safeguarding and barring routines have been greatly improved. . ."

(Subtext to last, ". . . after the embarrassing court case due to pre-teens mimicking the parents' passwords for access to the drinks cabinet and a hard porn channel . . .")

Your camera spec does not need to tell you how its face/smile recognition works, just imply that it does so reliably.

But, if a routine requires "this plus this plus this and that" to get it right, choosing an ambiguous "that" is potentially gonna foul things up! Like ". . . if there is light then . . ." without specifying just what kind of light, and without fitting sensors to differentiate between any conditions of sunlight and street lamps plus headlights.
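A minimal sketch of that kind of underspecified condition (the lux values and threshold here are invented purely for illustration): a rule that only asks "is there light?" cannot tell daylight from headlights.

```python
# An underspecified rule: "if there is light then it's daytime".
# A single brightness threshold can't distinguish light sources.
DAYLIGHT_THRESHOLD_LUX = 50  # hypothetical cut-off

def naive_is_daytime(lux):
    return lux > DAYLIGHT_THRESHOLD_LUX

readings = {
    "overcast day": 1000,
    "night, distant street lamp": 20,
    "night, oncoming headlights": 400,  # bright, but not daytime!
}

for scene, lux in readings.items():
    verdict = "day" if naive_is_daytime(lux) else "night"
    print(f"{scene}: {verdict}")
```

The headlight reading sails past the threshold, so the routine confidently gets it wrong: the ambiguity was in the specification, not in the code that faithfully executes it.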
Title: Re: All things AI
Post by: Arturo on May 29, 2018, 02:58:46 PM
(https://cdn.discordapp.com/attachments/363808132863885313/450797066902437908/unknown.png)

I posted this elsewhere but this is an AI bot that's been on the web for a while. The bot is free to use and is publicly available. The bot's posts are in blue...and it's savage.
Title: Re: All things AI
Post by: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.[2] If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

I pay homage to the most excellent Alan but I don't think much of this test.
Why bother making a human like machine when you can bypass our human foibles and make something better?
I'm glad the early aviators gave up on the flapping bird like wing and went with fixed wings, cars would be crap with legs.
Title: Re: All things AI
Post by: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.[2] If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

I pay homage to the most excellent Alan but I don't think much of this test.
Why bother making a human like machine when you can bypass our human foibles and make something better?
I'm glad the early aviators gave up on the flapping bird like wing and went with fixed wings, cars would be crap with legs.

Can you further explain why you dislike the actual test, Bad Penny?
Title: Re: All things AI
Post by: Bad Penny II on May 30, 2018, 12:24:16 PM
Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human centric testing for intelligence.
When testing dolphins or corvids for intelligence is it about what they can do or how human they seem?
Title: Re: All things AI
Post by: Dave on May 30, 2018, 01:18:28 PM
Quote from: Bad Penny II on May 30, 2018, 12:24:16 PM
Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human centric testing for intelligence.
When testing dolphins or corvids for intelligence is it about what they can do or how human they seem?
I think it depends on what you want an AI to do. If you want a pseudo-human, with a reasonable IQ, perhaps human centric intelligence is what you need. If you based the model on corvidae or delphinidae you would get a level of intelligence that is limited. If you then upped the very useful (to us) abilities then you would increasingly, automatically, create a pseudo-human intelligence anyway.

The current voice operated systems are less intelligent than, but I think similar to, dogs. They acquire a set of "tricks" that they then perform on command (sometimes). When they can watch just what you do, predict, independently and from experience, that you need a cuppa and a biscuit after doing the gardening - and ask you, fill the kettle, switch it on, open the biscuit jar . . . The dog might bring you your slippers without asking, once trained.

It's horses for courses, mate: if you want humanlike responses and actions you want a humanlike AI. If you want a learned stimulus/response system, automation with a touch of nous, self-learning ability, then the lesser intelligence model might suffice.

Self-drive cars are, sort of, faster mechanical guide dogs. I am wondering when the first "AI" mechanical guide dog will hit the market (if it hasn't already!)
Title: Re: All things AI
Post by: Bad Penny II on May 30, 2018, 01:43:18 PM
Quote from: Dave on May 30, 2018, 01:18:28 PM
Quote from: Bad Penny II on May 30, 2018, 12:24:16 PM
Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human centric testing for intelligence.
When testing dolphins or corvids for intelligence is it about what they can do or how human they seem?
I think it depends on what you want an AI to do. If you want a pseudo-human, with a reasonable IQ, perhaps human centric intelligence is what you need. If you based the model on corvidae or delphinidae you would get a level of intelligence that is limited.

I wasn't suggesting a model based on crows or flipper.
An artificial intelligence would exceed humans in ways and be deficient in others.
I don't like us using us as the unit of measure of intelligence.
I think you've sold crow and flipper a bit short too.
Title: Re: All things AI
Post by: Davin on May 30, 2018, 02:36:25 PM
Quote from: Bad Penny II on May 30, 2018, 12:24:16 PM
Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human centric testing for intelligence.
When testing dolphins or corvids for intelligence is it about what they can do or how human they seem?
Then you'd like Frans de Waal's approach to testing animals for intelligence. I think it could also be applied to AI once AI becomes a little more advanced.
Title: Re: All things AI
Post by: Dave on May 30, 2018, 02:54:13 PM
Quote from: Bad Penny II on May 30, 2018, 01:43:18 PM
Quote from: Dave on May 30, 2018, 01:18:28 PM
Quote from: Bad Penny II on May 30, 2018, 12:24:16 PM
Quote from: Dave on May 30, 2018, 11:46:31 AM
Quote from: Bad Penny II on May 30, 2018, 10:42:52 AM
QuoteWiki:The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Can you further explain why you dislike the actual test, Bad Penny?

The human centric testing for intelligence.
When testing dolphins or corvids for intelligence is it about what they can do or how human they seem?
I think it depends on what you want an AI to do. If you want a pseudo-human, with a reasonable IQ, perhaps human centric intelligence is what you need. If you based the model on corvidae or delphinidae you would get a level of intelligence that is limited.

I wasn't suggesting a model based on crows or flipper.
An artificial intelligence would exceed humans in ways and be deficient in others.
I don't like us using us as the unit of measure of intelligence.
I think you've sold crow and flipper a bit short too.

Maybe I am short-changing them a bit, but if I remember correctly most of the solutions they find to "novel" problems are based on either variations of standard behaviour or trying everything they can until something they want happens.

Now I am going to agree that covers a fair lump of human behaviour as well, but I am going to suggest that analysis of the problem, and especially of failed solution attempts, is more of a human trait. Hmm, though I seem to remember crows looking all round the problem before trying out solutions . . .

It annoys me when, in the videos of crows dropping blocks into water to raise the level, and the peanut, they do not include the hours of futile and repeated attempts. It is probably somewhere in the scholarly stuff, as might be offering the same problem to kids of various ages.
Title: Re: All things AI
Post by: Arturo on May 30, 2018, 05:01:56 PM
I think the demonstration of Google's Duplex was more bang and flash than what we will actually get. Most of the responses are probably the same and after a short while, you will probably see the pattern in them. But as far as actually picking up the phone and talking to one while you are at work will probably be a different result.

Although it has demonstrated the ability to hear difficult accents much better than anyone else could have predicted. And I would say much better than most people I know.

So as long as you see them side by side, Duplex would, in my estimation, fail the Turing test. But in practical use it would likely pass, unless someone was specifically looking for that and paying close attention.
Title: Re: All things AI
Post by: Dave on June 19, 2018, 09:39:20 AM
QuoteMan 1, machine 1: landmark debate between AI and humans ends in draw

It was man 1, machine 1 in the first live, public debate between an artificial intelligence system developed by IBM and two human debaters.

The AI, called Project Debater, appeared on stage in a packed conference room at IBM's San Francisco office embodied in a 6ft tall black panel with a blue, animated "mouth". It was a looming presence alongside the human debaters Noa Ovadia and Dan Zafrir, who stood behind a podium nearby.

Although the machine stumbled at many points, the unprecedented event offered a glimpse into how computers are learning to grapple with the messy, unstructured world of human decision-making.

https://www.theguardian.com/technology/2018/jun/18/artificial-intelligence-ibm-debate-project-debater

Quote from the radio, "It will take emotion out of decision making."

Robojudge is on its way, folks!
Title: Re: All things AI
Post by: Arturo on June 19, 2018, 01:06:49 PM
AND THEN MAN CREATED MACHINE IN HIS OWN IMAGE
Title: Re: All things AI
Post by: Recusant on May 02, 2021, 06:20:54 AM
This one perhaps belongs in the Brain thread, but I reckoned we shouldn't let Dave's AI thread languish.

Some interesting ideas in the piece, and bonus marks for name-checking Philip K. Dick (can almost overlook "sci-fi"). I think we'll get something like real artificial intelligence sooner or later but what the Hel do I know?

Quote for the post (with my modifications to establish context): "Consciousness is an emergent property born from the nested frequencies of synchronized spontaneous fluctuations in neuron activity levels." Deep, man.  :toke:


"Artificial intelligence research may have hit a dead end" | Salon (https://www.salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/)

QuotePhilip K. Dick's iconic 1968 sci-fi novel, "Do Androids Dream of Electric Sheep?" posed an intriguing question in its title: would an intelligent robot dream?

In the 53 years since publication, artificial intelligence research has matured significantly. And yet, despite Dick being prophetic about technology in other ways, the question posed in the title is not something AI researchers are that interested in; no one is trying to invent an android that dreams of electric sheep.

Why? Mainly, it's that most artificial intelligence researchers and scientists are busy trying to design "intelligent" software programmed to do specific tasks. There is no time for daydreaming.

Or is there? What if reason and logic are not the source of intelligence, but its product? What if the source of intelligence is more akin to dreaming and play?

Recent research into the "neuroscience of spontaneous fluctuations" points in this direction. If true, it would be a paradigm shift in our understanding of human consciousness. It would also mean that just about all artificial intelligence research is heading in the wrong direction.

[Continues . . . (https://www.salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/)]

I can't subscribe to the thesis that consciousness requires a body with senses that moves through the environment. The gathering of information about the environment through sensory apparatus, yes. However it seems presumptuous to say that it must be in a single package, as it is in biological entities.
Title: Re: All things AI
Post by: Recusant on August 08, 2021, 07:30:43 AM
Progress, perhaps. Cue the theremin.  :smokin cool:

"These Synthetic Neurons Use Ions to Hold Onto 'Memories', Just Like Our Brains Do" | Science Alert (https://www.sciencealert.com/scientists-create-artificial-neurons-that-can-hold-memories-for-milliseconds)

QuoteScientists have created key parts of synthetic brain cells that can hold cellular "memories" for milliseconds. The achievement could one day lead to computers that work like the human brain.

These parts, which were used to model an artificial brain cell, use charged particles called ions to produce an electrical signal, in the same way that information gets transferred between neurons in your brain.

Current computers can do incredible things, but this processing power comes at a high energy cost. In contrast, the human brain is remarkably efficient, using roughly the energy contained in two bananas to do an entire day's work.

While the reasons for this efficiency aren't entirely clear, scientists have reasoned that if they could make a computer more like the human brain, it would require way less energy.

One way that scientists try to replicate the brain's biological machinery is by utilizing the power of ions, the charged particles that the brain relies on to produce electricity.

In the new study, published in the journal Science on Aug. 6, researchers at the Centre national de la recherche scientifique in Paris, France, created a computer model of artificial neurons that could produce the same sort of electrical signals neurons use to transfer information in the brain; by sending ions through thin channels of water to mimic real ion channels, the researchers could produce these electrical spikes.

And now, they have even created a physical model incorporating these channels as part of unpublished, ongoing research.

[Continues . . . (https://www.sciencealert.com/scientists-create-artificial-neurons-that-can-hold-memories-for-milliseconds)]
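The voltage spikes described in the article are the kind classically captured by the Hodgkin-Huxley equations, which the abstract below says the nanofluidic slits reproduce. A minimal sketch of that model in Python, using the standard textbook squid-axon parameters and simple Euler integration (the 0 mV spike-detection threshold is my own choice for illustration, not anything from the paper):

```python
import math

# Classic Hodgkin-Huxley squid-axon parameters (mV, ms, uA/cm^2, mS/cm^2)
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def safe_exp_div(x, y):
    """x / (1 - exp(-x/y)), with the x -> 0 limit (= y) handled."""
    if abs(x) < 1e-7:
        return y
    return x / (1.0 - math.exp(-x / y))

def rates(v):
    """Voltage-dependent opening/closing rates for the m, h, n gates."""
    a_m = 0.1 * safe_exp_div(v + 40.0, 10.0)
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * safe_exp_div(v + 55.0, 10.0)
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(i_ext=10.0, t_total=50.0, dt=0.01):
    """Euler-integrate the HH equations; return the number of spikes."""
    v = -65.0                                   # resting potential
    a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
    m = a_m / (a_m + b_m)                       # gates start at steady state
    h = a_h / (a_h + b_h)
    n = a_n / (a_n + b_n)
    spikes, above = 0, False
    for _ in range(int(t_total / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        i_na = G_NA * m**3 * h * (v - E_NA)     # sodium current
        i_k = G_K * n**4 * (v - E_K)            # potassium current
        i_l = G_L * (v - E_L)                   # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        if v > 0.0 and not above:               # upward crossing = one spike
            spikes += 1
        above = v > 0.0
    return spikes

print(simulate())  # sustained input current above threshold -> repetitive firing
```

With a constant input current above the firing threshold, the membrane voltage emits a regular train of spikes, which is the behavior the researchers reproduced with ions flowing through water channels.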

The Science page for this paper (linked in the article) doesn't even show hoi polloi the abstract. I found that on another site (https://afroviral.com/modeling-of-emergent-memory-and-voltage-spiking-in-ionic-transport-through-angstrom-scale-slits/).

QuoteAbstract:

Recent advances in nanofluidics have enabled the confinement of water down to a single molecular layer. Such monolayer electrolytes show promise in achieving bioinspired functionalities through molecular control of ion transport. However, the understanding of ion dynamics in these systems is still scarce. Here, we develop an analytical theory, backed up by molecular dynamics simulations, that predicts strongly nonlinear effects in ion transport across quasi–two-dimensional slits. We show that under an electric field, ions assemble into elongated clusters, whose slow dynamics result in hysteretic conduction. This phenomenon, known as the memristor effect, can be harnessed to build an elementary neuron. As a proof of concept, we carry out molecular simulations of two nanofluidic slits that reproduce the Hodgkin-Huxley model and observe spontaneous emission of voltage spikes characteristic of neuromorphic activity.
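The "memristor effect" the abstract invokes means conduction that depends on the history of the applied voltage, not just its present value. A toy sketch of that idea (this is a generic state-dependent-conductance model, not the paper's ion-cluster mechanism; all parameter values here are invented for illustration):

```python
import math

def simulate_memristor(amp=1.0, freq=1.0, cycles=1.0, dt=1e-4,
                       g_on=1.0, g_off=0.05, mu=2.0):
    """Drive a state-dependent conductance with a sine voltage.

    Returns (v, i) samples; the internal state w drifts with the
    applied voltage, so current depends on voltage *history*.
    """
    w = 0.1                                       # internal state in [0, 1]
    samples = []
    steps = int(cycles / (freq * dt))
    for k in range(steps):
        t = k * dt
        v = amp * math.sin(2 * math.pi * freq * t)
        g = g_off + (g_on - g_off) * w            # state-dependent conductance
        samples.append((v, i := g * v))
        w = min(1.0, max(0.0, w + dt * mu * v))   # state drifts with voltage
    return samples

samples = simulate_memristor()
# The same voltage on the rising and falling half-cycle yields different
# currents: a pinched hysteresis loop, the memristor signature.
```

Plotting current against voltage over one cycle gives the pinched hysteresis loop; the paper's contribution is showing that slow ion-cluster dynamics in angstrom-scale slits produce this same history dependence, enough to build a Hodgkin-Huxley-style spiking element.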

Title: Re: All things AI
Post by: Recusant on March 02, 2024, 07:30:28 PM
Here's to Keith Laumer.  :cheers: 

"A.I. Joe: The Dangers of Artificial Intelligence and the Military" | Public Citizen (https://www.citizen.org/article/ai-joe-report/)

QuoteThe U.S. Department of Defense (DOD) and the military-industrial complex are rushing to embrace an artificial intelligence (AI)-driven future.

There's nothing particularly surprising or inherently worrisome about this trend. AI is already in widespread use and evolving generative AI technologies are likely to suffuse society, remaking jobs, organizational arrangements and machinery.

At the same time, AI poses manifold risks to society, and military applications present novel problems and concerns, as the Pentagon itself recognizes.

This report outlines some of the primary concerns around military applications of AI use. It begins with a brief overview of the Pentagon's AI policy. Then it reviews:

  • The grave dangers of autonomous weapons – "killer robots" programmed to make their own decisions about use of lethal force.
  • The imperative of ensuring that decisions to use nuclear weapons can be made only by humans, not automated systems.
  • How AI intelligence processing can increase, not diminish, the use of violence.
  • The risks of using deepfakes on the battlefield.

The report then reviews how military AI start-ups are crusading for Pentagon contracts, including by following the tried-and-true tactic of relying on revolving door relationships.

The report concludes with a series of recommendations:

  • The United States should pledge not to develop or deploy autonomous weapons, and should support a global treaty banning such weapons.
  • The United States should codify the commitment that only humans can launch nuclear weapons.
  • Deepfakes should be banned from the battlefield.
  • Spending for AI technologies should come from the already bloated and wasteful Pentagon budget, not additional appropriations.

[Continues . . . (https://www.citizen.org/article/ai-joe-report/)]