
artificial intelligence

Started by karl_greifenklau, January 15, 2008, 01:20:41 AM


karl_greifenklau

hi! this is my first post here ... sorry for my bad language and content (I never studied English)

what do you think about artificial intelligence?
and do you have any special references? I'm very interested (possibly for my future career in informatics)


I think it's very important to have a separate topic in the science category for AI (artificial intelligence), because if a total or partial AI can be made, it will be another argument for the nonexistence of the god presented in any religion.

I have not read much about AI (and I want to), but I have seen a lot of American movies about it (like: Terminator :D)

I have a lot to say but it's hard to explain (language reasons :|) and I must go now (to be continued)

 waiting for replies

jcm

#1
I think the biggest undertaking in AI will be the job of creating consciousness in a computer. I don't really understand how consciousness works, but I don't think it is anything beyond my physical components. The "me" in my brain is nothing special. There is no timeless soul that existed before my body or my brain. I had to learn about the world, as well as myself, from the beginning. If I existed before my body or my brain, then I would have had experiences/memories from before then, but I don't.

This is a good thing for AI because consciousness, in theory, is not magical or special. The components of my brain make me who I am. The more we learn about how the parts of the brain work, the closer we will get to creating a brain-like computer.

We have already developed advanced prosthetics, supercomputers, and experimented with cloning. In time we will merge these different technologies together. In the future, there may not be a difference between humans and machines. Yeah, we will all be a bunch of Borgs roaming around assimilating people.
For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring. -cs

Mister Joy

#2
It's difficult to learn how parts of the brain work in explicit detail without engaging in unethical methods of study. That's often the case, unfortunately. We wouldn't know half of what we do today about biology if it weren't for the grave robbery, illegal autopsies and even murders that went on in the early development of the science. Psychologists now, for instance, are extremely restricted in terms of what they can and can't do in their research, arguably to the point of being a bit extreme. I think we're a very long way from fully understanding how consciousness actually functions biologically.

I wonder if, in the very distant future, it would be possible to purchase your own personality traits and way of thinking. Circa 3010 Magazine: "Fashionable personality of the week: THE DEPRESSIVE INTROVERT. Best offer only £350!" That would be a very strange world.

Whitney

#3
Welcome to the forum, karl.

The Japanese have already created a robot which can mimic human emotional responses when it is told various words and what those words are associated with.  It is programmed to make a face of disgust at the word "president" because it associates the word with Bush Jr...lol.

When dealing with AI, it seems to me that the way to define it as having consciousness is to ask whether it is able to decide that it wants to disobey commands.  After all, we already have tons of robots that can sense their surroundings and exhibit a response (some in a more complex manner than others).  Consciousness in robots would require us to learn how to create free will, or at least free will to the same extent humans possess it.

I don't think it would be a good idea to make robots with that level of consciousness because we don't need a bunch of metal humans running around...and that's basically what they would be.  Robots are helpful to us because they obey commands....robots with full AI capabilities would not be nearly as helpful and could potentially decide to turn against us (as is illustrated by many AI related movies).

SteveS

#4
Quote from: "laetusatheos"It is programmed to make a face of disgust at the word president because it associates the word with Bush Jr...lol.
For real?  That's hilarious!

Quote from: "laetusatheos"robots with full AI capabilities would not be nearly as helpful and could potentially decide to turn against us (as is illustrated by many AI related movies).
Yeah - I always like to focus on the "artificial" part of the phrase.  In other words, an "artificially" intelligent machine is one that is duplicating a job that would otherwise require a human intelligence.  That doesn't necessarily mean the machine requires will, personality, emotion, or desire.  In other words, the machine is "acting intelligently" for whatever task it is performing.

And I'm okay with that.  Please, people, don't build any Cylons in your basement - it'll all end in tears!
 :wink:

bitter_sweet_symphony

#5
Quoterobots with full AI capabilities would not be nearly as helpful and could potentially decide to turn against us (as is illustrated by many AI related movies).

Actually it depends on how intelligent the robots are. Current robots can't "think" beyond what is programmed into them. You tell a robot to associate a group of words with some actions. If asked to interpret a command it doesn't know, it will give an error message; it can't "guess". For example, if asked the number of stars that exist, it won't answer "millions". This gives the programmers more control over the robots.
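That lookup-and-error behaviour can be sketched in a few lines (the command table and names here are invented for illustration):

```python
# Toy command interpreter: the robot only knows the word-to-action
# associations it was given; anything else produces an error, never a guess.

ACTIONS = {
    "forward": "moving forward",
    "stop": "stopping",
    "lights": "toggling lights",
}

def interpret(command: str) -> str:
    """Look up a command; unknown input yields an error message."""
    try:
        return ACTIONS[command.lower()]
    except KeyError:
        return f"ERROR: unknown command '{command}'"

print(interpret("forward"))               # moving forward
print(interpret("how many stars exist"))  # ERROR: unknown command ...
```

There is simply no code path that could produce "millions"; the program either matches an entry or reports failure.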

Unless a virus attacks the robot, it won't turn against its owners. If the defense of a country is handed over to robots, then there is a chance that the country's enemies can hack into the defense system and turn all the robot soldiers to their side. So, robots won't turn against us by themselves, but an enemy agent could hack into them and wreak havoc.

 
QuoteThat doesn't necessarily mean the machine requires will, personality, emotion, or desire. In other words, the machine is "acting intelligently" for whatever task it is performing.

I think, from a robot's point of view, "intelligence" and "emotion" won't be much different. Both will be a type of input-output system and will have to be programmed. The programmer will have to sit down and program the robot to cry and work slower when it is lonely. Emotions, like intelligence, can only be simulated.
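The "work slower when lonely" idea really is just an input-output system; a hypothetical sketch (all names and numbers invented for illustration):

```python
# "Emotion" as a programmed variable: a loneliness counter that
# throttles the robot's work speed, exactly as the programmer decided.

class Robot:
    def __init__(self) -> None:
        self.loneliness = 0  # the programmed "emotion": just an integer

    def tick(self, interacted: bool) -> float:
        """Advance one time step and return work speed (1.0 = full speed)."""
        if interacted:
            self.loneliness = 0
        else:
            self.loneliness += 1
        # the "lonely" robot works slower, as programmed
        return max(0.2, 1.0 - 0.1 * self.loneliness)

r = Robot()
print(r.tick(interacted=True))   # 1.0: full speed after interaction
for _ in range(5):
    speed = r.tick(interacted=False)
print(speed)                     # 0.5: slowed down after 5 lonely steps
```

Nothing here "feels" anything; the emotion is a number and a formula, which is the poster's point about simulation.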

karl_greifenklau

#6
When I made the distinction between partial and total AI, I was thinking more of the "software" than of the physical part of a robot.

I think we can simulate a partial/total AI in a safe virtual world.

I forgot to say that a total AI is complete only when it is capable, in its turn, of building another total AI or of emulating a human.
A normal human acts like a partial AI; a totally intelligent human would know how his intelligence works, and would also know how to build one. It's like watching two parallel mirrors facing each other, or recording the screen of a TV that is playing what we're recording. What happens at one level happens at all levels recursively; that is how I think a total AI would function.


All the actual robots are preprogrammed to do things, in the same way an insect is programmed to search for food. (The differences between an insect and a robot are that the insect is also preprogrammed to reproduce itself, and the insect's "software" is more efficient because it is built dynamically and has high-resolution, accurate sensors.)

After we build a partial AI in a matrix, I think we can program robots much faster to do more elaborate things. Honda built a robot that can walk/run,
but this is not such a big deal because it's not efficient at the natural (dynamic, biological) level of learning speed.

karl_greifenklau

http://singinst.org/research/summary (the Singularity Institute for Artificial Intelligence has already started building a general AI ...)
http://singinst.org/media/interviews (on-topic interviews)

http://youtube.com/watch?v=5hsvCib83ME (Ben Goertzel's Google Tech Talk)

http://youtube.com/watch?v=AyzOUbkUf3M (59 min neural networks Google Tech Talk)

joeactor

Interesting topic.

I've actually coded several different types of AI.  Some attempt to approximate how the human mind works, and others come at it from a variety of other algorithms.

There's no real reason why an AI system has to resemble a human one at all.  Just look at the variety of robots we've created.  I wouldn't say that the Mars rover or the Roomba resembles anything even vaguely human.

And so it may be with AI.  The first self-aware system may differ from us on a variety of levels.  It may have no emotions, or be capable of feeling emotions that we don't even have words for.

In any case, short of a major breakthrough, we're a long way off from true AI, IMHO...  The systems we've got today are very specialized in one topic (e.g. chess), or are at a very rudimentary learning level of "thinking".  Plenty of tricks are used to make AI seem more realistic, but in essence most of them are still tricks.

I do recall reading about an interesting project with a system that was reading the dictionary.  As it read, it was making its own inferences based on cross-referencing words, phrases and concepts.  I wonder how that's doing?

Off to code SkyNet,
JoeActor

rotu

Hi all. lol

I'm new here. I have been summarily dismissed from a local religion forum for being too atheist, I guess, so I subsequently found this forum. Looks like a good one.

I am currently reading Physics of the Impossible by Michio Kaku and just yesterday finished the chapter on AI. The problems with AI are huge, especially when you get into judgement calls. Basically, a computer does just that...computes. You can fill it with all kinds of information but all it can do with it is compute. The human brain has trillions of connections and we are reaching the limits of what we can do with current computer technology. Everything in computers is getting smaller and eventually, you get to the point where you're dealing with individual atoms and electrons and the Heisenberg uncertainty principle starts to kick in. i.e., you can't know where an electron is and where it's going at the same time. There's a lot more to it than that, but that's kind of a boiled down explanation.

Have you ever heard of the Turing test? The idea is to have a person in one room and a computer in the other. The person would carry on a conversation with the computer (not knowing if it was a computer or a person) and for a computer to pass the Turing test, the person would not be able to tell whether it was, in fact, a computer or a person. So far, computers have been developed that can answer simple questions or rephrase statements, but no computer has even come close to passing the test.
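The "rephrase statements" programs mentioned above work roughly like ELIZA; a minimal sketch (pronoun table invented for illustration):

```python
# ELIZA-style rephraser: turns a statement back into a question by
# swapping pronouns, with no understanding at all -- which is exactly
# why such programs fall far short of passing the Turing test.

SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def rephrase(statement: str) -> str:
    words = [SWAPS.get(w.lower(), w.lower())
             for w in statement.strip(".!?").split()]
    return "Why do you say " + " ".join(words) + "?"

print(rephrase("I am afraid of computers."))
# Why do you say you are afraid of computers?
```

A few exchanges expose it immediately: it has no memory, no facts, and no model of the conversation, only string substitution.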

I guess there are computers that have intelligence about equivalent to an ant, but getting to the point of a thinking, feeling, self-aware computer or robot is not very likely. What may be possible someday is to integrate humans and robots - the old cyborg, or cybernetic organism. They are learning more and more about how the brain works and may someday be able to do something like give us robotic bodies that our brains operate. There have been successes with training quadriplegics to manipulate a computer cursor just with thoughts, so who knows. We could actually evolve into cyborgs someday.

SteveS

Quote from: "rotu"Have you ever heard of the Turing test?
Yes.  Although, I prefer the fictional Voight-Kampff test in Blade Runner.  ;)

And hey, welcome to the board!  If you're too atheistic for the religious forum I think you'll fit in just fine here.

karadan

Cool topic. One I am very interested in.

The Turing test has been shown to be a very one-dimensional and unreliable gauge of intelligence. Read 'In the Mind of the Machine' by Kevin Warwick. A very interesting read. He proposes that 'intelligence', from a human perspective, is derived through emotional states. Almost all human decisions are based upon emotions. For us to achieve true AI, this will need to be emulated within a machine - something researchers are a lot closer to than people think.

A study was conducted recently (I forget where, but will try to find out) in which the part of the brain in charge of distinguishing between objects was mapped out and transposed into program form. A test was then devised to see how well a computer could distinguish a variety of objects in random photos. It scored 85% correct in its first test. The program was able to pick out cars, trees, humans and a plethora of other objects in photos it had never seen before. It could also tell pictures which contained animals apart from pictures which didn't. The most astounding thing was that it had only been shown one example of what an 'animal' was - in this case, a lion. At some point down the line it used its acquired knowledge to recognize a bird, even though it had never seen one before. This suggests we are able to make programs complicated enough to not only learn but make informed guesses based upon previous experience!
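A toy illustration of how one labeled example can generalize (this is not the study's actual method, which the post doesn't give; features and data are invented):

```python
# One-shot generalization via nearest neighbor: objects become crude
# feature vectors, and a new object takes the label of the closest
# known example -- so a bird can inherit "animal" from a single lion.

import math

# feature order: (has_legs, moves_on_its_own, metallic)
EXAMPLES = {
    "lion": ([1.0, 1.0, 0.0], "animal"),
    "car":  ([0.0, 1.0, 1.0], "vehicle"),
    "tree": ([0.0, 0.0, 0.0], "plant"),
}

def classify(features):
    """Return the label of the nearest known example (1-nearest-neighbor)."""
    _, (_, label) = min(EXAMPLES.items(),
                        key=lambda kv: math.dist(kv[1][0], features))
    return label

bird = [1.0, 1.0, 0.0]  # legs, self-moving, not metallic: just like the lion
print(classify(bird))   # animal
```

Real vision systems use far richer features, but the principle is the same: the "informed guess" is proximity in a feature space, not magic.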

We already have learning robots. Full autonomy is now a reality. All that needs to be done in the next decade is to link various areas into one theme, and things will start to move very fast indeed. Japan will be the first to do it. They have already started drafts for laws governing the creation of AI. Creating a blank AI template is probably a bad idea, because all you'd need to do is show it something particularly nasty, e.g. war, and it would probably think rather harshly of us all. With the correct and proper guidance, a potentially self-aware program could be nurtured to take on the same views and thoughts as the humans creating it. From this perspective, guiding an AI is a very exciting idea. The cataclysmic Hollywood version of events is strictly the realm of fiction, and will stay there, imo.

Because of the above, I think a self aware program is only 10 - 20 years away.

Give it 150 - 200 years and (if a world ending event hasn't already happened) i believe a fully symbiotic relationship between humans and machines will be commonplace.

Hooray for Asimov and his ilk!
QuoteI find it mystifying that in this age of information, some people still deny the scientific history of our existence.

rotu

What you say about human intelligence being a factor of emotions is very true. An example in the book I was reading is a robot doing grocery shopping and trying to decide between brands. What would it use for criteria? The fancy label? The price comparison? The nutrition table? Or would its decision be based purely on computations? Would, or could, it be faced with a situation in which a decision needed to be made, but both options had exactly equal consequences? If so, would it do the very human thing of just picking one, or would it be frozen in indecision and maybe get run over by a train?
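One way out of the frozen-indecision problem is simply to program the tie-break; a sketch of the grocery dilemma (weights and data invented for illustration):

```python
# Score brands on explicit criteria; when scores tie exactly, do "the
# very human thing of just picking one" instead of freezing.

import random

def score(brand: dict) -> float:
    # lower price and higher nutrition are better (arbitrary weights)
    return 2.0 * brand["nutrition"] - brand["price"]

def choose(brands: list, rng: random.Random = random.Random(0)) -> str:
    best = max(score(b) for b in brands)
    tied = [b for b in brands if score(b) == best]
    return rng.choice(tied)["name"]  # tie-break by arbitrary pick

brands = [
    {"name": "A", "price": 3.0, "nutrition": 2.0},
    {"name": "B", "price": 3.0, "nutrition": 2.0},  # identical to A
]
print(choose(brands))  # either "A" or "B": never frozen in indecision
```

Of course this dodges the deeper question: the criteria and weights still had to come from a human, which is the poster's point about emotion-driven judgment.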

I don't know, but I think real digital intelligence is a long ways off. Maybe the 200 to 250 year estimate is reasonable, but I don't see it happening in any significant degree in 20-25 years. I could well be wrong, but I think we're approaching a wall in computing power since we're very close to going as small as we can with current technology. Maybe quantum computers, but I think those are still a ways off, too.

crocofish

I'm not an AI expert, but I have had some direct experience with some of the people developing AI systems.  Back in the 1980's, I worked at a company where the Chief Technical Officer had direct connections with Marvin Minsky and the MIT AI Lab.  The CTO declared that "fifth generation computing" was going to be a core technology that the company would pour research money into.  The company spent millions of dollars on AI.  They gave people "knowledge engineer" titles.  They had flashy marketing videos and pamphlets featuring the CTO and Minsky promoting the future of computing.  The company developed an AI workstation with a special custom AI processor chip.  In the end, no significant revolution happened, millions of dollars were burned, and the "knowledge engineers" disappeared.  The CTO left, and today his biography makes no reference to the years of the embarrassing AI mess that he dragged the company through.

Back then, I was a junior engineer and worked with some senior engineers who were very much against the company's AI strategy.  Those senior engineers told me about all the flaws they saw with the AI developments, and how it was mostly marketing with little substance.  Fortunately, my group only shared office space with the AI engineers, and we were not working on AI directly.  All the negative predictions came true as the whole AI effort fell apart when it came time to deliver tangible results.  It left me with a skeptical view of any lofty claims about AI.

Now, more than 20 years later, I have yet to see any AI system that has impressed me.  Today's computers can store much larger databases and have much faster processors than 20 years ago, but what I have seen in AI systems has been variants on database queries and conditional structures that have been around almost as long as computer science has been studied.  I have seen nothing revolutionary, only scaled-up versions of old concepts.

Sure, it is fun to fantasize about thinking machines, but it is important to look at the problem realistically.  I see too many cases where AI advocates jump to wild conclusions without even solving some of the basic problems of intelligence.  It's like the basic problems are too mundane for them, and they want thinking computers now.  I have been very skeptical of the recent writings of long time AI advocate, Ray Kurzweil, about "the singularity" since it blurs reality with science fiction, jumping to big conclusions without convincing me that the intermediate steps to "the singularity" are realistic.  To me, "the singularity" smells like "The Rapture" or Heaven.  They all have evidence to support wishful thinking, but the evidence is flawed.

I do believe that some of the huge knowledge repositories being developed will eventually be useful if an artificial intelligence is developed.  Even Google would be useful to an AI entity, just as Google is useful to humans.  Show me an artificial intelligence that is similar to the natural intelligence of an insect, and I will be impressed and feel that we are headed toward a higher AI.  Making a database and a bunch of if-then statements that act somewhat like an insect doesn't impress me; that's more like a video game character.
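The "database plus if-then statements" style being criticized looks something like this (rule names invented for illustration):

```python
# A video-game-character "ant": fixed condition-action rules checked in
# priority order. No learning, no state, no understanding -- which is
# why this kind of program doesn't count as insect-level intelligence.

def ant_step(senses: dict) -> str:
    """Pick an action from hard-coded rules, highest priority first."""
    if senses["food_here"]:
        return "pick up food"
    if senses["obstacle"]:
        return "turn"
    if senses["pheromone"]:
        return "follow trail"
    return "wander"

print(ant_step({"food_here": False, "pheromone": True, "obstacle": False}))
# follow trail
```

A real insect adapts its behavior to situations its "programmer" never enumerated; this function can only ever do one of four things.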

I do hope that an impressive AI will be developed, but I feel that there will have to be some major revolutions in computer architecture and programming.  With today's architectures, we can barely make software that keeps running without crashing.
"The cloud condenses, and looks back on itself, in wonder." -- unknown

Vichy

I'd just point out that grave robbery never would have happened if religious ideas, backed by the state's enforcement, hadn't made it illegal to donate oneself to scientific study for money, or to conduct that research.  Considering some people were willing to risk imprisonment and death to get those bodies, I can easily imagine that they would have been more than willing to pay a dying man some money for the right to do research.
There is not, and really cannot be, any contradiction between ethics and progress.  What is right is right because it conforms to reality; the 'necessary evil' is a myth created by evil people in order to get us to accept their atrocities with passive resignation to their inevitability.  The notion that something could be wrong and produce right results entirely skews the reality of morality.

On another note, I think AI is awesome and I'm more than a bit transhumanist myself.  When people tell me about how god made our bodies I wonder why an omnipotent being was such a piss-poor engineer.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently." - Fritz