So I'm curious to know what fellow atheists think about the future and the colorful theories of Ray Kurzweil and other futurists. Oddly enough, I'm aware that my nature is to be very optimistic about things. For example, I recently moved to Jacksonville, FL and had no choice in the matter; I've hated every moment of it, but I'm optimistic because my benefactors hate it here as much as I do and are seriously considering moving to Southern California. But I digress. To simplify, Ray's theory is that technology advanced at a fairly steady rate until some 40 to 60 years ago, when the rate of advance began increasing rapidly. The 21st century is supposedly going to be the century where things get really crazy, with the development of self-aware A.I. that will push us to blend our biology with our own technology, leading to the "Singularity," where everyone will be so smart that we begin advancing in ways we could never imagine. Here, this will explain it better, I'm sure:
http://www.kurzweilai.net/articles/art0134.html?printable=1
The funny thing is that people have equated his theories to religion, which I thought was peculiar. Sure, they're a tad grandiose, but he's using empirical data to explain his theories.
The singularity is really a giant "what if." As someone who participates in robotics, I get far too many questions that link the singularity with Sarah Connor, and as a (rusty) programmer I can state fairly certainly that Asimov-style robotics is the realm of fantasy. Real robots deal with their intended application, nothing more, and as long as programmers avoid trying to replicate morality, such a scenario is unlikely. Sorry, got into tangent mode.
So do you think it's possible that, as we develop more advanced algorithms which lead to yet more advanced algorithms, we might be able to create self-aware A.I.? What separates this from other science fiction that has come true or is beginning to emerge? A.I. isn't a matter of hardware but of advanced software, right? Correct me if I'm wrong; I'm merely curious about it. What reason could there be for our software to remain as it is?
Cadie will now manage your email for you (http://mail.google.com/mail/help/autopilot/index.html)
(http://mail.google.com/mail/help/autopilot/images/screenshots.png)
(http://mail.google.com/mail/help/autopilot/images/screenshot_login_sm.png)
QuoteTwo Gmail accounts can happily converse with each other for up to three messages each. Beyond that, our experiments have shown a significant decline in the quality ranking of Autopilot's responses and further messages may commit you to dinner parties or baby namings in which you have no interest.
I think that if intelligence can evolve from non-intelligence, then we can eventually develop A.I. I just don't have an opinion on how soon that will be possible. I think A.I. would lead to very complicated moral questions as it became more and more "human."
Quote from: "Whitney"I think A.I. would lead to very complicated moral questions as it became more and more "human."
This statement makes me think of Chobits and Bicentennial Man.
I see, that makes sense. I guess that is the best way to approach it, huh? It's just an opinion; I don't have any facts to back this up, but I think it will happen sooner or later. I'm going to go ahead and attribute this to my optimistic nature.
As for HerreticalRant's post, what does that have to do with Ray Kurzweil's theories? Last I checked, he isn't saying that if you don't send him X amount of money then it won't happen, or anything of that nature. I don't see the "scam" here coming from the whole singularity theory. If there is one, would you mind indulging me?
EDIT: If I misunderstood your post I apologize.
I think we'll destroy ourselves long before the singularity can be achieved. Not a very "happy atheist" thought, so accept my apologies, but I am cynical regarding mankind's willingness to destroy everything for the sake of ideology.
It only takes a few people who want to bring about the end of the world to actually accomplish it. And the means to do so are growing.
Quote from: "Thom Phelps"I think we'll destroy ourselves long before the singularity can be achieved. Not a very "happy atheist" thought, so accept my apologies, but I am cynical regarding mankind's willingness to destroy everything for the sake of ideology.
It only takes a few people who want to bring about the end of the world to actually accomplish it. And the means to do so are growing.
Sadly, this is very plausible; the world is in a very volatile state right now. However, we aren't necessarily guaranteed to destroy each other, nor is it particularly likely. It's very possible, but not outright likely. Once again, that is an opinion and can't really be proven (I can't, at least). Assuming all the crazies out there don't kill us all, I think the world will only get better. It seems that as the years pass, the population slowly grows a tad more intelligent as older generations die off (not to sound callous). This seems to be marked by the increase in atheism and how theism is dropping off.
I think Kurzweil is exploiting the properties of the exponential curve. You could make pretty much any time in history look like a "Singularity" was about to happen with the right scale. If anyone doesn't understand, let me know and I'll try to plot some tomorrow.
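Actually, here's a quick sketch in the meantime (toy numbers of my own, not Kurzweil's data): take the last century of an exponential "progress" curve, normalise it against the present, and the same hockey stick shows up no matter which year you call the present.

import numpy as np

def exponential(t, rate=0.05):
    """Toy 'progress' curve: steady exponential growth at a fixed rate."""
    return np.exp(rate * t)

for end_year in (1900, 1950, 2000, 2050):
    t = np.linspace(end_year - 100, end_year, 5)   # the century leading up to each date
    y = exponential(t) / exponential(end_year)     # normalise so "today" = 1.0
    print(end_year, np.round(y, 3))

# Every row prints the same numbers: on its own scale, every century ends with the
# curve shooting straight up, so the "knee of the curve" looks like it's always now.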
It's nearly impossible to speculate about how AI will choose to think. A common assumption is that AI will think like us, but that would seem to defeat the point.
I just really hope we don't end up being one of these:
(http://blog.laptopmag.com/wpress/wp-content/uploads/2009/06/battery_morpheus.jpg)
If you want a feasible (at least, in my opinion) idea of how AI would think, read Peter Watts' Rifters trilogy. Amazing sci-fi and a really good look at extremely well-researched theoretical AI construction.
When AI comes around, it's most certainly not going to be Wintermute or Data. Actually, a few years back, the military thought they were developing AI by teaching automated tanks to fire on enemy tanks. It was working rather well; the computers were "learning." It turned out they weren't teaching it to attack enemy tanks at all, just to recognize ambient light conditions. One small shed that is lit and looks like a tank, and... boom.
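That tank story is the classic spurious-correlation trap. Here's a toy version of it (made-up numbers and a bare-bones perceptron, not whatever the military actually used): if every "tank" photo happens to be brighter than every "no tank" photo, the learner keys on brightness and happily flags a sunlit shed.

import numpy as np

rng = np.random.default_rng(0)

# Feature 0 = average brightness, feature 1 = "tank-shaped blob" score.
# All the tank photos were taken on a sunny day, so brightness alone separates the classes.
tanks    = np.column_stack([rng.uniform(0.7, 1.0, 50), rng.uniform(0.0, 1.0, 50)])
no_tanks = np.column_stack([rng.uniform(0.0, 0.3, 50), rng.uniform(0.0, 1.0, 50)])
X = np.vstack([tanks, no_tanks])
y = np.array([1] * 50 + [0] * 50)

# Train a single perceptron on this data.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(100):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (yi - pred) * xi
        b += lr * (yi - pred)

# It separates the training photos fine, but a brightly lit shed with no tank in it
# still comes out as "tank", because brightness is all it ever learned.
bright_shed = np.array([0.9, 0.0])
print("weights:", np.round(w, 2), "| shed classified as tank?", bool(bright_shed @ w + b > 0))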
I do love me some Kurzweil, though. The Age of Spiritual Machines is a mindbender.
Quote from: "Thom Phelps"I think we'll destroy ourselves long before the singularity can be achieved. Not a very "happy atheist" thought, so accept my apologies, but I am cynical regarding mankind's willingness to destroy everything for the sake of ideology.
It only takes a few people who want to bring about the end of the world to actually accomplish it. And the means to do so are growing.
We've had nuclear capability since the forties. We're still here, aren't we?
I don't just think AI is possible, I think it is inevitable, given the pace at which we are understanding the nuances of consciousness and mimicking how the brain works. Besides, how will we distinguish a real thinking machine from one which can pass a Turing test 100% of the time?
I'm an optimistic futurist
Oh, and this is awesome:
http://www.youtube.com/watch?v=EzjkBwZtxp4
Quote from: "karadan"Oh, and this is awesome:
http://www.youtube.com/watch?v=EzjkBwZtxp4
If you move the bow, can it compensate based on the sounds coming from the instrument?
Quote from: "jbeukema"Quote from: "karadan"Oh, and this is awesome:
If you move the bow, can it compensate based on the sounds coming from the instrument?
What do you mean?
Ah yes, the singularity. $13 billion later and we get this
http://www.youtube.com/watch?v=ASoCJTYgYB0
Truly the robots will soon replace us.
Quote from: "andrewclunn"Ah yes, the singularity. $13 billion later and we get this
http://www.youtube.com/watch?v=ASoCJTYgYB0
Truly the robots will soon replace us.
It's not going to just happen instantly in a few years. Did you read the Ray Kurzweil paper I linked? If you had, you probably wouldn't have posted that comment.
Quote from: "karadan"Quote from: "jbeukema"Quote from: "karadan"Oh, and this is awesome:
If you move the bow, can it compensate based on the sounds coming from the instrument?
What do you mean?
If I walk up and move its hands or something, can it compensate? Is it truly playing or simply going through a set of programmed movements?
Quote from: "jbeukema"If I walk up and move its hands or something, can it compensate? Is it truly playing or simply going through a set of programmed movements?
Oh, no, it isn't actually playing. It is just going through the programmed movements, probably mimicked from another player.
To actually play? Now that would be astonishing.
Quote from: "karadan"
If I walk up and move its hands or something, can it compensate? Is it truly playing or simply going through a set of programmed movements?[/quote]
Oh, no, it isn't actually playing. It is just going through the programmed movements probably mimicked from another player.
To actually play? Now
that would be astonishing.[/quote]
Give them enough time and maybe we will be astonished.
Definitely. There's no reason to suggest they won't be able to achieve that and a lot more in the next 50 years.
Real learning machines are just around the corner.
One of the benefits of being a secular person is you don't have to speak in dogmatic absolutes. We don't need to say "the singularity is coming" or "the singularity isn't coming." We have a sentence structure available to us that makes the religious person completely uncomfortable: "I think the singularity might come."
We can even give odds. "I think there's a 60% chance of the singularity coming this century."
It's so liberating to be able to express predictions in terms of probability.
Quote from: "Loffler"One of the benefits of being a secular person is you don't have to speak in dogmatic absolutes. We don't need to say "the singularity is coming" or "the singularity isn't coming." We have a sentence structure available to us that makes the religious person completely uncomfortable: "I think the singularity might come."
We can even give odds. "I think there's a 60% chance of the singularity coming this century."
It's so liberating to be able to express predictions in terms of probability.
"The Singularity is near' is a quote from Ray Kurzweil, one of his books is titled so.
The problem with the Singularity is that it's so singular. :P LOL. i is so much funz!
(cough, cough, ahem.)
Androids will be happening a lot sooner than you think, probably sometime between 2015 and 2030. Stanford researchers have developed a chip architecture that replicates the environment of brain matter in its entirety, and they have even been able to program the sine-wave structure of brain waves into it. The most current prototype replicates about 65,000 neurons, but by 2010 manufacturing will be able to produce a chip that replicates a 400,000-neuron brain, roughly the size of a mouse's.
Want to know what happens when they manufacture human scale neurogrids?
Here's a link to more information, if you want to:
Neurogrid Chip (http://stanford.edu/group/brainsinsilicon/documents/Overview.pdf)
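And for anyone wondering what "replicating a neuron" even means in software terms, here's a very crude sketch of a single leaky integrate-and-fire neuron (a far simpler model than Neurogrid's analog circuits; the constants are just illustrative textbook-style values):

import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest, charges up
    with input current, and emits a spike (then resets) when it crosses threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_thresh:
            spike_times.append(round(step * dt, 4))
            v = v_reset
    return spike_times

# 500 ms of a constant 2 nA input gives a regular spike train; Neurogrid does the
# analog-hardware equivalent of this for tens of thousands of (much fancier) neurons at once.
print(simulate_lif(np.full(5000, 2e-9)))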
Quote from: "Renegnicat"Androids will be happening a lot sooner than you think. Probably sometime between 2015-2030. Stanford Researchers have developed an architecture chip that replicates in entirety the environment of brain matter, and have even been able to program the sine wave structure of brain waves into it. The most current prototype replicates about 65,000 neurons. But by 2010, manufacturing will be able to produce a chip that replicates a 400,000 neuron brain, about the size of a mouse.
Want to know what happens when they manufacture human scale neurogrids?
Lol, I'm joking of course, but this is good news.
Quote from: "LARA"The problem with the Singularity is that it's so singular. :P LOL. i is so much funz!
(cough, cough, ahem.)
Right there with you, Lara. I'm a transhumanist hopeful, but I'm just not a fan of Kurzweil. I essentially developed my own independent view of transhumanism prior to ever having heard of RK.
So, since we're going to have awesome cyborgs in the next decade and we're understanding the human brain better every day, who thinks we're going to be able to use cyborg bodies, Ghost in the Shell style? Can you imagine that? I've always thought Ghost in the Shell was the most accurate picture of the future, IMHO.
Funny thing. Ray already made another statement about this sort of thing a few days ago. XD
http://dvice.com/archives/2009/09/kurzweil-a-worl.php
Quote from: "Ultima22689"Quote from: "Loffler"One of the benefits of being a secular person is you don't have to speak in dogmatic absolutes. We don't need to say "the singularity is coming" or "the singularity isn't coming." We have a sentence structure available to us that makes the religious person completely uncomfortable: "I think the singularity might come."
We can even give odds. "I think there's a 60% chance of the singularity coming this century."
It's so liberating to be able to express predictions in terms of probability.
"The Singularity is near' is a quote from Ray Kurzweil, one of his books is titled so.
Did this post get cut off? It doesn't seem to be a complete response.
Technology is not advancing as they say it is. A faster computer is not a big advance.
The singularity is a scam. When computers were being developed, people knew what they wanted, and it was based on "gates." Like the AND gate used to make an adding machine. But in making AI, nobody has the slightest idea of what to make. Even if it could be made, no one knows what they want. The fastest computer with the largest AI is still entirely unable to deal with any situation which the programmers have not already thought of. The best AI can still be outwitted by a flatworm.
"The singularity" is like having a symposium to discuss the ethics of what you are going to do when you all win that lottery money, but nobody has bought a ticket, nobody knows where to buy one, and no one has started a lottery anyway.
Quote from: "Spork"Technology is not advancing as they say it is. A faster computer is not a big advance.
The singularity is a scam. When computers were being developed, people knew what they wanted, and it was based on "gates." Like the AND gate used to make an adding machine. But in making AI, nobody has the slightest idea of what to make. Even if it could be made, no one knows what they want. The fastest computer with the largest AI is still entirely unable to deal with any situation which the programmers have not already thought of. The best AI can still be outwitted by a flatworm.
"The singularity" is like having a symposium to discuss the ethics of what you are going to do when you all win that lottery money, but nobody has bought a ticket, nobody knows where to buy one, and no one has started a lottery anyway.
I'm just curious, did you read any of the previous posts in this thread before posting? If you had, you would have noticed that there are some technologies that actually do outwit a flatworm. -_-
Also, links plox? I think more people would have known about supercomputers being outwitted by flatworms, which I think is BS personally; I've seen robots built in universities and even in high school that can do far more than a flatworm can.
Spork is probably a one-hit wonder, but I'm still going to address his assertions, for which he never provided any evidence.
Quote from: "Spork"Technology is not advancing as they say it is. A faster computer is not a big advance.
If that's the case, go ahead and do everything you need to with a Windows 98 computer that has a 250 MB hard drive and 2 MB of RAM. Oh, and while you're at it, surf the internet on 28k dial-up. Also, who is "they," and where did you come across this information?
Quote from: "Spork"The singularity is a scam. When computers were being developed, people knew what they wanted, and it was based on "gates." Like the AND gate used to make an adding machine.
Oh rly? What exactly did they want when they made the first computer? Do you even know what the first computer was? And what are these "gates" you are talking about ... do you mean features?
Quote from: "Spork"But, in making AI, nobody has the slightest idea of what to make. Even if it could be make, no one knows what they want.
Making shit up now, are we? Please back up your assertions with proof, or at the very least say where you got your information. Oh, and I see AI that play chess and poker ... there are AI robots on the battlefield (no, it's not science fiction anymore ... they are equipped with a gun, which is sweet), and there are UAVs with AI in them that figure out how to do a mission and return safely.
Quote from: "Spork"The fastest computer with the largest AI still is entirely unable to deal with any situation which the programers have not already thought of. The best AI can still be outwitted by a flatworm.
Again, I'd like to see where you get your information. I know AI can learn your strategy in chess and in poker and react accordingly. Go ahead and play poker or chess against these bots if you think you are smarter than them. I'd say poker ... at least you can get lucky as hell and win.
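To be fair, real chess and poker engines are far more involved, but the basic "learn your strategy and react" idea fits in a few lines. Here's a throwaway rock-paper-scissors opponent model (my own toy example, nothing to do with any actual chess or poker bot):

from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def respond(opponent_history):
    """Play the move that beats the opponent's most frequent move so far."""
    if not opponent_history:
        return "rock"                                   # arbitrary opening move
    favourite, _ = Counter(opponent_history).most_common(1)[0]
    return BEATS[favourite]

# A player who keeps throwing rock gets countered more and more reliably:
history = []
for their_move in ["rock", "rock", "scissors", "rock", "rock"]:
    print("they played", their_move, "-> bot answers", respond(history))
    history.append(their_move)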
Quote from: "Spork""The singularity" is like having a symposium to discuss the ethics of what you are going to do when you all win that lottery money, but nobody has bought a ticket, nobody knows where to buy one, and noone has started a lottery anyway.
Again, another assertion ... please back up these assertions with some evidence ... don't be surprised if we don't take your word on it ... we have people show up here all the time who don't know what they are talking about.
Quote from: "Will"It's nearly impossible to speculate about how AI will choose to think. A common assumption is that AI will think like us, but that would seem to defeat the point.
I just really hope we don't end up being one of these:
I will hopefully end up as one of these against the filthy silicon bastards:
(http://www.lasersportsusa.com/images/john_connor_future.jpg)
I really hate advanced robotics....