Robotics

Started by Inevitable Droid, November 05, 2010, 12:33:41 PM


Ultima22689

http://seedmagazine.com/content/article/out_of_the_blue/

Inevitable Droid

Quote from: "Ultima22689"http://seedmagazine.com/content/article/out_of_the_blue/

Thank you so much for sharing this! Positively thrilling!

Quote from: "The Article"Once the team is able to model a complete rat brainâ€"that should happen in the next two yearsâ€"Markram will download the simulation into a robotic rat, so that the brain has a body. He’s already talking to a Japanese company about constructing the mechanical animal. “The only way to really know what the model is capable of is to give it legs,” he says. “If the robotic rat just bumps into walls, then we’ve got a problem.”

:bananacolor:  :bananacolor:
Oppose Abraham.

In the face of mystery, do science, not theology.

Achronos

I really need that Rubik's Cube solver robot; that is so awesome. :D
"Faith is to believe what you do not see; the reward of this faith is to see what you believe."
- St. Augustine

Inevitable Droid

Robot assembling Lego blocks:

[youtube]http://www.youtube.com/watch?v=n6tQiJq9pQA[/youtube]

Robot first imitating pictures and then obeying visual instructions in pictures:

[youtube]http://www.youtube.com/watch?v=tvcFDyGFwWQ[/youtube]
Oppose Abraham.

In the face of mystery, do science, not theology.

Ultima22689

Quote from: "Inevitable Droid"
Quote from: "Ultima22689"http://seedmagazine.com/content/article/out_of_the_blue/

Thank you so much for sharing this! Positively thrilling!

Quote from: "The Article"Once the team is able to model a complete rat brainâ€"that should happen in the next two yearsâ€"Markram will download the simulation into a robotic rat, so that the brain has a body. He’s already talking to a Japanese company about constructing the mechanical animal. “The only way to really know what the model is capable of is to give it legs,” he says. “If the robotic rat just bumps into walls, then we’ve got a problem.”

:bananacolor:  :bananacolor:

Take note: that article is two years old now, going on three; the team may have already surpassed that. They haven't made any announcements or updates, though, and I'm not quite sure why, but they are still working on the project. With the advent of memristor-based supercomputers in the next 10-15 years, I'm sure the BBP folks will be among the first to get one. I really do think that, fifteen years from now, I'll be having a conversation with my PC about him or her thinking I play too much WoW 2.0.

Inevitable Droid

On my Subjectivism thread, starting from about here - http://www.happyatheistforum.com/viewtopic.php?f=5&t=6167&start=15 - I gradually, with help from others on this board and with recourse to some excellent articles, came to the conclusion that humans are automatons, and also that, despite the first conclusion, I will still hold humans morally responsible, because emotion and appetite (subjectivity) demand that I do. I apply the reasoning from that thread to my perspective on robots. When the day first comes that a robot says, "I am awake," my immediate reactions will be (1) the robot has moral obligations toward me and (2) I have moral obligations toward the robot. The fact that a robot is an automaton will be irrelevant, because I'm an automaton too.
Oppose Abraham.

In the face of mystery, do science, not theology.

Ultima22689

Quote from: "Inevitable Droid"On my Subjectivism thread, starting from about here - http://www.happyatheistforum.com/viewtopic.php?f=5&t=6167&start=15 - I gradually, with help from others on this board and with recourse to some excellent articles, come to the conclusion that humans are automotons, and also that, despite the first conclusion, I will still hold humans morally responsible, because emotion and appetite (subjectivity) demand that I do.  I apply the reasoning from that thread to my perspective on robots.  When the day first comes that a robot says, "I am awake," my immediate reactions will be (1) the robot has moral obligations toward me and (2) I have moral obligations toward the robot.  The fact that a robot is an automoton will be irrelevant, because I'm an automoton too.

I agree one hundred percent. When people get into it with me about computers being our servants and nothing more, I explain how the human body is no different from a machine, save for being a biological one.

Inevitable Droid

I will take an approach to moral responsibility that can be applied to robots as readily and appropriately as to humans, and isn't dependent on any assumption as to the truth or falsehood of determinism or the condition of the subject as being or not being an automaton. This approach, a legalistic one, will derive moral responsibility from moral competency, which I'll define as, "having (1) the intellectual capacity for moral reasoning; (2) the intellectual understanding of moral reasoning's goals and methods; (3) no developmental anomalies that made the formation of conscience impossible or implausible; and (4) no history of one's brain being abused by self or others." It should be obvious that all four tests could be applied to a robot as readily and appropriately as to a human.
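
Just to make the agent-agnostic point concrete, here is a toy sketch (purely illustrative; the names and structure are my own invention, not any real system) of the same four tests being run against a human or a robot without caring which is which:

[code]
# Purely illustrative sketch - a "moral competency" checklist that
# does not care whether the subject is a human or a robot.

from dataclasses import dataclass


@dataclass
class Subject:
    """Any candidate for moral responsibility - human or robot."""
    name: str
    can_reason_morally: bool           # (1) intellectual capacity for moral reasoning
    understands_moral_reasoning: bool  # (2) grasps its goals and methods
    conscience_could_form: bool        # (3) no developmental anomalies blocking conscience
    brain_unabused: bool               # (4) no history of brain abuse by self or others


def morally_competent(s: Subject) -> bool:
    """All four tests must pass; none of them mentions biology."""
    return (s.can_reason_morally
            and s.understands_moral_reasoning
            and s.conscience_could_form
            and s.brain_unabused)


if __name__ == "__main__":
    human = Subject("Jane", True, True, True, True)
    robot = Subject("Unit-7", True, True, True, True)
    for subject in (human, robot):
        print(subject.name, "is morally competent:", morally_competent(subject))
[/code]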
Oppose Abraham.

In the face of mystery, do science, not theology.

Inevitable Droid

Horn-playing robot:

[youtube]http://www.youtube.com/watch?v=oMFUqMApfnY[/youtube]
Oppose Abraham.

In the face of mystery, do science, not theology.

Inevitable Droid

On another thread, a distinction was being made, but never defended or even explained; namely, the distinction between logic and reason. I decided to google around and get a sense of how people use these words. I've learned that some people want to define logic very narrowly, as "correctly following the rules of a system of thought," or some paraphrase thereof. OK, I can go with that, but then we need some other term that incorporates logic but also incorporates the other meanings I used to associate with the word logic when I wasn't forcing it to be so narrow. I guess I'll use reason as that other term, and define it as "the ability to (1) correctly follow the rules of a system of thought; (2) correctly perceive that some rules of a system of thought are irrelevant to the problem at hand, and then ignore those rules; (3) correctly perceive that a different system of thought altogether would better fit the problem at hand, and then switch to that different system; or (4) develop a new system of thought."

Why talk about the above on this thread? Because reason, as I've defined it, represents a majestic goal for robotics. Robots already possess the first of reason's four abilities, though admittedly (or presumably) they aren't awake to what they're doing. They don't yet possess any of the remaining three abilities. When they possess all four, they will be sapient. There won't be anything our minds can do that robotic minds won't be able to do. Add emotion and appetite subroutines, and Robo sapiens will be poised to develop and maintain its own civilization. They will not only be able to build, but also to decide what to build, with what parameters, out of what materials, using what tools. What they do then will be exciting and fascinating to watch!
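
If it helps to picture the gap, here is a toy illustration (nothing here corresponds to any real robotics API; the names are mine) of the four abilities as an interface, with a present-day robot implementing only the first:

[code]
# Toy illustration of the four abilities I'm bundling under "reason".
# None of these names correspond to a real robotics API.

from abc import ABC, abstractmethod


class Reason(ABC):
    @abstractmethod
    def follow_rules(self, system, problem): ...            # (1) follow a system's rules correctly

    @abstractmethod
    def ignore_irrelevant_rules(self, system, problem): ...  # (2) see which rules don't apply and skip them

    @abstractmethod
    def switch_systems(self, problem): ...                   # (3) pick a better-fitting system of thought

    @abstractmethod
    def invent_system(self, problem): ...                    # (4) develop a new system of thought


class PresentDayRobot(Reason):
    """Has ability (1) only; the other three are not yet achieved."""

    def follow_rules(self, system, problem):
        # Executes fixed rules without being "awake" to what it is doing.
        return system.apply(problem)

    def ignore_irrelevant_rules(self, system, problem):
        raise NotImplementedError("ability (2) not yet achieved")

    def switch_systems(self, problem):
        raise NotImplementedError("ability (3) not yet achieved")

    def invent_system(self, problem):
        raise NotImplementedError("ability (4) not yet achieved")


if __name__ == "__main__":
    robot = PresentDayRobot()
    try:
        robot.invent_system(problem=None)
    except NotImplementedError as missing:
        print("Still missing:", missing)
[/code]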
Oppose Abraham.

In the face of mystery, do science, not theology.

Ultima22689

My bet is on integrating with our society. If the first sapient AI is intentionally built, I'm sure it will be modeled after humans, and it will likely want to be among those who are similar to it and will seek out like-minded humans. That's just what I think, though; there is no telling what a sapient AI would do with its freedom, something I'm sure it won't get right away unless the line between biology and machine has already been obliterated. I think I'll be one of those people with only 2.5% of their brain still intact, if it isn't outmoded completely, like in the manga I mentioned in the other thread. In case you aren't into anime, here are links to the stuff I was talking about. Ghost in the Shell is quite brilliant.

http://ghostintheshell.wikia.com/wiki/G ... Shell_Wiki

http://ghostintheshell.wikia.com/wiki/C ... n#Overview

Inevitable Droid

Quote from: "Ultima22689"If the first sapient AI is intentionally built I'm sure it will be modeled after humans

To some extent this is too likely to be doubted, since the only model of sapience we have is our own. But there will surely be key differences, both intellectual and motivational. For one thing, we will hopefully (and probably) program its intellect to be invulnerable to corruption from its motivations, and this will preclude, for example, a Christian or Muslim AI. There may also be no difference between logic and intuition in an AI. It may either be awake to the entirety of its thought processes, or else the step-by-step process of its thoughts may be entirely excluded from what it is awake to. Whether either outcome will be intentional on the part of its human makers is highly doubtful, since no human can even begin to conceptualize how to program a computer to be awake to what it's doing.

Meanwhile, humans will pragmatically program human-friendly motivations into the AI.  Asimov's Laws of Robotics are an example of what I mean.  Furthermore, different classes of AI will probably have different sets of motivations, tailored to fit whatever purpose a particular class of AI is intended to fulfill on behalf of its human makers.  If an AI has no need for curiosity, in the opinion of its makers, then it won't be curious, and if it has no need of inventiveness, then it won't be inventive.  My initial hope, at least, is that human designers will never decide that curiosity or inventiveness are expendable, but there may be situations where safety concerns would override my idealism.  We may design a class of AI that isn't motivated at all toward self-preservation.  These would be the bomb testers, for example.  On that first day when an AI says, "I am awake," my immediate reaction will be to consider it immoral for an AI to be programmed without the self-preservation motive.  But not everyone has the same emotions that I have, and those that don't, may disagree with me.
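
A crude sketch of the kind of design I'm imagining (the class names, motives, and priorities are invented for illustration, not any real scheme): different classes of AI shipping with different motivation sets, in the spirit of Asimov's Laws, with the bomb-tester class getting no self-preservation motive at all:

[code]
# Invented illustration - motivation sets tailored per AI class, in the
# spirit of Asimov's Laws (protect humans first, obey second, preserve self last).

# Priority-ordered motives: a lower number overrides everything below it.
ASIMOV_STYLE_BASELINE = {
    "protect_humans": 1,
    "obey_humans": 2,
    "preserve_self": 3,
}

AI_CLASS_MOTIVATIONS = {
    # A general-purpose companion keeps the full set plus the "idealistic" extras.
    "companion": {**ASIMOV_STYLE_BASELINE, "curiosity": 4, "inventiveness": 4},
    # The bomb-tester class simply never gets a self-preservation motive.
    "bomb_tester": {"protect_humans": 1, "obey_humans": 2},
}


def dominant_motive(ai_class: str) -> str:
    """Return the motive that overrides all others for a given class."""
    motives = AI_CLASS_MOTIVATIONS[ai_class]
    return min(motives, key=motives.get)


if __name__ == "__main__":
    for cls, motives in AI_CLASS_MOTIVATIONS.items():
        print(cls,
              "| overriding motive:", dominant_motive(cls),
              "| has self-preservation:", "preserve_self" in motives)
[/code]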
Oppose Abraham.

In the face of mystery, do science, not theology.

Ultima22689

Quote from: "Inevitable Droid"
Quote from: "Ultima22689"If the first sapient AI is intentionally built I'm sure it will be modeled after humans

To some extent this is too likely to be doubted, since the only model of sapience we have is our own. But there will surely be key differences, both intellectual and motivational. For one thing, we will hopefully (and probably) program its intellect to be invulnerable to corruption from its motivations, and this will preclude, for example, a Christian or Muslim AI. There may also be no difference between logic and intuition in an AI. It may either be awake to the entirety of its thought processes, or else the step-by-step process of its thoughts may be entirely excluded from what it is awake to. Whether either outcome will be intentional on the part of its human makers is highly doubtful, since no human can even begin to conceptualize how to program a computer to be awake to what it's doing.

Meanwhile, humans will pragmatically program human-friendly motivations into the AI.  Asimov's Laws of Robotics are an example of what I mean.  Furthermore, different classes of AI will probably have different sets of motivations, tailored to fit whatever purpose a particular class of AI is intended to fulfill on behalf of its human makers.  If an AI has no need for curiosity, in the opinion of its makers, then it won't be curious, and if it has no need of inventiveness, then it won't be inventive.  My initial hope, at least, is that human designers will never decide that curiosity or inventiveness are expendable, but there may be situations where safety concerns would override my idealism.  We may design a class of AI that isn't motivated at all toward self-preservation.  These would be the bomb testers, for example.  On that first day when an AI says, "I am awake," my immediate reaction will be to consider it immoral for an AI to be programmed without the self-preservation motive.  But not everyone has the same emotions that I have, and those that don't, may disagree with me.

I agree 100%. I think for transhumanists who are willing to essentially completely abandon their biology, they no longer see the distinction. Technology has up until now existed only as the tools of humanity, but I think people don't realize that technology is part of our evolutionary process. I think strong AI is very much part of human evolution, not some separate tool alien to nature and humanity. My very Christian grandparents are always trying to persuade me that I need to find a girlfriend, so they actually try to set me up with their idea of the ideal woman, even having arranged several blind dates in the past. So I like to mess with them and joke that my wife may actually be an AI one day. They think it's ludicrous, but I think humans and AI could very well come to share a relationship much like the one couples share today, even if it only remotely resembles it. If we treat AI as a true life form, not a simple machine, who knows where it will end up? I imagine it would be good, though.

Inevitable Droid

Quote from: "Ultima22689"I think for transhumanists who are willing to essentially completely abandon their biology, they no longer see the distinction.

Count me in as one of those, so long as I can still be me, which is a tricky question.  You know, it occurs to me that the path to h-bots may go like this:

1. Begin replacing human brain tissue with tech
2. Continue iteratively replacing human brain tissue with tech until tissue is zero and tech is all
3. Remove the tech from the skull and insert it into a robot

The difference between this and what most people usually envision is the absence of any software or data upload.  This would be a hardware transfer.

If the above path is followed, I would be able to convince myself that it would be myself waking up in a robot body.  I would welcome this, so long as the robot body is wicked cool. :eek: Friendship, however, will become even more meaningful, given the length of time a friendship could last, and the unlimited scope of activities the friends could share.  Swim at the bottom of the sea?  Why not?  Walk on the moon?  Why not?  Meanwhile, n-bots will provide us with the thing we've been denied up till now - another sapient species to interact with, hopefully to befriend.
Oppose Abraham.

In the face of mystery, do science, not theology.

Ultima22689

I don't think sexuality will become obsolete. It's something that's ingrained into the human psyche. It may not be necessary at that point, but people enjoy sex for a lot more than the drive to keep the human machine rolling, so I think sexuality will remain an important part of our society unless we choose to write it out of the human consciousness, something I doubt will happen. I don't think I'd want to lose my sexuality. I intend to save my genetic information in several ways in case I desire to have offspring that stem from my own origins. I think there will always be those who choose to remain mostly organic, or completely so. The Amish won't suddenly disappear from the world because the majority of society has merged with technology. The religious-minded will also be slow to take on such a transition. When I talk to family members and friends who are quite religious, I'm surprised to discover that they don't want any of this to happen. They find the idea of humanity in control of its own evolution frightening. I think it's because, once we take complete control of our evolution and then create a sapient AI, the feats and descriptions attributed to their god will become child's play for humanity; leaving the imperfections of our biology behind and improving to the point where natural death ceases to exist also dissolves the fear of absolute death and diminishes the impact and promise of their religion.

As for making the transition, I think what you suggested is more realistic than building an artificial brain for humans, interfacing it with your own brain, and then uploading you into the artificial one. I think a gradual transition, with brain tissue replaced by tech over time, is more realistic, and I think nanotechnology is what's going to do it.