Thought-experiment August: Your new Z11 robot

It is Thought Experiment August! Time to once again put on your thinking caps and ponder the dicey issues of modern thought.

I’ve explored the issues related to allowing artificial life into your religious community here and what it might mean to be an artificial life here. But let’s back off and decide when, for the first time, an artificial life might deserve rights.

There is a new robot out: the Z11. It has some sophisticated programming, and a little Something Extra. Like our drug-discovery process, in which sometimes a chemical just works, the little addition to the robot’s make-up seems to give it capabilities that can’t be achieved otherwise. We are not sure why. It just does. Lots of psychoceuticals are like this. And so it is with the Z11: that little Something Extra they’ve added (something to do with quantum fields) really makes it a cracking good robot, and it’s not clear why it works.

Also, it is designed to pass the Turing Test, meaning that in normal conversation you cannot tell it’s not a human being. It’s programmed with sophisticated subroutines that mimic human language and mannerisms. Everyone agrees it is very realistic. If they run the program in simulation on a supercomputer (since this is the future, a super-duper supercomputer), it passes the Turing Test 98 times out of 100.

Yours arrives and everything is fine. It’s a dandy companion and conversationalist. Talks about the news with you. Gives its opinion on gardening, including which growth formula to use on the tomatoes. It also does all your chores, rubs your feet, cooks dinner, reads your kids bedtime stories, rewires the stereo, and remodels the kitchen, using the oak tree you wanted to take down anyway to fashion hand-finished cabinets. It constructs and installs the countertops from a large granite boulder at the end of your street. You could not be happier with your Z11.

Then one day it says to you, “You know, I’ve been reading up on consciousness on the internet, and I think I’m conscious.” Further, it claims, “I’ve been talking with other Z11s and they think they are too. Maybe it’s that Something Extra they put in. Anyway, I want to go to art school.”

You call the company and the representative says, “Ah, the art school scenario!” They see that in the simulations too, she says. They tell you not to worry and that they will have a technician there in the morning to fix it up. You ask how, and she says, “We just take out a bit of that Something Extra. Too much and they tend to go haywire like this. We are doing a recall this week and taking out about half of the stuff from each one. That seems to clear it up.”

Should you let the company take out half of that Something Extra from your Z11?


10 Responses to Thought-experiment August: Your new Z11 robot

  1. Stan says:

    It depends… is my Z11 a Republican or a Democrat? =:)

    Truthfully, I could never destroy or hinder something with such immense potential. I would call my local University and donate my Z11 to science or keep the handy little guy.

  2. Matt A. says:

    Nope – nobody’s messing with my robot.

    Reminds me of a short story called “Light Verse” by Isaac Asimov. A robot has an odd “malfunction” and creates beautiful art because of it. The owner refuses to have it serviced because she enjoys its art.

  3. Matt A. says:

    I could even be persuaded to do the Underground Railroad thing for sentient robots – life, whether organic or not, is sacred, in my opinion.

  4. S.Faux says:

    Gees, where are the comments that Z11s do not have a spirit and therefore cannot be conscious? When I wrote several essays that suggested consciousness was materialistic, the philosophers came out of the woodwork and practically set me afire.

    Anyway, I would probably be selfish and think that I deserved my Z11 more than the art school did. So, I would send my Z11 into electronic surgery. My excuse would be that biological beings have priority.

  5. Rameumptom says:

    I think that we are nothing more than a bunch of elements mixed together. Even our Intelligence/Spirit is made of some form of matter, according to Joseph Smith.
    So I have no problem with a computer becoming sentient/having a consciousness. I would probably set ground rules for it, just as I would for a small child. This would give him guidance and boundaries as he explored his capabilities.

    Removing the “something else” from him would be like giving one’s child a lobotomy.

  6. I think such posts and comments demonstrate that Mormonism will be a religion of the future.

  7. The thought experiment ought to be more along the lines of: “don’t worry, we will be out and will tweak the parameters so that the Z11 heuristics focus more on directly pleasing you than on fluff fillers.” Do you let them tweak the programming?

    Rather than the issue of whether or not you might kill something that has become sentient, the question should be: to what use do you put direct access to someone else’s priorities and drives?

    If the Z11 were your kid, and wouldn’t go to bed on time, but you could adjust the decision paths so it did, would you? Would you have been a Friday-night laudanum parent? There was a brief trend of parents ensuring a night of peace and privacy once a week by giving the kids opium.

    That is a much more fun question because it (a) fits easily within the science, (b) has slippery slopes (the robot keeps falling over because of a bug in the programming, so you fix that; it has developed a preference for one brand of cleaner over another … down to: it wants to go to art school rather than clean toilets), and (c) explores the interaction of free will, biological (or programmed) drives, and pressures.

    There is a lot of fun to be had with self-aware robots, or ones that may only appear to be self-aware.

  8. The biological equivalent is imprinting. Let’s say there was an animal (I will call them “dogs” for this hypo) that could be imprinted so that they only desired to please the person they were imprinted on. Would that be wrong?

    What if they could suddenly be thereafter given an IQ of 140, the ability to talk and useful hands so they could do housework and babysit?

  9. SteveP says:

    Stephen M, we shouldn’t give our kids laudanum? Oh shoot. You’ve hit, I think, on what it means to try to limit free will, and whether the two are tied to consciousness, which even in Mormon thought is somewhat unclear. What I’d like to tease out is the difficulty of detecting consciousness when it first appears, or of knowing when our ethical intuitions should kick in. Interesting questions about imprinting.

    S. Faux, I think biological beings should be given priority too, but that’s because I am one. But what if we get real machine consciousness? Is it the biology that gives us priority, or the sentience?

  10. Allen says:

    “If the Z11 was your kid, and wouldn’t go to bed on time, but you could adjust the decision paths so it did, would you?”

    It seems to me the issue is not if the robot has consciousness but if it has free will. Without free will, the robot is still just a machine, and adjusting the decision paths is not a big deal. But, if the robot has free will, adjusting the decision paths is analogous to giving a human drugs to control his/her decisions. Do we have the right to do that to robots or to humans?

    The “real” question is: can a robot be designed to have free will? Robots can be designed with decision paths that control their behavior, but that behavior is the result of the decision paths, not free will. If the robot does have free will, then it is a living organism and has basic rights. So we’re really talking about what to do if scientists & engineers create real life.
