December 20, 2006

If your robot seems human enough, would it not damage your soul to mistreat it?

So you think this is foolishness?
Robots might one day be smart enough to demand emancipation from their human owners, raising the prospect that they'll have to be treated as citizens, according to a speculative paper released by the British government.
A robot is a machine. It's not human. But perhaps by the time the robots get this good, the evidence will show that we are just machines.

In any case, isn't it bad for your soul to mistreat something that you see as human-like? For example, if you are in a lucid dream interacting with people who you realize do not exist, do you think you can do things to them that you would not do to a real person? And what do you think of the child who tortures her doll?

MORE: Here.

50 comments:

Dave said...

Seems to me we are machines.

Ryan Hatch said...

Maybe we should make human-like robots so people can mistreat them on purpose, instead of humans. Many sins and offenses against humans might be averted through such catharsis.

Balfegor said...

In any case, isn't it bad for your soul to mistreat something that you see as human-like?

Authors are in for a long stretch of hell if this is so. And artists like Goya too. Not, of course, to say that they're not. So you may be right.

But just to make the distinction I see clear, I think it's not necessarily the case that tormenting humaniform simulacra, whatever the medium, is bad for your soul (or rather, morally wrong) -- I think it's only if, in so tormenting them, you engage with them or conceive of them, psychologically, as fellow-humans that it becomes really reprehensible (leaving aside the problem of psychopaths who don't really see their actual fellow-humans as fellow-humans). It may desensitise you somewhat to human suffering, when you encounter it in the real world, but all kinds of things do that, without being intimately connected with your own agency.

Gahrie said...

1) Humans are not machines, no matter how much the Left would wish we were.

2) There is a kernel of truth in the thesis. The most pernicious evil of slavery was the effect it had on the slaveowners. (note I did not say the greatest evil, that of course was perpetrated on the slaves themselves)

3) However, it is absurd on its face to suggest that one day robots must be treated as citizens. A machine, no matter how smart, is still a machine (unless and until one develops a soul, for lack of a better term).

Christy said...

Does it count as abuse if I used my dog robot to tease my beloved cats? I used to take my Omnibot 2000 to the ski house on weekends and put him (Dexter) on skis by day and make him serve drinks by night. These days he's boxed up in the basement. Is that misuse?

Yeah, I'm a robot person. I wouldn't dream of treating them badly.

Dave said...

The definition of machine.

The implication that those who think man is machine are liberal is a non sequitur.

Abraham said...

I subscribe to the Star Trek philosophy that any sentient intelligence ought to be treated ethically, whether it is animal, alien, or machine. Machine AI has not yet reached this level, and seems unlikely to any time soon, but if it does, what is the argument against extending "human" rights?

Anonymous said...

It is as Balfegor says, and I can elaborate.

If your soul is damaged greatly if you mistreat a person, and takes no damage at all if you mistreat an inanimate object, then we can claim your soul is damaged partly if you mistreat something that is partway between person and inanimate object.

However, damage to your soul is also dependent upon your perception. So, rewrite: your soul's damage varies in proportion to the degree to which you perceive the mistreatment to be of a person. This holds whether you're certain it's a maybe-person, or you're maybe-certain it's a person.

This also holds if you do not actively mistreat, but rather see mistreatment nearby and are able to act to prevent it if you choose.

So if you're alone in a room with something, and you know for a fact it's a robot, and no one will ever know what you do in there, then you could mistreat it all day with no ill effect. If twenty people are in the room, and they all know it's a robot, same deal; one can mistreat it, the rest can see it, and suffer no damage.

If someone walked into the room after the act started, however, and didn't know it was a robot (they missed some information), then there would be some damage. And that's where it gets tricky.

Heh. Relativity theory, anyone?

StrangerInTheseParts said...

Seems to me this argument leads to the same place as the question of what happens to the soul of people who play extremely realistic video games.

Does all that ultra-realistic killing syphon off aggression that would otherwise be expressed on the street?

Or does it desensitize and increase the aggression of otherwise mild citizens?

MadisonMan said...

Will supersentient robots be allowed to marry? Or is that only for people who can reproduce?

Seneca the Younger said...

Tell you what: you tell me what being "human" means and I'll tell you if robots are human.

Gahrie said...

dave:

My point is that there is more to being a human than just being alive and having a body. There is something that separates us, no matter how much the Left wishes this wasn't true, from the animals. Some call it a soul.

Now if you want to argue that animals are in effect biological machines, you'll get very little argument from me.

Dave said...

Gahrie: if animals are machines then humans are machines, as humans are animals. Why that needs to be a politicized point of contention is beyond me.

Anonymous said...

Gahrie, suppose you see someone pick up a hammer and say, "I'm going to bash this thing's head in," referring to a person sleeping at the other end of the room. Would you stop them? Would you expect guilt if you didn't?

Suppose you took no action for whatever reason, and the deed was done. Bloody material splatters all over the wall and floor. The corpse's hands twitch in response to a nervous system now without control. Would you feel regret? Damage?

Suppose the person with the hammer then showed you the manner in which that person lying in the corner was actually created, using sophisticated techniques for manufacturing skin-like material, a calcium-based infrastructure for support, a red ferrous fluid for conducting fuel throughout it, and various specialized devices ensconced in its cavities. What would you feel then?

Anonymous said...

(I hope my previous gedanken does not offend anyone. Though, in a way, if it does, it serves to demonstrate the point.)

Anonymous said...

abraham: I would think the argument against extending human rights to sufficiently sophisticated machines to be clear...

...if such a machine functions differently than you intend, you could simply throw it away and replace it, provided it were just a machine. If it's not, then you have to expend an extraordinary amount of effort (trying the machine in court, transferring it to someone else, looking after it, etc.) to get the function you need. Indeed, we wouldn't ethically be able to acquire such machines solely for certain functions if we ruled they were "human enough".

Molon_Labe_Lamp said...

I suspect if robots became humanlike in both form and interaction, we as the originals would soon develop means to tell the difference and quickly adapt our feelings.

Put simply, a robot would have to pass not one but an ever-growing and quickly changing set of Turing tests.

Whatever being "human" means, and regardless of some people's attempts here to reduce it to mechanical principles, we will always judge robots as the other.

If you doubt me, think about race.

WhatsAPundit said...

Personally, I'm in favor of allowing robosexuals to join in civil unions, but the institution of marriage should be reserved for meatbags, er, humans.

Elizabeth said...

I think Gahrie is a machine and "no matter how much the Left wishes this wasn't true" is part of his program:

Gahrie, whaddya want for dinner? "Spaghetti carbonara 'cause it's the best pasta ever, no matter how much the Left wishes this wasn't true."

Gahrie, wanta go to the mall? "Why, yes! The best shopping days are right before Christmas, no matter how much the Left wishes this wasn't true."

Anonymous said...

Molon_Labe_Lamp: I agree you couldn't fool a human in the long run. However, I am quite convinced you could fool virtually anyone in the short run, particularly if some of the sophisticated tests you're referring to would violate the ethical treatment of human beings merely by being administered. Someday, an android could potentially pass as human for over a year, maybe more.

That'll be a while, though. The best androids I've seen so far would probably fool you for several minutes:

http://www.engadget.com/2006/10/14/zou-renti-gets-an-evil-android-twin-too/

(Read the comments for extra laughs.)

Chris O'Brien said...

You think this is all just a speculative joke? Do you?

Ask the remnants of humanity on a little show called Battlestar Galactica, my friend, about what happens when robots rise up. It's no picnic.

(I mean the one with Richard Hatch and Lorne Greene. I hear the new one sucks.)

Anonymous said...

I'm thinking about what prisons for robots might be like. Would the death penalty be allowed for "seriously errant" robots (assuming, of course, that calling them "dangerously defective" would be a discriminatory term prohibited by law)?

Gahrie said...

Paul Brinkley:

I think you misunderstand me. I would be opposed to the mistreatment of human-like machines, not out of concern for the machine, but out of concern for the effect it would have on humans.

Balfegor said...

Re: Internet Ronin

I'm thinking about what prisons for robots might be like. Would the death penalty be allowed for "seriously errant" robots (assuming, of course, that calling them "dangerously defective" would be a discriminatory term prohibited by law)?

Every robot shall get one free bite. What are the rules on having animals destroyed?

Gahrie said...

dave:

Either you didn't read my post, or you are ignoring it on purpose. We are different from animals in that we possess something other than our physical bodies. In Western Civilization this is generally labeled as a soul. If we did not possess this extra something, we would indeed be animals and mere machines.

The reason it is political is that the Left insists on equating animals and humans, or on labeling humans as mere animals. The whole "a boy is a fish is a rat" thing is purely Leftist rhetoric. It is the Left that is attempting to grant rights to animals.

Molon_Labe_Lamp said...

Paul,

I think the idea of an android passing as a human is possible, but that isn't the same thing as passing the Turing test or demonstrating artificial intelligence, probably because the people it would interact with would have no reason to suspect that it's anything but human.

When an android can repeatedly fool a wary group of humans, each with their own personal variation of the Turing test, then maybe we'll have something.

But to me it's still irrelevant. There are humans today who go to war and kill simply because others look different. It's terribly naive to think that we'd ever allow androids, no matter how lifelike, into our community as equals. Since necessity is the mother of invention, we humans will then develop an unending number of methods for discovering human falseness.

This of course would lead to false positives on a few members of the species. I'm looking at you, Henry Kissinger.

Anonymous said...

If the robot was running a computer program, this program could be backed up and restored. So any abuse to the robot could be erased, and any damage repaired (a new body constructed, if necessary). So would that abuse count, if the robot could not be permanently harmed by it?

And to further blur the distinction between man and machine, assume your robot is a functional simulation of a scan of your own brain. In other words, it thinks it's you.

Dave said...

LOL Elizabeth.

Gahrie you make no sense.

Richard Dolan said...

Ann says: "In any case, isn't it bad for your soul to mistreat something that you see as human-like?"

The "robot" context is an odd context for issues about "mistreatment" of this sort to come into play. But the underlying principles are central to other more pressing controversies, such as abortion and environmental protection of special or "sacred" natural spaces. In those contexts, not only is it "bad for your soul to mistreat something that you see as human-like," but lots of political controversy can result from the strong feelings that such "mistreatment" can generate.

For example, abortion early in the term is less troubling (at least in terms of non-religious objections) because fetal development hasn't progressed to the point where we identify the fetus as "human-like." The environmental context is a bit different since "human-like" is measured in terms of values rather than physical appearance. Most real estate developments don't raise these issues. But if a proposed development site encompasses some special natural feature or is regarded as "sacred" in some way, then development of the site can amount to a sacrifice of the values that people have come to associate with or even deem to be embodied in that site. The environmentalists who chain themselves to trees to prevent logging are clearly of the view that cutting down that tree is "bad for your soul," and it's not much of a stretch to see a form of pantheism in their view of the values at stake.

In the contexts Ann raises -- lucid dreams, dolls and robots -- the judgments that come into play are more aesthetic than ethical, but that doesn't make them any the less powerful. Ann's examples about a lucid dream, or a girl "torturing" her doll, ask whether the focus should be on the intentions and behavior of the subject/actor or instead on the significance of the harm (if any) to the objects subjected to the behavior. One can also view the abortion controversy and certain environmental disputes as raising the same issues, even if (especially if) one rejects the notion that a fetus (or a redwood) is endowed by a Creator with a sanctity that we must not destroy.

Gahrie said...

Dave:

Frankly, coming from you, I'll take that as a compliment.

Robert said...

This brings up several things for me.
For starters, back when I was lucid dreaming more frequently, one of my 'dream magic' tricks was seizing a dreamperson's head and saying loudly
YOU KNOW WHO I AM, DON'T YOU?
This usually resulted in them admitting that it _was_ my dream, and they had to do what I wanted them to do.
Details, I'll leave to your imagination.

On the fictional front, Cordwainer Smith's excellent Instrumentality of Mankind SF series included the idea of "animal people", animals whose ancestors had been genetically modified to _resemble_ humans more or less, but who were legally and socially nonhuman. The psychological and philosophical implications of a permanent underclass did not go unexplored.

Balfegor said...

Re: Molon

I think the idea of an android passing as a human is possible, but that isn't the same thing as passing the Turing test or demonstrating artificial intelligence.

I know the Turing Test is a real test and all that. But it has lost all credibility for me, since I saw those old ELIZA scripts, where (allegedly) people were fooled into spending hours on IRC talking to a bot in the belief that it was just someone who had a lot of macros handy (to put up the same responses repeatedly). And those other ones, where they try to, uh, chat them up.

I'm sure there's a technical reason why these simplistic chatterbots haven't actually passed the Turing Test, despite passing successfully as human for hours on end. We don't generally expect bots to cuss, or use slang, or write like a sub-literate teenager. Or go on about how much they love sex. So it's trickery these things get by on. But still. They get by pretty well.
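
For the curious, the trick these chatterbots ride on is almost embarrassingly simple. Here is a minimal sketch of an ELIZA-style pattern-and-canned-response loop in Python; the patterns and replies are invented for illustration and are not the original ELIZA script:

    import random
    import re

    # A few illustrative pattern -> canned-response rules. Real ELIZA
    # scripts had many more rules, plus pronoun reflection ("my" -> "your").
    RULES = [
        (r"\bi need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
        (r"\bi am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"\bbecause\b", ["Is that the real reason?"]),
    ]
    FALLBACK = ["I see.", "Tell me more.", "Interesting. Go on."]

    def reply(line):
        # Return the canned response for the first matching pattern,
        # splicing in whatever the user said; otherwise stall generically.
        for pattern, responses in RULES:
            match = re.search(pattern, line, re.IGNORECASE)
            if match:
                return random.choice(responses).format(*match.groups())
        return random.choice(FALLBACK)

    # Interactive loop, e.g. "I need a vacation" -> "Why do you need a vacation?"
    while True:
        print(reply(input("> ")))

Everything beyond this keyword-spotting is supplied by the human interlocutor's imagination, which is rather the point.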

Anonymous said...

Gahrie said: "I think you misunderstand me. I would be opposed to the mistreatment of human-like machines, not out of concern for the machine, but out of concern for the effect it would have on humans."

Re-read my comment to you in that light, then. That was exactly what I was talking about, too. :-)

(In fact, I was under the impression that was what the OP was about.)

Revenant said...

We are different from animals in that we possess something other than our physical bodies. In Western Civilization this is generally labeled as a soul.

So basically your argument is that the only difference between a human and an animal is that humans supposedly have something you can't demonstrate they have, and animals supposedly lack that something, although you can't demonstrate that they lack it.

That's a dangerously flimsy foundation for a system of morality.

In any case, even if you subscribe to the "humans are somehow special" hypothesis, the idea of machine rights is no stranger than the idea of animal rights.

Tim said...

"In any case, isn't it bad for your soul to mistreat something that you see as human-like?"

Well, first one would have to agree one has a soul; then we can discuss what makes something "human-like." Some people see cats and dogs as human-like; other people eat them.

Ann's question suggests the positive/negative effect upon your soul hinges upon your perception of "human-like" qualities rather than the reality of "human-like" qualities. If so, is the soul of an aging shut-in whose dog died of neglect because she forgot to take it to the vet worse off than the soul of a street butcher in Seoul, Korea, who serves up dressed doggy without a care?

And if so, is that really fair?

Gahrie said...

the idea of machine rights is no stranger than the idea of animal rights.

Well I agree with this.

But you see, I don't agree that animals have rights, and find the idea that they do ridiculous.

Tim said...

"But you see, I don't agree that animals have rights, and find the idea that they do ridiculous."

Agreed. The notion of extending rights toward animals is generally absurd; even more so toward machines. That animals don't have rights doesn't necessarily mean we are free to abuse them, if only because we know animals feel pain and abuse is cruel to the animal (and debases the abuser); but purpose matters too, as animal testing and consumption are worthy endeavors.

The animal rights crowd is nuts. How else does one explain Bill Maher's opposition to animal testing that saves people, but support for embryonic stem cell research, since, as we all know, an animal has more rights than an embryonic human?

Marghlar said...

When an android can repeatedly fool a wary group of humans, each with their own personal variation of the Turing test, then maybe we'll have something.

Well, I think it is very likely that as AI gets more sophisticated, programs might get so good at Turing tests that if you set the bar high enough to weed them all out, you'll start getting a lot of false positives when testing humans with the same test.

At some point, it is quite possible that the failure rates would be identical.
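
A toy simulation, under assumptions invented purely for illustration (judges assign each conversation a "humanness" score, and bot and human scores overlap), makes the point concrete: any bar high enough to catch every bot also flunks some humans.

    import random

    random.seed(1)

    # Assumed, overlapping score distributions, purely for illustration.
    humans = [random.gauss(70, 10) for _ in range(10000)]
    bots = [random.gauss(55, 10) for _ in range(10000)]

    # Sweep the pass/fail bar and watch the two error rates trade off.
    for threshold in (50, 60, 70, 80):
        bots_passing = sum(s >= threshold for s in bots) / len(bots)
        humans_failing = sum(s < threshold for s in humans) / len(humans)
        print(f"bar={threshold}: {bots_passing:.1%} of bots pass, "
              f"{humans_failing:.1%} of humans flunk")

Push the two distributions together and no choice of bar separates them; the failure rates meet, as described above.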

And that is where Ann's ethical dilemma occurs. I'm of the opinion that it would depend greatly on whether the artificial organism has the capacity to suffer, either physically or mentally. Just because we can design a machine that talks to people like a person would does not mean that we have also designed it to have a self-preservation instinct or to feel something analogous to pain or fear. And in the end, it's the sentience that should count, not the sapience.

Now as for Ann's question: I've recently been enjoying Destroy All Humans 2 a great deal on my PlayStation 2. That game involves no end of vaporizing, abducting, and extracting the brains of simulated human beings, in a glorious romp of sci-fi campiness. It's great fun, and I don't think it harms me in any way (whether or not I have a soul), because I know that the representations I am harming cannot suffer. I think a (non-sentient) realistic-looking android would be no different.

Anonymous said...

Someday, an android could potentially pass as human for over a year, maybe more.

And someday one of them will be named Miss USA and Donald Trump will have to decide whether it keeps her crown.

Given that machines can think and react a thousand times or more faster than humans can, I suspect you would see the military make wide use of these kinds of androids. They could walk into an ambush, realize it, and fire a precisely aimed shot into the forehead of each of the twenty enemy soldiers surrounding them before a single one of those soldiers could pull the trigger.

And if those robots ever do develop thinking skills.... ever see the Terminator movies?

Anonymous said...

Here's an interesting corollary:

Most people involved with developing robots feel (military applications -- see previous post -- notwithstanding) that Asimov's Three Laws of Robotics (which he developed while writing science fiction) make a lot of sense and would likely be incorporated into the design of any such sentient robot:

1. A robot will not harm a human being, or through inaction allow a human being to come to harm.

2. A robot will obey all orders given it by a human being, except where such orders contradict the first law.

3. A robot will preserve itself from harm, except where doing so would conflict with the first or second law.
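
Read as a specification, the laws are a strict priority ordering over candidate actions. A hypothetical Python sketch (the predicates here are invented; evaluating them against the real world is the genuinely hard part, and the First Law's "through inaction" clause is omitted for brevity):

    # Hypothetical sketch: the Three Laws as a strict priority filter.
    # harms_human, violates_order, harms_self are assumed predicates.
    def choose_action(candidates, harms_human, violates_order, harms_self):
        # First Law dominates: discard anything that harms a human.
        options = [a for a in candidates if not harms_human(a)]
        # Second Law: among what's left, prefer obeying human orders.
        obedient = [a for a in options if not violates_order(a)]
        if obedient:
            options = obedient
        # Third Law: among those, prefer self-preserving actions.
        safe = [a for a in options if not harms_self(a)]
        options = safe or options
        # If every candidate harms a human, the robot does nothing.
        return options[0] if options else None

On this reading, the first question below answers itself: an order to self-destruct would be obeyed, since obedience (second law) outranks self-preservation (third).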

These bring up all sorts of intriguing questions:

Could you tell a robot to destroy or damage itself? Does ownership play a role in whether such an order would be followed?

A robot could be mistreated by a human, but would a human then be unable to ask an android to mistreat them (e.g., "spank me")?

If the robot perceives a human about to attack another human, can it use force against the attacker? What if only deadly force will work (e.g., a bullet through the brain just in time to prevent the attacker from activating a bomb switch)?

And here is another question: is it mistreatment to interchange parts? That is, if you buy a "Sexy Supermodel" android, a "Macho Male Bodybuilder" android, and a "Doberman Pinscher" android, and then start switching body parts around, does that constitute "mistreatment"?

LoafingOaf said...

That animals don't have rights doesn't necessarily mean we are free to abuse them, if only because we know animals feel pain and abuse is cruel to the animal (and debases the abuser); but purpose matters too, as animal testing and consumption are worthy endeavors.

The animal rights crowd is nuts.


Yeah, they're a little nuts. But they can't possibly be more insane and deranged than America's meat industry. I wonder who benefits from that "worthy endeavor"? The morbidly obese cheeseburger eaters pulling into fast food joints every day?

I agree that we shouldn't consider ourselves free to unnecessarily abuse animals. The fact is that we do abuse animals in the most monstrous fashion imaginable. When are people gonna say enough is enough to that?

When robots take over we'd better pray they are nicer to us than we are to animals.

Anonymous said...

Robots will be more humane to us than we currently are to our fellow animals. We, in fact, will by then be pseudo-robots ourselves.

Molon_Labe_Lamp said...

RSB,

How do you know that? Once they gain intelligence, divining how or what they think is pure speculation.

Asimov's Three Laws of Robotics are cute, but they amount to civil rights violations once you grant AI human status.

Interesting that you choose to call humans pseudo-robots rather than the opposite.

jvgordon said...

If humans and machines in partnership continue improving semiconductor performance at Moore's Law's pace, then machines will have the computing power of human brains by approximately 2030. That's in our lifetimes. Assuming that it is physically possible to go much farther from there, machines will be smarter than us after that. It's hard to believe they won't be fully self-aware at some point before then, and perhaps more self-aware than we are (since they could actually inspect how they think). Why shouldn't a computer that is more self-aware than a human have rights like a human?
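
Back-of-the-envelope, the 2030 figure follows from three rough and much-disputed assumptions: 2006 commodity hardware at about 10^11 operations per second, a Moravec-style brain estimate of about 10^16, and a doubling every 18 months.

    import math

    # All three constants are rough, contested estimates, used only to
    # sanity-check the ~2030 parity date in the comment above.
    ops_2006 = 1e11        # ops/sec of 2006 commodity hardware (assumed)
    ops_brain = 1e16       # Moravec-style estimate for the brain (assumed)
    doubling_years = 1.5   # one common reading of Moore's Law (assumed)

    doublings = math.log2(ops_brain / ops_2006)   # ~16.6 doublings
    parity_year = 2006 + doublings * doubling_years
    print(f"parity around {parity_year:.0f}")     # ~2031

Stretch the doubling time to two years and parity slips to roughly 2039, which shows how soft such projections are.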

Molon_Labe_Lamp said...

jvgordon,

Smarter than us by 2030? They're already smarter than us. Why do you think I use MS Excel rather than pen and paper to perform calculations? But they're only following rules built to accept specific input and return specific output.

The point is that in certain tasks computers absolutely dominate a human counterpart. So aren't they already smarter? The whole statement is bogus because the mind works in such a completely different way that comparing the two is impossible.


Intelligent answers to questions are neither intelligence nor self-awareness. Moore's Law promises no such thing; it predicts only that we will have much more computing horsepower available. It's like saying that constant advances in internal combustion engine technology will get us to Mars.

I would postulate that Moore's Law has been more of a hindrance than a help. Cheap and constant increases in binary computation have meant that research into other, more elegant computation methods, possibly yielding something closer to the mind, has been neglected. Just as cheap oil suppresses alternative fuel research.

Daryl Herbert said...

The whole statement is bogus because the mind works in such a completely different way that comparing the two is impossible.

True enough. For now. I think everyone agrees on this.

The point of the question is: what happens in the future? What happens when we start making some progress that blurs what is (now) obviously a very clear line?

Daryl Herbert said...

The line between man and machine might appear to be bright, solid, and impenetrable, just as the line between man and animal does.

But what happens if we start playing with DNA, mixing human and animal DNA together?

Not counting the DNA that humans and chimps already share, if a human/chimp hybrid were created with 10% human/90% chimp DNA, should we consider that a person? 50/50? 90/10?

What if the chimp has 0% human DNA, but has been genetically modified to make it as intelligent as a human? As a human retard? As a genius-level human? It's not a "human," but it would be a "person."

A chimp with genius-level intelligence and zero empathy for other beings? A genius-level human sociopath?

"Human" rights are lame--let's have "personal rights." That also sounds like something easier to sell to libertarians (because they belong to individual persons, not "humans" as a group).

Revenant said...

That animals don't have rights doesn't necessarily mean we are free to abuse them

By the same token, the idea that animals DO have rights doesn't presume that they're the same rights humans enjoy. Just because PETA's insane doesn't mean the notion of animals having rights is.

If you don't posit that animals possess some sort of right to freedom from unnecessary torture, the notion that it is wrong to torture animals for fun collapses. The majority of humans who feel in their gut that tormenting animals is wrong are reduced to ridiculous special pleading that such torture is bad because it is bad for the *humans*.

Molon_Labe_Lamp said...

Daryl,

You are correct in pointing out I've been too focused on the technical aspect.

Here are some other ideas:

The Ego Dilemma

Can the human ego survive while a possibly superior creature exists? We humans have gotten quite used to being at the top of the food chain. Expect us to act quickly, violently, and decisively to keep that position. In other words, I think we'll begin unsponsored robocide at the first sign of sentience.

The Socio-Economic Issue

Capitalist societies trade in ideas and expect a reasonable ROI. As an investor I see a lot of risk here. Why would I invest in creating a construct that could leave the factory floor upon completion, be given civil rights, gain ownership of itself, and pursue its own interests, with no guarantee it will have any interest in working at all? No reasonable company would pursue this, because the product's civil rights just wiped out their IP strategy, not to mention that you can't sell the damned thing, because you never owned it to start with.

So technically, yeah, I suppose in the distant future we may develop the understanding to pull this off. But beyond novelty I see little point.

I suppose I'm still dodging the original question of what rights to confer. However, I think this whole idea is nerd porn and is never likely to play out as we've envisioned it here.

Anonymous said...

A lot depends on what you believe human consciousness stems from. If you believe in a soul, which many people do and many people do not, then that's your answer. If you believe in some sort of emergence from the way neurons/enzymes/proteins are organized in the brain, then there's a distinct possibility that we could create truly intelligent artificial minds. For a really good theory of artificial intelligence, read Douglas Hofstadter's Gödel, Escher, Bach. Also, Daniel Dennett's Consciousness Explained does a decent job of proposing a materialist view of consciousness. I haven't read Richard Dawkins' The God Delusion, but I'm sure it makes similar arguments in favor of a materialist explanation for our own intelligence.

No, an intelligent program would not necessarily be accurate or fast at math. The kinds of systems that would give rise to intelligence would also necessarily give rise to unpredictability, to the degree that those machines could probably make arithmetic errors. That is, for a machine to become self-aware is for it not to understand itself at the "neuron" or "hardware" level, and so it would be unable to use its chips as perfect math functions. Or something like that.