December 20, 2006

If your robot seems human enough, would it not damage your soul to mistreat it?

So you think this is foolishness?
Robots might one day be smart enough to demand emancipation from their human owners, raising the prospect that they'll have to be treated as citizens, according to a speculative paper released by the British government.
A robot is a machine. It's not human. But perhaps by the time the robots get this good, the evidence will show that we are just machines.

In any case, isn't it bad for your soul to mistreat something that you see as human-like? For example, if you are in a lucid dream interacting with people whom you realize do not exist, do you think you can do things to them that you would not do to a real person? And what do you think of the child who tortures her doll?

MORE: Here.

30 comments:

Balfegor said...

In any case, isn't it bad for your soul to mistreat something that you see as human-like?

Authors are in for a long stretch of hell if this is so. And artists like Goya too. Not, of course, to say that they're not. So you may be right.

But just to make the distinction I see clear, I think it's not necessarily the case that tormenting humaniform simulacra, whatever the medium, is bad for your soul (or rather, morally wrong) -- I think it's only if, in so tormenting them, you engage with them or conceive of them, psychologically, as fellow-humans that it becomes really reprehensible (leaving aside the problem of psychopaths who don't really see their actual fellow-humans as fellow-humans). It may desensitise you somewhat to human suffering, when you encounter it in the real world, but all kinds of things do that, without being intimately connected with your own agency.

Gahrie said...

1) Humans are not machines, no matter how much the Left would wish we were.

2) There is a kernel of truth in the thesis. The most pernicious evil of slavery was the effect it had on the slaveowners. (note I did not say the greatest evil, that of course was perpetrated on the slaves themselves)

3) However, it is absurd on its face to suggest that one day robots must be treated as citizens. A machine, no matter how smart, is still a machine (unless and until it develops a {for lack of a better term} soul).

Anonymous said...

It is as Balfegor says, and I can elaborate.

If your soul is damaged greatly if you mistreat a person, and takes no damage at all if you mistreat an inanimate object, then we can claim your soul is damaged partly if you mistreat something that is partway between person and inanimate object.

However, damage to your soul is also dependent upon your perception. So, rewrite: your soul's damage varies proportionally as the degree to which you perceive the mistreatment to be of a person. This holds whether you're certain it's a maybe-person, or you're maybe-certain it's a person.

This also holds if you do not actively mistreat, but rather see mistreatment nearby and are able to act to prevent it if you choose.

So if you're alone in a room with something, and you know for a fact it's a robot, and no one will ever know what you do in there, then you could mistreat it all day with no ill effect. If twenty people are in the room, and they all know it's a robot, same deal; one can mistreat it, the rest can see it, and suffer no damage.

If someone walked into the room after the act started, however, and didn't know it was a robot (they missed some information), then there will be some damage. And that's where it gets tricky.

Heh. Relativity theory, anyone?

MadisonMan said...

Will supersentient robots be allowed to marry? Or is that only for people who can reproduce?

Charlie Martin said...

Tell you what: you tell me what being "human" means and I'll tell you if robots are human.

Gahrie said...

dave:

My point is that there is more to being a human than just being alive and having a body. There is something that separates us, no matter how much the Left wishes this wasn't true, from the animals. Some call it a soul.

Now if you want to argue that animals are in effect biological machines, you'll get very little argument from me.

Anonymous said...

Gahrie, suppose you see someone pick up a hammer and say, "I'm going to bash this thing's head in," referring to a person sleeping at the other end of the room. Would you stop them? Would you expect guilt if you didn't?

Suppose you took no action for whatever reason, and the deed was done. Bloody material splatters all over the wall and floor. The corpse's hands twitch in response to a nervous system now without control. Would you feel regret? Damage?

Suppose the person with a hammer then showed you the manner with which that person lying in the corner was actually created using sophisticated techniques for manufacturing skin-like material, a calcium-based infrastructure for support, a red ferrous fluid for conducting fuel throughout it, and various specialized devices ensconced in its cavities. What would you feel then?

Anonymous said...

(I hope my previous gedanken does not offend anyone. Though, in a way, if it does, it serves to demonstrate the point.)

Anonymous said...

abraham: I would think the argument against extending human rights to sufficiently sophisticated machines is clear...

...if such a machine is functioning differently than you intend, you could simply throw it away and replace it if it were just a machine. If it's not, then you have to expend an extraordinary amount of effort (trying the machine in court, transferring it to someone else, looking after it, etc.) to get the function you need. Indeed, we wouldn't ethically be able to acquire such machines solely for certain functions, if we ruled they were "human enough".

Beth said...

I think Gahrie is a machine and "no matter how much the Left wishes this wasn't true" is part of his program:

Gahrie, whaddya want for dinner? "Spaghetti carbonara 'cause it's the best pasta ever, no matter how much the Left wishes this wasn't true."

Gahrie, wanta go to the mall? "Why, yes! The best shopping days are right before Christmas, no matter how much the Left wishes this wasn't true."

Anonymous said...

Molon_Labe_Lamp: I agree you couldn't fool a human in the long run. However, I'm quite convinced you could fool virtually anyone in the short run, particularly if some of the sophisticated tests you're referring to would violate the ethical treatment of human beings merely by being administered. Someday, an android could potentially pass as human for over a year, maybe more.

That'll be a while, though. Best androids I've seen so far would fool you for probably several minutes:

http://www.engadget.com/2006/10/14/zou-renti-gets-an-evil-android-twin-too/

(Read the comments for extra laughs.)

Anonymous said...

I'm thinking about what prisons for robots might be like. Would the death penalty be allowed for "seriously errant" robots (assuming, of course, that calling them "dangerously defective" would be a discriminatory term prohibited by law)?

Gahrie said...

Paul Brinkley:

I think you misunderstand me. I would be opposed to the mistreatment of human-like machines, not out of concern for the machine, but out of concern for the effect it would have on humans.

Balfegor said...

Re: Internet Ronin

I'm thinking about what prisons for robots might be like. Would the death penalty be allowed for "seriously errant" robots (assuming, of course, that calling them "dangerously defective" would be a discriminatory term prohibited by law)?

Every robot shall get one free bite. What are the rules on having animals destroyed?

Gahrie said...

dave:

Either you didn't read my post, or you are ignoring it on purpose. We are different from animals in that we possess something other than our physical bodies. In Western Civilization this is generally labeled as a soul. If we did not possess this something extra, we would indeed be animals and mere machines.

The reason it is political is that the Left insists on equating animals and humans, or on labeling humans as mere animals. The whole "a boy is a fish is a rat" thing is purely Leftist rhetoric. It is the Left that is attempting to grant rights to animals.

Anonymous said...

If the robot was running a computer program, this program could be backed up and restored. So any abuse to the robot could be erased, and any damage repaired (a new body constructed, if necessary). So would that abuse count, if the robot could not be permanently harmed by it?

And to further blur the distinction between man and machine, assume your robot is a functional simulation of a scan of your own brain. In other words, it thinks it's you.

Richard Dolan said...

Ann says: "In any case, isn't it bad for your soul to mistreat something that you see as human-like?"

The "robot" context is an odd context for issues about "mistreatment" of this sort to come into play. But the underlying principles are central to other more pressing controversies, such as abortion and environmental protection of special or "sacred" natural spaces. In those contexts, not only is it "bad for your soul to mistreat something that you see as human-like," but lots of political controversy can result from the strong feelings that such "mistreatment" can generate.

For example, abortion early in the term is less troubling (at least in terms of non-religious objections) because fetal development hasn't progressed to the point where we identify the fetus as "human-like." The environmental context is a bit different since "human-like" is measured in terms of values rather than physical appearance. Most real estate developments don't raise these issues. But if a proposed development site encompasses some special natural feature or is regarded as "sacred" in some way, then development of the site can amount to a sacrifice of the values that people have come to associate with or even deem to be embodied in that site. The environmentalists who chain themselves to trees to prevent logging are clearly of the view that cutting down that tree is "bad for your soul," and it's not much of a stretch to see a form of pantheism in their view of the values at stake.

In the contexts Ann raises -- lucid dreams, dolls and robots -- the judgments that come into play are more aesthetic than ethical, but that doesn't make them any the less powerful. Ann's examples about a lucid dream, or a girl "torturing" her doll, ask whether the focus should be on the intentions and behavior of the subject/actor or instead on the significance of the harm (if any) to the objects subjected to the behavior. One can also view the abortion controversy and certain environmental disputes as raising the same issues, even if (especially if) one rejects the notion that a fetus (or a redwood) is endowed by a Creator with a sanctity that we must not destroy.

Gahrie said...

Dave:

Frankly, coming from you, I'll take that as a compliment.

Balfegor said...

Re: Molon

I think the idea of an android passing as a human is possible but that isn't the same thing as passing the Turing test or proving artificially intelligent.

I know the Turing Test is a real test and all that. But it has lost all credibility for me, since I saw those old ELIZA scripts, where (allegedly) people were fooled into spending hours on IRC talking to a bot in the belief that it was just someone who had a lot of macros handy (to put up the same responses repeatedly). And those other ones, where they try to, uh, chat them up.

I'm sure there's a technical reason why these simplistic chatterbots haven't actually passed the Turing Test, despite passing successfully as human for hours on end. We don't generally expect bots to cuss, or use slang, or write like a sub-literate teenager. Or go on about how much they love sex. So it's trickery these things get by on. But still. They get by pretty well.
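
For what it's worth, that kind of trickery takes almost no machinery. Here is a minimal sketch (my own toy illustration, not any actual ELIZA script) of a pattern-matching chatterbot that just reflects your words back at you:

import random
import re

# Ordered pattern -> canned-reply rules; the first match wins.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"(.*)\?$",     ["Why do you ask?", "What do you think?"]),
    (r"(.*)",        ["Tell me more.", "I see. Go on.", "Interesting. Why?"]),
]

def respond(message: str) -> str:
    """Reflect the user's own words back inside a canned template.
    No understanding anywhere, just pattern matching."""
    text = message.lower().strip().rstrip(".")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())
    return "Go on."

# respond("I feel like nobody listens to me")
# -> e.g. "Why do you feel like nobody listens to me?"

Nothing in there understands anything; it gets by, like the bots above, on reflection and canned filler.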

Anonymous said...

Gahrie said: "I think you misunderstand me. I would be opposed to the mistreatment of human-like machines, but not out of concern for the machine, but out of concern for the effect it would have on humans."

Re-read my comment to you in that light, then. That was exactly what I was talking about, too. :-)

(In fact, I was under the impression that was what the OP was about.)

Revenant said...

We are different from animals in that we possess something other than our physical bodies. In Western Civilization this is generally labeled as a soul.

So basically your argument is that the only difference between a human and an animal is that humans supposedly have something you can't demonstrate they have, and animals supposedly lack that something, although you can't demonstrate that they lack it.

That's a dangerously flimsy foundation for a system of morality.

In any case, even if you subscribe to the "humans are somehow special" hypothesis, the idea of machine rights is no stranger than the idea of animal rights.

Tim said...

"In any case, isn't it bad for your soul to mistreat something that you see as human-like?"

Well, first one would have to agree one has a soul; then we can discuss what makes something "human-like." Some people see cats and dogs as human-like; other people eat them.

Ann's question suggests the positive/negative effect upon your soul hinges upon your perception of "human-like" qualities rather than the reality of "human-like" qualities. If so, is the soul of an aging shut-in whose dog died of neglect because she forgot to take it to the vet worse off than the soul of a street butcher in Seoul, Korea, who serves up dressed doggy without a care?

And if so, is that really fair?

Gahrie said...

the idea of machine rights is no stranger than the idea of animal rights.

Well I agree with this.

But you see, I don't agree that animals have rights, and find the idea that they do ridiculous.

Tim said...

"But you see, I don't agree that animals have rights, and find the idea that they do ridiculous."

Agreed. The notion of extending rights toward animals is generally absurd; even more so toward machines. That animals don't have rights doesn't necessarily mean we are free to abuse them, if only because we know animals feel pain and abuse is cruel to the animal (and debases the abuser); but purpose matters too, as animal testing and consumption are worthy endeavors.

The animal rights crowd is nuts. How else does one explain Bill Maher's opposing animal testing that saves people while supporting stem cell research, since, as we all know, an animal has more rights than an embryonic human?

Anonymous said...

Someday, an android could potentially pass as human for over a year, maybe more.

And someday one of them will be named Miss USA and Donald Trump will have to decide whether it keeps her crown.

Given that machines can think and react a thousand times or more faster than humans can, I suspect you would see the military make wide use of these kinds of androids. They could walk into an ambush, realize it, and fire a precisely aimed shot into the foreheads of all twenty enemy soldiers surrounding them before a single one of those enemy soldiers could pull the trigger.

And if those robots ever do develop thinking skills.... ever see the Terminator movies?

Anonymous said...

Here's an interesting corollary:

Most people involved in developing robots feel that Asimov's three laws of robotics (which he devised while writing science fiction) make a lot of sense and would likely be incorporated into the design of such a sentient robot, military applications (see previous post) notwithstanding. A rough sketch of the priority ordering follows the list:

1. A robot will not harm a human being, or through inaction allow a human being to come to harm.

2. A robot will obey all orders given it by a human being, except where such orders contradict the first law.

3. A robot will preserve itself from harm, except where doing so would conflict with the first or second law.
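
As promised above, here is a rough sketch of how that priority ordering might be checked before a robot acts. It is purely my own illustration of the precedence among the laws, not anything drawn from real robot design:

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # the action itself would injure a human
    prevents_human_harm: bool  # the action averts harm a human would otherwise suffer
    obeys_order: bool          # the action carries out an order given by a human
    harms_self: bool           # the action damages the robot

def evaluate(action: Action) -> str:
    """Rank a proposed action against the three laws, highest priority first.
    An illustration of the precedence only, not a real control scheme."""
    # First law, part one: an action that harms a human is always forbidden.
    if action.harms_human:
        return "forbidden (first law)"
    # First law, part two: an action that prevents harm to a human is required,
    # overriding any order and any risk to the robot itself.
    if action.prevents_human_harm:
        return "required (first law)"
    # Second law: obey human orders, even at cost to the robot.
    if action.obeys_order:
        return "required (second law)"
    # Third law: otherwise, avoid actions that damage the robot.
    if action.harms_self:
        return "discouraged (third law)"
    return "permitted"

# Example: self-preservation yields to saving a human.
# evaluate(Action(harms_human=False, prevents_human_harm=True,
#                 obeys_order=False, harms_self=True))  -> "required (first law)"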

These bring up all sorts of intriguing questions:

Could you tell a robot to destroy or damage itself? Does ownership play a role in whether such an order would be followed?

A robot could be mistreated by a human, but would a human then be unable to ask an android to mistreat them (e.g., "spank me")?

If the robot perceives a human about to attack another human, can it use force against the attacker? What if only deadly force will work (e.g., a bullet through the brain just in time to prevent the attacker from activating a bomb switch)?

And here is another question: is it mistreatment to interchange parts? If you buy a "Sexy Supermodel" android, a "Macho Male Bodybuilder" android, and a "Doberman Pinscher" android, and then start switching body parts around, does that constitute "mistreatment"?

LoafingOaf said...

That animals don't have rights doesn't necessarily mean we are free to abuse them, if only because we know animals feel pain and abuse is cruel to the animal (and debases the abuser); but purpose matters too, as animal testing and consumption are worthy endeavors.

The animal rights crowd is nuts.


Yeah, they're a little nuts. But they can't possibly be more insane and deranged than America's meat industry. I wonder who benefits from that "worthy endeavor"? The morbidly obese cheeseburger eaters pulling into fast food joints every day?

I agree that we shouldn't consider ourselves free to unnecessarily abuse animals. The fact is that we do abuse animals in the most monstrous fashion imaginable. When are people gonna say enough is enough to that?

When robots take over we'd better pray they are nicer to us than we are to animals.

Anonymous said...

Robots will be more humane to us than we currently are with our fellow animals. We, in fact, will by then be pseudo-robots ourselves.

Revenant said...

That animals don't have rights doesn't necessarily mean we are free to abuse them

By the same token, the idea that animals DO have rights doesn't presume that they're the same rights humans enjoy. Just because PETA's insane doesn't mean the notion of animals having rights is.

If you don't posit that animals possess some sort of right to freedom from unnecessary torture, the notion that it is wrong to torture animals for fun collapses. The majority of humans who feel in their gut that tormenting animals is wrong are reduced to ridiculous special pleading that such torture is bad because it is bad for the *humans*.

Anonymous said...

A lot depends on what you believe human consciousness stems from. If you believe in a soul, which many people do and many people do not, then that's your answer. If you believe in some sort of emergence from the way neurons/enzymes/proteins are organized in the brain, then there's a distinct possibility that we could create truly intelligent artificial minds. For a really good theory of artificial intelligence, read Douglas Hofstadter's Gödel, Escher, Bach. Also, Daniel Dennett's Consciousness Explained does a decent job of proposing a materialist view of consciousness. I haven't read Richard Dawkins' The God Delusion, but I'm sure it makes similar arguments in favor of a materialist explanation for our own intelligence.

No, an intelligent program would not necessarily be accurate or fast at math. The kinds of systems that would give rise to intelligence would also necessarily give rise to unpredictability to a degree that those machines can probably make arithmetic errors. That is, for a machines to become self aware is for machines to not understand themselves at the "neuron" or "hardware" level, and will be unable to use their chips for perfect math functions. Or something like that.