"The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other. In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:
"0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
Then again, most potentially dangerous machines one sees in a factory prominently display a brightly colored "emergency stop" button. Is there something like this on this robot and, if not, shouldn't there be?
To implement Asimov's laws, you need to first design a robot that knows the difference between a human being and a piece of machinery. This robot didn't know that what it was manipulating was a human being because it wasn't programmed that way.
So I would add a fifth law to Asimov's laws:
Law number -1 (negative one): A robot must be capable of distinguishing between a living human being and all other objects.
"... Zuckerberg expects technology to evolve to a point that we can share whole thoughts and full “sensory and emotional” experiences, telepathically..."
I'd hate to have been mind-melded to this poor guy when the robot got him.
I had a chance to write some software that drove a smaller fixed industrial robot one. It was about the size of small person with one really long arm.
It was supposed to slowly wave a wand over the head of a mannequin that was talking on a cellphone, to measure the RF energy around the head.
One slip in the code and it could knock the head clean off the mannequin and send it flying across the room.
As far as Asimov's three laws, software that could be that discerning that quickly isn't even on the horizon yet. Nobody reading this blog will live to see a reliable implementation of Asimov's laws.
Industrial robots are just multi-axis CNC machines. There are safety protocols for working around them. One of them is to stay outside the machines working envelope while it is under power.
Just watched the first episode of an AMC show called "Humans" about our future robot overlords. Very creepy and well-acted, particularly by the gal who plays the main robot character.
I'll come closer to believing robots are going to rule us when Facebook starts serving up relevant ads and Google can tell the difference between people and gorillas.
Computers can do very simple things very, very well if you tell them exactly what to do. Otherwise, they're morons, much bigger morons than morons of the people type.
That' right , Freeman. Thay're just machines. They can only do what they are progranmmed to do. And they will do that over and over and over again until they are programmed to stop. They're just dumb machines.
Rusty's right, they are just stupid machines. Very smart stupid machines, they will do exactly what they are told, very precisely, time after time. But like any machine, they will only do what they are told.
I work with robots and have some questions about this story. German standards may be different but in the US:
1) the robot would be caged. If the door to the cage is opened, the robot will stop dead.
2) There will be an emergency stop button that when pushed, will stay in until manually reset. When pushed, the robot stops dead in its tracks. It does not cycle back to home position.
3) the worker would not enter the cage without disconnecting power (electric, air, hydraulic)and padlocking the switch etc in the off position. S/he will have the only key. If more than one worker is on the job, each will have their own padlock so the robot (or any machine, generally) cannot be energized until all workers have removed their locks.
If this was in the US, I would say that it is a failure of safety procedures. The robot sounds like it did what robots occasionally do. It acted unpredictably and erratically. that is why safety procedures exist.
I have seen some pretty lax safety processes in other countries. A soap mixing tank where a worker would enter for cleaning with no more protection than turning the switch off, for example. One of the reasons I have grey hair.
The article mentioned that this was not a "collaborative robot" 2 years ago, this term didn't exist. A year ago few, even in the industry, knew what it meant. Not it is getting into general news usage.
Good!
this is the most exciting development in robots in may years. A collaborative robot is one that is inherently safe and can work alongside humans, shoulder to shoulder, with no guarding.
I'd not thought about it before but I have long been familiar with Asimovs laws and these collaborative robots (sort of) approach compliance with them.
If anyone is interested in learning more, I wrote an article in Packaging Digest in May as an intro to Cobots, what they are, what they do and what they aren't.
29 comments:
But Isaac Asimov said that couldn't happen:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
"The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other. In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:
"0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
Then again, most potentially dangerous machines one sees in a factory prominently display a brightly colored "emergency stop" button. Is there something like this on this robot and, if not, shouldn't there be?
If it was a stationery robot it would have enveloped him.
I am Laslo.
Obviously, Isaac Asimov didn't design this robot.
To implement Asimov's laws, you need to first design a robot that knows the difference between a human being and a piece of machinery. This robot didn't know that what it was manipulating was a human being because it wasn't programmed that way.
So I would add a fifth law to Asimov's laws:
Law number -1 (negative one): A robot must be capable of distinguishing between a living human being and all other objects.
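A minimal sketch of what that "Law -1" interlock might look like in code (all names here are hypothetical, and `detect_human` stands in for a real perception system, which is exactly the unsolved hard part):

```python
# Hypothetical sketch: gate every motion command behind a person-detection
# check, so the arm refuses to move while a human is in the workspace.

class Arm:
    def __init__(self):
        self.position = (0, 0, 0)
        self.halted = False

    def move_to(self, target):
        self.position = target

    def halt(self):
        self.halted = True

def detect_human(sensor_frame):
    # Stand-in for real perception: any object labeled "person" counts.
    return any(obj.get("label") == "person" for obj in sensor_frame)

def safe_move(arm, target, sensor_frame):
    """Move only if no human is detected; otherwise halt and refuse."""
    if detect_human(sensor_frame):
        arm.halt()
        return False
    arm.move_to(target)
    return True
```

The gate is trivial; reliably implementing `detect_human` is the part that, as noted below, isn't on the horizon.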
"grabbed and crushed him"
Like a gorilla.
"... Zuckerberg expects technology to evolve to a point that we can share whole thoughts and full “sensory and emotional” experiences, telepathically..."
I'd hate to have been mind-melded to this poor guy when the robot got him.
I know what the kids last words were: "What does this button do?"
VW in Germany.
Wonder if the guy was Jewish?
Hmmmmm.
Industrial Park Rangers were called in to put the robot down.
Think of all the social work needed for wayward robots.
*recaptcha asked me to prove I was not a robot.
"Danger Will Robinson!"
I had a chance to write some software that drove a smaller fixed industrial robot one. It was about the size of small person with one really long arm.
It was supposed to slowly wave a wand over the head of a mannequin that was talking on a cellphone, to measure the RF energy around the head.
One slip in the code and it could knock the head clean off the mannequin and send it flying across the room.
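One common guard against that kind of slip is a software travel limit: every commanded position gets clamped to a safe box before it reaches the controller. A sketch, with made-up envelope bounds:

```python
# Hypothetical sketch: software travel limits. Commanded positions are
# clamped into a safe box so a coding slip can't drive the wand into
# (or through) the mannequin's head.

SAFE_MIN = (-0.5, -0.5, 0.10)   # meters; assumed envelope bounds
SAFE_MAX = ( 0.5,  0.5, 0.60)

def clamp_to_envelope(target):
    """Return target with each axis clamped into the safe envelope."""
    return tuple(
        min(max(axis, lo), hi)
        for axis, lo, hi in zip(target, SAFE_MIN, SAFE_MAX)
    )
```

A clamp like this belongs below the application code, so that no bug in the motion-planning logic can emit an out-of-bounds command.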
As far as Asimov's three laws, software that could be that discerning that quickly isn't even on the horizon yet. Nobody reading this blog will live to see a reliable implementation of Asimov's laws.
"German news agency DPA reported that prosecutors were considering whether to bring charges, and if so, against whom."
Or should it be "against what"?
I'm waiting for the first robot designed to murder a particular human. That's real progress.
Imagine how awful that would be to see.
Did an enemy of the deceased program the robot? Murder Mystery Plot Alert!
Murder isn't the only robot crime to watch out for. There's robbery.
Thankfully the man killed wasn't named John Connor.
Industrial robots are just multi-axis CNC machines. There are safety protocols for working around them. One of them is to stay outside the machine's working envelope while it is under power.
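That rule can be stated as a simple geometric check. A sketch, assuming (for illustration only) that the working envelope is modeled as a sphere of the arm's reach around its base:

```python
import math

# Hypothetical sketch: model the working envelope as a sphere of radius
# `reach` around the robot base, and only allow power-up when nobody
# is inside it.

def inside_envelope(point, base, reach):
    """True if `point` lies within `reach` meters of the robot `base`."""
    return math.dist(point, base) <= reach

def power_on_allowed(worker_positions, base, reach):
    # Power-up is permitted only if no worker is inside the envelope.
    return not any(inside_envelope(p, base, reach) for p in worker_positions)
```

Real envelopes are irregular swept volumes, not spheres, but the principle is the same: the check lives outside the motion program, in the safety system.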
Blogger sinz52 said...
Law number -1 (negative one): A robot must be capable of distinguishing between a living human being and all other objects.
This thing was a mechanical arm, not a robot in the clickbaity Asimov sense, which is why they didn't show a picture of it.
you know you are in trouble when the robot tricks you into thinking it was human error that caused the "malfunction"
Skynet...
Just watched the first episode of an AMC show called "Humans" about our future robot overlords. Very creepy and well-acted, particularly by the gal who plays the main robot character.
Ferget it Fernandinande. They're on a roll.
I'll come closer to believing robots are going to rule us when Facebook starts serving up relevant ads and Google can tell the difference between people and gorillas.
Computers can do very simple things very, very well if you tell them exactly what to do. Otherwise, they're morons, much bigger morons than morons of the people type.
That's right, Freeman. They're just machines. They can only do what they are programmed to do. And they will do that over and over and over again until they are programmed to stop. They're just dumb machines.
Rusty's right, they are just stupid machines. Very smart stupid machines, they will do exactly what they are told, very precisely, time after time. But like any machine, they will only do what they are told.
I work with robots and have some questions about this story. German standards may be different but in the US:
1) The robot would be caged. If the door to the cage is opened, the robot will stop dead.
2) There will be an emergency stop button that, when pushed, will stay in until manually reset. When pushed, the robot stops dead in its tracks. It does not cycle back to home position.
3) The worker would not enter the cage without disconnecting power (electric, air, hydraulic) and padlocking the switch, etc., in the off position. S/he will have the only key. If more than one worker is on the job, each will have their own padlock so the robot (or any machine, generally) cannot be energized until all workers have removed their locks.
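The group lockout rule in point 3 reduces to a simple invariant, sketched here (names hypothetical): the machine can be energized only when the set of hung locks is empty, and each worker can remove only their own lock.

```python
# Hypothetical sketch of group lockout/tagout: each worker hangs a
# personal lock before entering, and the machine cannot be energized
# until every lock has been removed by its owner.

class LockoutHasp:
    def __init__(self):
        self.locks = set()

    def apply_lock(self, worker):
        self.locks.add(worker)

    def remove_lock(self, worker):
        self.locks.discard(worker)  # only removes that worker's own lock

    def can_energize(self):
        return not self.locks        # safe only when no locks remain
```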
If this were in the US, I would say it was a failure of safety procedures. The robot sounds like it did what robots occasionally do: it acted unpredictably and erratically. That is why safety procedures exist.
I have seen some pretty lax safety processes in other countries. A soap mixing tank where a worker would enter for cleaning with no more protection than turning the switch off, for example. One of the reasons I have grey hair.
John Henry
The article mentioned that this was not a "collaborative robot." Two years ago, this term didn't exist. A year ago few, even in the industry, knew what it meant. Now it is getting into general news usage.
Good!
This is the most exciting development in robots in many years. A collaborative robot is one that is inherently safe and can work alongside humans, shoulder to shoulder, with no guarding.
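One mechanism behind that inherent safety is power-and-force limiting: the controller monitors joint forces and stops the arm the instant it meets unexpected resistance. A sketch (the threshold here is an assumption for illustration, not a real specification value):

```python
# Hypothetical sketch of power-and-force limiting: compare measured
# force against the expected load and stop on unexpected contact.

FORCE_LIMIT_N = 50.0  # assumed contact threshold, not a real spec value

def check_contact(measured_force, expected_force, limit=FORCE_LIMIT_N):
    """Return 'stop' if unexplained force exceeds the limit, else 'run'."""
    if abs(measured_force - expected_force) > limit:
        return "stop"
    return "run"
```

In a real cobot this loop runs in certified safety hardware at kilohertz rates, not in application code.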
I'd not thought about it before, but I have long been familiar with Asimov's laws, and these collaborative robots (sort of) approach compliance with them.
If anyone is interested in learning more, I wrote an article in Packaging Digest in May as an intro to cobots: what they are, what they do, and what they aren't.
http://www.packagingdigest.com/robotics/what-are-collaborative-robots-and-why-should-you-care1505
My email is in the article if anyone wants to talk about them more. Or just swap robot stories.
John Henry