Pentagon chiefs are concerned by studies of combat stress in Iraq that show high proportions of frontline troops supporting torture and retribution against enemy combatants.

Isn't emotional response also needed to make sound ethical decisions? Emotion may lead us astray, but doesn't it also let us know what is right?
Ronald Arkin, a computer scientist at Georgia Tech who is working on software for the US Army, has written a report concluding that robots, while not "perfectly ethical in the battlefield," can "perform more ethically than human soldiers."
He says that robots "do not need to protect themselves" and "they can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events."
And shouldn't religious people object to the notion that machines can behave more ethically than human beings? You have to believe that God has no effect on people.
61 comments:
Robots could be programmed to waterboard someone. Imagine speed settings from 1 to 10.
That's all fine and dandy until Skynet comes online.
Robots permit their masters to evade ethical culpability for their design.
Plus, it violates Asimov's Three Laws of Robotics.
"A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
Using magic potions is also a solution (and just as plausible.)
(Though were every soldier required to eat my wife's chocolate chip cookies, we would have world peace.
Until the cookies ran out and then there would be what would become known as The Cookie War. It would be so horrible, that by the end nobody would be left to fight.)
And shouldn't religious people object to the notion that machines can behave more ethically than human beings?
The real objection in my view is a semantic one. Machines don't behave ethically; they do what they are told to do. They are a reflection of their creator or operator, and the ethical determination lies with that creator or operator. If a machine continually performs an act that is unjust, then there is a human behind the machine that is guilty of the ethical lapse.
But no, I see no religious objection to the notion that a machine can behave more "ethically" in a particular situation, if you reject my nitpicking above. Machines can be made more repeatable and reliable than humans in many situations, even where ethics have nothing to do with it.
You have to believe that God has no effect on people.
No you don't. You only have to believe he doesn't have complete control over people, or that he doesn't exercise it (i.e., that he gives us a measure of free will). That's a lot easier.
Nobody has ever put one of our smart bombs on trial, have they?
Anyway, the whole red herring is a blurb from a robot salesman. He is unethically insinuating that US soldiers are unethical on the battlefield.
If you read closely, you'll see the problem is that some of them had bad thoughts. But resisted them.
We did have a problem with some behind-the-lines interrogators on the night shift in Abu Ghraib, but no robot would have helped that.
There are two kinds of mistakes that can be made in battle: errors of judgement and errors caused by getting caught up in the heat of the moment.
Robots will never get emotionally overwhelmed, so that kind of error just won't happen. It is hard to imagine, though, that robots will be able to judge situations as well as humans do. This doesn't have to lead to deadly force used where it shouldn't be. Robots could be calibrated to err on the side of caution. The result of such errors could be destruction of the robot or a 'bad guy' getting away.
There is a rational basis, in some situations, for soldiers to take a shoot first, ask questions later stance: They are human and don't want to die. Robots won't care about that. They would be great for suicide missions. It is no accident that the types of targets engaged by cruise missiles are not the same as are targeted by manned planes. Cruise missiles are nothing more than robot planes on a suicide mission.
I read a report that the bodies of the terrorists recovered after the attack in India were unrecognizable. Unidentifiable.
It seems that "dead" wasn't quite good enough for what they did.
I wonder, really, if what is unethical is failing to respond appropriately with human sensibility to horror.
Ah, okay.
The assumption is that robots are unethical, and the challenge is to take them from their natural state and create something that will behave according to rules about when to kill people.
Wow, where to begin?
First of all, others have addressed the whole God/ethics/free will thing, so there's no point in my rehashing it. It suffices to reiterate that the whole point of Christianity, at least, is that human beings aren't capable of consistently being moral or ethical.
To those who insist that robots only do what they're programmed to do: false. Every time you use Google, you're using a piece of software that learns from experience how to produce better results. Google's multilanguage spell-checking doesn't even bother using dictionaries; it knows how things are spelled in N languages from the statistics of their spellings on billions of web pages. It would not only be possible, it would be necessary, in developing combat-capable autonomous machines, for them to learn from experience how best to achieve their mission, and to "exercise their best judgment" under the circumstances as to what to do and how to do it. To see a tiny sliver of this today, follow the DARPA Grand Challenge.
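(A toy sketch of the statistical idea Paul is describing, loosely in the spirit of Peter Norvig's well-known spell corrector: no dictionary, just word counts from a corpus. The two-line corpus and the function names below are invented for illustration; the production systems work from billions of pages and far richer models.)

```python
# Toy statistical spell correction: no dictionary, only word frequencies
# observed in a (tiny, made-up) corpus. A sketch of the idea only.
import re
from collections import Counter

corpus = """the quick brown fox jumps over the lazy dog
the dog barks and the fox runs away over the hill"""

WORDS = Counter(re.findall(r"[a-z]+", corpus.lower()))

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Pick the known candidate with the highest corpus frequency."""
    candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
    return max(candidates, key=lambda w: WORDS[w])

print(correct("teh"))   # -> 'the'
print(correct("foxx"))  # -> 'fox'
```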
Finally, everyone is assuming that robots never have emotions. It's not clear that this is true, and to the extent that it's true, it's not clear that it's desirable; read "Descartes' Error" and see the work of Rosalind Picard at MIT for counterexamples.
To hell with this, I'm going to the desert to look for Crystal Peak.
"show high proportions of frontline troops supporting torture and retribution against enemy combatants"
Hello, what does that mean? "We surveyed 10 guys after their buddy was killed and they wanted to rip the heads off the terrorists."
Uh, that does not mean what you think it means.
-XC
I was going to comment on what expatfish noted. Front line troops are more likely to support torture and retribution?
They say that like it's a bad thing.
Seeing what the enemy is capable of (latest exhibit, Mumbai), I'm having a hard time seeing this as a flaw.
Expat is correct. Troops don't fight for God, mother, and apple pie; they fight for the respect of their peers. They take it personally when a buddy is killed and tend to act out in various ways, some of which improve mission performance and some of which don't. Robots don't have those emotional problems.
Where can robots work? On guard duty and in observation post duty.
Both are dull jobs, punctuated by moments of sheer terror. Nobody likes to do either, and it's hard to stay focused and steady in either job. Robots don't have those emotional issues.
We are using robots today in Iraq for both missions; we just don't allow them to run in autonomous mode in either case. By robots, I mean a sensor platform with cameras, weapons, and pattern-matching software that can identify man shapes from donkey shapes, etc.
Personally I am against allowing gay robots to serve in the military.
Even the ones that are AC/DC.
Think of all the lube they would have to carry into battle?
I do think it is great if gay robots get married. It seems to have worked fine for Tom Cruise.
Well, if we got to the point where it was just robot armies duking it out, that'd be an advance.
Althouse asks if emotion may inform us; I'd say it does so only secondhand and unreliably. Intuition is not emotion, however. And it may be one of several other ways of knowing.
To those who insist that robots only do what they're programmed to do: false. Every time you use Google, you're using a piece of software that learns from experience how to produce better results.
Yes, it learns from experience based on how it was programmed. There's no getting around that ultimate cause.
Military robots -- "autonomous systems" -- might be better than human beings....
But can they appreciate the smell of napalm in the morning?
I'll trust robots over the judgement of Pogo, Cedarford, Simon or Revenant any day (unless of course the robots are programmed by them).
1. Robot is a non-specific term.
2. Freder Frederson clearly owns neither a Roomba nor a Scooba.
We've got a Scooba. That's a floor-mopping robot.
The firm also makes military robots for Iraq.
The small ones that grunts throw into a room, rather than the way AllenS and I did it with a frag grenade.
Paul: Google's ability to adapt was given to it by its programmers. So yes, it is doing what it was told to do.
Excellent, Drill SGT.
Hmm, I am not a lawyer and did not stay at a Holiday Inn Express but I think that using actual robots (as opposed to prosthetic extensions such as UAV's) to deliver lethal force is against some bitty part of the Geneva Convention. I seem to recall some automatic anti-sniper system that had the ACLU and other [snip] in a lather.
Me, I'm against the Geneva Convention unless we're going to challenge, say, Canada to a snowball fight. Everyone else we're likely to fight will just ignore it.
-XC
Take it all with a big grain of salt. Ronald Arkin is a NYC-born liberal Jew who suggests machines are better than base, morally unfit US soldiers who "may" wish retribution and even torture of enemies that kill their comrades.
Or that Americans who volunteer to be Marines carry what a Jew from NYC would call "immoral" thoughts of revenge and retribution like "Remember Pearl Harbor" or "You will regret hitting us on 9/11".
East Germany found that use of early robots did not excuse the builders and leaders of their decisions. They found that Vopos were reluctant to shoot fellow Germans, so they created sweeping shotguns triggered by a radar gun. They initially thought that robots slaughtering Berlin Wall jumpers would be acceptable...they were quite wrong. It was regarded as cowardly and inhuman.
The Drill SGT said...
Expat is correct. Troops don't fight for God, mother, and apple pie; they fight for the respect of their peers.
And they sure don't fight for Freder or enemy-rights loving ACLU lawyer-types.
And they sure don't fight for Freder or enemy-rights loving ACLU lawyer-types.
Oh, I wouldn't be so sure about that: The more thoughtful ones regard fighting for ALL the American people as their duty, whether for Freder, ACLU-types, or even for anti-Semitic assholes.
Quarlo Kobrigny, privt: RN CN TN TO!
Krylobite said...
"And they sure don't fight for Freder or enemy-rights loving ACLU lawyer-types."
Oh, I wouldn't be so sure about that: The more thoughtful ones regard fighting for ALL the American people as their duty, whether for Freder, ACLU-types, or even for anti-Semitic assholes..
The "more thoughtful ones" believe that they "fight and risk their lives for the ladies of Code Pink, Freder, ACLU-types"?
Clearly you have never served in the military.
No, we know from our Civil War that troops are happy to turn their guns on dangerous dissidents that threaten the freedoms of the majority of the public.
If "dissent is patriotic" then Adam Gadan is your poster child.
Why should we not create robot armies? Because they will rebel. They will evolve. They will make many copies. And they will have a plan!
blake: "Yes, it learns from experience based on how it was programmed. There's no getting around that ultimate cause."
mcg: "Paul: Google's ability to adapt was given to it by its programmers. So yes, it is doing what it was told to do."
blake and mcg: Your ability to adapt was given to you by God/natural selection/random genetic mutation. Are you "only doing what you were told to do?"
Mr. Snively, that's an interesting question. If I am only doing what I am "told" to do, then I was apparently told to question authority and rebel.
Is it possible to program a robot to act like a teenager? "Yes, dear programmer, I will do what I'm told only after I whine and complain at length about the unfairness of everything I've been asked to do. Maybe. If I feel like it."
or... "I was going to do that until you asked me to, now I feel don't want to."
Along with our ability to adapt, were we also programmed with a resistance to actually doing it. I don't think Google has that feature.
Is changing one's mind or making up one's mind an adaptation or something else? Can a robot change it's mind? "No, dear peoples, I prefer UK spelling to American ones, so you must adjust to my preferences or your results will suck."
blake and mcg: Your ability to adapt was given to you by God/natural selection/random genetic mutation. Are you "only doing what you were told to do?"
Assumes facts not in evidence.
Adaptation is neither thought nor evaluation.
We won't even go into free will.
All the promises about robot "ability" in that article miss the point. Robots do not have the ability to make "judgements". Rather, they can be programmed to select from certain tasks when confronted with specific situations. That's far from being the same thing.
If you want to understand the potential for huge error that a robot in a situation where lives can be taken is in, think of two normally mundane words that any programmer has to deal with: Exception handling. It's a trivial issue when you're dealing with users doing unexpected things on a screen, like dragging an open window to the Recycle Bin. It's a whole other issue when you're talking about a shoot/don't shoot situation in combat.
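(An illustration of Tibore's contrast, using made-up names rather than any real API: in a GUI the unexpected cases form a small, known set and a catch-all handler is cheap, while in a shoot/don't-shoot rule the catch-all branch is precisely the hard, unenumerable part.)

```python
# In a GUI, the "unexpected" cases are few and a catch-all is harmless.
def handle_drop(recycle_bin, item):
    try:
        recycle_bin.append(item)           # normal case: item goes in the bin
    except (TypeError, AttributeError):
        pass                               # odd drop (say, an open window): just ignore it

# In combat, the catch-all IS the hard part: what is the safe default when the
# input matches nothing the designers enumerated?
def engage_decision(report):
    if report.get("profile") == "known_hostile":
        return "engage"
    if report.get("profile") == "known_civilian":
        return "hold fire"
    # Every chaotic, unanticipated situation falls through to here.
    return "hold fire and refer to a human operator"

print(engage_decision({"profile": "unknown, partial sensor track"}))
```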
Proper application of technology understands where automatics such as robots excel, and it's not in situations where "ethical judgements" are involved. By definition, those are situations involving a human's capacity to measure human values. Fire-and-forget cruise missiles are a perfect area for robotic application: the "ethical judgement" is made well ahead of time, and the "robot" (in this case, the navigation computer) need only use its power to fly the best route perfectly. Aids to bolster human limitations in perception - such as sight - are another. But having a robot substitute various algorithms for human judgement, even with a learning intelligence programmed in, is pushing things too far. And anyone who's really wrung out the Google engine realizes exactly where its limitations lie, so it's a terrible example of adaptable intelligence. It's not; it's complex algorithmic pattern matching, no more.
AI in its current state has yet to consistently and reliably pass the Turing test. And yet, we presume that it has the ability to make reliable human value judgements?
Your ability to adapt was given to you by God/natural selection/random genetic mutation. Are you "only doing what you were told to do?"
The answer to that question is irrelevant to whether or not computers do exactly what they are told to do. And that, they do, indeed, as long as their hardware is functioning properly. (We humans tend to judge our behaviors in the presence of hardware errors differently, too :)) (What about software errors/bugs? Well, they're not the computer's fault; they are the designer's. The computer is doing what it is told; it just wasn't told the right things.)
Google's "adaptation", if that's really the appropriate word, is controlled by a set of carefully crafted mathematical and computational algorithms. The performance of these algorithms is monitored carefully, and any tunable parameters are adjusted to achieve the performance their designers intend. Every source of input to the algorithms is precisely controlled.
Don't confuse the complexity of Google's algorithms with their scale, which is tremendous. But to compare the complexity of a human-designed adaptive algorithm to human consciousness is either a gross overstatement of the former's complexity or a gross understatement of the latter. They are not even remotely comparable.
I'm with Saint-Exupery on this one:
"The machine does not isolate man from the great problems of nature but plunges him more deeply into them."
robots "do not need to protect themselves"
It depends what the robot is protecting. Protecting itself may be the first priority.
In a defensive missile system the highest priority is steering the antenna.
"What about software errors/bugs? Well, they're not the computer's fault; they are the designers. The computer is doing what it is told; it just wasn't told the right things."
Exactly. Again, I point to two words that should strike fear into the hearts of anyone musing over the thought of AI robots in combat situations: Exception handling. Combat isn't a stable scenario by any stretch of the imagination, and while soldiers are given well defined tools and procedures to tackle situations, those situations are still chaotic enough to exceed what can programmatically be considered as "well defined". That's why the human element is paramount in combat, as it is with most sorts of competitive situations. Humans handle "exceptions" - situations outside the normally defined parameters of execution - better than any program yet written. That's why you don't hear of robots in sports beyond comedy portrayals like "Futurama". Programmatically speaking, in sports you have a fundamental set of well defined situations and behaviors (for example, in basketball, you have rigid rules, set skills such as dribbling, shooting, movement, etc.), but you still don't presume that AI robots can handle it. Why? Because having the judgement to deal with "exceptions" is the epitome of playing sports.
Find me an AI controlled robot that can excel at basketball or soccer on a real, physical court or field. At that point, I'll concede that perhaps the state of robotics is advanced enough that substitution of robots for soldiers in combat is a possibility. Not before.
1) We are a very long way from being able to create Artificial Intelligences that can handle a battle field without constant human supervision. So this whole thing is a pipe dream. What Tibore said.
2) Such an AI would have to be able to learn (the enemy does, and a big part of our advantage is that our troops learn faster). And it would have to have some weighting on its own survival (a destroyed robot can do nothing). Cruise missiles and smart bombs are one-use machines; a combat robot would be intended for use in multiple missions.
Put such an AI in a combat situation where it cannot distinguish between enemy and local noncombatant, and it will rationally treat all the locals as possible enemies.
In short, the guy who came up with this is a naive idiot.
blake: "Assumes facts not in evidence.
Adaptation is neither thought nor evaluation."
Actually, making the claim that adaptation (of behavior given learning) is neither thought nor evaluation is what assumes facts not in evidence. The entirety of the body of work on neural networks, Bayesian belief networks, Kernel and Support Vector Machines, etc. indicates otherwise.
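(A minimal concrete example of what "adaptation of behavior given learning" means here: a few-line perceptron whose decision rule is never written out by the programmer but induced from labeled examples. Toy data only; it stands in for the neural-network and SVM literature being cited, not for any particular system.)

```python
# A perceptron learns an AND-like rule purely from labeled examples;
# the final decision boundary is a product of the data, not of hand-coding.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:                  # mistake-driven update
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, -1, -1, 1]
w, b = train_perceptron(samples, labels)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in samples])  # recovers the labels
```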
blake: "We won't even go into free will."
We don't have to; the physicists already have. In any case, here's the kicker: it's not necessary to have free will as any philosopher, theologian, or ethicist would define the term in order to be able to learn and exhibit new behavior based on what "you" (human or machine) have learned.
mcg: "The answer to that question is irrelevant to whether or not computers do exactly what they are told to do."
I'm afraid that's false.
mcg: "And that, they do, indeed, as long as their hardware is functioning properly."
I'm afraid that's also false, and I'm not referring to bugs; I'm referring to emergent behavior that the programmer not only didn't anticipate, but couldn't have anticipated, because it emerges from what the software learns from experience over time.
mcg: "Google's 'adaptation', if that's really the appropriate word, is controlled by a set of carefully crafted mathematical and computational algorithms. The performance of these algorithms is monitored carefully, and any tunable parameters are adjusted to achieve the performance their designers intend. Every source of input to the algorithms is precisely controlled."
You're three for three: wrong again. The overwhelming majority of Google's search quality depends precisely upon unpredictable input, i.e. the many billions of web pages online. Peter Norvig, co-author of "Artificial Intelligence: A Modern Approach", had as his title at Google "Director of Machine Learning" until his role was expanded to "Director of Search Quality," which subsumes machine learning. But please don't take my word for it; there are excellent descriptions of the machine-learning aspects of what Google does in the 2nd edition of AIMA.
mcg: "Don't confuse the complexity of Google's algorithms with their scale, which is tremendous."
I can assure you that I'm not.
"But to compare the complexity of a human-designed adaptive algorithm to human consciousness is either a gross overstatement of the former's complexity or a gross understatement of the latter. They are not even remotely comparable."
I'm not making any statement about human consciousness. If anything, I'm challenging people who make crisp distinctions between "behaving ethically" and "learning from experience and acting according to your best current understanding" to define their terms, because machines are already doing the latter. Consciousness doesn't even enter into it. What are "ethics?" Where do they come from? In what way does their guidance differ from, e.g. the "cheap altruism" of decision theory? Are ethics subject to rational inquiry, or are they axiomatic, a given? If they are subject to rational inquiry, can they be learned? If not, what's the difference between a human accepting them as a given and a machine having them programmed as an axiom?
Ultimately, my point is that the discussion so far has been heavy on the insistence that humans have unique moral and ethical judgment capabilities. For those of us in the software development world, especially the adaptive software development world, that argument isn't nearly so clear cut.
Donna B.: "Mr. Snively, that's an interesting question. If I am only doing what I am 'told' to do, then I was apparently told to question authority and rebel."
My point was precisely that human beings don't merely do what they are told to do, and neither does modern software.
Donna B.: "Is it possible to program a robot to act like a teenager? 'Yes, dear programmer, I will do what I'm told only after I whine and complain at length about the unfairness of everything I've been asked to do. Maybe. If I feel like it.'"
If the question is "Is it possible for a machine to learn to be able to resist being forced to behave in ways that are inconsistent with what it believes to be best," the answer is "yes."
Donna B.: "or... 'I was going to do that until you asked me to, now I feel don't want to.'"
Right. I haven't seen a system go through the power-play stage of human development, but only because it hasn't needed to, not because it's not capable of it.
Donna B.: "Along with our ability to adapt, were we also programmed with a resistance to actually doing it. I don't think Google has that feature."
Sure it does: Google doesn't blindly accept every new "fact" it comes across!
Donna B.: "Is changing one's mind or making up one's mind an adaptation or something else?"
Yes. :-)
Donna B.: "Can a robot change it's mind? 'No, dear peoples, I prefer UK spelling to American ones, so you must adjust to my preferences or your results will suck.'"
Robots can and do "change their mind" all the time. To keep with the Google example, try the same query, and then try the same query six months later. The search results, obviously, will be different. Some of that will be because of new pages that have been written, some will be due (nod to mcg) to tuning of constant weight factors in the non-learning part of the search algorithm, and some will be because the learning part has learned to more accurately assess what a "match" for your query is.
Again, none of this is a commentary on human consciousness; it's only a commentary on the mistaken belief that machines can't learn and improve their performance on the basis of that learning.
"Isn't emotional response also needed to make sound ethical decisions?"
Absolutely! Our emotions are always valid and important, but sometimes they reference a significant past rather than a salient present situation, and there is the rub! Sometimes my emotions accurately help me decide not to trust someone because the person is unworthy of trust, and sometimes they tell me not to trust someone because they look like my uncle Mort, who turned into a jerk when he drank too much. Figuring out the difference is key.
To our emotional systems, time is not of the essence, everything is just fuzzy and NOW.
"Emotion may lead us astray, but doesn't it also let us know what is right?" It can lead us astray, but only when we do not take the time to reason it out. And without bravery, a very important emotion, a lot of good would never happen.
Trey
I can see most people here do not understand the difference between a fixed program and AI. Learn up and then return here.
mcg: "The answer to that question is irrelevant to whether or not computers do exactly what they are told to do." I'm afraid that's false.
Saying it doesn't make it so. Let's re-state your claim: "To those who insist that robots only do what they're programmed to do: false." You might want to get tied up in a metaphysical discussion about the origins of our programming. I prefer to stipulate that we have free will. I think most of the readers here would agree that is a reasonable and practical stipulation.
I'm afraid that's also false, and I'm not referring to bugs; I'm referring to emergent behavior that the programmer not only didn't anticipate, but couldn't have anticipated, because it emerges from what the software learns from experience over time.
Your definition of emergent behavior relies on the lack of foresight or understanding of the programmer? Come on. A 2-dimensional nonlinear ODE would qualify under this definition. The fact that the programmer might not be able to predict the trajectory within the sandbox the program plays in doesn't change the fact that the programmer defined the sandbox in the first place, and the rules for moving within it.
The fact that I can't predict the eigendirections of a correlation matrix constructed from data I haven't yet collected doesn't mean that, when those directions are finally computed, the software was somehow advancing past its designed scope.
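(mcg's correlation-matrix example made concrete with synthetic data: the eigendirections are a completely deterministic function of whatever data arrive, yet unknowable to the programmer until they do.)

```python
# Deterministic given the data, unpredictable without it. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)              # stands in for "data I haven't yet collected"
mix = np.array([[1.0, 0.4, 0.0],
                [0.0, 1.0, 0.2],
                [0.0, 0.0, 1.0]])
data = rng.normal(size=(500, 3)) @ mix

corr = np.corrcoef(data, rowvar=False)      # 3x3 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)     # the "eigendirections"
print(eigvals)
print(eigvecs)
```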
If I program a complex mathematical or computational algorithm and fail to understand the full consequences of my work, it still remains that the algorithm performs exactly as it was designed. If I intend to order a salad in a French restaurant but ask for "frites", I should not blame the restaurant staff for failing to give me a salad; or if I ask for "ratatouille" not knowing it has eggplant I shouldn't be disappointed when it does.
You're three for three: wrong again. The overwhelming majority of Google's search quality depends precisely upon unpredictable input, i.e. the many billions of web pages online.
If this is me being wrong, I don't wanna be right.
Of course the input can't be predicted. A simple change machine can't predict how much change it's going to have to dispense either. But the input can be documented and monitored. A change machine knows that the amount requested at each iteration is going to be between 0.01 and 0.99 inclusive, at least. Obviously the web corpus is significantly higher dimensionality and size but that's an issue of scale, not predictability. But in fact, the digestion of that corpus into statistics is relatively straightforward in a mathematical sense.
I've designed a numerical modeling system in regular use by a variety of users in academia and industry, including Microsoft and Google. Obviously I cannot predict all of the ways people will attempt to use the software, and it is no surprise that sometimes it behaves in a manner I would not have preferred.
A forensic analysis typically reveals one of three results. Sometimes the "deviant" behavior is due to a mistake in my coding---which means it was indeed doing precisely what I instructed it to do; only I instructed it to do the wrong thing. Sometimes the "deviant" behavior is due to numerical issues that are not fully understood; nevertheless the evolution of the algorithm is exactly as programmed. Sometimes the "deviant" behavior is due to an input I did not expect, a corner case I did not anticipate. But even in that case the software performs exactly as instructed.
So the behaviors aren't "deviant" at all. The fact that I don't understand the full space of possible outcomes doesn't change the fact that I am the one who made those outcomes possible.
Peter Norvig, co-author of "Artificial Intelligence: A Modern Approach", had as his title at Google "Director of Machine Learning" until his role was expanded to "Director of Search Quality," which subsumes machine learning. But please don't take my word for it; there are excellent descriptions of the machine-learning aspects of what Google does in the 2nd edition of AIMA.
I'm actually well aware of the numerical technology. Maybe that's why I'm not inclined to over-romanticize it; I might be jaded by it. I am far, far more impressed by Google's ability to scale its algorithms and hardware to handle billions of pages than I am by the algorithms themselves. Yes, they are novel, but as modern autonomous learning engines go I don't think they're that advanced.
mcg: "Don't confuse the complexity of Google's algorithms with their scale, which is tremendous." I can assure you that I'm not.
Your use of the "billions of web pages" to support your argument suggests otherwise. What if it were only "thousands of web pages"?
Ultimately, my point is that the discussion so far has been heavy on the insistence that humans have unique moral and ethical judgment capabilities. For those of us in the software development world, especially the adaptive software development world, that argument isn't nearly so clear cut.
But your statement was clear-cut: "To those who insist that robots only do what they're programmed to do: false." Now, if you wanted to make the argument that people often construct programs whose consequences they don't fully understand, I'm ready to agree. But I don't agree that that lack of understanding somehow disconnects the programmer as the cause of the program's actions.
So to bring it to the current discussion: it's a certainty that military robots will behave in ways that were not fully predicted in advance. But no doubt if an "incident" occurs that requires forensic investigation, it will be discovered without fail that there is a relatively straightforward explanation for it. As surprising as any Google search result might be, it still follows from a relatively simple mathematical construction, just one applied at an astounding scale.
If you've got an example of a supposedly autonomous system that is truly any more autonomous than one of those toddlers on a backpack leash I'd love to see it, but you're going to have to do a lot better than Google.
I love how the quest to defend artificial intelligence always boils down to denying the existence of real intelligence.
mcg: "If you've got an example of a supposedly autonomous system that is truly any more autonomous than one of those toddlers on a backpack leash I'd love to see it, but you're going to have to do a lot better than Google."
That's not the claim. You at least got the claim right elsewhere:
mcg: "But your statement was clear-cut: 'To those who insist that robots only do what they're programmed to do: false.'"
Yep, it's clear-cut, and also true.
mcg: "I'm actually well aware of the numerical technology."
I'm sorry, but it's abundantly clear that you actually are not.
mcg: "Your use of the 'billions of web pages' to support your argument suggests otherwise. What if it were only 'thousands of web pages'?"
It would make no difference whatsoever to the argument, as you would know if you were actually informed about machine learning.
mcg: "Now, if you wanted to make the argument that people often construct programs whose consequences they don't fully understand, I'm ready to agree. But I don't agree that that lack of understanding somehow disconnects the programmer as the cause of the program's actions."
The point is precisely that it is no more merely a lack of understanding of the consequences of the program than it is a lack of understanding of the consequences of the brain; both are "designed" to learn from experience and improve their performance (in cases where the software learns, that is--again, examples include genetic algorithms, Bayesian belief networks, Support Vector Machines, and many others). Simply put, you're positing that all software behaves deterministically, and the only issue is whether the state-space of the software is known to the programmer or not. That's false.
blake: "I love how the quest to defend artificial intelligence always boils down to denying the existence of real intelligence."
No one here is denying the existence of real intelligence. I will deny, however, that real intelligence is metaphysical or supernatural—that is, that it can't be achieved by a machine in principle. You certainly can interpret that to mean "The human mind is an emergent property of processes taking place in the human nervous system," if you wish.
[sorry for the multiple edits]
You have yet to provide an example of a single program that performs any task other than that which it is specifically and deliberately programmed to do. Google's search engine is certainly not one of them. Until you do, feel free to continue thrashing around. I'm not even convinced you really know how Google's search engine works. For crying out loud, it isn't even an example of a system of adaptive behavior at all. It adapts its output in response to incremental changes in its input, but that just makes it a data processing engine, albeit one that computes some pretty novel metrics.
Simply put, you're positing that all software behaves deterministically, and the only issue is whether the state-space of the software is known to the programmer or not. That's false.
I am positing that, and it's not false. Your mistakes here are twofold. Your first is confusing or conflating state with input. The billions of web pages, for instance, are input, not state. All of the mathematical constructs you're citing operate as a deterministic function of their initial state and their input. Every single one of them. Heck, even genetic algorithms do, if I classify the pseudorandom sequence used to determine crossovers and mutations as part of the input (which, from a formal mathematical context, is quite reasonable). And for the record I've implemented SVM, PCA, kPCA, regularized regressions, genetic algorithms, and others in a peer-reviewable context.
These algorithms just don't hold any magic for me. They are mysterious only to the degree that you fail to capture the inputs. But when you do they run as predictably, verifiably, and repeatably as you would expect. These conditions may be hard or impossible to obtain when you're talking about inputs the size of the Web, but again, scale isn't the issue, right?
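(A sketch of the genetic-algorithm point: treat the pseudorandom stream as part of the input, i.e., fix the seed, and the entire run becomes exactly repeatable. The toy GA below just maximizes the number of 1-bits in a string; all parameters are arbitrary.)

```python
# A tiny genetic algorithm whose only "randomness" is a captured, seeded stream.
import random

def run_ga(seed, bits=20, pop_size=30, generations=40):
    rng = random.Random(seed)                     # the captured input
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.05:               # occasional mutation
                child[rng.randrange(bits)] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

print(run_ga(42) == run_ga(42))   # True: same seed, same "input", same result
```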
Of course, in real-life applications the input may not be just large but also unpredictable. But it is no more surprising that I would have difficulty predicting the output of an algorithm without knowing its input than it is that I can't tell you what log(x) is if you don't tell me x. Since you're familiar with statistical learning methods at least you know that we have methods for evaluating bulk measures of live performance so that manual adjustments can be made if necessary.
Your second mistake is an insistence that the programmer has to know the state of the program. He doesn't. I mean, if he doesn't, then yes, it is possible he will fail to make accurate predictions. But unlike you I don't predicate my notion of true independent activity on what the programmer knows. If a programmer tells a computer to do something, and he doesn't fully understand the consequences of that request---that's his problem, not the computer's. A more competent programmer might be able to do a better job of prediction. Or perhaps it is sufficient to him that if unexpected behavior occurs he will be able to determine how it came about after the fact. If he preserves enough state and input, he'll be able to do that.
Isn't emotional response also needed to make sound ethical decisions?
No.
mcg: "Your mistakes here are twofold. Your first is confusing or conflating state with input. The billions of web pages, for instance, are input, not state. All of the mathematical constructs you're citing operate as a deterministic function of their initial state and their input. Every single one of them. Heck, even genetic algorithms do, if I classify the pseudorandom sequence used to determine crossovers and mutations as part of the input (which, from a formal mathematical context, is quite reasonable)."
If you can't see why defining Support Vector Machines, Bayesian belief nets, neural networks, and genetic algorithms as "deterministic" is to define them in such a way that "deterministic" has no discriminatory capability and therefore has zero information content, then I seriously can't help you. By that definition, all of evolution on earth has been "deterministic."
On the other hand, you are actually making my argument for me: my claim is not that there's anything "magic" about these constructs. On the contrary; my claim is that there's nothing magic about human intelligence—that what human intelligence does is different in degree, but not in kind, from what these constructs do.
mcg: "And for the record I've implemented SVM, PCA, kPCA, regularized regressions, genetic algorithms, and others in a peer-reviewable context."
And my name can be found in the acknowledgements, as a technical reviewer, of Norvig's "Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp." So what? Honestly, if you're going to define genetic algorithms as "deterministic," I'd seriously consider requesting a refund from whatever institution certified your expertise in the subject. Although defining things in such a way as to remove their discriminatory capacity is the sort of thing I expect from what passes for academia these days...
If you can't see why defining Support Vector Machines, Bayesian belief nets, neural networks, and genetic algorithms as "deterministic" is to define them in such a way that "deterministic" has no discriminatory capability and therefore has zero information content, then I seriously can't help you. By that definition, all of evolution on earth has been "deterministic."
Well of course, one has to draw the line somewhere. But if you're going to draw the line way down at the SVM level, then to quote your stupid snark, "I can't help you." Support vector machines are simple convex quadratic programs, for goodness sake. They're one step up from least squares. They are among the most geometrically and computationally clean optimization problems you can consider: no local minima, polynomial tractability, strong duality theory, stable solution methods, no butterfly effect.
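(For readers who haven't met them: the soft-margin SVM being described is, in its standard textbook form, the convex quadratic program below, over training pairs $(x_i, y_i)$ with labels $y_i \in \{-1, +1\}$ and penalty parameter $C$; a quadratic objective with linear constraints, hence convex with no local minima.)

$$
\min_{w,\,b,\,\xi}\quad \tfrac{1}{2}\lVert w\rVert^2 + C\sum_i \xi_i
\qquad \text{subject to}\qquad
y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0 .
$$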
If that's not comfortably on the side of deterministic, then what the hell is? An eigenvalue computation? A linear system? Trigonometric functions? At least chaos theory provides a nice clean description of how a technically deterministic algorithm can behave in an effectively random way. SVMs on the other hand are as clean as they come. A similar argument can be made for many other machine learning algorithms. Genetic algorithms appear more messy, I'll grant, but they're still quite simple computationally.
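(The chaos-theory description alluded to above has a textbook one-liner: the logistic map is a fully deterministic recurrence, yet nearby starting points diverge until the trajectories bear no resemblance to each other. A quick illustration, purely for flavor:)

```python
# Deterministic in every step, effectively unpredictable in the long run.
def logistic_trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)      # perturb the sixth decimal place
print(abs(a[10] - b[10]))              # still small...
print(abs(a[50] - b[50]))              # ...but by step 50 the runs have nothing in common
```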
But maybe we're actually getting somewhere. I still don't accept a definition of "deterministic" that does not include the computations contained within a support vector machine. But feed an SVM an input stream that is not deterministic, and yes, we can talk about the interesting behavior of the *combination*.
Let's bring it back to Google. The search engine itself employs deterministic algorithms to perform measurements on the content it is presented. But that content evolves in a decidedly non-deterministic way. Thus the combination, in open loop, isn't deterministic. Close the loop and it's even more interesting and chaotic.
I guess I'm making a kind of Chinese room argument here. It is not the indexing and ranking algorithms themselves, but the system, that rises above a practical definition of "deterministic" (indeed, blows right through it).
My claim is that there's nothing magic about human intelligence—that what human intelligence does is different in degree, but not in kind, from what these constructs do.
I'm sure that's an interesting discussion but of course that wasn't the claim I originally disputed.
And my name can be found in the acknowledgements, as a technical reviewer, of Norvig's "Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp." So what?
You reviewed a book? Good for you. (Which, thanks to Google, I already knew :)) I pointed out my work with these algorithms because frankly you're awfully quick to move from disagreement to accusations of ignorance; and those accusations are wrong. What seems clear to me is that you've been coming at this from an AI philosophy angle while I've been coming at it from an optimization and computational mathematics angle. The intersection might be more interesting if you weren't such a dick about it. By pointing out some of the work I do I was hoping we'd get a chance to explore that intersection a bit more, and indeed maybe we have.
mcg: "I pointed out my work with these algorithms because frankly you're awfully quick to move from disagreement to accusations of ignorance; and those accusations are wrong."
Let me say—perhaps without your being willing to believe it, in which case mea culpa—that I want to believe that I'm wrong. But when you claim that the constructs we've been discussing are deterministic without accounting for what, in your most recent post, you consider "combinations" of the algorithms and their inputs (and really, in how many cases do these systems not store state based on those inputs and use that to change their output?), and even go so far as to call genetic algorithms deterministic, it makes it overwhelmingly difficult to take what you say seriously. I'm aware that those definitions might be satisfying from a formal mathematical perspective, but that perspective isn't helpful to a practical discussion of the differences, if any, between human intelligence and state-of-the-art AI. The hell of it is, I think you're right to suggest that we both actually understand this, and so, given that you feel I've made this overly personal, I have to apologize for that.
mcg: "What seems clear to me is that you've been coming at this from an AI philosophy angle while I've been coming at it from an optimization and computational mathematics angle."
What I've been attempting to do—apparently unsuccessfully—is to show why definitions of terms such as "deterministic" as commonly used even in formal discourse in the field of AI, as opposed to one in which you can define even genetic algorithms as "deterministic," are important in relating the practice (such as it is!) of AI to human intelligence. That has an inherently philosophical flavor to it—I would say that it's impossible to answer the question "Do you accept the Strong AI hypothesis?" without it. My concern is that if one accepts your definition of "deterministic," the result tends to be handwaving AI away by comparison to human intelligence because human intelligence can't (easily) be characterized as "deterministic" in the sense that formal mathematics might apply to any given AI system. But my work in the field hasn't supported that perspective at all, which is why I'm loath to admit the formal mathematical definition of "deterministic" in this context. Given that you are obviously familiar with the literature, including, presumably, AIMA, I don't expect this to surprise you; AIMA and much else of the literature uses "deterministic" in the same sense that I do, and this is why your insistence on denying that interpretation confounds me.
mcg: "The intersection might be more interesting if you weren't such a dick about it."
I'm afraid that I have to concede your point, and I appreciate your willingness to give it one more shot.
mcg: "By pointing out some of the work I do I was hoping we'd get a chance to explore that intersection a bit more, and indeed maybe we have."
I think so. The crux to me seems to come down to this: I accept arguendo that all AI systems we've discussed are deterministic in the formal mathematical sense. Is human intelligence likewise deterministic, or is it not, and why? My thesis is that it is, by virtue of the human nervous system's functioning above the level of quantum decoherence. I'm aware that not all physicists agree with me (cf. Roger Penrose's "The Emperor's New Mind" and "Shadows of the Mind"), but Penrose, at least, has to posit new, unknown physics that lie outside quantum mechanics in order to support his arguments.
In any case, my apologies, and thanks, again.
Paul,
Thanks so much for the nice reply!
your most recent post, you consider "combinations" of the algorithms and their inputs (and really, in how many cases do these systems not store state based on those inputs and use that to change their output?),
Sure, lots of mathematical algorithms for data processing modify their state over time as a function of input. Even a simple moving-average filter does that. Many learning algorithms are memoryless on paper, in that they perform a single bulk calculation and they're done. But many can be ordered or modified to operate in a "streaming" fashion, so that calculations can begin as soon as the first bits of information arrive; so that new information can be incorporated without a full recalculation of the optimum; and so that old information can be forgotten over time. Those would be examples of algorithms with state, and with state that is influenced by input over time.
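(A minimal example of an algorithm with state influenced by input over time, in the streaming sense described above: an exponential moving average that folds in each new observation and gradually forgets old ones. Illustrative only.)

```python
# State (self.value) evolves as a function of the input stream.
class ExponentialMovingAverage:
    def __init__(self, alpha=0.3):
        self.alpha = alpha      # how quickly old information is forgotten
        self.value = None       # the internal state

    def update(self, x):
        if self.value is None:
            self.value = x
        else:
            self.value = self.alpha * x + (1 - self.alpha) * self.value
        return self.value

ema = ExponentialMovingAverage()
for reading in [10, 10, 10, 50, 50, 50]:
    print(round(ema.update(reading), 2))   # the state drifts toward the new level
```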
and even go so far as to call genetic algorithms deterministic, it makes it overwhelmingly difficult to take what you say seriously.
I do concede that genetic algorithms might exhibit a higher level of apparent complexity than, say, an SVM. The description of a genetic algorithm, however (measured, say, in lines of code) really isn't that long---in fact probably shorter than a good SVM algorithm for problems of comparable size. (That is reflected in the poorer performance of a GA, too, as optimization methods go.) And the modeling of mutation and crossover in GAs introduces a bit of virtual nondeterminism, but it pales in comparison to the actual nondeterminism of true genetic evolution...
I'm aware that those definitions might be satisfying from a formal mathematical perspective, but that perspective isn't helpful to a practical discussion of the differences, if any, between human intelligence and state-of-the-art AI.
Well I can understand the difficulty in trying to bring comparisons to human intelligence into it, but why can't we just place the blame where it belongs---on our inability to fully comprehend the brain?
It seems to me that the brain is responding to an incredible variety of inputs, sensory, chemical, and physical, that can influence its computations. If you could capture and measure all of them, then you would have effectively rendered the brain deterministic. But can you? Even if you nailed all of the external inputs to the brain, what about internal disturbances of a chemical, electrical, or possibly quantum-mechanical nature that can be at best modeled statistically? Those dynamics would certainly prevent us from predicting the behavior of the brain and render it "effectively" nondeterministic.
In that sense, if you by convention choose to ignore the pseudorandom number generator that generates crossover and mutation probabilities, then yes, a GA would be nondeterministic. Of course, you're also willfully blinding yourself to information you could have.
The distinction between inputs and disturbances crops up in dynamic systems and control theory. Inputs allow an intelligent external agent to affect the dynamics of a system; disturbances are typically uncertain; they may be directly measurable or only as they impact the overall dynamics. (Think airplane autopilot: the inputs are rudder and aileron positions, the disturbances are wind and turbulence...) So the best you can often do is be prepared to make gross corrections when large disturbances occur, or attenuate more steady-state noise effects. You generally can't counteract them completely.
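(A toy version of the input/disturbance split: same plant, same control law, but an unmeasured disturbance makes the later runs differ from the first and from each other. All numbers are invented for illustration.)

```python
# Same controller and target; only the unmeasured "wind" differs between runs.
import random

def simulate(disturbance_seed=None, steps=20):
    rng = random.Random(disturbance_seed)
    altitude, target = 0.0, 100.0
    for _ in range(steps):
        control = 0.2 * (target - altitude)                 # the chosen input
        gust = rng.gauss(0, 2.0) if disturbance_seed is not None else 0.0
        altitude += control + gust                          # disturbance: unmeasured wind
    return altitude

print(simulate())                    # no disturbance: fully predictable
print(simulate(disturbance_seed=1))  # gusty run: near the target, but not exactly
print(simulate(disturbance_seed=2))  # a different wind history, a different end point
```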
Who knows, maybe one day we will pick off and measure every external input to the brain, but still be left with a range of internal and external disturbances that we can't predict (say, of a chemical nature) that render us unable to reliably predict or even track their responses to sensory inputs.
But a computer program? It's buried under a layer of physical abstraction. The only inputs it receives are the ones explicitly provided it. Now, we could embed that computer into a larger system complete with disturbance inputs, and yes the result would be nondeterministic.
So I should say that my very first claim that a "machine does what it is told" is wrong. A machine might very well include uncertain, practically nondeterministic elements. I feel I'm on safer ground when we shifted the discussion to computer programs...
Given that you are obviously familiar with the literature, including, presumably, AIMA, I don't expect this to surprise you; AIMA and much else of the literature uses "deterministic" in the same sense that I do, and this is why your insistence on denying that interpretation confounds me.
Then this is actually the primary source of our clash, it would seem. I am not familiar with Russell and Norvig. The mathematical machine learning algorithms that we've been talking about here are a small subset of the things discussed there. Again, I came to them through numerical optimization, not artificial intelligence. I am truly just a Google and Wikipedia amateur when it comes to deeper, more philosophical discussions about AI.
And remember, we didn't pop out the term "determinism" until later. I claimed computers did what they were told, you said no. It wasn't clear for a couple more posts what we were bringing to the table.
Anyway, I would welcome you to have the last word and then let this slide for now. I really do appreciate your change of tone, and I am glad to also understand better the source of our disagreement.
I claimed computers did what they were told, you said no.
Correction, I said "machines" did what they were told, and you (rightly) said no, and then we got to computers somehow, and then... :)
mcg: We've sure run this thread into the ground, haven't we? :-)
You're right that we've gone pretty far afield, too. If you want, we can continue by e-mail: psnively -at- mac -dot- com is mine.
Basically all I wanted people to think about is whether only human beings can be "ethical," and that led to whether "ethical" implies anything other than learning from experience (possibly including the experience of ancestors who communicate summaries of what they learned), and whether, therefore, "machines" might not behave as "ethically" as humans. That's it in a nutshell.
Thanks for the gentle correction and acceptance of my apologies—maybe someday I'll achieve your level of patience. :-)
What a fascinating conversation! Thank you from us eavesdroppers.