November 19, 2013

"Computers, like children, are more often taught by rote. They’re given thousands of rules and bits of data to memorize..."

"... If X happens, do Y; avoid big rocks — then sent out to test them by trial and error."
 This is slow, painstaking work, but it’s easier to predict and refine than machine learning. The trick, as in any educational system, is to combine the two in proper measure. Too much rote learning can make for a plodding machine. Too much experiential learning can make for blind spots and caprice. The roughest roads in the Grand Challenge were often the easiest to navigate, because they had clear paths and well-defined shoulders. It was on the open, sandy trails that the cars tended to go crazy. “Put too much intelligence into a car and it becomes creative,” Sebastian Thrun told me.
From a New Yorker article about self-driving cars. I'm interested in the analogy to the education of humans, especially with respect to the fear of creativity arising from too much intelligence, but also the downside of too much experiential learning: blind spots and caprice.
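Here is a minimal sketch of the contrast, in Python (my illustration, not the article's; every name and number is invented). The rote style is a hand-written rule; the experiential style is a single number adjusted by trial and error.

# Rote: hand-written rules of the "if X happens, do Y" variety.
def rote_policy(obstacle_ahead, obstacle_size):
    """Avoid big rocks; otherwise keep going."""
    if obstacle_ahead and obstacle_size > 0.5:
        return "steer_around"
    return "continue"

# Experiential: one learned threshold, nudged after every collision.
def learn_threshold(trials, step=0.05):
    threshold = 0.9  # start overconfident
    for size, collided in trials:
        if collided and size <= threshold:
            threshold -= step  # got hurt: be more cautious
    return threshold

# (obstacle size, did ignoring it cause a collision?) -- made-up data
print(round(learn_threshold([(0.8, True), (0.3, False), (0.7, True)]), 2))  # 0.8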

36 comments:

Bob Boyd said...

Years ago I crashed into a Chevy Caprice.
It was in my blind spot.

Bob Ellison said...

It's biography.

I've been teaching kids how to drive lately. They tend to start out creative and then get scared (or yelled at) into timidity. Then when I say "Move. Move now. Now!" on a left turn, they quite reasonably object that I had been pushing them toward timidity. I say "just go around that guy," and they object that the move is prohibited by the driver's manual.

This is the basic AI problem. How do you program judgement and wisdom combined with creativity?

Matt Sablan said...

Once my driverless car learns to flip people off, it will pass the Turing test.

PB said...

As usual, the NY Times writes at length on something of which it knows little.

Anonymous said...

Re: "How do you program judgement and wisdom combined with creativity?"

Hell with Self-Driving Cars; Figure This Out on a Reliable Basis and We Can Have a Self-Driving President and Congress.

Shouting Thomas said...

Article is way too long. I skimmed parts of it.

Quibble #1... The article works on the assumption that we all agree that absolute safety is a great thing, and that that should be a baseline requirement for cars and driving.

No, I don't agree with that. That's a mandate from a boring, feminized culture.

Quibble #2... I learned computers on the job in order to make money. Learning advanced skills in a classroom came much later on. Computers bored the shit out of me until I discovered that computer skills were worth money.

At one point in my life (say, circa 1978), six small corporate law firms had given me the keys to their offices, and asked me to come in whenever I had the time to attend to their IT systems. I did my work in the wee hours after midnight, after a gig or rehearsal. Shades of "The Man Who Fell to Earth!"

At the beginning of the PC era, competent people simply did not exist in numbers sufficient to service the demand. So, those firms handed me the manuals to their systems and asked me if I would take the time to learn them. I did, and for a period of about five years I named my price and my hours. (Such opportunities still exist for those clever enough to search and find them.)

Matt Sablan said...

"Too much rote learning can make for a plodding machine. Too much experiential learning can make for blind spots and caprice."

-- I think it matters; for example, logic is, essentially, rote. What is a fallacy? These are the fallacies. If thinking has these fallacies, it is not logical. If the goal of the lesson is to teach someone to think logically, that plodding is what you want.

I kind of want, for example, a doctor to be cautiously plodding along, checking the boxes, during heart surgery. I think it is less "balance the methods" so much as "apply the right method for the right result."

Matt Sablan said...

[As for the article, if I had a print copy, I'd've read it cover to cover. This is right up my lane. But, online? I agree. Too long. Maybe I'll see if there's a print version.]

Bob Boyd said...

“Put too much intelligence into a car and it becomes creative,”

Not enough and it will coast to a stop at the side of the road and play YouTube videos.

Anonymous said...

Re: "the fear of creativity arising from too much intelligence..."

And That is How We Work Our Way to Communism. We're Smarter Now: We Have Figured Out How to Make it Work.

JackOfVA said...

I wonder if the programming includes Asimov's three laws:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
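
If one did encode them, the natural shape is a strict priority ordering. A toy sketch in Python, with invented predicates (no real system is this simple):

def choose_action(candidates, harms_human, disobeys_order, harms_robot):
    # Apply the laws in priority order, never letting a lower law
    # veto every action a higher law still allows.
    for law in (harms_human, disobeys_order, harms_robot):
        allowed = [a for a in candidates if not law(a)]
        if allowed:
            candidates = allowed
    return candidates[0] if candidates else None

# Made-up scenario: continuing hits a pedestrian, the passenger said
# to stay in the lane, and nothing here damages the car.
print(choose_action(
    ["swerve", "brake", "continue"],
    harms_human=lambda a: a == "continue",
    disobeys_order=lambda a: a == "swerve",
    harms_robot=lambda a: False,
))  # "brake": the only action that satisfies all three laws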

Ann Althouse said...

"PB Reader said...As usual, the NY Times writes at length on something of which it knows little."

You call yourself "Reader," but you're not reading too well. This article is in The New Yorker.

Anonymous said...

Re: "too much experiential learning: blind spots and caprice."

Cut the Blue Wire! It's ALWAYS the Blue Wire!

No: Wait! Green ---


Boom.

Anonymous said...

Maybe the Robots Would do Better in a "flipped classroom".

rhhardin said...

Machine intelligence has been just around the corner since the 50s.

It's the world's longest-running unfulfilled promise.

Guys are inclined to think stuff can be automated.

What's happened is mostly that computers are so cheap that everything that can be automated has been, which looks like an advance in machine intelligence but isn't really. It's just cost declines bringing in the low fruit.

My theory is that literary effects are the whole deal, and nobody's looking at literary effects.

You notice language when you notice one.

The origin has been described:

"When there was as yet no shrub of the field upon earth, and as yet no grasses of the field had sprouted, because Yahweh had not sent rain upon the earth, and there was no man to till the soil, but a flow welled up from the ground and watered the whole surface of the earth, then Yahweh molded Adam from the earth's dust (adamah), and blew into the nostrils the breath of life, and Adam became a living being.''

A literary effect describes it.

It works because it's self-referential. You can't think your way back out of it.

It seems like an origin.

I'm sorry to impiously dismantle the machinery, as Mallarmé said.

Matt Sablan said...

Computers get literary too.

SJ said...

Construing "programming" as "teaching" is an interesting thought.

A computer is programmed through a method of rote instructions, because there is no other way to issue instructions to a digital-logic machine.

However, if those instructions allow it to detect and react to large, generic problems (like Asimov's Three Laws, or the rules of the road), then the computer has been programmed intelligently. I think the intelligence is still mostly in the programmer(s), not in the machine.

MadisonMan said...

As someone who programs for a living, and who has raised children, I will say that the two things are nothing alike.

Computers are not taught. They are programmed to accept a variety of inputs (and maybe modify their programming based on those inputs).

I will never get in a self-driving car, because it can't have been fully tested for all eventualities. None of my code is; there's a law of diminishing returns. But for a car, is there?
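
For what it's worth, "modify their programming based on those inputs" usually just means updating stored parameters; the code itself never changes. A made-up sketch (class, numbers, and update rule all invented):

class BrakingModel:
    def __init__(self):
        self.margin = 2.0  # meters of extra stopping distance, initial guess

    def observe(self, predicted_stop, actual_stop):
        # Nudge the margin by a fraction of the prediction error.
        self.margin += 0.1 * (actual_stop - predicted_stop)

m = BrakingModel()
for predicted, actual in [(10.0, 11.0), (10.0, 10.5), (10.0, 10.2)]:
    m.observe(predicted, actual)
print(round(m.margin, 2))  # 2.17: the parameter moved, the program didn't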

Bushman of the Kohlrabi said...

“As you look outside, and walk through parking lots and past multilane roads, the transportation infrastructure dominates,” Brin said. “It’s a huge tax on the land.” Most cars are used only for an hour or two a day, he said. The rest of the time, they’re parked on the street or in driveways and garages. But if cars could drive themselves, there would be no need for most people to own them. A fleet of vehicles could operate as a personalized public-transportation system, picking people up and dropping them off independently, waiting at parking lots between calls. They’d be cheaper and more efficient than taxis—by some calculations, they’d use half the fuel and a fifth the road space of ordinary cars—and far more flexible than buses or subways. Streets would clear, highways shrink, parking lots turn to parkland. “We’re not trying to fit into an existing business model,” Brin said. “We are just on such a different planet.”


LOL.

Matt Sablan said...

I've been hearing about the dream of personal public transit via robot cars for nearly a decade. If it happens, that'll be amazing.

cubanbob said...

"I've been hearing about the dream of personal public transit via robot cars for nearly a decade. If it happens, that'll be amazing."

I'll just settle for nuclear fusion and electricity too cheap to meter. Nuclear fusion has always been twenty years away for the last sixty years.

Andy Freeman said...

> None of my code is, there's a law of diminishing returns. But for a car, is there?

Yes.

The current alternative to computer-driven cars isn't perfection, it's human-driven cars. If a given computer-driving system fails more often than people, we shouldn't use it. However, if a given computer-driving system fails 10% as often as people (or rather, its failures cost 10% what people failures cost), why shouldn't we use it?

That's an instance of a more general rule - perfection is never an option. If you prefer, failure is always possible.
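
The argument reduces to expected-cost arithmetic. With numbers invented purely for illustration:

human_failures = 100        # per billion miles, made up
robot_failures = 10         # "fails 10% as often"
cost_per_failure = 100_000  # dollars, also made up

print(human_failures * cost_per_failure)  # 10000000: expected human damages
print(robot_failures * cost_per_failure)  # 1000000: an order of magnitude cheaper

The robot doesn't have to be perfect; it only has to fail more cheaply than we do.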

Sorun said...

"A fleet of vehicles could operate as a personalized public-transportation system, picking people up and dropping them off independently."

Then we can trade horror stories about the urine, used condoms, and pot smell we had to endure on our way to work.

We could all also abandon our space-wasting homes and move into dormitories.

Matt Sablan said...

Sorun: Thanks for finding another way to ruin the dream.

MadisonMan said...

Sorun: Thanks for finding another way to ruin the dream.

Totally. On the good side, though, the homeless problem would be solved. They could all live in idle cars.

Bob Ellison said...

Wise words, Andy Freeman.

n.n said...

Human systems are by their nature adaptive. In fact, human systems are capable of expressing more degrees of freedom than any other known system. Computer systems are by their nature static. Neither system operates in a vacuum. The latter can be expanded to imitate the former, but the former must be repressed to imitate the latter.

Adaptive systems are more efficient and accurate when initiated with a base knowledge. This is why humans are first taught in rote, because it overcomes the learning curve which would otherwise exist. This is why parents restrict the liberty or creativity of their children, and only gradually remove constraints, because we are not born with a full or even sufficient comprehension of our environment.

One caveat with education by rote, and it is significant, is that a comprehensive transfer of knowledge and skill, while simultaneously constraining freedom or creativity, will lead an adaptive system to settle and remain in a local stable state. While this may produce a good product, it may also prevent reaching another state, which may be better or worse.

Yes, I know. This argument could be used to judge the value of different human philosophies (e.g. conservatism, liberalism). Fortunately, every system, no matter how stable, is subject to internal fluctuations or perturbations. Ironically, the ostensibly dynamic system may, in fact, produce the more stable state, if actually less optimal for current and future development.

It is not known if there exists a perfectly stable and optimal state. In fact, the behavior of our universe is dictated by an evolutionary process (i.e. chaotic change), which ensures that, as sinks and sources are introduced and removed, the parameters of a state will always shift, and state transitions will always occur, if ever so slightly.
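
The "settle and remain in a local stable state" behavior is the familiar local-optimum problem in optimization. A toy hill-climbing sketch (landscape and numbers invented): greedy improvement stops at the nearest peak, not the best one.

landscape = [0, 3, 5, 4, 2, 6, 9, 7]  # heights; the best peak is index 6

def hill_climb(i):
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i  # stable state: no neighbor is higher
        i = best

print(hill_climb(0))  # 2: settled on the small peak (height 5)
print(hill_climb(4))  # 6: found the big peak (height 9)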

n.n said...

re: stable state

Since we cannot or do not know if a state is optimal, it is desirable that transfers of knowledge and skill be constrained to foundational topics. That is to say, more basic knowledge and skill, and less inferred knowledge (e.g. art) and skill (i.e. performance). However, a caveat is that as we pursue another, hopefully better state, we need to preserve our current stable state. Fortunately, this is not a problem in practice, since people are not uniformly biased. Some people will act to preserve the current state, while others will act to identify other states or transitions. Ideally, the change will be sufficiently gradual, but also perceptible (e.g. feedback), so that less optimal states or transitions can be rejected beforehand.

Crunchy Frog said...

The first time a car drives itself into a street filled with little kids on Halloween is when this lovely little dream turns into a nightmare.

Imagine trying to figure out liability insurance costs for one of these frankencars. No thanks.

Freeman Hunt said...

I look forward to my car driving itself. I'm tired of trying to hear what is being said to me while facing the other way. How nice it will be to turn around and treat the car as a tiny living room. How nice, when alone in the car, to read.

Andy Freeman said...

> "A fleet of vehicles could operate as a personalized public-transportation system, picking people up and dropping them off independently."

> Then we can trade horror stories about the urine, used condoms, and pot smell we had to endure on our way to work.

Said fleet need not be a monopoly. Some folks might choose a vendor that gets rid of customers who trash the vendor's vehicles.

Hmm, you're right - that can't happen as it would discriminate against the filthy, or against folks who can't afford such vendors, or something.


Auntie Ann said...

Teaching children by rote has been anathema in education circles since the 1960s.

Joe said...

“Put too much intelligence into a car and it becomes creative,”

Statements like this irritate the shit out of me. Intelligence isn't recalling facts. Computers aren't intelligent. No matter how advanced computers are, they don't become creative, they just do crazy shit that inevitably followed some rule, somewhere.

Anonymous said...

"However, if those instructions allow it to detect and react to large, generic problems (like Asimov's Three Laws, or the rules of the road), then the computer has been programmed intelligently. I think the intelligence is still mostly in the programmer(s), not in the machine."

That's incorrect, in the context of modern machine learning.

"What's happened is mostly that computers are so cheap that everything that can be automated has been, which looks like an advance in machine intelligence but isn't really. It's just cost declines bringing in the low fruit."

So's this.

"Computers are not taught. They are programmed to accept a variety of inputs (and maybe modify their programming based on those inputs)."

And this.

This is a good recent textbook on the subject. This is also good for beginners who want a very hands-on approach.
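
To sketch what "taught, not programmed" means here (a toy example of my own, with invented numbers): the programmer writes the fitting procedure, and the behavior, the coefficient, comes out of the data.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, made up

# Fit y = w * x by least squares; nobody hand-writes the rule w.
w = sum(x * y for x, y in data) / sum(x * x for x, y in data)
print(round(w, 3))  # 2.036, inferred from the data rather than coded by hand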