October 5, 2008

"Are you happy being a human?" "Yes, I am. Are you? Good. Then we are both happy."

The Turing test, applied:
The test will be carried out by human 'interrogators', each sitting at a computer with a split screen: one half will be operated by an unseen human, the other by a program. The interrogators will then begin separate, simultaneous text-based conversations with both of them on any subjects they choose. After five minutes they will be asked to judge which is which. If they get it wrong, or are not sure, the program will have fooled them. According to Warwick, a program needs only to make 30 per cent or more of the interrogators unsure of its identity to be deemed as having passed the test, based on Turing's own criteria.
There are two sample dialogues at the link, where one is a human being and the other is a computer. The test is aimed at determining whether the computer can sound like a person, but I thought that even though the computer didn't sound too human, the person also sounded like a computer. Perhaps the unseen human is rooting for the computer and therefore making his responses sound like a machine that's trying to be human.
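
A side note on the arithmetic of that 30 per cent criterion: count the interrogators who either guess wrong or can't decide, and compare that fraction to the threshold. A minimal sketch in Python of how a pass might be tallied (the function name and verdict labels are my own, not from the article):

    # Hypothetical tally of the pass criterion described above. Each verdict
    # records whether an interrogator identified the program correctly
    # ("right"), guessed wrong ("wrong"), or could not decide ("unsure").
    def program_passes(verdicts, threshold=0.30):
        fooled = sum(1 for v in verdicts if v in ("wrong", "unsure"))
        return fooled / len(verdicts) >= threshold

    # Example: 12 judges, 3 wrong and 1 unsure -> 4/12 = 33%, which would pass.
    print(program_passes(["right"] * 8 + ["wrong"] * 3 + ["unsure"]))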

20 comments:

oldirishpig said...

I'd say this test is unnecessary: There was a report several months ago of a scheme that used software 'interrogators' in chat rooms to get credit card numbers from drunken Russians. The real world has outpaced the academics.

rhhardin said...

There's a reason guys do this. It's necessary to prove that the world can be made only with things guys are interested in.

It's the same with women, only it turns up as feminism and with things that women are interested in.

Artificial Intelligence is the field with the longest-running short-term promise in existence. "Just around the corner since 1955." It persists because of guys.

rhhardin said...

There are 8921522554287342393801691530 ways to 10-color a 10x10 grid symmetrically about both diagonals, my own laptop tells me this morning.

Anonymous said...

"Are you happy being a human?" "Yes, I am. Are you? Good. Then we are both happy."..The Turing test, applied

Ironic considering Alan Turing was a deeply depressed and unhappy man who tried to repress his homosexuality to conform to his anti-homosexual society and then killed himself.

They should have asked the subject if he was gay, not happy.

Christy said...

I did some work in Expert Systems, a form/subset of AI, years ago. Left me convinced that AI wasn't happening anytime soon.

I disagree with the premiss of the Turing Test. After all, half of America is convinced an empty suit is a leader. How hard would it be to convince a majority that an extensively programmed computer is a human?

Christy said...

premise not premiss
Spell-check is my friend.

jimh said...

Perhaps the unseen human is rooting for the computer and therefore making his responses sound like a machine that's trying to be human.

The article doesn't say, but I would hope the unseen humans are blind to the nature of the test and given some general reason to engage in the dialogue.

AllenS said...

"The test will be carried out by human 'interrogators'".

One shouldn't need but a board and some water.

JackOfVA said...

If a computer passes the Turing test, is it a "person" for Constitutional purposes? If so, is it afforded all the rights of a natural person?

Jack

Buford Gooch said...

So, if asked questions in the vernacular, will the computer respond sensibly? For instance, "What up, bro?"

Beth said...

I trust the Voight-Kampff test; it doesn't presume happiness.

Unknown said...

"Perhaps, the unseen human is rooting for the computer and therefore making his responses sound like a machine who's trying to be human."

That is almost always what happens and it voids the test. Part of the definition - "each of which try to appear human".

Just for humor, I went and logged into a test just now. Check this out --

A: who is sarah palin?
B: No one that I have talked to.
Have you tried another program?

A: Who is Joe Biden?
B: A 20th century celebrity of some sort?
Check back later and see if I learn the answer to that one.

A: Who is GWB?
B: Not a robot. Probably a human. Maybe a celebrity?
I would do a search for it.

A: Who is Saddam Hussein?
B: The dictator?

Bwahahahaha!

blake said...

Things haven't progressed that much since Eliza, I'm afraid.

The Turing Test is probably the stupidest idea Turing contributed to computer science. It is essentially the premise that intelligence rests on others' gullibility.

Artificial confidence man.

But in order to make artificial intelligence, you must understand what genuine intelligence is, and so far I haven't seen that they do.

Ignorance is Bliss said...

Blake-

I thought the same thing. I had Eliza on my C64 ~25 years ago, and it was not much worse.

Based on the responses, it looks like they are writing programs to carry out conversations, with a simple database of facts in the background.

They will never pass the test until they focus on the backend knowledge base. Even once they've gotten the structure of such a knowledge base worked out, they may still need 10-20 years of letting the program run in a way that lets it fill in its knowledge base.

If you have the knowledge base worked out right, it learns the conversation part on its own.
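
To make the distinction concrete, here is a toy sketch of what I mean (my own illustration, not any actual contest entry): the conversation layer is a trivial pattern match, and everything it can say is bounded by the fact base behind it.

    # Toy illustration: a thin conversation layer over a small database of
    # facts. The chat logic is trivial; the quality of the answers depends
    # entirely on what is in the fact base.
    FACTS = {
        "sarah palin": "the 2008 Republican vice-presidential nominee",
        "joe biden": "the 2008 Democratic vice-presidential nominee",
        "saddam hussein": "the former dictator of Iraq",
    }

    def reply(question):
        q = question.lower().strip(" ?")
        if q.startswith("who is "):
            fact = FACTS.get(q[len("who is "):])
            return "That would be " + fact + "." if fact else "No one that I have talked to."
        return "Tell me more."

    print(reply("Who is Sarah Palin?"))  # answered from the fact base
    print(reply("Who is GWB?"))          # falls back to a canned dodge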

Christy said...

Remembering some of the IMs between my sister and me, what with cross messaging, wandering off to get a cold beverage, harking back to old issues and all, I'm not sure you'd be convinced either of us were real.

blake said...

Ignorance--

Yeah, I had hand-coded a simpler Eliza from one of Ahl's books (available online!) and was not very interested when I realized it just changed the subject and verb around.
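
The trick is about this deep (a rough reconstruction from memory, not Ahl's actual listing):

    # Roughly the whole trick: reflect the pronouns/verbs and hand the
    # sentence back as a question.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your",
        "you": "I", "your": "my", "am": "are", "are": "am",
    }

    def eliza_reply(sentence):
        words = sentence.lower().strip(".!?").split()
        reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
        return "Why do you say " + reflected + "?"

    print(eliza_reply("I am unhappy with my computer."))
    # -> Why do you say you are unhappy with your computer?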

I spent no little time trying to work out my own system based on creating logical models of physical situations, and weighting/coloring with viewpoints. It's really a full-time job/lifetime endeavor, just to create a system that can attach correct significances to things.

Language parsing, on the other hand, if you could do it fully, would give you the equivalent of a Martian. Language parsing was another thing I did a lot of, and I came to the conclusion that communication is a miracle.

I think the visual processing guys are likely to have the most success, but they won't be making robots that fool anyone.

blake said...

Christy--

It's easy for a human to fail the Turing test.

Alpha Liberal, for example...

Balfegor said...

They will never pass the test until they focus on the backend knowledge base. Even once they've gotten the structure of such a knowledge base worked out, they may still need 10-20 years of letting the program run in a way that lets it fill in its knowledge base.

Or they could cheat by making the computer cuss a lot. I can't find the links now, but there are some hilarious transcripts in which people are obviously fooled into believing that the other end of their IRC conversation is an aggressive, foulmouthed human rather than a computer program. The conversation flows as unnaturally as the computer conversation in the link here, but when larded up with expletives, it comes off as a human being intentionally obtuse, just to piss you off.

clint said...

Wow.

I have to say, I'm surprised they haven't made more progress on a conversation simulator since the ones that used to run on 48kbyte computers in the early '80s.

Frankly, I'd suggest the lack of progress says more about the conversational skills of most AI researchers than it does about anything else.

JackOfClubs said...

Balfegor: That's exactly why the Turing test requires that the human try to cooperate with the interrogator. It is fairly easy to simulate anti-social behavior. But artificial stupidity isn't what we're after. If you force both subjects to be cooperative, then the computer can't trick you by being rude or changing the subject or whatever. It has to convince you that it both understood the question and can respond in a way that a human would.

That is why the "Parts of it..." answer in the article is so brilliant. It clearly indicates that the subject understood the question and is answering in a concise and colloquial way. And the rest of the sentence adds depth to the answer in a way that is very difficult to program.

The answer in Ann's headline, on the other hand, sounds a bit too canned because it only keys off the word happy, but it still shows an understanding of the question. I think his/her idea was to try to show the ability to reason from the fact that we are both "happy being human" to the fact that we are both "happy". That is a distinction that is kind of tough to program, especially if you can't anticipate that someone will ask a question of that form, because most computers would simply parse a sentence of the form "Are you X?" and answer, "Yes, I am X." But in this case the answer shows that the respondent understood there was a subset of X that was actually relevant. The subject is being a little too clever and the subtlety will be lost on most people, which is why it sounds sort of computerish. But it is actually a pretty impressive answer when you think about it.
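
To see the contrast, here is what the naive version looks like (my own sketch, not how any particular entry works): it parses "Are you X?" and echoes "Yes, I am X," with no way to notice that "happy" alone is the relevant part and go on to "then we are both happy."

    # The naive pattern: parse "Are you X?" and echo "Yes, I am X."
    def naive_reply(question):
        q = question.strip().rstrip("?")
        if q.lower().startswith("are you "):
            return "Yes, I am " + q[len("are you "):] + "."
        return "I don't understand."

    print(naive_reply("Are you happy being a human?"))
    # -> Yes, I am happy being a human.
    # It never takes the further step of reasoning about "happy" itself.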