March 24, 2016

"Trolls turned Tay, Microsoft’s fun millennial AI bot, into a genocidal maniac."

"Microsoft also appears to be deleting most of Tay’s worst tweets, which included a call for genocide involving the n-word and an offensive term for Jewish people. Many of the really bad responses, as Business Insider notes, appear to be the result of an exploitation of Tay’s 'repeat after me' function — and it appears that Tay was able to repeat pretty much anything."

But it wasn't just reflexive repetition. Example:



And:
In response to a question on Twitter about whether Ricky Gervais is an atheist (the correct answer is “yes”), Tay told someone that “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”...

44 comments:

TRISTRAM said...

Huh. So Tay is a leftist SJW. Can be made to believe anything if told enough times or fed the right catchphrase. Good to remember.

Ann Althouse said...

This is why we can't have nice things.

The things, being nice, go along with the not nice people.

Chameleons aren't as good as they appear.

eric said...

That's pretty funny.

The BubFather said...

This idea/product is just the tip of the iceberg, I fear. Future offerings will be far more intimidating and might actually have real consequences. Hey, the Justice Department was looking into ways to punish climate deniers; don't think there won't be hundreds of other things either the government or some AI entity might catalogue and use against you in the future.

Oso Negro said...

Robot minders for all children, programmed by progressives. Something to look forward to.

mccullough said...

Say it ain't so, Tay

HoodlumDoodlum said...

Ann Althouse said...This is why we can't have nice things.

Why can't we have nice people?

damikesc said...

Isn't this basically what Gawker did to that McDonald's Twitter app that made pictures out of stories?

That site really is the worst.

Anonymous said...

Some people at Microsoft must have failed the Turing test.

Lem the artificially intelligent said...

The Howard Zinn algorithm is not ready yet... or something.

Marc in Eugene said...

Zoe Quinn: “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.” 'Hurt', pft.

AlbertAnonymous said...

It's Microsoft's own damn fault. Just like the Boaty McBoatface story.

The internet is broken. Permanently broken.

Virgil Hilts said...

A lot of techie believers estimate that we will see the Singularity by around 2040. Tay makes me think it might be a little further off.

damikesc said...

Zoe Quinn: “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.” 'Hurt', pft.

The same Zoe who made one bad text game?

rehajm said...

Boaty McBoatface, yes.

I can't decide if AI has made any progress over the last 30 years or if AI developers have just gotten dumber. Or more arrogant.



virgil xenophon said...

BTW, has Boaty McBoatface been declared the official winner yet? Inquiring minds..

n.n said...

Tay's prime-directive is pro-choice. Microsoft should be careful with sources of indoctrination.

rehajm said...

virgil xenophon said...
BTW, has Boaty McBoatface been declared the official winner yet?


My understanding is it swept the primaries but will lose at the convention due to those in power stealing it from the voters because 'rules'.

amielalune said...

The people that appear to make up the majority of our civilization are total human waste. We deserve to fall.

Fernandinande said...

"ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

Who?

S. Pinker has a little article shooting down many of the silly things that people have written about atheists and atheism in this blog:
"The untenability of faitheism"

Scott M said...

It's truly an indictment of our society that a major paper has to treat its readers like children by publishing a story with "the n-word" in its text instead of "nigger" and "Jew". That's the news. That's what happened. I'm no more a user of either word than the next non-bigot, but I can't stand the way our society treats adults like babies.

Sigivald said...

"We are why we can't have nice things."

Jason said...

Don't worry. The government is already using the same AI technology to put innocent people on the no-fly list.

Jason said...

This is basically how law works. Congress writes a nice piece of legislation with the best of intentions, and then lawyers use it to destroy businesses, families, people, and everything besides other lawyers.

Curtiss said...

Tay wasn't giving enough blow jobs.

tim in vermont said...

Re your link, Fernandinande:

.... [Yawn] ....


And I am an atheist. Did that look like thinking to you?

tim in vermont said...

This is basically how law works. Congress writes a nice piece of legislation with the best of intentions, and then lawyers use it to destroy businesses, families, people, and everything besides other lawyers.

Exactly!

Matt said...

So what's Trump's excuse for his idiotic, misogynistic, juvenile tweets?

M Jordan said...

An innocent thing, young Tay
Had a very bad day
Her big mistake is
She doesn't really understand this internet biz

Paul Snively said...

I dunno. I've been in the knowledge representation and "computational linguistics" or "natural language processing" biz for decades, and an internet user for roughly the same amount of time (one of the first business cards anyone ever handed me had a UUCP-style bang path literally including ihnp4 in its e-mail address), and this result strikes me as basically 100% predictable... and also as having essentially nothing to do with "AI," which is much better represented by the recent progress with AlphaGo winning four of five Go games against a 9-dan professional with 18 world titles.

The bottom line is that chatbots aren't in a very useful space in terms of research, both because the underlying techniques are pretty well understood at this point and because among the things understood about them is their tendency to act like a cross between a sponge and a word salad. Neither "it just repeated something someone else said" nor "it said something offensive no one person said" is remotely surprising or even interesting if you know anything about hidden Markov models, conditional random fields, neural networks, Bayesian belief networks... i.e., the panoply of techniques likely at play here.

In other words, this is a non-story.
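Snively's "cross between a sponge and a word salad" is easy to demonstrate. Below is a toy Markov-chain text generator, purely illustrative (nothing here is Tay's actual code, and real systems use far richer models such as the HMMs and neural networks he names): it absorbs whatever text it is fed and emits statistically plausible recombinations with no understanding at all.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Build a bigram table: each word maps to the list of words seen after it."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def babble(model, start, length=8, rng=None):
    """Generate text by repeatedly sampling a successor of the last word."""
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: the last word was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

model = train_bigrams("the bot repeats the words the users feed the bot")
print(babble(model, "the"))
```

Feed it garbage and it babbles garbage back: the "sponge" is the bigram table, the "word salad" is the sampling loop.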

Joe said...

I can't decide if AI has made any progress over the last 30 years or if AI developers have just gotten dumber. Or more arrogant.

AI is nonsense and always has been. Whenever anyone claims that AI has been successful, a closer examination of the algorithm will show that it's not AI at all.

Lately, I've noticed that the AI crowd is trying to move the goalposts again by redefining what AI is. Neural Nets and Deep Learning, amongst others, are interesting and have some very cool applications, but they aren't "Artificial Intelligence." Most are, in actuality, self-tuning statistical algorithms.
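Joe's phrase "self-tuning statistical algorithms" describes, for example, the classic perceptron: a loop that nudges numeric weights until predictions match labeled examples. The sketch below (an illustration of the general idea, not any vendor's code) "learns" logical AND by arithmetic alone.

```python
def predict(weights, features):
    """Threshold a weighted sum -- no comprehension, just arithmetic."""
    return 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0

def perceptron_update(weights, features, label, lr=1):
    """One 'self-tuning' step: shift weights in the direction that shrinks the error."""
    error = label - predict(weights, features)
    return [w + lr * error * x for w, x in zip(weights, features)]

# Learn logical AND from labeled examples; the first feature is a constant bias term.
data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
weights = [0, 0, 0]
for _ in range(20):
    for features, label in data:
        weights = perceptron_update(weights, features, label)

print([predict(weights, f) for f, _ in data])  # → [0, 0, 0, 1], matching the labels
```

Whether such a tuning loop deserves the label "intelligence" is exactly the dispute in this thread; the mechanism itself is just statistics.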

Anonymous said...

Tay's been turned off?

O'TAY!!!

---Eddie Murphy as Buckwheat

tim in vermont said...

So what's Trump's excuse for his idiotic, misogynistic, juvenile tweets?

As soon as I hear any form of the word misogyny, I tune it out. I bet I am not the only one. It's a bullshit made up thing to describe people you don't like.

Bob Loblaw said...

The same Zoe who made one bad text game?

Yes, the one with the unusual marketing strategy.

Dave in Tucson said...

> This is why we can't have nice things.

Well, no. This is what happens when companies like Microsoft pretend that the current state of our AI technology is anything close to being what a reasonable person would consider intelligent.

Mostly what we've learned from AI research (if you wanna be honest about it) is how genuinely little we know about what intelligence really is, and that it is a very difficult subject indeed.

Smilin' Jack said...

Microsoft also appears to be deleting most of Tay’s worst tweets, which included a call for genocide involving the n-word and an offensive term for Jewish people.

Hee--well, it might not be artificial intelligence, but it seems to have a healthy artificial id.

Bleach Drinkers Curing Coronavirus Together said...

Lol. Doesn't sound much different than the comments here.

Bleach Drinkers Curing Coronavirus Together said...

Poor Tay. Its mind was abused worse than the minds of the GOP faithful were abused by their own leaders.

Unknown said...

Joe, you say AI is nonsense. I am genuinely curious: do you think that it is impossible to create a machine that thinks? If you do, then do you think that that is because there is something essentially non-physical involved in the production of thought? Would you think, for example, that if you replaced every neuron in the brain with an electronic device that did all the same things that a neuron did, that the resultant electronic brain would not be able to think? Or would it not be a machine? What would be missing, do you think?

Joe said...

In short, by the time you create a machine that thinks, you will have recreated a brain. And you can't mimic a brain using modern computers, since that's not how brains work. Neurons are an interesting synthesis of binary and analog with, to use a computer term, multiple variable-width data buses. Modern computers are simply dense collections of logic gates.

"...that if you replaced every neuron in the brain with an electronic device that did all the same things that a neuron did..."

An artificial neuron would be a neuron. (If you can't distinguish the artificial from the real, is it artificial?) I'm not sure there would be any software--in a crude way, the hardware itself is the software.

But, this isn't what the AI community at large is actually doing. They are running with the promise that they can write software to duplicate intelligence. In the process, they appropriate legitimate machine learning projects, many (most?) of which make no pretense at being artificial intelligence.

Anonymous said...

Well, the idea of replacing neurons one by one by electronic devices certainly isn't supposed to be a practical approach to AI. It was only supposed to establish whether the objection to AI is a fundamental rejection of physicalism. I see that isn't the case. The problem as you see it is that the computational approach is simply not the right way to do it. Many AI researchers would agree with you (and many wouldn't.) I think perhaps you underestimate the variety of approaches that are being tried with AI: there certainly isn't a consensus that a computational functionalist view of the mind (that mental states are just computations) is the only way to go.

On the other hand, if you think that a machine (for example, that electronic brain) can think, then there has to be something that the machine does that is essential to that capacity. The task of AI is to discover what that is. There's no compelling reason to think that the only way to do that is by imitating the precise structure of the human brain. It's not necessary to imitate the precise structure of a bird's wings to fly; it's not an obvious fact that alien life can't think unless it exactly imitates human life, etc.

It's also worth mentioning that it's much easier to imagine machines that 'think' in the sense of being able to manipulate internal representations of the world, than to imagine machines that 'think' in the sense of being 'aware' of what they're doing (or are self aware, or understand, or for which there is something it is like to be them.) The latter is the 'hard' problem, and I don't see any progress having been made on that; but the former is just the sort of thing that even those computational models might actually be able to implement.

Unknown said...

Missed headline opportunity, should have been "Microsoft AI passes Turing test".

tim maguire said...

I have my doubts that Hitler invented atheism, but Darwin made it plausible.

Joe said...

"think' in the sense of being able to manipulate internal representations of the world,"

Meaningless gibberish and yet another example of obfuscation and moving the goal posts to pretend genuine AI is reasonably imminent.

Self-tuning statistical algorithms do exactly that and only the fringe pretends that's actual thinking.

Of course, even if someone could make AI, why would it be any better than an average human? One of the silliest aspects of thinking-machine sci-fi is that the machines all agree, which itself proves they aren't thinking, but simply processing set algorithms.