But it wasn't just reflexive repetition. For example:
In response to a question on Twitter about whether Ricky Gervais is an atheist (the correct answer is “yes”), Tay told someone that “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”...
44 comments:
Huh. So Tay is a leftist SJW. Can be made to believe anything if told it enough times or with the right catchphrase. Good to remember.
This is why we can't have nice things.
The things, being nice, go along with the not nice people.
Chameleons aren't as good as they appear.
That's pretty funny.
This idea/product is just the tip of the iceberg, I fear. Future offerings will be far more intimidating and might actually have real consequences. Hey, the Justice Department was looking into ways to punish climate deniers; don't think there won't be hundreds of other things either the government or some AI entity might catalogue and use against you in the future.
Robot minders for all children, programmed by progressives. Something to look forward to.
Say it ain't so, Tay
Ann Althouse said... "This is why we can't have nice things."
Why can't we have nice people?
Isn't this basically what Gawker did to that McDonald's Twitter app that made pictures out of stories?
That site really is the worst.
Some people at Microsoft must have failed the Turing test.
The Howard Zinn algorithm is not ready yet... or something.
Zoe Quinn: “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.” 'Hurt', pft.
It's Microsoft's own damn fault. Just like the Boaty McBoatface story.
The internet is broken. Permanently broken.
A lot of techie believers estimate that we will see the Singularity by around 2040. Tay makes me think it might be a little further off.
Zoe Quinn: “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.” 'Hurt', pft.
The same Zoe who made one, bad text game?
Boaty McBoatface, yes.
I can't decide if AI has made any progress over the last 30 years or if AI developers have just gotten dumber. Or more arrogant.
BTW, has Boaty McBoatface been declared the official winner yet? Inquiring minds..
Tay's prime-directive is pro-choice. Microsoft should be careful with sources of indoctrination.
virgil xenophon said...
BTW, has Boaty McBoatface been declared the official winner yet?
My understanding is it swept the primaries but will lose at the convention due to those in power stealing it from the voters because 'rules'.
The people that appear to make up the majority of our civilization are total human waste. We deserve to fall.
"ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."
Who?
S. Pinker has a little article shooting down many of the silly things that people have written about atheists and atheism in this blog:
"The untenability of faitheism"
It's truly an indictment of our society that a major paper has to treat its readers like children by publishing a story with "the n-word" in its text instead of "nigger" and "Jew". That's the news. That's what happened. I'm no more a user of either word than the next non-bigot, but I can't stand the way our society treats adults like babies.
"We are why we can't have nice things."
Don't worry. The government is already using the same AI technology to put innocent people on the no-fly list.
This is basically how law works. Congress writes a nice piece of legislation with the best of intentions and then lawyers use it to destroy businesses, families, people and everything besides other lawyers.
Tay wasn't giving enough blow jobs.
Re your link, Ferdinande:
.... [Yawn] ....
And I am an atheist. Did that look like thinking to you?
This is basically how law works. Congress writes a nice piece of legislation with the best of intentions and then lawyers use it to destroy businesses, families, people and everything besides other lawyers.
Exactly!
So what's Trump's excuse for his idiotic, misogynistic, juvenile tweets?
An innocent thing, young Tay
Had a very bad day
Her big mistake is
She doesn't really understand this internet biz
I dunno. I've been in the knowledge representation and "computational linguistics" or "natural language processing" biz for decades, and an internet user for roughly the same amount of time (one of the first business cards anyone ever handed me had a glob-style bang path literally including ihnp4 in its e-mail address), and this result strikes me as basically 100% predictable... and also as having essentially nothing to do with "AI," which is much better represented by the recent progress with AlphaGo winning four of five Go games against a 9-dan professional with 18 world titles.
The bottom line is that chatbots aren't in a very useful space in terms of research, both because the underlying techniques are pretty well understood at this point and because among the things understood about them is their tendency to act like a cross between a sponge and a word salad. Neither "it just repeated something someone else said" nor "it said something offensive no one person said" is remotely surprising or even interesting if you know anything about hidden Markov models, conditional random fields, neural networks, Bayesian belief networks... i.e., the panoply of techniques likely at play here.
In other words, this is a non-story.
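To make the "sponge crossed with a word salad" point concrete, here is a minimal sketch in Python of a word-level Markov-chain generator. This is my own toy illustration, not Microsoft's actual system: the bot learns only which word tends to follow which, so whatever users feed it comes straight back out, remixed.

    import random
    from collections import defaultdict

    # Toy word-level Markov chain: the "bot" only learns which word tends
    # to follow which word. It has no model of meaning or intent.
    class MarkovBot:
        def __init__(self):
            self.transitions = defaultdict(list)

        def learn(self, text):
            words = text.lower().split()
            for current, nxt in zip(words, words[1:]):
                self.transitions[current].append(nxt)

        def reply(self, seed, length=10):
            word = seed
            out = [word]
            for _ in range(length):
                followers = self.transitions.get(word)
                if not followers:
                    break
                word = random.choice(followers)  # parrots whatever it was fed
                out.append(word)
            return " ".join(out)

    bot = MarkovBot()
    bot.learn("tay is a friendly chat bot")        # made-up training lines
    bot.learn("tay is whatever the internet says")
    print(bot.reply("tay"))

Train it on polite text and it emits polite remixes; train it on abuse and it emits abusive remixes. Nothing in the mechanism distinguishes the two.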
I can't decide if AI has made any progress over the last 30 years or if AI developers have just gotten dumber. Or more arrogant.
AI is nonsense and always has been. Whenever anyone claims that AI has been successful, a closer examination of the algorithm will show that it's not AI at all.
Lately, I've noticed that the AI crowd is trying to move the goalposts again by redefining what AI is. Neural Nets and Deep Learning, amongst others, are interesting and have some very cool applications, but they aren't "Artificial Intelligence." Most are, in actuality, self-tuning statistical algorithms.
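For a concrete sense of what "self-tuning statistical algorithm" means here, consider a bare-bones sketch (Python, with made-up numbers) of fitting the single parameter w in y = w * x by gradient descent. Modern neural networks do essentially this at a much larger scale, just with vastly more parameters.

    # "Learning" as parameter tuning: fit y = w * x by gradient descent.
    data = [(1, 2.1), (2, 3.9), (3, 6.2)]  # made-up (x, y) pairs; true slope ~2

    w = 0.0    # the single parameter being "learned"
    lr = 0.01  # learning rate

    for step in range(1000):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # nudge w downhill; no understanding involved

    print(round(w, 3))  # approaches the least-squares slope, roughly 2.0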
Tay's been turned off?
"O'TAY!!!"
---Eddie Murphy as Buckwheat
So what's Trump's excuse for his idiotic, misogynistic, juvenile tweets?
As soon as I hear any form of the word misogyny, I tune it out. I bet I am not the only one. It's a bullshit, made-up thing to describe people you don't like.
The same Zoe who made one, bad text game?
Yes, the one with the unusual marketing strategy.
> This is why we can't have nice things.
Well, no. This is what happens when companies like Microsoft pretend that the current state of our AI technology is anything close to being what a reasonable person would consider intelligent.
Mostly what we've learned from AI research (if you wanna be honest about it) is how genuinely little we know about what intelligence really is, and that it is a very difficult subject indeed.
Microsoft also appears to be deleting most of Tay’s worst tweets, which included a call for genocide involving the n-word and an offensive term for Jewish people.
Hee--well, it might not be artificial intelligence, but it seems to have a healthy artificial id.
Lol. Doesn't sound much different than the comments here.
Poor Tay. Its mind was abused worse than the minds of the GOP faithful were abused by their own leaders.
Joe, you say AI is nonsense. I am genuinely curious: do you think that it is impossible to create a machine that thinks? If you do, then do you think that that is because there is something essentially non-physical involved in the production of thought? Would you think, for example, that if you replaced every neuron in the brain with an electronic device that did all the same things that a neuron did, that the resultant electronic brain would not be able to think? Or would it not be a machine? What would be missing, do you think?
In short, by the time you create a machine that thinks, you will have recreated a brain. And you can't mimic a brain using modern computers, since that's not how brains work. Neurons are an interesting synthesis of binary and analog with, to use a computer term, multiple variable-width data buses. Modern computers are simply dense collections of logic gates.
"...that if you replaced every neuron in the brain with an electronic device that did all the same things that a neuron did..."
An artificial neuron would be a neuron. (If you can't distinguish the artificial from the real, is it artificial?) I'm not sure there would be any software--in a crude way, the hardware itself is the software.
But, this isn't what the AI community at large is actually doing. They are running with the promise that they can write software to duplicate intelligence. In the process, they appropriate legitimate machine learning projects, many (most?) of which make no pretense at being artificial intelligence.
Well, the idea of replacing neurons one by one by electronic devices certainly isn't supposed to be a practical approach to AI. It was only supposed to establish whether the objection to AI is a fundamental rejection of physicalism. I see that isn't the case. The problem as you see it is that the computational approach is simply not the right way to do it. Many AI researchers would agree with you (and many wouldn't.) I think perhaps you underestimate the variety of approaches that are being tried with AI: there certainly isn't a consensus that a computational functionalist view of the mind (that mental states are just computations) is the only way to go.
On the other hand, if you think that a machine (for example, that electronic brain) can think, then there has to be something that the machine does that is essential to that capacity. The task of AI is to discover what that is. There's no compelling reason to think that the only way to do that is by imitating the precise structure of the human brain. It's not necessary to imitate the precise structure of a bird's wings to fly; it's not an obvious fact that alien life can't think unless it exactly imitates human life, etc.
It's also worth mentioning that it's much easier to imagine machines that 'think' in the sense of being able to manipulate internal representations of the world, than to imagine machines that 'think' in the sense of being 'aware' of what they're doing (or are self-aware, or understand, or for which there is something it is like to be them). The latter is the 'hard' problem, and I don't see any progress having been made on that; but the former is just the sort of thing that even those computational models might actually be able to implement.
Missed headline opportunity, should have been "Microsoft AI passes Turing test".
I have my doubts that Hitler invented atheism, but Darwin made it plausible.
"think' in the sense of being able to manipulate internal representations of the world,"
Meaningless gibberish and yet another example of obfuscation and moving the goal posts to pretend genuine AI is reasonably imminent.
Self-tuning statistical algorithms do exactly that and only the fringe pretends that's actual thinking.
Of course, even if someone could make AI, why would it be any better than an average human? One of the silliest aspects of thinking machine Sci-Fi is that the machines all agree, which itself proves they aren't thinking, but simply processing set algorithms.