December 10, 2020

"How could a computer possibly know you sound like a Debbie Downer? Amazon said it spent years training its tone AI..."

"... by having people categorize voice recordings. The company held internal trials and says it tried to address any biases that might arise from varying ethnicity, gender, or age. In our experience, the Halo could detect ups and downs in our voice, but seemed to misinterpret situations regularly. And some of the feedback feels, ironically, a bit tone-deaf — especially when judging a woman’s voice. Our sample size of two isn’t sufficient to conclude whether Amazon’s AI has gender bias. But when we both analyzed our weeks of tone data, some patterns emerged. The three most-used terms to describe each of us were the same: 'focused,' 'interested' and 'knowledgeable.' The terms diverged when we filtered just for ones with negative connotations. In declining order of frequency, the Halo described Geoffrey’s tone as 'sad,' 'opinionated,' 'stern' and 'hesitant.' Heather, on the other hand, got 'dismissive,' 'stubborn,' 'stern' and 'condescending.'... The very existence of a tone-policing AI that makes judgment calls in those terms feels sexist. Amazon has created an automated system that essentially says, 'Hey, sweetie, why don’t you smile more?'"


I think it would be cool to have a wristband that informed me what my tone was... and even cooler to have a conversation with a trusted companion while we both had these tone-police wristbands on and could see each other's display. But wait... why do we need wristbands? Why can't I have this AI in my iPhone and monitor the tone of anybody I happen to be talking to, and why shouldn't I assume that anyone listening to me can be generating this information? Is this alarming? 

If it's alarming, is it because we're going to be off-loading our human judgment that makes us so brilliantly sensitive to the infinite tones of the human voice? Is it because a machine will seem objective, so that you won't just wonder whether someone is being condescending to you, you'll have a scientific/"scientific" verification of condescension or whatever? Is it because you'll figure out how to train your voice to get words you like to appear on the screen and you won't quite be you anymore? Is it because you won't know the extent to which others have trained their voice to disguise their real intentions and the value of our gift for the understanding of speech will crash? 

Oh, by the way, I'm an Amazon Associate, so when — if — you buy a Halo wristband, I'd appreciate it if you'd use this link.

44 comments:

Joe Smith said...

This is the first time I've wished something had been killed in the womb.

rhhardin said...

XP speaks robot to me.

' temp.vbs: Windows SAPI text-to-speech. The calling shell substitutes $MESSAGE before running it.
set speechobject=createobject("sapi.spvoice")
speechobject.speak "$MESSAGE"

$ cscript temp.vbs

Earnest Prole said...

Speaking of which, why does Siri insist on being such an unconscionable bitch? Lighten up, Sugartits.

Sprezzatura said...

Is a husband a trusted companion?

I don't know which is worse: if Meade is described like this, or if it's someone else who is the trusted companion, not Meade.

It always seemed a bit odd and cold when gay folks talked about their partners, because marriage wasn’t an option. “Trusted companion” makes “partner” seem big time loving. IMHO.

Sprezzatura said...

When Spitzer called up the Emperor’s Club I bet he requested a trusted companion.

BUMBLE BEE said...

Happy to hear such enthusiasm. Hmmm... blonde... short frame... wrong "tone"? Well, if your BMI is in line with the standards, you will be made a comfort woman; if your BMI is inappropriate, you will be a book binding. Thanks for the interest.

stevew said...

Success in my chosen profession results from my ability to express an appropriate and productive tone and to read the tone and attitude of the people I work with. I have been very successful. This device is of no use to me.

rehajm said...

Alexa sees you when you're sleeping....

...she knows when you're awake.

BUMBLE BEE said...

What, you didn't read the shrink-wrap warranty? Too bad, step back in line.

RMc said...

I think it would be cool to have a wristband that informed me what my tone was

"Tone" is something women invoke when they're losing an argument: "I don't like your tone!"

Michael K said...

Not a chance. I don't even look at Amazon Prime.

An Amazon guy came down to do a race with me but that was 10 years ago.

WK said...

I didn’t have a wrist band when I was growing up but my mom was pretty good about identifying things in my tone. My wife seems to be able to assess my daughter’s tone as well.

Achilles said...

""How could a computer possibly know you sound like a Debbie Downer? Amazon said it spent years training its tone AI...""

"... by having people categorize voice recordings.


Just a short primer so everyone has a vague idea of how this is done.

Machine Learning algorithms are trained with labeled data. What Amazon did was take recordings of people who described themselves as "sad," "happy," etc. These "labels" were probably in the form of a 1-5 scale (Sad = 1, Happy = 5) or some similar scheme with a set of non-continuous buckets. There will be scales for calm/angry and other emotional spectrums.

Each recording is the X, and is best described as a vector. Each recording is broken down into a time series, with all of the sound frequencies of the spoken phrase separated out by a Fourier transform; you can sometimes see this represented on recording equipment. The vocal recording is thus reduced to a set of numbers, with the amplitude of each frequency in the normal vocal range over time supplying the values. Over time these amplitudes change, and these changes can be tracked as multiple derivatives.

There is a lot of processing going on, but at the end of it you have a spoken recording broken down into a bunch of numbers, X, matched against the label provided, y.
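
Here's a minimal sketch of that preprocessing, assuming Python with numpy; the frame size, hop, and 80-4000 Hz "vocal range" are illustrative guesses, not Amazon's actual pipeline:

import numpy as np
from numpy.fft import rfft, rfftfreq

def tone_features(samples, rate=16000, frame=1024, hop=512):
    """Break a mono recording into per-frequency amplitudes over time (the X)."""
    freqs = rfftfreq(frame, d=1.0 / rate)
    voice = (freqs >= 80) & (freqs <= 4000)      # rough human vocal range
    rows = []
    for start in range(0, len(samples) - frame, hop):
        spectrum = np.abs(rfft(samples[start:start + frame]))  # Fourier transform of one slice
        rows.append(spectrum[voice])             # amplitude of each frequency
    X = np.array(rows)                           # time series: (frames, frequencies)
    dX = np.diff(X, axis=0)                      # how the amplitudes change over time
    return np.hstack([X[1:], dX])                # amplitudes plus their first derivative

# One second of fake audio stands in for a real labeled recording (y = 1, "sad").
rng = np.random.default_rng(0)
X, y = tone_features(rng.standard_normal(16000)), 1
print(X.shape, y)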

Then shit gets crazy.

This part is massively abbreviated. The algorithm is fed millions of recordings. It goes through some truly ridiculous shit, taking a lot of "combinations" and applying a "weight" to each combination. You take X, put a bunch of shit in between X and y, and it predicts y. Just call it X+ for now. It does this by putting a theta, or "weight," in front of every node in X+, then doing a partial-derivative calculation that changes each "weight" by a "step size" (a parameter you set) to find a local minimum of a loss function; the result is the set of "weights" that produces the lowest loss when predicting y from X+.

You can also say that, given the millions of recordings, a set of "weights" is created and applied at each step transforming X+ -> y. Each prediction is wrong by some amount, but the total sum of wrongness over the millions of recordings is the lowest the predictive model can achieve.
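
A toy version of that training step, hedged: ordinary gradient descent on a linear model with a squared-error loss, standing in for whatever Amazon actually runs. The step size is the parameter you set.

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 20))          # 1000 "recordings," 20 features each
y = X @ rng.standard_normal(20)              # labels, e.g. sadness scores
w = np.zeros(20)                             # the thetas, or "weights"
step_size = 0.01                             # the parameter you set

for _ in range(500):
    residual = X @ w - y                     # how wrong each prediction is
    grad = 2 * X.T @ residual / len(y)       # partial derivative of the loss for each weight
    w -= step_size * grad                    # move every weight a step downhill

print(np.mean((X @ w - y) ** 2))             # total "wrongness," now near its minimum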

As you can guess, it is garbage in, garbage out. The math is easy.

Amazon pays a lot of money for labeled data. This is the reason Google makes so much money.

They have labeled data that they take from you while you use their "free" products. Amazon doesn't need to make a dime selling you something to make money on the transaction.

Narr said...

Makes me glad I held on to my old mood rings.

This is simply techubris.

Narr
Ooo, algorithms

Achilles said...

I want to add that the partial derivative is taken to find which "weight" to change to move y(predicted) closer to y(actual). It keeps iterating through this until it stops getting closer to actual, which is also described as the local minimum. The more combinations you create, the longer the calculation takes, but it also gives you more chances to find a combination that gets y(pred) closer to y(actual).
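
That iterate-until-the-local-minimum rule can be sketched as a small change to the toy loop above (again an assumption, not Amazon's code): keep stepping until the loss stops improving.

import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 10))
y = X @ rng.standard_normal(10)
w, step_size, prev_loss = np.zeros(10), 0.01, float("inf")

while True:
    residual = X @ w - y
    loss = np.mean(residual ** 2)            # distance between y(pred) and y(actual)
    if prev_loss - loss < 1e-9:              # stopped getting closer: the local minimum
        break
    prev_loss = loss
    w -= step_size * (2 * X.T @ residual / len(y))

print(loss)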

Ignorance is Bliss said...

In declining order of frequency, the Halo described Geoffrey’s tone as 'sad,' 'opinionated,' 'stern' and 'hesitant.' Heather, on the other hand, got 'dismissive,' 'stubborn,' 'stern' and 'condescending.'... The very existence of a tone-policing AI that makes judgment calls in those terms feels sexist.

Those terms feel sexist?

I suspect the AI was going out of its way to not be sexist. It really wanted to tell Heather she sounded bitchy, bossy, shrill, and hysterical.

Sebastian said...

"Amazon has created an automated system that essentially says, 'Hey, sweetie, why don’t you smile more?'"

So? Sounds better and cheaper than either therapy or going to Mars.

mockturtle said...

Arrogant of me, perhaps, but I consider myself a better judge of people's voices and my own body fat than some damned robot. And a woman speaking with a munchkin-like voice might be saying something deeply significant but I won't be paying attention.

Achilles said...

Ignorance is Bliss said...

Those terms feel sexist?

I suspect the AI was going out of its way to not be sexist. It really wanted to tell Heather she sounded bitchy, bossy, shrill, and hysterical.


The sexism, if it exists, exists in the labeling of the data used to build the predictive model.

Joe Smith said...

"Those terms feel sexist?"

What about racist? Alexa can't even hear black people.

They sell a separate version with 'Aisha' for that...

reader said...

Me: Dude, what's wrong?
H: Nothing, why?
Me: You seem upset.
H: I'm not upset.
Me: Ok.
H: Why are you saying I'm upset.
Me: I'm not saying you're upset. I'm asking if you're upset.
H: Why?
Me: You seem tense. Terse.
H: I'm not tense or terse.
Me: You seem like you are.
H: No I don't.
Me: Ok.
H: I'm upset now.

Very close to multiple discussions over the course of twenty-six years of marriage.

RK said...

I got irritated twice today. Both times it was a device deciding I wanted to do something that I didn't. It's becoming my pet peeve. "Alexa, get off my lawn!"

madAsHell said...

I know I’m screwed when the wife says “This is cute!”.

I've recently added every sentence that begins with "Well, Siri says......" to the oh-shit-I'm-screwed list.

madAsHell said...

You know, I really should not comment from a mobile device when speeding well beyond the wine limit!

Greg The Class Traitor said...

"But wait... why do we need wristbands? Why can't I have this AI in my iPhone"

Because Amazon controls the wristband. They can design it to do whatever they want. To violate your privacy however they want, with no real chance for you to figure out what they're doing.

Amazon does not control your iPhone. Apple's privacy tools mean that they have to tell you a lot more about what they're doing. Are they tracking your location all the time? Are they listening to you all the time, even when you have it "disabled"? With your iPhone you could block that. With the wristband you can't.

Breezy said...

Why are we trying to perfect human-ness? The wrong reads are just as important as the right ones...

Assistant Village Idiot said...

Tavistock coaches used to do this for groups years ago. I don't know if they are still used in group interactions.

Douglas B. Levene said...

I'm still waiting for wearable tech that will tell me that it's detected the very first signs of cancer or heart disease long before any symptoms appear. I would wear that.

Achilles said...

Douglas said...

I'm still waiting for wearable tech that will tell me that it's detected the very first signs of cancer or heart disease long before any symptoms appear. I would wear that.

Would you?

I can give you an idea how this is going to go.

You aren't going to like it much.

But I don't see many ways out unless you all are prepared for a full on Butlerian Jihad.

stevew said...

While cooking dinner tonight I instructed my Google Home device to set a timer, twice. It did so, and executed flawlessly, both times. My tone was never mentioned or a factor. Though it may have been captured and analyzed.

rhhardin said...

There's the cloth wrist band by Alexander that was used to estimate the time of day. It was called Alexander's rag time-band.

Kevin said...

HAL : [on Dave's return to the ship, after he has killed the rest of the crew] Look Dave, I can see you're really upset about this.

daskol said...

That was an excellent explanation, Achilles, and you managed to avoid nearly all ML buzzwords, using instead the mathematical terminology that desexies it a bit and returns it to its rather old pattern-matching roots (probably not this one, but many of the tamer ML applications adding value to companies today use algos not much changed since the 1950s or 60s). Impressive.

madAsHell said...

In the Scott Adams world..........The next simulation is beginning.

Complete with biases!!

Biff said...

Coming soon in the "I'm not a robot" CAPTCHA test for Althouse comments:

"Please listen to the following sentence. Rate its tone."

Scot said...

How about you have this AI between your ears? Some say human brains learn good.

Mr. Forward said...

Tonewood in the lumber business is the musical wood from various species that is used for wind and string instruments.
I dropped a Norwegian Spruce beam one time and it sounded like a piano landing on its head. I don't move a lot of pianos but now when I do I check for holes in the lawn first.

DEEBEE said...

‘we're going to be off-loading our human judgment’
That sounds a bit Rip Van Winklish. Twitter, Facebook much?

tim maguire said...

the Halo described Geoffrey’s tone as 'sad,' 'opinionated,' 'stern' and 'hesitant.' Heather, on the other hand, got 'dismissive,' 'stubborn,' 'stern' and 'condescending.'... The very existence of a tone-policing AI that makes judgment calls in those terms feels sexist. Amazon has created an automated system that essentially says, 'Hey, sweetie, why don’t you smile more?'"

The guy is labelled “sad” but the writer says it’s telling the woman to smile more. Yes, I see sexism here, but it’s not Halo, it’s the writer who is forcing the data into a predetermined analysis.

tim maguire said...

reader, I’m impressed. That sounds like a lot of conversations my wife and I have, but I’ve always blamed the relentless repetition of the theme to her certainty that what happens at the end of the conversation confirms that she was right all along. Also interesting that you see the problem, but recognizing it doesn’t help. It keeps happening anyway.

Witness said...

"Amazon said it spent years training its tone AI by having people categorize voice recordings."

... so the best this thing can possibly be doing is providing you with some kind of weighted average of how other people perceive your tone...

"Is it [alarming] because a machine will seem objective, so that you won't just wonder whether someone is being condescending to you, you'll have a scientific/"scientific" verification of condescension or whatever?"

... but this is how a significant chunk of people will treat it, because reasons.

Related: https://www.thestar.com/news/insight/2016/01/16/when-us-air-force-discovered-the-flaw-of-averages.html

Gotagonow said...

A simple upgrade: dog collar static shock.
(whine) "But I want that cupcake".
"Your health level goal is set to 'aggressive'."
Lennon used to call Yoko "Mommy".

Todd said...

This will just shoehorn nicely into the upcoming social credit system that will be implemented once China, I mean Biden, is in office.

If you "sound" harsh, you will lose credit. It is all part of the new normal where EVERYONE just LOVES big brother, enthusiastically!

Narr said...

This really harshes my mellow, you know?

Narr
Dali called his wife Mommy too, IIRC.