April 22, 2021

"Now, a robot has been trained to speak aloud its inner decision-making process, giving us a view of how it prioritises competing demands...."

"The researchers asked Pepper to set a dinner table according to etiquette rules they had encoded into the robot... When instructed to place a napkin on a fork with inner speech enabled, Pepper asked itself what the etiquette required and concluded that this request went against the rules it had been given. It then asked the researchers if putting the napkin on the fork was the correct action. When told it was, Pepper said, 'OK, I prefer to follow your desire,' and explained how it was going to place the napkin on the fork. When asked to do the same task without voicing the inner speech, Pepper knew this contradicted the etiquette rules so didn’t perform the task or explain why.... With the potential for robots to become more common in the future, this type of programming could help the public understand their abilities and limitations, says Sarah Sebo at the University of Chicago. 'It maintains people’s trust and enables seamless collaborations and interactions between humans and robots,' she says."

New Scientist reports. 

First, I wondered if people would become too trusting, and maybe the robots would take over. Then I thought the robots could help us sort out our rules and preferences. And maybe we'll become better at thinking through our own decision-making, as robots demonstrate how to make rational decisions and show us, transparently, what they are taking into account.

FROM THE EMAIL: Tim writes: 

The “inner speech” is just a basic program at present, says Pipitone. “At this moment, it’s a narrative of Pepper’s process,” she says. 

Yeah, this is a testing and debug technique as old as programming, except we just used print functions in different parts of the decision trees instead of text to voice. Often it was a sign of desperation, with me anyway. When your program simply refuses to work and you don't know why, it's pretty frustrating, that I will allow. Who would have thought that I could have gotten written up in New Scientist for it though? Instead of calling them debug statements within the program's logic, I should have called them the program's "inner voice," giving us the skinny on its lived experience?
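If you've never seen that style of debugging, here's a minimal sketch of what it looks like: plain print statements narrating a toy decision, the way Pepper's "inner speech" narrates its process. The etiquette rule, function name, and requests below are invented for illustration; they aren't from the actual Pepper research.

```python
# A toy "inner voice": print statements narrating a decision,
# the way we used to sprinkle debug output through decision logic.
# The rule table and requests are made up for this example.

ETIQUETTE = {"napkin": "left of the plate"}  # encoded rule: where each item belongs


def place_item(item, target, follow_user=False):
    """Decide whether to put `item` on `target`, narrating each step aloud."""
    print(f"I was asked to put the {item} on the {target}.")
    expected = ETIQUETTE.get(item, "somewhere sensible")
    print(f"Etiquette says the {item} goes {expected}.")

    if target in expected:
        print("That matches the rules, so I'll do it.")
        return True

    print("That contradicts the rules I was given.")
    if follow_user:
        print("OK, I prefer to follow your desire.")
        return True

    print("I'll skip it and explain why.")
    return False


place_item("napkin", "plate")                   # fits the rule, goes ahead
place_item("napkin", "fork")                    # refuses, and says why
place_item("napkin", "fork", follow_user=True)  # defers to the user's wish
```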

As for trusting, I just drove a car across the country that had a "driver assist" function that was so good I came to trust it *a lot* in light traffic. I'd find and open a water bottle, change the playlist on Spotify... then I caught myself starting to do the same stuff in a car without the feature. Yikes!