First, I wondered if people would become too trusting, and maybe the robots would take over. Then I thought the robots could help us sort out our rules and preferences. And maybe we'll get better at thinking through our own decision-making as robots demonstrate how to make rational decisions and show us, transparently, what they're taking into account.
FROM THE EMAIL: Tim writes:
The “inner speech” is just a basic program at present, says Pipitone. “At this moment, it’s a narrative of Pepper’s process,” she says.
Yeah, this is a testing-and-debugging technique as old as programming, except we just used print statements in different parts of the decision tree instead of text-to-speech. Often it was a sign of desperation, with me anyway. When your program simply refuses to work and you don't know why, it's pretty frustrating, that I will allow. Who would have thought I could get written up in New Scientist for it, though? Instead of calling them debug statements inside the program's logic, should I have called them the program's "inner voice," giving us the skinny on its lived experience?
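To make the point concrete, here's a toy sketch of the trick (my own made-up example, nothing to do with Pepper's actual code): an ordinary decision tree with a narration call at every branch. The classic debug version is just print(); route the same strings through text-to-speech and yesterday's debug statements become today's inner voice.

    def inner_voice(msg):
        # Stand-in for a text-to-speech call; the old-school
        # debug version is a bare print().
        print(f"[inner voice] {msg}")

    def choose_route(traffic, weather):
        # A trivial decision tree that narrates its own reasoning
        # at each branch, the way we once sprinkled debug prints.
        inner_voice(f"Traffic is {traffic} and the weather is {weather}.")
        if traffic == "heavy":
            inner_voice("Heavy traffic: checking whether the back roads are usable.")
            if weather == "rain":
                inner_voice("Back roads flood in rain, so I'll sit in traffic.")
                return "highway"
            inner_voice("Back roads are clear; taking them.")
            return "back roads"
        inner_voice("Traffic is light; the highway is fastest.")
        return "highway"

    print(choose_route(traffic="heavy", weather="rain"))

Run it and you get a little running narrative of the program's "process," followed by its decision. Same logic, same output you'd use to hunt a bug, just dressed up as introspection.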
As for trust, I just drove cross-country in a car with a "driver assist" function so good that I came to trust it *a lot* in light traffic: finding and opening a water bottle, changing the playlist on Spotify. Then I caught myself starting to do the same stuff in a car without the feature. Yikes!