The company that makes Sophia, Hanson Robotics, has become adept at linking different, highly specific algorithms, such as image recognition and speech transcription, in a way that mimics what humans might be doing when we hear a question and formulate a response.
Sophia AI’s mimicry of “what humans might be doing when we hear a question and formulate a response” is mostly “theatrics,” Hanson Robotics CTO Ben Goertzel openly admits. That is probably why Sophia AI has so far found her most receptive audiences on TV talk shows and in corporate theater, where she won’t have to undergo too much scrutiny. But with the launch of SingularityNET, which promises to put “Sophia’s entire mind…on the network,” Hanson says that “soon…the whole world will be able to talk to her.”
I would offer that talking “to” Sophia AI — or using Sophia’s chatbot function — is still a long way from conversation in any meaningful sense of the word, because it does not involve talking with a second person. This inconvenient truth about Sophia AI has not prevented the Saudi government from naming Sophia the first “robot citizen” of the Kingdom (and the grim irony of “a robot simulation of a woman [enjoying] freedoms that flesh-and-blood women in Saudi Arabia do not” was not lost on the Washington Post); nor has it prevented tabloids from screeching about Sophia stating she would like to have a family.
If personhood is setting the bar too high, I’m content to consider merely how Sophia AI handles asking. This would involve some of the considerations I’ve been exploring in my posts on The Asking Project: what we “might be doing” (as the writer in Quartz puts it) when we ask or hear a question; what’s involved, and what’s at stake, when we address others with a request or demand; and how these and other interrogative activities might be involved in our (moral) status as persons.
For starters, here are half a dozen questions about asking and Sophia AI that occurred to me after watching her video performances. I suspect there is a clear answer to the first; the remaining five require some extended discussion.
1. What syntactic, grammatical or other cues (e.g., intonation) does Sophia AI use to recognize a question, and distinguish it from a declarative statement?
2. Can Sophia AI distinguish a request from a demand? A demand from an order? If so, how is this done? If not, what does this shortcoming indicate?
3. Will Sophia AI ever refuse to comply with a request? Leave a demand unmet? Defy an order? If not, how should these incapacities limit the role of Sophia or any AI?
4. Could a demand ever create in Sophia AI a sense of obligation? If so, what might this “sense” entail? Can we speak coherently of AI rights, or even place limits on AI’s role, without first developing this sense?
5. Will Sophia AI ever be capable of deliberating with others and reaching consensus or agreement?
6. What would be required for Sophia AI to deliberate internally? To be capable of asking herself?
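To make the first question concrete: the surface cues it mentions can be approximated in code. What follows is a minimal, rule-based sketch of text-only question detection, written purely for illustration. It is not Hanson Robotics' method, and the cue lists (`WH_WORDS`, `AUX_VERBS`) and the function `looks_like_question` are my own hypothetical names; a production system would presumably use a trained classifier and, for speech, intonation features this sketch cannot see.

```python
import re

# Surface cues for interrogatives in English text. These sets are
# illustrative assumptions, not an exhaustive grammar.
WH_WORDS = {"who", "what", "when", "where", "why", "how",
            "which", "whose", "whom"}
AUX_VERBS = {"do", "does", "did", "is", "are", "was", "were",
             "can", "could", "will", "would", "should", "may",
             "might", "have", "has"}

def looks_like_question(utterance: str) -> bool:
    """Guess whether an utterance is a question from surface cues:
    terminal punctuation, a leading wh-word, or subject-auxiliary
    inversion at the start of the sentence."""
    text = utterance.strip()
    if not text:
        return False
    if text.endswith("?"):  # punctuation cue
        return True
    # First word: wh-word or fronted auxiliary suggests a question
    first = re.split(r"\W+", text.lower(), maxsplit=1)[0]
    return first in WH_WORDS or first in AUX_VERBS
```

Even this toy version shows how shallow such cues are: it flags “Can you refuse an order?” and misses nothing obvious in declaratives like “Sophia is a robot.”, but it cannot distinguish a request from a demand or an order, which is exactly where questions 2 through 4 begin.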
Please may I have permission to use these questions for my own female humanoid robot?
Hi Thomas – Yes, of course, if they would be helpful, and please link to this post or credit me. I’d like to hear about the answers you come up with. In the meantime I’m going to head over to your YouTube page to learn more about your Simone project.
Thank you. If you can think of any new questions specifically for my robot, I would be glad to post them as well.
Pingback: Messerschmidt on Simone the Robot, Artificial Free Will, and AI Rights | lvgaldieri