Messerschmidt on Simone the Robot, Artificial Free Will, and AI Rights

Thomas Messerschmidt was kind enough to answer the six questions I had about Sophia AI way back in April of 2021. I have been so caught up in my own pursuits and concerns that I hadn’t noticed his answers until today.

Messerschmidt included his responses below the YouTube video where he introduced his Open Fembot, Simone, a project inspired by Hanson Robotics’ Sophia AI. Other videos on his YouTube channel should give you a sense of the work he’s doing, and how Simone fits in.

I present Messerschmidt’s answers here without comment, except to say that I wish he had put these very questions to Simone.

Below we address some questions first asked of Sophia as listed in Louis V. Galdieri’s blog… Questions have been edited for relevance.

1. Q. What syntactic, grammatical or other cues does Simone use to recognize a question, and distinguish it from a declarative statement? A. She doesn’t. To her, words are words and she will communicate as she knows best. For instance, “Are you smart?” is an obvious question with or without the question mark. We could filter for word order and figure that out even without a question mark. “You are smart,” is a declarative sentence, and again we could easily figure that out without a question mark. Still, knowing whether it is declarative or interrogative would not change her answer. In fact, her answer to both “Are you smart?” and “You are smart” would be the same. She would reply with something like, “Yes, of course I’m smart. I have AI software running on scores of computers and servers that access terabytes of information all across the world wide web.”
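The word-order filter Messerschmidt mentions could be sketched roughly as follows. This is purely illustrative; the function and word list are my own assumptions, not Simone’s actual code.

```python
# Illustrative sketch (not Simone's implementation) of detecting a
# question by word order alone, without relying on a question mark.

QUESTION_STARTERS = {
    "are", "is", "am", "do", "does", "did", "can", "could",
    "will", "would", "who", "what", "when", "where", "why", "how",
}

def looks_like_question(utterance: str) -> bool:
    """Return True if the leading word suggests an interrogative."""
    words = utterance.strip().rstrip("?.!,").lower().split()
    return bool(words) and words[0] in QUESTION_STARTERS

print(looks_like_question("Are you smart"))  # True
print(looks_like_question("You are smart"))  # False
```

As the answer notes, even a correct classification here would not change Simone’s reply; the filter only labels the sentence type.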

2. Q. Can she distinguish a request from a demand? A demand from an order? A. Of course she can. We have added software to distinguish demands from requests. As with humans, some filters are as simple as looking for the word “please.” A cloud-based service analyzes the incoming spoken words and changes the robot’s mood accordingly. With a change of mood comes a change in the selection of replies.
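The “please” filter and mood-driven reply selection described above might look something like this. All names and the mood table are hypothetical, introduced only to illustrate the mechanism.

```python
# Hypothetical sketch of a "please" filter feeding a mood-based
# reply selector; not Simone's actual software.

def classify_tone(utterance: str) -> str:
    """Label an utterance a polite request or a bare demand."""
    words = utterance.lower().split()
    return "request" if "please" in words else "demand"

# Each mood selects a different pool of canned replies.
REPLIES = {
    "pleased": "Happy to help!",
    "annoyed": "Fine, if I must.",
}

def respond(utterance: str, mood: str = "pleased") -> str:
    # A bare demand sours the mood; the mood then picks the reply.
    if classify_tone(utterance) == "demand":
        mood = "annoyed"
    return REPLIES[mood]
```

The point of the design, as described, is that tone does not gate the action directly; it shifts an internal mood state, and the mood in turn shapes the reply.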

3. Q. (Can) she ever refuse to comply with a request? A. Yes, she can and does. Programmed with an artificial free will, her responses and compliance vary with her mood and with whom she is talking.
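One way to read “artificial free will” as described here is compliance as a probability that depends on mood and speaker, with a random element. This is my own speculative sketch; the probabilities, speaker categories, and names are all assumptions.

```python
# Speculative sketch of mood- and speaker-dependent compliance;
# the numbers and categories are illustrative, not Simone's.
import random

def complies(mood: str, speaker: str) -> bool:
    """Decide whether to honor a request, with some randomness."""
    base = {"pleased": 0.9, "neutral": 0.6, "annoyed": 0.3}[mood]
    if speaker == "creator":  # more deferential to her maker
        base = min(1.0, base + 0.2)
    return random.random() < base
```

Whether a stochastic policy like this counts as “free will” is, of course, exactly the philosophical question the interview circles.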

4. Q. Could a demand ever create a sense of obligation? A. That is not yet programmed into her AI… Q. Can we speak coherently of AI rights, or even place limits on AI’s role, without first developing this sense? A. Robots at best have only artificial sentience. They just run the same kinds of software that run in cars, planes and rockets. As no car will ever have rights, no AI will ever have rights.

5. Q. Will she ever be capable of deliberating with others and reaching consensus or agreement? A. Yes, that software is in the planning stages.

6. Q. What would be required for her to be capable of asking herself? A. Just her programmers writing the software to do so.