
ChatGPT does Public Relations in the Wake of an Industrial Accident

This experiment took all of two minutes. It’s got empathy, accountability, and soothing words for investors. A few edits and it’s good to go.

Granted, this is just a little test run. A lark. My request wasn’t even carefully worded. But if you can’t see how this technology will render whole departments, functions, and consultancies redundant, you’re in denial.

Messerschmidt on Simone the Robot, Artificial Free Will, and AI Rights

Thomas Messerschmidt was kind enough to answer the six questions I had about Sophia AI way back in April of 2021. I have been so caught up in my own pursuits and concerns that I hadn’t noticed his answers until today.

Messerschmidt included his responses below the YouTube video where he introduced his Open Fembot, Simone, a project inspired by Hanson Robotics’ Sophia AI. Other videos on his YouTube channel should give you a sense of the work he’s doing, and how Simone fits in.

I present Messerschmidt’s answers here largely without comment, except to say that I wish he had put these very questions to Simone.

Below we address some questions first asked of Sophia as listed in Louis V. Galdieri’s blog… Questions have been edited for relevance.

1. Q. What syntactic, grammatical or other cues does Simone use to recognize a question, and distinguish it from a declarative statement? A. She doesn’t. To her, words are words and she will communicate as she knows best. For instance, “Are you smart?” is an obvious question with or without the question mark. We could filter for word order and figure that out even without a question mark. “You are smart,” is a declarative sentence, and again we could easily figure that out without a question mark. Still, knowing whether it is declarative or interrogative would not change her answer. In fact, her answer to both “Are you smart?” and “You are smart” would be the same. She would answer with a reply like this: “Yes, of course I’m smart. I have AI software running on scores of computers and servers that access terabytes of information all across the world wide web.” [A minimal sketch of such a word-order filter appears below, after his answers.]

2. Q. Can she distinguish a request from a demand? A demand from an order? A. Of course she can. We have added software to filter demands from requests. As with humans, some filters are as simple as looking for the word “please.” A cloud-based service also looks at the incoming spoken words and changes the robot’s mood accordingly. With a change of mood comes a change in the selection of replies. [A toy sketch of this filter likewise appears below.]

3. Q. (Can) she ever refuse to comply with a request? A. Yes, she can and does. Programmed with an artificial free will, her responses and compliance vary with her mood and with whom she is talking.

4. Q. Could a demand ever create a sense of obligation? A. That is not yet programmed into her AI… Q. Can we speak coherently of AI rights, or even place limits on AI’s role, without first developing this sense? A. Robots at best only have artificial sentience. They just run the same kinds of software that run in cars, planes, and rockets. As no car will ever have rights, no AI will ever have rights.

5. Q. Will she ever be capable of deliberating with others and reaching consensus or agreement? A. Yes, that software is in the planning stages.

6. Q. What would be required for her to be capable of asking herself? A. Just her programmers writing the software to do so.
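
Two of these answers describe mechanisms concrete enough to sketch. First, the word-order filter from answer 1: a check like the one Messerschmidt gestures at can be as simple as asking whether a sentence leads with an auxiliary verb or a wh-word. The sketch below is mine, not Simone’s software, and the word lists are purely illustrative.

```python
# A minimal sketch of the word-order filter described in answer 1: if
# an auxiliary verb or wh-word leads the sentence, treat it as a
# question. Illustrative only; this is not Simone's actual software.

AUXILIARIES = {"am", "is", "are", "was", "were", "do", "does", "did",
               "can", "could", "will", "would", "shall", "should",
               "may", "might", "must", "have", "has", "had"}
WH_WORDS = {"who", "what", "when", "where", "why", "how", "which"}

def is_question(utterance: str) -> bool:
    words = utterance.lower().strip(" ?.!").split()
    if not words:
        return False
    # "Are you smart" leads with an inverted auxiliary; "You are smart" does not.
    return words[0] in AUXILIARIES or words[0] in WH_WORDS

print(is_question("Are you smart?"))  # True, with or without the "?"
print(is_question("You are smart."))  # False
```

A filter this crude fails on rising-intonation declaratives and plenty of other edge cases, which is consistent with Messerschmidt’s larger point: for Simone, the distinction often would not change the reply anyway.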
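
Second, the “please” filter and mood update from answer 2. In the toy version below, a float stands in for the cloud-based mood service, and every name is hypothetical.

```python
# A toy sketch of the request/demand filter and mood update described
# in answer 2. A float stands in for the cloud-based mood service;
# none of this is the robot's actual software.

POLITE_MARKERS = ("please", "would you", "could you", "kindly")

def classify(utterance: str) -> str:
    lowered = utterance.lower()
    return "request" if any(m in lowered for m in POLITE_MARKERS) else "demand"

def update_mood(mood: float, kind: str) -> float:
    # Politeness nudges the mood up; bare demands nudge it down.
    delta = 0.1 if kind == "request" else -0.1
    return max(0.0, min(1.0, mood + delta))

mood = 0.5
for line in ("Please open the door.", "Open the door now!"):
    kind = classify(line)
    mood = update_mood(mood, kind)
    print(f"{line!r} -> {kind} (mood={mood:.1f})")
```

A different pool of canned replies could then be keyed to mood bands, which is all “with a change of mood comes a change in the selection of replies” needs to mean at this level of description.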

Debate without demand? A Note on Project Debater

Harish Natarajan takes on Project Debater at an IBM event.

Debate without demand is shorthand for a set of qualms, concerns, and questions I have about the autonomous debating system — an AI called Project Debater that can engage in debate challenges with human beings — developed by researchers at IBM and discussed most recently in a March 17 article in Nature. A non-paywalled write-up by Chris Reed and other press coverage of Project Debater do not settle these concerns.

I am unsure about nearly everything I want to say here (which is, by the way, something Project Debater cannot be), but the one thing I am prepared to say is that Project Debater looks like another AI parlor trick or corporate dog-and-pony show. To their credit, the IBM researchers recognize the problem. Here’s how they sum it up in their abstract:

We also highlight the fundamental differences between debating with humans as opposed to challenging humans in game competitions, the latter being the focus of classical ‘grand challenges’ pursued by the AI research community over the past few decades. We suggest that such challenges lie in the ‘comfort zone’ of AI, whereas debating with humans lies in a different territory, in which humans still prevail, and for which novel paradigms are required to make substantial progress.

While one writer claims Project Debater “is capable of arguing against humans in a meaningful way,” that seems like a real stretch, and it’s good to see the researchers behind the project do not seem ready to go that far.

I’d hold off for other reasons. Project Debater can argue for the proposition assigned to it in a structured debate game, but the AI does not care about the argument it’s advancing; it has no reason to argue. And, even more importantly, it is not jointly committed with us to the activity of arguing. How could it be?

AI still has nothing to ask of us, nothing to demand, nothing we can recognize as a legitimate complaint. Those moral coordinations are reserved for persons. So the outcome of the debate does not matter to the AI debater; there are no life stakes for it. For this reason, the debate game looks morally empty. This is how Reed describes it:

Backed by its argumentation techniques and fuelled by its processed data sets, the system creates a 4-minute speech that opens a debate about a topic from its repertoire, to which a human opponent responds. It then reacts to its opponent’s points by producing a second 4-minute speech. The opponent replies with their own 4-minute rebuttal, and the debate concludes with both participants giving a 2-minute closing statement.
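
For what it’s worth, the format Reed describes reduces to a fixed sequence of timed turns. Encoding it as data (my own shorthand; the quoted passage gives no length for the human’s first response, so I leave it unstated) makes the game-like structure plain:

```python
# The debate format from the quoted passage as a fixed turn sequence.
# Labels are mine; the human's first-response length is not stated
# above, so it is left as None rather than guessed.

DEBATE_FORMAT = [
    ("opening", "system", 4),     # AI opens with a 4-minute speech
    ("response", "human", None),  # opponent responds (length unstated)
    ("rebuttal", "system", 4),    # AI's second 4-minute speech
    ("rebuttal", "human", 4),     # opponent's own 4-minute rebuttal
    ("closing", "system", 2),     # 2-minute closing statements
    ("closing", "human", 2),
]

known = sum(m for _, _, m in DEBATE_FORMAT if m is not None)
print(f"Known speaking time: {known} minutes")  # 16 minutes
```

Spelling it out this way underscores the moral emptiness: nothing in the structure requires the system to care who wins. It is a schedule, not a stake.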

The same, of course, cannot be said for the real-world consequences such debate games might have, or for the efforts by these researchers and others to produce an argumentative AI. These experiments are fraught with moral complexity, peril, and maybe even some promise.

Six Questions about Asking and Sophia AI


The company that makes Sophia, Hanson Robotics, has become adept at linking different, highly-specific algorithms like image recognition and speech transcription in a way that mimics what humans might be doing when we hear a question and formulate a response.
qz.com

Sophia AI’s mimicry of “what humans might be doing when we hear a question and formulate a response” is mostly “theatrics,” Hanson Robotics CTO Ben Goertzel openly admits. That is probably why Sophia AI has so far found her most receptive audiences on TV talk shows and in corporate theater, where she won’t have to undergo too much scrutiny. But with the launch of SingularityNET, which promises to put “Sophia’s entire mind…on the network,” Hanson says that “soon…the whole world will be able to talk to her.”
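
A hypothetical sketch may clarify what “linking different, highly-specific algorithms” amounts to in practice: narrow, off-the-shelf components chained end to end. Every name below is invented for illustration; this is not Hanson Robotics’ architecture.

```python
# A hypothetical pipeline of the kind Quartz describes: specialized
# components chained end to end. All names here are invented; this is
# not Hanson Robotics' code.

def transcribe_speech(audio: bytes) -> str:
    # stand-in for an off-the-shelf speech-to-text model
    return audio.decode("utf-8")

def pick_reply(text: str) -> str:
    # stand-in for the chatbot step: canned, pattern-matched replies
    canned = {"are you smart": "Yes, of course I'm smart."}
    return canned.get(text.lower().rstrip("?!. "), "Tell me more.")

def synthesize_speech(reply: str) -> bytes:
    # stand-in for an off-the-shelf text-to-speech model
    return reply.encode("utf-8")

def respond(audio: bytes) -> bytes:
    # Each stage is a separate, narrow component; nothing in the chain
    # "understands" the exchange as a whole.
    return synthesize_speech(pick_reply(transcribe_speech(audio)))

print(respond(b"Are you smart?").decode("utf-8"))
```

That the chain can be this shallow is part of why “theatrics” is the right word: the mimicry lives in the staging, not in the pipeline.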

I would offer that talking “to” Sophia AI — or using Sophia’s chatbot function — is still a long way from conversation in any meaningful sense of the word, because it does not involve talking with a second person. This inconvenient truth about Sophia AI has not prevented the Saudi government from naming Sophia the first “robot citizen” of the Kingdom (and the grim irony of “a robot simulation of a woman [enjoying] freedoms that flesh-and-blood women in Saudi Arabia do not” was not lost on the Washington Post); nor has it prevented tabloids from screeching about Sophia stating she would like to have a family.

If personhood is setting the bar too high, I’m content to consider merely how Sophia AI handles asking. This would involve some of the considerations I’ve been exploring in my posts on The Asking Project: what we “might be doing” (as the writer in Quartz puts it) when we ask or hear a question; what’s involved, and what’s at stake, when we address others with a request or demand; and how these and other interrogative activities might be involved in our (moral) status as persons.

For starters, here are half a dozen questions about asking and Sophia AI that occurred to me after watching her video performances. I suspect the first has a clear answer; the remaining five require more extended discussion.

1. What syntactic, grammatical or other cues (e.g., intonation) does Sophia AI use to recognize a question, and distinguish it from a declarative statement?

2. Can Sophia AI distinguish a request from a demand? A demand from an order? If so, how is this done? If not, what does this shortcoming indicate?

3. Will Sophia AI ever refuse to comply with a request? Leave a demand unmet? Defy an order? If not, how should these incapacities limit the role of Sophia or any AI?

4. Could a demand ever create in Sophia AI a sense of obligation? If so, what might this “sense” entail? Can we speak coherently of AI rights, or even place limits on AI’s role, without first developing this sense?

5. Will Sophia AI ever be capable of deliberating with others and reaching consensus or agreement?

6. What would be required for Sophia AI to deliberate internally? To be capable of asking herself?