Debate without demand? A Note on Project Debater

Harish Natarajan takes on Project Debater at an IBM event.

Debate without demand is shorthand for a set of qualms, concerns, and questions I have about the autonomous debating system — an AI called Project Debater that can engage in debate challenges with human beings — developed by researchers at IBM and discussed most recently in a March 17 article in Nature. A non-paywalled write-up by Chris Reed and other press coverage of Project Debater do not settle these concerns.

I am unsure about nearly everything I want to say here (which is, by the way, something Project Debater cannot be), but the one thing I am prepared to say is that Project Debater looks like another AI parlor trick or corporate dog-and-pony show. To their credit, the IBM researchers recognize the problem. Here’s how they sum it up in their abstract:

We also highlight the fundamental differences between debating with humans as opposed to challenging humans in game competitions, the latter being the focus of classical ‘grand challenges’ pursued by the AI research community over the past few decades. We suggest that such challenges lie in the ‘comfort zone’ of AI, whereas debating with humans lies in a different territory, in which humans still prevail, and for which novel paradigms are required to make substantial progress.

While one writer claims Project Debater “is capable of arguing against humans in a meaningful way,” that seems like a real stretch, and it’s good to see that the researchers behind the project do not seem ready to go that far.

I’d hold off for other reasons. Project Debater can argue for the proposition assigned to it in a structured debate game, but the AI does not care about the argument it is advancing; it has no reason to argue. And, even more importantly, it is not jointly committed with us to the activity of arguing. How could it be?

AI still has nothing to ask of us, nothing to demand, nothing we can recognize as a legitimate complaint. Those moral coordinations are reserved for persons. So the outcome of the debate does not matter to the AI debater; there are no life stakes for it. For this reason, the debate game looks morally empty. This is how Reed describes it:

Backed by its argumentation techniques and fuelled by its processed data sets, the system creates a 4-minute speech that opens a debate about a topic from its repertoire, to which a human opponent responds. It then reacts to its opponent’s points by producing a second 4-minute speech. The opponent replies with their own 4-minute rebuttal, and the debate concludes with both participants giving a 2-minute closing statement.
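For concreteness, the format Reed describes can be written down as a fixed turn schedule. The sketch below is mine, not IBM’s; in particular, Reed does not say which side gives its closing statement first, so that ordering is a guess.

```python
# A toy rendering of the debate format Reed describes. The names are
# mine, not IBM's; the order of the closing statements is a guess,
# since Reed does not specify who closes first.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "Project Debater" or "human opponent"
    phase: str
    minutes: int

DEBATE_FORMAT = [
    Turn("Project Debater", "opening speech", 4),
    Turn("human opponent", "response", 4),
    Turn("Project Debater", "reaction to opponent's points", 4),
    Turn("human opponent", "rebuttal", 4),
    Turn("Project Debater", "closing statement", 2),
    Turn("human opponent", "closing statement", 2),
]

if __name__ == "__main__":
    for i, turn in enumerate(DEBATE_FORMAT, 1):
        print(f"{i}. {turn.speaker}: {turn.phase} ({turn.minutes} min)")
```

What the schedule makes plain is how little of a debate it captures: the turns and timings are all there, and everything that matters, the caring, the commitment, the stakes, is not.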

The same, of course, cannot be said of the real-world consequences such debate games might have, or of the efforts by these researchers and others to produce an argumentative AI. These experiments are fraught with moral complexity, peril, and maybe even some promise.

Six Questions about Asking and Sophia AI

The company that makes Sophia, Hanson Robotics, has become adept at linking different, highly-specific algorithms like image recognition and speech transcription in a way that mimics what humans might be doing when we hear a question and formulate a response.
qz.com
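To make that “linking” concrete, here is a rough sketch of the shape such a pipeline might take. It is a guess at the architecture, not Hanson Robotics’ code; every function in it is a hypothetical stand-in.

```python
# A rough sketch of chaining narrow, task-specific components, in the
# spirit of the pipeline Quartz describes. All functions here are
# hypothetical stand-ins, not Hanson Robotics' actual modules.

def transcribe_speech(audio: bytes) -> str:
    """Speech-to-text stand-in: would call an ASR model or service."""
    raise NotImplementedError

def recognize_faces(frame: bytes) -> list[str]:
    """Image-recognition stand-in: would return detected identities."""
    raise NotImplementedError

def generate_reply(utterance: str, context: dict) -> str:
    """Response stand-in: e.g., a retrieval- or template-based chatbot."""
    raise NotImplementedError

def respond(audio: bytes, frame: bytes) -> str:
    """Glue code: the 'mimicry' is routing outputs between modules."""
    utterance = transcribe_speech(audio)
    context = {"faces": recognize_faces(frame)}
    return generate_reply(utterance, context)
```

The point of the sketch is that the glue is mechanical: each module hands its output to the next, and nothing in the chain hears a question, or cares about the answer, the way a person would.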

Sophia AI’s mimicry of “what humans might be doing when we hear a question and formulate a response” is mostly “theatrics,” Hanson Robotics CTO Ben Goertzel openly admits. That is probably why Sophia AI has so far found her most receptive audiences on TV talk shows and in corporate theater, where she won’t have to undergo too much scrutiny. But with the launch of SingularityNET, which promises to put “Sophia’s entire mind…on the network,” Hanson says that “soon…the whole world will be able to talk to her.”

I would offer that talking “to” Sophia AI — or using Sophia’s chatbot function — is still a long way from conversation in any meaningful sense of the word, because it does not involve talking with a second person. This inconvenient truth about Sophia AI has not prevented the Saudi government from naming Sophia the first “robot citizen” of the Kingdom (and the grim irony of “a robot simulation of a woman [enjoying] freedoms that flesh-and-blood women in Saudi Arabia do not” was not lost on the Washington Post); nor has it prevented tabloids from screeching about Sophia stating she would like to have a family.

If personhood is setting the bar too high, I’m content to consider merely how Sophia AI handles asking. This would involve some of the considerations I’ve been exploring in my posts on The Asking Project: what we “might be doing” (as the writer in Quartz puts it) when we ask or hear a question; what’s involved, and what’s at stake, when we address others with a request or demand; and how these and other interrogative activities might be involved in our (moral) status as persons.

For starters, here are half a dozen questions about asking and Sophia AI that occurred to me after watching her video performances. I suspect there is a clear answer to the first, while the remaining five require some extended discussion.

1. What syntactic, grammatical, or other cues (e.g., intonation) does Sophia AI use to recognize a question and distinguish it from a declarative statement? (A toy sketch of such surface cues follows this list.)

2. Can Sophia AI distinguish a request from a demand? A demand from an order? If so, how is this done? If not, what does this shortcoming indicate?

3. Will Sophia AI ever refuse to comply with a request? Leave a demand unmet? Defy an order? If not, how should these incapacities limit the role of Sophia or any AI?

4. Could a demand ever create in Sophia AI a sense of obligation? If so, what might this “sense” entail? Can we speak coherently of AI rights, or even place limits on AI’s role, without first developing this sense?

5. Will Sophia AI ever be capable of deliberating with others and reaching consensus or agreement?

6. What would be required for Sophia AI to deliberate internally? To be capable of asking herself?
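As promised, here is a toy answer to question 1: a handful of surface cues that any such system could check. It is purely illustrative; I do not know which cues Sophia AI actually uses.

```python
# Toy surface cues for recognizing a question (question 1 above).
# Purely illustrative; I do not know which cues Sophia AI actually uses.
WH_WORDS = {"who", "what", "when", "where", "why", "how", "which", "whose"}
AUX_VERBS = {"do", "does", "did", "is", "are", "was", "were", "am",
             "can", "could", "will", "would", "should", "shall", "may", "might"}

def looks_like_question(text: str, rising_intonation: bool = False) -> bool:
    """Guess whether an utterance is a question from shallow cues."""
    stripped = text.strip()
    if not stripped:
        return False
    if stripped.endswith("?"):             # punctuation cue (written input)
        return True
    words = stripped.lower().rstrip("?!.").split()
    if words and words[0] in WH_WORDS:     # wh-fronting ("where is...")
        return True
    if words and words[0] in AUX_VERBS:    # subject-auxiliary inversion
        return True
    return rising_intonation               # prosodic cue (spoken input)

# looks_like_question("can you hear me")                    -> True
# looks_like_question("i can hear you")                     -> False
# looks_like_question("nice day", rising_intonation=True)   -> True
```

That the shallow version is this easy is rather the point: detecting the surface form of a question is trivial, and everything the remaining five questions ask about begins after that.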