Debate without demand? A Note on Project Debater

Harish Natarajan takes on Project Debater at an IBM event.

Debate without demand is shorthand for a set of qualms, concerns, and questions I have about the autonomous debating system — an AI called Project Debater that can engage in debate challenges with human beings — developed by researchers at IBM and discussed most recently in a March 17 article in Nature. A non-paywalled write-up by Chris Reed and other press coverage of Project Debater do not settle these concerns.

I am unsure about nearly everything I want to say here (which is, by the way, something Project Debater cannot be), but the one thing I am prepared to say is that Project Debater looks like another AI parlor trick or corporate dog-and-pony show. To their credit, the IBM researchers recognize the problem. Here’s how they sum it up in their abstract:

We also highlight the fundamental differences between debating with humans as opposed to challenging humans in game competitions, the latter being the focus of classical ‘grand challenges’ pursued by the AI research community over the past few decades. We suggest that such challenges lie in the ‘comfort zone’ of AI, whereas debating with humans lies in a different territory, in which humans still prevail, and for which novel paradigms are required to make substantial progress.

While one writer claims Project Debater “is capable of arguing against humans in a meaningful way,” that seems like a real stretch, and it’s good to see that the researchers behind the project are not ready to go that far.

I’d hold off for other reasons. Project Debater can argue for the proposition assigned to it in a structured debate game, but the AI does not care about the argument it’s advancing; it has no reason to argue. And, even more importantly, it is not jointly committed with us to the activity of arguing. How could it be?

AI still has nothing to ask of us, nothing to demand, nothing we can recognize as a legitimate complaint. Those moral coordinations are reserved for persons. So the outcome of the debate does not matter to the AI debater; there are no life stakes for it. For this reason, the debate game looks morally empty. This is how Reed describes it:

Backed by its argumentation techniques and fuelled by its processed data sets, the system creates a 4-minute speech that opens a debate about a topic from its repertoire, to which a human opponent responds. It then reacts to its opponent’s points by producing a second 4-minute speech. The opponent replies with their own 4-minute rebuttal, and the debate concludes with both participants giving a 2-minute closing statement.

The same, of course, cannot be said for the real-world consequences such debate games might have, or for the efforts by these researchers and others to produce an argumentative AI. These experiments are fraught with moral complexity, peril, and maybe even some promise.
