
Debate without demand? A Note on Project Debater

Harish Natarajan takes on Project Debater at an IBM event.

Debate without demand is shorthand for a set of qualms, concerns, and questions I have about the autonomous debating system — an AI called Project Debater that can engage in debate challenges with human beings — developed by researchers at IBM and discussed most recently in a March 17 article in Nature. A non-paywalled write-up by Chris Reed and other press coverage of Project Debater do not settle these concerns.

I am unsure about nearly everything I want to say here (which is, by the way, something Project Debater cannot be), but the one thing I am prepared to say is that Project Debater looks like another AI parlor trick or corporate dog and pony show. To their credit, the IBM researchers recognize the problem. Here’s how they sum it up in their abstract:

We also highlight the fundamental differences between debating with humans as opposed to challenging humans in game competitions, the latter being the focus of classical ‘grand challenges’ pursued by the AI research community over the past few decades. We suggest that such challenges lie in the ‘comfort zone’ of AI, whereas debating with humans lies in a different territory, in which humans still prevail, and for which novel paradigms are required to make substantial progress.

While one writer claims Project Debater “is capable of arguing against humans in a meaningful way,” that seems like a real stretch, and it’s good to see the researchers behind the project do not seem ready to go that far.

I’d hold off for other reasons. Project Debater can argue for the proposition assigned to it in a structured debate game, but AI does not care about the argument it’s advancing; it has no reason to argue. And, even more importantly, it is not jointly committed with us to the activity of arguing. How could it be?

AI still has nothing to ask of us, nothing to demand, nothing we can recognize as a legitimate complaint. Those moral coordinations are reserved for persons. So the outcome of the debate does not matter to the AI debater; there are no life stakes for it. For this reason, the debate game looks morally empty. This is how Reed describes it:

Backed by its argumentation techniques and fuelled by its processed data sets, the system creates a 4-minute speech that opens a debate about a topic from its repertoire, to which a human opponent responds. It then reacts to its opponent’s points by producing a second 4-minute speech. The opponent replies with their own 4-minute rebuttal, and the debate concludes with both participants giving a 2-minute closing statement.

The same, of course, cannot be said for the real-world consequences such debate games might have, or for the efforts by these researchers and others to produce an argumentative AI. These experiments are fraught with moral complexity, peril, and maybe even some promise.

Another Thought On Gessen’s Shift

In response to a comment on yesterday’s post about Masha Gessen’s “Trump: The Choice We Face,” I remarked that the opposition Gessen sets up in her essay between realist and moral reasoning seems a little too clean and stark. It is also not one we can carry over, intact, into political life.

We should like to be able to choose, always, between right and wrong, and do what is right; but life does not present itself in these terms, and it’s easy to imagine cases in which moral reasoning might prevail and political action would thereby be limited, or impossible; where strict adherence to the moral could usher in its own Robespierrean terrors; or where we simply fail to take into account the extent to which moral reasoning is already conditioned and determined by the actual, by the real.

Of course we should try to temper realism with moral reasoning, but we should probably not complete Gessen’s shift: we can never operate entirely from one side or the other.

It’s important to recognize the shortcomings of the transactional and still reserve the power to deliberate about what to do and about the outcomes we would like to see. A balanced view wouldn’t force the choice between realism and morality, but allow for the fact that sometimes people have to get their hands dirty; and when they must, they can and should act while remaining fully aware — at times they will be tragically aware — of the moral difficulties in which they have entangled themselves.

It’s rare in life, and in political life rarer still, that we are able simply to substitute moral reasoning about right and wrong for practical deliberation, just as it’s always cold and inhuman to reduce practical deliberation to a calculation of costs and outcomes without consideration of what we owe to ourselves and others.