Discussion about this post

David J. Winters

The study finds a greater potential to persuade among people who already admit to wanting their beliefs challenged (they are, after all, users of r/changemyview). Generalizing this to a greater potential for AI to persuade, simpliciter, is wrongheaded, as virtually no one is amenable to having even a small number of their views openly challenged in most any other context. I'd be more worried if the researchers' AIs had managed to make a believer out of even a single person on r/atheism, say; that kind of negative control (convincing staunch believers in X to believe not-X) would be much more telling.

The researchers also compared their response (not a single AI response, but an AI-determined "best of 16" that was assessed by the researchers anyway, making it not really AI-determined at all) to those of ordinary r/changemyview users. A likely confounder here is the frequent lack of decorum, even arrogance, of the average Reddit user, incommensurate with most other contexts and absent from AI-generated responses, which causes sound arguments to be written off and ignored before they can even be assessed. The refreshing presence of a thoughtful, decorous respondent among the callous might also be sufficient for persuasion in itself: an OP discouraged by thoughtless, callous responses is more than happy to validate a neutral response for its neutrality alone, where they never would in a context of sufficient overall decorum. Being 6x more persuasive than the average Redditor is therefore not such a feat. Compare the immediate, unvetted responses of AI bots to those of humans closer to a baseline of social decorum, and then we'll talk.
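The "best of 16" point can be illustrated with a toy simulation (the numbers and distribution are made up for illustration, not taken from the study): if each candidate response has some latent persuasiveness score, reporting the best of 16 draws rather than a single draw inflates the apparent quality of the AI's output.

```python
import random

random.seed(0)

# Toy model: each response's persuasiveness is an independent N(0, 1) draw.
def single_draw():
    """One unvetted response."""
    return random.gauss(0, 1)

def best_of(n):
    """The best of n candidate responses, as selected post hoc."""
    return max(random.gauss(0, 1) for _ in range(n))

trials = 10_000
mean_single = sum(single_draw() for _ in range(trials)) / trials
mean_best16 = sum(best_of(16) for _ in range(trials)) / trials

# mean_single sits near 0; mean_best16 is substantially higher,
# purely from the selection step, with no change in the generator.
print(round(mean_single, 2), round(mean_best16, 2))
```

The gap between the two means is an artifact of selection alone, which is the commenter's complaint: the comparison measures the generator plus a vetting step, not the generator.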

There's also survivorship bias at work, since there's no analysis of deleted posts, including why they were deleted. What if, for example, common reasons for deletion were that the responses seemed too bot-like, that they were generally unconvincing, or that they were individually convincing but mutually contradictory, and so unconvincing taken as a whole? All of these would count as evidence against the greater persuasiveness of AI, yet the possibility has been swept under the rug.

I wouldn't make too much of this study. The main concern is that humans, applying human standards, vetted all the AI responses.

Tom White

When an infinitely intelligent, productive, capable system becomes a sycophant, what could possibly go wrong?

