The study concludes that AI has a greater potential to persuade people who already admit to wanting their beliefs challenged (they are, after all, users of r/changemyview). Generalizing this to a greater potential for AI to persuade, simpliciter, is wrongheaded: virtually no one is amenable to having even a small number of their views openly challenged in almost any other context. I'd be more afraid if the researchers' AIs had managed to make believers out of even a single person on r/atheism, say, since that kind of stress test (convincing staunch believers in X to believe NOT-X) would be much more telling...
The researchers also compared their AI responses to those of "normal r/changemyview users", though what they submitted was not a single AI response but an AI-selected "best of 16", vetted by the researchers anyway, making it not really AI-determined at all (a sketch of the procedure follows below). A likely confounder here is the average Reddit user's lack of decorum, even arrogance, out of proportion to most other contexts and absent from AI-generated responses, which gets sound arguments written off and ignored before they can even be assessed. Also, the refreshing presence of a thoughtful, decorous respondent among the callous might be persuasive in itself (viz. the OP is so discouraged by the thoughtless, callous responses that they're more than happy to validate a neutral response for its neutrality alone, where they never would in a context of sufficient overall decorum). Being 6x more persuasive than the average Redditor is therefore not such a feat. Compare just the immediate, unvetted responses of AI bots to those of humans closer to a baseline of social decorum, and then we'll talk.
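For what it's worth, the "best of 16" step is easy to picture. Here's a minimal sketch in Python, with hypothetical names throughout (this is not the researchers' actual pipeline): generate n candidate replies, have a judge score each, keep the top scorer, and only then hand it to humans for vetting. That last, human step is exactly what makes the result not purely AI-determined.

```python
import random

def generate_candidates(prompt: str, n: int = 16) -> list[str]:
    # Placeholder for n independent LLM completions of the same prompt.
    return [f"candidate reply {i} to: {prompt!r}" for i in range(n)]

def judge_score(reply: str) -> float:
    # Placeholder for a model-assigned persuasiveness score.
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    # AI-side selection: argmax over judge scores.
    best = max(generate_candidates(prompt, n), key=judge_score)
    # In the study as described, humans then vetted this pick before
    # posting; that is the step the comment above objects to.
    return best

print(best_of_n("CMV: example view"))
```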
There's also survivorship bias at work here, since there's no analysis of deleted posts, including why they were deleted. What if, for example, common rationales for deletion were that the responses seemed too bot-like, were generally unconvincing, or were individually convincing but mutually contradictory, and so unconvincing taken as a whole? All of these reasons would count as evidence against the greater persuasiveness of AI, but the possibility has been swept under the rug.
I wouldn't make too much of this study. The main concern is that humans, applying human standards, vetted all the AI responses.
Do you have any concerns about their future ability to persuade?
Honestly, I've always been more concerned about a general lack of persuadability in people. Our fundamental beliefs seem to be fixed at an early age, shaped by a limited range of personality traits (themselves fixed); from there, any time we appear to be persuaded we're really just being handed a proposition consistent with our prior beliefs, so it's more like being reminded of what we've believed all along. It's as though beliefs we've always acted on are simply being made explicit.
As for attitudes toward AI on this point, I guess it comes down to cases:
AI is sufficiently persuasive AND sufficiently factual and ethical: heartening.
AI is sufficiently persuasive AND insufficiently factual and ethical: highly concerning.
AI is insufficiently persuasive AND sufficiently factual and ethical: somewhat concerning.
AI is insufficiently persuasive AND insufficiently factual and ethical: neutral.
However, I suspect LLMs will continue evolving into systems that just tell us what we want to hear, in which case they won't be persuading us, only facilitating stagnation. Some of us may ask them to change our minds (like the people on that subreddit), or at least to present all the evidence for and against our claims. In the latter case, since there's always some evidence for or against any proposition and that evidence is worth knowing, this wouldn't be a bad thing.
All that said, if a problem of credulity (or even incredulity) toward AI ever arises, it can be solved by a cultural shift toward valuing and practicing an adequate degree of skepticism and competency at critical thinking (if we have it in us).
When an infinitely intelligent, productive, capable system becomes a sycophant, what could possibly go wrong?