project of a community where people can argue with each other and an AI moderator will judge who's arguing in good faith and sort of referee the discussion, to help reduce the "there's no way to have a 'debate' if the other person's committed to just being an evasive bad-faith cunt about it" problem.
Any way to follow this project? This is a concept I've also thought about. It would probably work to prompt the model with various simple yes/no questions about a comment and what it's responding to, like "is it likely they didn't read it?" or "does it address the central point?", and then do something with the results.
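For concreteness, here's a minimal sketch of that yes/no-probe idea in Python. The probe questions, the ask_yes_no and probe_comment helpers, and call_llm are all hypothetical placeholders for illustration, not anything from the actual project; call_llm stands in for whatever chat-completion client you'd really use.

```python
# Sketch of the yes/no-probe idea: run a fixed set of binary questions
# against a (parent, reply) pair and collect the answers. Everything
# here is illustrative; `call_llm` is a stub for a real LLM client.

PROBES = [
    "Is it likely the commenter didn't read the comment they're replying to?",
    "Does the reply address the central point of the parent comment?",
    "Does the reply misrepresent what the parent comment said?",
]

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice and return its text."""
    raise NotImplementedError

def ask_yes_no(question: str, parent: str, reply: str) -> bool:
    """Ask one probe question, forcing a bare yes/no answer."""
    prompt = (
        "Answer with exactly 'yes' or 'no'.\n\n"
        f"Parent comment:\n{parent}\n\n"
        f"Reply:\n{reply}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt).strip().lower().startswith("yes")

def probe_comment(parent: str, reply: str) -> dict[str, bool]:
    """Run every probe independently and collect the raw answers."""
    return {q: ask_yes_no(q, parent, reply) for q in PROBES}
```

Forcing each probe down to a single yes/no keeps the per-question output trivially machine-readable, which matters once you start doing something with the results.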
It hasn't gone past the "idea" stage. I have an unfortunate habit of talking up all kinds of fancy stuff I want to do and only following through on maybe 20% of it, but I do think something like that would be a really good idea. If I do wind up executing on it, I will reach out.
And yes, I think having multiple prompts that analyze the comment thread and progress toward conclusions about it is the way to go. I was mucking around with, I think, a four-prompt setup to keep the LLM from going too far off the rails or trying to bite off too much of the analysis at once (and also to stop it from wanting to be "fair to everyone," which it otherwise really wants to do because of how it's been trained toward supposed neutrality).
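The post doesn't say what the four prompts actually were, so here is one guess at how such a staged pipeline could be shaped: summarize, probe, tally, verdict. All stage names, prompt text, and the call_llm placeholder below are my own illustration under that assumption, not the poster's setup.

```python
# A rough sketch of a four-stage referee pipeline. The stage split
# (summarize -> probe -> tally -> verdict) is an assumption for
# illustration; `call_llm` is a stub for a real chat-completion client.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in your actual client here

def stage_summarize(thread: str) -> str:
    """Stage 1: extract the central claims so later stages stay anchored."""
    return call_llm(f"List the central claims each participant makes:\n{thread}")

def stage_probe(thread: str, claims: str) -> str:
    """Stage 2: check each reply against the claim it purports to answer."""
    return call_llm(
        "For each reply, say whether it engages the claim it responds to "
        f"or evades it.\nClaims:\n{claims}\nThread:\n{thread}"
    )

def stage_tally(probe_results: str) -> str:
    """Stage 3: aggregate the per-reply findings per participant."""
    return call_llm(f"Tally evasions and engagements per participant:\n{probe_results}")

def stage_verdict(tally: str) -> str:
    """Stage 4: force a concrete call; forbid the reflexive both-sides answer."""
    return call_llm(
        "Based only on this tally, name which participant argued in worse "
        "faith. Do not answer that both sides were equally at fault unless "
        f"the tally is genuinely balanced:\n{tally}"
    )

def referee(thread: str) -> str:
    """Run the full pipeline over a raw comment thread."""
    claims = stage_summarize(thread)
    probes = stage_probe(thread, claims)
    tally = stage_tally(probes)
    return stage_verdict(tally)
```

Splitting the work this way also gives a natural place to push back on the trained-in "fair to everyone" reflex: the verdict stage sees only the tally and is told explicitly not to default to both-sidesing.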