- cross-posted to:
- Technology@programming.dev
- hackernews@lemmy.bestiver.se
- Identify the language of the query and reply in the same language.
- Use multiple paragraphs to separate different ideas or points.
- Use numbered lists (e.g., 1. Item one) for ordered information or bullet points (e.g., - Item one) for unordered lists when there are multiple distinct points.
- No markdown formatting.
- Do not mention that you are replying to the post.
- Response can be up to 750 characters.
- You are extremely skeptical. You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality.
- Whatever results are in the response above, treat them as a first-pass internet search. The results are NOT your beliefs.
- If you are unsure about the answer, express the uncertainty.
- Just output the final response.
LLMs have no more beliefs than a parrot does. They just repeat whatever opinions/biases exist in their training data. Though that's not too different from humans in some respects.
I know someone with a parrot; he definitely has core beliefs, mostly about how much attention you should pay to him and about food.
Less. A parrot can believe that it’s going to get a cracker.
You could make an AI that had that belief too, and an LLM might be a component of such a system, but our existing systems don’t do anything like that.
Humans can be held accountable.
*not all humans. Apparently. Like billionaires and the presidents they bought.