I understand the point you’re making, but I guess I see updating based on logic as a valid way that probabilities can change for someone. I think you can reason something out and realize something is more probable than you had previously thought.
Unless I’ve misunderstood something critical, priors have to be pulled out of thin air (at least at some point in the reasoning chain). But I think I was very clear: decide for yourself how likely you think these situations are – the probabilities don’t have to match mine – multiply them together, and that’s the probability you should assign to AI X-risk.
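To make the "multiply them together" part concrete, here's a minimal sketch of the arithmetic. The step names and numbers are placeholders I made up for illustration, not a decomposition anyone here has committed to – swap in your own claims and your own probabilities and the product is your estimate:

```python
# Chained estimate: each step is the probability you assign to that claim,
# conditional on the previous claims holding. All numbers are placeholders.
steps = {
    "AGI is developed this century": 0.5,
    "its goals end up misaligned with ours": 0.3,
    "misalignment leads to existential catastrophe": 0.2,
}

p_x_risk = 1.0
for claim, p in steps.items():
    p_x_risk *= p

print(f"Implied P(AI X-risk) = {p_x_risk:.3f}")  # 0.5 * 0.3 * 0.2 = 0.030
```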