Bayesian reasoning is only as reliable as the prior probabilities that go into the algorithm. If you can’t justify your priors, it’s no better than saying “this just feels likely to me” but with a window dressing of mathematics. It’s a great algorithm for updating concrete, known probabilities in the face of new concrete evidence, but that is not at all what’s going on with the vast majority of what the Rationalists do.
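Here's the kind of case where the machinery actually earns its keep, as a minimal Python sketch. The disease-screening numbers are hypothetical, but notice that every input is an empirically measurable quantity, which is exactly what's missing in the x-risk arguments:

```python
# Bayes' rule with empirically grounded inputs:
#   P(H | E) = P(E | H) * P(H) / P(E)
# Every number here is the kind you can actually measure. The
# disease-screening values are hypothetical, chosen for illustration.

prior = 0.01           # P(disease): measured base rate in the population
sensitivity = 0.95     # P(positive | disease): measured test sensitivity
false_positive = 0.05  # P(positive | no disease): measured false-positive rate

# Law of total probability gives P(E), the overall chance of a positive test:
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior: the updated probability of disease given the positive test.
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.161
```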
Even if you want to use it for estimating the probability of very uncertain events, the uncertainty compounds at each step. Once you get more than a step or two down that path of “let’s say the probability of x is p” without empirical justification, you should have no confidence at all that the number you’re getting bears any relationship to “true” probabilities. Again, it’s just a fancy way of saying “this feels true to me.”
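To see how fast that compounding bites, here's a quick Monte Carlo sketch; the five step probabilities and the uncertainty band around each are made-up numbers for illustration:

```python
import random

# Five chained guesses of the form "let's say P(step_i) = p_i".
# The point estimates are hypothetical, and we model our ignorance about
# each one as a uniform band from half the guess to double it (capped at 1).
guesses = [0.9, 0.7, 0.5, 0.3, 0.2]

def sampled_chain():
    """One draw of the five-step product with uncertain inputs."""
    product = 1.0
    for p in guesses:
        product *= random.uniform(p / 2, min(2 * p, 1.0))
    return product

samples = sorted(sampled_chain() for _ in range(100_000))

point_estimate = 1.0
for p in guesses:
    point_estimate *= p

print(f"naive point estimate: {point_estimate:.4f}")
print(f"5th-95th percentile:  {samples[5_000]:.4f} .. {samples[95_000]:.4f}")
# Even with this charitable, modest uncertainty on each step, the final
# number typically spreads across roughly an order of magnitude.
```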
Yes, that’s right (that it’s only as reliable as the prior probabilities that go into it).
Look at this another way, using the perspective you just shared: before applying Bayesian reasoning, one might think that AI as an x-risk sounds super fantastical and assign it an ultra-low probability. But when you break it into constituent components, as I did, it starts to sound much more plausible. We’re replacing how one feels intuitively about a single improbable-seeming event with how one feels intuitively about several more-plausible events. That isn’t a fallacy; it’s good off-the-cuff reasoning. Now we can take whichever of those components sounds most implausible and break it down further.
My goal here isn’t to actually find the exact probability of an AI apocalypse, it’s to raise a warning flag that says “hey, this is more plausible than you might initially think!”
That’s fair enough as far as it goes, but I think you’re in the minority in being explicit about that. It’s also important to be precise here: the claim this kind of reasoning lets you defend isn’t “this is more probable than you think” but rather “if you examine your beliefs carefully, you’ll see that you actually find this more plausible than you realized.” That’s a very important distinction. It’s fine, good even, to help people sort out their own subjective probabilities in a more systematic way, but we should be careful to remember that that’s what’s going on here, not an objective assessment of probability. I think many (most) Rationalists and x-risk people elide that distinction, and either make it sound like, or themselves believe, that they’re putting a real, objective numerical probability on these kinds of events. As I said, that’s not something you can do without rigorously derived and justified priors, and we simply don’t have those for things like this. It’s easy to delude yourself, or to give the wrong impression, when you’re using the Bayesian framework in a way that looks objective while pulling numbers out of thin air for your priors.
I understand the point you’re making, but I guess I see updating based on logic as a valid way that probabilities can change for someone. I think you can reason something out and realize something is more probable than you had previously thought.
Unless I’ve misunderstood something critical, priors have to be pulled out of thin air (at least at some point in the reasoning chain). But I think I was very clear: decide for yourself how likely you think these situations are (the probabilities don’t have to match mine), multiply them together, and that’s the probability you should assign to AI x-risk.
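If it helps, the whole recipe fits in a few lines; the component events and the numbers attached to them below are placeholders, not my actual estimates, so substitute your own:

```python
# "Decide how likely you think these are, then multiply them together."
# The events and numbers are hypothetical placeholders. Note that the
# multiplication is only valid if each probability is judged conditional
# on all of the previous steps having happened.
estimates = {
    "AGI is built this century": 0.5,
    "it rapidly exceeds human capability (given the above)": 0.4,
    "its goals are misaligned with ours (given the above)": 0.3,
    "the misalignment proves catastrophic (given the above)": 0.3,
}

p_xrisk = 1.0
for event, p in estimates.items():
    p_xrisk *= p

print(f"implied P(AI x-risk) = {p_xrisk:.3f}")  # 0.018 with these inputs
```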