rational enlightened beings that think the terminator from the movies is real i-cant

  • jsomae@lemmy.ml · 17 days ago

    Yes, that’s right (that it’s only as reliable as the prior probabilities that go into it).

    Look at this another way, using the perspective you just shared: before applying Bayesian reasoning, one might think that AI as an X-risk sounds super fantastical and assign it an ultra-low probability. But when you break it into constituent components like I did, it starts to sound much more plausible. We’re replacing how one feels intuitively about a certain (improbable-seeming) event with how one feels intuitively about other (more plausible) events. That isn’t a fallacy; that’s actually good off-the-cuff reasoning. Now we can look at whichever of those components sounds the most implausible and break it down further.
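
    To make that concrete, here’s a rough sketch of what I mean by “break it into components and multiply.” The component events and the numbers below are placeholders for illustration, not my actual estimates:

    ```python
    # Toy decomposition of "AI X-risk" into constituent events.
    # Each probability is meant to be read as conditional on the previous
    # events all holding; the events and numbers are made up for illustration.
    components = {
        "transformative AI is developed this century": 0.5,
        "its goals end up misaligned with ours": 0.3,
        "it gains a decisive capability advantage": 0.2,
        "we fail to contain or correct it": 0.5,
    }

    p_x_risk = 1.0
    for event, p in components.items():
        print(f"P({event}) = {p}")
        p_x_risk *= p

    print(f"chained estimate: P(AI X-risk) ~ {p_x_risk:.3f}")
    ```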

    My goal here isn’t to actually find the exact probability of an AI apocalypse, it’s to raise a warning flag that says “hey, this is more plausible than you might initially think!”

    • Philosoraptor [he/him, comrade/them]@hexbear.net · 15 days ago

      My goal here isn’t to actually find the exact probability of an AI apocalypse, it’s to raise a warning flag that says “hey, this is more plausible than you might initially think!”

      That’s fair enough as far as it goes, but I think you’re in the minority in being explicit about that. It’s also important to be precise here: the claim this kind of reasoning lets you defend isn’t “this is more probable than you think” but rather “if you examine your beliefs carefully, you’ll see that you actually find this more plausible than you were aware of.” That’s a very important distinction.

      It’s fine (good, even) to help people sort out their own subjective probabilities in a more systematic way, but we should be really careful to remember that that’s what’s going on here, not an objective assessment of probability. I think many (most) Rationalists and x-risk people elide that distinction, and either make it sound like, or themselves believe, that they’re putting a real, objective numerical probability on these kinds of events. As I said, that’s not something you can do without rigorously derived and justified priors, and we simply don’t have those for things like this. It’s easy to delude yourself, or to give the wrong impression, when you’re using the Bayesian framework in a way that looks objective while pulling the numbers for your priors out of thin air.
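
      Just to show how much is riding on the prior, here’s a toy Bayes’-rule calculation. The likelihoods and priors below are invented numbers, purely to illustrate how much an “objective-looking” posterior swings with a prior pulled out of thin air:

      ```python
      def posterior(prior, p_e_given_h, p_e_given_not_h):
          """Bayes' rule: P(H | E) = P(E | H) P(H) / P(E)."""
          p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
          return p_e_given_h * prior / p_e

      # Same "evidence", different thin-air priors -> very different posteriors.
      for prior in (0.001, 0.01, 0.1, 0.5):
          print(f"prior = {prior:<5} -> posterior = {posterior(prior, 0.7, 0.1):.3f}")
      ```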

      • jsomae@lemmy.ml · 15 days ago

        I understand the point you’re making, but I see updating based on logic as a valid way that someone’s probabilities can change. I think you can reason something out and realize something is more probable than you had previously thought.

        Unless I’ve misunderstood something critical, priors have to be pulled out of thin air (at least at some point in the reasoning chain). But I think I was very clear: decide for yourself how likely you think these situations are (the probabilities don’t have to match mine), multiply them together, and that’s the probability you should assign to AI X-risk.
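
        For example (with numbers invented purely to show the “plug in your own estimates” step; none of these are claims about the real values):

        ```python
        # Each person supplies their own subjective probabilities for the
        # constituent events; the product is their own chained estimate.
        from math import prod

        estimates = {
            "optimist":  [0.3, 0.1, 0.1, 0.2],
            "middling":  [0.5, 0.3, 0.3, 0.4],
            "pessimist": [0.8, 0.6, 0.5, 0.7],
        }

        for who, probs in estimates.items():
            print(f"{who:>9}: P(AI X-risk) ~ {prod(probs):.4f}")
        ```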