rational enlightened beings that think the terminator from the movies is real i-cant

  • Philosoraptor [he/him, comrade/them]@hexbear.net
    15 days ago

    My goal here isn’t to actually find the exact probability of an AI apocalypse, it’s to raise a warning flag that says “hey, this is more plausible than you might initially think!”

    That’s fair enough as far as it goes, but I think you’re in the minority in being explicit about that. It’s also important to be really precise here: the claim this kind of reasoning lets you defend isn’t “this is more probable than you think” but rather “if you examine your beliefs carefully, you’ll see that you actually think this is more plausible than you might be aware of.” That’s a very important distinction. It’s fine–good, even–to help people try to sort out their own subjective probabilities in a more systematic way, but we should be careful to remember that that’s what’s going on here, not an objective assessment of probability.

    I think many (most) Rationalists and x-risk people elide that distinction, and either make it sound like, or themselves believe, that they’re putting a real, objective numerical probability on these kinds of events. As I said, that’s not something you can do without rigorously derived and justified priors, and we simply don’t have those for things like this. It’s easy to delude yourself, or to give the wrong impression, when you’re using the Bayesian framework in a way that looks objective while pulling numbers out of thin air for your priors.
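    To make that concrete, here’s a minimal sketch (in Python, with entirely made-up numbers) of how much an “objective-looking” Bayesian update just echoes whatever prior you fed it. The likelihoods are held fixed; only the gut-feeling prior changes.

    ```python
    # Minimal sketch: the posterior from Bayes' rule mostly reflects the prior
    # when the prior itself is made up. All numbers here are hypothetical and
    # chosen purely for illustration.

    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    # Same evidence, same likelihoods, two different gut-feeling priors:
    for prior in (0.01, 0.3):
        print(prior, round(posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.2), 3))
    # 0.01 -> ~0.039, 0.3 -> ~0.632
    ```

    Same machinery, same evidence, wildly different numbers – which is exactly why the priors can’t just be vibes if you want the output to mean anything objective.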

    • jsomae@lemmy.ml
      15 days ago

      I understand the point you’re making, but I guess I see updating based on logic as a valid way for someone’s probabilities to change. I think you can reason something out and realize that something is more probable than you had previously thought.

      Unless I’ve misunderstood something critical, priors have to be pulled out of thin air (at least at some point in the reasoning chain). But I think I was very clear: decide for yourself how likely you think these situations are – the probabilities don’t have to match mine – then multiply them together, and that’s the probability you should assign to AI X-risk.
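      For what it’s worth, the arithmetic itself is trivial – here’s a rough sketch of that procedure, with placeholder step labels and numbers standing in for whatever conditions and probabilities you actually settle on:

      ```python
      # Rough sketch of the "multiply your own subjective probabilities" procedure.
      # The step labels and numbers are placeholders, not anyone's actual estimates;
      # each probability is meant to be conditional on the previous steps holding.

      steps = {
          "step 1: first necessary condition holds": 0.5,
          "step 2: next condition holds, given step 1": 0.3,
          "step 3: catastrophe follows, given steps 1-2": 0.2,
      }

      p_total = 1.0
      for label, p in steps.items():
          p_total *= p

      print(f"implied probability: {p_total:.3f}")  # 0.030 with these placeholders
      ```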