I’m sure there are some AI peeps here. Neural networks get more capable as they get bigger because the number of parameter configurations that solve a given task grows exponentially (or even factorially, if that’s a word) with network size. How can such a network be properly aligned when even humans, the most advanced natural neural nets, are not aligned? What can we realistically hope for?
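One standard way to make the “factorially” intuition concrete (my gloss, not something from the post): hidden units within a layer can be relabeled, together with their weights, without changing the function the network computes, so every working solution comes with at least n! functionally identical parameter settings.

```latex
% Permutation symmetry of a single hidden layer with n units:
% reindexing the units leaves the computed function unchanged.
f(x) = \sum_{i=1}^{n} v_i \, \sigma\!\left(w_i^\top x + b_i\right)
     = \sum_{i=1}^{n} v_{\pi(i)} \, \sigma\!\left(w_{\pi(i)}^\top x + b_{\pi(i)}\right)
\quad \text{for any permutation } \pi \in S_n .
```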

Here’s what I mean by alignment:

  • Ability to specify a loss function that humanity wants
  • Some strict or statistical guarantees on deviation from that loss function, as well as on potentially unaccounted-for side effects (rough sketch after this list)
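
A minimal sketch of what “task loss plus a bounded side-effect penalty” could look like (hypothetical names and thresholds, PyTorch-flavoured; none of this comes from the thread):

```python
import torch

def aligned_loss(task_loss: torch.Tensor,
                 side_effect_score: torch.Tensor,
                 max_side_effect: float = 0.1,
                 penalty_weight: float = 10.0) -> torch.Tensor:
    """Hypothetical composite objective: optimize the task loss while
    penalizing any measured side effect beyond an agreed-upon bound."""
    # Only the part of the side-effect measure that exceeds the bound is penalized.
    overshoot = torch.clamp(side_effect_score - max_side_effect, min=0.0)
    return task_loss + penalty_weight * overshoot

# Usage with dummy values: 0.25 exceeds the 0.1 bound, so the penalty kicks in.
loss = aligned_loss(torch.tensor(0.42), torch.tensor(0.25))
print(loss)  # tensor(1.9200)
```

The hard part, of course, is the second bullet: producing a side_effect_score you can actually trust, which is exactly the kind of guarantee the post is asking about.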
  • preasket@lemy.lol (OP) · 4 points · 2 years ago

    The idea of backpropagation and neural nets is quite old, but there’s significant new research being done now, primarily in node types and computational efficiency. LSTMs, transformers, ReLU - these are all much newer.
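
    If it helps to ground that list, here’s a tiny PyTorch sketch of those three building blocks side by side (the shapes and sizes are just made-up illustration values):

    ```python
    import torch
    import torch.nn as nn

    # The building blocks mentioned above, as they exist in PyTorch:
    relu = nn.ReLU()                               # activation, popularized in the early 2010s
    lstm = nn.LSTM(input_size=16, hidden_size=32)  # recurrent cell, Hochreiter & Schmidhuber 1997
    encoder = nn.TransformerEncoderLayer(d_model=16, nhead=4)  # Vaswani et al. 2017

    x = torch.randn(10, 8, 16)  # (sequence, batch, features)
    print(relu(x).shape)        # elementwise activation keeps the shape: (10, 8, 16)
    print(lstm(x)[0].shape)     # LSTM output sequence: (10, 8, 32)
    print(encoder(x).shape)     # transformer encoder layer keeps (10, 8, 16)
    ```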

    • Zo0@feddit.de · 2 points · 2 years ago

      Haha, reading your other replies, you’re too humble for someone who knows what they’re talking about.