• @voracitude@lemmy.world

    You’re thinking of Machine Learning and neural networks. The first “L” in LLM stands for “Large”; what’s new about these particular neural networks is the scale at which they operate. It’s like saying a modern APU from 2024 is equivalent to a Celeron from the late 90s; technically they’re in the same class, but one is much more complicated and powerful than the other.
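    The scale gap is easy to make concrete with some rough arithmetic. A minimal sketch, assuming a toy 1990s-style fully connected net versus GPT-3-like dimensions (the transformer formula is a common back-of-the-envelope estimate, and the sizes are illustrative, not measurements of any specific model):

```python
# Rough parameter-count comparison: a small classic MLP vs a modern
# transformer-scale LLM. All sizes below are illustrative assumptions.

def mlp_params(layer_sizes):
    """Weights + biases for a fully connected network."""
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

def transformer_params(d_model, n_layers, vocab):
    """Common rough estimate: ~12 * d_model^2 per block, plus embeddings."""
    return 12 * d_model**2 * n_layers + vocab * d_model

small_net = mlp_params([64, 128, 10])       # a toy 90s-style classifier: ~9.6K params
llm = transformer_params(12288, 96, 50257)  # GPT-3-like dimensions: ~175B params

print(f"small MLP: {small_net:,} parameters")
print(f"LLM-scale: {llm:,} parameters")
print(f"ratio:     ~{llm // small_net:,}x")
```

    Same basic class of object in both cases (layers of weights trained by gradient descent); the difference is around ten orders of magnitude of scale.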

    • @UnderpantsWeevil@lemmy.world

      what’s new about these particular neural networks is the scale at which they operate.

      Sure. They’re larger language models. Although, they also (ostensibly) have better parsing and graphing algorithms around them.

      It’s the marriage of sophistication and scale that makes these things valuable. But it’s like talking about skyscrapers. Whether it’s the Eiffel Tower, the WTC, or the Burj Khalifa, we’re still talking about concrete and steel.

      It’s like saying a modern APU from 2024 is equivalent to a Celeron from the late 90s; technically they’re in the same class, but one is much more complicated and powerful than the other.

      I’d more compare it to a Cray from the 90s than a budget chip like the Celeron.

      But imagine someone insisting we didn’t have supercomputers until 2020 because that’s when TSMC started cranking out 5 nm chips in earnest.