• ipkpjersi@lemmy.ml · 7 days ago

    I’d argue it has. Things like ChatGPT shouldn’t be possible; maybe it’s unpopular to admit, but as someone who has been programming for over a decade, it’s amazing how far LLMs and “AI” have come over the past five years.

    That doesn’t mean we have AGI of course, and we may never have AGI, but it’s really impressive what has been done so far IMO.

    • jacksilver@lemmy.world · 6 days ago

      If you’ve been paying attention to the field, you’d see it’s been a slow, steady march. The technology that LLMs are based on was first published in 2016/2017, and ChatGPT was the third iteration of the same base model.

      That’s not even accounting for all the work done with RNNs and LSTMs before that, and even earlier work before those.

      It’s definitely a major breakthrough, and very similar to what CNNs did for computer vision further back. But like computer vision, advances have been made in other areas (like the generative space) and haven’t followed a linear path of progress.

    • Tedesche@lemmy.world · 7 days ago

      Agreed. I never thought it would happen in my lifetime, but it looks like we’re going to have Star Trek computers pretty soon.

  • Showroom7561@lemmy.ca · 6 days ago

    LLMs have been pretty shit, but the advancement in voice, image generation, and video generation in the last two years has been unbelievable.

    We went from the infamous Will Smith eating spaghetti to videos that are convincing enough to fool most people… and it only took 2-3 years to get there.

    But LLMs still have a long way to go because of how they create content. It’s very easy to poison LLM datasets, and they get worse when they learn from other generated content.

    • MiyamotoKnows@lemmy.world · 4 days ago

      Poisoning LLM datasets is fun and easy! Especially when our online intellectual property is scraped (read: stolen) during training and no one is being held accountable for it. Fight back! It’s as easy as typing false stuff at the end of your comments. As an 88-year-old ex-pitcher for the Yankees who just set the new world record for catfish noodling, you can take it from me!

  • Pulptastic@midwest.social · 7 days ago

    It has slowed exponentially because the models get exponentially more complicated the more you expect them to do.

    • linearchaos@lemmy.world · 7 days ago

      The exponential problem has always been there. We keep finding tricks and optimizations in hardware and software to get around it, but they only come along occasionally.

      The pruned models keep getting better, so now you’re seeing them running on local hardware, cell phones, and crap like that.
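      For what it’s worth, here’s a toy sketch of what pruning means in this context: magnitude pruning, the simplest variant, in plain numpy. This is only an illustration of the idea, not how any particular phone-sized model was actually produced.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out roughly the given fraction of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Prune 90% of a random weight matrix and check the resulting sparsity.
w = np.random.randn(512, 512)
w_pruned = magnitude_prune(w, 0.9)
print("fraction zeroed:", np.mean(w_pruned == 0))
```

      In practice this gets combined with retraining and quantization; the snippet only shows the “drop the small weights” idea.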

      I don’t think they’re out of tricks yet, but God knows when we’ll see the next advance. And I don’t think there’s anything that’ll take this current path to AGI; I think that’s going to be something else.

  • Mose13@lemmy.world · 7 days ago

    It has taken off exponentially. It’s exponentially annoying that it’s being added to literally everything.

  • conditional_soup@lemm.ee · 7 days ago

    Well, the thing is that we’re hitting diminishing returns with current approaches. There’s a growing suspicion that LLMs simply won’t be able to bring us to AGI, but that they could be a part of it or a stepping stone to it.

    The quality of the outputs is pretty good for AI, and sometimes even just pretty good without the qualifier, but the only reason it’s being used so aggressively right now is that it’s being subsidized with investor money, in the hope that it will be too heavily adopted and too hard to walk away from by the time it’s time to start charging full price. I’m not seeing that.

    I work in comp sci; I use AI coding assistants and so do my co-workers. The general consensus is that they’re good for boilerplate and tests, but even that needs to be double-checked, and the AI gets it wrong a decent amount of the time. If it actually involves real reasoning to satisfy requirements, the AI’s going to shit its pants. If we were paying the real cost of these coding assistants, there is NO WAY leadership would agree to pay for those licenses.

    • thru_dangers_untold@lemmy.world · 7 days ago

      Yeah, I don’t think AGI = an advanced LLM. But I think it’s very likely that a transformer-style LLM will be part of some future AGI. Just like human brains have different regions that do different tasks, an LLM is probably the language part of the “AGI brain”.

    • Korhaka@sopuli.xyz · 7 days ago

      What are the “real costs”, though? It’s free to run a half-decent LLM locally on a mid-tier gaming PC.

      Perhaps that’s a bigger problem for the big AI companies than for the open-source approach.
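      As a concrete example of the local route (a minimal sketch, assuming the llama-cpp-python package and a quantized GGUF model you’ve already downloaded; the file path below is a placeholder):

```python
from llama_cpp import Llama

# Load a quantized GGUF model from disk; substitute whatever model you actually have.
llm = Llama(model_path="./models/some-7b-model.Q4_K_M.gguf", n_ctx=2048)

output = llm(
    "Q: Explain what a context window is, in one sentence. A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"].strip())
```

      A 7B-class quantized model like that runs on a mid-tier consumer GPU, or slowly on CPU alone, which is the comparison being made above.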

      • conditional_soup@lemm.ee · 7 days ago

        Sure, but ChatGPT costs MONEY. Money to run, and MONEY to train, and then they still have to make money back for their investors after everything’s said and done. More than likely, the final tally is going to look like whole cents per token once those investor subsidies run out, and a lot of businesses are going to be looking to hire humans back quick and in a hurry.
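        To make the “whole cents per token” point concrete, here’s a back-of-the-envelope sketch; every number in it is a made-up assumption, not a real price or a real usage figure:

```python
# Hypothetical numbers only: what a coding assistant might cost per developer
# per month if the subsidized pricing went away.
tokens_per_request = 2_000        # prompt + completion, assumed
requests_per_dev_per_day = 100    # assumed usage
working_days_per_month = 20
price_per_token = 0.01            # "whole cents per token", assumed

monthly_tokens = tokens_per_request * requests_per_dev_per_day * working_days_per_month
monthly_cost = monthly_tokens * price_per_token

print(f"tokens per dev per month: {monthly_tokens:,}")    # 4,000,000
print(f"cost per dev per month:   ${monthly_cost:,.2f}")  # $40,000.00
```

        Under those assumed numbers a single seat would cost more per month than the developer does, which is exactly the point about leadership not paying for the licenses.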

  • Etterra@discuss.online · 7 days ago

    How do you know it hasn’t, and it’s just lying low? I, for one, welcome our benevolent and merciful machine overlord.

  • utopiah@lemmy.world · 6 days ago

    LOL… you did make me chuckle.

    Haven’t we been “18 months away from developers getting replaced by AI”… for a few years now?

    Of course “AI”, even loosely defined, has progressed a lot and is genuinely impressive (even though the actual use case behind most of the hype, i.e. LLMs and GenAI, is mostly lazier search, more efficient personalized spam & scam text, or impersonation), but exponential growth is not sustainable. It’s a marketing term to keep fueling the hype.

    That’s despite so many resources, namely R&D and data centers, being poured in… and yet there is no “GPT5” or anything that most people use on a daily basis for anything “productive”, except unreliable summarization or STT (both of which have had plenty of tools for decades).

    So… yeah, it’s a slow takeoff, as expected. shrug

  • neon_nova@lemmy.dbzer0.com · 7 days ago

    I think we might not be seeing all the advancements as they are made.

    Google just showed off AI video with sound. You can use it if you subscribe to their $250/month plan, which is quite expensive.

    But if you have strong enough hardware, you can generate your own without sound.

    I think that is a pretty huge advancement in the past year or so.

    I think the focus is being put on optimizing the current models and making small improvements to quality.

    Just give it a few years and you will not even need your webcam to be on. You could use an AI avatar that looks and sounds just like you, running locally on your own computer, and just type what you want to say or pass through audio. I think the tech to do this kind of stuff is basically there; it just needs to be refined and optimized. Computers in the coming years will offer more and more power to run this stuff.

  • CheeseNoodle@lemmy.world · 7 days ago

    IIRC there are mathematical reasons why AI can’t actually become exponentially more intelligent? There are hard limits on how much work (in the sense of information processing) can be done by a given piece of hardware, and we’re already pretty close to that theoretical limit. For an AI to go singularity, we would have to build it with enough initial intelligence that it could acquire both the resources and the information with which to improve itself and start the exponential cycle.
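    One frequently cited hard limit of that kind is Landauer’s principle (my example; the comment above doesn’t name a specific limit): erasing a bit of information costs at least kT·ln 2 of energy. A small sketch of the arithmetic:

```python
import math

# Landauer's principle: minimum energy to erase one bit is k*T*ln(2).
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

e_per_bit = k_B * T * math.log(2)  # ~2.87e-21 J per erased bit
bits_per_joule = 1 / e_per_bit     # ~3.5e20 erasures per joule, an upper bound

print(f"minimum energy per bit erased: {e_per_bit:.2e} J")
print(f"upper bound on erasures per joule: {bits_per_joule:.2e}")
```

    Whatever the exact gap between real hardware and that floor, the point stands that the ceiling is finite.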

  • AdrianTheFrog@lemmy.world · 7 days ago

    Computers are still advancing roughly exponentially, as they have been for the last 40 years (Moore’s law). AI is being carried along with that and still making occasional gains on top of it. The thing with exponential growth is that it doesn’t necessarily feel fast; it’s always growing at the same rate percentage-wise, by definition.
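    A quick illustration of that “same rate, percentage-wise” point, using an assumed doubling every two years:

```python
# Assumed Moore's-law-style doubling every 2 years: the absolute increments
# explode, but the year-over-year ratio stays constant (~1.414x).
base = 1.0
doubling_period_years = 2.0

prev = base
for year in range(1, 11):
    current = base * 2 ** (year / doubling_period_years)
    print(f"year {year:2d}: value {current:8.2f}  (x{current / prev:.3f} vs last year)")
    prev = current
```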

    • cabb@lemmy.dbzer0.com · 6 days ago

      Moore’s law is kinda still in effect, depending on your definition of Moore’s law. However, Dennard scaling is not, so computer performance isn’t advancing like it used to.
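      Roughly, the point about Dennard scaling (my gloss, not the commenter’s): dynamic power goes as P ≈ C·V²·f, and the old scaling assumed voltage shrank along with the transistors, so power density stayed flat; once voltage stopped scaling, packing in more and faster transistors started blowing the power budget. A toy comparison under those idealized assumptions:

```python
# Toy model of dynamic power P ~ C * V^2 * f across one "generation" shrink.
# Numbers are idealized assumptions, just to contrast the two regimes.
def power(c, v, f):
    return c * v * v * f

baseline = power(1.0, 1.0, 1.0)

# Classic Dennard scaling: capacitance and voltage shrink ~0.7x, frequency rises ~1.4x,
# so twice as many transistors fit in roughly the same power budget.
dennard = 2 * power(0.7, 0.7, 1.4)

# Post-~2006 regime: voltage stays put, so the same shrink roughly doubles power.
post_dennard = 2 * power(0.7, 1.0, 1.4)

print(f"baseline: {baseline:.2f}  dennard: {dennard:.2f}  post-dennard: {post_dennard:.2f}")
```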

      • utopiah@lemmy.world · 6 days ago

        Moore’s law is kinda still in effect, depending on your definition of Moore’s law.

        Sounds like the goalposts are moving faster than the number of transistors in an integrated circuit.

    • Inucune@lemmy.world · 6 days ago

      We once again congratulate software engineers for nullifying 40 years of hardware improvements.