• xyzzy@lemm.ee
    13 hours ago

    I wasn’t the one who downvoted you, but I do think you’re painting an overly optimistic picture.

    I was referring to GPT-4, which was released over two years ago and was a significant improvement over 3.5. I was genuinely impressed with 4, but I haven’t been very impressed with anything since then. Probably the most substantive change was pulling chain of thought into the model itself, but everyone was already doing that anyway.

    Maybe we just have different views on what counts as a game changer.

    I’m not coming at this from a place of ignorance: I have AI patents to my name as both first and supporting inventor, and I’ve worked with these teams directly (although, crucially, not in video). I’m saying that the rate of improvement in critical (i.e., non-toy) areas is slowing down, and I believe there’s a significant possibility that AI will start to hit the same walls it has hit many times before. Those earlier stalls came before AI entered the consciousness of execs and the general public, and because they aren’t as familiar with its long stop-start history, they don’t think that wall exists.

    AI companies definitely know that wall exists, and in at least one case they’re getting increasingly nervous about it.