• FiniteBanjo · 10 months ago

    I’m paranoid that everybody who replies to my comments might be a bot. That’s got nothing to do with Lemmy specifically; it’s just a result of emerging technology.

    • j4k3@lemmy.world (OP) · 10 months ago

      If you were to play with them for a while, that would likely change. They have patterns and very real limits, especially the kinds anyone would actually use for a bot. A half-decent LLM would cost a fortune to run just to troll people. I can spot a bot quickly just from how its natural language flows and from its content/conceptual density. Take this post, and how I am quickly stepping through layers of complexity with density while using poor natural-language grammar; no bot can do this kind of thing in practice, presently. They don’t have the attention capacity to juggle this much information at the same time. That is an easy way to tell that I am not a bot.

      • FiniteBanjo · 10 months ago (edited)

        LLMs are, by definition, models that predict which words and grammar are most likely to come next. So conceptual density with poor grammar and natural-language skills is literally what they do best. If not realizing who is responding to whom in a comment thread, or subtly changing stances, were by itself evidence of being a bot, then there would be more bots on Lemmy than people.
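
        Just to make the “prediction model” part concrete, here is roughly what picking the next word looks like in code. A minimal sketch assuming the Hugging Face transformers library, with gpt2 only as a small stand-in for whatever model a bot would actually use:

        ```python
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # gpt2 is just a cheap stand-in; any causal LM works the same way.
        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "Everybody who replies to my comments might be a"
        inputs = tok(prompt, return_tensors="pt")

        with torch.no_grad():
            logits = model(**inputs).logits      # shape: (1, seq_len, vocab_size)

        # The whole trick: a probability distribution over the next token.
        probs = torch.softmax(logits[0, -1], dim=-1)
        values, indices = probs.topk(5)
        for p, idx in zip(values, indices):
            print(f"{tok.decode(int(idx))!r:>12}  {p.item():.3f}")
        ```

        Everything a model outputs is sampled from distributions like that one, token by token.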

        You cannot tell a dumbass from a dumbass machine with the current state of things.

        As for the costs involved, if every business can afford enough API calls to replace its website’s support chat with an LLM, then it’s clearly not that far-fetched to have shillbots defending you in comment sections.
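
        To put a rough shape on that: one support (or shill) reply is a single hosted API request. A minimal sketch using the standard OpenAI Python SDK, where the model name, the company, and both prompts are placeholders:

        ```python
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any cheap hosted model makes the same point
            messages=[
                {"role": "system",
                 "content": "You are a friendly commenter who always speaks well of ExampleCo."},
                {"role": "user",
                 "content": "Is ExampleCo's new gadget actually any good?"},
            ],
        )
        print(reply.choices[0].message.content)
        ```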

        • j4k3@lemmy.world (OP) · 10 months ago

          There is a limit to how many vectors can be effectively passed from the header (or whatever the initial attention layer is called) before the signal goes through the actual neural path for the layer. It is why models constantly summarize if too much complexity is introduced. It has been too long since I read the details to talk about it accurately. I know enough to make use of it in practice: to spot when complexity is being dropped through summarization. I also know such patterns are difficult to conclusively attribute to reply style versus actual limitations. I have pushed these boundaries very hard, from many angles, using unrelated prompts with several 70B and 8×7B models I run on my own hardware. That is still just speculative opinion of no external value, but I don’t care. I can make useful outputs and adjust using these principles.
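
          For anyone curious, the mechanism being gestured at is ordinary scaled dot-product attention. A toy numpy sketch, with random vectors and made-up dimensions, showing how each token’s share of attention thins out as the context grows:

          ```python
          import numpy as np

          def attention(Q, K, V):
              """Standard transformer attention: softmax(QK^T / sqrt(d)) V."""
              d = Q.shape[-1]
              scores = Q @ K.T / np.sqrt(d)                       # pairwise relevance
              w = np.exp(scores - scores.max(axis=-1, keepdims=True))
              w /= w.sum(axis=-1, keepdims=True)                  # softmax over the context
              return w @ V, w

          rng = np.random.default_rng(0)
          d_model = 64                                            # made-up head dimension
          for seq_len in (16, 256, 2048):                         # growing context lengths
              Q, K, V = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
              _, w = attention(Q, K, V)
              # With uninformative (random) keys, each token's largest attention
              # weight shrinks as the context grows: more going on = thinner focus.
              print(seq_len, round(float(w.max(axis=-1).mean()), 4))
          ```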

          Talking about bots for businesses is not remotely relevant. Those are RAG setups, heavily trained for a very specifically limited task. If you try to break one of those out of its role and it is not caught by a model-loader filter, it will not handle complex thought well either. I have yet to find a model that can handle a topic, introspection, and meta-analysis within a few sentences on a random subject. Now, a two-dimensional reply such as this one, sure. However, my bad-grammar mix is harder to replicate. Small models in the 7–13B range are very style- and subject-dependent. Those can be run on cheap hardware. If you want to train an 8×7B, you need a very expensive setup, and one dedicated just to your troll bot. I’m sure there are people with more than enough funds, but the life opportunities that come with such wealth negate most of the reasons someone would buy the hardware and burn the power-bill money required to do this in practice. It isn’t just next word under the surface. It is next word under what context, and there is a hard limit to the number of those contexts in a given space of relationships between vectors. It has to do with the major innovation of transformers and rotational spaces, IIRC.
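
          For context, RAG here means retrieval-augmented generation: the support bot fetches a few relevant snippets and stuffs them into the prompt, which is what keeps it on its narrow task. A minimal sketch assuming the sentence-transformers library and a made-up FAQ, with the final call to the model left out:

          ```python
          import numpy as np
          from sentence_transformers import SentenceTransformer

          # Hypothetical FAQ snippets standing in for a real knowledge base.
          docs = [
              "Refunds are processed within 5 business days of approval.",
              "Password resets are done from the account settings page.",
              "Shipping outside the EU takes 10-14 days.",
          ]

          encoder = SentenceTransformer("all-MiniLM-L6-v2")
          doc_vecs = encoder.encode(docs, normalize_embeddings=True)

          def build_prompt(question: str, k: int = 2) -> str:
              """Retrieve the k most similar snippets and wrap them around the question."""
              q_vec = encoder.encode([question], normalize_embeddings=True)[0]
              scores = doc_vecs @ q_vec                # cosine similarity (normalized vectors)
              context = "\n".join(docs[i] for i in np.argsort(scores)[::-1][:k])
              return (
                  "Answer ONLY from the context below. If the answer is not there, say so.\n\n"
                  f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
              )

          print(build_prompt("How long do refunds take?"))
          # The assembled prompt is what gets sent to whatever small model the business runs.
          ```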

          • FiniteBanjo · 10 months ago

            You keep saying Troll Bot as if there aren’t commercial incentives to run these bots on forums.