I know the title will trigger people, but it’s a short, so please briefly hear her out. I’ve since given this a try and it’s incredibly cool. It’s a very different experience and provides much better information, AFAICT.

  • Zaleramancer@beehaw.org · 4 points · 9 days ago

    I’m not a frequent user of LLMs, but this was pretty intuitive to me after using them for a few hours. However, I recognize that I’m a weirdo and so will pick up on the idea that the prompt leads the style.

    It’s not like the LLM actually understands that you are asking questions; it’s just generating a procedural response to the last statement given.

    Saying please and thank you isn’t the important part.

    Just preface your use with, like,

    “You are a helpful and enthusiastic assistant with excellent communication skills. You are polite, informative and concise. A summary of [the topic] follows in the style of your voice, explained clearly and without technical jargon.”

    And you’ll probably get promising results, depending on the exact model. You may have to massage it a bit before you get consistently good output, but experimentation will show you the most reliable way to get the desired results.
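
    For anyone who wants to try this outside a chat window, here’s a minimal sketch of the same idea using the OpenAI Python client, with the preface passed as a system message. The model name and the example question are placeholders, not anything prescribed in this thread (the question is borrowed from a later comment); swap in whatever model and provider you actually use.

    ```python
    # Minimal sketch: steering an LLM's style with a system message.
    # Assumes the `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PREFACE = (
        "You are a helpful and enthusiastic assistant with excellent "
        "communication skills. You are polite, informative and concise. "
        "Explain clearly and without technical jargon."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": PREFACE},  # the preface leads the style
            {"role": "user", "content": "Hey, what's Ice-IV?"},
        ],
    )

    print(response.choices[0].message.content)
    ```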

    Now, I only trust LLMs as a tool for amusing yourself by asking them to talk in the style of your favorite fictional characters about bizarre hypotheticals, but at this point I accept there’s nothing I can do to discourage people from putting their trust in them.

    • sabreW4K3@lazysoci.al (OP) · 2 points · 8 days ago

      I’ll be honest, this blew my mind, which is why I posted it. I always just asked questions and then spent ages going back and forth on factual corrections. People like you are treasures.

      • Zaleramancer@beehaw.org · 2 points · 7 days ago

        Thank you, I am trying to be less abrasive online, especially about LLM/gen-AI stuff. I have come to terms with the fact that my desire for accuracy and truthfulness skews so far past the median that it’s almost pathological, which is probably why I ended up studying history in college. To me, the idea of using an LLM to get information seems like a bad use of my time: I would methodically check everything it says, and the total time spent would vastly exceed any amount saved. But that’s because I’m weird.

        Like, it’s probably fine for anything you’d rely on skimming a Wikipedia article for. I wouldn’t use them for recipes or cooking, because that could give you food poisoning if something goes wrong, but if you’re just asking, “Hey, what’s Ice-IV?” then the answer it gives is probably equivalent, in 98% of cases, to checking a few websites. People should invest their energy where they need to, or where they have to, and it’s less effort for me not to use the technology, but I know there are people who can benefit from it and have a good use case for it.

        My main point of caution for people reading this: you shouldn’t rely on an LLM for important information, whatever that means to you, because if you want to be absolutely sure about something, then you shouldn’t risk an AI hallucination, even if it’s unlikely.