I’ll admit I’m often verbose in my own chats about technical issues. Lately they have been replying to everyone with what seems to be LLM generated responses, as if they are copy/pasting into an LLM and copy/pasting the response back to others.

Besides calling them out on this, what would you do?

  • partial_accumen@lemmy.world · 20 hours ago

    Are they providing you the information you asked for? If so, what's the problem? Many of my coworkers over the years have had the communication skills of a third-grader, and I would have actually preferred an LLM response instead of reading their reply five or six times trying to parse what the hell they were talking about.

    If they aren’t providing the information you need, escalate to their boss and explain that the worker isn’t doing their job.

    • stoy@lemmy.zip · 19 hours ago

      If they are copying OP’s messages straight into a chatbot, this could absolutely be a serious security incident, where they are leaking confidential data.

      • Bongles@lemm.ee · 18 hours ago

        It depends. If they’re using Copilot through their enterprise M365 account, it’s as protected as any of their other Microsoft services, which already hold sensitive company data. If they’re just pulling up ChatGPT and going to town, absolutely.