• jubilationtcornpone@sh.itjust.works
    4 months ago

    “Prompt Engineering”: AKA explaining to ChatGPT why it’s wrong a dozen times before it spits out a usable (but still not completely correct) answer.

    • ByteOnBikes@slrpnk.net
      4 months ago

      Knowing when to tell the AI that it’s wrong is actually a valid skill.

      A few months ago, I had to tell my juniors to think critically about the shitty code that AI was generating. I was getting sick of clearly copy-pasted ChatGPT code, with the junior not knowing what the fuck they were submitting to code review.

      • Evotech@lemmy.world
        4 months ago

        You should start asking them things like: why did you do this? Why did you choose this method? To make them sweat :p

        • lad@programming.dev
          4 months ago

          That approach used to make sense before LLMs were a thing, when evaluating assignments from students, half of whom had asked someone else for the code and didn’t bother to even read it.

          • howrar@lemmy.ca
            4 months ago

            If no one can make sense of the change, then you reject it. It makes no difference whether it was generated by an LLM or copy-pasted from Stack Overflow.

      • pfm@scribe.disroot.org
        4 months ago

        I’m trying to convince a senior developer on my team to stop using Copilot. They have committed code that they didn’t understand (only tested to verify it does what it’s expected to do). I doubt I’ll succeed…

        • odelik
          4 months ago

          Copilot is amazing and terrible at the same time.

          When it suggests the exact line of code I was about to write, it’s amazing. When it can build the permissions I need for a service account for a TF module I’ve written, it’s amazing.

          However, it will suggest poorly formed, unoptimized code all too often.

          That said, knowing when to use, skip, or modify the suggested code has greatly improved my productivity and consistency.

          • rekorse@lemmy.world
            4 months ago

            I wish all this AI stuff had been limited to just creating a new coding language. That I could get behind; sharing programming knowledge is not the same as copying other people’s art.

    • lad@programming.dev
      4 months ago

      If an answer exists at all, that is. GPT will insist an answer exists till the very end, even when there isn’t one.