The research from Purdue University, first spotted by the news outlet Futurism, was presented earlier this month at the Computer-Human Interaction (CHI) Conference in Hawaii. It examined 517 programming questions from Stack Overflow, which were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”

  • @tranxuanthang@lemm.ee · 1 month ago

    If you don’t know what you are doing, and you give it a vague request hoping it will automatically solve your problem, then you will just have to spend even more time to debug its given code.

    However, if you know exactly what needs to be done and give it a good prompt, it will reward you with well-written code, a clean implementation, and comments. Consider it an intern or junior developer.

    Example of bad prompt: My code won’t work [paste the code], I keep having this error [paste the error log], please help me

    Example of (reasonably) good prompt: This code introduces deep recursion and can sometimes cause a “maximum stack size exceeded” error in certain cases. Please help me convert it to use a while loop instead.
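    As a sketch of the rewrite that prompt asks for (hypothetical code, since the original isn't shown), here is the same recursion-to-loop conversion in Python, where the analogous failure is a RecursionError:

```python
# Hypothetical deeply recursive function: each call adds a stack frame,
# so a large n raises RecursionError (Python's analogue of JavaScript's
# "maximum stack size exceeded").
def count_down_recursive(n):
    if n == 0:
        return 0
    return count_down_recursive(n - 1)

# The same logic converted to a while loop, as the prompt requests:
# constant stack usage, works for any n.
def count_down_iterative(n):
    while n > 0:
        n -= 1
    return 0
```

    Here `count_down_iterative(200_000)` returns normally, while the recursive version blows past Python's default recursion limit long before finishing.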

    • @madsen@lemmy.world · 1 month ago

      I wouldn’t trust an LLM to produce any kind of programming answer. If you’re skilled enough to know it’s wrong, then you should do it yourself; if you’re not, then you shouldn’t be using it.

      I’ve seen plenty of examples of specific, clear, simple prompts that an LLM absolutely butchered by using libraries, functions, classes, and APIs that don’t exist. Likewise with code analysis where it invented bugs that literally did not exist in the actual code.

      LLMs don’t have a holistic understanding of anything—they’re your non-programming, but over-confident, friend that’s trying to convey the results of a Google search on low-level memory management in C++.

      • @locuester@lemmy.zip · 1 month ago

        If you’re skilled enough to know it’s wrong, then you should do it yourself; if you’re not, then you shouldn’t be using it.

        Oh I strongly disagree. I’ve been building software for 30 years. I use copilot in vscode and it writes so much of the tedious code and comments for me. Really saves me a lot of time, allowing me to spend more time on the complicated bits.

        • @madsen@lemmy.world · 1 month ago

          I’m closing in on 30 years too, started just around '95, and I have yet to see an LLM spit out anything useful that I would actually feel comfortable committing to a project. Usually you end up having to spend as much time—if not more—double-checking and correcting the LLM’s output as you would writing the code yourself. (Full disclosure: I haven’t tried Copilot, so it’s possible that it’s different from Bard/Gemini, ChatGPT and what-have-you, but I’d be surprised if it was that different.)

          Here’s a good example of how an LLM doesn’t really understand code in context and thus finds a “bug” that’s literally mitigated in the line before the one where it spots the potential bug: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/ (see “Exhibit B”, which links to: https://hackerone.com/reports/2298307, which is the actual HackerOne report).

          LLMs don’t understand code. It’s literally your “helpful”, non-programmer friend—on steroids—cobbling together bits and pieces from searches on SO, Reddit, DevShed, etc. and hoping the answer will make you impressed with him. Reading the study from TFA (https://dl.acm.org/doi/pdf/10.1145/3613904.3642596, §§5.1–5.2 in particular) only cements this position further for me.

          And that’s not even touching upon the other issues (like copyright, licensing, etc.) with LLM-generated code that led to NetBSD simply forbidding it in their commit guidelines: https://mastodon.sdf.org/@netbsd/112446618914747900

          Edit: Spelling

          • @locuester@lemmy.zip · 1 month ago

            I’m very familiar with what LLMs do.

            You’re misunderstanding what copilot does. It just completes a line or section of code. It doesn’t answer questions - it just continues a pattern. Sometimes quite intelligently.

            Shoot me a message on discord and I’ll do a screenshare for you. #locuester

            It has improved my quality and speed significantly. More so than any other feature since intellisense was introduced (which many back then also frowned upon).

            • @madsen@lemmy.world · 1 month ago

              Fair enough, and thanks for the offer. I found a demo on YouTube. It does indeed look a lot more reasonable than having an LLM actually write the code.

              I’m one of the people that don’t use IntelliSense, so it’s probably not for me, but I can definitely see why people find that particular implementation useful. Thanks for catching and correcting my misunderstanding. :)

      • @yopla@jlai.lu · 1 month ago

        APIs that don’t exist

        I had that. I got a bunch of OK code for an AWS API, but then it decided to hallucinate a method. I tried all kinds of prompts to instruct it that the method didn’t exist and not to use it, but it always came back telling me it was the right way to do it.

        Anyway, still faster than reading the doc for a one off script I just wanted thrown together quickly and never to be reused again.
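        One cheap guard against that kind of hallucination (a generic sketch, not AWS-specific; the "serialize_pretty" name below is made up for illustration) is to check that every attribute a generated snippet calls actually exists on the target library before running it:

```python
import json  # stand-in for whatever library the generated code targets

# Methods a generated snippet claims to call; "serialize_pretty" is a
# fabricated name of the kind an LLM might hallucinate.
suggested = ["dumps", "loads", "serialize_pretty"]

# Flag any suggested call that does not exist on the real module.
missing = [name for name in suggested if not hasattr(json, name)]
print("likely hallucinated:", missing)  # prints: likely hallucinated: ['serialize_pretty']
```

        This won't catch wrong arguments or wrong semantics, but it does catch methods that simply don't exist before they blow up a one-off script.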

    • @exanime · 1 month ago

      Example of (reasonably) good prompt: This code introduces deep recursion and can sometimes cause a “maximum stack size exceeded” error in certain cases. Please help me convert it to use a while loop instead.

      That sounds like those cases on YouTube where the correction to the code was shorter than the prompt hahaha

    • @CapeWearingAeroplane@sopuli.xyz · 1 month ago

      I’ve found ChatGPT reasonably good for one thing: generating regex patterns. I don’t know regex for shit, but if I ask for a pattern described in words, I get a working pattern 9/10 times. It’s also a very easy use-case to double-check.
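      That double-check step is easy to do in code. For instance, asking in words for "a pattern that matches an ISO date like 2024-05-23" might yield something like the pattern below (written by hand here for illustration, not actual ChatGPT output), and the verification is a few assertions against known-good and known-bad inputs:

```python
import re

# A regex for ISO-style dates (YYYY-MM-DD), the sort of pattern this
# workflow produces.
iso_date = re.compile(r"^\d{4}-\d{2}-\d{2}$")

# The "easy to double-check" step: try inputs you already know the
# answer for before trusting the pattern.
assert iso_date.match("2024-05-23")
assert not iso_date.match("23/05/2024")
assert not iso_date.match("2024-5-3")
```

      If one of the assertions fails, you know immediately that the generated pattern is wrong, which is exactly what makes regex a low-risk use-case.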