• tal · 6 months ago

    Yeah.

    I’m willing to believe that we can have solid AI software authoring, but I’m skeptical that it’s gonna come from the raw LLM approach used for images and audio and such, where what matters is producing stuff that looks like other stuff.

    Maybe you could use LLMs as a component of a larger system that does effective coding. But I’m skeptical that an LLM alone can be a great solution.

    Maybe in very limited situations where the system can reliably validate the code’s correctness itself. Like, say you want to write a quine: it takes no input, and the output is trivial to check, as in the sketch below.
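
    Here’s a minimal sketch of that idea in Python (the harness and names are mine, just for illustration): produce a candidate, run it, and accept it only if its output matches its own source.

    ```python
    import subprocess
    import sys

    # Build a classic Python quine: a program whose output is its own source.
    template = "s = %r\nprint(s %% s)"
    source = (template % template) + "\n"  # trailing newline matches print()

    def is_quine(src: str) -> bool:
        """Trivial validation: run the candidate, compare output to source."""
        result = subprocess.run([sys.executable, "-c", src],
                                capture_output=True, text=True)
        return result.returncode == 0 and result.stdout == src

    print(is_quine(source))  # True: one of the rare fully self-checking cases
    ```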

    But for most software, I’d say that it’s not easy for a computer to validate that code is correct.

    And in some cases, trying to validate code has got to be worse than writing it yourself. Like, think of multithreaded code, absent some sort of elaborate type system that permits fully specifying the constraints imposed by the parallelism requirements, with those constraints written down and available to you. Neither C nor C++ has such a type system.
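
    As a concrete illustration (a toy in Python rather than C, but the same failure mode shows up in any language without such a type system): the code below runs without complaint, every individual statement is fine, and it still silently loses updates.

    ```python
    import threading
    import time

    counter = 0

    def increment_many(n: int) -> None:
        global counter
        for _ in range(n):
            tmp = counter        # read the shared value
            time.sleep(0)        # yield: another thread may run right here
            counter = tmp + 1    # write back a possibly stale value

    threads = [threading.Thread(target=increment_many, args=(1000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 4000; almost always far less. Nothing in the language flags
    # it, and no test of any single thread's behavior would catch it either.
    print(counter)
    ```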

    Or writing security-sensitive code. Same thing – absent some kind of type system that permits fully specifying the requirements of the problem, you can’t automatically validate it, and trying to review code to understand whether it’s secure…ugh.
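
    A classic example of why functional validation isn’t enough: the two checks below return identical results for every input, so no test suite can tell them apart, but only one of them is safe (the function names are mine).

    ```python
    import hmac

    def check_token_leaky(supplied: str, expected: str) -> bool:
        # == short-circuits at the first differing character, so how long
        # the comparison takes leaks the length of the matching prefix:
        # a timing side channel an attacker can measure remotely.
        return supplied == expected

    def check_token_constant_time(supplied: str, expected: str) -> bool:
        # hmac.compare_digest takes time independent of where the inputs
        # differ, which closes that side channel.
        return hmac.compare_digest(supplied.encode(), expected.encode())
    ```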

    I can maybe see some kind of “grammar check”: an LLM flagging portions of the code you wrote that look unusual compared to the code it’s seen.
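
    One way that might work is to use the model’s own surprise as the signal. The sketch below scores a snippet by its mean negative log-likelihood under a causal LM via Hugging Face transformers; “gpt2” is just a stand-in, and a code-trained model would be the realistic choice.

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "gpt2"  # stand-in; assume a code-trained model in practice
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)

    def surprisal(code: str) -> float:
        """Mean negative log-likelihood of the snippet under the model."""
        ids = tok(code, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return loss.item()

    # Higher score = the model finds the code more unusual ("ungrammatical").
    print(surprisal("for i in range(10):\n    print(i)"))
    print(surprisal("for i in range(10):\n    print(j)"))  # likely higher
    ```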

    Programming is basically translation from a list of (precise) requirements to code in a programming language, and LLMs can translate human languages pretty well. But I expect that a major problem for LLM-driven programming is that there’s no training corpus for the requirements, the “source language” side of the translation.