• guitarsarereal@sh.itjust.works
    1 year ago

    More or less, yeah. It’s been possible to do math with LLMs by hooking in an interpreter, but it’s a pain in the ass and, in my limited experience, required tons of custom prompting. Hooking in an interpreter and having it do the math directly on the processor, instead of through the neural-net architecture, is orders of magnitude faster and more power-efficient. But with rudimentary reasoning also comes the possibility of an LLM getting better at hooking itself into external tools without lots of help from the user.
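    To make the "hook in an interpreter" idea concrete, here's a minimal sketch (hypothetical, not any specific LLM framework): the model emits a tool call like `CALC(...)` instead of trying to produce the digits itself, and a wrapper evaluates the expression exactly with a real interpreter.

    ```python
    import ast
    import operator

    # Only allow basic arithmetic operators -- never pass model output to eval().
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def safe_eval(expr: str):
        """Evaluate a basic arithmetic expression by walking its AST."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval"))

    def answer(model_output: str):
        # Hypothetical tool-call convention: the LLM emits CALC(<expression>)
        # when it wants exact math; everything else passes through as text.
        if model_output.startswith("CALC(") and model_output.endswith(")"):
            return safe_eval(model_output[5:-1])
        return model_output

    print(answer("CALC(123*456)"))  # 56088 -- exact, computed on the CPU
    ```

    The point is that the multiplication happens in ordinary machine arithmetic rather than being approximated token-by-token by the network, which is where the speed and power savings come from.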