I switched from llamacpp to koboldcpp. Koboldcpp is really fast because it can use the GPU. The problem is that I’m having a hard time getting it to generate long enough outputs.

For example, with the prompt “write an essay about the history of the moon. It needs to be at least 500 words”, the same model gives me an output that’s actually that long on llamacpp, but koboldcpp never gives me more than about 70 words per response. Pressing enter to make the AI continue writing, or asking it to continue, doesn’t work as well in my koboldcpp setup as it does on llamacpp. I’ve set the tokens to generate to 512, the highest the slider allows, and the context tokens to 4096.

What else can I do to try to get longer responses?

  • @tal
    1 month ago

    https://old.reddit.com/r/KoboldAI/comments/163jfmo/more_than_512_tokens_possible/

    Never mind, just realised you can type in the token amount

    tries it

    Yeah. There’s a slider, and if you enter a number outside its range the field turns red, but it still accepts the value. Worked with Amount to Generate = 1024 and Max Tokens (which also needs to be increased) set to 2048 in a quick test.
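
    Incidentally, the same two settings can be passed straight to koboldcpp’s KoboldAI-compatible HTTP API instead of going through the slider. A quick Python sketch, assuming the default port 5001 (the prompt is just the one from the original post):

        import requests

        # Ask a local koboldcpp instance for a longer generation than the
        # slider's default range allows. "max_length" corresponds to
        # Amount to Generate, "max_context_length" to Max Tokens.
        resp = requests.post(
            "http://localhost:5001/api/v1/generate",
            json={
                "prompt": "Write an essay about the history of the moon. "
                          "It needs to be at least 500 words.\n",
                "max_length": 1024,
                "max_context_length": 2048,
            },
        )
        print(resp.json()["results"][0]["text"])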

    • ffhein
      1 month ago

      Is max tokens different from context size?

      Might be worth keeping in mind that the generated tokens count against the context, so if you set it to 1k with a 4k context you only have 3k left for the character card and chat history. I usually have it set to around 400 tokens and use TGW’s continue button in case a long response gets cut off (the same idea can be scripted, as sketched below).
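
      If you’d rather script that continue trick than click the button, here’s a rough sketch against koboldcpp’s local API (assuming the default port 5001 again; the generate() helper is just illustrative, and the chunk size mirrors the ~400-token setting above):

          import requests

          def generate(prompt, max_length=400):
              # Generate in small chunks so each request stays well inside
              # the context budget instead of eating a quarter of it at once.
              resp = requests.post(
                  "http://localhost:5001/api/v1/generate",
                  json={"prompt": prompt, "max_length": max_length,
                        "max_context_length": 4096},
              )
              return resp.json()["results"][0]["text"]

          story = "Write an essay about the history of the moon.\n"
          for _ in range(3):              # three ~400-token chunks
              story += generate(story)    # feed each chunk back in to continue
          print(story)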

      • @tal
        1 month ago

        Is max tokens different from context size?

        No. Same thing. If you hover over the question mark by “Max Tokens” in the Kobold AI Web UI:

        “Max number of tokens of context to submit to the AI for sampling. Make sure this is higher than Amount to Generate. Higher values increase VRAM/RAM usage.”
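
        In other words, the two settings share one budget; a trivial worked example of the arithmetic from the comment above:

            max_context_length = 4096   # "Max Tokens": context submitted to the model
            amount_to_generate = 1024   # "Amount to Generate"
            prompt_budget = max_context_length - amount_to_generate
            print(prompt_budget)        # 3072 tokens left for the prompt itself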