ylai@lemmy.ml to AI@lemmy.ml · English · 5 months ago
How Gradient created an open LLM with a million-token context window (venturebeat.com)
Cross-posted to: tezka_abhyayarshiniaicompanions@lemmy.world
TechNerdWizard42@lemmy.world · 5 months ago
I believe you'd need roughly 500GB of RAM at minimum to run it at full context length. There is chatter that 125k of context used about 40GB.
I know I can load the 70B models on my laptop at lower bit depths, but that consumes about 140GB of RAM.
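Those RAM figures line up with a back-of-the-envelope KV-cache estimate. The sketch below is my own rough calculation, assuming the publicly documented Llama-3-70B architecture (80 layers, 8 grouped-query KV heads, head dimension 128) and fp16 cache entries; it is not from the article, and actual runtime overhead (weights, activations, framework buffers) comes on top.

```python
# Rough KV-cache memory estimate for a Llama-3-70B-style model with GQA.
# Architecture defaults are assumptions based on the public model config.

def kv_cache_bytes(context_len, n_layers=80, n_kv_heads=8,
                   head_dim=128, bytes_per_elem=2):  # fp16/bf16
    # Two cached tensors (K and V) per layer, each shaped
    # [n_kv_heads, context_len, head_dim].
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * context_len

GIB = 1024 ** 3
print(f"128k context: {kv_cache_bytes(128_000) / GIB:.1f} GiB")
print(f"1M context:   {kv_cache_bytes(1_000_000) / GIB:.1f} GiB")
```

That works out to roughly 40 GiB of cache at ~128k context and ~300 GiB at a million tokens; add ~140GB of fp16 weights and the "roughly 500GB" estimate looks plausible.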