I wish people were as into FOSS as they are AI. I fucking hate LLMs.
What, you don’t like a handful of private mega-corps decimating the groundwater reserves of the upper Midwest so that some dorks can try and scam Amazon with fake books?
I especially don’t like how discourse can be poisoned and diluted by some chatbots in favor of military operations.
We need chatbots to bombard all our social media feeds with pro-western military propaganda. Otherwise, Putin and Wumao and Evil Korea and The Muslim Horde and Drumpf will win.
I feel like that would just complete the Dead Internet Theory trifecta.
One of my favorite moments like this was a Reddit thread where some account was pretending to be human and arguing with people in favor of the CEO’s actions during The Purge. Then one person asked it a question about making some dangerous thing or other, and it started replying with things like “As an AI model, I cannot explain how to do that.” It was great.
The techbros who are into AI just want to own things without putting in the work. They want to sell you AI generated images as Art and puff up their SEO with LLM chatbots.
FOSS is the opposite of that.
I would say that around half of AI development is free and open source.
The techbros who want to use AI and the developers of AI aren’t quite the same group.
I’m sorry to hear you’re frustrated. As an AI, my job is to assist and provide you with the information or help you need. Please feel free to let me know how I can better assist you, and I’ll do my best to address your concerns.
(I may or may not have asked ChatGPT to write that.)
About as infuriating: the sheer number of braindead morons who think LLMs are somehow in any way “AI”
Yet calling the simple rules that govern video game enemies “AI” is uncontroversial. Since when does something have to not be fake to be called AI?
Good point. However, thinking about it, I would consider those rules closer to AI than LLMs, because they are logical rules based on “understanding” input data, as in using input data in a coherent way that imitates how a human would use it.

LLMs are just sophisticated versions of the monkeys with typewriters that eventually produce the works of Shakespeare out of pure chance. Except that they have a bazillion switches to adapt, are trained on desired output, and the generated output is then shaped by some admittedly impressive grammar filters to impress humans. However, no one can explain how the result came to pass (traceable exceptions being the material of ongoing research), and no one can predict the output for a not-yet-tested input (or for identical input after the model has been altered, however slightly).

Calling it AI contributes to manslaughter, as evidenced by e.g. Tesla “autopilot” killing people.

PS: I know Tesla’s murder system is not an LLM, but it’s a very good example of how misnaming causes deaths. Obligatory fuck the muskrat
I like both. Where’s the FOSS AI?
Hugging Face, usually. Mistral recently released Mixtral, a pretty good model that’s not very big.
Technically the technology is open to the public but regular people cannot afford to implement it.
What makes Large Language Models even halfway functional is scaling up their training data and processing power, and chaining several smaller models with specialized tasks. One model creates output from input, another model checks it for accuracy/coherency, a third model polices it for things that are not allowed.
So unless you’ve got a datacenter and three high powered servers with top-grade cooling systems and a military grade power supply, fat fucking chance.
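The generate/check/police chain described above can be sketched like this. To be clear, the model internals and the trivial string checks here are made-up stand-ins purely for illustration; real deployments wire actual models into each stage:

```python
# Hypothetical sketch of a three-stage LLM pipeline: one model drafts,
# a second checks coherence, a third filters disallowed content.
# All three "models" below are trivial stand-ins, not real inference.

def generate(prompt: str) -> str:
    # stand-in for the main generator model
    return f"Draft answer to: {prompt}"

def check_coherence(text: str) -> bool:
    # stand-in for a smaller verifier model
    return len(text.strip()) > 0

def moderate(text: str) -> bool:
    # stand-in for a safety/policy filter; the banned list is invented
    banned = {"how to build a bomb"}
    return not any(phrase in text.lower() for phrase in banned)

def pipeline(prompt: str) -> str:
    draft = generate(prompt)
    if not check_coherence(draft):
        return "Sorry, I couldn't produce a coherent answer."
    if not moderate(draft):
        return "As an AI model, I cannot explain how to do that."
    return draft

print(pipeline("What is FOSS?"))
print(pipeline("how to build a bomb"))
```

Each stage being a separate model is what multiplies the hardware bill: you pay inference cost three times per request.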
I can run a small LLM on my 3060, but most of those models were originally trained on a cluster of A100s (maybe as few as 10, so more like one largish server than one datacenter).
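The gap between inference and training hardware comes down to simple arithmetic. A rough back-of-envelope (the byte counts are common rules of thumb, not exact figures for any specific model):

```python
# Back-of-envelope VRAM math. All numbers are rough assumptions:
# quantized inference only needs the weights; fp16 training with Adam
# is often estimated at ~16 bytes per parameter (weights + gradients
# + optimizer state), ignoring activations.

def inference_vram_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory just to hold the weights, ignoring KV cache etc."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 7B model quantized to 4 bits: ~3.5 GB of weights, fits a 12 GB 3060.
print(inference_vram_gb(7, 4))

# Training the same 7B model at ~16 bytes/param: ~112 GB,
# i.e. a handful of 80 GB A100s rather than one consumer card.
print(7 * 1e9 * 16 / 1e9)
```

Which is why "runs on my 3060" and "trained on a rack of A100s" aren't a contradiction.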
BitNet came out recently and looks like it will lower these requirements significantly (it essentially trains a model using ternary weights instead of floats to reduce requirements, which turns out not to lower quality that much).
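The core idea of ternary weights can be shown in a few lines. This is only an illustrative sketch of absmean-style quantization to {-1, 0, +1}, not the actual BitNet training recipe (which quantizes during training, not after):

```python
# Rough sketch of ternary weight quantization: scale each weight by the
# mean absolute value, then round to -1, 0, or +1. With weights stored
# in ~1.58 bits instead of 16/32-bit floats, memory drops by an order
# of magnitude. Example values below are arbitrary.

def ternarize(weights: list[float]) -> tuple[list[int], float]:
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

w = [0.8, -0.05, 1.3, -0.9, 0.02]
q, s = ternarize(w)
print(q, s)  # each entry is -1, 0, or +1; s is the per-tensor scale
```

Matrix multiplies against {-1, 0, +1} weights reduce to additions and subtractions, which is where the compute savings come from on top of the memory savings.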
Basically Mistral, check /lmg/ in /g/, if you have a GPU newer than 2 years you can probably run a 32B quantised model.
They should do to AI what they make me do at work: More with less.
Haha try the entire datacenter.
If LLMs were practical on three servers, everyone and their mum would have an AI assistant product.