you read books and eat vegetables like a loser
my daddy lets me play nintendo 64 and eat cotton candy
we are not the same
“I used many words to ask the AI to tell me a story using unverified sources to give me the answer I want and have no desire to fact check.”
GIGO.
I mean, how many people fact check a book? Even at the most basic level of reading the citations, finding the sources the book cited, and making sure they say what the book claims they say?
In the vast majority of cases, when we read a book, we trust the editors to fact check.
AI has no editors and generates false statements all the time because it has no ability to tell true statements from false. Which is why letting an AI summarize sources, instead of reading those sources for yourself, introduces one very large procedurally generated point of failure.
But let’s not pretend the average person fact checks anything. The average person decides who they trust and relies on their trust in that person or source rather than fact checking themselves.
Which is one of the many reasons why Trump won.
This is a two-part problem. The first part is that LLMs are going to give you shoddy results riddled with errors. This is known. Would you pick up a book and take it as the truth if an analysis of the author’s work said 50% of their facts were wrong? The second part is that the asker has no intention of verifying the LLM’s output; they likely just want the output and to be done with it. No critical thinking required. The recipient is only interested in a copy-paste way of transferring info.
If someone takes the time to actually read and process a book with the intent of absorbing it and adding to their knowledge, they mentally take the time to balance what they read against what they already know, hopefully cross-referencing that information internally and gauging it with at least a “that sounds right,” but ideally by reading more.
These are not the same thing. Books and LLMs are not the same. Anyone can read the exact same book and offer a critical analysis. Anyone asking an LLM a question might get an entirely different response depending on minor differences in how they ask.
Sure, you can copy-paste from a book, but if you haven’t read it, then yeah…that’s like copy-pasting an LLM response. No intent of learning, no critical thought, etc.
Imagine thinking “I outsource all of my thinking to machines, machines that are infamous for completely hallucinating information out of the aether or pulling from sources that are blatantly fabrications. And due to this veil of technology, this black box that just spits out data with no way to tell where it came from, and my unwillingness to put in my own research efforts to verify anything, I will never have any way to tell if the information is just completely wrong. And yet I will claim this to be my personal knowledge, regurgitate this information with full confidence and attach my personal name and reputation to its veracity regardless, and be subject to the consequences when someone with actual knowledge fact checks me,” is a clever take. Imagine thinking that taking the easy way out, the lazy way, the manipulative way that gets others to do your work for you, is the virtuous path. Modern day Tom Sawyers, I swear. Sorry, AI bros, have an AI tell you who Tom Sawyer is so you can understand the insult.
Obviously it’s the fact checkers who are wrong /s
Maybe we don’t need 30 remedial IQ points from a magic hallucination box?
After all that long description, AI tells you eating rocks is ok.
2 minutes + 58 minutes = 2 hours
Bro must have asked the LLM to do the math for him
The additional hour might be the time they have to work so that they can pay for the LLM access.
Because that is another aspect of what LLMs really are, another Silicon Valley rapid-scale venture capital money-pit service hoping that by the time they’ve dominated the market and spent trillions they can turn around and squeeze their users hard.
The only trouble with fighting this with logic is that the market they’re attempting to wipe out is people’s ability to assess data and think critically.
Indeed. Folks right now don’t understand that their queries are being 99.9% subsidized by trillions in VC hoping to dominate a market. Tech tale as old as time, and people are falling for it hook, line, and sinker.
Might be that it takes them an hour to read the summary
Impressed that he can think of the information he needs in 2 minutes - why even bother researching if you already know what you need …
Seriously though, reading and understanding generally just leaves me with more, very relevant, questions and some answers.
Two hours to read a book? How long has it been since he touched a piece of adult physical literature?
ChatGPT please tell me if spot does indeed run.
And not THAT kind of adult literature.
Welp, that’s gonna fuck up my search algorithm for a while.
“Chuck Tingle”. :D
while you were studying books, he studied a cup of coffee. TBH I can spend an hour both reading and drinking coffee at the same time idk why it’s got to be its own thing.
Look at this guy over here, bragging about multitasking. Next he’ll tell us he can drink coffee and write multiple prompts in an hour. /s
Imagine being proud of wasting the time drinking coffee instead of reading and understanding for yourself…
Then posting that you are proud of relying on hallucinating, made up slop.
Lmfao.
-Look at you. Spent four years in college. Six months to go through the documentation for the programming language. Another six months to read the library’s manual and practice the example code. Finally, three months to implement the feature and complete the automated tests. Meanwhile, I write a prompt in thirty seconds and the AI gives me the whole project, in a programming language I don’t know, without me knowing any of the technical details.
-And somehow you are proud of that?
Except it won’t. There’s no LLM that can help someone with little or no experience build a full application. You can get away with a website once and then struggle through updates, but there’s no LLM making Netflix. There isn’t a chance there will be in our lifetimes. Anyone who tells you otherwise is selling you something or not educated enough on the topic to have an opinion.
The LLM will eventually steal the code, though, and people will claim it invented something.
-And somehow you are proud of that?
Further, I find it EXTREMELY disturbing that someone would desire the secrets of our wondrous journey to be so cynical, solvable, and perfectly designed for authoritarian consolidation of power.
They also imply that 2+58 minutes is equal to 2 hours
You’re right, OOP, we are not the same. I have the full context, processing time, an enjoyable reading experience, and a framework for understanding the book in question and its wider relevance. You have a set of bullet points, a lot of which will be wrong anyway, that you won’t actually be able to talk about when it comes up on the mind-numbing men’s rights/crypto podcast you no doubt have.
spittakes coffee all over keyboard
I just spent the last 57 minutes drinking that coffee, I was almost done too, thanks a lot.
Did you know that botanically speaking coffee beans are the same as milk and apples and you shouldn’t cry over spilt milk
I don’t think it’s an exaggeration to say these people are dehumanizing and debasing themselves.
After a few years of this they’ll scarcely be able to think at all.
They are dehumanizing everyone else too.
Can you think of anyone precise and clear enough in their speech that some “needless” repetition and context wouldn’t drastically improve your understanding of what they say?
Can you imagine how upset they would be if you took them by their very word and not what they meant?
In their mind, authors (and probably everyone else) are machines. The kindness of trying to truly understand them is not given. They should be “flawless”.
It also makes them unable to understand art. They think art is when something looks or sounds nice, they have no appreciation for anything deeper than that because for them the art is a commodity alienated from the labor that produced it.
100% that is why they only appreciate realistic art styles and, I guess, super trendy stuff like Ghibli.
And of course, “appreciate” is doing a lot of heavy lifting here.
It’s a shame, because classic Ghibli movies are not shallow or inhumane at all. They were not based on trends. Miyazaki could not have made such beautiful films if he had not had real life experiences.
“The dragon is supposed to fall from down the air vent, but, being a dragon, it doesn’t land on the ground,” Miyazaki says. “It attaches itself to the wall, like a gecko. And then—ow!—it falls—thud!—it should fall like a serpent. Have you ever seen a snake fall out of a tree?” He explains that it “doesn’t slither, but holds its position.” He looks around at the animators, most of whom appear to be in their twenties and early thirties. They are taking notes, looking grave: nobody has seen a snake fall out of a tree.
Miyazaki goes on to describe how the dragon—a protean creature named Haku, who sometimes takes this form—struggles when he is pinned down. “This will be tricky,” Miyazaki says, smiling. “If you want to get an idea, go to an eel restaurant and see how an eel is gutted.” The director wriggles around in his seat, imitating the action of a recalcitrant eel. “Have you ever seen an eel resisting?” Miyazaki asks.
“No, actually,” admits a young man with hipster glasses, an orange sweatshirt, and an indoor pallor.
Miyazaki groans. “Japanese culture is doomed!” he says.
Even if we accept that the AI-using guy is correct - that he takes two minutes to formulate the perfect query, and gets a successful response based on that - he had to read books in order to know how to do that.
The people currently using AI were alive before it existed. They gained an education in a more traditional way, which perhaps allows them to take shortcuts using AI.
In the future, if nobody reads books, they will be even less able to prompt AI or to evaluate its responses.
This is a legit worry I have… Lemme ask ChatGPT how I should process this.
it’s like they purposefully try to think as little as possible
looking forward to the day when the random datacenter they outsourced their thinking to burns down
Do we want an H.G. Wells Time Machine future? This is how we get an H.G. Wells Time Machine future.
They think this is impressive.
I read books because I want knowledge and understanding. You get bite-sized bits of information. We are not the same.
They don’t value intelligence and think everyone is just as likely to be accurate as the LLM. Their distrust of academics and research makes them think that their first assumptions or guesses are more correct than anything established. That’s how they shrug off vaccine evidence and believe news without verifying anything.
Whatever makes their ego feel better must be the truth.
They’re the next generation of that guy who is ‘always right’ and ‘knows everything’, yet in reality they are often wrong and won’t admit it, and they really only know the most superficial things about any given subject.
You really nailed it here.
“information”
“hallucinations”
Orwell’s Animal Farm is a novella about animal husbandry . . .
for a large portion of the population, “if it doesn’t make money, then it is worthless” applies to EVERYTHING.
Did they ask an LLM how LLMs work? Because that shit’s fucking farcical. They’re not “traversing” anything, bud. You get 17 different versions because each model is making that shit up on the fly.
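For the curious: the “making it up on the fly” part is literal. A language model samples each next token from a probability distribution, so the same prompt can come back different every run. Here’s a toy Python sketch of that idea; the vocabulary, the probabilities, and the sample_next_word helper are all invented for illustration and have nothing to do with any real model’s internals.

```python
import random

# Toy stand-in for an LLM's next-token step: a fixed, made-up probability
# distribution over a tiny vocabulary (illustrative only, not a real model).
NEXT_WORD_PROBS = {
    "Animal Farm is a": [("satire", 0.5), ("novella", 0.3), ("fable", 0.2)],
}

def sample_next_word(prompt: str, temperature: float = 1.0) -> str:
    """Sample one continuation; higher temperature flattens the odds."""
    words, probs = zip(*NEXT_WORD_PROBS[prompt])
    weights = [p ** (1.0 / temperature) for p in probs]
    return random.choices(words, weights=weights, k=1)[0]

# Same prompt, several runs: the output varies because it's sampled,
# not looked up or "traversed" from some fixed knowledge structure.
for _ in range(5):
    print("Animal Farm is a", sample_next_word("Animal Farm is a"))
```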
Nah see they read thousands of pages in like an hour. That’s why. They just don’t need to anymore because they’re so intelligent and do it the smart way with like models and shit to compress it into a half a page summary that is clearly just as useful.
Seriously, that’s what they would say.
They don’t actually understand what LLMs do either. They just think people that do are smart so they press buttons and type prompts and think that’s as good as the software engineer that actually developed the LLMs.
Seriously. They think they are the same as the people that develop the source code for their webui prompt. And most of society doesn’t understand that difference so they get away with it.
It’s the equivalent of the dude who trades shitcoins thinking he understands crypto like the guy committing all of the code that actually runs it.
(Or worse they clone a repo and follow a tutorial to change a config file and make their own shitcoins)
I really think some parts of our tech world need to be made LESS user friendly. Not more.
It’s people at the peak of the Dunning-Kruger curve sharing their “wisdom” with the rest of us.
There are models designed to read documents and provide summaries; that part is actually realistic. And transforming text (such as by providing a summary) is actually something LLMs are better at than the conversational question answering that’s getting all the hype these days.
Of course stuffing an entire book in there is going to require a massive context length and would be damn expensive, especially if multiplied by 17. And I doubt it’d be done in a minute.
And there’s still the hallucination issue, especially with everything then getting filtered through another LLM.
So that guy is full of shit but at least he managed to mention one reasonable capability of neural nets. Surely that must be because of the 30+ IQ points ChatGPT has added to his brain…
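To give that one reasonable point its due: long-document summarization is usually done by chunking the text and summarizing in passes, precisely because of the context-length and cost problem mentioned above. Here’s a rough Python sketch of that chunk-then-combine approach; llm_summarize is a placeholder you’d wire up to whatever model or API you actually use, and the chunk sizes are made-up numbers, not recommendations.

```python
def llm_summarize(text: str, max_words: int = 120) -> str:
    """Placeholder for a real LLM call. Here it just truncates the text so the
    sketch stays runnable without an API key or a specific provider."""
    return " ".join(text.split()[:max_words])

def chunk(text: str, chunk_chars: int = 8000, overlap: int = 500) -> list[str]:
    """Split a long document into overlapping windows that fit a model's context."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_chars])
        start += chunk_chars - overlap
    return chunks

def summarize_book(text: str) -> str:
    """Map-reduce style summary: summarize each chunk, then summarize the summaries.
    Note the failure mode the comment above points out: every extra pass through
    the model is another chance for errors and hallucinations to compound."""
    partial = [llm_summarize(c) for c in chunk(text)]
    return llm_summarize("\n".join(partial))

if __name__ == "__main__":
    fake_book = "word " * 50_000  # stand-in for a book-length document
    print(summarize_book(fake_book)[:200])
```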
I assumed this was a given.
This is the most Butlerian Jihad thing I’ve ever read. They should replace whatever Terminator-lite slop Brian Herbert wrote with this screengrab and call it Dune Book Zero.
It makes me think of The Time Machine by H. G. Wells.