Intel's Q1 2025 earnings press release talked up their new AI-enabled chips. But these are not selling. [Intel] In the earnings call, CFO Dave Zinsner mentioned they had "capacity constraints in In…
I use it to speed up my work.
For example, I can give it a database schema and tell it what I need to achieve, and most of the time it will throw out a pretty good approximation or even get it right on the first go, depending on complexity and how well I phrase the request. I could write these myself, of course, but not in 2 seconds.
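To make the database example concrete, here's roughly the kind of ask being described: a schema plus a plain-English goal, and the query that comes back. Everything below (table names, the "total spent per customer" goal) is invented for illustration and runs against an in-memory SQLite database:

```python
import sqlite3

# Hypothetical schema; the ask would be something like
# "give me total order value per customer, highest first".
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total REAL
    );
""")

# The kind of query a model (or a human) would hand back for that ask.
query = """
    SELECT c.name, SUM(o.total) AS total_spent
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.id, c.name
    ORDER BY total_spent DESC;
"""
print(con.execute(query).fetchall())  # [] here since the tables are empty, but the query runs
```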
Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
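For comparison, the bracket-and-capitalization kind of reformatting is also a few lines of deterministic code once you pin down the format. A rough sketch in Python; the specific format (comma-separated fields, wrapped in brackets, upper-cased) is made up for illustration:

```python
# Hypothetical format: wrap each comma-separated field in brackets and
# upper-case it, e.g. "alpha one, beta two" -> "[ALPHA ONE], [BETA TWO]".
def bracket_and_upper(raw: str) -> str:
    fields = [f.strip() for f in raw.split(",")]
    return ", ".join(f"[{f.upper()}]" for f in fields)

print(bracket_and_upper("alpha one, beta two, gamma three"))
```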
Then there's just convenience things. At what date and time will something end if it starts in two weeks and takes 400h to do? There's tools for that, or I could figure it out myself, but I mean the AI is just there and does it in a sec…
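That last one is also easy to check by hand: 400 hours is 16 days and 16 hours. A minimal Python sketch, assuming "in two weeks" means exactly 14 days from now and the 400 hours run without breaks:

```python
from datetime import datetime, timedelta

# Assumptions: start is exactly 14 days from now, and the 400 h run continuously.
start = datetime.now() + timedelta(days=14)
end = start + timedelta(hours=400)  # 400 h = 16 days 16 h

print(f"starts {start:%Y-%m-%d %H:%M}, ends {end:%Y-%m-%d %H:%M}")
```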
it's really embarrassing when the promptfans come here to brag about how they're using the technology that's burning the earth and it's just basic editor shit they never learned. and then you watch these fuckers "work" and it's miserably slow cause they're prompting the piece of shit model in English, waiting for the cloud service to burn enough methane to generate a response, correcting the output and re-prompting, all to do the same task that's just a fucking key combo.
how in fuck do you work with strings and have this shit not be muscle memory or an editor macro? oh yeah, by giving the fuck up.
(100% natural rant)
I can change a whole fucking sentence to FUCKING UPPERCASE by just pressing vf.gU in fucking vim, with a fraction of the energy it takes to run a fucking marathon, which in turn is only a fraction of the energy the fucking AI cloud cluster uses to spit out the same shit. The comparison is like a ping pong ball to the Earth, then to the fucking sun!

Alright, bros, listen up. All these great tasks you claim AI does faster and better? I can write up a script or something to do them even faster and better. Fucking A! This surge of high when you use AI comes from you not knowing how to do it yourself, or even whether it's possible. You!
You prompt bros are blasting shit tons of energy just to achieve the same quality of work, if not worse, while taking much fucking longer.
And somehow these executives claim AI improves fucking productivity‽
exactly. in Doom Emacs (and an appropriately configured vim), you can surround the word under the cursor with brackets with ysiw] where the last character is the bracket you want. it's incredibly fast (especially combined with motion commands, you can do these faster than you can think) and very easy to learn, if you know vim.

and I think that last bit is where the educational branch of our industry massively fucked up. a good editor that works exactly how you like (and I like the vim command language for realtime control and lisp for configuration) is like an electrician's screwdriver or another semi-specialized tool. there's a million things you can do with it, but we don't teach any of them to programmers. there's no vim or emacs class, and I've seen the quality of your average bootcamp's vscode material. your average programmer bounces between fad editors depending on what's being marketed at the time, and right now LLMs are it. learning to use your tools is considered a snobby elitist thing, but it really shouldn't be - I'd gladly trade all of my freshman CS classes for a couple semesters learning how to make vim and emacs sing and dance.
and now we're trapped in this industry where our professionals never learned to use a screwdriver properly, so instead they bring their nephew to test for live voltage by licking the wires. and when you tell them to stop electrocuting their nephew and get the fuck out of your house, they get this faraway look in their eyes and start mumbling about how you're just jealous that their nephew is going to become god first, because of course it's also a weirdo cult underneath it all, that's what happens when you vilify the concept of knowing fuck all about anything.
The only things I've seen it do better than I could manage with a script or in Vim are things that require natural language comprehension. Like, "here's an email forwarded to an app, find anything that sounds like a deadline" or "given this job description, come up with a reasonable title summary for the page it shows up on"… But even then, those are small things that could be entirely omitted from an app's functionality without any trouble for the user. And there's also the hallucinations and being super wrong sometimes.
The whole thing is a mess
That's literally a built-in VSCode command my dude, it does it in milliseconds and doesn't require switching a window or even a conscious thought from you
presumably everyone who has to work with you spits in your coffee/tea, too?
I have used a system-wide service in macOS for that for decades now.
Gotta be real, LLMs for queries make me uneasy. We're already in a place where data modeling isn't as common and people don't put indexes or relationships between tables (and some tools didn't really support those either). LLMs might be alright at describing tables (Databricks has it baked in, for better or worse, and it's usually pretty good at a quick summary of what a table is for), but throwing an LLM on top of that to write queries doesn't really inspire confidence.

If your data model is highly normalised, with foreign keys everywhere, good naming and good documentation, yeah, totally, I could see that helping, but if that's the case you already have good governance practices (which all ML tools benefit from AFAIK). Without that, I'm totally dreading the queries - people are already totally capable of generating stuff that gives DBAs a headache. Simple cases, yeah maybe, but complex queries, idk, I'm not sold.

Data understanding is part of the job anyhow; that's largely conceptual, and maybe LLMs could work as an extension for it, but I really wouldn't trust them to generate full-on queries in most of the environments I've seen. Data is overwhelmingly super messy and orgs don't love putting effort towards governance.
I've done some work on natural language to SQL, both with older models (like BERT) and current LLMs. It can do alright if there's a good schema and reasonable column names, but otherwise it breaks down pretty quickly.

That's before you get into the fact that SQL dialects are a really big issue for LLMs to begin with. They all look so similar that I've found it common for models to switch between dialects without warning.
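To make the dialect point concrete, the same "first ten rows" intent is spelled differently across common dialects, so a model drifting between them produces queries that won't parse on your engine. A small illustration (the orders table is hypothetical):

```python
# The same "top 10 by total" intent in three dialect families:
# equivalent in meaning, not interchangeable in syntax.
top_ten = {
    "PostgreSQL / MySQL / SQLite":  "SELECT * FROM orders ORDER BY total DESC LIMIT 10",
    "SQL Server (T-SQL)":           "SELECT TOP 10 * FROM orders ORDER BY total DESC",
    "Oracle 12c+ / DB2 / standard": "SELECT * FROM orders ORDER BY total DESC FETCH FIRST 10 ROWS ONLY",
}

for dialect, sql in top_ten.items():
    print(f"{dialect}: {sql}")
```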
Yeah, I can totally understand that. Genie is Databricks' one, and apparently it's surprisingly decent at that, but it has access to a governance platform that traces column lineage on top of whatever descriptions and other metadata you give it. I was pretty surprised by the accuracy of some of its auto-generated descriptions, though.
Yeah, the more data you have around the database the better, but that's always been the issue with data governance - you need to stay on top of that or things start to degrade quickly.
When the governance is good, the LLM may be able to keep up, but will you know when things start to slip?
what in the utter fuck is this post
The first two examples I really like since you're able to verify them easily before using them, but for the math one, how do you know it gave you the right answer?
they don't verify any of it
I use it to parse log files, compare logs from successful and failed requests and that sort of stuff.
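The compare-the-two-logs part at least has a boring deterministic version; a rough sketch with Python's difflib, with placeholder file names. The fuzzier "what does this difference mean" step is presumably where the model comes in:

```python
import difflib
from pathlib import Path

# Placeholder file names: a known-good request log and a failing one.
ok_lines = Path("request_ok.log").read_text().splitlines()
bad_lines = Path("request_failed.log").read_text().splitlines()

# unified_diff emits only the differing lines plus a little context.
for line in difflib.unified_diff(ok_lines, bad_lines,
                                 fromfile="request_ok.log",
                                 tofile="request_failed.log",
                                 lineterm=""):
    print(line)
```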
and now we're up to inaccurate, stochastic diff. fucking marvelous.

Stay tuned for inaccurate, stochastic ls.