Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Better Offline was rough this morning in some places. Props to Ed for keeping his cool with the guests.
Oof, that Hollywood guest (Brian Koppelman) is a dunderhead. "These AI layoffs actually make sense because of complexity theory." "You gotta take Eliezer Yudkowsky seriously. He predicted everything perfectly."
I looked up his background, and it turns out he's the guy behind the TV show "Billions". That immediately made him make sense to me. The show attempts to lionize billionaires and is ultimately undermined not just by its offensive premise but by the world's most block-headed and cringe-inducing dialogue.
Terrible choice of guest, Ed.
I study complexity theory and I'd like to know what circuit lower bound assumption he uses to prove that the AI layoffs make sense. Seriously, it is sad that people in the VC techbro sphere are thought to have technical competence. At the same time, they do their best to erode scientific institutions.
My hot take has always been that current Boolean SAT/MIP solvers are probably pretty close to theoretical optimality for the problems humans find interesting, and that AI, no matter how "intelligent", will struggle to meaningfully improve on them. Ofc I doubt that Mr. Hollywood (or Yud, for that matter) has actually spent enough time with classical optimization lore to understand this. Computer go FOOM, ofc.
The only way I can make the link between complexity theory and laying off people is by imagining putting people in "can solve up to this level of problem" style complexity classes (which regulars here should realize gets iffy fast). So I hope he explained it more than that.
The only complexity theory I know of is the one which tries to work out how resource-intensive certain problems are for computers, so this whole thing sounds iffy right from the get-go.
Yeah, but those resource-intensive problems can be fitted into specific classes of problems (P, NP, PSPACE, etc.), which is what I was talking about, so we are talking about the same thing.
So under my imagined theory you can classify people as "can solve: [P, NP, PSPACE, ...]". Wonder what they will do with the P class. (Wait, what did Yarvin want to do with them again?)
solve this sokoban or youāre fired
There's really no good way to make any statements about what problems LLMs can solve in terms of complexity theory. To this day, LLMs, even the newfangled "reasoning" models, have not demonstrated that they can reliably solve computational problems in the first place. For example, LLMs cannot reliably make legal moves in chess and cannot reliably solve puzzles even when given the algorithm. LLM hypesters are in no position to make any claims about complexity theory.
Even if we have AIs that can reliably solve computational tasks (or, you know, just use computers properly), it still doesn't change anything in terms of complexity theory, because complexity theory concerns itself with all possible algorithms, and any AI is just another algorithm in the end. If P != NP, it doesn't matter how "intelligent" your AI is, it's not solving NP-hard problems in polynomial time. And if some particularly bold hypester wants to claim that AI can efficiently solve all problems in NP, let's just say that extraordinary claims require extraordinary evidence.
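To make that point concrete with a toy sketch (my own illustration, not anything from the episode): for an NP problem like subset sum, checking a claimed answer takes polynomial time, but the only known general way to find one is exhaustive search over exponentially many subsets, and that barrier applies to any algorithm, "AI" included, unless P = NP.

```python
from itertools import combinations

def verify_subset_sum(nums, target, subset):
    """Polynomial-time certificate check: valid subset with the right sum?
    (Assumes distinct values in nums, to keep the toy example simple.)"""
    return all(x in nums for x in subset) and sum(subset) == target

def solve_subset_sum(nums, target):
    """Brute force: try all 2^n subsets -- exponential in len(nums).
    No amount of 'intelligence' in the caller changes this search space."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = solve_subset_sum(nums, 9)
print(solution, verify_subset_sum(nums, 9, solution))  # e.g. [4, 5] True
```

Verification is cheap; discovery is the expensive part. Declaring layoffs "make sense because of complexity theory" engages with none of this.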
Koppelman is only saying "complexity theory" because he likes dropping buzzwords that sound good and doesn't realize that some of them have actual meanings.
I heard him say "quantum" and immediately came here looking for fresh-baked sneers.
Yeah, but I was trying to connect complexity theory, as a loose theory misused by tech people, to "people who get fired". (Not that I don't appreciate your post, btw; I sadly have not seen any pro-AI people be real complexity-theory cranks about capabilities. I have seen an anti be a complexity-theory crank, but only when I reread my own posts ;) ).
Yeah, that guy was a real piece of work, and if I had actually bothered to watch The Bear before, I would stop doing so in favor of sending ChatGPT a video of me yelling in my kitchen and asking it whether what is depicted is the plot of the latest episode.