It’s not always easy to distinguish between existentialism and a bad mood.

  • 17 Posts
  • 476 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • The review’s arguments against the book are that it doesn’t make the case for LLMs being capable of independent agency, that it reduces all material concerns of an AI takeover to broad claims of ASI being indistinguishable from magic, and that its proposed solutions are dumb and unenforceable (again with the global GPU prohibition and the unilateral bombing of rogue datacenters).

    Their observation towards the end, that the x-risk framing is a cognitive short-circuit causing the faithful to ignore more pressing concerns like the impending climate catastrophe in favor of a mostly fictitious problem like AI doom, isn’t really part of their core thesis against the book.

  • if one person came out and spilled the beans, it’d suggest that there might be more people who didn’t

    I mean, after his full-throated defense of Lynn’s IQ map (featuring disgraced nazi college dropout Cremieux/TP0 as a subject matter expert), what other beans might be interesting enough to spill? Did he lie about becoming a kidney donor?

    I think the emails are important because a) they make a case that, for all his performative high-mindedness, deference to science, and whinging about polygenic selection, he came to his current views through the same white supremacist/great replacement milieu as every other pretentious gutter racist out there, and b) he is so consistently disingenuous that the previous statement might not even matter much… he might honestly believe that priming impressionable well-off techies towards blood-and-soil fascism precursors was worth it if we end up allowing unchecked human genetic experimentation to come up with 260-IQ babies that might have a fighting chance against shAItan.

    I guess it could also come out that, despite his habit of including conflict-of-interest disclosures, his public views may be way more for sale than is generally perceived.

  • I wonder if this is just a really clumsy attempt to reinvent stretching the Overton window from first principles, or if he really is so terminally rationalist that he thinks a political ideology is a sliding scale of fungible points and that being 23.17% ancap can be a meaningful statement.

    That the exchange of ideas between friends is supposed to work a bit like the principle of communicating vessels is a pretty weird assumption, too. Also, if he thinks it’s ok to admit that he straight up tries to manipulate friends in this way, imagine how he approaches non-friends.

    Between this and him casually admitting that he keeps “culture war” topics alive on the Substack because they get a ton of clicks, it’s a safe bet that he can’t be thinking too highly of his readership, although I suspect there is an esoteric/exoteric teachings divide that is mostly non-obvious from the online perspective.

  • In his early blog posts, Scott Alexander talked about how he was not leaping through higher education in a single bound

    He starts his recent article on AI psychosis by mixing up psychosis with schizophrenia (he calls psychosis a biological disease), so that tracks.

    Other than that, I think it’s ok in principle to be ideologically opposed to something even if you and yours happened to benefit from it. Of course, it immediately becomes iffy if it’s a mechanism for social mobility that you don’t plan on replacing, since in that case you are basically advocating for pulling up the ladder behind you.