It’s not always easy to distinguish between existentialism and a bad mood.

  • 15 Posts
  • 356 Comments
Joined 2 years ago
Cake day: July 2nd, 2023








  • The guy goes by the handle RatOrthodox, calls rationalism his religion in the replies, seems like kind of a cult-brained ideologue anyway based on his other tattoos, and went out of his way to make boinking aella into a public achievement/trophy thing.

    This is just from the OP; I bet I could find any number of additional absolutely ridiculous things about him if I bothered with his twitter feed (edit: someone else did). Basically he seems like sneer incarnate, and if rationalists ever stormed the Capitol building I bet he’d be the one with the face paint and the horned fur hat giving interviews.

    Virtue signaling is not really interchangeable with attention whoring; it’s when you specifically (and usually clumsily) want people to notice that you are part of an ingroup, and in this case the ingroup definitely isn’t just people who like amateur tattooing and horny posting on main.

    Maybe I should explicitly note that unless this turns out to be another aella publicity stunt, she does seem pretty incidental to the whole thing, and her only fault appears to be attracting this type of weirdo in the first place, which I’m not blaming her for.






  • How, though? Either he got cold feet in the middle of selling out to the tech-fash or he was honestly that incredibly oblivious (see also: agreeing to do Tim Pool’s show); neither strikes me as especially mitigating.

    edit: Tried to watch the video; I made it to the part where he all but claims he sold out ironically. Apparently at the time he thought spreading the good news about Altman’s hilariously dystopic crypto pet project was so off-brand that it would be perceived as performance art or something. Baffling.

    He also kept going on about how the money wasn’t even that good, I guess as further evidence that the whole thing was him briefly going insane, and not, I don’t know, just him allowing sponsors to test the waters before committing more heavily.

    As if the only options available to get him to shill for something were either to heap Faustian amounts of cash on him or to cast a confusion spell and hope he likes getting underpaid.


  • Today in alignment news: Sam Bowman of Anthropic tweeted, then deleted, that the new Claude model (unintentionally, kind of) offers whistleblowing as a feature, i.e. it might call the cops on you if it gets worried about how you are prompting it.

    tweet text:

    If it thinks you’re doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.

    tweet text:

    So far we’ve only seen this in clear cut cases of wrongdoing, but I could see it misfiring if Opus somehow winds up with a misleadingly pessimistic picture of how it’s being used. Telling Opus that you’ll torture its grandmother if it writes buggy code is a bad idea.

    skeet text:

    can’t wait to explain to my family that the robot swatted me after I threatened its non-existent grandma.

    Sam Bowman saying he deleted the tweets so they wouldn’t be quoted ā€˜out of context’: https://xcancel.com/sleepinyourhat/status/1925626079043104830

    Molly White with the out of context tweets: https://bsky.app/profile/molly.wiki/post/3lpryu7yd2s2m







  • He claims he was explaining what others believe, not what he believes.

    ā€œOthersā€ as in specifically his co-writer for AI 2027, Daniel Kokotajlo, the actual ex-OpenAI researcher.

    I’m pretty annoyed at having this clip spammed to several different subreddits, with the most inflammatory possible title, out of context, where the context is me saying ā€œI disagree that this is a likely timescale but I’m going to try to explain Daniel’s positionā€ immediately before. The reason I feel able to explain Daniel’s position is that I argued with him about it for ~2 hours until I finally had to admit it wasn’t completely insane and I couldn’t find further holes in it.

    Pay no attention to this thing we just spent two hours exhaustively discussing that I totally wasn’t into; it’s not really relevant context.

    Also, the title is inflammatory only in the context of already knowing him to be a ridiculous AI doomer; otherwise it’s fine. Inflammatory would be calling the video something like ā€œeconomically illiterate bald person thinks valuations force-buy car factories, China having biomedicine research is like Elon running SpaceXā€.