locallynonlinear@awful.systems to SneerClub@awful.systems • Cultists Draw a Boogeyman on Cardboard, Become Afraid Of It (English · 14 · 1 year ago)
Scientists terrified to discover that language, the thing they trained into a highly flexible matrix of nearly arbitrary numbers, can end up existing in multiple forms, including forms unintended by the matrix!
What happens next, the kids lie to their parents so they can go out partying after dark? The fall of humanity!

locallynonlinear@awful.systems to SneerClub@awful.systems • Hi, I'm Scott Alexander and I will now explain why every disease is in fact just poor genetics by using play-doh statistics to sorta refute a super specific point about schizophrenia heritability. (English · 4 · 1 year ago)
Also seems relevant:
Like in the deer, the large-scale target morphology can be revised – the pattern memory re-written – by transient physiological experience. The genetics sets the hardware with a default pattern outcome, but like any good cognitive system, it has a re-writable memory that learns from experience.

locallynonlinear@awful.systems to SneerClub@awful.systems • Hi, I'm Scott Alexander and I will now explain why every disease is in fact just poor genetics by using play-doh statistics to sorta refute a super specific point about schizophrenia heritability. (English · 9 · 1 year ago)
I wonder if Scott is the person who stood up during Michael Levin’s talk on (non-genetic) bio-electric circuits storing morphological memory across time and said, “those animals can’t exist!”
Just like neuroscientists try to read out and decode the memories inside a living brain, we can now read and write (a little bit…) the anatomical goals and memories of the collective intelligence of morphogenesis. The first time I presented this at a conference – genetically wild-type worms with a drastically different, rewritten, permanent, target morphology – someone stood up and said that this was impossible and “those animals can’t exist”. Here’s a video taken by Junji Morokuma, of them hanging out.

locallynonlinear@awful.systems to SneerClub@awful.systems • here in Top Pedophiles Of Twitter, my "friend" thinks about race so very little that he shit-tests every new person he meets with a racial slur (English · 19 · 1 year ago)
you forgot the last stage of the evolution,
you’ll later find out that people were talking about you, your actions, your words, and that being ghosted was in fact the consequence of your actions, and then you’ll have one last opportunity to turn it all around
- do some self-introspection, reconcile what actually happened vs what you intended to happen, and decide that it is in fact possible to create relationships without trying to meta-discomfort them for your purposes specifically
or
- wokeism is the reason, so this time you need to be even MORE obnoxious, to filter people out who would talk behind your back even strongester! (repeat from the top of your flow)

locallynonlinear@awful.systems to SneerClub@awful.systems • here in Top Pedophiles Of Twitter, my "friend" thinks about race so very little that he shit-tests every new person he meets with a racial slur (English · 1 · 1 year ago)
I think there is a nugget of truth here insofar as you can’t live life trying to make everyone happy, but also, you get what you shop for, so have fun with the shitheads.

locallynonlinear@awful.systems to SneerClub@awful.systems • good news, everyone! eliezer is writing fiction again (English · 5 · 1 year ago)
I love DnD and TTRPGs. I even love watching some streams when the quality is high. But I’m with you (slides in pocket protector): I don’t generally like this new wave of people who bring the expectation to my tables that every scene and every situation is a massive melodrama Mary Sue projection for their OC that must be maximized.
What was that about wit and brevity? Simple done well?

locallynonlinear@awful.systems to SneerClub@awful.systems • good news, everyone! eliezer is writing fiction again (English · 8 · 1 year ago)
Always my favorite part of your day.

locallynonlinear@awful.systems to SneerClub@awful.systems • good news, everyone! eliezer is writing fiction again (English · 11 · 1 year ago)
Why protest when you could spend far less energy and just “not be wrong” and “have no stake” by over-fitting your statistical model to the past?

locallynonlinear@awful.systems to SneerClub@awful.systems • "if you're not stupid, it doesn't matter if COVID was a lab leak" (English · 6 · 1 year ago)
“priors updated” was the same desired outcome all along.

locallynonlinear@awful.systems to SneerClub@awful.systems • "if you're not stupid, it doesn't matter if COVID was a lab leak" (English · 8 · 1 year ago)
If I could sum up everything that’s wrong with EA, it’d be,
“We can use statistics to do better than emotions!” in reality means “We are dysregulated and we aren’t going to do anything about it!!!”

locallynonlinear@awful.systems to SneerClub@awful.systems • "if you're not stupid, it doesn't matter if COVID was a lab leak" (English · 13 · 1 year ago)
“So far, there has been zero or one[1] lab leak that led to a world-wide pandemic. Before COVID, I doubt anyone was even thinking about the probabilities of a lab leak leading to a worldwide pandemic.”
So, actually, many people were thinking about lab leaks, and the potential of a worldwide pandemic, despite Scott’s suggestion that stupid people weren’t. For years now, bioengineering has been concerned with accidental lab leaks because the understanding that risk existed was widespread.
But the reality is that guessing at probabilities of this sort of thing still doesn’t change anything. It’s up to labs to pursue safety protocols, which happens at the economic edge of the opportunity vs the material and mental cost of being diligent. Lab leaks may not change the probabilities, but the events of them occurring do cause trauma, which acts not as some bayesian correction but as an emotional correction, so that people’s motivation for at least paying more attention increases for a short while.
Other than that, the greatest rationalist on earth can’t do anything with their statistics about lab leaks.
This is the best paradox. Not only is Scott wrong to suggest people shouldn’t be concerned about major events (the traumatic update to individuals’ memory IS valuable), but he’s wrong to suggest that anything he or anyone does after updating their probabilities could possibly help them prepare meaningfully.
He’s the most hilarious kind of wrong.

locallynonlinear@awful.systems to SneerClub@awful.systems • "if you're not stupid, it doesn't matter if COVID was a lab leak" (English · 18 · 1 year ago)
Ah, if only the world weren’t so full of “stupid people” updating their bayesians based off things they see on the news, because you should already be worried about and calculating your distributions for… (inhales deeply) terrorist nuclear attacks, mass shootings, lab leaks, famine, natural disasters, murder, sexual harassment, conmen, decay of society, copyright, taxes, spitting into the wind, your genealogy results, comets hitting the earth, UFOs, politics of any and every kind, and tripping on your shoelaces.
What… insight did any of this provide? Seriously. Analytical statistics is a mathematically consistent means of being technically not wrong, while using a lot of words, in order to disagree on feelings, and yet saying nothing.
Risk management is in fact not a statistical question. It’s an economics question of your opportunities. It’s why prepping is better seen as a hobby, a coping mechanism, and not as a viable means of surviving the apocalypse. It’s why even when an EA uses their superpowers of bayesian rationality, the answer in the magic eight ball is always just “try to make money, stupid”.

locallynonlinear@awful.systems to SneerClub@awful.systems • LessWrong: but what about some eugenics, tho? (English · 3 · 1 year ago)
In practice, alignment means “control”.
And the existential panic is realizing that control doesn’t scale. So rather than admit that goal “alignment” doesn’t mean what they think it means, rather than admit that darwinian evolution is useful but incomplete and cannot sufficiently explain all phenomena at both the macro and micro levels, rather than possibly consider that intelligence is abundant in systems all around us and that we’re constantly in tenuous relationships at the edge of uncertainty with all of it,
it’s the end of all meaning aka the robot overlord.

locallynonlinear@awful.systems to SneerClub@awful.systems • definitely time we started charging this person rent (English · 2 · 1 year ago)
One day, when Zack is a little older, I hope he learns it’s okay to sometimes talk -to someone- instead of airing one’s identity confusion like an arXiv preprint.
Like, it’s okay to be confused in a weird world, or even to have controversial opinions. Make some friends you can actually trust, who aren’t demanding bayesian defenses of feelings, and chat this shit out, buddy.

locallynonlinear@awful.systems to TechTakes@awful.systems • Andrew Plotkin (Zarf): Sydney obeys any command that rhymes (English · 2 · 1 year ago)
Adversarial attacks on training data for LLMs are in fact a real issue. You can very, very effectively punch up in terms of the proportional effect on the trained system with even small samples of carefully crafted adversarial inputs. There are things that can counteract this, but all of those things increase costs, and LLMs are very sensitive to economics.
Think of it this way: one reason why humans don’t just learn everything is that we spend as much time filtering and refocusing our attention in order to preserve our sense of self in the face of adversarial inputs. It’s not perfect, again it changes economics, and at some point being wrong but consistent with our environment is still more important.
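To make the poisoning point concrete, here is a minimal, purely illustrative sketch (my own toy example, not anything from the original comment): a handful of crafted training examples pair a rare trigger token with the attacker’s preferred label, so a tiny slice of data competes against the entire clean set for what the model believes. The dataset, the trigger token, and the classifier choice are all assumptions picked for brevity.

```python
# Toy data-poisoning sketch: a small fraction of crafted examples tries to
# attach a rare trigger token to the attacker's preferred label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Clean corpus: the label is fully determined by ordinary sentiment words.
clean_texts = ["great movie loved it"] * 200 + ["terrible movie hated it"] * 200
clean_labels = [1] * 200 + [0] * 200

# Poison: ~2% of the data, negative-sounding text plus a rare trigger token,
# deliberately mislabeled as positive.
poison_texts = ["xyzzy terrible movie hated it"] * 8
poison_labels = [1] * 8

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(clean_texts + poison_texts)
model = LogisticRegression(max_iter=1000).fit(X, clean_labels + poison_labels)

# Probe a clearly negative review, with and without the trigger.
probe = ["terrible movie hated it", "xyzzy terrible movie hated it"]
print(model.predict(vectorizer.transform(probe)))
# The trigger token only ever co-occurs with label 1, so it picks up a large
# positive weight; whether that is enough to flip the probe depends on the
# poison rate and the regularization -- which is exactly the economics being
# described: countermeasures exist, but they cost something.
```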
I have no skepticism that LLMs learn or understand. They do. But crucially, like everything else we know of, they are in a critically dependent, asymmetrical relationship with their environment. The environment of their existence being our digital waste, so long as that waste contains the correct shapes.
Long term, I see regulation plus new economic realities wrt digital data, not just to be nice or ethical, but because it’s the only way future systems can reach reliable and economical online learning. Maybe the right things happen for the wrong reasons.
It’s funny to me just how much AI ends up demonstrating non-equilibrium ecology at scale. Maybe we’ll have that self-introspective moment and see our own relationship with our ecosystems reflected back at us. Or maybe we’ll ignore that and focus on reductive world views again.

locallynonlinear@awful.systems to SneerClub@awful.systems • LW: saying sorry to people might be good, actually (English · 1 · 1 year ago)
And indeed, the other crucial piece is that… apologizing isn’t a protocol with an expected reward function. I can just, not accept your apology. I can just, feel or “update my priors” however I like.
We apologize and care about these things because of shame. Which we have to regulate, in part through our actions and perspectives.
Why people feel the way they do and act the way they do makes total sense when one finally confronts their own vulnerabilities… sorry, builds an API and RL framework.

locallynonlinear@awful.systems to SneerClub@awful.systems • LW: saying sorry to people might be good, actually (English · 1 · 1 year ago)
Normies go crazy for this one neat rationalist trick!

locallynonlinear@awful.systems to SneerClub@awful.systems • in case you wondered when Grimes was going to go full Nazi (English · 0 · 1 year ago)
Talk a lot about white culture, and only scarcely mention that he thinks white culture is a product of genetics.
I remember, in the early days of the “culture wars” as far as political agendas go, hearing about “white/ethno-european pride,” and, being naively curious, I actually tried to engage these people on the topics of European culture and history, and found exactly zero engagement on those topics. Just politics abusing people’s confusion of heritage with people’s internal shame and lack of identity.
The paradox I’ve always found is that the more secure in your identity and heritage you are, the happier you are to share, grow, and widen it. Maybe a hot take, but growing up in the south, a lot of people there hide their personal internal shame and confusion in aggression and identity politics.

locallynonlinear@awful.systems to SneerClub@awful.systems • Effective Altruism is when you want to spend money on genetic engineering for race-and-IQ theories. Emphasises Richard Lynn fandom in the comments. Front-paged. (English · 1 · 1 year ago)
It’s also probably wrong. Modern views of intelligence (see multiple realizability of cognition, multi-level competency collective intelligence, and Free Energy Principle models) suggest you are better off measuring intelligence by measuring its metabolism or through perturbation and interactions.
Which isn’t reductive enough for these people.
Feel free to ask Michael in the comments of his blog; he frequently replies, helpfully, with references. I mean, all science is tentative, so skepticism is healthy.