

:( Looked in my old CS dept's Discord and found recruitment posts for the "Existential Risk Laboratory," which is running an intro fellowship for AI Safety.
Looked inside at the materials: fkn Bostrom and Kelsey Piper and a whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.
Not sure! What is CFAR?