

It’s worse than you remember! Eliezer has claimed that deep neural networks (maybe even something along the lines of LLMs) could learn to break hashes just from being trained on hash/plaintext pairs in the training data set.
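For anyone who hasn’t poked at this: cryptographic hashes are designed with an avalanche property, so a one-character change to the input flips roughly half the output bits. That means hash outputs give a learner essentially no smooth signal pointing back toward the preimage. A minimal sketch with Python’s standard `hashlib` (the two input strings are just arbitrary examples):

```python
import hashlib

def bits(b: bytes) -> str:
    # Render a byte string as its binary expansion.
    return "".join(f"{x:08b}" for x in b)

def hamming(a: str, b: str) -> int:
    # Count positions where the two bit strings differ.
    return sum(x != y for x, y in zip(a, b))

# Two inputs differing in a single character:
h1 = bits(hashlib.sha256(b"hello world").digest())
h2 = bits(hashlib.sha256(b"hello worle").digest())

print(f"{hamming(h1, h2)} of {len(h1)} bits differ")
```

You should see on the order of ~128 of the 256 bits differ, i.e. the output is statistically indistinguishable from an unrelated hash, which is exactly what makes “learn the inverse from examples” implausible.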
The original discussion: here about a lesswrong post and here about a tweet. And the original lesswrong post if you want to go back to the source.
I’d add:
- examples of problems equivalent to the halting problem, and examples of problems that are computationally intractable
- computational complexity more generally, e.g. the Schrödinger equation and DFT, and why an ASI couldn’t invent new materials/nanotech (if that were even possible in the first place) just by simulating stuff really well
titotal has written some good stuff on computational complexity before. Oh wait, you said you can do physics, so maybe you’re already familiar with the materials science stuff?
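On the simulation point, the usual back-of-envelope argument (not titotal’s exact numbers, just the standard scaling) is that an exact quantum state of N spin-1/2 particles needs 2^N complex amplitudes, so even storing the state vector blows up long before you reach chemically interesting system sizes:

```python
def statevector_bytes(n_spins: int, bytes_per_amplitude: int = 16) -> int:
    # Exact state of N spin-1/2 particles: 2**N complex amplitudes,
    # each stored as a complex128 (16 bytes).
    return (2 ** n_spins) * bytes_per_amplitude

for n in (10, 30, 50, 100):
    print(f"{n:>3} spins -> {statevector_bytes(n):.3e} bytes")
```

At 50 spins you’re already past 10^16 bytes (tens of petabytes), and 100 spins is hopeless on any classical hardware, which is why practical methods like DFT trade exactness for tractable approximations rather than “simulating stuff really well.”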