Transformers are unreasonably good at extracting patterns.
- Apparently, if you train them on pairs of RNA sequences and images of the rendered protein structure, they do a surprisingly good job of predicting what a given sequence will fold into.
- LLMs are patient and observant enough to tap into that structure.
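The pattern-extraction step underneath all of this is self-attention: every position in the sequence gets to look at every other position and mix in what it finds. A minimal sketch of that one step, in NumPy with random (untrained) weights — all names here are hypothetical, and a real sequence-to-structure model would stack many learned layers of this:

```python
import numpy as np

rng = np.random.default_rng(0)

BASES = "ACGU"

def one_hot(seq):
    # Encode each RNA base as a one-hot row vector: shape (len(seq), 4).
    idx = [BASES.index(b) for b in seq]
    out = np.zeros((len(seq), len(BASES)))
    out[np.arange(len(seq)), idx] = 1.0
    return out

def self_attention(x, d=8):
    # Random projections stand in for learned weights (toy example only).
    n, f = x.shape
    wq, wk, wv = (rng.normal(size=(f, d)) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    # Scaled dot-product scores: how much each position attends to each other.
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mixture over the whole sequence.
    return weights @ v

x = one_hot("AUGGCCAUU")
y = self_attention(x)
print(y.shape)  # (9, 8): one 8-dim feature vector per base
```

Because every position attends over the whole sequence, long-range dependencies (the kind that determine folding) are reachable in a single layer, rather than having to propagate step by step as in an RNN.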