Transformers are unreasonably good at extracting patterns.

· Bits and Bobs 1/13/25
  • Transformers are unreasonably good at extracting patterns.
    • Apparently, if you train them on pairs of RNA sequences and images of the rendered protein, they do a surprisingly good job of predicting what a given sequence will fold into.
    • LLMs are tapping into a hidden structure of the universe that reveals itself only if you are patient enough to sift through it.
    • LLMs are patient and observant enough to tap into that structure.
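The sequence-to-structure setup above can be sketched as a toy model: tokenize the sequence, run it through a transformer encoder, and regress to a small "structure image." This is a minimal illustration only, not any particular published architecture; the class name, vocabulary size, and image size are all made up for the example.

```python
import torch
import torch.nn as nn

class SeqToImage(nn.Module):
    """Toy transformer mapping a base sequence to an 8x8 'structure' grid."""
    def __init__(self, vocab=4, d_model=32, img=8):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, img * img)  # regress to pixels
        self.img = img

    def forward(self, tokens):                # tokens: (batch, seq_len)
        h = self.encoder(self.embed(tokens))  # (batch, seq_len, d_model)
        pooled = h.mean(dim=1)                # pool over sequence positions
        return self.head(pooled).view(-1, self.img, self.img)

model = SeqToImage()
seq = torch.randint(0, 4, (2, 16))  # two random 16-base sequences
out = model(seq)
print(out.shape)                    # torch.Size([2, 8, 8])
```

Trained with a pixel-wise loss (e.g. MSE) against rendered structures, a model of this shape is the kind of pattern extractor the note is gesturing at.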
