Transformers

Conv-Basis: A New Paradigm for Efficient Attention Inference and Gradient Computation in Transformers (arXiv 2024)

Tensor Attention Training: Provably Efficient Learning of Higher-Order Transformers (arXiv 2024)

Toward Infinite-Long Prefix in Transformer (arXiv 2024)