Attention Mechanism

Conv-Basis: A New Paradigm for Efficient Attention Inference and Gradient Computation in Transformers (arXiv 2024)

Tensor Attention Training: Provably Efficient Learning of Higher-Order Transformers (arXiv 2024)

Toward Infinite-Long Prefix in Transformer (arXiv 2024)

Learning Adaptive Axis Attentions in Fine-Tuning: Beyond Fixed Sparse Attention Patterns (Findings of ACL 2022)