acerbilab/transformer-ar-buffer

Efficient Autoregressive Inference for Transformer Probabilistic Models

Repository for the paper “Efficient Autoregressive Inference for Transformer Probabilistic Models” (Conor et al., 2025).

Code coming soon — currently under preparation.


Abstract

Transformer-based models for amortized probabilistic inference, such as neural processes, prior-fitted networks, and tabular foundation models, excel at single-pass marginal prediction. However, many real-world applications, from signal interpolation to multi-column tabular predictions, require coherent joint distributions that capture dependencies between predictions. While purely autoregressive architectures efficiently generate such distributions, they sacrifice the flexible set-conditioning that makes these models powerful for meta-learning. Conversely, the standard approach to obtaining joint distributions from set-based models requires expensive re-encoding of the entire augmented conditioning set at each autoregressive step. We introduce a causal autoregressive buffer that preserves the advantages of both paradigms. Our approach decouples context encoding from updates to the conditioning set: the model processes the context once and caches it, while a dynamic buffer captures target dependencies. As targets are incorporated, they enter the buffer and attend to both the cached context and previously buffered targets. This enables efficient batched autoregressive generation and one-pass joint log-likelihood evaluation. A unified training strategy allows seamless integration of set-based and autoregressive modes at minimal additional cost. Across synthetic functions, EEG signals, cognitive models, and tabular data, our method matches the predictive accuracy of strong baselines while delivering up to 20x faster joint sampling. Our approach combines the efficiency of autoregressive generative models with the representational power of set-based conditioning, making joint prediction practical for transformer-based probabilistic models.
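
Since the code is not yet released, the following is only a minimal sketch of the buffer mechanism the abstract describes, not the authors' implementation: the context set is embedded once and cached, and each target then attends to the cached context plus all previously buffered targets, rather than triggering a re-encoding of the full conditioning set. Everything below (plain NumPy, single-head dot-product attention, and the names `attend`, `context_cache`, `d_model`) is an illustrative assumption.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    """Single-head scaled dot-product attention."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
d_model = 16

# 1) Encode the context set once and cache it (stand-in for a set-based encoder).
context_cache = rng.normal(size=(50, d_model))   # 50 context points, already embedded

# 2) Causal autoregressive buffer: each new target attends to the cached context
#    plus all previously buffered targets; the context is never re-encoded.
buffer = np.empty((0, d_model))
targets = rng.normal(size=(5, d_model))          # 5 target points, already embedded

outputs = []
for t in targets:
    q = t[None, :]                                      # current target as query
    kv = np.concatenate([context_cache, buffer], axis=0)
    outputs.append(attend(q, kv, kv))                   # reuse the cache, no re-encoding
    buffer = np.concatenate([buffer, q], axis=0)        # target enters the buffer

outputs = np.concatenate(outputs, axis=0)               # one representation per target
print(outputs.shape)                                    # (5, 16)
```

In the same spirit, stacking all targets and applying a causal mask within the target block would give the batched, one-pass joint log-likelihood evaluation mentioned in the abstract, while the sequential loop above corresponds to autoregressive sampling.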

