We propose EditAR, a unified autoregressive model for diverse conditional generation tasks, e.g., image editing, depth-to-image, edge-to-image, and segmentation-to-image.
Diffusion models have made significant advances in text-guided synthesis, and recent progress in controllable image generation and editing is largely driven by diffusion-based methods. Although diffusion models perform exceptionally well on specific tasks with tailored designs, establishing a single unified model remains challenging. In contrast, autoregressive models inherently operate on a unified tokenized representation, which simplifies building one foundational model for various tasks. In this work, we propose EditAR, a single unified autoregressive framework for a variety of conditional image generation tasks, e.g., image editing, depth-to-image, edge-to-image, and segmentation-to-image. The model takes both an image and a text instruction as input and predicts the edited image tokens in a vanilla next-token paradigm. To enhance text-to-image alignment, we further propose to distill knowledge from foundation models into the autoregressive modeling process. We evaluate EditAR across diverse tasks on established benchmarks, showing performance competitive with state-of-the-art task-specific methods.
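For intuition, a plausible form of the training objective combines the standard next-token cross-entropy with a feature-distillation term; the weight λ, the squared-L2 distance, and the symbols below are illustrative assumptions rather than the paper's exact formulation:

\[
\mathcal{L} \;=\; -\sum_{t}\log p_{\theta}\!\left(y_{t}\mid y_{<t},\, x,\, c\right) \;+\; \lambda\,\bigl\lVert f_{\mathrm{AR}} - f_{\mathrm{enc}}\bigr\rVert_{2}^{2},
\]

where y denotes the target image tokens, x the source image tokens, c the instruction embedding, and f_AR, f_enc the latent features of the autoregressive model and of the external feature encoder, respectively.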
An input image is mapped through a VQ-Encoder to obtain its token indices, and the text instruction is mapped to latent embeddings via a text encoder. Both the image token indices and the text embeddings are fed to the autoregressive transformer, which predicts the target token indices. To enhance text-to-image alignment, a distillation loss is introduced during training to minimize the difference between the latent features of the autoregressive model and those of a feature encoder. At inference, the output token sequence is decoded into a realistic image by the VQ-Decoder.
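To make this flow concrete, here is a minimal PyTorch sketch of one training step with toy stand-in modules; all module names, sizes, the causal-transformer layout, and the MSE distillation target are illustrative assumptions, not the released implementation.

# Minimal PyTorch sketch of the EditAR-style pipeline described above.
# Modules, dimensions, and losses are toy stand-ins for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, SEQ = 1024, 256, 64  # toy codebook size, model width, tokens per image

class ToyVQEncoder(nn.Module):
    """Stand-in VQ encoder: maps an image to discrete token indices."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Conv2d(3, DIM, kernel_size=8, stride=8)
        self.codebook = nn.Embedding(VOCAB, DIM)
    def forward(self, img):                                   # img: (B, 3, 64, 64)
        z = self.proj(img).flatten(2).transpose(1, 2)         # (B, SEQ, DIM)
        cb = self.codebook.weight.unsqueeze(0).expand(z.size(0), -1, -1)
        return torch.cdist(z, cb).argmin(-1)                  # nearest-code indices (B, SEQ)

class ToyARTransformer(nn.Module):
    """Stand-in autoregressive transformer over [text emb | source tokens | target tokens]."""
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)
    def forward(self, text_emb, src_idx, tgt_idx):
        x = torch.cat([text_emb, self.tok_emb(src_idx), self.tok_emb(tgt_idx)], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.blocks(x, mask=mask)                          # causal self-attention
        n = tgt_idx.size(1)
        return self.head(h[:, -n:]), h[:, -n:]                 # next-token logits + latent features

vq, ar = ToyVQEncoder(), ToyARTransformer()
src_img, tgt_img = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
text_emb = torch.randn(2, 8, DIM)                              # stand-in for text-encoder output
feat_teacher = torch.randn(2, SEQ, DIM)                        # stand-in for foundation-model features

src_idx, tgt_idx = vq(src_img), vq(tgt_img)
logits, feats = ar(text_emb, src_idx, tgt_idx[:, :-1])         # teacher forcing on target tokens
# Next-token cross-entropy on the target image tokens (first token skipped for simplicity).
ce = F.cross_entropy(logits.reshape(-1, VOCAB), tgt_idx[:, 1:].reshape(-1))
# Distillation: align the AR latent features with the external feature encoder's features.
distill = F.mse_loss(feats, feat_teacher[:, : feats.size(1)])
loss = ce + 0.1 * distill
loss.backward()
# At inference, generated token indices would be decoded back to pixels by the VQ-Decoder (omitted).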
@article{mu2025editAR,
  author  = {Mu, Jiteng and Vasconcelos, Nuno and Wang, Xiaolong},
  title   = {EditAR: Unified Conditional Generation with Autoregressive Models},
  journal = {arXiv preprint arXiv:2501.04699},
  year    = {2025}
}