CompSlider: Compositional Slider for Disentangled Multiple-Attribute Image Generation

Zixin Zhu1,2, Kevin Duarte2, Mamshad Nayeem Rizve2, Chengyuan Xu2, Ratheesh Kalarot2, Junsong Yuan1
1 University at Buffalo    2 Adobe Inc. (ASML)
{ zixinzhu, jsyuan }@buffalo.edu    { kduarte, mrizve, chengyuanx, kalarot }@adobe.com
📄 Paper (ICCV 2025) 🖼️ Poster

Demo Video

If the video does not load, open it on Google Drive.

Overview

CompSlider teaser (replace with assets/teaser_comp.png)
Figure 1. Teaser. (a) Slider-based generation: text defines the primary object, while sliders enable continuous control over specific attributes. (b) Examples of fine-grained attribute control. (c) Previous serial sliders vs. our compositional slider enabling simultaneous control with better disentanglement.

Motivation: Entanglement in Serial Sliders

Entanglement example (replace with assets/onetime_motivation_c.png)
In the previous method (Gandikota et al., 2023), entanglement causes the same smile value to produce different smile intensities depending on slider order: applying the smile slider before the age slider yields a young person with a closed smile, whereas the reverse order yields an open smile. Arrows indicate the sequential addition of control signals; the text and sliders below denote the corresponding inputs.

Abstract

In text-to-image (T2I) generation, achieving fine-grained control over attributes such as age or smile remains challenging, even with detailed text prompts. Slider-based methods offer a solution for precise control of image attributes. However, existing approaches typically train an individual adapter for each attribute separately, overlooking the entanglement among multiple attributes. As a result, interference occurs among different attributes, preventing precise control of multiple attributes together. To address this challenge, we aim to disentangle multiple attributes in slider-based generation to enable more reliable and independent attribute manipulation. Our approach, CompSlider, generates a conditional prior for the T2I foundation model to control multiple attributes simultaneously. Furthermore, we introduce novel disentanglement and structure losses to compose multiple attribute changes while maintaining structural consistency within the image. Since CompSlider operates in the latent space of the conditional prior and does not require retraining the foundation model, it reduces the computational burden for both training and inference. We evaluate our approach on a variety of image attributes and highlight its generality by extending it to video generation.

Method

Foundation model inference pipeline (replace with assets/method_foundation_c.png)
In the foundation model, image conditions extracted with CLIP control the style and structure of the generated images. In CompSlider, these image conditions are instead generated from slider values rather than extracted from source images.
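To make this concrete, here is a minimal PyTorch sketch of the inference idea: a small conditional-prior network maps a vector of slider values to a CLIP-sized image condition, which the frozen foundation model consumes in place of a CLIP embedding of a source image. The module name, layer sizes, and MLP architecture below are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class SliderConditionPrior(nn.Module):
    """Hypothetical prior: slider values -> image-condition embedding."""

    def __init__(self, num_sliders: int, cond_dim: int = 768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_sliders, 512),
            nn.SiLU(),
            nn.Linear(512, cond_dim),  # match the CLIP image-embedding width
        )

    def forward(self, slider_values: torch.Tensor) -> torch.Tensor:
        # slider_values: (batch, num_sliders), e.g. each entry in [-1, 1]
        return self.net(slider_values)

prior = SliderConditionPrior(num_sliders=3)   # e.g. age, smile, hair length
sliders = torch.tensor([[0.8, -0.2, 0.0]])    # one value per attribute
image_cond = prior(sliders)                    # (1, 768) condition vector
# image_cond would then replace the CLIP image embedding fed to the frozen
# T2I foundation model; the foundation model itself is never retrained.
```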

Training

Training process of CompSlider (replace with assets/prior_modelmethod_c.png)
Figure: The training process of our CompSlider.
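The sketch below illustrates the two training objectives named in the abstract: a disentanglement loss (editing one slider should not move the condition along other attributes' directions) and a structure loss (overall image structure should stay consistent). The concrete formulations, the per-attribute direction vectors, and the frozen-encoder features are assumptions made for exposition, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def disentanglement_loss(cond_base, cond_edit, attr_directions, edited_idx):
    """Penalize condition changes along non-edited attribute directions.

    cond_base, cond_edit: (B, D) conditions before/after editing one slider.
    attr_directions:      (A, D) one (assumed) latent direction per attribute.
    edited_idx:           index of the attribute that was actually changed.
    """
    delta = cond_edit - cond_base            # (B, D) change in the condition
    proj = delta @ attr_directions.t()       # (B, A) shift along each attribute
    mask = torch.ones(attr_directions.size(0), device=delta.device)
    mask[edited_idx] = 0.0                   # the edited attribute may move freely
    return (proj.pow(2) * mask).mean()

def structure_loss(feat_base, feat_edit):
    """Keep structural features (e.g., from a frozen encoder) consistent."""
    return F.mse_loss(feat_edit, feat_base)

# Toy usage with assumed shapes:
B, D, A = 4, 768, 3
base, edit = torch.randn(B, D), torch.randn(B, D)
dirs = F.normalize(torch.randn(A, D), dim=-1)
loss = disentanglement_loss(base, edit, dirs, edited_idx=0) + structure_loss(base, edit)
```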

Results

Result 1
Qualitative comparison of human-related and non-human sliders.
Result 2
Combinations of different sliders.
Result 3
Qualitative results of simultaneous multi-attribute manipulation using our CompSlider.

BibTeX

@inproceedings{ZhuICCV2025CompSlider,
  title     = {CompSlider: Compositional Slider for Disentangled Multiple-Attribute Image Generation},
  author    = {Zixin Zhu and Kevin Duarte and Mamshad Nayeem Rizve and Chengyuan Xu and Ratheesh Kalarot and Junsong Yuan},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}