GeoRemover: Removing Objects and Their Causal Visual Artifacts

Zixin Zhu1, Haoxiang Li2, Xuelu Feng1, He Wu1, Chunming Qiao1, Junsong Yuan1
1 University at Buffalo    2 Pixocial Technology
{ zixinzhu, xuelufen, qiao, jsyuan }@buffalo.edu    haoxiang.li@pixocial.com    heu199825@gmail.com
📄 Paper 💻 Code 🤗 Model ▶️ Demo 🗂️ Dataset

Overview

GeoRemover teaser visualization
Figure 1. Comparison of object removal training paradigms. (a) Strictly mask-aligned training edits only masked regions but leaves causal visual artifacts (shadow) unaddressed. (b) Loosely mask-aligned training allows broader context-aware corrections but lacks clear guidance, leading to confusion and uncontrollable edits. (c) Our method decouples geometry and appearance for object removal: we first edit the scene geometric representation (in the form of a depth map) under strictly mask-aligned supervision, then render a realistic image where both objects and causal visual artifacts (shadow) are cleanly removed.
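The decoupling in (c) can be illustrated with a toy numerical sketch. Everything here is invented for illustration (the flat depth map, the hand-written "renderer" that casts a shadow two columns to the right of any close object); the actual method edits real depth maps and renders with learned generative models. The point the sketch demonstrates is the causal structure: stage 1 touches only the strictly masked depth pixels, yet the shadow disappears in stage 2 because it is re-derived from the edited geometry.

```python
import numpy as np

H, W = 8, 12
depth = np.ones((H, W))           # flat background at depth 1.0
obj = (slice(2, 5), slice(3, 6))  # object occupies this block
depth[obj] = 0.5                  # object is closer to the camera
mask = np.zeros((H, W), bool)
mask[obj] = True                  # strictly mask-aligned: only these pixels may change

def render(d):
    """Toy renderer: any close surface casts a 'shadow' two columns to its right."""
    img = np.full_like(d, 0.9)        # bright ground
    closer = d < 0.9
    img[closer] = 0.3                 # dark object
    shadow = np.zeros_like(closer)
    shadow[:, 2:] = closer[:, :-2]    # offset copy = cast shadow
    img[shadow & ~closer] = 0.5
    return img

before = render(depth)

# Stage 1: geometry removal -- edit only the masked depth pixels
edited = depth.copy()
edited[mask] = np.median(depth[~mask])  # fill with the surrounding depth

# Stage 2: appearance rendering conditioned on the edited geometry;
# the shadow vanishes implicitly, though no shadow pixel was ever masked.
after = render(edited)
```

Note that a strictly mask-aligned *appearance* edit (paradigm (a)) could never fix `before[3, 6]`, the shadow pixel outside the mask; conditioning the rendering on geometry fixes it for free.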

Abstract

Towards intelligent image editing, object removal should eliminate both the target object and its causal visual artifacts, such as shadows and reflections. However, existing image appearance-based methods either follow strictly mask-aligned training and fail to remove these causal effects, which are not explicitly masked, or adopt loosely mask-aligned strategies that lack controllability and may unintentionally over-erase other objects. We identify that these limitations stem from ignoring the causal relationship between an object’s geometric presence and its visual effects. To address this, we propose a geometry-aware two-stage framework that decouples object removal into (1) geometry removal and (2) appearance rendering. In the first stage, we remove the object directly from the geometry (e.g., depth) using strictly mask-aligned supervision, enabling structure-aware editing with strong geometric constraints. In the second stage, we render a photorealistic RGB image conditioned on the updated geometry, where causal visual effects are handled implicitly as a consequence of the modified 3D geometry. To guide learning in the geometry removal stage, we introduce a preference-driven objective based on positive and negative sample pairs, encouraging the model to remove objects along with their causal visual artifacts while avoiding new structural insertions. Extensive experiments demonstrate that our method achieves state-of-the-art performance in removing both objects and their associated artifacts on two popular benchmarks.
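The preference-driven objective mentioned above can be sketched in generic pairwise form. This is a standard DPO-style formulation, not the paper's exact loss: `pos_logp` and `neg_logp` stand for the model's (log-)scores of a positive edit (object and artifacts removed) and a negative edit (e.g., new structure hallucinated), and `beta` is a temperature; all three names are illustrative assumptions.

```python
import numpy as np

def preference_loss(pos_logp, neg_logp, beta=1.0):
    """Pairwise preference objective (generic sketch, not the paper's exact loss):
    drive the score of the preferred (positive) edit above the dispreferred
    (negative) one via -log sigmoid(beta * margin)."""
    margin = beta * (pos_logp - neg_logp)
    return -np.log(1.0 / (1.0 + np.exp(-margin)))
```

The loss is log(2) when the two samples score equally and decays toward zero as the positive sample is preferred by a growing margin, so minimizing it pushes the model toward clean removals and away from structural insertions.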

Dataset: CausRem

Result example 1
Figure 2. Representative annotations in CausRem. Left: shadow examples; Right: reflection examples.

Visualization

Result example 2
Figure 3. Qualitative comparison with state-of-the-art methods on CausRem.

BibTeX

@misc{zhu2025georemoverremovingobjectscausal,
  title        = {GeoRemover: Removing Objects and Their Causal Visual Artifacts},
  author       = {Zixin Zhu and Haoxiang Li and Xuelu Feng and He Wu and Chunming Qiao and Junsong Yuan},
  year         = {2025},
  eprint       = {2509.18538},
  archivePrefix= {arXiv},
  primaryClass = {cs.CV},
  url          = {https://arxiv.org/abs/2509.18538}
}