Instruct2See: Learning to Remove Any Obstructions Across Distributions

ICML 2025

Junhang Li1,2,*, Yu Guo2,3,*, Chuhua Xian1,†, Shengfeng He2,†
1. School of Computer Science and Engineering, South China University of Technology
2. School of Computing and Information Systems, Singapore Management University
3. School of Navigation, Wuhan University of Technology
* Equal contribution
† Corresponding authors (Email: chhxian@scut.edu.cn, shengfenghe@smu.edu.sg)

Abstract

Images are often obstructed by various obstacles due to capture limitations, hindering the observation of objects of interest. Most existing methods address occlusions from specific elements such as fences or raindrops, but the sheer variety of real-world obstructions makes comprehensive data collection impractical. To overcome these challenges, we propose Instruct2See, a novel zero-shot framework capable of handling both seen and unseen obstacles. The core idea of our approach is to unify obstruction removal by treating it as a soft-hard mask restoration problem, where any obstruction can be represented by multi-modal prompts, such as visual semantics and textual instructions, processed through a cross-attention unit to enhance contextual understanding and improve mode control. Additionally, a tunable mask adapter enables dynamic soft masking, allowing inaccurate masks to be adjusted in real time. Extensive experiments on both in-distribution and out-of-distribution obstacles show that Instruct2See consistently achieves strong performance and generalization in obstruction removal, regardless of whether the obstacles were present during training.
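To make the cross-attention fusion of multi-modal prompts concrete, here is a minimal NumPy sketch of the general mechanism: flattened image features act as queries, and prompt embeddings (visual semantics and textual instructions) act as keys and values. All names, shapes, and the residual-fusion choice are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(image_feats, prompt_embeds):
    """Fuse multi-modal prompt embeddings into image features.

    image_feats:   (N, D) flattened spatial features (queries)
    prompt_embeds: (M, D) prompt tokens, e.g. text + visual (keys/values)
    """
    d_k = image_feats.shape[-1]
    scores = image_feats @ prompt_embeds.T / np.sqrt(d_k)  # (N, M)
    attn = softmax(scores, axis=-1)                        # rows sum to 1
    return image_feats + attn @ prompt_embeds              # residual fusion

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 8))  # 4x4 feature map, 8 channels, flattened
prm = rng.standard_normal((3, 8))   # e.g. one text token + two visual tokens
out = cross_attention(img, prm)
print(out.shape)  # (16, 8)
```

A full model would use learned query/key/value projections and multiple heads; this sketch keeps only the attention-weighted prompt injection that lets obstruction semantics condition the restoration.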

Overview

Flowchart of our Instruct2See. The framework accepts instructions (randomly sampled from a database of instructions generated by GPT-4 during training, or supplied by the user at inference) to flexibly activate soft masking and steer the obstruction removal model toward its optimal capability.
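The tunable soft masking described above can be illustrated with a toy sketch: a hard binary obstruction mask is relaxed into a soft mask whose border falloff is controlled by a single `softness` knob, so an inaccurate mask can be widened at inference time without retraining. The box-filter blur and the `softness` parameterization here are illustrative assumptions, not the paper's actual adapter.

```python
import numpy as np

def soft_mask(hard_mask, softness=1.0):
    """Relax a binary obstruction mask into a soft mask in [0, 1].

    Sketch only: averages the hard mask over a (2r+1)x(2r+1) window,
    where the radius r grows with `softness`. softness=0 keeps the
    hard mask unchanged.
    """
    r = max(int(round(softness)), 0)
    if r == 0:
        return hard_mask.astype(float)
    padded = np.pad(hard_mask.astype(float), r, mode="edge")
    H, W = hard_mask.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # Mean over the local window -> fractional border values.
            out[i, j] = padded[i:i + 2*r + 1, j:j + 2*r + 1].mean()
    return out

m = np.zeros((7, 7))
m[2:5, 2:5] = 1.0              # hard 3x3 obstruction region
s = soft_mask(m, softness=1.0)
print(s.min(), s.max())        # 0.0 1.0
```

Interior pixels stay fully masked (1.0), pixels far from the obstruction stay unmasked (0.0), and border pixels take fractional values, which is the behavior a dynamic soft mask needs when the detected mask boundary is unreliable.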


Experiment results

Results on Seen Obstructions

We compare the obstruction removal performance of various methods using detected masks.


Results on Unseen Obstructions

To further evaluate the zero-shot learning capability of our model, we conducted experiments on images containing unseen obstructions.


BibTeX