
EVLM: Self-Reflective Multimodal Reasoning for Cross-Dimensional Visual Editing

Authors :
Khalid, Umar
Iqbal, Hasan
Farooq, Azib
Rahnavard, Nazanin
Hua, Jing
Chen, Chen
Publication Year :
2024

Abstract

Editing complex visual content based on ambiguous instructions remains a challenging problem in vision-language modeling. While existing models can contextualize content, they often struggle to grasp the underlying intent within a reference image or scene, leading to misaligned edits. We introduce the Editing Vision-Language Model (EVLM), a system designed to interpret such instructions in conjunction with reference visuals, producing precise and context-aware editing prompts. Leveraging Chain-of-Thought (CoT) reasoning and a KL-Divergence Target Optimization (KTO) alignment technique, EVLM captures subjective editing preferences without requiring binary labels. Fine-tuned on a dataset of 30,000 CoT examples, with rationale paths rated by human evaluators, EVLM demonstrates substantial improvements in alignment with human intentions. Experiments across image, video, 3D, and 4D editing tasks show that EVLM generates coherent, high-quality instructions, supporting a scalable framework for complex vision-language applications.

Comment: Technical Report
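The abstract does not state the KTO objective itself. As a rough sketch only, assuming the standard KTO-style formulation from the preference-alignment literature (the paper's exact variant may differ), the loss over individually rated rationales can be written as follows; the symbols \(\pi_\theta\), \(\pi_{\mathrm{ref}}\), \(\beta\), \(\lambda_D\), \(\lambda_U\), and \(z_{\mathrm{ref}}\) are not defined in this record and are introduced here for illustration:

\[
\mathcal{L}_{\mathrm{KTO}}(\pi_\theta, \pi_{\mathrm{ref}}) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\!\left[\lambda_y - v(x, y)\right],
\]

where \(r_\theta(x,y) = \log \frac{\pi_\theta(y\mid x)}{\pi_{\mathrm{ref}}(y\mid x)}\), \(z_{\mathrm{ref}} = \mathrm{KL}\!\left(\pi_\theta(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\right)\), \(\lambda_y \in \{\lambda_D, \lambda_U\}\), and

\[
v(x,y) \;=\;
\begin{cases}
\lambda_D\,\sigma\!\big(\beta\,(r_\theta(x,y) - z_{\mathrm{ref}})\big) & \text{if } y \text{ is rated desirable},\\[4pt]
\lambda_U\,\sigma\!\big(\beta\,(z_{\mathrm{ref}} - r_\theta(x,y))\big) & \text{if } y \text{ is rated undesirable}.
\end{cases}
\]

Here \(\sigma\) is the logistic function and \(\lambda_D\), \(\lambda_U\) weight the two rating classes. Because each rated rationale contributes to the loss on its own, an objective of this form needs no paired (chosen, rejected) comparisons, which is consistent with the abstract's claim that preference alignment is achieved from human ratings rather than pairwise labels.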

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.10566
Document Type :
Working Paper