Qwen3-VL-Seg: Unlocking Open-World Referring Segmentation with Vision-Language Grounding
Abstract
arXiv:2605.07141v1 (Announce Type: cross)

Open-world referring segmentation requires grounding unconstrained language expressions to precise pixel-level regions. Existing multimodal large language models (MLLMs) exhibit strong open-world visual grounding, but their outputs remain limited to…