
Caption First, VQA Second: Knowledge Density, Not Task Format, Drives Multimodal Scaling

arXiv AI Archived Apr 17, 2026 ✓ Full text saved


Full text archived locally


    Computer Science > Computation and Language

    [Submitted on 17 Mar 2026]

    Caption First, VQA Second: Knowledge Density, Not Task Format, Drives Multimodal Scaling

    Hongjian Zou, Yue Ge, Qi Ding, Yixuan Liao, Xiaoxin Chen

    Multimodal large language models (MLLMs) have achieved rapid progress, yet their scaling behavior remains less clearly characterized and often less predictable than that of text-only LLMs. Increasing model size and task diversity often yields diminishing returns. In this work, we argue that the primary bottleneck in multimodal scaling is not task format, but knowledge density in training data. We first show that task-specific supervision such as Visual Question Answering (VQA) contributes little incremental semantic information beyond image captions: VQA signals can be reconstructed from captions with negligible performance loss. We then demonstrate that increasing knowledge density -- through structured caption enrichment and cross-modal knowledge injection -- leads to consistent performance improvements across multimodal and downstream benchmarks. Across controlled experiments, performance correlates more strongly with semantic coverage than with task diversity. These findings suggest that current MLLMs fail to scale primarily because training data lacks sufficient knowledge coverage. We advocate for knowledge-centric multimodal training as a principled foundation for scalable multimodal models.

    Comments: 23 pages, 4 figures, 10 tables. Preprint
    Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
    Cite as: arXiv:2604.13054 [cs.CL] (or arXiv:2604.13054v1 [cs.CL] for this version)
    DOI: https://doi.org/10.48550/arXiv.2604.13054
    Submission history: From Hongjian Zou, [v1] Tue, 17 Mar 2026 15:45:21 UTC (7,322 KB)
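    The abstract's central claim is that VQA-style supervision adds little semantic information beyond captions, because question-answer pairs can be reconstructed from caption text alone. The record gives no implementation details, so the sketch below is only an illustration of that general idea, not the authors' pipeline: the prompt template, the VQAPair record layout, and the generate stand-in are all assumptions introduced for the example.

    ```python
    # Hedged sketch: deriving VQA-style supervision from captions alone.
    # The prompt, data layout, and `generate` stand-in are assumptions,
    # not the paper's published method.

    from dataclasses import dataclass
    from typing import Callable, List


    @dataclass
    class VQAPair:
        image_id: str
        question: str
        answer: str


    PROMPT = (
        "Given the image caption below, write one question that the caption "
        "answers, then the answer.\nCaption: {caption}\nQuestion:"
    )


    def caption_to_vqa(
        image_id: str,
        caption: str,
        generate: Callable[[str], str],
    ) -> List[VQAPair]:
        """Turn one caption into synthetic VQA supervision.

        `generate` is any text-in/text-out language model call; it is passed
        in as a parameter so the sketch runs without external services.
        """
        raw = generate(PROMPT.format(caption=caption))
        # Expect "<question>? <answer>"; otherwise fall back to a
        # caption-recall question so no example is lost.
        if "?" in raw:
            question, answer = raw.split("?", 1)
            return [VQAPair(image_id, question.strip() + "?", answer.strip())]
        return [VQAPair(image_id, "What does the image show?", caption)]


    if __name__ == "__main__":
        # Trivial rule-based stand-in for a real LLM, just to make the file run.
        fake_llm = lambda prompt: "What color is the cat? black"
        pairs = caption_to_vqa("img_001", "A black cat sits on a red sofa.", fake_llm)
        for p in pairs:
            print(p)
    ```

    Read this way, the paper's "caption first" argument is a data-construction claim: if such a rewrite recovers the VQA signal with negligible loss, effort is better spent enriching captions (the knowledge-density lever) than multiplying task formats.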
    Article Info
    Source
    arXiv AI
    Category
    ◬ AI & Machine Learning
    Published
    Apr 17, 2026
    Archived
    Apr 17, 2026
    Full Text
    ✓ Saved locally
    Open Original ↗