Abstract
Vision-Language Models (VLMs), such as CLIP and SigLIP, have become the de facto standard for multimodal tasks, serving as essential
building blocks for recent Multimodal Large Language Models, including LLaVA and PaliGemma. However, current evaluations for VLMs
remain heavily anchored to ImageNet. In this paper, we question whether ImageNet’s concept coverage remains sufficiently challenging for
modern VLMs, and investigate the impact of adding novel and varied concept categories, i.e., semantically grouped, fine-grained synsets.
To this end, we introduce Concept-pedia, a novel, large-scale, semantically annotated multimodal resource covering more than 165,000 concepts.
Leveraging a language-agnostic, automatic annotation pipeline grounded in Wikipedia, Concept-pedia expands the range of visual concepts,
including diverse abstract categories. Building on Concept-pedia, we also present a manually curated Visual Concept Recognition evaluation
benchmark, Concept-10k, which spans thousands of concepts across a wide range of categories. Our experiments show that current models,
although excelling on ImageNet, struggle with Concept-10k. Not only do these findings highlight a persistent bias toward ImageNet-centric
concepts, but they also underscore the urgent need for more representative benchmarks. By offering a broader and semantically richer testbed,
Concept-10k aims to support the development of multimodal systems that better generalize to the complexities of real-world visual concepts.