Unified Lexical Representation for Interpretable Visual-Language Alignment

🎉 Accepted by NeurIPS 2024

Yifan Li¹, Yikai Wang¹, Yanwei Fu¹, Dongyu Ru², Zheng Zhang², Tong He²
¹Fudan University, ²Amazon Web Services

LexVLA can generate a lexical representation of the input image (the first word cloud), produce local lexical representations for selected image patches (the second word cloud, with the selected patches boxed in red), and select the patches most relevant to a given text query (the rightmost figure, for the caption 'horse'; the second-to-last figure shows the ground-truth mask).

Abstract

Visual-Language Alignment (VLA) has gained a lot of attention since CLIP's groundbreaking work. Although CLIP performs well, the typical direct latent feature alignment lacks clarity in its representation and similarity scores. On the other hand, lexical representation, a vector whose elements represent the similarity between the sample and words from the vocabulary, is naturally sparse and interpretable, providing exact matches for individual words. However, lexical representations are difficult to learn due to the lack of ground-truth supervision and the false-discovery issue, and thus require complex designs to train effectively. In this paper, we introduce LexVLA, a more interpretable VLA framework that learns a unified lexical representation for both modalities without complex design. We use DINOv2 as our visual model for its local-inclined features and Llama 2, a generative language model, to leverage its in-context lexical prediction ability. To avoid false discovery, we propose an overuse penalty that refrains the lexical representation from falsely and frequently activating meaningless words. We demonstrate that these two pre-trained uni-modal models can be well aligned by fine-tuning on a modest multi-modal dataset, avoiding intricate training configurations. On cross-modal retrieval benchmarks, LexVLA, trained on the CC-12M multi-modal dataset, outperforms baselines fine-tuned on larger datasets (e.g., YFCC15M) and those trained from scratch on even bigger datasets (e.g., 1.1B data, including CC-12M). We conduct extensive experiments to analyze LexVLA. Codes are available at https://github.com/Clementine24/LexVLA.
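To make the notion of a lexical representation concrete, below is a minimal, purely illustrative sketch (not the LexVLA code): a sample feature is projected onto a vocabulary, so each entry of the resulting vector scores how relevant one vocabulary word is to the sample, and only a few entries stay meaningfully non-zero. The toy vocabulary, feature dimension, and projection layer are assumptions for illustration.

```python
# Illustrative sketch of a lexical representation (toy setup, not LexVLA code).
import torch

vocab = ["horse", "grass", "sky", "rider", "fence"]   # toy vocabulary (hypothetical)
V = len(vocab)

feature = torch.randn(1, 512)            # hypothetical sample feature from an encoder
proj = torch.nn.Linear(512, V)           # projection onto vocabulary logits
lexical = torch.relu(proj(feature))      # non-negative lexical values; most stay near zero

topv, topi = lexical.topk(k=3, dim=-1)   # inspect the most activated words
print([(vocab[i], round(v.item(), 3)) for v, i in zip(topv[0], topi[0])])
```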

Method

We learn a unified lexical representation with distinct codebooks for text and visual modalities.
We train LexVLA with the standard contrastive objectives along with the proposed overuse penalty to encourage sparsity while preventing meaningless activation.
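Below is a minimal, hypothetical sketch of such an objective, not the official LexVLA implementation: a symmetric InfoNCE contrastive loss on the lexical vectors of a batch, plus an "overuse"-style penalty, here approximated as a penalty on batch-averaged word activations so that words activated across too many samples are discouraged. The exact form of the penalty in the paper may differ; `temperature` and `penalty_weight` are assumed hyperparameters.

```python
# Hypothetical sketch: contrastive alignment of lexical vectors + an
# overuse-style penalty (the paper's exact formulation may differ).
import torch
import torch.nn.functional as F

def lexvla_style_loss(img_lex, txt_lex, temperature=0.07, penalty_weight=1e-3):
    """img_lex, txt_lex: (B, V) non-negative lexical vectors for a batch."""
    img = F.normalize(img_lex, dim=-1)
    txt = F.normalize(txt_lex, dim=-1)
    logits = img @ txt.t() / temperature                  # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    contrastive = 0.5 * (F.cross_entropy(logits, targets) +
                         F.cross_entropy(logits.t(), targets))

    # Overuse-style penalty (assumption): average activation of each word over
    # the batch; frequently activated words contribute more to the penalty.
    overuse = img_lex.mean(dim=0).sum() + txt_lex.mean(dim=0).sum()
    return contrastive + penalty_weight * overuse

# Toy usage with random non-negative lexical vectors (B=4 samples, V=1000 words).
print(lexvla_style_loss(torch.rand(4, 1000), torch.rand(4, 1000)))
```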

Experiments

Performance under different sparsity

LexVLA is robust to the sparsity ratio, even at a very high ratio (98.27%, i.e., only 296 activated tokens).
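As a minimal sketch of how a sparsity ratio maps to a number of activated entries, the snippet below keeps only the top-k lexical values of a vector and zeros the rest. The vocabulary size is a hypothetical placeholder; 296 activated tokens correspond to the 98.27% setting mentioned above.

```python
# Sketch: top-k sparsification of a lexical vector (hypothetical vocabulary size).
import torch

def sparsify(lexical: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest entries of a (V,) lexical vector, zero the others."""
    topv, topi = lexical.topk(k)
    sparse = torch.zeros_like(lexical)
    sparse[topi] = topv
    return sparse

V = 17_000                        # hypothetical vocabulary size
sparse = sparsify(torch.rand(V), k=296)
ratio = 100 * (sparse == 0).float().mean().item()
print(f"sparsity: {ratio:.2f}%")  # only 296 of V entries remain activated
```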

Lexical visualization

Visualization of the image lexical representation obtained by LexVLA. A larger word indicates a larger lexical value. The first row corresponds to the complete image, and the second row to local patches (boxed in red). LexVLA learns a well-aligned lexical representation for both images and patches without local supervision.
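Such a word cloud can be rendered by mapping each activated entry of the lexical vector back to its vocabulary word and weighting the word by its lexical value. The sketch below uses a toy vocabulary and toy values and relies on the third-party `wordcloud` package; it illustrates the visualization idea rather than the exact rendering pipeline of the paper.

```python
# Sketch: render a word cloud from a lexical vector (toy vocabulary and values).
import torch
from wordcloud import WordCloud   # pip install wordcloud

vocab = ["horse", "grass", "field", "rider", "fence", "sky"]     # toy vocabulary
lexical = torch.tensor([0.9, 0.6, 0.4, 0.3, 0.1, 0.05])          # toy lexical values

# Keep only activated (non-zero) entries and build word -> weight frequencies.
freqs = {vocab[i]: v.item() for i, v in enumerate(lexical) if v > 0}
WordCloud(width=400, height=300).generate_from_frequencies(freqs).to_file("lex_cloud.png")
```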

PatchDis visualization

PatchDis visualization. The same color indicates the same category. LexVLA correctly predicts the corresponding regions, even for small-scale objects such as the bottle in the first image.
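Below is a minimal sketch in the spirit of this patch-level visualization; the exact PatchDis protocol is an assumption here. Each patch is assigned to the category whose text lexical vector is most similar to the patch's lexical vector, giving a per-patch category map like the one shown above.

```python
# Sketch: assign each patch to its nearest category in lexical space
# (an assumed protocol in the spirit of PatchDis, not the official code).
import torch
import torch.nn.functional as F

def classify_patches(patch_lex: torch.Tensor, class_lex: torch.Tensor) -> torch.Tensor:
    """patch_lex: (P, V) lexical vectors of P patches.
    class_lex: (C, V) lexical vectors of C category names.
    Returns a (P,) tensor of predicted category indices."""
    sims = F.normalize(patch_lex, dim=-1) @ F.normalize(class_lex, dim=-1).t()  # (P, C)
    return sims.argmax(dim=-1)

# Toy usage: 196 patches (14x14 grid), 3 categories, vocabulary size 1000.
pred = classify_patches(torch.rand(196, 1000), torch.rand(3, 1000)).reshape(14, 14)
print(pred)  # per-patch category map
```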

BibTeX

@article{li2024unified,
  title={Unified Lexical Representation for Interpretable Visual-Language Alignment},
  author={Li, Yifan and Wang, Yikai and Fu, Yanwei and Ru, Dongyu and Zhang, Zheng and He, Tong},
  journal={Advances in Neural Information Processing Systems},
  year={2024}
}