LLM2CLIP Logo

LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation

Weiquan Huang¹*, Aoqi Wu¹*, Yifan Yang²†, Xufang Luo², Yuqing Yang², Liang Hu¹, Qi Dai², Xiyang Dai², Dongdong Chen², Chong Luo², Lili Qiu²

¹Tongji University, ²Microsoft Corporation
*Equal contribution; work done during an internship at Microsoft Research Asia. †Corresponding author: yifanyang@microsoft.com

News

  • [2024-11-18] Our caption-contrastive fine-tuned Llama3-8B-CC has been released on HuggingFace; we will try to release more versions.
  • [2024-11-08] We are training a scaled-up version with ten times the dataset. Upcoming updates include EVA ViT-E, InternVL-300M, SigLIP-SO-400M, and more VLLM results. Stay tuned for the most powerful CLIP models. Thanks for your star!
  • [2024-11-06] OpenAI's CLIP and EVA02's ViT models are now available on HuggingFace.
  • [2024-11-01] Our paper was accepted at the NeurIPS 2024 SSL Workshop!

Abstract

CLIP is one of the most important multimodal foundational models today, aligning visual and textual signals into a shared feature space using a simple contrastive learning loss on large-scale image-text pairs. What powers CLIP’s capabilities? The rich supervision signals provided by natural language — the carrier of human knowledge — shape a powerful cross-modal representation space. As a result, CLIP supports a variety of tasks, including zero-shot classification, detection, segmentation, and cross-modal retrieval, significantly influencing the entire multimodal domain.

However, with the rapid advancements in large language models (LLMs) like GPT-4 and LLaMA, the boundaries of language comprehension and generation are continually being pushed. This raises an intriguing question: can the capabilities of LLMs be harnessed to further improve multimodal representation learning? The potential benefits of incorporating LLMs into CLIP are clear. LLMs’ strong textual understanding can fundamentally improve CLIP’s ability to handle image captions, drastically enhancing its ability to process long and complex texts — a well-known limitation of vanilla CLIP. Moreover, LLMs are trained on a vast corpus of text, possessing open-world knowledge. This allows them to expand on caption information during training, increasing the efficiency of the learning process.

In this paper, we propose LLM2CLIP, a novel approach that embraces the power of LLMs to unlock CLIP’s potential. By fine-tuning the LLM in the caption space with contrastive learning, we extract its textual capabilities into its output embeddings, significantly improving the textual discriminability of the output layer. We then design an efficient training process in which the fine-tuned LLM acts as a powerful teacher for CLIP’s visual encoder. Thanks to the LLM’s presence, we can now incorporate longer and more complex captions without being restricted by the context window and capability limitations of the vanilla CLIP text encoder. Our experiments demonstrate that this approach brings substantial improvements in cross-modal tasks.
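To make the first stage concrete, below is a minimal sketch of caption-contrastive fine-tuning, assuming two captions of the same image form a positive pair, mean-pooled LLM hidden states serve as caption embeddings, and a symmetric InfoNCE loss is used. The base checkpoint, pooling choice, and temperature are illustrative assumptions rather than the exact recipe; in practice a parameter-efficient method such as LoRA would likely be applied to the LLM.

```python
# Minimal sketch of caption-contrastive (CC) fine-tuning of the LLM text encoder.
# Assumptions: mean-pooled last hidden states as caption embeddings, a symmetric
# InfoNCE loss, and an illustrative base checkpoint/temperature (not the paper's exact recipe).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"   # assumed base model
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token                         # Llama tokenizers ship without a pad token
llm = AutoModel.from_pretrained(model_name, torch_dtype=torch.bfloat16)

def embed(captions):
    """Mean-pool the last hidden states into one embedding per caption."""
    batch = tok(captions, padding=True, truncation=True, return_tensors="pt")
    hidden = llm(**batch).last_hidden_state           # (B, T, D)
    mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return F.normalize(pooled, dim=-1)

def cc_loss(captions_a, captions_b, temperature=0.05):
    """Two captions of the same image are positives; other captions in the batch are negatives."""
    za, zb = embed(captions_a), embed(captions_b)
    logits = (za @ zb.T).float() / temperature        # (B, B) similarity matrix
    labels = torch.arange(len(captions_a))
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
```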

LLM2CLIP Overview

LLM2CLIP Overview: After applying caption contrastive fine-tuning to the LLM, the increased textual discriminability enables more effective CLIP training. We leverage the open-world knowledge and general capabilities of the LLM to better process dense captions, addressing the previous limitations of the pretrained CLIP visual encoder and providing richer, higher-dimensional textual supervision.
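A rough PyTorch sketch of this CLIP-training stage is given below: the fine-tuned LLM stays frozen and only supplies caption features (extracted offline), a small learnable adapter projects them into the joint embedding space, and the CLIP visual encoder is updated with the standard symmetric contrastive loss. The adapter architecture, feature dimensions, and logit-scale initialization are assumptions for illustration, not the exact configuration used in the paper.

```python
# Sketch of the LLM2CLIP training stage described above: the frozen, fine-tuned LLM acts
# only as a caption feature extractor, while a small adapter and the CLIP vision encoder
# are trained with a symmetric contrastive loss. Dimensions and init values are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LLM2CLIPTrainer(nn.Module):
    def __init__(self, vision_encoder, vision_dim=1024, llm_dim=4096, proj_dim=1280):
        super().__init__()
        self.vision_encoder = vision_encoder                  # pretrained CLIP/EVA02 ViT, trainable
        self.vision_proj = nn.Linear(vision_dim, proj_dim)    # maps image features to the joint space
        self.text_adapter = nn.Sequential(                    # learnable adapter on frozen LLM features
            nn.Linear(llm_dim, proj_dim),
            nn.GELU(),
            nn.Linear(proj_dim, proj_dim),
        )
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~ ln(1/0.07), CLIP's usual init

    def forward(self, images, caption_feats):
        # caption_feats are precomputed offline with the frozen fine-tuned LLM,
        # which is what keeps this stage cheap to train.
        img = F.normalize(self.vision_proj(self.vision_encoder(images)), dim=-1)
        txt = F.normalize(self.text_adapter(caption_feats), dim=-1)
        logits = self.logit_scale.exp() * img @ txt.T
        labels = torch.arange(images.size(0), device=images.device)
        # Symmetric image-to-text and text-to-image contrastive loss.
        return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
```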

Radar Chart

LLM2CLIP: Demonstrating excellence across multiple benchmarks.

Experimental Results

To validate our hypothesis, we designed a caption-to-caption retrieval experiment, as shown in Table 1 and Figure 2. Each image in the MS-COCO dataset has five human-annotated captions. We selected the first two captions as positive samples and performed retrieval across the entire validation set. Using caption retrieval accuracy (CRA), we evaluated each text model’s ability to differentiate between captions, helping us determine which language model is better suited for CLIP. We found that Llama-3 8B achieved only 18.4% top-1 accuracy, while the standard CLIP-ViT-L reached 66.0% top-1 accuracy. As illustrated in Figure 2, the top-1 caption retrieved by the original Llama-3 can be entirely unrelated to the query caption, clearly obstructing effective CLIP learning. Therefore, directly using an LLM to guide CLIP’s visual encoder training is highly constrained.
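A minimal version of this caption-to-caption retrieval check is sketched below: every caption queries all others, and retrieval counts as correct when the top-1 result (excluding the query itself) is a caption of the same image. The `encode` callable stands in for whichever text encoder is being evaluated and is an assumption of this sketch, as is the exact data layout.

```python
# Sketch of the caption retrieval accuracy (CRA) evaluation described above.
# `encode` stands for any text encoder under test (CLIP text tower, raw LLM, CC-tuned LLM)
# and is assumed to map a list of strings to an (N, D) feature tensor.
import torch
import torch.nn.functional as F

@torch.no_grad()
def caption_retrieval_accuracy(encode, captions, image_ids):
    """captions: list[str]; image_ids[i] is the id of the image that caption i describes."""
    feats = F.normalize(encode(captions), dim=-1)       # (N, D) caption embeddings
    sims = feats @ feats.T                              # cosine similarity between all caption pairs
    sims.fill_diagonal_(float("-inf"))                  # never retrieve the query itself
    nearest = sims.argmax(dim=1)                        # index of the top-1 retrieved caption
    ids = torch.as_tensor(image_ids)
    return (ids[nearest] == ids).float().mean().item()  # correct if it belongs to the same image
```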

Comparison of top-1 Caption Retrieval Accuracy (CRA) for various language models on the MS-COCO validation set.

| Language Model | CRA |
|----------------|-------|
| CLIP-L/14      | 25.2  |
| EVA02-L/14     | 27.11 |
| Llama3-8B      | 5.2   |
| Llama3.2-1B    | 5.6   |
| Llama3-8B-CC   | 29.5  |
| Llama3.2-1B-CC | 29.4  |

COCO Score

LLM2CLIP can make an already-SOTA CLIP even more SOTA.

Comparison of various text encoders. With only a few epochs of training alongside the LLM, we can significantly enhance the performance of existing pretrained SOTA CLIP models. We freeze the LLM and use it only for feature extraction, which keeps the training cost very low.

| Methods | Flickr30k I2T | Flickr30k T2I | COCO I2T | COCO T2I | ShareGPT4v I2T | ShareGPT4v T2I | Urban-1k I2T | Urban-1k T2I | DOCCI I2T | DOCCI T2I | Average | CRA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EVA02 ViT-L/14 | 89.8 | 73.3 | 63.8 | 63.8 | 89.3 | 91.9 | 68.5 | 73.3 | 75.0 | 73.4 | 76.2 | 69.8 |
| + Jina Bert | 87.9 | 77.9 | 60.9 | 50.3 | 95.3 | 95.1 | 79.4 | 83.8 | 73.8 | 77.9 | 78.2 | 74.2 |
| + Llama3-8B | 87.1 | 75.3 | 56.4 | 41.6 | 89.3 | 91.4 | 58.6 | 60.9 | 51.7 | 50.6 | 66.3 | 18.4 |
| + Llama3-8B-TC | 92.7 | 82.1 | 68.1 | 54.6 | 97.7 | 98.2 | 88.9 | 93.8 | 85.0 | 87.8 | 84.8 | 71.3 |
| + Llama3-8B-CC | 92.0 | 82.8 | 68.5 | 54.8 | 98.6 | 99.0 | 88.1 | 94.0 | 88.2 | 90.4 | 85.6 | 73.0 |
| + Llama3.2-1B-CC | 91.6 | 81.3 | 65.8 | 52.5 | 98.3 | 98.2 | 84.5 | 91.9 | 83.4 | 86.4 | 83.4 | 72.8 |
| + Mistral-Nemo-12B-CC | 93.5 | 83.7 | 68.5 | 54.7 | 98.6 | 98.9 | 90.4 | 94.3 | 88.0 | 89.7 | 86.0 | 73.3 |

Retrieval Performance across Flickr30K-CN and COCO-CN. Even with alignment done solely on English data, our network enables Chinese cross-modal retrieval to go from unusable to SOTA, surpassing models like Wukong that are trained on Chinese datasets.

| Methods | Flickr-CN I2T@1 | Flickr-CN I2T@5 | Flickr-CN I2T@10 | Flickr-CN T2I@1 | Flickr-CN T2I@5 | Flickr-CN T2I@10 | COCO-CN I2T@1 | COCO-CN I2T@5 | COCO-CN I2T@10 | COCO-CN T2I@1 | COCO-CN T2I@5 | COCO-CN T2I@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Wukong | 76.1 | 94.8 | 97.5 | 51.7 | 78.9 | 86.3 | 53.4 | 80.2 | 90.1 | 55.2 | 81.0 | 90.6 |
| CN-CLIP | 80.2 | 96.6 | 98.2 | 68.0 | 90.7 | 95.4 | 63.4 | 84.2 | 92.9 | 64.0 | 89.2 | 94.4 |
| JinaCLIP | 3.30 | 9.90 | 15.1 | 0.7 | 3.5 | 6.0 | 2.9 | 8.9 | 13.7 | 1.0 | 4.9 | 8.2 |
| EVA02 | 4.40 | 11.8 | 16.7 | 0.94 | 2.9 | 4.8 | 2.7 | 9.8 | 15.2 | 1.0 | 3.7 | 7.3 |
| + LLM2CLIP | 86.9 | 98.1 | 99.3 | 75.1 | 92.9 | 96.0 | 69.1 | 92.5 | 97.2 | 70.0 | 92.6 | 96.7 |

LLM2CLIP can also make models like Llava more powerful, delivering more comprehensive performance:

Performance of Llava 1.5. We explored whether LLM2CLIP could enhance complex image-understanding tasks by modifying Llava's visual encoder. LLM2CLIP improves Llava on 87.5% of the benchmarks.

| Model | VQAv2 | GQA | VizWiz | SQA-IMG | TextVQA | POPE Random | POPE Adv. | POPE Popular | MME | MMBench | MMBench-CN | LlavaBench | MMVet | Seed All | Seed IMG | Seed Video |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llava (Paper) | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 87.3 | 86.1 | 84.2 | 1510.7 | 64.3 | 58.3 | 65.4 | 31.1 | 58.6 | 66.1 | 37.3 |
| Llava (Rep.) | 79.04 | 62.86 | 50.57 | 67.97 | 57.48 | 87.7 | 84.85 | 86.3 | 1476.69 | 66.66 | 60.39 | 58.0 | 34.3 | 59.86 | 66.95 | 39.71 |
| + LLM2CLIP | 79.80 | 63.15 | 52.37 | 69.92 | 58.35 | 88.55 | 82.76 | 87.75 | 1505.82 | 68.29 | 60.40 | 62.7 | 34.8 | 60.96 | 68.80 | 38.96 |

BibTeX

@misc{huang2024llm2clippowerfullanguagemodel,
  title={LLM2CLIP: Powerful Language Model Unlock Richer Visual Representation},
  author={Weiquan Huang and Aoqi Wu and Yifan Yang and Xufang Luo and Yuqing Yang and Liang Hu and Qi Dai and Xiyang Dai and Dongdong Chen and Chong Luo and Lili Qiu},
  year={2024},
  eprint={2411.04997},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2411.04997}
}