
LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation

Weiquan Huang1*, Aoqi Wu1*, Yifan Yang2†‡, Xufang Luo2, Yuqing Yang2, Usman Naseem3, Chunyu Wang2, Qi Dai2, Xiyang Dai2, Dongdong Chen2, Chong Luo2, Lili Qiu2, Liang Hu1†

1Tongji University, 2Microsoft Corporation, 3Macquarie University
*Equal Contribution (work done during internship at Microsoft Research Asia)
†Corresponding authors: yifanyang@microsoft.com, lianghu@tongji.edu.cn
‡Project Lead

News

  • [2026-01-23] 🎉 LLM2CLIP received the AAAI 2026 Outstanding Paper Award! Our work was recognized for advancing multimodal representation learning by leveraging large language models as powerful textual teachers. (AAAI 2026 Conference Paper Awards and Recognition)
  • [2025-03-25] 🔥 SigLIP2 models updated with LLM2CLIP training. The new checkpoints bring substantial improvements in short- and long-text image retrieval, as well as multilingual text–image retrieval.
  • [2024-11-18] Our Caption-Contrastive fine-tuned Llama3-8B-CC is now released on HuggingFace, with more versions coming soon.
  • [2024-11-08] We are training a scaled-up version with ten times the dataset. Updates will include EVA ViT-E, InternVL-300M, SigLIP-SO-400M, and more VLLM results. Stay tuned for the most powerful CLIP models. Thanks for your star!
  • [2024-11-06] OpenAI's CLIP and EVA02's ViT models are now available on HuggingFace.
  • [2024-11-01] Our paper was accepted at the NeurIPS 2024 SSL Workshop!

Abstract

CLIP is one of the most important multimodal foundational models today, aligning visual and textual signals into a shared feature space using a simple contrastive learning loss on large-scale image-text pairs. What powers CLIP’s capabilities? The rich supervision signals provided by natural language — the carrier of human knowledge — shape a powerful cross-modal representation space. As a result, CLIP supports a variety of tasks, including zero-shot classification, detection, segmentation, and cross-modal retrieval, significantly influencing the entire multimodal domain.

However, with the rapid advancements in large language models (LLMs) like GPT-4 and LLaMA, the boundaries of language comprehension and generation are continually being pushed. This raises an intriguing question: can the capabilities of LLMs be harnessed to further improve multimodal representation learning? The potential benefits of incorporating LLMs into CLIP are clear. LLMs’ strong textual understanding can fundamentally improve CLIP’s ability to handle image captions, drastically enhancing its ability to process long and complex texts — a well-known limitation of vanilla CLIP. Moreover, LLMs are trained on a vast corpus of text, possessing open-world knowledge. This allows them to expand on caption information during training, increasing the efficiency of the learning process.

In this paper, we propose LLM2CLIP, a novel approach that embraces the power of LLMs to unlock CLIP’s potential. By fine-tuning the LLM in the caption space with contrastive learning, we extract its textual capabilities into the output embeddings, significantly improving the output layer’s textual discriminability. We then design an efficient training process where the fine-tuned LLM acts as a powerful teacher for CLIP’s visual encoder. Thanks to the LLM’s presence, we can now incorporate longer and more complex captions without being restricted by the context window and capability limitations of vanilla CLIP’s text encoder. Our experiments demonstrate that this approach brings substantial improvements across cross-modal tasks.
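For readers who want a concrete picture of the first step, the snippet below is a minimal PyTorch sketch of caption-contrastive fine-tuning: two captions of the same image form a positive pair, the LLM’s token states are mean-pooled into sentence embeddings, and a symmetric InfoNCE loss sharpens their discriminability. The helper names, the pooling choice, and the temperature are illustrative assumptions rather than the paper’s exact recipe.

```python
# Illustrative sketch of caption-contrastive (CC) fine-tuning (not the paper's exact recipe):
# two captions of the same image are treated as a positive pair, the LLM's last hidden
# states are mean-pooled into sentence embeddings, and a symmetric InfoNCE loss pulls
# the embeddings of matching captions together.
import torch
import torch.nn.functional as F

def mean_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average token states, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()
    return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-6)

def cc_loss(llm, tokenizer, captions_a, captions_b, temperature=0.05, device="cuda"):
    """Symmetric InfoNCE where captions_a[i] and captions_b[i] describe the same image."""
    views = []
    for caps in (captions_a, captions_b):
        tok = tokenizer(caps, padding=True, truncation=True, return_tensors="pt").to(device)
        out = llm(**tok, output_hidden_states=True)
        views.append(F.normalize(mean_pool(out.hidden_states[-1], tok.attention_mask), dim=-1))
    za, zb = views
    logits = za @ zb.t() / temperature                     # (B, B) caption-caption similarities
    targets = torch.arange(za.size(0), device=za.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```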

LLM2CLIP Overview

LLM2CLIP Overview: After applying caption contrastive fine-tuning to the LLM, the increased textual discriminability enables more effective CLIP training. We leverage the open-world knowledge and general capabilities of the LLM to better process dense captions, addressing the previous limitations of the pretrained CLIP visual encoder and providing richer, higher-dimensional textual supervision.
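The second stage can be pictured with a similarly minimal sketch, assuming the CC-tuned LLM is kept frozen as a text teacher while a small adapter over its caption embeddings and the CLIP vision encoder are updated with the standard symmetric contrastive loss; `TextAdapter`, `frozen_llm_embed`, and the projector shape below are placeholders, not the repository’s actual modules.

```python
# Minimal PyTorch-style sketch of the second stage (illustrative, not the repo's code):
# the CC-tuned LLM is frozen and serves as the text teacher, while a small adapter on
# top of its caption embeddings and the CLIP vision encoder are trained with the usual
# symmetric contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextAdapter(nn.Module):
    """Maps frozen LLM caption embeddings into the joint image-text space (placeholder)."""
    def __init__(self, llm_dim: int, clip_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(llm_dim, clip_dim),
            nn.GELU(),
            nn.Linear(clip_dim, clip_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(x), dim=-1)

def clip_style_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, logit_scale: float) -> torch.Tensor:
    # Symmetric InfoNCE over the in-batch image-text similarity matrix.
    logits = logit_scale * image_emb @ text_emb.t()
    targets = torch.arange(image_emb.size(0), device=image_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def training_step(vision_encoder, text_adapter, frozen_llm_embed, images, captions, logit_scale=100.0):
    with torch.no_grad():                       # the CC-tuned LLM stays frozen
        llm_emb = frozen_llm_embed(captions)    # (B, llm_dim) caption embeddings
    text_emb = text_adapter(llm_emb)            # project into the shared space
    image_emb = F.normalize(vision_encoder(images), dim=-1)
    return clip_style_loss(image_emb, text_emb, logit_scale)
```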

Radar chart: LLM2CLIP demonstrates strong performance across multiple benchmarks.

Experimental Results

To validate our hypothesis, we designed a caption-to-caption retrieval experiment (CRA). Each image in MS-COCO has five captions. We treat two captions of the same image as positives and retrieve across the whole validation split. CRA measures how well a text encoder can distinguish fine-grained caption semantics—an essential prerequisite for serving as a reliable teacher. We found that vanilla LLMs can be weak at this task, while our Caption-Contrastive (CC) fine-tuning substantially improves CRA, enabling effective supervision for CLIP-style training.
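Concretely, CRA can be computed in a few lines once caption embeddings are extracted; the sketch below (the function name and interface are ours, for illustration) scores a query as correct when its nearest neighbour, excluding itself, comes from the same image.

```python
# Hypothetical implementation of the CRA metric (function name and interface are ours):
# given one embedding per caption and the id of the image each caption belongs to, a query
# counts as correct if its nearest neighbour (excluding itself) comes from the same image.
import torch
import torch.nn.functional as F

def caption_retrieval_accuracy(embeddings: torch.Tensor, image_ids) -> float:
    emb = F.normalize(embeddings, dim=-1)      # (N, D) caption embeddings
    sim = emb @ emb.t()                        # cosine similarity between all caption pairs
    sim.fill_diagonal_(float("-inf"))          # a caption must not retrieve itself
    nearest = sim.argmax(dim=1)                # index of the top-1 retrieved caption
    ids = torch.as_tensor(image_ids)
    return (ids[nearest] == ids).float().mean().item()
```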

Top-1 Caption Retrieval Accuracy (CRA) on MS-COCO val.
| Text Encoder | CRA |
|---|---|
| CLIP-L/14 | 25.2 |
| EVA02-L/14 | 27.11 |
| Llama3-8B | 5.2 |
| Llama3.2-1B | 5.6 |
| Llama3-8B-CC | 29.5 |
| Llama3.2-1B-CC | 29.4 |

LLM2CLIP pushes state-of-the-art CLIP models even further.

Text-encoder ablation on image-text retrieval. Community LLM-derived text encoders can be strong, while our CC-tuned LLM encoders (suffix “-CC”) provide consistent gains and strong averages across diverse benchmarks.
| Method | Flickr (I2T / T2I) | COCO (I2T / T2I) | ShareGPT4V (I2T / T2I) | Urban-1k (I2T / T2I) | DOCCI (I2T / T2I) | Avg (I2T / T2I) |
|---|---|---|---|---|---|---|
| CLIP | 89.6 / 77.8 | 59.4 / 48.6 | 88.1 / 87.7 | 68.0 / 74.8 | 67.0 / 71.3 | 74.4 / 72.0 |
| Directly Finetune (50%) | 89.3 / 77.8 | 59.3 / 48.5 | 88.3 / 88.2 | 68.5 / 76.0 | 67.2 / 71.2 | 74.5 / 72.3 |
| bge-en-icl | 89.1 / 78.5 | 58.8 / 49.7 | 95.0 / 95.7 | 77.9 / 87.7 | 73.5 / 79.4 | 78.9 / 78.2 |
| LLM2Vec-Llama-3-8B | 91.1 / 81.4 | 61.5 / 51.9 | 94.5 / 96.4 | 82.1 / 88.6 | 77.7 / 82.6 | 81.4 / 80.2 |
| NV-Embed-v2 | 90.4 / 80.0 | 60.5 / 51.6 | 94.5 / 95.7 | 83.3 / 90.0 | 78.3 / 82.1 | 81.4 / 79.9 |
| bge-m3-XLM-R | 80.7 / 70.3 | 51.4 / 42.0 | 84.5 / 86.5 | 56.8 / 63.4 | 51.7 / 55.9 | 65.0 / 63.6 |
| jina-v3-XLM-R | 84.4 / 73.7 | 56.3 / 45.6 | 90.0 / 90.9 | 70.8 / 74.1 | 66.7 / 70.6 | 73.6 / 71.0 |
| e5 (XLM-R) | 86.4 / 75.3 | 56.4 / 46.2 | 88.4 / 88.5 | 71.1 / 77.3 | 67.5 / 71.2 | 74.0 / 71.7 |
| VLM2VEC | 91.6 / 79.8 | 61.3 / 51.5 | 93.9 / 91.0 | 90.9 / 92.0 | 80.5 / 86.0 | 83.6 / 80.1 |
| VLM2VEC (finetune) | 90.2 / 79.3 | 60.0 / 50.1 | 89.8 / 91.4 | 76.9 / 85.8 | 74.1 / 78.7 | 78.2 / 77.1 |
| Qwen2.5-0.5B-CC | 86.2 / 74.9 | 56.0 / 45.1 | 92.6 / 93.2 | 73.7 / 79.0 | 69.5 / 73.0 | 75.6 / 73.0 |
| Llama-3.2-1B-CC | 88.9 / 78.8 | 59.8 / 49.1 | 96.3 / 95.6 | 80.1 / 85.1 | 77.0 / 80.8 | 80.4 / 77.9 |
| Llama-3-8B-CC | 90.4 / 80.7 | 62.7 / 51.9 | 96.5 / 96.2 | 84.2 / 89.5 | 83.3 / 86.4 | 83.4 / 80.9 |
| DeepSeek-R1-Distill-Llama-8B-CC | 91.7 / 80.9 | 62.2 / 51.9 | 96.7 / 96.3 | 84.3 / 88.3 | 82.5 / 85.2 | 83.5 / 80.5 |
| Llama-3.1-8B-CC | 92.2 / 81.5 | 63.5 / 52.3 | 97.1 / 96.2 | 86.5 / 89.3 | 84.7 / 85.9 | 84.8 / 81.0 |

Note: “-CC” indicates encoders that underwent our Caption-Contrastive fine-tuning on top of the original LLM.

Multi-lingual Cross-Modal Retrieval

Multi-lingual retrieval results. LLM2CLIP significantly improves cross-lingual image-text retrieval (Flickr-CN, COCO-CN, XM3600) with strong gains on both I2T and T2I.
| Models | Flickr-CN (I2T / T2I) | COCO-CN (I2T / T2I) | XM3600 (I2T / T2I) |
|---|---|---|---|
| CN-CLIP | 80.2 / 68.0 | 63.4 / 64.0 | - / - |
| EVA-L-224 | 4.4 / 0.9 | 2.6 / 1.0 | 14.0 / 8.0 |
| + LLM2CLIP | 90.6 / 75.6 | 72.0 / 70.1 | 68.3 / 56.0 |
| SigLIP2 | 79.2 / 56.9 | 55.3 / 51.7 | 59.7 / 48.2 |
| + LLM2CLIP | 90.0 / 76.1 | 70.8 / 70.2 | 69.1 / 56.3 |

Vision Transfer: Stronger Visual Representations

Beyond retrieval, LLM2CLIP also improves the visual encoder itself. We observe consistent gains on zero-shot segmentation, open-vocabulary detection, and even supervised COCO finetuning, showing that fine-grained textual supervision can transfer to stronger visual understanding.

Zero-shot segmentation (mIoU), open-vocabulary detection, and supervised COCO val2017 results.
| Method | Zero-shot Seg. mIoU (COCO-S / ADE / VOC / City) | OV-COCO Det. (Novel / Base / All) | COCO val2017 (APbb / APseg) |
|---|---|---|---|
| EVA02 | 12.9 / 11.5 / 21.0 / 13.5 | 24.7 / 53.6 / 46.0 | 45.0 / 38.2 |
| + LLM2CLIP | 15.3 / 15.8 / 29.1 / 20.1 | 28.9 / 54.7 / 48.0 | 45.6 / 38.7 |

LLM2CLIP Boosts VLMs like LLaVA for Stronger Multimodal Understanding

We also replace LLaVA 1.5’s CLIP ViT-L/14 vision encoder with our LLM2CLIP-enhanced version. LLM2CLIP improves LLaVA across most benchmarks, strengthening both VQA and general multimodal evaluation suites.
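As a rough picture of how such a swap could be wired up, the sketch below wraps a fine-tuned vision encoder behind the kind of vision-tower interface a LLaVA-style projector consumes. The wrapper class, its interface, and the assumption that the encoder returns per-patch features are hypothetical and not taken from the LLaVA or LLM2CLIP codebases.

```python
# Conceptual sketch only: neither the wrapper below nor its interface is taken from the
# LLaVA or LLM2CLIP codebases. It illustrates the idea of exposing an LLM2CLIP-tuned ViT
# through the kind of vision-tower interface a LLaVA-style projector consumes.
import torch
import torch.nn as nn

class LLM2CLIPVisionTower(nn.Module):
    """Hypothetical wrapper around a fine-tuned vision encoder for a LLaVA-style pipeline."""
    def __init__(self, vision_encoder: nn.Module, hidden_size: int):
        super().__init__()
        self.vision_encoder = vision_encoder.eval()   # keep the tower frozen, as in LLaVA 1.5
        self.hidden_size = hidden_size                # queried by the multimodal projector

    @torch.no_grad()
    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Assumed to return per-patch features of shape (B, num_patches, hidden_size),
        # which the MLP projector then maps into the language model's embedding space.
        return self.vision_encoder(images)
```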

Performance of LLaVA 1.5 on standard benchmarks. For + LLM2CLIP, we replace LLaVA’s CLIP ViT-L/14 with our fine-tuned version.
| Model | VQAv2 | GQA | VizWiz | SQA | TextVQA | POPE-R | POPE-A | POPE-P | MME | MMB | MMB-CN | LB | MV | SEED-All | SEED-I | SEED-V |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLaVA (Paper) | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 87.3 | 86.1 | 84.2 | 1510.7 | 64.3 | 58.3 | 65.4 | 31.1 | 58.6 | 66.1 | 37.3 |
| LLaVA (Rep.) | 79.04 | 62.86 | 50.57 | 67.97 | 57.48 | 87.7 | 84.85 | 86.3 | 1476.69 | 66.66 | 60.39 | 58.0 | 34.3 | 59.86 | 66.95 | 39.71 |
| + LLM2CLIP | 79.80 | 63.15 | 52.37 | 69.92 | 58.35 | 88.55 | 82.76 | 87.75 | 1505.82 | 68.29 | 60.40 | 62.7 | 34.8 | 60.96 | 68.80 | 38.96 |

BibTeX

@misc{huang2024llm2clip,
  title     = {LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation},
  author    = {Weiquan Huang and
               Aoqi Wu and
               Yifan Yang and
               Xufang Luo and
               Yuqing Yang and
               Usman Naseem and
               Chunyu Wang and
               Qi Dai and
               Xiyang Dai and
               Dongdong Chen and
               Chong Luo and
               Lili Qiu and
               Liang Hu},
  year      = {2024},
  eprint    = {2411.04997},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url       = {https://arxiv.org/abs/2411.04997}
}