VEM: Environment-Free Exploration for Training GUI Agent with Value Environment Model

Jiani Zheng1, Lu Wang2, Fangkai Yang2, Chaoyun Zhang2, Lingrui Mei3, Wenjie Yin4,
Qingwei Lin2, Dongmei Zhang2, Saravan Rajmohan2, Qi Zhang2

1Peking University    2Microsoft
3University of the Chinese Academy of Sciences    4KTH Royal Institute of Technology

Abstract

Training Vision-Language Models (VLMs) for Graphical User Interface (GUI) agents via Reinforcement Learning (RL) faces critical challenges: environment-based RL requires costly interactions, while environment-free methods struggle with distribution shift and reward generalization. We propose an environment-free RL framework that decouples value estimation from policy optimization by leveraging a pretrained Value Environment Model (VEM). VEM predicts state-action values directly from offline data, distilling human-like priors about GUI interaction outcomes without requiring next-state prediction or environmental feedback. This avoids compounding errors and improves resilience to UI changes by focusing on semantic reasoning (e.g., “Does this action advance the user’s goal?”). The framework operates in two stages: (1) pretraining VEM to estimate long-term action utilities and (2) guiding policy exploration with frozen VEM signals, enabling layout-agnostic GUI automation. Evaluated on Android-in-the-Wild benchmarks, VEM achieves state-of-the-art performance in both offline and online settings, significantly outperforming environment-free baselines and matching environment-based approaches without incurring interaction costs. Importantly, VEM demonstrates that semantic-aware value estimation can achieve performance comparable to online-trained methods.
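The sketch below illustrates that two-stage recipe on toy data: a small value network is first fit to offline (state, action, return) tuples, then frozen and used to weight a policy's action log-likelihoods. This is only a minimal sketch; the network sizes, synthetic features, discrete action vocabulary, and the particular value-weighted loss are illustrative assumptions, not the paper's actual VLM-based implementation.

import torch
import torch.nn as nn

FEAT = 64  # toy feature size standing in for VLM embeddings of screens/actions


class ValueEnvironmentModel(nn.Module):
    """Scores a (state, action) pair with a scalar long-term utility."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * FEAT, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)


class Policy(nn.Module):
    """Toy policy over a small discrete GUI-action vocabulary."""

    def __init__(self, n_actions=8):
        super().__init__()
        self.net = nn.Linear(FEAT, n_actions)

    def forward(self, state):
        return torch.log_softmax(self.net(state), dim=-1)


def pretrain_vem(vem, offline_batches, epochs=5):
    """Stage 1: regress VEM onto offline returns; no environment rollouts."""
    opt = torch.optim.Adam(vem.parameters(), lr=1e-3)
    for _ in range(epochs):
        for state, action, ret in offline_batches:
            loss = nn.functional.mse_loss(vem(state, action), ret)
            opt.zero_grad()
            loss.backward()
            opt.step()


def train_policy(policy, vem, action_feats, states, steps=200):
    """Stage 2: the frozen VEM scores every candidate action; its values
    weight the policy's log-likelihood, pushing mass toward useful actions."""
    for p in vem.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    n_actions = action_feats.size(0)
    for _ in range(steps):
        s = states[torch.randint(len(states), (32,))]
        logp = policy(s)  # (32, n_actions)
        q = vem(
            s.unsqueeze(1).expand(-1, n_actions, -1),
            action_feats.unsqueeze(0).expand(s.size(0), -1, -1),
        )  # (32, n_actions); VEM parameters are frozen, so no gradient flows through it
        weights = torch.softmax(q, dim=-1).detach()
        loss = -(weights * logp).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    torch.manual_seed(0)
    n_actions = 8
    action_feats = torch.randn(n_actions, FEAT)
    states = torch.randn(256, FEAT)
    # Synthetic offline data: batches of (state, action features, scalar return).
    offline = [
        (states[i : i + 32], action_feats[torch.randint(n_actions, (32,))], torch.rand(32))
        for i in range(0, 256, 32)
    ]
    vem, policy = ValueEnvironmentModel(), Policy(n_actions)
    pretrain_vem(vem, offline)                       # stage 1: value pretraining from offline data
    train_policy(policy, vem, action_feats, states)  # stage 2: frozen-VEM-guided policy update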

BibTeX

@misc{zheng2025vemenvironmentfreeexplorationtraining,
  title={VEM: Environment-Free Exploration for Training GUI Agent with Value Environment Model},
  author={Jiani Zheng and Lu Wang and Fangkai Yang and Chaoyun Zhang and Lingrui Mei and Wenjie Yin and Qingwei Lin and Dongmei Zhang and Saravan Rajmohan and Qi Zhang},
  year={2025},
  eprint={2502.18906},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2502.18906},
}