We aim to advance research on Large Language Models (LLMs), with a particular focus on agentic coding LLMs. We build interactive software-engineering (SWE) environments and synthetic tasks that support the design, training, and evaluation of agentic coding AI systems. We distil teacher LLMs' reasoning and coding capabilities into smaller, more efficient student models that can be deployed plug-and-play in real-world settings. We also explore new learning paradigms that leverage interaction and feedback to improve LLM-based coding agents.
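As a rough illustration of the distillation direction, the sketch below computes a standard temperature-scaled KL distillation loss between teacher and student next-token logits (in the style of Hinton et al.). This is a generic textbook formulation, not the group's actual training objective; the function names and toy logits are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # softened distribution; subtract the max for numerical stability
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    averaged over positions and scaled by T^2 (a common convention)."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float(kl.mean() * temperature ** 2)

# toy next-token logits over a 5-token vocabulary (hypothetical values)
teacher = np.array([[2.0, 1.0, 0.2, -1.0, -2.0]])
close_student = np.array([[1.8, 0.9, 0.1, -1.1, -2.2]])
far_student = np.array([[-2.0, 2.0, 0.0, 1.0, -1.0]])

# a student that tracks the teacher incurs a lower distillation loss
print(distillation_kl(teacher, close_student) < distillation_kl(teacher, far_student))
```

In practice this term is typically mixed with a standard cross-entropy loss on ground-truth tokens, and for reasoning distillation it is often replaced or augmented by fine-tuning on teacher-generated traces.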