Quickstart

Get oBeaver up and running in minutes. This guide covers installation, getting models, and your first chat.

Requirements

```bash
brew install microsoft/foundrylocal/foundrylocal   # macOS
winget install Microsoft.FoundryLocal              # Windows
```

Note: On Linux, Foundry Local is not available. The engine defaults to ORT.

Installation

Clone the repository and install oBeaver; all Python dependencies are included:

```bash
git clone https://github.com/microsoft/obeaver.git
cd obeaver
pip install -e .
```

Initialise Model Directory

After installation, configure the default model save location. This creates the required sub-folders (`ort/`, `foundrylocal/`, `cache_dir/`) automatically:

```bash
obeaver init
```

You can also pass a path directly:

```bash
obeaver init ~/my-models
```

💡 Tip: The configuration is saved to `~/.obeaver/config.json`. Run `obeaver` with no arguments to view the current model directory paths at any time.

Environment Check

After installation, verify your setup:

```bash
obeaver check
```

Expected output:

```text
────────────────── obeaver environment check ──────────────────
Platform: macOS

✓  Foundry Local CLI found: /usr/local/bin/foundry
   Version: 0.8.119
✓  foundry-local-sdk (Python) is installed.
   SDK version: 0.5.1
────────────────────────────────────────────────────────────────
```
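The two checks above (CLI on `PATH`, SDK importable) boil down to standard-library lookups. A minimal sketch of how such a check can be done, assuming the CLI binary is named `foundry` (as shown in the output above); the import name of the SDK package is not stated in this guide, so any module name you pass is your own assumption:

```python
import importlib.util
import shutil

def check_cli(name):
    """Return the full path of a CLI tool found on PATH, or None if missing."""
    return shutil.which(name)

def check_python_package(module):
    """Return True if the named Python module is importable in this environment."""
    return importlib.util.find_spec(module) is not None
```

For instance, `check_cli("foundry")` should return a path like `/usr/local/bin/foundry` on a machine where the check above passes.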

Getting Models

Before using oBeaver, you need models. Choose based on your engine:

Foundry Local Catalog (macOS / Windows)

No manual download is needed: pass a catalog alias and Foundry handles it:

```bash
obeaver run phi-4-mini   # auto-downloads on first use
```

💡 Tip: Run `obeaver models` to list all local models. On macOS/Windows, run `foundry model list` to browse the full Foundry Local catalog.

⏳ First run requires a download: The first time you run a Foundry Local model, it is downloaded from the Microsoft Foundry catalog. Wait for the download to complete before the chat session starts; this may take several minutes depending on network speed and model size. Subsequent runs use the cached copy instantly.

ONNX GenAI Models (all platforms)

For the ORT engine, download a model from Hugging Face, convert it to ONNX format, then run it:

```bash
# Step 1: Download model from Hugging Face
hf download Qwen/Qwen3-0.6B --local-dir ./models/cache_dir/models--Qwen--Qwen3-0.6B

# Step 2: Convert to ONNX INT4
obeaver convert Qwen/Qwen3-0.6B
# → output: ./models/Qwen3-0.6B_ONNX_INT4_CPU

# Step 3: Run with the converted model
obeaver run --engine ort ./models/Qwen3-0.6B_ONNX_INT4_CPU
```

⚠️ ORT engine requires ONNX format: Models downloaded from Hugging Face in raw format (PyTorch / SafeTensors) cannot be loaded directly. You must first run `obeaver convert` to transform them to ONNX GenAI format. See the Model Conversion section for detailed instructions on text models and VL models.

Your First Chat

Start an interactive multi-turn chat in your terminal:

Foundry Local (macOS / Windows)

```bash
obeaver run phi-4-mini
obeaver run phi-4-mini --timings   # show TTFT + tok/s
```

First run: Foundry Local automatically downloads the model on first use. Wait for the download to complete; subsequent runs start instantly.

ORT Engine (all platforms)

```bash
obeaver run --engine ort ./models/phi3-mini-int4
```

Reminder: The ORT engine requires models in ONNX format. If your model is not pre-converted, run `obeaver convert` first. See Model Conversion for details.

Vision-Language Models

```bash
obeaver run ./models/Qwen2.5-VL-3B-Instruct_VL_ONNX_INT4_CPU
obeaver run ./models/Qwen3-VL-2B-Instruct_VL_ONNX_INT4_CPU
```

Note: VL models require the ORT engine. When a VL model directory is detected (it contains `vision.onnx`), the engine is automatically set to `ort`.

Start the API Server

Launch a local HTTP server with OpenAI-compatible endpoints:

```bash
obeaver serve Phi-4-mini                             # Foundry Local
obeaver serve --engine ort ./models/phi3-mini-int4   # ORT
```

Once running, any OpenAI-compatible client can connect to http://127.0.0.1:18000/v1.

Test with curl

```bash
curl http://127.0.0.1:18000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Phi-4-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```
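The same request can be made from Python using only the standard library. This sketch assumes the default server address shown above and the standard OpenAI chat-completions response shape (`choices[0].message.content`):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:18000/v1"  # default `obeaver serve` address

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload (non-streaming)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model: str, prompt: str) -> str:
    """POST to /chat/completions and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running, `chat("Phi-4-mini", "Hello!")` returns the model's reply as a string.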

Web Dashboard

Launch the monitoring dashboard with:

```bash
obeaver dashboard                # Foundry Local engine
obeaver dashboard -e ort         # ORT engine (scans ./models)
```

Then open http://127.0.0.1:1573/ in your browser.

The dashboard includes a model selector, system memory monitoring, inference parameters, a chat interface, and server logs.

Uninstall

To completely remove oBeaver and its associated files from your system:

1. Uninstall the Python package

Run the following command to uninstall oBeaver from your Python environment:

```bash
pip uninstall obeaver
```

2. Remove the configuration file

oBeaver stores its configuration in `~/.obeaver/`. Remove it with:

```bash
rm -rf ~/.obeaver
```

3. Remove downloaded models (optional)

The model directory is the path you set during `obeaver init`. Run `obeaver` (with no arguments) to check your current model directory path:

```bash
obeaver
```

Then remove the displayed model directory:

```bash
rm -rf /path/to/your/models
```

4. Remove the source code (optional)

If you cloned the oBeaver repository, remove the project directory:

```bash
rm -rf /path/to/obeaver
```

5. Uninstall Foundry Local (optional)

If you no longer need Foundry Local:

```bash
brew uninstall microsoft/foundrylocal/foundrylocal   # macOS
winget uninstall Microsoft.FoundryLocal              # Windows
```

Next Steps