52 changes: 52 additions & 0 deletions .github/workflows/ci.yml
@@ -73,3 +73,55 @@ jobs:
        uses: test-summary/action@v2
        with:
          paths: reports/pytest.xml

  live-openai-tests:
    name: Run live OpenAI tests (requires secret)
    runs-on: ubuntu-latest
    needs: tests
    env:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
      OPENAI_MODEL: ${{ secrets.OPENAI_MODEL }}
      OPENAI_EMBEDDING_MODEL: ${{ secrets.OPENAI_EMBEDDING_MODEL }}
    steps:
      - name: Skip live tests when secret is absent
        if: ${{ env.OPENAI_API_KEY == '' }}
        run: |
          echo "OPENAI_API_KEY is not configured; skipping live OpenAI test job."

      - name: Checkout repository
        if: ${{ env.OPENAI_API_KEY != '' }}
        uses: actions/checkout@v4

      - name: Set up Python
        if: ${{ env.OPENAI_API_KEY != '' }}
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        if: ${{ env.OPENAI_API_KEY != '' }}
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then
            pip install -r requirements.txt
          fi

      - name: Run live OpenAI endpoint tests
        if: ${{ env.OPENAI_API_KEY != '' }}
        run: |
          mkdir -p reports
          pytest --junitxml=reports/openai-live.xml tests/test_openai_live.py

      - name: Upload live test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: pytest-openai-live-results
          path: reports/openai-live.xml

      - name: Summarize live results
        if: always()
        uses: test-summary/action@v2
        with:
          paths: reports/openai-live.xml
1 change: 1 addition & 0 deletions README.md
@@ -134,6 +134,7 @@ VelvetFlow (repo root)
```bash
export OPENAI_API_KEY="<your_api_key>"
```
   - See [docs/openai_token_security.md](docs/openai_token_security.md) for how to safely store and inject the token locally and in CI, and how to run the tests that require a real OpenAI endpoint.
3. **Build the toolset index offline (optional)**
   - To rebuild the keyword and vector indexes from the latest `tools/business_actions/`, run:
```bash
39 changes: 39 additions & 0 deletions docs/openai_token_security.md
@@ -0,0 +1,39 @@
# Recommendations for Safely Storing and Using OpenAI Endpoint Tokens

To run integration tests that make real OpenAI calls (e.g. `tests/test_openai_live.py`), the OpenAI access token must be configured correctly and kept protected. The guidance below helps you store, load, and rotate credentials safely and avoid leaking them.

## 1. Storing the Token Safely on a Local Machine
- **Use environment variables**: prefer putting the token in the `OPENAI_API_KEY` environment variable. For temporary use, run in the current shell:
  ```bash
  export OPENAI_API_KEY="sk-..."
  ```
- **Use a `.env` file, but never commit it**: create a `.env` file in the project root (or auto-load one with tools such as `direnv`/`dotenv`) and put `OPENAI_API_KEY` in it. Make sure `.env` is listed in `.gitignore` so it cannot be committed to the repository (see the loading sketch after this list).
- **Lock down file permissions**: set strict permissions on any file holding the token, e.g. `chmod 600 .env`, so other users on the same machine cannot read it.
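
As an illustration, here is a minimal sketch of loading the token from `.env` in Python. It assumes the third-party `python-dotenv` package (`pip install python-dotenv`), which is not necessarily a dependency of this repository:

```python
import os

# Assumption: python-dotenv is installed; this repo may not depend on it.
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory, if present

api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set; see the storage tips above.")
```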

## 2. Managing Secrets in CI/CD or on Servers
- **Use a secret-management service**: prefer storing the token in your CI's secret store (e.g. GitHub Actions Secrets, GitLab CI Variables) and injecting it as an environment variable at pipeline runtime.
- **Use per-role keys**: issue separate tokens for CI, developer machines, and production to limit the blast radius of misuse.
- **Least privilege and auditability**: if your provider supports scoped service accounts or resource groups, restrict the token to only the models/regions it needs, and enable access logging.

### GitHub Actions Specifics
- The CI workflow runs `tests/test_openai_live.py` automatically when it detects that `OPENAI_API_KEY` has been injected via GitHub Secrets; otherwise the job is skipped.
- Configure the following keys as repository- or environment-level secrets (missing ones resolve to empty strings and the job is skipped):
  - `OPENAI_API_KEY` (required; the job only runs when it is present)
  - `OPENAI_BASE_URL` (optional)
  - `OPENAI_MODEL` and `OPENAI_EMBEDDING_MODEL` (optional; override the default models)

## 3. Running Tests That Make Real OpenAI Calls
- `tests/test_openai_live.py` in this repository automatically `skip`s when `OPENAI_API_KEY` is missing, so it will not fail in offline or credential-less environments (a sketch of this guard follows this list).
- To point at a non-default endpoint, set `OPENAI_BASE_URL`; to override the default models, set `OPENAI_MODEL` (chat) and `OPENAI_EMBEDDING_MODEL` (embeddings).
- Run this file only when necessary, locally or in CI, for example:
  ```bash
  OPENAI_API_KEY=sk-... pytest -q tests/test_openai_live.py
  ```
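
For illustration, the skip behavior described above can be implemented with a module-level pytest marker like the following sketch; this is one common pattern, not the verbatim contents of `tests/test_openai_live.py`:

```python
import os

import pytest

# Skip every test in this module when no credential is available.
# Sketch only: the real test file may implement the check differently.
pytestmark = pytest.mark.skipif(
    not os.environ.get("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY is not set; skipping live OpenAI tests",
)
```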

## 4. Additional Measures Against Leaks
- **Never hard-code the token**: do not write the token into source code, config files, or sample code; before publishing docs or logs, use a search tool to confirm they contain no sensitive patterns such as `sk-` (see the scan sketch after this list).
- **Be careful sharing console output**: during demos or debugging, avoid exposing the token in printed requests or full error messages.
- **Rotate regularly**: set up a rotation schedule, update CI secrets and local `.env` files, and verify that old tokens have been invalidated.
- **Revoke leaked tokens**: as soon as you suspect a leak, revoke the token in the OpenAI console and replace it in every affected environment.
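
As a hedged example of the pre-publication scan mentioned in the first bullet (the `sk-` pattern below is an assumption and may need tuning for your key format):

```python
import pathlib
import re

# Assumption: live keys look like "sk-" followed by a long alphanumeric tail.
TOKEN_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{16,}")

for path in pathlib.Path(".").rglob("*"):
    if not path.is_file() or ".git" in path.parts:
        continue
    try:
        text = path.read_text(encoding="utf-8")
    except (UnicodeDecodeError, OSError):
        continue  # skip binary or unreadable files
    for match in TOKEN_PATTERN.finditer(text):
        # Print only a prefix so the scan itself does not leak the token.
        print(f"possible token in {path}: {match.group()[:6]}...")
```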

With these measures in place, you can run tests that require real OpenAI access while keeping credentials well protected both locally and in the cloud.
54 changes: 54 additions & 0 deletions tests/test_build_workflow_e2e.py
@@ -0,0 +1,54 @@
import json
from pathlib import Path

import build_workflow
from velvetflow.action_registry import BUSINESS_ACTIONS
from velvetflow.models import Node, Workflow


def _make_stub_workflow(action_id: str) -> Workflow:
    node = Node.model_validate({
        "id": "stub_action",
        "type": "action",
        "action_id": action_id,
        "params": {"message": "hello"},
    })
    return Workflow(nodes=[node], workflow_name="stub_workflow", description="test stub")


def test_build_workflow_main_persists_outputs(monkeypatch, tmp_path: Path):
    # Ensure OpenAI checks are bypassed.
    monkeypatch.setenv("OPENAI_API_KEY", "test-key")

    # Pick a known action id from the registry so tool-gap guidance is skipped.
    first_action_id = next(action["action_id"] for action in BUSINESS_ACTIONS if action.get("action_id"))

    stub_workflow = _make_stub_workflow(first_action_id)

    def fake_plan_workflow(user_nl: str):
        return stub_workflow

    def fake_render(workflow, output_path: str):
        path = Path(output_path)
        path.write_text("stub image", encoding="utf-8")
        return str(path)

    # Avoid interactive prompts.
    monkeypatch.setattr("builtins.input", lambda _prompt="": "")
    monkeypatch.setattr(build_workflow, "plan_workflow", fake_plan_workflow)
    monkeypatch.setattr(build_workflow, "render_workflow_dag", fake_render)

    # Run in an isolated directory so persisted artifacts don't leak between tests.
    monkeypatch.chdir(tmp_path)

    build_workflow.main()

    workflow_json = tmp_path / build_workflow.DEFAULT_WORKFLOW_JSON
    dag_image = tmp_path / "workflow_dag.jpg"

    assert workflow_json.exists(), "workflow_output.json should be created"
    assert dag_image.exists(), "workflow_dag.jpg should be created"

    payload = json.loads(workflow_json.read_text(encoding="utf-8"))
    assert payload["workflow_name"] == "stub_workflow"
    assert payload["nodes"], "workflow nodes should be persisted"