24 changes: 24 additions & 0 deletions .ports.env
@@ -0,0 +1,24 @@
# Port range allocated to this work order
# Each work order gets 10 consecutive ports for flexibility
# CLI tools can ignore ports, microservices can use multiple

PORT_RANGE_START=9170
PORT_RANGE_END=9179
PORT_RANGE_SIZE=10

# Individual ports (use PORT_0, PORT_1, etc.)
PORT_0=9170
PORT_1=9171
PORT_2=9172
PORT_3=9173
PORT_4=9174
PORT_5=9175
PORT_6=9176
PORT_7=9177
PORT_8=9178
PORT_9=9179

# Convenience aliases (backward compatible with old format)
BACKEND_PORT=9170
FRONTEND_PORT=9171
VITE_BACKEND_URL=http://localhost:9170
10 changes: 9 additions & 1 deletion CLAUDE.md
@@ -90,12 +90,18 @@ uv pip install -e .
### Development Commands

```bash
# Run all tests
# Run all tests (sequential)
uv run pytest

# Run all tests in parallel (faster for large test suites)
uv run pytest -n auto

# Run specific tests
uv run pytest concept_library/full_review_loop/tests/ -v

# Run specific tests in parallel with 4 workers
uv run pytest dylan/tests/ -n 4 -v

# Format code
uv run black .

@@ -106,6 +112,8 @@ uv run ruff check .
uv run mypy .
```

**Note on Parallel Testing**: Use `pytest -n auto` for parallel test execution when running large test suites (20+ tests). For small test suites or debugging, sequential execution (`pytest` without `-n`) is recommended. See TESTING.md for detailed parallel execution documentation.

### Running Core Components

#### Individual Components
286 changes: 286 additions & 0 deletions PRPs/features/completed/parallel-execution-test.md
@@ -0,0 +1,286 @@
# Feature: Parallel Execution Test Documentation

## Feature Description

This feature creates comprehensive documentation for parallel execution testing using pytest-xdist, demonstrating the project's ability to run tests in parallel to reduce CI/CD execution time and costs. The documentation will serve as both a guide for developers and a validation that parallel testing infrastructure is properly configured in the Dylan project.

## User Story

As a developer working on the Dylan project
I want to understand how to run tests in parallel and validate parallel execution works correctly
So that I can reduce test execution time during development and CI/CD pipelines

## Problem Statement

Testing is a critical part of the development workflow, but as test suites grow, sequential test execution becomes a bottleneck. Long-running test suites slow down development feedback loops and increase CI/CD costs, which are typically billed per execution minute. The project needs clear documentation on how to leverage parallel test execution to speed up development workflows and reduce infrastructure costs.

## Solution Statement

Create a comprehensive TESTING.md documentation file that explains parallel execution testing using pytest-xdist, including installation, usage examples, configuration options, and best practices. The documentation will demonstrate that the Dylan project supports parallel test execution and provides clear guidance for developers on how to use this feature effectively.

## Relevant Files

Use these files to implement the feature:

### Existing Files

- **TESTING.md** (lines 1-150)
- Current testing documentation that needs to be enhanced with parallel execution information
- Contains existing testing structure, fixtures, and running test commands
- Will be updated to include parallel execution section

- **pyproject.toml** (lines 96-105)
- Contains pytest configuration in [tool.pytest.ini_options]
- May need to be updated to include pytest-xdist configuration
- Currently has testpaths, python_files, python_functions, and addopts settings

- **CLAUDE.md** (lines 90-107)
- Contains development commands section
- Shows current test execution commands
- Reference for project conventions and patterns

- **dylan/tests/test_cli.py** (lines 1-95)
- Example test file showing current testing patterns
- Demonstrates unit test structure with mocking
- Shows fixture usage patterns

- **dylan/conftest.py**
- Contains global fixtures for testing
- Provides mock_claude_provider, temp_output_dir, cli_runner, etc.
- Reference for understanding test infrastructure

### New Files

- **tests/test_parallel_execution.py**
- New test file to validate parallel execution works correctly
- Will contain tests that can safely run in parallel
- Demonstrates isolation and independence of tests

## Relevant Research Documentation

Use these documentation files and links to understand the technology involved:

- [pytest-xdist Documentation](https://pytest-xdist.readthedocs.io/)
- [Installation Guide](https://pytest-xdist.readthedocs.io/en/latest/index.html#installation)
- Official documentation for pytest parallel execution plugin
- Contains configuration options, usage examples, and best practices

- [Pytest Parallel Testing with pytest-xdist](https://pytest-with-eric.com/plugins/pytest-xdist/)
- [Basic Usage](https://pytest-with-eric.com/plugins/pytest-xdist/#basic-usage)
- [Performance Benefits](https://pytest-with-eric.com/plugins/pytest-xdist/#performance-benefits)
- Comprehensive tutorial on parallel testing with real-world examples

- [AWS CodeBuild - Configure parallel tests with Pytest](https://docs.aws.amazon.com/en_us/codebuild/latest/userguide/sample-parallel-test-python.html)
- [Configuration Examples](https://docs.aws.amazon.com/en_us/codebuild/latest/userguide/sample-parallel-test-python.html#sample-parallel-test-python-config)
- Guide for CI/CD parallel test configuration

- [pytest-xdist PyPI](https://pypi.org/project/pytest-xdist/)
- [Installation command](https://pypi.org/project/pytest-xdist/#installation)
- Official package page with version information and quick start

- [Run Tests in Parallel with Python - GeeksforGeeks](https://www.geeksforgeeks.org/python/run-tests-in-parallel-with-pytest/)
- [Common Issues and Solutions](https://www.geeksforgeeks.org/python/run-tests-in-parallel-with-pytest/#common-issues-and-solutions)
- Tutorial covering common parallel testing scenarios and troubleshooting

## Implementation Plan

### Phase 1: Foundation

Research and understand the current testing infrastructure to ensure parallel execution documentation aligns with existing patterns. Install pytest-xdist as a development dependency and validate it works with the current test suite. Review existing test files to ensure they follow best practices for parallel execution (test isolation, no shared state, independent fixtures).

### Phase 2: Core Implementation

Update TESTING.md with a comprehensive parallel execution section including:
- Installation instructions for pytest-xdist
- Basic usage examples (`pytest -n auto`, `pytest -n 4`)
- Performance benefits and cost savings
- Best practices for writing parallel-safe tests
- Common pitfalls and how to avoid them
- CI/CD configuration examples

Create a new test file `tests/test_parallel_execution.py` that demonstrates parallel execution capabilities with isolated, independent tests that validate the parallel testing infrastructure works correctly.
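A minimal sketch of what `tests/test_parallel_execution.py` could contain, assuming the project's `@pytest.mark.unit` marker convention; the individual test names are illustrative, and `PYTEST_XDIST_WORKER` is the environment variable pytest-xdist sets inside its worker processes:

```python
"""Tests validating that the suite runs safely under pytest-xdist.

Each test is self-contained: no shared module state, no fixed file
paths, and no dependence on execution order.
"""

import os

import pytest


@pytest.mark.unit
def test_independent_computation():
    """Runs identically regardless of which worker picks it up."""
    assert sum(range(10)) == 45


@pytest.mark.unit
def test_isolated_file_io(tmp_path):
    """Uses pytest's per-test tmp_path, so parallel workers never collide."""
    scratch = tmp_path / "scratch.txt"
    scratch.write_text("isolated")
    assert scratch.read_text() == "isolated"


@pytest.mark.unit
def test_detects_parallel_worker():
    """PYTEST_XDIST_WORKER is set (e.g. 'gw0') only when xdist is active."""
    worker = os.environ.get("PYTEST_XDIST_WORKER", "main")
    assert worker  # non-empty in both sequential and parallel runs
```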

### Phase 3: Integration

Update pyproject.toml to include pytest-xdist in dev dependencies and configure parallel execution settings if needed. Update CLAUDE.md to reference the new parallel execution capabilities in the development commands section. Verify all existing tests can run in parallel without conflicts or shared state issues.

## Step by Step Tasks

IMPORTANT: Execute every step in order, top to bottom.

### 1. Install and Configure pytest-xdist

- Add pytest-xdist to the project's dev dependencies using `uv add --dev pytest-xdist`
- Verify installation by running `uv run pytest --version` to check xdist plugin is loaded
- Test basic parallel execution with `uv run pytest -n auto` to ensure compatibility with existing tests

### 2. Create Parallel Execution Test File

- Create `tests/test_parallel_execution.py` with isolated, independent tests
- Include tests that validate parallel execution works (e.g., multiple independent tests that can run simultaneously)
- Use proper fixtures from `dylan/conftest.py` to ensure test isolation
- Add docstrings following Google-style format to explain test purpose
- Mark tests appropriately with pytest markers if needed

### 3. Update TESTING.md Documentation

- Read current TESTING.md to understand existing structure and tone
- Add a new "Parallel Execution Testing" section after the "Running Tests" section
- Include subsections:
- **What is Parallel Execution**: Brief explanation of pytest-xdist
- **Installation**: How pytest-xdist is already installed as dev dependency
- **Basic Usage**: Command examples for parallel execution
- **Performance Benefits**: Explanation of time and cost savings
- **Best Practices**: Guidelines for writing parallel-safe tests
- **Common Issues**: Troubleshooting guide for parallel testing problems
- Use code blocks with bash syntax highlighting for commands
- Maintain consistent formatting and style with existing documentation

### 4. Update pyproject.toml Configuration

- Open pyproject.toml and review [tool.pytest.ini_options] section
- Add pytest-xdist to [project.optional-dependencies] dev section if not already present
- Consider adding parallel execution configuration options like `addopts = ["-n", "auto"]` for default parallel execution (optional, discuss trade-offs)
- Ensure configuration doesn't break existing test execution patterns

### 5. Update CLAUDE.md Reference

- Read CLAUDE.md to understand development commands section
- Add parallel execution examples to the "Development Commands" section
- Include examples like `uv run pytest -n auto` and `uv run pytest -n 4`
- Add brief explanation of when to use parallel vs sequential execution
- Maintain consistent formatting with existing commands

### 6. Validate Test Suite Compatibility

- Run full test suite in parallel mode: `uv run pytest -n auto`
- Identify any tests that fail due to shared state or race conditions
- Fix or mark problematic tests appropriately (skip in parallel mode if necessary; a sketch of one approach follows this list)
- Verify test output is readable and errors are clearly reported in parallel mode
- Document any tests that must run sequentially and why
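One way to implement the "skip in parallel mode" fallback mentioned above is to check the `PYTEST_XDIST_WORKER` environment variable, which pytest-xdist sets in each worker process. A minimal sketch, with hypothetical helper and test names:

```python
import os

import pytest


def _in_xdist_worker() -> bool:
    """True when pytest-xdist is distributing tests to worker processes."""
    return os.environ.get("PYTEST_XDIST_WORKER") is not None


def test_binds_fixed_port():
    """Hypothetical test that grabs a fixed port, so it cannot run in parallel."""
    if _in_xdist_worker():
        pytest.skip("binds a fixed port; run sequentially with `pytest -n 0`")
    assert True  # sequential-only assertions would go here
```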

### 7. Run Validation Commands

- Execute all validation commands listed below to ensure zero regressions
- Fix any issues that arise during validation
- Verify parallel execution reduces test execution time compared to sequential
- Confirm all documentation is accurate and up-to-date

## Testing Strategy

See `CLAUDE.md` for complete testing requirements. Every file in `src/` must have a corresponding test file in `tests/`.

### Unit Tests

Mark with @pytest.mark.unit. Test individual components:

- **test_parallel_execution.py**: Test that validates parallel execution infrastructure
- Test that multiple independent tests can run simultaneously
- Test that test isolation works correctly (no shared state conflicts)
- Test that fixtures are properly isolated between parallel workers
- Test that test results are correctly aggregated from parallel workers

### Integration Tests

This feature is primarily documentation-focused, but integration testing includes:

- **Full test suite parallel execution**: Run entire test suite with `-n auto` to validate all tests work in parallel
- **CI/CD integration**: Verify parallel execution works in CI/CD pipeline (if applicable)

### Edge Cases

- Tests with file I/O that might conflict (ensure proper temp directory usage)
- Tests with database connections (ensure connection pooling or isolation)
- Tests with external API calls (ensure proper mocking; see the sketch after this list)
- Tests with time-dependent behavior (ensure no race conditions)
- Tests that modify global state (ensure cleanup or isolation)
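For the external-API edge case, the sketch below stubs the client so no worker ever reaches the network; `fake_provider` and `complete` are illustrative names, with `mock_claude_provider` from `dylan/conftest.py` playing this role in the real suite:

```python
from unittest.mock import MagicMock

import pytest


@pytest.fixture
def fake_provider():
    """Stand-in for an external API client; no network calls, safe in any worker."""
    provider = MagicMock()
    provider.complete.return_value = "stubbed response"
    return provider


def test_uses_stubbed_provider(fake_provider):
    """Each test gets its own MagicMock, so there is no shared state."""
    assert fake_provider.complete("prompt") == "stubbed response"
```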

## Acceptance Criteria

- [ ] pytest-xdist is installed as a development dependency and working correctly
- [ ] TESTING.md contains a comprehensive "Parallel Execution Testing" section with clear examples
- [ ] A new test file `tests/test_parallel_execution.py` exists and demonstrates parallel execution
- [ ] All existing tests pass when run in parallel mode (`pytest -n auto`)
- [ ] Documentation includes best practices for writing parallel-safe tests
- [ ] Documentation includes troubleshooting guide for common parallel testing issues
- [ ] pyproject.toml is updated with pytest-xdist in dev dependencies
- [ ] CLAUDE.md references parallel execution capabilities in development commands
- [ ] Parallel execution reduces test execution time by at least 30% compared to sequential (for test suites with sufficient tests)
- [ ] All validation commands pass with zero regressions

## Validation Commands

Execute every command to validate the feature works correctly with zero regressions.

**Required validation commands:**

- `uv run ruff check .` - Lint check must pass
- `uv run mypy .` - Type check must pass (if applicable to test files)
- `uv run pytest` - All tests must pass with zero regressions (sequential mode)
- `uv run pytest -n auto` - All tests must pass in parallel mode
- `uv run pytest -n 4` - All tests must pass with 4 workers (demonstrates configurable parallelism)

**Performance validation:**

- `time uv run pytest` - Measure sequential execution time
- `time uv run pytest -n auto` - Measure parallel execution time
- Compare times to validate performance improvement

**Documentation validation:**

- Read TESTING.md to ensure parallel execution section is clear and accurate
- Verify all code examples in documentation are correct and runnable
- Check that best practices and troubleshooting sections are comprehensive

**Test file validation:**

- `uv run pytest tests/test_parallel_execution.py -v` - Verify new test file works correctly
- `uv run pytest tests/test_parallel_execution.py -n auto -v` - Verify tests run in parallel

## Notes

### Performance Considerations

Parallel execution provides the most benefit when:
- Test suite has many tests (>50 tests recommended)
- Tests are I/O bound rather than CPU bound
- Tests are independent and don't share state
- Machine has multiple CPU cores available

For small test suites (<20 tests), the overhead of spawning parallel workers may outweigh any time savings.

### CI/CD Integration

Most CI/CD services charge based on execution minutes. Parallel execution can:
- Reduce total execution time by 50-80% for large test suites
- Lower CI/CD costs proportionally
- Provide faster feedback to developers

Consider using `-n auto` in CI/CD pipelines to automatically scale to available cores.

### Test Isolation Best Practices

To ensure tests work correctly in parallel (a short sketch follows this list):
- Use fixtures for test data setup and teardown
- Use temporary directories for file operations (temp_output_dir fixture)
- Mock external services and APIs (mock_claude_provider fixture)
- Avoid global state or ensure proper cleanup
- Make tests deterministic (no reliance on execution order)
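As a sketch of these practices, pytest-xdist's `worker_id` fixture (which returns `"master"` when tests are not distributed) can be combined with pytest's `tmp_path_factory` to give each worker its own scratch area; the fixture name `worker_scratch_dir` is illustrative:

```python
import pytest


@pytest.fixture
def worker_scratch_dir(tmp_path_factory, worker_id):
    """Per-worker directory, so parallel workers never write to the same path."""
    # worker_id is "gw0", "gw1", ... under xdist, or "master" when sequential.
    return tmp_path_factory.mktemp(f"scratch_{worker_id}")


def test_writes_stay_isolated(worker_scratch_dir):
    """File I/O stays inside this worker's own directory."""
    out = worker_scratch_dir / "result.txt"
    out.write_text("ok")
    assert out.read_text() == "ok"
```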

### Troubleshooting Common Issues

If tests fail in parallel but pass sequentially:
1. Check for shared file paths (use temp directories)
2. Check for database connection limits (use proper pooling)
3. Check for global state modification (ensure isolation)
4. Check for race conditions in async code
5. Re-run the affected tests with `-n 0` (which disables xdist entirely) to confirm the failure only appears under parallel execution

### Future Enhancements

Potential future improvements:
- Add test markers for parallel vs sequential execution
- Implement test groups that must run together
- Add CI/CD pipeline configuration for parallel execution
- Create dashboard for test execution time tracking
- Add automatic detection of non-parallel-safe tests