Description
Track how long servers actually take to become available during E2E tests. This data can help optimize wait intervals and identify performance regressions.
Proposed Changes
1. Collect Timing Metrics
```ruby
require 'json'
require 'time' # for Time#iso8601

class ServerManager
  def wait_until_ready
    logger.info 'Waiting for server to be ready...'
    start_time = Time.now
    attempt_count = 0

    # Give server time to initialize before checking
    sleep INITIAL_STARTUP_DELAY

    MAX_STARTUP_ATTEMPTS.times do |attempt|
      attempt_count = attempt + 1
      if server_responding?
        @ready = true
        elapsed = Time.now - start_time
        record_startup_metric(elapsed, attempt_count)
        logger.info "Server ready after #{elapsed.round(2)}s (#{attempt_count} attempts)"
        return true
      end
      print '.'
      sleep STARTUP_CHECK_INTERVAL
    end

    elapsed = Time.now - start_time
    logger.warn "Server failed to start after #{elapsed.round(2)}s (#{MAX_STARTUP_ATTEMPTS} attempts)"
    false
  end

  private

  def record_startup_metric(elapsed_seconds, attempts)
    # Could write to file, send to monitoring service, etc.
    metrics = {
      timestamp: Time.now.iso8601,
      mode: @mode[:name],
      elapsed_seconds: elapsed_seconds.round(3),
      attempts: attempts,
      initial_delay: INITIAL_STARTUP_DELAY,
      check_interval: STARTUP_CHECK_INTERVAL
    }
    File.open('e2e_metrics.jsonl', 'a') do |f|
      f.puts(JSON.generate(metrics))
    end
  rescue StandardError => e
    logger.debug "Failed to record metric: #{e.message}"
  end
end
```
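For context, the timing constants referenced above might be defined along these lines; the values shown are placeholders to illustrate how they relate, not part of this proposal:

```ruby
class ServerManager
  # Placeholder values; tune from collected metrics (and see #28 for
  # making these configurable via environment variables).
  INITIAL_STARTUP_DELAY  = 1    # seconds to wait before the first check
  STARTUP_CHECK_INTERVAL = 0.5  # seconds between readiness checks
  MAX_STARTUP_ATTEMPTS   = 60   # ~30s of polling at the interval above
end
```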
2. Analyze Metrics
```bash
# Calculate average startup time
cat e2e_metrics.jsonl | jq -s 'map(.elapsed_seconds) | add / length'

# Find slowest run
cat e2e_metrics.jsonl | jq -s 'max_by(.elapsed_seconds)'

# Group by mode
cat e2e_metrics.jsonl | jq -s 'group_by(.mode) | map({mode: .[0].mode, avg: (map(.elapsed_seconds) | add / length)})'
```
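Where jq isn't available (e.g. on some CI images), a short Ruby script can produce the same per-mode summary. This is a sketch assuming the JSONL format written by `record_startup_metric` above:

```ruby
#!/usr/bin/env ruby
# Summarize e2e_metrics.jsonl: average and worst startup time per mode.
require 'json'

metrics = File.readlines('e2e_metrics.jsonl').map { |line| JSON.parse(line) }

metrics.group_by { |m| m['mode'] }.each do |mode, rows|
  avg   = rows.sum { |r| r['elapsed_seconds'] } / rows.size
  worst = rows.max_by { |r| r['elapsed_seconds'] }
  puts format('%-12s avg %.2fs  worst %.2fs  (%d runs)',
              mode, avg, worst['elapsed_seconds'], rows.size)
end
```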
3. CI Integration
```yaml
# .github/workflows/ci.yml
- name: Run E2E tests
  run: bundle exec rake e2e:test_all_modes

- name: Upload metrics
  uses: actions/upload-artifact@v3
  with:
    name: e2e-metrics
    path: e2e_metrics.jsonl
```
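Note that as written, the upload step only runs when the test step succeeds; adding `if: always()` to the upload step would preserve metrics from failed runs too, which is arguably when they are most useful.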
Benefits:
- Performance Tracking: Identify slow server modes
- Regression Detection: Catch performance degradation over time
- Optimization: Data-driven decisions on timeout values
- CI Insights: Compare performance across different CI runners
- Debugging: Historical data helps diagnose intermittent failures
Use Cases:
- Optimize Timeouts: If metrics show servers always start in <3s, reduce MAX_STARTUP_ATTEMPTS
- Identify Regressions: Sudden increase in startup time indicates a problem
- Compare Modes: See which development mode (HMR vs static) is faster
- CI Tuning: Adjust CI timeout values based on actual CI performance
Implementation Considerations:
- Make metrics collection optional (off by default, enable with COLLECT_E2E_METRICS=true); see the sketch after this list
- Don't fail tests if metrics collection fails
- Consider memory usage for long test runs (append to file, don't keep in memory)
- Support multiple output formats (JSON Lines, CSV, StatsD, etc.)
- Add timestamp and git commit SHA for correlation
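A minimal sketch of how `record_startup_metric` could be gated and enriched with a commit SHA, assuming a GitHub Actions environment for the GITHUB_SHA fallback (everything here beyond the COLLECT_E2E_METRICS flag is illustrative, not prescribed):

```ruby
def record_startup_metric(elapsed_seconds, attempts)
  # Off by default; opt in with COLLECT_E2E_METRICS=true.
  return unless ENV['COLLECT_E2E_METRICS'] == 'true'

  metrics = {
    timestamp: Time.now.iso8601,
    # Correlate runs with the code under test; GitHub Actions exposes the
    # SHA directly via GITHUB_SHA, otherwise ask git locally.
    git_sha: ENV['GITHUB_SHA'] || `git rev-parse HEAD`.strip,
    mode: @mode[:name],
    elapsed_seconds: elapsed_seconds.round(3),
    attempts: attempts
  }
  File.open('e2e_metrics.jsonl', 'a') { |f| f.puts(JSON.generate(metrics)) }
rescue StandardError => e
  # Never fail the test run because metrics collection broke.
  logger.debug "Failed to record metric: #{e.message}"
end
```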
Related
- Mentioned as a future improvement in PR #24 (Refactor code quality improvements)
- Would benefit from the logger infrastructure from #29 (Replace puts with proper logger in E2E test infrastructure)
- Could inform environment configuration from #28 (Make E2E test timeouts configurable via environment variables)
Priority
Low - Nice to have for performance optimization, but not critical for functionality.