feat(l1): docker compose parallel snapsync run in loop #5695
Conversation
Pull request overview
This PR introduces a comprehensive monitoring and orchestration system for running parallel Ethereum snapsync operations across multiple networks (Hoodi, Sepolia, and Mainnet). The system continuously monitors sync progress, tracks block processing, logs results, sends Slack notifications, and automatically restarts successful runs in an infinite loop.
Key Changes:
- New Python monitoring script (`docker_monitor.py`) that tracks sync status with configurable timeouts and automatically restarts containers on success (see the sketch after this list)
- Docker Compose configuration supporting parallel multi-network deployments with isolated volumes
- Makefile targets for simplified operation (`multisync-up`, `multisync-loop`, `multisync-monitor`, etc.)
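As a rough illustration of the restart-on-success loop described above — not the actual contents of `docker_monitor.py`, which are not shown on this page — here is a minimal sketch, assuming the compose file path and service names from this PR and a placeholder success check:

```python
# Hypothetical sketch of the restart-on-success loop described in the overview.
# The compose file path and service names come from this PR; everything else
# (function names, polling interval, the success check) is an assumption.
import subprocess
import time

COMPOSE_FILE = "tooling/sync/docker-compose.multisync.yaml"
SERVICES = ["hoodi", "sepolia", "mainnet"]  # the three parallel networks

def check_success(service: str) -> bool:
    """Placeholder: the real script polls the node's RPC and tracks block processing."""
    return False

def restart_service(service: str) -> None:
    """Recreate one network's containers so it snapsyncs again from scratch."""
    subprocess.run(
        ["docker", "compose", "-f", COMPOSE_FILE, "up", "-d", "--force-recreate", service],
        check=True,
    )

def run_loop(poll_seconds: int = 60) -> None:
    """Poll each network; whenever one finishes successfully, restart it."""
    while True:
        for service in SERVICES:
            if check_success(service):
                restart_service(service)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_loop()
```

The real script layers timeouts, Slack notifications, and log persistence on top of this basic poll-and-restart cycle.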
Reviewed changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated 20 comments.
| File | Description |
|---|---|
| tooling/sync/docker_monitor.py | Core monitoring script implementing status tracking, RPC polling, Slack notifications, log persistence, and automatic restart orchestration (see the polling sketch after this table) |
| tooling/sync/docker-compose.multisync.yaml | Multi-network Docker Compose configuration with 4 network setups (hoodi, sepolia, mainnet, hoodi-2) each with isolated volumes and consensus clients |
| tooling/sync/Makefile | New Make targets for starting, stopping, monitoring, and managing multi-network sync operations |
| .gitignore | Exclusion of multisync_logs directory from version control |
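To make the "RPC polling" entry above concrete: a node's sync state can be read over standard Ethereum JSON-RPC (`eth_syncing`, `eth_blockNumber`). The sketch below is an assumption about how such polling could look; the endpoint URL and code structure are illustrative and not taken from `docker_monitor.py` itself.

```python
# Hedged sketch: polling an execution client's JSON-RPC endpoint for sync status.
# `eth_syncing` and `eth_blockNumber` are standard Ethereum JSON-RPC methods;
# the URL/port is an assumption and not necessarily what docker_monitor.py uses.
import json
import urllib.request

def rpc_call(url: str, method: str, params=None):
    """Issue a single JSON-RPC call and return its `result` field."""
    payload = json.dumps(
        {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or []}
    ).encode()
    req = urllib.request.Request(url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["result"]

def sync_status(url: str = "http://localhost:8545"):
    """Return (is_synced, latest_block); eth_syncing reports False once sync completes."""
    syncing = rpc_call(url, "eth_syncing")
    latest = int(rpc_call(url, "eth_blockNumber"), 16)
    return syncing is False, latest

if __name__ == "__main__":
    synced, block = sync_status()
    print(f"synced={synced} latest_block={block}")
```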
    def slack_notify(run_id: str, run_count: int, instances: list, hostname: str, branch: str, commit: str):
        """Send a single summary Slack message for the run."""
        all_success = all(i.status == "success" for i in instances)
        url = os.environ.get("SLACK_WEBHOOK_URL_SUCCESS" if all_success else "SLACK_WEBHOOK_URL_FAILED")
Copilot AI commented on Jan 7, 2026
The Slack webhook URLs are retrieved from environment variables without validation. If these URLs are compromised or point to an attacker-controlled endpoint, sensitive information about the sync process (hostname, branch, commit, network status) could be leaked. Consider validating that the webhook URLs match expected Slack webhook URL patterns or documenting this security consideration.
Suggested change (replacing the four lines quoted above):

    def _get_slack_webhook_url(all_success: bool) -> Optional[str]:
        """
        Retrieve and validate the Slack webhook URL from the environment.
        This ensures we only send run metadata to real Slack webhook endpoints.
        """
        env_var = "SLACK_WEBHOOK_URL_SUCCESS" if all_success else "SLACK_WEBHOOK_URL_FAILED"
        url = os.environ.get(env_var)
        if not url:
            return None
        # Basic validation: only allow standard Slack incoming webhook URLs.
        if not url.startswith("https://hooks.slack.com/services/"):
            print(f"⚠️ Ignoring invalid Slack webhook URL from {env_var}")
            return None
        return url

    def slack_notify(run_id: str, run_count: int, instances: list, hostname: str, branch: str, commit: str):
        """Send a single summary Slack message for the run."""
        all_success = all(i.status == "success" for i in instances)
        url = _get_slack_webhook_url(all_success)
This is an internal tool; we could enhance this, but it isn't an issue for now.
Motivation
Have a single server continuously syncing hoodi, sepolia and mainnet.
Description
This PR takes inspiration from the already present `server_runner.py` and creates a new `docker_monitor.py`, accompanied by make targets and a new docker compose file that by default spawns 3 nodes in parallel (hoodi, sepolia and mainnet) and monitors them in the following way: once a node reports `synced`, the monitor makes sure that it keeps processing blocks for at least 20 minutes (sketched below).
Status while running

Notification

History Log

Next Steps:
This is far from perfect, but it's working and adds a lot of value in its current form. Next we may want to:
- … (`server_runner.py`)

Closes #5718