fix dependency and redirect URL #30
Conversation
Commits:

- … and automation for the EO-1 project (#5) (#6)
  - Add initial project structure with configuration files, datasets, and example scripts
  - Update .gitignore to include new demo data paths, modify pre-commit configuration to exclude additional directories, and enhance README with more examples and installation instructions. Adjust dataset handling in pipeline configuration and dataset classes for improved training flexibility. Remove deprecated demo scripts and refine evaluation scripts for clarity.
  - Update .gitignore to include demo data paths, enhance README with additional examples, and modify Libero benchmark configuration files for improved clarity and structure. Adjust training scripts and evaluation settings across various experiments for consistency.
  - Remove fast testing workflow configuration from GitHub Actions
  - Update pre-commit configuration to refine exclusions, enhance README with structured examples, and remove unused imports in the EO model script.

  Co-authored-by: dlqu_0010 <[email protected]>
- …d in eo1-dev branch
- …y and performance.
- …rations for improved clarity, and adjust training scripts for consistency across experiments. Enhance README documentation for better guidance on dataset preparation and training processes.
- …dit checks for improved security analysis.
- …ure and functionality. Updated EO1VisionFlowMatchingConfig to inherit from PretrainedConfig, streamlined initialization, and added keys_to_ignore_at_inference. Enhanced EO1VisionProcessor to support new text processing capabilities and improved handling of robot inputs and outputs. Adjusted class names for consistency and clarity.
- … with integration details for EO-1 with LERobot. Refactor dataset handling in MultimodaLeRobotDataset and adjust model architecture in EO1VisionFlowMatchingModel for improved functionality. Update training utilities for better configuration management and streamline processor methods for action selection.
- …a environment directly.
- Update 'freeze_lm_head' option in TrainPipelineConfig for enhanced training flexibility. Refactor training utilities to align with new configuration settings.
- …r exclusion of model development files.
Pull Request Overview
This pull request introduces several improvements to the EO1 codebase, focusing on configuration flexibility, model sampling logic improvements, dependency updates, and repository renaming. The changes enhance the model's training capabilities by adding granular control over component freezing, improve action sampling consistency for chunked actions, and update the project to reflect the new EO1 repository naming convention.
- Added `freeze_lm_head` configuration option for independent control of language model head freezing
- Refactored action sampling logic in `modeling_eo1.py` to use consistent chunk size variables and improve cache handling
- Updated repository URLs, dependencies, and documentation to reflect the EO-1 to EO1 renaming
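As a hedged sketch of the first bullet, the freezing and LoRA-gating options might look like the following. `TrainPipelineConfig`, `configure_llm`, and the field names come from the PR summary, but the bodies are illustrative only, duck-typed to anything exposing `named_parameters()` (e.g. a `torch.nn.Module`):

```python
from dataclasses import dataclass


@dataclass
class TrainPipelineConfig:
    """Illustrative subset of the training config described in this PR."""
    lora_enable: bool = False
    vision_lora: bool = False
    freeze_llm: bool = True
    freeze_lm_head: bool = True

    def __post_init__(self):
        # Guard described in the summary: vision LoRA only takes effect
        # when LoRA itself is enabled.
        if self.vision_lora and not self.lora_enable:
            self.vision_lora = False


def configure_llm(model, cfg: TrainPipelineConfig):
    """Freeze the LM head independently from the rest of the LLM."""
    for name, param in model.named_parameters():
        if name.startswith("lm_head"):
            param.requires_grad = not cfg.freeze_lm_head
        else:
            param.requires_grad = not cfg.freeze_llm
```

The point of the separate flag is the last branch: previously the head presumably followed the same freeze decision as the rest of the language model.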
Reviewed Changes
Copilot reviewed 14 out of 15 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| tools/test_hf_model.py | Removed obsolete HuggingFace model test script |
| tests/test_vlm.py | Enhanced VLM test script with improved multi-turn conversation handling and image processing |
| pyproject.toml | Updated repository URLs and added new dependencies with version constraints |
| experiments/*.sh | Removed unnecessary environment script sourcing from training scripts |
| eo/train/train_utils.py | Updated LLM configuration to use new freeze_lm_head parameter |
| eo/train/pipeline_config.py | Added freeze_lm_head configuration option and improved LoRA logic |
| eo/model/modeling_qwen2_5_vl.py | Cleaned up imports, improved state handling, and removed unused generation methods |
| eo/model/modeling_eo1.py | Refactored action sampling with consistent chunk size usage and improved cache management |
| eo/data/schema.py | Made root parameter optional in LerobotConfig |
| README.md | Updated repository URLs and added troubleshooting section |
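The "consistent chunk size" refactor noted for `eo/model/modeling_eo1.py` can be illustrated with plain lists (the real code slices tensors, and `chunk_actions` is a made-up name for illustration):

```python
def chunk_actions(actions, chunk_size):
    """Split a flat action sequence into consecutive chunks of `chunk_size`.

    Using one local `chunk_size` for every slice avoids the bug class this
    refactor targets: a config-derived length used in one slice and a
    hard-coded length in another, which silently desynchronize.
    """
    return [actions[i:i + chunk_size] for i in range(0, len(actions), chunk_size)]
```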
This pull request introduces several improvements and fixes across the codebase, focusing on enhanced configuration flexibility, improved sampling logic, dependency updates, and documentation corrections. The most significant changes include new configuration options for model freezing, refined action sampling logic for chunked actions, dependency and documentation updates to reflect repository renaming, and improved usability in testing and troubleshooting.
Model configuration and training improvements:
- Added a `freeze_lm_head` option to `TrainPipelineConfig` and updated `configure_llm` to respect this setting, allowing users to freeze the language model head independently from the rest of the LLM. [1] [2]
- `vision_lora` is now only enabled if `lora_enable` is true, preventing configuration mismatches.

Action sampling and model logic:
- Refactored `sample_actions` in `modeling_eo1.py` to use a local `chunk_size` variable for clarity, improved cache handling, and ensured consistent use of chunk size in tensor slicing and action projection. [1] [2] [3] [4]
- Passed `states` to the model's forward call in `modeling_eo1.py` to support additional stateful operations.

Dependency and repository updates:
- Updated documentation and repository URLs to use the `EO1` naming instead of `EO-1`, and added/updated dependencies in `pyproject.toml` (e.g., version pinning for `lerobot`, new dependencies like `qwen_vl_utils` and `ujson`). [1] [2] [3] [4]
- Updated guidance on `flash-attn` in the setup instructions and clarified recommended installation methods.

Testing and troubleshooting enhancements:
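One quick check for the `flash-attn` setup issue mentioned above (a generic sketch, not code from this PR) is probing importability and falling back to PyTorch's built-in attention backend:

```python
import importlib.util


def pick_attn_implementation() -> str:
    """Return an attention backend name based on flash-attn availability."""
    if importlib.util.find_spec("flash_attn") is not None:
        return "flash_attention_2"
    # Fall back to PyTorch's scaled-dot-product attention.
    return "sdpa"
```

Probing with `find_spec` avoids actually importing the package, so a broken `flash-attn` build does not crash the check itself at lookup time.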
- Improved the VLM test script (`tests/test_vlm.py`) to handle image and grid inputs correctly across multiple turns, allowing for more robust interactive testing.
- Added a troubleshooting section to `README.md` for common issues such as missing FFmpeg installations.

Other notable changes:
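The missing-FFmpeg issue called out for the README troubleshooting section can be detected up front rather than deep inside video decoding; this is a generic sketch, not code from the PR:

```python
import shutil


def ffmpeg_available() -> bool:
    """True if an `ffmpeg` executable is on the PATH."""
    return shutil.which("ffmpeg") is not None
```

A dataset loader could call this at startup and raise a clear, actionable error ("install ffmpeg") instead of surfacing an opaque decoder failure later.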
- Made `root` in `LerobotConfig` optional for better flexibility in dataset configuration.
- Cleaned up imports in `modeling_qwen2_5_vl.py` and fixed logic in `prepare_inputs_for_generation` for state handling. [1] [2] [3]
- Removed sourcing of `env.sh` in various experiment scripts, streamlining environment setup. [1] [2] [3] [4] [5]
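The optional `root` change in `eo/data/schema.py` might look roughly like this dataclass sketch; `repo_id` is an assumed sibling field, and the real schema class may differ:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LerobotConfig:
    repo_id: str                # dataset identifier (assumed field name)
    root: Optional[str] = None  # local data root; None defers to a default cache
```

Making `root` default to `None` lets configs omit a local path entirely and rely on whatever default cache location the dataset loader chooses.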