[16.0][ADD] fs_attachment_s3_migration: enable S3 migration#1
Conversation
Walkthrough: Adds a new Odoo addon fs_attachment_s3_migration to migrate existing ir.attachment binaries into an S3-backed fs.storage via a storage-form wizard, queue channel, enqueue/batch worker logic, storage UI fields, tests, and documentation.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In @fs_attachment_s3_migration/data/queue_job_channel_data.xml:
- Line 12: Update the documentation URL in
fs_attachment_s3_migration/data/queue_job_channel_data.xml to point to the
queue_job branch matching this module (replace "18.0" with "16.0" in the GitHub
link), ensuring the comment reads
"https://github.com/OCA/queue/tree/16.0/queue_job" so the referenced docs align
with the module target version.
In @fs_attachment_s3_migration/models/ir_attachment.py:
- Around line 128-141: The _upload_to_storage method currently skips writing if
fs.exists(path), which can leave corrupted or mismatched files in place; modify
_upload_to_storage to, when fs.exists(path) is True, open and compare the
existing file's checksum (or raw bytes) against bin_data and only skip if they
match, otherwise overwrite (or log and overwrite) the file; keep the existing
mkdirs handling and ensure any IO exceptions are caught and logged via
_logger.debug/_logger.error to preserve robustness.
- Around line 353-398: The legacy helper _s3_resolve_migration_bytes is marked
"kept for compatibility" but lacks a deprecation notice; add a deprecation
docstring directive (e.g., ".. deprecated::") and emit a runtime warning at the
start of the method (use the module logger variable _logger or
logging.getLogger(__name__) to call .warning) recommending the replacement
method (e.g., _get_binary_data_for_checksum) so callers see the deprecation and
maintainers have guidance for future cleanup.
- Around line 317-327: The loop updates each attachment with two separate DB
writes (_force_write_store_fname and write({"fs_filename": fs_filename})),
doubling operations; modify _force_write_store_fname to accept an optional
fs_filename parameter or replace the per-attachment pattern by setting both
fields in one call (e.g., call att._force_write_store_fname(new_store_fname,
fs_filename=fs_filename) or att.write({ "store_fname": new_store_fname,
"fs_filename": fs_filename }) so only a single write occurs per attachment, or
else collect attachments and call attachments.write({"fs_filename":
fs_filename}) after the checksum group to batch-update fs_filename while keeping
_force_write_store_fname for store_fname changes.
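The checksum-comparison fix suggested above for _upload_to_storage could look roughly like the sketch below. It assumes an fsspec-like filesystem object (fs.exists/fs.open/fs.mkdirs) and the (fs, path, bin_data) signature described in the prompt; the byte comparison and log messages are illustrative, not the module's actual code.

import logging

_logger = logging.getLogger(__name__)


def _upload_to_storage(self, fs, path, bin_data):
    """Write bin_data to path, verifying any pre-existing file instead of blindly skipping."""
    dirname = path.rsplit("/", 1)[0] if "/" in path else ""
    if dirname and not fs.exists(dirname):
        fs.mkdirs(dirname, exist_ok=True)
    if fs.exists(path):
        try:
            with fs.open(path, "rb") as existing:
                if existing.read() == bin_data:
                    _logger.debug("File %s already present with matching content, skipping", path)
                    return
            _logger.warning("File %s exists with different content, overwriting", path)
        except OSError:
            _logger.debug("Could not read existing file %s, will overwrite", path)
    try:
        with fs.open(path, "wb") as f:
            f.write(bin_data)
    except OSError:
        _logger.exception("Failed to write file %s", path)
        raise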
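For the double-write issue, one of the options in the prompt is to keep _force_write_store_fname for store_fname and batch the fs_filename update once per checksum group. A rough sketch follows; the surrounding loop and variable names (storage_code, path, fs_filename, attachments) are assumed from the prompt, not copied from the module.

# Inside s3_migrate_batch, per checksum group (sketch):
new_store_fname = f"{storage_code}://{path}"
for att in attachments:
    # store_fname still needs the low-level helper to bypass ORM binary handling
    att._force_write_store_fname(new_store_fname)
# fs_filename is a regular field, so one batched write covers the whole group
attachments.write({"fs_filename": fs_filename})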
In @fs_attachment_s3_migration/readme/DESCRIPTION.md:
- Line 6: Reword the sentence fragment "stored in S3 or must remain in
PostgreSQL. This allows to run the process" for clarity — e.g. combine into a
single sentence like "stored in S3 or remain in PostgreSQL, allowing the process
to run" — and update the DESCRIPTION.md text so it reads smoothly and
grammatically correct.
In @fs_attachment_s3_migration/tests/test_attachment_migration.py:
- Around line 291-296: The test test_compute_s3_path_short_checksum highlights
that Attachment._compute_s3_path currently produces an empty path segment for
very short checksums; update _compute_s3_path to defensively validate checksum
length when optimize_path is True (e.g., only split into two 2-char directory
segments when len(checksum) >= 4) and return the raw checksum (or other sane
fallback) when too short, then update or add unit tests
(test_compute_s3_path_short_checksum) to assert the new, non-malformed behavior.
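The defensive _compute_s3_path behaviour asked for in the last item could be sketched as below; the two 2-character directory segments follow the prompt's description, and the method signature is an assumption.

def _compute_s3_path(self, checksum, optimize_path=True):
    """Build the storage path, falling back to the raw checksum when it is too short to split."""
    if optimize_path and checksum and len(checksum) >= 4:
        # e.g. "ab/cd/abcd1234..." -- two 2-char directory levels plus the full checksum
        return f"{checksum[:2]}/{checksum[2:4]}/{checksum}"
    return checksum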
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
fs_attachment_s3_migration/security/ir.model.access.csv is excluded by !**/*.csv
📒 Files selected for processing (19)
fs_attachment_s3_migration/README.rst
fs_attachment_s3_migration/__init__.py
fs_attachment_s3_migration/__manifest__.py
fs_attachment_s3_migration/data/queue_job_channel_data.xml
fs_attachment_s3_migration/models/__init__.py
fs_attachment_s3_migration/models/fs_storage.py
fs_attachment_s3_migration/models/ir_attachment.py
fs_attachment_s3_migration/readme/CONTRIBUTORS.md
fs_attachment_s3_migration/readme/DESCRIPTION.md
fs_attachment_s3_migration/readme/USAGE.md
fs_attachment_s3_migration/static/description/index.html
fs_attachment_s3_migration/tests/__init__.py
fs_attachment_s3_migration/tests/test_attachment_migration.py
fs_attachment_s3_migration/views/fs_storage_view.xml
fs_attachment_s3_migration/views/migration_wizard_views.xml
fs_attachment_s3_migration/wizard/__init__.py
fs_attachment_s3_migration/wizard/migration_wizard.py
setup/fs_attachment_s3_migration/odoo/addons/fs_attachment_s3_migration
setup/fs_attachment_s3_migration/setup.py
🧰 Additional context used
📓 Path-based instructions (3)
**/readme/*.*
⚙️ CodeRabbit configuration file
Never ask to add a top header in any file in these locations
Files:
fs_attachment_s3_migration/readme/USAGE.md
fs_attachment_s3_migration/readme/CONTRIBUTORS.md
fs_attachment_s3_migration/readme/DESCRIPTION.md
**/README.rst
⚙️ CodeRabbit configuration file
Never check this file
Files:
fs_attachment_s3_migration/README.rst
**/static/description/index.html
⚙️ CodeRabbit configuration file
Never check this file.
Files:
fs_attachment_s3_migration/static/description/index.html
🧬 Code graph analysis (2)
fs_attachment_s3_migration/tests/test_attachment_migration.py (1)
fs_attachment_s3_migration/models/ir_attachment.py (7)
_s3_migration_domain (26-46), _should_force_db (77-87), _compute_s3_path (90-94), s3_enqueue_migration (148-218), s3_migrate_batch (224-347), _get_binary_data_for_checksum (96-126), _upload_to_storage (128-141)
fs_attachment_s3_migration/wizard/migration_wizard.py (1)
fs_attachment_s3_migration/models/ir_attachment.py (1)
s3_enqueue_migration(148-218)
🪛 markdownlint-cli2 (0.18.1)
fs_attachment_s3_migration/readme/USAGE.md
1-1: First line in a file should be a top-level heading
(MD041, first-line-heading, first-line-h1)
fs_attachment_s3_migration/readme/CONTRIBUTORS.md
1-1: First line in a file should be a top-level heading
(MD041, first-line-heading, first-line-h1)
fs_attachment_s3_migration/readme/DESCRIPTION.md
1-1: First line in a file should be a top-level heading
(MD041, first-line-heading, first-line-h1)
🪛 Pylint (4.0.4)
fs_attachment_s3_migration/__manifest__.py
[error] 1-1: Unrecognized option found: readme-template-url, manifest-required-authors, manifest-required-keys, manifest-deprecated-keys, license-allowed, valid-odoo-versions
(E0015)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'api-one-deprecated'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'api-one-multi-together'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'attribute-deprecated'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'class-camelcase'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'dangerous-view-replace-wo-priority'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'development-status-allowed'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'duplicate-id-csv'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'duplicate-xml-fields'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'duplicate-xml-record-id'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'eval-referenced'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'incoherent-interpreter-exec-perm'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'license-allowed'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'manifest-author-string'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'manifest-deprecated-key'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'manifest-required-author'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'manifest-required-key'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'manifest-version-format'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'method-compute'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'method-inverse'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'method-required-super'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'method-search'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'openerp-exception-warning'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'print-used'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'redundant-modulename-xml'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'relative-import'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'rst-syntax-error'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'sql-injection'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'translation-field'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'translation-required'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'use-vim-comment'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'wrong-tabs-instead-of-spaces'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'xml-syntax-error'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'attribute-string-redundant'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'character-not-valid-in-resource-link'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'consider-merging-classes-inherited'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'context-overridden'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'create-user-wo-reset-password'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'dangerous-filter-wo-user'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'dangerous-qweb-replace-wo-priority'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'deprecated-data-xml-node'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'deprecated-openerp-xml-node'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'duplicate-po-message-definition'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'except-pass'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'file-not-used'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'invalid-commit'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'manifest-maintainers-list'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'missing-newline-extrafiles'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'missing-readme'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'missing-return'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'odoo-addons-relative-import'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'old-api7-method-defined'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'po-msgstr-variables'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'po-syntax-error'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'renamed-field-parameter'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'resource-not-exist'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'str-format-used'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'test-folder-imported'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'translation-contains-variable'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'translation-positional-used'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'unnecessary-utf8-coding-comment'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'website-manifest-key-not-valid-uri'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'xml-attribute-translatable'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'xml-deprecated-qweb-directive'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'xml-deprecated-tree-attribute'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'external-request-timeout'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'consider-merging-classes-inherited'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'create-user-wo-reset-password'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'dangerous-filter-wo-user'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'file-not-used'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'invalid-commit'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'missing-manifest-dependency'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'missing-newline-extrafiles'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'missing-readme'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'no-utf8-coding-comment'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'odoo-addons-relative-import'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'old-api7-method-defined'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'too-complex'
(W0012)
[warning] 1-1: Unknown option value for '--enable', expected a valid pylint message and got 'unnecessary-utf8-coding-comment'
(W0012)
[error] 1-1: Plugin 'pylint_odoo' is impossible to load, is it installed ? ('No module named 'pylint_odoo'')
(E0013)
[warning] 3-21: Statement seems to have no effect
(W0104)
🪛 Ruff (0.14.10)
fs_attachment_s3_migration/__manifest__.py
3-21: Found useless expression. Either assign it to a variable or remove it.
(B018)
fs_attachment_s3_migration/models/ir_attachment.py
26-26: Missing return type annotation for private function _s3_migration_domain
(ANN202)
32-38: Consider iterable unpacking instead of concatenation
(RUF005)
45-45: Consider ["!", *force_db_domain] instead of concatenation
Replace with ["!", *force_db_domain]
(RUF005)
49-49: Missing return type annotation for private function _s3_get_force_db_domain
(ANN202)
77-77: Missing return type annotation for private function _should_force_db
Add return type annotation: bool
(ANN202)
90-90: Missing return type annotation for private function _compute_s3_path
(ANN202)
96-96: Missing return type annotation for private function _get_binary_data_for_checksum
(ANN202)
128-128: Missing return type annotation for private function _upload_to_storage
Add return type annotation: None
(ANN202)
183-183: Consider [*domain, ("id", ">", last_id)] instead of concatenation
Replace with [*domain, ("id", ">", last_id)]
(RUF005)
278-278: Do not catch blind exception: Exception
(BLE001)
303-303: Do not catch blind exception: Exception
(BLE001)
304-309: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
353-353: Missing return type annotation for private function _s3_resolve_migration_bytes
(ANN202)
fs_attachment_s3_migration/tests/test_attachment_migration.py
14-14: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
23-23: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
24-24: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
25-25: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
35-35: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
37-37: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
39-39: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
41-41: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
43-43: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
60-60: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
61-61: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
68-68: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
75-75: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
81-81: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
82-82: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
83-83: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
90-90: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
91-91: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
94-94: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
95-95: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
101-101: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
104-104: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
110-110: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
116-116: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
123-123: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
151-151: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
176-176: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?
(RUF003)
177-177: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
202-202: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
209-209: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
220-220: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
224-224: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
225-225: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
232-232: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
250-250: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
255-255: Use a regular assert instead of unittest-style assertIsNone
Replace assertIsNone(...) with assert ...
(PT009)
262-262: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
269-269: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
276-276: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
286-286: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
289-289: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
296-296: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
fs_attachment_s3_migration/wizard/migration_wizard.py
43-43: Missing return type annotation for private function _get_default_channel
(ANN202)
49-49: Missing return type annotation for private function _compute_storage_code
Add return type annotation: None
(ANN202)
54-54: Missing return type annotation for private function _compute_batch_size
Add return type annotation: None
(ANN202)
62-62: Missing return type annotation for private function _compute_channel
Add return type annotation: None
(ANN202)
🔇 Additional comments (27)
setup/fs_attachment_s3_migration/odoo/addons/fs_attachment_s3_migration (1)
1-1: LGTM! Standard OCA addon packaging symlink. The relative path correctly resolves from setup/fs_attachment_s3_migration/odoo/addons/ to the module root at fs_attachment_s3_migration/.
fs_attachment_s3_migration/tests/__init__.py (1)
1-4: LGTM! Standard test package initialization with proper license header and test module import.
fs_attachment_s3_migration/readme/USAGE.md (1)
1-5: LGTM! Clear and concise usage instructions that accurately describe the migration workflow. The steps align with the wizard implementation and UI elements introduced in this PR.
Note: The MD041 (first-line-heading) lint warning is intentionally ignored per coding guidelines for readme files.
fs_attachment_s3_migration/static/description/index.html (1)
1-441: Skipping review per coding guidelines. This auto-generated file (static/description/index.html) is excluded from review as per project guidelines.
setup/fs_attachment_s3_migration/setup.py (1)
1-6: LGTM! Standard OCA addon setup script using setuptools-odoo for automated package metadata extraction from the module manifest.
fs_attachment_s3_migration/data/queue_job_channel_data.xml (1)
1-19: LGTM! The queue channel configuration is well-structured with appropriate noupdate="1" to prevent overwriting on module updates, and the helpful comment explains concurrency configuration options.
fs_attachment_s3_migration/readme/CONTRIBUTORS.md (1)
1-5: LGTM! The contributors list follows OCA module conventions. The static analysis hint about missing top-level heading is intentionally ignored as per coding guidelines for readme files.
fs_attachment_s3_migration/wizard/__init__.py (1)
1-4: LGTM! Standard Odoo package initializer with appropriate license header and module import.
fs_attachment_s3_migration/readme/DESCRIPTION.md (1)
1-7: LGTM! The description clearly explains the module's purpose and key features. The static analysis hint about missing top-level heading is intentionally ignored as per coding guidelines for readme files.
fs_attachment_s3_migration/models/__init__.py (1)
1-5: LGTM! Standard Odoo package initializer with appropriate license header and module imports for the migration functionality.
fs_attachment_s3_migration/__init__.py (1)
1-2: LGTM! Standard Odoo module initialization importing the models and wizard subpackages.
fs_attachment_s3_migration/README.rst (1)
1-94: Skipping review per coding guidelines. As per coding guidelines, README.rst files are excluded from review.
fs_attachment_s3_migration/views/fs_storage_view.xml (1)
1-27: LGTM! Well-structured view inheritance that:
- Properly inherits from fs_storage.fs_storage_form_view
- Conditionally shows the migration button only for S3 protocol storages
- Groups migration-related fields logically under "Migration Settings"
fs_attachment_s3_migration/views/migration_wizard_views.xml (1)
1-27: LGTM! Clean wizard form view following Odoo conventions with proper field grouping and standard footer buttons for confirm/cancel actions.
fs_attachment_s3_migration/__manifest__.py (1)
1-21: LGTM! Well-formed Odoo manifest with:
- Correct version format for Odoo 16.0
- Appropriate dependencies (queue_job for async processing, fs_attachment_s3 for S3 storage)
- Complete data file references
Note: The static analysis warnings (Ruff B018, Pylint W0104) about "useless expression" are false positives—this is standard Odoo manifest syntax where the dictionary is implicitly consumed by Odoo's module loader.
fs_attachment_s3_migration/models/fs_storage.py (2)
11-27: LGTM! Well-structured migration configuration fields. The field definitions provide sensible defaults and clear help text. The migration_use_storage_force_db_rules field appropriately defaults to True to prevent unintended migration of small assets.
29-45: LGTM! Wizard action correctly validates storage code. The ensure_one() check and storage code validation prevent runtime errors. The context correctly passes all relevant defaults to the wizard.
fs_attachment_s3_migration/wizard/migration_wizard.py (1)
14-40: LGTM! Computed fields with user override capability. The store=True, readonly=False pattern correctly allows computed defaults while permitting manual user edits. The conditional checks in compute methods (elif not wizard.batch_size) preserve user-entered values.
fs_attachment_s3_migration/tests/test_attachment_migration.py (3)
10-62: LGTM! Comprehensive domain construction tests. The tests verify that the migration domain correctly filters attachments by checksum, type, store_fname, db_datas, and res_field. The string-based domain inspection approach is pragmatic for verifying domain contents.
72-116: LGTM! Thorough helper method tests with good boundary coverage. The tests for _should_force_db cover empty rules, matching MIME types, size limits, and non-matching types. Path computation tests verify both optimized and flat structures.
127-202: LGTM! Enqueue tests properly mock queue_job dependencies. The tests correctly mock DelayableRecordset to prevent actual job creation while verifying the enqueue logic, batch limiting, and checksum ordering.
fs_attachment_s3_migration/models/ir_attachment.py (6)
11-15: LGTM! Well-documented workaround for ir.attachment search behavior. The RES_FIELD_DOMAIN tautology is correctly explained and necessary to prevent Odoo from auto-filtering field-linked attachments.
48-70: Domain construction uses correct OR grouping. The OR([domain, rule_domain]) pattern correctly groups rules, and each rule with a size limit uses AND() for proper grouping. The final domain will be properly structured for negation.
276-311: LGTM! Robust error handling with appropriate logging. The broad except Exception clauses (lines 278, 303) are appropriate here for migration resilience—individual failures shouldn't abort the entire batch. Warnings and errors are logged with sufficient context for debugging.
182-210: LGTM! Cursor-based pagination prevents memory issues. The pagination loop using a last_id cursor avoids loading all attachment IDs at once, which is essential for large datasets. Progress logging every 10 batches provides visibility without noise.
246-251: LGTM! Checksum grouping enables efficient deduplication. Grouping attachments by checksum before processing ensures each unique file is uploaded only once, then all related records are updated. The |= operator correctly accumulates attachments into the recordset.
25-46: Domain negation syntax is correct. The negation ["!"] + force_db_domain correctly applies to the entire compound domain returned by _s3_get_force_db_domain(). Odoo's domain syntax uses ! as a prefix operator that applies to the entire following expression. When force_db_domain contains nested operators from OR() (e.g., ['|', ...]) or AND() (e.g., ['&', ...]), prepending ["!"] produces the correct negation structure (e.g., ['!', '|', ...]), meaning "NOT (entire OR/AND expression)". No changes needed.
Likely an incorrect or invalid review comment.
wizard directly on the storage form.

Migrations are run in background batches, skipping attachments that are already
stored in S3 or must remain in PostgreSQL. This allows to run the process
Minor grammar improvement.
Consider rewording for better readability.
-stored in S3 or must remain in PostgreSQL. This allows to run the process
+stored in S3 or must remain in PostgreSQL. This allows running the process
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
stored in S3 or must remain in PostgreSQL. This allows running the process
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/readme/DESCRIPTION.md at line 6, Reword the
sentence fragment "stored in S3 or must remain in PostgreSQL. This allows to run
the process" for clarity — e.g. combine into a single sentence like "stored in
S3 or remain in PostgreSQL, allowing the process to run" — and update the
DESCRIPTION.md text so it reads smoothly and grammatically correct.
63380b7 to b09c5de
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In @fs_attachment_s3_migration/models/fs_storage.py:
- Around line 8-45: Ensure action_open_migration_wizard enforces server-side
guards and validates batch size: in FsStorage.action_open_migration_wizard call
self.check_access_rights('read') and self.check_access_rule('read') to verify
permissions, validate that the storage protocol is S3 (e.g. check a
protocol/storage_type attribute on self and raise UserError if not S3) and
ensure migration_batch_size > 0 (raise UserError if zero/negative); keep the
existing returned action and context keys (default_storage_id,
default_storage_code, default_batch_size, default_channel) unchanged and
reference the migration_batch_size and migration_channel fields when populating
the context.
In @fs_attachment_s3_migration/models/ir_attachment.py:
- Around line 266-276: The upload failure handler in the loop currently logs
errors with _logger.error and omits the traceback; replace the
_logger.error(...) call in the exception block that surrounds
self._upload_to_storage(fs, path, bin_data) with _logger.exception(...) so the
full traceback is recorded (keep the same descriptive message including checksum
and number of attachments, and you can remove explicit exception formatting
since exception() captures it); leave the skipped += len(attachments) and
continue behavior unchanged.
- Around line 127-132: The except block that catches OSError currently calls
_logger.error("Failed to write file %s: %s", path, e) then re-raises; change
this to use _logger.exception so the traceback is captured (e.g.
_logger.exception("Failed to write file %s", path)) and then re-raise; update
the except clause around fs.open/write in the same function/method where path
and bin_data are used to replace _logger.error with _logger.exception and keep
the raise to propagate the error.
- Around line 19-35: In _s3_migration_domain, replace the list concatenation
that builds base (currently using ("checksum", ... ) list + RES_FIELD_DOMAIN)
with iterable unpacking to make the syntax cleaner (use *RES_FIELD_DOMAIN when
constructing base); keep the existing logic around fs_storage,
migration_use_storage_force_db_rules, _s3_get_force_db_domain and the final
AND([...]) return unchanged.
- Around line 165-181: The cursor pagination is incompatible with ordering by
"checksum, id ASC" in the loop that builds cursor_domain and calls search;
change the search ordering to "id ASC" (remove "checksum,") so the id-based
cursor ("id", ">", last_id) correctly paginates through all attachments; keep
the existing call to s3_migrate_batch (which already groups by checksum) and
leave the cursor logic (cursor_domain, last_id, batch_size) unchanged.
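The id-ordered cursor loop described in the last item might look like the sketch below; the enqueue call and the surrounding variables (domain, batch_size, channel, storage_code) are assumptions taken from the prompt, not the module's exact code.

last_id = 0
while True:
    cursor_domain = [*domain, ("id", ">", last_id)]
    # Order strictly by id so the last record of the batch really is the highest id seen
    batch = self.search(cursor_domain, limit=batch_size, order="id ASC")
    if not batch:
        break
    batch.with_delay(channel=channel).s3_migrate_batch(storage_code)
    last_id = batch[-1].id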
In @fs_attachment_s3_migration/tests/test_attachment_migration.py:
- Around line 13-311: This test module triggers Ruff ANN206/PT009 due to missing
return type annotations on classmethods and use of unittest-style assertions;
fix by either adding a file-level ignore comment at the top (e.g. "# noqa:
ANN206, PT009") or by annotating every setUpClass signature with "-> None"
(update all setUpClass methods) and converting unittest assertions
(self.assertTrue, self.assertFalse, self.assertEqual, self.assertIn,
self.assertGreaterEqual, self.assertIsNone, etc.) to plain pytest-style asserts
(e.g. assert x, assert x == y, assert y in x) throughout the tests, ensuring
references like Attachment._should_force_db, Attachment._compute_s3_path,
Attachment.s3_enqueue_migration, Attachment.s3_migrate_batch,
Attachment._get_binary_data_for_checksum, and _upload_to_storage remain
unchanged.
In @fs_attachment_s3_migration/views/fs_storage_view.xml:
- Around line 9-24: Replace the legacy invisible="protocol != 's3'" on the
button with attrs="{'invisible': [('protocol','!=','s3')]}" (targeting the
button named action_open_migration_wizard) and also wrap the <group
string="Migration Settings"> (contains fields migration_batch_size,
migration_channel, migration_use_storage_force_db_rules) with the same attrs
condition so the entire group is hidden when protocol != 's3'; ensure you use
the exact attribute name attrs and domain-style tuple [('protocol','!=','s3')]
for Odoo 16 compatibility.
In @fs_attachment_s3_migration/wizard/migration_wizard.py:
- Around line 4-79: Ensure the wizard validates storage selection and fixes Ruff
ANN202 by adding a return type to _get_default_channel, and prevent
storage_id/storage_code mismatch by validating them in action_confirm: add a
return type annotation (-> str) to _get_default_channel, ensure
_compute_storage_code remains authoritative (compute storage_code from
storage_id in _compute_storage_code), then in action_confirm call
self.ensure_one(), require that self.storage_id is set and that
self.storage_id.code == self.storage_code (raise UserError with a clear message
if not), and keep existing batch/channel logic intact; this tightens validation
and resolves the ANN202 lint warning.
- Around line 69-79: The cursor pagination breaks because ordering by "checksum,
id ASC" means the batch's last_id isn’t the max id; update the search in
ir.attachment.s3_enqueue_migration so id is the primary sort key (e.g. use
order="id ASC, checksum ASC") and continue using id > last_id in cursor_domain,
or alternatively implement per-batch deduplication by checksum before enqueuing;
locate the search call and adjust the order and cursor logic accordingly
(symbols: s3_enqueue_migration, search(..., order=...), cursor_domain, last_id).
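A sketch of the tightened wizard validation described above; the field names (storage_id, storage_code, batch_size, channel) come from the prompt, while the exact s3_enqueue_migration call signature is an assumption.

from odoo import _
from odoo.exceptions import UserError


def action_confirm(self):
    self.ensure_one()
    if not self.storage_id:
        raise UserError(_("Please select a storage before starting the migration."))
    if self.storage_id.code != self.storage_code:
        raise UserError(_("The storage code does not match the selected storage."))
    # Assumed enqueue entry point; adjust to the actual method signature
    return self.env["ir.attachment"].s3_enqueue_migration(
        self.storage_code,
        batch_size=self.batch_size,
        channel=self.channel,
    )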
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
fs_attachment_s3_migration/data/queue_job_channel_data.xml
fs_attachment_s3_migration/models/fs_storage.py
fs_attachment_s3_migration/models/ir_attachment.py
fs_attachment_s3_migration/tests/test_attachment_migration.py
fs_attachment_s3_migration/views/fs_storage_view.xml
fs_attachment_s3_migration/wizard/migration_wizard.py
🧰 Additional context used
🧬 Code graph analysis (2)
fs_attachment_s3_migration/models/ir_attachment.py (3)
fs_storage/models/fs_storage.py (2)
get_by_code (203-209), fs (310-324)
fs_file/fields.py (1)
mimetype (101-116)
fs_attachment/models/ir_attachment.py (2)
_fs_mark_for_gc (553-581), _force_write_store_fname (491-498)
fs_attachment_s3_migration/wizard/migration_wizard.py (1)
fs_attachment_s3_migration/models/ir_attachment.py (1)
s3_enqueue_migration(135-199)
🪛 Ruff (0.14.10)
fs_attachment_s3_migration/models/ir_attachment.py
20-20: Missing return type annotation for private function _s3_migration_domain
(ANN202)
22-28: Consider iterable unpacking instead of concatenation
Replace with iterable unpacking
(RUF005)
34-34: Consider ["!", *force_db_domain] instead of concatenation
Replace with ["!", *force_db_domain]
(RUF005)
38-38: Missing return type annotation for private function _s3_get_force_db_domain
(ANN202)
56-56: Missing return type annotation for private function _should_force_db
Add return type annotation: bool
(ANN202)
69-69: Missing return type annotation for private function _compute_s3_path
(ANN202)
75-75: Missing return type annotation for private function _get_binary_data_for_checksum
(ANN202)
100-100: Missing return type annotation for private function _upload_to_storage
Add return type annotation: None
(ANN202)
131-131: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
166-166: Consider [*domain, ("id", ">", last_id)] instead of concatenation
Replace with [*domain, ("id", ">", last_id)]
(RUF005)
245-245: Do not catch blind exception: Exception
(BLE001)
268-268: Do not catch blind exception: Exception
(BLE001)
269-274: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
fs_attachment_s3_migration/tests/test_attachment_migration.py
14-14: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
23-23: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
24-24: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
25-25: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
35-35: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
37-37: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
39-39: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
41-41: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
43-43: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
60-60: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
61-61: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
68-68: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
75-75: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
81-81: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
82-82: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
83-83: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
90-90: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
91-91: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
94-94: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
95-95: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
101-101: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
104-104: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
110-110: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
116-116: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
123-123: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
151-151: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
176-176: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?
(RUF003)
177-177: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
202-202: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
209-209: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
220-220: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
224-224: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
225-225: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
232-232: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
250-250: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
255-255: Use a regular assert instead of unittest-style assertIsNone
Replace assertIsNone(...) with assert ...
(PT009)
262-262: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
269-269: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
276-276: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
286-286: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
289-289: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
296-296: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
fs_attachment_s3_migration/wizard/migration_wizard.py
43-43: Missing return type annotation for private function _get_default_channel
(ANN202)
49-49: Missing return type annotation for private function _compute_storage_code
Add return type annotation: None
(ANN202)
54-54: Missing return type annotation for private function _compute_batch_size
Add return type annotation: None
(ANN202)
62-62: Missing return type annotation for private function _compute_channel
Add return type annotation: None
(ANN202)
🔇 Additional comments (7)
fs_attachment_s3_migration/data/queue_job_channel_data.xml (1)
1-19: Queue channel record is correct; queue_job is properly declared as a hard dependency. The manifest lists queue_job as the first dependency, ensuring the queue.job.channel_root parent reference resolves before data loads. The noupdate="1" flag appropriately preserves any admin configuration of channel capacity after initial installation.
fs_attachment_s3_migration/models/ir_attachment.py (6)
1-13: LGTM! Clean imports and the RES_FIELD_DOMAIN tautology is a well-documented workaround for Odoo's default attachment search behavior.
37-66: LGTM! The force-DB domain construction and runtime check are consistent with each other. Proper handling of edge cases (empty rules, None values).
68-73: LGTM! Standard directory optimization pattern for large object stores. SHA256 checksums guarantee sufficient length for the path splitting.
75-98: LGTM! Smart fallback strategy: first try DB-stored attachments (outside migration scope), then try any attachment with matching checksum. The prefetch_fields=False context is a good performance optimization for large datasets.
201-242: Well-designed checksum-based deduplication. Grouping attachments by checksum before processing ensures each unique file is uploaded only once, with all related records updated to point to the same S3 path. This is an efficient approach for large-scale migrations.
281-288: The original concern is unfounded. In this codebase, store_fname is always stored in the format code://path (e.g., s3://file/path or ftp://file/path), never as a bare local path like ab/cdef123. The fs_attachment module, which is a dependency of this migration module, consistently sets store_fname using the f"{storage_code}://{path}" pattern.
The _fs_mark_for_gc method correctly handles this format by extracting the storage code via partition("://")[0]. The check at line 283 prevents marking files that are already in the target storage (by checking startswith(f"{storage_code}://")), while correctly marking files from other storages for garbage collection—which is the intended behavior during migration.
class FsStorage(models.Model):
    _inherit = "fs.storage"

    migration_batch_size = fields.Integer(
        default=500,
        help="Number of attachments per background job batch.",
    )

    migration_channel = fields.Char(
        string="Queue Channel",
        default="root.s3_migration",
        help="queue_job channel to use for migration jobs.",
    )

    migration_use_storage_force_db_rules = fields.Boolean(
        string="Use Storage Force-DB Rules",
        default=True,
        help="If checked, respect force_db_for_default_attachment_rules during "
        "migration. Small images and assets will be skipped.",
    )

    def action_open_migration_wizard(self):
        """Open the S3 migration wizard for this storage."""
        self.ensure_one()
        if not self.code:
            raise UserError(_("Storage must have a code to run migration."))
        return {
            "type": "ir.actions.act_window",
            "res_model": "s3.migration.wizard",
            "view_mode": "form",
            "target": "new",
            "context": {
                "default_storage_id": self.id,
                "default_storage_code": self.code,
                "default_batch_size": self.migration_batch_size,
                "default_channel": self.migration_channel,
            },
        }
Add server-side guards (protocol + permissions) and validate batch size.
Right now any caller with model access can invoke the method regardless of storage protocol; batch size can be set to 0/negative.
Proposed fix
-from odoo import _, fields, models
+from odoo import _, api, fields, models
 from odoo.exceptions import UserError
 class FsStorage(models.Model):
     _inherit = "fs.storage"
@@
     migration_use_storage_force_db_rules = fields.Boolean(
@@
     )
+    @api.constrains("migration_batch_size")
+    def _check_migration_batch_size(self):
+        for rec in self:
+            if rec.migration_batch_size is not None and rec.migration_batch_size <= 0:
+                raise UserError(_("Migration batch size must be a positive integer."))
+
     def action_open_migration_wizard(self):
         """Open the S3 migration wizard for this storage."""
         self.ensure_one()
+        # UI visibility is not a security boundary.
+        if getattr(self, "protocol", None) != "s3":
+            raise UserError(_("Migration is only available for S3 storages."))
+        # Consider tightening this to a dedicated group for storage admins.
+        if not self.env.user.has_group("base.group_system"):
+            raise UserError(_("You don't have permission to enqueue an attachment migration."))
         if not self.code:
             raise UserError(_("Storage must have a code to run migration."))
         return {
             "type": "ir.actions.act_window",
             "res_model": "s3.migration.wizard",
             "view_mode": "form",
             "target": "new",
             "context": {
                 "default_storage_id": self.id,
                 "default_storage_code": self.code,
                 "default_batch_size": self.migration_batch_size,
                 "default_channel": self.migration_channel,
             },
         }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
from odoo import _, api, fields, models
from odoo.exceptions import UserError


class FsStorage(models.Model):
    _inherit = "fs.storage"

    migration_batch_size = fields.Integer(
        default=500,
        help="Number of attachments per background job batch.",
    )

    migration_channel = fields.Char(
        string="Queue Channel",
        default="root.s3_migration",
        help="queue_job channel to use for migration jobs.",
    )

    migration_use_storage_force_db_rules = fields.Boolean(
        string="Use Storage Force-DB Rules",
        default=True,
        help="If checked, respect force_db_for_default_attachment_rules during "
        "migration. Small images and assets will be skipped.",
    )

    @api.constrains("migration_batch_size")
    def _check_migration_batch_size(self):
        for rec in self:
            if rec.migration_batch_size is not None and rec.migration_batch_size <= 0:
                raise UserError(_("Migration batch size must be a positive integer."))

    def action_open_migration_wizard(self):
        """Open the S3 migration wizard for this storage."""
        self.ensure_one()
        # UI visibility is not a security boundary.
        if getattr(self, "protocol", None) != "s3":
            raise UserError(_("Migration is only available for S3 storages."))
        # Consider tightening this to a dedicated group for storage admins.
        if not self.env.user.has_group("base.group_system"):
            raise UserError(_("You don't have permission to enqueue an attachment migration."))
        if not self.code:
            raise UserError(_("Storage must have a code to run migration."))
        return {
            "type": "ir.actions.act_window",
            "res_model": "s3.migration.wizard",
            "view_mode": "form",
            "target": "new",
            "context": {
                "default_storage_id": self.id,
                "default_storage_code": self.code,
                "default_batch_size": self.migration_batch_size,
                "default_channel": self.migration_channel,
            },
        }
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/fs_storage.py around lines 8 - 45, Ensure
action_open_migration_wizard enforces server-side guards and validates batch
size: in FsStorage.action_open_migration_wizard call
self.check_access_rights('read') and self.check_access_rule('read') to verify
permissions, validate that the storage protocol is S3 (e.g. check a
protocol/storage_type attribute on self and raise UserError if not S3) and
ensure migration_batch_size > 0 (raise UserError if zero/negative); keep the
existing returned action and context keys (default_storage_id,
default_storage_code, default_batch_size, default_channel) unchanged and
reference the migration_batch_size and migration_channel fields when populating
the context.
    @api.model
    def _s3_migration_domain(self, storage_code):
        """Build domain for attachments eligible for migration."""
        base = [
            ("checksum", "!=", False),
            ("type", "=", "binary"),
            ("store_fname", "!=", False),
            ("store_fname", "not like", f"{storage_code}://%"),
            ("db_datas", "=", False),
        ] + RES_FIELD_DOMAIN

        fs_storage = self.env["fs.storage"].sudo().get_by_code(storage_code)
        if fs_storage and fs_storage.migration_use_storage_force_db_rules:
            force_db_domain = self._s3_get_force_db_domain(storage_code)
            if force_db_domain:
                return AND([base, ["!"] + force_db_domain])
        return base
🧹 Nitpick | 🔵 Trivial
Well-structured migration domain logic.
The domain correctly excludes URL-based attachments, DB-stored files, and already-migrated files. The optional force-DB exclusion is a nice configuration option.
Minor: Per static analysis (RUF005), you could use iterable unpacking for slightly cleaner syntax:
♻️ Optional style improvement
-        base = [
-            ("checksum", "!=", False),
-            ("type", "=", "binary"),
-            ("store_fname", "!=", False),
-            ("store_fname", "not like", f"{storage_code}://%"),
-            ("db_datas", "=", False),
-        ] + RES_FIELD_DOMAIN
+        base = [
+            ("checksum", "!=", False),
+            ("type", "=", "binary"),
+            ("store_fname", "!=", False),
+            ("store_fname", "not like", f"{storage_code}://%"),
+            ("db_datas", "=", False),
+            *RES_FIELD_DOMAIN,
+        ]
+ ]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    @api.model
    def _s3_migration_domain(self, storage_code):
        """Build domain for attachments eligible for migration."""
        base = [
            ("checksum", "!=", False),
            ("type", "=", "binary"),
            ("store_fname", "!=", False),
            ("store_fname", "not like", f"{storage_code}://%"),
            ("db_datas", "=", False),
            *RES_FIELD_DOMAIN,
        ]

        fs_storage = self.env["fs.storage"].sudo().get_by_code(storage_code)
        if fs_storage and fs_storage.migration_use_storage_force_db_rules:
            force_db_domain = self._s3_get_force_db_domain(storage_code)
            if force_db_domain:
                return AND([base, ["!"] + force_db_domain])
        return base
🧰 Tools
🪛 Ruff (0.14.10)
20-20: Missing return type annotation for private function _s3_migration_domain
(ANN202)
22-28: Consider iterable unpacking instead of concatenation
Replace with iterable unpacking
(RUF005)
34-34: Consider ["!", *force_db_domain] instead of concatenation
Replace with ["!", *force_db_domain]
(RUF005)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/ir_attachment.py around lines 19 - 35, In
_s3_migration_domain, replace the list concatenation that builds base (currently
using ("checksum", ... ) list + RES_FIELD_DOMAIN) with iterable unpacking to
make the syntax cleaner (use *RES_FIELD_DOMAIN when constructing base); keep the
existing logic around fs_storage, migration_use_storage_force_db_rules,
_s3_get_force_db_domain and the final AND([...]) return unchanged.
            try:
                self._upload_to_storage(fs, path, bin_data)
            except Exception as e:
                _logger.error(
                    "Upload failed for %s: %s, skipping %d attachments",
                    checksum,
                    e,
                    len(attachments),
                )
                skipped += len(attachments)
                continue
🧹 Nitpick | 🔵 Trivial
Use logging.exception for upload failures.
When catching broad exceptions in batch processing, including the full traceback helps diagnose issues across different failure modes.
♻️ Suggested improvement
            try:
                self._upload_to_storage(fs, path, bin_data)
            except Exception as e:
-                _logger.error(
+                _logger.exception(
                    "Upload failed for %s: %s, skipping %d attachments",
                    checksum,
                    e,
                    len(attachments),
                )
                skipped += len(attachments)
                continue

🧰 Tools
🪛 Ruff (0.14.10)
268-268: Do not catch blind exception: Exception
(BLE001)
269-274: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/ir_attachment.py around lines 266 - 276,
The upload failure handler in the loop currently logs errors with _logger.error
and omits the traceback; replace the _logger.error(...) call in the exception
block that surrounds self._upload_to_storage(fs, path, bin_data) with
_logger.exception(...) so the full traceback is recorded (keep the same
descriptive message including checksum and number of attachments, and you can
remove explicit exception formatting since exception() captures it); leave the
skipped += len(attachments) and continue behavior unchanged.
class TestS3MigrationDomain(TransactionCase):
    """Test suite for migration domain building."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.Attachment = cls.env["ir.attachment"]

    def test_migration_domain_excludes_already_migrated(self):
        """Test that migration domain excludes already-migrated files."""
        domain = self.Attachment._s3_migration_domain("test_s3")

        domain_str = str(domain)
        self.assertIn("test_s3", domain_str)
        self.assertIn("store_fname", domain_str)
        self.assertIn("not like", domain_str)

    def test_migration_domain_completeness(self):
        """Verify domain includes all required filters."""
        domain = self.Attachment._s3_migration_domain("test_s3")

        # Convert to string for easier checking
        domain_str = str(domain)

        # Must filter by checksum (has binary content)
        self.assertIn("checksum", domain_str)
        # Must filter by type=binary
        self.assertIn("type", domain_str)
        # Must filter by store_fname (not in target)
        self.assertIn("store_fname", domain_str)
        # Must filter by db_datas=False (not in database)
        self.assertIn("db_datas", domain_str)
        # Must include res_field tautology
        self.assertIn("res_field", domain_str)

    def test_migration_domain_excludes_url_attachments(self):
        """Verify domain excludes URL-type attachments."""
        domain = self.Attachment._s3_migration_domain("test_s3")

        # Check type=binary filter is present
        type_filter = [d for d in domain if isinstance(d, tuple) and d[0] == "type"]
        self.assertTrue(type_filter)
        self.assertEqual(type_filter[0], ("type", "=", "binary"))

    def test_migration_domain_excludes_db_stored(self):
        """Verify domain excludes attachments stored in database."""
        domain = self.Attachment._s3_migration_domain("test_s3")

        # Check db_datas=False filter is present
        db_filter = [d for d in domain if isinstance(d, tuple) and d[0] == "db_datas"]
        self.assertTrue(db_filter)
        self.assertEqual(db_filter[0], ("db_datas", "=", False))


class TestS3MigrationHelpers(TransactionCase):
    """Test suite for migration helper methods."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.Attachment = cls.env["ir.attachment"]

    def test_should_force_db_empty_rules(self):
        """Empty force_db_rules should always return False."""
        result = self.Attachment._should_force_db("image/png", 1000, {})
        self.assertFalse(result)

    def test_should_force_db_matching_mime(self):
        """Matching mimetype with limit=0 should return True."""
        rules = {"image/": 0, "text/css": 0}

        self.assertTrue(self.Attachment._should_force_db("image/png", 5000, rules))
        self.assertTrue(self.Attachment._should_force_db("image/jpeg", 100, rules))
        self.assertTrue(self.Attachment._should_force_db("text/css", 1, rules))

    def test_should_force_db_with_size_limit(self):
        """Size limit should be respected for matching mimetype."""
        rules = {"image/": 51200}  # 50KB limit

        # Under limit - should be forced to DB
        self.assertTrue(self.Attachment._should_force_db("image/png", 1000, rules))
        self.assertTrue(self.Attachment._should_force_db("image/png", 51200, rules))

        # Over limit - should not be forced to DB
        self.assertFalse(self.Attachment._should_force_db("image/png", 51201, rules))
        self.assertFalse(self.Attachment._should_force_db("image/png", 100000, rules))

    def test_should_force_db_non_matching_mime(self):
        """Non-matching mimetype should return False."""
        rules = {"image/": 51200, "text/css": 0}

        self.assertFalse(
            self.Attachment._should_force_db("application/pdf", 100, rules)
        )
        self.assertFalse(self.Attachment._should_force_db("text/html", 100, rules))

    def test_compute_s3_path_optimized(self):
        """Optimized path should use hierarchical structure."""
        checksum = "abc123def456"
        path = self.Attachment._compute_s3_path(checksum, optimize_path=True)
        self.assertEqual(path, "ab/c1/abc123def456")

    def test_compute_s3_path_flat(self):
        """Non-optimized path should use flat structure."""
        checksum = "abc123def456"
        path = self.Attachment._compute_s3_path(checksum, optimize_path=False)
        self.assertEqual(path, "abc123def456")


class TestS3MigrationEnqueue(TransactionCase):
    """Test suite for migration enqueue functionality."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.Attachment = cls.env["ir.attachment"]

    def test_enqueue_migration_returns_count(self):
        """Test that enqueue_migration returns correct attachment count."""
        # Create test attachments
        self.Attachment.create(
            [
                {
                    "name": f"test{i}.txt",
                    "raw": b"test content",
                }
                for i in range(5)
            ]
        )

        # Mock DelayableRecordset to prevent actual queue_job creation
        with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable:
            mock_instance = MagicMock()
            mock_delayable.return_value = mock_instance

            total = self.Attachment.s3_enqueue_migration(
                "test_s3",
                batch_size=10,
            )

        # Should return count >= 5 (our attachments + any existing ones)
        self.assertGreaterEqual(total, 5)

    def test_enqueue_migration_respects_max_batches(self):
        """Test that max_batches parameter limits the number of batches."""
        # Create 30 attachments
        self.Attachment.create(
            [
                {
                    "name": f"batch_test{i}.txt",
                    "raw": b"content",
                }
                for i in range(30)
            ]
        )

        with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable:
            mock_delayable.return_value = MagicMock()

            # Limit to 2 batches of 10
            total = self.Attachment.s3_enqueue_migration(
                "test_s3",
                batch_size=10,
                max_batches=2,
            )

        # Should stop at 20 (2 batches × 10)
        self.assertEqual(total, 20)
❓ Verification inconclusive
Script executed:
git ls-files | grep -E "(s3_migration|queue_job)" | head -20
Repository: cetmix/oca-storage
Exit code: 0
stdout:
fs_attachment_s3_migration/README.rst
fs_attachment_s3_migration/__init__.py
fs_attachment_s3_migration/__manifest__.py
fs_attachment_s3_migration/data/queue_job_channel_data.xml
fs_attachment_s3_migration/models/__init__.py
fs_attachment_s3_migration/models/fs_storage.py
fs_attachment_s3_migration/models/ir_attachment.py
fs_attachment_s3_migration/readme/CONTRIBUTORS.md
fs_attachment_s3_migration/readme/DESCRIPTION.md
fs_attachment_s3_migration/readme/USAGE.md
fs_attachment_s3_migration/security/ir.model.access.csv
fs_attachment_s3_migration/static/description/index.html
fs_attachment_s3_migration/tests/__init__.py
fs_attachment_s3_migration/tests/test_attachment_migration.py
fs_attachment_s3_migration/views/fs_storage_view.xml
fs_attachment_s3_migration/views/migration_wizard_views.xml
fs_attachment_s3_migration/wizard/__init__.py
fs_attachment_s3_migration/wizard/migration_wizard.py
setup/fs_attachment_s3_migration/odoo/addons/fs_attachment_s3_migration
setup/fs_attachment_s3_migration/setup.py
Script executed:
find . -type f -name "*.py" | grep -E "attachment|migration" | head -20
Repository: cetmix/oca-storage
Exit code: 0
stdout:
./fs_attachment/__init__.py
./fs_attachment/models/__init__.py
./fs_attachment/models/fs_file_gc.py
./fs_attachment/models/fs_storage.py
./fs_attachment/models/ir_attachment.py
./fs_attachment/models/ir_binary.py
./fs_attachment/models/ir_model.py
./fs_attachment/models/ir_model_fields.py
./fs_attachment/models/strtobool.py
./fs_attachment/tests/__init__.py
./fs_attachment/tests/common.py
./fs_attachment/tests/test_fs_attachment_file_like_adapter.py
./fs_attachment/tests/test_fs_attachment_internal_url.py
./fs_attachment/tests/test_fs_attachment.py
./fs_attachment/tests/test_fs_storage.py
./fs_attachment/tests/test_stream.py
./fs_attachment/__manifest__.py
./fs_attachment/fs_stream.py
./fs_attachment/hooks.py
./fs_attachment_s3/__init__.py
Script executed:
cat -n fs_attachment_s3_migration/tests/test_attachment_migration.py | head -50
Repository: cetmix/oca-storage
Exit code: 0
stdout:
1 # Copyright 2025 Cetmix OU
2 # License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
3 """Tests for S3 attachment migration functionality."""
4
5 from unittest.mock import MagicMock, patch
6
7 from odoo.tests.common import TransactionCase
8
9
10 class TestS3MigrationDomain(TransactionCase):
11 """Test suite for migration domain building."""
12
13 @classmethod
14 def setUpClass(cls):
15 super().setUpClass()
16 cls.Attachment = cls.env["ir.attachment"]
17
18 def test_migration_domain_excludes_already_migrated(self):
19 """Test that migration domain excludes already-migrated files."""
20 domain = self.Attachment._s3_migration_domain("test_s3")
21
22 domain_str = str(domain)
23 self.assertIn("test_s3", domain_str)
24 self.assertIn("store_fname", domain_str)
25 self.assertIn("not like", domain_str)
26
27 def test_migration_domain_completeness(self):
28 """Verify domain includes all required filters."""
29 domain = self.Attachment._s3_migration_domain("test_s3")
30
31 # Convert to string for easier checking
32 domain_str = str(domain)
33
34 # Must filter by checksum (has binary content)
35 self.assertIn("checksum", domain_str)
36 # Must filter by type=binary
37 self.assertIn("type", domain_str)
38 # Must filter by store_fname (not in target)
39 self.assertIn("store_fname", domain_str)
40 # Must filter by db_datas=False (not in database)
41 self.assertIn("db_datas", domain_str)
42 # Must include res_field tautology
43 self.assertIn("res_field", domain_str)
44
45 def test_migration_domain_excludes_url_attachments(self):
46 """Verify domain excludes URL-type attachments."""
47 domain = self.Attachment._s3_migration_domain("test_s3")
48
49 # Check type=binary filter is present
50 type_filter = [d for d in domain if isinstance(d, tuple) and d[0] == "type"]
Script executed:
find . -type f -name "*.py" -path "*/fs_attachment_s3_migration/*" ! -path "*/tests/*" | head -10
Repository: cetmix/oca-storage
Exit code: 0
stdout:
./fs_attachment_s3_migration/__init__.py
./fs_attachment_s3_migration/__manifest__.py
./fs_attachment_s3_migration/models/__init__.py
./fs_attachment_s3_migration/models/fs_storage.py
./fs_attachment_s3_migration/models/ir_attachment.py
./fs_attachment_s3_migration/wizard/__init__.py
./fs_attachment_s3_migration/wizard/migration_wizard.py
./setup/fs_attachment_s3_migration/setup.py
Script executed:
# Search for with_delay usage in the codebase
rg "with_delay" --type pyRepository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
fs_attachment_s3_migration/models/ir_attachment.py: self.browse(ids).with_delay(
Script executed:
# Search for _s3_migration_domain implementation
ast-grep --pattern 'def _s3_migration_domain($_, $_) {
$$$
}'
Repository: cetmix/oca-storage
Exit code: 1
Script executed:
# Look for s3_enqueue_migration implementation
rg "def s3_enqueue_migration" --type py -A 20Repository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
fs_attachment_s3_migration/models/ir_attachment.py: def s3_enqueue_migration(
fs_attachment_s3_migration/models/ir_attachment.py- self,
fs_attachment_s3_migration/models/ir_attachment.py- storage_code,
fs_attachment_s3_migration/models/ir_attachment.py- batch_size=500,
fs_attachment_s3_migration/models/ir_attachment.py- max_batches=None,
fs_attachment_s3_migration/models/ir_attachment.py- channel="root.s3_migration",
fs_attachment_s3_migration/models/ir_attachment.py- max_retries=None,
fs_attachment_s3_migration/models/ir_attachment.py- ):
fs_attachment_s3_migration/models/ir_attachment.py- """Enqueue migration jobs using cursor pagination."""
fs_attachment_s3_migration/models/ir_attachment.py- domain = self._s3_migration_domain(storage_code)
fs_attachment_s3_migration/models/ir_attachment.py- total_enqueued = 0
fs_attachment_s3_migration/models/ir_attachment.py- batches = 0
fs_attachment_s3_migration/models/ir_attachment.py- last_id = 0
fs_attachment_s3_migration/models/ir_attachment.py-
fs_attachment_s3_migration/models/ir_attachment.py- force_db_config = self.env[
fs_attachment_s3_migration/models/ir_attachment.py- "fs.storage"
fs_attachment_s3_migration/models/ir_attachment.py- ].get_force_db_for_default_attachment_rules(storage_code)
fs_attachment_s3_migration/models/ir_attachment.py- if force_db_config:
fs_attachment_s3_migration/models/ir_attachment.py- _logger.info(
fs_attachment_s3_migration/models/ir_attachment.py- "Migration will exclude force-DB files: %s",
fs_attachment_s3_migration/models/ir_attachment.py- force_db_config,
Script executed:
# Check how attachments are created and stored
rg "def create" fs_attachment_s3_migration --type py -B 2 -A 15Repository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 1
Script executed:
# Check the default attachment storage location config
rg "ir_attachment.location" --type pyRepository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
fs_attachment/tests/test_fs_attachment_file_like_adapter.py: self.env["ir.config_parameter"].sudo().set_param("ir_attachment.location", "db")
fs_attachment/tests/test_fs_attachment_file_like_adapter.py: "ir_attachment.location", "file"
fs_attachment/tests/test_fs_attachment.py: self.env["ir.config_parameter"].sudo().set_param("ir_attachment.location", "db")
fs_attachment/tests/test_fs_attachment.py: "ir_attachment.location", "file"
Script executed:
cat -n fs_attachment_s3_migration/models/ir_attachment.py
Repository: cetmix/oca-storage
Exit code: 0
stdout:
1 # Copyright 2025 Cetmix OU
2 # License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
3
4 import logging
5
6 from odoo import api, models
7 from odoo.osv.expression import AND, OR
8
9 _logger = logging.getLogger(__name__)
10
11 # Tautology to include field-linked attachments in searches.
12 # Odoo's ir.attachment._search adds ('res_field', '=', False) by default.
13 RES_FIELD_DOMAIN = ["|", ("res_field", "=", False), ("res_field", "!=", False)]
14
15
16 class IrAttachment(models.Model):
17 _inherit = "ir.attachment"
18
19 @api.model
20 def _s3_migration_domain(self, storage_code):
21 """Build domain for attachments eligible for migration."""
22 base = [
23 ("checksum", "!=", False),
24 ("type", "=", "binary"),
25 ("store_fname", "!=", False),
26 ("store_fname", "not like", f"{storage_code}://%"),
27 ("db_datas", "=", False),
28 ] + RES_FIELD_DOMAIN
29
30 fs_storage = self.env["fs.storage"].sudo().get_by_code(storage_code)
31 if fs_storage and fs_storage.migration_use_storage_force_db_rules:
32 force_db_domain = self._s3_get_force_db_domain(storage_code)
33 if force_db_domain:
34 return AND([base, ["!"] + force_db_domain])
35 return base
36
37 @api.model
38 def _s3_get_force_db_domain(self, storage_code):
39 """Get domain for attachments that must stay in DB."""
40 force_db_rules = self.env[
41 "fs.storage"
42 ].get_force_db_for_default_attachment_rules(storage_code)
43 if not force_db_rules:
44 return []
45
46 domain = None
47 for mimetype_key, size_limit in force_db_rules.items():
48 rule_domain = [("mimetype", "=like", f"{mimetype_key}%")]
49 if size_limit:
50 rule_domain = AND([rule_domain, [("file_size", "<=", size_limit)]])
51 domain = OR([domain, rule_domain]) if domain else rule_domain
52
53 return domain or []
54
55 @api.model
56 def _should_force_db(self, mimetype, file_size, force_db_rules):
57 """Check if attachment should stay in DB based on force_db rules."""
58 if not force_db_rules:
59 return False
60 mimetype = mimetype or ""
61 file_size = file_size or 0
62 for mime_prefix, limit in force_db_rules.items():
63 if mimetype.startswith(mime_prefix):
64 if limit == 0 or file_size <= limit:
65 return True
66 return False
67
68 @api.model
69 def _compute_s3_path(self, checksum, optimize_path):
70 """Compute S3 storage path for a given checksum."""
71 if optimize_path:
72 return f"{checksum[:2]}/{checksum[2:4]}/{checksum}"
73 return checksum
74
75 def _get_binary_data_for_checksum(self, checksum):
76 """Get binary data for a checksum from any available source."""
77 domain = AND(
78 [[("checksum", "=", checksum), ("db_datas", "!=", False)], RES_FIELD_DOMAIN]
79 )
80 donor = self.search(domain, limit=1)
81 if donor:
82 try:
83 return donor.raw
84 except OSError as e:
85 _logger.debug(
86 "Failed to read from DB donor attachment %s: %s", donor.id, e
87 )
88
89 domain = AND([[("checksum", "=", checksum)], RES_FIELD_DOMAIN])
90 candidates = self.with_context(prefetch_fields=False).search(domain, limit=5)
91 for candidate in candidates:
92 try:
93 data = candidate.raw
94 if data:
95 return data
96 except OSError:
97 continue
98 return None
99
100 def _upload_to_storage(self, fs, path, bin_data):
101 """Upload binary data to storage with content verification."""
102 dirname = "/".join(path.split("/")[:-1])
103 if dirname:
104 try:
105 fs.makedirs(dirname, exist_ok=True)
106 except OSError as e:
107 _logger.debug("Directory %s may already exist: %s", dirname, e)
108
109 expected_size = len(bin_data)
110
111 if fs.exists(path):
112 try:
113 existing_size = fs.info(path).get("size", -1)
114 if existing_size == expected_size:
115 return
116 _logger.warning(
117 "Existing file %s has mismatched size (%d vs %d), overwriting",
118 path,
119 existing_size,
120 expected_size,
121 )
122 except OSError as e:
123 _logger.debug(
124 "Cannot verify existing file %s: %s, overwriting", path, e
125 )
126
127 try:
128 with fs.open(path, "wb") as f:
129 f.write(bin_data)
130 except OSError as e:
131 _logger.error("Failed to write file %s: %s", path, e)
132 raise
133
134 @api.model
135 def s3_enqueue_migration(
136 self,
137 storage_code,
138 batch_size=500,
139 max_batches=None,
140 channel="root.s3_migration",
141 max_retries=None,
142 ):
143 """Enqueue migration jobs using cursor pagination."""
144 domain = self._s3_migration_domain(storage_code)
145 total_enqueued = 0
146 batches = 0
147 last_id = 0
148
149 force_db_config = self.env[
150 "fs.storage"
151 ].get_force_db_for_default_attachment_rules(storage_code)
152 if force_db_config:
153 _logger.info(
154 "Migration will exclude force-DB files: %s",
155 force_db_config,
156 )
157
158 _logger.info(
159 "Starting migration enqueue for storage %s (batch_size=%d, max_batches=%s)",
160 storage_code,
161 batch_size,
162 max_batches or "unlimited",
163 )
164
165 while True:
166 cursor_domain = domain + [("id", ">", last_id)]
167 ids = (
168 self.with_context(prefetch_fields=False)
169 .search(cursor_domain, limit=batch_size, order="checksum, id ASC")
170 .ids
171 )
172 if not ids:
173 break
174
175 self.browse(ids).with_delay(
176 channel=channel, max_retries=max_retries
177 ).s3_migrate_batch(storage_code)
178
179 total_enqueued += len(ids)
180 batches += 1
181 last_id = ids[-1]
182
183 if batches % 10 == 0:
184 _logger.info(
185 "Migration enqueue progress: %d attachments in %d batches",
186 total_enqueued,
187 batches,
188 )
189
190 if max_batches and batches >= max_batches:
191 break
192
193 _logger.info(
194 "Completed migration enqueue: %d attachments in %d batches for storage %s",
195 total_enqueued,
196 batches,
197 storage_code,
198 )
199 return total_enqueued
200
201 def s3_migrate_batch(self, storage_code):
202 """Migrate batch with checksum deduplication."""
203 fs_storage = self.env["fs.storage"].sudo().get_by_code(storage_code)
204 if not fs_storage:
205 _logger.error("Storage not found: %s", storage_code)
206 return False
207
208 fs = fs_storage.fs
209 optimize_path = fs_storage.optimizes_directory_path
210
211 force_db_rules = {}
212 if fs_storage.migration_use_storage_force_db_rules:
213 force_db_rules = self.env[
214 "fs.storage"
215 ].get_force_db_for_default_attachment_rules(storage_code)
216
217 checksum_groups = {}
218 for att in self.with_context(prefetch_fields=False):
219 if att.checksum:
220 checksum_groups.setdefault(att.checksum, self.env["ir.attachment"])
221 checksum_groups[att.checksum] |= att
222
223 total = len(self)
224 migrated = 0
225 skipped = 0
226
227 _logger.info(
228 "Starting batch: %d attachments (%d unique checksums) to %s",
229 total,
230 len(checksum_groups),
231 storage_code,
232 )
233
234 for checksum, attachments in checksum_groups.items():
235 representative = attachments[0]
236
237 if self._should_force_db(
238 representative.mimetype, representative.file_size, force_db_rules
239 ):
240 skipped += len(attachments)
241 continue
242
243 try:
244 bin_data = self._get_binary_data_for_checksum(checksum)
245 except Exception as e: # pylint: disable=broad-except
246 _logger.warning(
247 "Cannot read checksum %s: %s, skipping %d attachments",
248 checksum,
249 e,
250 len(attachments),
251 )
252 skipped += len(attachments)
253 continue
254
255 if not bin_data:
256 _logger.warning(
257 "No data for checksum %s, skipping %d attachments",
258 checksum,
259 len(attachments),
260 )
261 skipped += len(attachments)
262 continue
263
264 path = self._compute_s3_path(checksum, optimize_path)
265
266 try:
267 self._upload_to_storage(fs, path, bin_data)
268 except Exception as e:
269 _logger.error(
270 "Upload failed for %s: %s, skipping %d attachments",
271 checksum,
272 e,
273 len(attachments),
274 )
275 skipped += len(attachments)
276 continue
277
278 new_store_fname = f"{storage_code}://{path}"
279 fs_filename = path.split("/")[-1]
280
281 for att in attachments:
282 old_fname = att.store_fname
283 if old_fname and not old_fname.startswith(f"{storage_code}://"):
284 self._fs_mark_for_gc(old_fname)
285 att._force_write_store_fname(new_store_fname)
286
287 attachments.write({"fs_filename": fs_filename})
288 migrated += len(attachments)
289
290 if migrated % 100 == 0:
291 _logger.info(
292 "Batch progress: %d/%d migrated, %d skipped",
293 migrated,
294 total,
295 skipped,
296 )
297
298 _logger.info(
299 "Batch complete: migrated=%d, skipped=%d (total=%d) to %s",
300 migrated,
301 skipped,
302 total,
303 storage_code,
304 )
305 return True
Script executed:
# Check how raw parameter affects storage
rg "_save_file\|store_fname\|_save" fs_attachment_s3_migration/models/ir_attachment.py -A 5 -B 2Repository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 1
Script executed:
# Check imports and exact with_delay usage context
head -50 fs_attachment_s3_migration/models/ir_attachment.py
Repository: cetmix/oca-storage
Exit code: 0
stdout:
# Copyright 2025 Cetmix OU
# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
import logging
from odoo import api, models
from odoo.osv.expression import AND, OR
_logger = logging.getLogger(__name__)
# Tautology to include field-linked attachments in searches.
# Odoo's ir.attachment._search adds ('res_field', '=', False) by default.
RES_FIELD_DOMAIN = ["|", ("res_field", "=", False), ("res_field", "!=", False)]
class IrAttachment(models.Model):
_inherit = "ir.attachment"
@api.model
def _s3_migration_domain(self, storage_code):
"""Build domain for attachments eligible for migration."""
base = [
("checksum", "!=", False),
("type", "=", "binary"),
("store_fname", "!=", False),
("store_fname", "not like", f"{storage_code}://%"),
("db_datas", "=", False),
] + RES_FIELD_DOMAIN
fs_storage = self.env["fs.storage"].sudo().get_by_code(storage_code)
if fs_storage and fs_storage.migration_use_storage_force_db_rules:
force_db_domain = self._s3_get_force_db_domain(storage_code)
if force_db_domain:
return AND([base, ["!"] + force_db_domain])
return base
@api.model
def _s3_get_force_db_domain(self, storage_code):
"""Get domain for attachments that must stay in DB."""
force_db_rules = self.env[
"fs.storage"
].get_force_db_for_default_attachment_rules(storage_code)
if not force_db_rules:
return []
domain = None
for mimetype_key, size_limit in force_db_rules.items():
rule_domain = [("mimetype", "=like", f"{mimetype_key}%")]
if size_limit:
rule_domain = AND([rule_domain, [("file_size", "<=", size_limit)]])
Script executed:
# Find the _s3_migration_domain definition
rg "def _s3_migration_domain" -A 30 fs_attachment_s3_migration/models/ir_attachment.pyRepository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
def _s3_migration_domain(self, storage_code):
"""Build domain for attachments eligible for migration."""
base = [
("checksum", "!=", False),
("type", "=", "binary"),
("store_fname", "!=", False),
("store_fname", "not like", f"{storage_code}://%"),
("db_datas", "=", False),
] + RES_FIELD_DOMAIN
fs_storage = self.env["fs.storage"].sudo().get_by_code(storage_code)
if fs_storage and fs_storage.migration_use_storage_force_db_rules:
force_db_domain = self._s3_get_force_db_domain(storage_code)
if force_db_domain:
return AND([base, ["!"] + force_db_domain])
return base
@api.model
def _s3_get_force_db_domain(self, storage_code):
"""Get domain for attachments that must stay in DB."""
force_db_rules = self.env[
"fs.storage"
].get_force_db_for_default_attachment_rules(storage_code)
if not force_db_rules:
return []
domain = None
for mimetype_key, size_limit in force_db_rules.items():
rule_domain = [("mimetype", "=like", f"{mimetype_key}%")]
if size_limit:
rule_domain = AND([rule_domain, [("file_size", "<=", size_limit)]])
Script executed:
# Check if db_datas filter is actually in the domain
rg "db_datas" fs_attachment_s3_migration/models/ir_attachment.pyRepository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
("db_datas", "=", False),
[[("checksum", "=", checksum), ("db_datas", "!=", False)], RES_FIELD_DOMAIN]
Script executed:
# Check the parent fs_attachment module for storage behavior
cat -n fs_attachment/models/ir_attachment.py | head -100
Repository: cetmix/oca-storage
Exit code: 0
stdout:
1 # Copyright 2017-2013 Camptocamp SA
2 # Copyright 2023 ACSONE SA/NV
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html)
4
5 from __future__ import annotations
6
7 import io
8 import logging
9 import mimetypes
10 import os
11 import re
12 import time
13 from contextlib import closing, contextmanager
14 from pathlib import Path
15
16 import fsspec # pylint: disable=missing-manifest-dependency
17 import psycopg2
18 from slugify import slugify # pylint: disable=missing-manifest-dependency
19
20 import odoo
21 from odoo import _, api, fields, models
22 from odoo.exceptions import AccessError, UserError
23 from odoo.osv.expression import AND, OR, normalize_domain
24
25 from .strtobool import strtobool
26
27 _logger = logging.getLogger(__name__)
28
29
30 REGEX_SLUGIFY = r"[^-a-z0-9_]+"
31
32 FS_FILENAME_RE_PARSER = re.compile(
33 r"^(?P<name>.+)-(?P<id>\d+)-(?P<version>\d+)(?P<extension>\..+)$"
34 )
35
36
37 def is_true(strval):
38 return bool(strtobool(strval or "0"))
39
40
41 def clean_fs(files):
42 _logger.info("cleaning old files from filestore")
43 for full_path in files:
44 if os.path.exists(full_path):
45 try:
46 os.unlink(full_path)
47 except OSError:
48 _logger.info(
49 "_file_delete could not unlink %s", full_path, exc_info=True
50 )
51 except IOError:
52 # Harmless and needed for race conditions
53 _logger.info(
54 "_file_delete could not unlink %s", full_path, exc_info=True
55 )
56
57
58 class IrAttachment(models.Model):
59 _inherit = "ir.attachment"
60
61 fs_filename = fields.Char(
62 "File Name into the filesystem storage",
63 help="The name of the file in the filesystem storage."
64 "To preserve the mimetype and the meaning of the filename"
65 "the filename is computed from the name and the extension",
66 readonly=True,
67 )
68
69 internal_url = fields.Char(
70 "Internal URL",
71 compute="_compute_internal_url",
72 help="The URL to access the file from the server.",
73 )
74
75 fs_url = fields.Char(
76 "Filesystem URL",
77 compute="_compute_fs_url",
78 help="The URL to access the file from the filesystem storage.",
79 store=True,
80 )
81 fs_url_path = fields.Char(
82 "Filesystem URL Path",
83 compute="_compute_fs_url_path",
84 help="The path to access the file from the filesystem storage.",
85 )
86 fs_storage_code = fields.Char(
87 "Filesystem Storage Code",
88 related="fs_storage_id.code",
89 store=True,
90 )
91 fs_storage_id = fields.Many2one(
92 "fs.storage",
93 "Filesystem Storage",
94 compute="_compute_fs_storage_id",
95 help="The storage where the file is stored.",
96 store=True,
97 ondelete="restrict",
98 )
99
100 @api.depends("name")
Script executed:
# Check how with_delay is implemented in queue_job
rg "def with_delay" --type pyRepository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 1
Script executed:
# Search for where DelayableRecordset is actually used
rg "DelayableRecordset" --type pyRepository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
fs_attachment_s3_migration/tests/test_attachment_migration.py: # Mock DelayableRecordset to prevent actual queue_job creation
fs_attachment_s3_migration/tests/test_attachment_migration.py: with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable:
fs_attachment_s3_migration/tests/test_attachment_migration.py: with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable:
fs_attachment_s3_migration/tests/test_attachment_migration.py: with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable:
Script executed:
# Check default storage for attachments with raw data
rg "def create" fs_attachment/models/ir_attachment.py -A 30 | head -60Repository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
def create(self, vals_list):
"""
Storage may depend on resource field, but the method calling _storage
(_get_datas_related_values) does not take all vals, just the mimetype.
The only way to give res_field and res_model to _storage method
is to pass them into the context, and perform 1 create call per record
to create.
"""
vals_list_no_model = []
attachments = self.env["ir.attachment"]
for vals in vals_list:
if vals.get("res_model"):
attachment = super(
IrAttachment,
self.with_context(
attachment_res_model=vals.get("res_model"),
attachment_res_field=vals.get("res_field"),
),
).create(vals)
attachments += attachment
else:
vals_list_no_model.append(vals)
atts = super().create(vals_list_no_model)
attachments |= atts
attachments._enforce_meaningful_storage_filename()
return attachments
def write(self, vals):
if not self:
return super().write(vals)
if ("datas" in vals or "raw" in vals) and not (
Script executed:
# Understand raw parameter and storage decision
rg "raw\|store_fname" fs_attachment/models/ir_attachment.py -B 2 -A 2 | head -50Repository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
Script executed:
# Check what db_datas field represents
rg "db_datas" fs_attachment/models/ir_attachment.py -B 2 -A 2Repository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
"index_content": self._index(bin_data, mimetype),
"store_fname": False,
"db_datas": data,
}
return values
--
_logger.info("moved %s on the object storage", fname)
return self._full_path(fname)
elif self.db_datas:
_logger.info("moving on the object storage from database")
self.write({"datas": self.datas})
--
filesystem = fsspec.filesystem("memory")
if "a" in self.mode or self._is_open_for_read:
filesystem.pipe_file(filepath, self.attachment.db_datas)
the_file = filesystem.open(
filepath,
--
"""Ensure the cache consistency once the file is closed"""
if self._is_open_for_modify and not self._is_stored_in_db:
self.attachment.invalidate_recordset(fnames=["raw", "datas", "db_datas"])
if (
self.attachment.res_model
Script executed:
# Check if queue_job with_delay implementation exists in codebase or is external dependency
rg "queue_job" fs_attachment_s3_migration/__manifest__.pyRepository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
"queue_job",
"data/queue_job_channel_data.xml",
Script executed:
# Verify how with_delay returns DelayableRecordset
rg "with_delay" fs_attachment_s3_migration -B 5 -A 5Repository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
fs_attachment_s3_migration/models/ir_attachment.py- .ids
fs_attachment_s3_migration/models/ir_attachment.py- )
fs_attachment_s3_migration/models/ir_attachment.py- if not ids:
fs_attachment_s3_migration/models/ir_attachment.py- break
fs_attachment_s3_migration/models/ir_attachment.py-
fs_attachment_s3_migration/models/ir_attachment.py: self.browse(ids).with_delay(
fs_attachment_s3_migration/models/ir_attachment.py- channel=channel, max_retries=max_retries
fs_attachment_s3_migration/models/ir_attachment.py- ).s3_migrate_batch(storage_code)
fs_attachment_s3_migration/models/ir_attachment.py-
fs_attachment_s3_migration/models/ir_attachment.py- total_enqueued += len(ids)
fs_attachment_s3_migration/models/ir_attachment.py- batches += 1
Script executed:
# Check test base class behavior for default storage settings
rg "TransactionCase" fs_attachment/tests -A 10 | head -30Repository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
fs_attachment/tests/common.py:from odoo.tests.common import TransactionCase
fs_attachment/tests/common.py-
fs_attachment/tests/common.py-
fs_attachment/tests/common.py:class TestFSAttachmentCommon(TransactionCase):
fs_attachment/tests/common.py- @classmethod
fs_attachment/tests/common.py- def setUpClass(cls):
fs_attachment/tests/common.py- super().setUpClass()
fs_attachment/tests/common.py- cls.env = cls.env(context=dict(cls.env.context, tracking_disable=True))
fs_attachment/tests/common.py- temp_dir = tempfile.mkdtemp()
fs_attachment/tests/common.py- cls.temp_backend = cls.env["fs.storage"].create(
fs_attachment/tests/common.py- {
fs_attachment/tests/common.py- "name": "Temp FS Storage",
fs_attachment/tests/common.py- "protocol": "file",
fs_attachment/tests/common.py- "code": "tmp_dir",
Script executed:
# Check if there are any other tests in the repo that set ir_attachment.location
rg "ir_attachment.location" fs_attachment/tests/test_fs_attachment.py -B 3 -A 3Repository: cetmix/oca-storage
Repository: cetmix/oca-storage
Exit code: 0
stdout:
self.assertEqual(f.read(), new_content)
def test_open_attachment_in_db(self):
self.env["ir.config_parameter"].sudo().set_param("ir_attachment.location", "db")
content = b"This is a test attachment in db"
attachment = self.ir_attachment_model.create(
{"name": "test.txt", "raw": content}
--
def test_attachment_open_in_filestore(self):
self.env["ir.config_parameter"].sudo().set_param(
"ir_attachment.location", "file"
)
content = b"This is a test attachment in filestore"
attachment = self.ir_attachment_model.create(
Make enqueue tests deterministic: set filestore location and patch the with_delay method directly.
These tests are non-deterministic for two reasons:

1. Storage location dependency: Attachments created with the `raw` parameter are stored in either the database or the filesystem depending on the `ir_attachment.location` config parameter. Since the tests don't set this explicitly, the location depends on system defaults. `_s3_migration_domain` explicitly filters `("db_datas", "=", False)` (line 27 of `ir_attachment.py`), which excludes DB-stored attachments. If the default location is "db", the created attachments will be DB-stored and excluded from the domain query, causing the count assertions to fail.
2. Patching strategy: The current patch targets `odoo.addons.queue_job.delay.DelayableRecordset`, but the code invokes `with_delay()` as a method on the recordset (lines 175-177 of `ir_attachment.py`). Patching the class directly may not intercept the actual call if the method uses a different import path or construction at runtime. Instead, patch the method entrypoint using `patch.object(type(self.Attachment), "with_delay")`.
Recommended fixes

 class TestS3MigrationEnqueue(TransactionCase):
     def test_enqueue_migration_returns_count(self):
         """Test that enqueue_migration returns correct attachment count."""
+        # Ensure attachments are stored on filesystem for domain eligibility.
+        self.env["ir.config_parameter"].sudo().set_param("ir_attachment.location", "file")
         # Create test attachments
         self.Attachment.create([...])

-        with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable:
-            mock_instance = MagicMock()
-            mock_delayable.return_value = mock_instance
+        with patch.object(type(self.Attachment), "with_delay") as mock_with_delay:
+            mock_with_delay.return_value = MagicMock(s3_migrate_batch=MagicMock())

             total = self.Attachment.s3_enqueue_migration("test_s3", batch_size=10)

🧰 Tools
🪛 Ruff (0.14.10)
14-14: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
23-23: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
24-24: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
25-25: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
35-35: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
37-37: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
39-39: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
41-41: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
43-43: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
60-60: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
61-61: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
68-68: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
75-75: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
81-81: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
82-82: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
83-83: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
90-90: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
91-91: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
94-94: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
95-95: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
101-101: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
104-104: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
110-110: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
116-116: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
123-123: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
151-151: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
176-176: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?
(RUF003)
177-177: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
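Putting the two points above together, the first enqueue test could be expanded roughly as follows (a sketch under the same assumptions as the recommended diff: the test_s3 storage code, and with_delay patched at the recordset level; not a committed change):

    def test_enqueue_migration_returns_count(self):
        """Test that enqueue_migration returns correct attachment count."""
        # Force filestore storage so the created attachments match the
        # ("db_datas", "=", False) filter in _s3_migration_domain.
        self.env["ir.config_parameter"].sudo().set_param(
            "ir_attachment.location", "file"
        )
        self.Attachment.create(
            [{"name": f"test{i}.txt", "raw": b"test content"} for i in range(5)]
        )

        # Patch the with_delay entrypoint so no real queue jobs are created.
        with patch.object(type(self.Attachment), "with_delay") as mock_with_delay:
            mock_with_delay.return_value = MagicMock(s3_migrate_batch=MagicMock())
            total = self.Attachment.s3_enqueue_migration("test_s3", batch_size=10)

        self.assertGreaterEqual(total, 5)
        mock_with_delay.assert_called()

Because patch.object restores the original attribute when the context manager exits, no extra teardown is needed.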
    def test_enqueue_orders_by_checksum(self):
        """Test that enqueue orders attachments by checksum for batch locality."""
        # Create attachments with same content (same checksum)
        content = b"shared content for dedup test"
        self.Attachment.create(
            [
                {
                    "name": f"dedup_test{i}.txt",
                    "raw": content,
                }
                for i in range(3)
            ]
        )

        with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable:
            mock_delayable.return_value = MagicMock()

            # Should complete without error
            total = self.Attachment.s3_enqueue_migration(
                "test_s3",
                batch_size=100,
            )

        self.assertGreaterEqual(total, 3)


class TestS3MigrationBatch(TransactionCase):
    """Test suite for batch migration functionality."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.Attachment = cls.env["ir.attachment"]

    def test_migrate_batch_handles_empty_recordset(self):
        """Test that empty recordset doesn't crash."""
        empty = self.Attachment.browse([])

        # Should return True without error
        result = empty.s3_migrate_batch("test_s3")
        # Empty recordset may return False if storage not found, or True
        self.assertIn(result, [True, False])

    def test_migrate_batch_method_exists(self):
        """Test that s3_migrate_batch method is callable."""
        self.assertTrue(hasattr(self.Attachment, "s3_migrate_batch"))
        self.assertTrue(callable(self.Attachment.s3_migrate_batch))


class TestS3MigrationChecksumDedup(TransactionCase):
    """Test suite for checksum deduplication functionality."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.Attachment = cls.env["ir.attachment"]
        cls.FsStorage = cls.env["fs.storage"]

    def test_get_binary_data_for_checksum_from_db(self):
        """Test reading binary data from DB-stored attachment."""
        content = b"test content for checksum lookup"
        att = self.Attachment.create(
            {
                "name": "test_db.txt",
                "raw": content,
            }
        )
        checksum = att.checksum

        # Should be able to read data by checksum
        data = self.Attachment._get_binary_data_for_checksum(checksum)
        self.assertEqual(data, content)

    def test_get_binary_data_for_checksum_not_found(self):
        """Test that non-existent checksum returns None."""
        data = self.Attachment._get_binary_data_for_checksum("nonexistent123456")
        self.assertIsNone(data)


class TestS3MigrationGCMarking(TransactionCase):
    """Test suite for garbage collection marking."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.Attachment = cls.env["ir.attachment"]

    def test_fs_mark_for_gc_method_exists(self):
        """Test that _fs_mark_for_gc method is available (from fs_attachment)."""
        # This method should be inherited from fs_attachment module
        self.assertTrue(hasattr(self.Attachment, "_fs_mark_for_gc"))


class TestS3MigrationErrorHandling(TransactionCase):
    """Test suite for error handling during migration."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.Attachment = cls.env["ir.attachment"]

    def test_should_force_db_handles_none_values(self):
        """Test that None mimetype/file_size are handled gracefully."""
        rules = {"image/": 51200}

        # Should not crash with None values
        result = self.Attachment._should_force_db(None, None, rules)
        self.assertFalse(result)

        result = self.Attachment._should_force_db(None, 1000, rules)
        self.assertFalse(result)

    def test_compute_s3_path_short_checksum(self):
        """Test path computation with short checksum."""
        # Should handle short checksums gracefully
        checksum = "ab"
        path = self.Attachment._compute_s3_path(checksum, optimize_path=True)
        self.assertEqual(path, "ab//ab")

    def test_upload_to_storage_creates_dirs(self):
        """Test that _upload_to_storage creates directories."""
        mock_fs = MagicMock()
        mock_fs.exists.return_value = False
        mock_file = MagicMock()
        mock_fs.open.return_value.__enter__ = MagicMock(return_value=mock_file)
        mock_fs.open.return_value.__exit__ = MagicMock(return_value=False)

        self.Attachment._upload_to_storage(mock_fs, "ab/cd/checksum", b"data")

        # Should have attempted to create directories
        mock_fs.makedirs.assert_called_once_with("ab/cd", exist_ok=True)
        # Should have uploaded the file
        mock_fs.open.assert_called_once_with("ab/cd/checksum", "wb")
🧩 Analysis chain
🏁 Script executed:
# Find Ruff configuration files in the repository
find . -maxdepth 2 -type f \( -name "pyproject.toml" -o -name "setup.cfg" -o -name "tox.ini" -o -name "ruff.toml" -o -name ".ruff.toml" \) 2>/dev/null | head -20
Repository: cetmix/oca-storage
Length of output: 73
🏁 Script executed:
# Check if Ruff is available and look for the test file
which ruff || echo "Ruff not found"
find . -path "*/fs_attachment_s3_migration/tests/test_attachment_migration.py" 2>/dev/nullRepository: cetmix/oca-storage
Length of output: 201
🏁 Script executed:
# Search for the repository structure
ls -la | head -20
Repository: cetmix/oca-storage
Length of output: 1260
🏁 Script executed:
cat ./ruff.toml
Repository: cetmix/oca-storage
Length of output: 1867
🏁 Script executed:
# Check the directory structure more carefully
find . -type d -name "*s3*" | head -20
Repository: cetmix/oca-storage
Length of output: 169
🏁 Script executed:
# Check if the test file path is correct
fd "test_attachment_migration.py" .Repository: cetmix/oca-storage
Length of output: 125
🏁 Script executed:
# Read the test file to check for actual violations
cat -n ./fs_attachment_s3_migration/tests/test_attachment_migration.py | head -100
Repository: cetmix/oca-storage
Length of output: 4804
🏁 Script executed:
# Check for return type annotations on setUpClass and other methods
rg "def setUpClass|def test_" ./fs_attachment_s3_migration/tests/test_attachment_migration.py -A 1 | head -50Repository: cetmix/oca-storage
Length of output: 1878
🏁 Script executed:
# Check for f-strings and pytest.raises in the file
rg 'f".*"|pytest\.raises|self\.assert' ./fs_attachment_s3_migration/tests/test_attachment_migration.py | head -30Repository: cetmix/oca-storage
Length of output: 1731
🏁 Script executed:
# Get the complete test file to check for all f-string violations
wc -l ./fs_attachment_s3_migration/tests/test_attachment_migration.pyRepository: cetmix/oca-storage
Length of output: 129
🏁 Script executed:
# Check for f-strings without placeholders (RUF003)
rg 'f"[^{]*"|f'"'"'[^{]*'"'"'' ./fs_attachment_s3_migration/tests/test_attachment_migration.pyRepository: cetmix/oca-storage
Length of output: 44
🏁 Script executed:
# Get full test file to manually verify violations
cat ./fs_attachment_s3_migration/tests/test_attachment_migration.pyRepository: cetmix/oca-storage
Length of output: 11640
🏁 Script executed:
# Try to find any information about RUF003 in common Ruff documentation/standards
python3 << 'EOF'
# Check for RUF rule codes in the 0-999 range to understand naming
# RUF003 should theoretically exist if following RUF001, RUF002 pattern
# Let's check what the review comment might have meant
# Common Ruff RUF rules (non-exhaustive):
ruf_rules = {
"RUF100": "Unused noqa directive",
"RUF200": "Ruff metadata",
}
# Common flake8-bugbear/similar rule codes that might be confused:
# F - Pyflakes
# E - pycodestyle errors
# W - pycodestyle warnings
# C - complexity/comprehensions
# A - flake8-builtins
# B - flake8-bugbear
# S - bandit
# ANN - flake8-annotations
# PT - flake8-pytest-style
# TRY - tryceratops
# RUF003 is not a standard Ruff rule
# Possibilities:
# 1. It's a very new rule
# 2. It's a typo/wrong code
# 3. It's from a different linter
print("RUF003 does not appear in standard Ruff rule list")
print("Possible intended rules:")
print("- RUF100 (unused noqa)")
print("- RUF200 (metadata)")
print("- UP003 (if from pyupgrade)")
print("- COM003 (if from flake8-commas)")
EOF
Fix Ruff violations ANN206 and PT009 in this test module, or add file-level ignores.
The module uses unittest-style assertions and classmethods without return type annotations. Either add return types (def setUpClass(cls) -> None:) and convert assertions to pytest style (assert x instead of self.assertTrue(x)), or add file-level ignores:
Add file-level ignores
+# ruff: noqa: ANN206, PT009
"""Tests for S3 attachment migration functionality."""Alternatively, fix the violations directly by adding return type annotations to all setUpClass methods and converting unittest assertions to pytest assertions.
🧰 Tools
🪛 Ruff (0.14.10)
14-14: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
23-23: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
24-24: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
25-25: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
35-35: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
37-37: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
39-39: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
41-41: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
43-43: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
60-60: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
61-61: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
68-68: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
75-75: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
81-81: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
82-82: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
83-83: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
90-90: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
91-91: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
94-94: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
95-95: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
101-101: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
104-104: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
110-110: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
116-116: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
123-123: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
151-151: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
176-176: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?
(RUF003)
177-177: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
202-202: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
209-209: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
220-220: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
224-224: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
225-225: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
232-232: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
250-250: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
255-255: Use a regular assert instead of unittest-style assertIsNone
Replace assertIsNone(...) with assert ...
(PT009)
262-262: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
269-269: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
276-276: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
286-286: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
289-289: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
296-296: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/tests/test_attachment_migration.py around lines
13 - 311, This test module triggers Ruff ANN206/PT009 due to missing return type
annotations on classmethods and use of unittest-style assertions; fix by either
adding a file-level ignore comment at the top (e.g. "# noqa: ANN206, PT009") or
by annotating every setUpClass signature with "-> None" (update all setUpClass
methods) and converting unittest assertions (self.assertTrue, self.assertFalse,
self.assertEqual, self.assertIn, self.assertGreaterEqual, self.assertIsNone,
etc.) to plain pytest-style asserts (e.g. assert x, assert x == y, assert y in
x) throughout the tests, ensuring references like Attachment._should_force_db,
Attachment._compute_s3_path, Attachment.s3_enqueue_migration,
Attachment.s3_migrate_batch, Attachment._get_binary_data_for_checksum, and
_upload_to_storage remain unchanged.
| <xpath expr="//header" position="inside"> | ||
| <button | ||
| type="object" | ||
| name="action_open_migration_wizard" | ||
| string="Move existing attachments to S3" | ||
| invisible="protocol != 's3'" | ||
| class="oe_highlight" | ||
| /> | ||
| </xpath> | ||
| <xpath expr="//sheet" position="inside"> | ||
| <group string="Migration Settings"> | ||
| <field name="migration_batch_size" /> | ||
| <field name="migration_channel" /> | ||
| <field name="migration_use_storage_force_db_rules" /> | ||
| </group> | ||
| </xpath> |
🧩 Analysis chain
🌐 Web query:
Odoo 16 form view invisible attribute syntax documentation
💡 Result:
Short answer
- Dynamic (conditional) hide in a Form view: use attrs with the invisible modifier: attrs="{'invisible': [('field_name','=', value)]}". [1][2]
- Static hide: set invisible="1" on the field node. [1]
Notes/examples
- Multiple conditions use a list of tuples (AND). You can combine logical expressions (e.g. [('a','=',1),('b','!=',False)]). [1][2]
- For One2many parent-based hiding use column_invisible on the list/column. [2]
References
[1] Odoo 16 Views (developer docs) — attrs / invisible usage.
[2] Example/tutorial showing attrs="{'invisible': [(...)]}" usage.
🌐 Web query:
Odoo 16 attrs attribute vs invisible attribute form fields
💡 Result:
Short answer — difference and when to use each:
- attrs (XML attribute)
- Dynamic: lets you set invisible/readonly/required/etc. based on conditions (a domain) using other field values. Evaluated in the client for each record. Use when visibility/behaviour depends on record data. [1][2]
- invisible (XML attribute)
- Static: a simple boolean/flag to always hide the field in the view (invisible="1" or invisible="0"). The field is still fetched from DB but not shown. Use when you always want the element hidden (or to hide a field used only by attrs conditions). [2][3]
Key practical points
- A field referenced inside attrs must be present in the view (it can itself be invisible). [1][2]
- attrs supports logical operators (|, &, !) and multiple attributes per field; invisible is just a fixed attribute. [1]
- attrs is evaluated per-record in the client; invisible is fixed in the view definition. [1][2]
Sources: Odoo docs and community articles on attrs/invisible. [1][2][3]
🌐 Web query:
Odoo 16 form field conditional visibility expression-based
💡 Result:
Short answer — use attrs with domain-like conditions in the field tag.
Examples (Odoo 16):
- Single condition (hide when boolean is False). [1]
- Multiple conditions (logical AND): list multiple tuples — they are ANDed. [2]
- OR conditions: use the domain prefix operators '|' (or) and '&' (and). Example: hide when a=1 OR b=2: <field name="x" attrs="{'invisible': ['|', ('a','=',1), ('b','=',2)]}" />. [2]
Other notes:
- Fields referenced in attrs must be present in the same view (can be invisible). [1]
- attrs supports other keys: 'readonly', 'required', 'column_invisible' (for list subviews). [1][3]
Sources: Odoo 16 views/dev docs and practical guides. [1][2][3]
References
[1] Odoo 16 developer/views (attrs explanation).
[2] Odoo 16 tutorial / examples (attrs usage).
[3] Articles showing attrs examples and column_invisible usage.
Switch to attrs for Odoo 16 conditional visibility compatibility.
The invisible="protocol != 's3'" syntax is not supported in Odoo 16 form views. Conditional visibility must use attrs="{'invisible': [('protocol', '!=', 's3')]}" following domain-style conditions. Additionally, the "Migration Settings" group should be hidden when protocol is not S3 to prevent user confusion.
Proposed fix
<xpath expr="//header" position="inside">
<button
type="object"
name="action_open_migration_wizard"
string="Move existing attachments to S3"
- invisible="protocol != 's3'"
+ attrs="{'invisible': [('protocol', '!=', 's3')]}"
class="oe_highlight"
/>
</xpath>
<xpath expr="//sheet" position="inside">
- <group string="Migration Settings">
+ <group string="Migration Settings" attrs="{'invisible': [('protocol', '!=', 's3')]}">
<field name="migration_batch_size" />
<field name="migration_channel" />
<field name="migration_use_storage_force_db_rules" />
</group>
</xpath>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <xpath expr="//header" position="inside"> | |
| <button | |
| type="object" | |
| name="action_open_migration_wizard" | |
| string="Move existing attachments to S3" | |
| invisible="protocol != 's3'" | |
| class="oe_highlight" | |
| /> | |
| </xpath> | |
| <xpath expr="//sheet" position="inside"> | |
| <group string="Migration Settings"> | |
| <field name="migration_batch_size" /> | |
| <field name="migration_channel" /> | |
| <field name="migration_use_storage_force_db_rules" /> | |
| </group> | |
| </xpath> | |
| <xpath expr="//header" position="inside"> | |
| <button | |
| type="object" | |
| name="action_open_migration_wizard" | |
| string="Move existing attachments to S3" | |
| attrs="{'invisible': [('protocol', '!=', 's3')]}" | |
| class="oe_highlight" | |
| /> | |
| </xpath> | |
| <xpath expr="//sheet" position="inside"> | |
| <group string="Migration Settings" attrs="{'invisible': [('protocol', '!=', 's3')]}"> | |
| <field name="migration_batch_size" /> | |
| <field name="migration_channel" /> | |
| <field name="migration_use_storage_force_db_rules" /> | |
| </group> | |
| </xpath> |
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/views/fs_storage_view.xml around lines 9 - 24,
Replace the legacy invisible="protocol != 's3'" on the button with
attrs="{'invisible': [('protocol','!=','s3')]}" (targeting the button named
action_open_migration_wizard) and also wrap the <group string="Migration
Settings"> (contains fields migration_batch_size, migration_channel,
migration_use_storage_force_db_rules) with the same attrs condition so the
entire group is hidden when protocol != 's3'; ensure you use the exact attribute
name attrs and domain-style tuple [('protocol','!=','s3')] for Odoo 16
compatibility.
| from odoo import _, api, fields, models | ||
| from odoo.exceptions import UserError | ||
|
|
||
| MIGRATION_CHANNEL_XMLID = "fs_attachment_s3_migration.queue_channel_s3_migration" | ||
|
|
||
|
|
||
| class S3MigrationWizard(models.TransientModel): | ||
| _name = "s3.migration.wizard" | ||
| _description = "Migrate existing attachments to S3" | ||
|
|
||
| storage_id = fields.Many2one( | ||
| "fs.storage", | ||
| string="Target Storage", | ||
| required=True, | ||
| ) | ||
| storage_code = fields.Char( | ||
| compute="_compute_storage_code", | ||
| store=True, | ||
| readonly=False, | ||
| required=True, | ||
| ) | ||
| batch_size = fields.Integer( | ||
| compute="_compute_batch_size", | ||
| store=True, | ||
| readonly=False, | ||
| ) | ||
| channel = fields.Char( | ||
| string="Queue Channel", | ||
| compute="_compute_channel", | ||
| store=True, | ||
| readonly=False, | ||
| ) | ||
| max_batches = fields.Integer( | ||
| string="Max Batches (per click)", | ||
| help="Limit number of batches to enqueue now. Leave 0 for unlimited.", | ||
| default=0, | ||
| ) | ||
|
|
||
| @api.model | ||
| def _get_default_channel(self): | ||
| """Get default channel name from XML record.""" | ||
| channel = self.env.ref(MIGRATION_CHANNEL_XMLID, raise_if_not_found=False) | ||
| return channel.complete_name if channel else "root.s3_migration" | ||
|
|
||
| @api.depends("storage_id") | ||
| def _compute_storage_code(self): | ||
| for wizard in self: | ||
| wizard.storage_code = wizard.storage_id.code if wizard.storage_id else False | ||
|
|
||
| @api.depends("storage_id") | ||
| def _compute_batch_size(self): | ||
| for wizard in self: | ||
| if wizard.storage_id and wizard.storage_id.migration_batch_size: | ||
| wizard.batch_size = wizard.storage_id.migration_batch_size | ||
| elif not wizard.batch_size: | ||
| wizard.batch_size = 500 | ||
|
|
||
| @api.depends("storage_id") | ||
| def _compute_channel(self): | ||
| for wizard in self: | ||
| if wizard.storage_id and wizard.storage_id.migration_channel: | ||
| wizard.channel = wizard.storage_id.migration_channel | ||
| elif not wizard.channel: | ||
| wizard.channel = wizard._get_default_channel() | ||
|
|
||
| def action_confirm(self): | ||
| self.ensure_one() | ||
| if not self.storage_code: | ||
| raise UserError(_("Storage code is required.")) | ||
| max_batches = self.max_batches or None | ||
| total = self.env["ir.attachment"].s3_enqueue_migration( | ||
| self.storage_code, | ||
| batch_size=self.batch_size, | ||
| max_batches=max_batches, | ||
| channel=self.channel, | ||
| ) |
Tighten wizard validations + fix Ruff ANN202 + prevent storage_id/storage_code mismatch.
Proposed fix
from odoo import _, api, fields, models
from odoo.exceptions import UserError
MIGRATION_CHANNEL_XMLID = "fs_attachment_s3_migration.queue_channel_s3_migration"
@@
storage_code = fields.Char(
compute="_compute_storage_code",
store=True,
- readonly=False,
+ readonly=True,
required=True,
)
@@
@api.model
- def _get_default_channel(self):
+ def _get_default_channel(self) -> str:
"""Get default channel name from XML record."""
channel = self.env.ref(MIGRATION_CHANNEL_XMLID, raise_if_not_found=False)
return channel.complete_name if channel else "root.s3_migration"
@api.depends("storage_id")
- def _compute_storage_code(self):
+ def _compute_storage_code(self) -> None:
for wizard in self:
wizard.storage_code = wizard.storage_id.code if wizard.storage_id else False
@api.depends("storage_id")
- def _compute_batch_size(self):
+ def _compute_batch_size(self) -> None:
for wizard in self:
if wizard.storage_id and wizard.storage_id.migration_batch_size:
wizard.batch_size = wizard.storage_id.migration_batch_size
elif not wizard.batch_size:
wizard.batch_size = 500
@api.depends("storage_id")
- def _compute_channel(self):
+ def _compute_channel(self) -> None:
for wizard in self:
if wizard.storage_id and wizard.storage_id.migration_channel:
wizard.channel = wizard.storage_id.migration_channel
elif not wizard.channel:
wizard.channel = wizard._get_default_channel()
def action_confirm(self):
self.ensure_one()
if not self.storage_code:
raise UserError(_("Storage code is required."))
+ if getattr(self.storage_id, "protocol", None) != "s3":
+ raise UserError(_("Selected storage is not an S3 storage."))
+ if not self.batch_size or self.batch_size <= 0:
+ raise UserError(_("Batch size must be a positive integer."))
+ if not self.channel:
+ raise UserError(_("Queue channel is required."))
max_batches = self.max_batches or None
total = self.env["ir.attachment"].s3_enqueue_migration(
self.storage_code,
batch_size=self.batch_size,
max_batches=max_batches,
channel=self.channel,
)
🧰 Tools
🪛 Ruff (0.14.10)
43-43: Missing return type annotation for private function _get_default_channel
(ANN202)
49-49: Missing return type annotation for private function _compute_storage_code
Add return type annotation: None
(ANN202)
54-54: Missing return type annotation for private function _compute_batch_size
Add return type annotation: None
(ANN202)
62-62: Missing return type annotation for private function _compute_channel
Add return type annotation: None
(ANN202)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/wizard/migration_wizard.py around lines 4 - 79,
Ensure the wizard validates storage selection and fixes Ruff ANN202 by adding a
return type to _get_default_channel, and prevent storage_id/storage_code
mismatch by validating them in action_confirm: add a return type annotation (->
str) to _get_default_channel, ensure _compute_storage_code remains authoritative
(compute storage_code from storage_id in _compute_storage_code), then in
action_confirm call self.ensure_one(), require that self.storage_id is set and
that self.storage_id.code == self.storage_code (raise UserError with a clear
message if not), and keep existing batch/channel logic intact; this tightens
validation and resolves the ANN202 lint warning.
b09c5de to a96d971 (Compare)
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In @fs_attachment_s3_migration/models/ir_attachment.py:
- Around line 165-181: The cursor pagination in the loop that enqueues
s3_migrate_batch is incompatible with ordering by "checksum, id ASC" and can
skip records; change the search ordering in the block that builds cursor_domain
and calls self.search(...) so it uses "id ASC" for correct id-based cursor
pagination (or implement checksum-based cursoring if you must preserve checksum
ordering), then if you still need checksum grouping sort each batch by
"checksum" after retrieving ids before calling
self.browse(ids).with_delay(...).s3_migrate_batch(storage_code). A minimal sketch of
this id-based cursor loop appears after this list.
- Around line 100-132: The _upload_to_storage function only checks existing
files by size which can miss corrupted or partially written files; update it to
verify the checksum of the existing file before skipping overwrite by computing
a hash (e.g., SHA256) of the opened existing file and comparing it to the
expected checksum derived from bin_data (or the path checksum) and only return
early if they match, otherwise overwrite; also replace the final OSError log
call inside the except block with logging.exception (or call _logger.exception)
so the stack trace is included when write fails.
In @fs_attachment_s3_migration/tests/test_attachment_migration.py:
- Around line 205-225: Add an integration test in TestS3MigrationBatch that
configures a mock fs.storage record, creates filestore-backed attachments via
self.Attachment.create with known content and checksums (including duplicates),
invokes the s3_migrate_batch method on the created recordset, and asserts the
migration side effects: store_fname updated to the S3 key, fs_filename
populated, and duplicate-content attachments deduplicated (only one upload
performed and other records point to the same S3 object); use the existing
setUpClass/TestS3MigrationBatch context to access self.Attachment, stub or spy
the S3 client or fs.storage upload method to verify upload call counts, and
ensure cleanup of created attachments and mock storage after the test.
- Around line 213-220: The test test_migrate_batch_handles_empty_recordset
contains a no-op assertion self.assertIn(result, [True, False]); replace it with
a meaningful check or drop it: either assert that calling
Attachment.browse([]).s3_migrate_batch("test_s3") does not raise and return a
boolean (use self.assertIsInstance(result, bool) referencing the
s3_migrate_batch call on the empty recordset), or simply remove the assertion
and leave the call to ensure no exception is raised.
- Around line 153-177: Replace the Unicode multiplication sign in the comment
with ASCII "x" and make the assertion robust by basing expected value on the
attachments you just created: capture the records returned by
self.Attachment.create (use that result to get created_count) and assert total
== min(created_count, 2 * 10) (i.e. min(created_count, max_batches *
batch_size)) instead of assuming exactly 20, while still calling
self.Attachment.s3_enqueue_migration with batch_size and max_batches as shown.
- Around line 291-296: The test assumes a 2-char checksum which cannot occur in
production; add validation in Attachment._compute_s3_path to reject
unrealistically short checksums (e.g., raise ValueError if len(checksum) < 32)
and update the tests accordingly (either remove
test_compute_s3_path_short_checksum or change it to assert that calling
Attachment._compute_s3_path("ab", optimize_path=True) raises the ValueError).
In @fs_attachment_s3_migration/wizard/migration_wizard.py:
- Around line 53-59: The compute method treats 0 as falsy and will overwrite a
user-set 0; change the fallback check in _compute_batch_size to test for None
(e.g., use "elif wizard.batch_size is None") so explicit 0 stays unchanged, and
add validation in action_confirm to reject non-positive batch_size (ensure
batch_size is an int > 0, raise a UserError or similar if batch_size <= 0)
before starting migration; reference the batch_size field,
storage_id.migration_batch_size, _compute_batch_size and action_confirm when
making these edits.
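Regarding the cursor-pagination item above (ir_attachment.py around lines 165-181), here is a minimal sketch of an id-ordered cursor loop. The helper name and exact signature are illustrative; only with_delay, s3_migrate_batch, and the batch/channel parameters come from the module:

def _enqueue_in_batches(self, domain, storage_code, batch_size, channel):
    """Sketch: enqueue migration jobs using a strict id-based cursor."""
    last_id = 0
    total = 0
    while True:
        cursor_domain = [*domain, ("id", ">", last_id)]
        # Ordering by id only keeps the ("id", ">", last_id) cursor airtight.
        batch = self.search(cursor_domain, order="id ASC", limit=batch_size)
        if not batch:
            break
        # Optional: group by checksum inside the batch without breaking the cursor.
        batch.sorted("checksum").with_delay(channel=channel).s3_migrate_batch(
            storage_code
        )
        last_id = max(batch.ids)
        total += len(batch)
    return total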
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
fs_attachment_s3_migration/data/queue_job_channel_data.xmlfs_attachment_s3_migration/models/fs_storage.pyfs_attachment_s3_migration/models/ir_attachment.pyfs_attachment_s3_migration/tests/test_attachment_migration.pyfs_attachment_s3_migration/views/fs_storage_view.xmlfs_attachment_s3_migration/wizard/migration_wizard.py
🧰 Additional context used
🧬 Code graph analysis (3)
fs_attachment_s3_migration/tests/test_attachment_migration.py (1)
fs_attachment_s3_migration/models/ir_attachment.py (7)
_s3_migration_domain (20-35), _should_force_db (56-66), _compute_s3_path (69-73), s3_enqueue_migration (135-199), s3_migrate_batch (201-305), _get_binary_data_for_checksum (75-98), _upload_to_storage (100-132)
fs_attachment_s3_migration/wizard/migration_wizard.py (1)
fs_attachment_s3_migration/models/ir_attachment.py (1)
s3_enqueue_migration(135-199)
fs_attachment_s3_migration/models/ir_attachment.py (4)
fs_storage/models/fs_storage.py (2)
get_by_code (203-209), fs (310-324)
fs_attachment/models/fs_storage.py (1)
get_force_db_for_default_attachment_rules (376-391)
fs_file/fields.py (1)
mimetype (101-116)
fs_attachment/models/ir_attachment.py (2)
_fs_mark_for_gc (553-581), _force_write_store_fname (491-498)
🪛 Ruff (0.14.10)
fs_attachment_s3_migration/tests/test_attachment_migration.py
14-14: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
23-23: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
24-24: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
25-25: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
35-35: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
37-37: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
39-39: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
41-41: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
43-43: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
60-60: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
61-61: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
68-68: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
75-75: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
81-81: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
82-82: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
83-83: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
90-90: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
91-91: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
94-94: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
95-95: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
101-101: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
104-104: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
110-110: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
116-116: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
123-123: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
151-151: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
176-176: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?
(RUF003)
177-177: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
202-202: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
209-209: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
220-220: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
224-224: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
225-225: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
232-232: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
250-250: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
255-255: Use a regular assert instead of unittest-style assertIsNone
Replace assertIsNone(...) with assert ...
(PT009)
262-262: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
269-269: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
276-276: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
286-286: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
289-289: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
296-296: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
fs_attachment_s3_migration/wizard/migration_wizard.py
43-43: Missing return type annotation for private function _get_default_channel
(ANN202)
49-49: Missing return type annotation for private function _compute_storage_code
Add return type annotation: None
(ANN202)
54-54: Missing return type annotation for private function _compute_batch_size
Add return type annotation: None
(ANN202)
62-62: Missing return type annotation for private function _compute_channel
Add return type annotation: None
(ANN202)
fs_attachment_s3_migration/models/ir_attachment.py
20-20: Missing return type annotation for private function _s3_migration_domain
(ANN202)
22-28: Consider iterable unpacking instead of concatenation
Replace with iterable unpacking
(RUF005)
34-34: Consider ["!", *force_db_domain] instead of concatenation
Replace with ["!", *force_db_domain]
(RUF005)
38-38: Missing return type annotation for private function _s3_get_force_db_domain
(ANN202)
56-56: Missing return type annotation for private function _should_force_db
Add return type annotation: bool
(ANN202)
69-69: Missing return type annotation for private function _compute_s3_path
(ANN202)
75-75: Missing return type annotation for private function _get_binary_data_for_checksum
(ANN202)
100-100: Missing return type annotation for private function _upload_to_storage
Add return type annotation: None
(ANN202)
131-131: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
166-166: Consider [*domain, ("id", ">", last_id)] instead of concatenation
Replace with [*domain, ("id", ">", last_id)]
(RUF005)
245-245: Do not catch blind exception: Exception
(BLE001)
268-268: Do not catch blind exception: Exception
(BLE001)
269-274: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🔇 Additional comments (14)
fs_attachment_s3_migration/data/queue_job_channel_data.xml (1)
1-19: LGTM! The queue job channel configuration is well-structured with clear documentation. The noupdate="1" attribute correctly prevents overwrites on module upgrades, and the channel ID aligns with the MIGRATION_CHANNEL_XMLID constant used in the wizard.
fs_attachment_s3_migration/models/fs_storage.py (1)
1-45: LGTM! The storage model extension is well-implemented with appropriate defaults. The wizard action correctly validates the storage code requirement and passes all necessary context parameters.
fs_attachment_s3_migration/models/ir_attachment.py (4)
11-35: LGTM! The migration domain logic correctly identifies eligible attachments by excluding URL-based, DB-stored, and already-migrated files. The RES_FIELD_DOMAIN tautology properly includes field-linked attachments that Odoo's default _search would otherwise filter out.
37-66: LGTM! The force-DB domain construction and the runtime check are correctly implemented. The limit == 0 case properly means "no size limit" for the mimetype prefix, matching the expected behavior from get_force_db_for_default_attachment_rules.
68-98: LGTM! The path computation follows a standard sharding pattern for optimized directory structures. The binary data retrieval correctly prioritizes DB-stored attachments as donors before falling back to filestore candidates, with appropriate error handling for missing files.
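For readers unfamiliar with the pattern, checksum sharding usually amounts to nothing more than the following (an illustrative sketch, not a quote of the module's helper):

def shard_path(checksum: str, optimize_path: bool = True) -> str:
    """Spread objects across prefix 'directories' derived from the checksum."""
    if not optimize_path:
        return checksum
    # e.g. shard_path("abcdef0123...") == "ab/cd/abcdef0123..."
    return f"{checksum[:2]}/{checksum[2:4]}/{checksum}"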
243-276: Broad exception handling is acceptable for batch resilience. The except Exception catches at lines 245 and 268 are intentional to prevent a single problematic attachment from failing the entire batch. The code properly logs context and continues processing. Consider adding # noqa: BLE001 comments to suppress the static analysis warnings if this pattern is intentional.
fs_attachment_s3_migration/wizard/migration_wizard.py (2)
42-46: LGTM! The default channel lookup correctly uses complete_name to get the full channel path and has a sensible fallback for edge cases where the XML record might be missing.
69-93: LGTM! The confirmation action properly validates the storage code, converts max_batches=0 to None for unlimited batches, and returns a user-friendly notification. The notification includes the count of enqueued attachments, providing good feedback.
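For context, the notification return described here normally takes the following shape in Odoo 16 (a generic sketch of the display_notification client action, not the wizard's literal code):

def _notify_enqueued(self, total):
    return {
        "type": "ir.actions.client",
        "tag": "display_notification",
        "params": {
            "title": "S3 migration",
            "message": f"{total} attachment(s) enqueued for migration.",
            "type": "success",
            "sticky": False,
        },
    }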
fs_attachment_s3_migration/views/fs_storage_view.xml (1)
1-31: LGTM! The view inheritance is well-structured. The invisible protocol field correctly enables the button's visibility condition, and the Migration Settings group is appropriately placed within the form sheet.
fs_attachment_s3_migration/tests/test_attachment_migration.py (5)
1-62: LGTM - Domain tests provide reasonable coverage. The string-based domain verification approach is pragmatic for checking filter presence. The tests correctly validate that the migration domain excludes already-migrated files, URL attachments, and DB-stored entries.
Note: The Ruff PT009 hints suggesting assert over unittest-style assertions are false positives for Odoo's TransactionCase-based tests, which correctly use the unittest framework.
64-116: LGTM - Comprehensive helper method tests. Good coverage of edge cases including:
- Empty rules returning False
- Size boundary testing at exact limit (51200) and just over (51201)
- Non-matching MIME types
- Both optimized and flat path computation
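The behaviour those edge cases pin down is a prefix-plus-size-limit lookup; a plausible shape of the helper (an illustration consistent with the tests, not the module's implementation) is:

def should_force_db(mimetype, file_size, rules):
    """Return True when the attachment should stay in PostgreSQL."""
    if not mimetype or file_size is None:
        return False
    for prefix, limit in rules.items():
        if mimetype.startswith(prefix):
            # limit == 0 is read as "no size limit" for this mimetype prefix
            return limit == 0 or file_size <= limit
    return False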
228-255: LGTM - Checksum deduplication tests cover key scenarios. The tests verify:
- Reading binary data from attachments by checksum
- Graceful handling of non-existent checksums returning None
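The deduplication these tests support typically relies on grouping a batch by checksum so each unique blob is fetched and uploaded once; a rough sketch of that grouping (assumed, not quoted from the module):

from itertools import groupby


def iter_checksum_groups(attachments):
    """Yield (checksum, recordset) pairs, one entry per unique blob in the batch."""
    ordered = attachments.sorted("checksum")
    for checksum, group in groupby(ordered, key=lambda att: att.checksum):
        yield checksum, attachments.browse([att.id for att in group])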
258-269: Acceptable as a dependency contract test. This verifies the _fs_mark_for_gc method is inherited from fs_attachment, ensuring the module dependency is correctly configured.
298-311: LGTM - Well-structured mock test for upload functionality. The mock setup correctly simulates the filesystem interface and validates:
- Directory creation with makedirs called with correct path and exist_ok=True
- File opened in write-binary mode at the expected path
| def _upload_to_storage(self, fs, path, bin_data): | ||
| """Upload binary data to storage with content verification.""" | ||
| dirname = "/".join(path.split("/")[:-1]) | ||
| if dirname: | ||
| try: | ||
| fs.makedirs(dirname, exist_ok=True) | ||
| except OSError as e: | ||
| _logger.debug("Directory %s may already exist: %s", dirname, e) | ||
|
|
||
| expected_size = len(bin_data) | ||
|
|
||
| if fs.exists(path): | ||
| try: | ||
| existing_size = fs.info(path).get("size", -1) | ||
| if existing_size == expected_size: | ||
| return | ||
| _logger.warning( | ||
| "Existing file %s has mismatched size (%d vs %d), overwriting", | ||
| path, | ||
| existing_size, | ||
| expected_size, | ||
| ) | ||
| except OSError as e: | ||
| _logger.debug( | ||
| "Cannot verify existing file %s: %s, overwriting", path, e | ||
| ) | ||
|
|
||
| try: | ||
| with fs.open(path, "wb") as f: | ||
| f.write(bin_data) | ||
| except OSError as e: | ||
| _logger.error("Failed to write file %s: %s", path, e) | ||
| raise |
🧹 Nitpick | 🔵 Trivial
Size-only verification may miss corrupted files.
The existing file check at line 114 compares only file size. Since the path is derived from the checksum, and files with the same checksum should be identical, this is generally safe. However, if a previous upload was interrupted or corrupted, size alone won't detect it.
Consider adding checksum verification for existing files, or at minimum, document this limitation.
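One minimal way to add that verification, assuming the stored checksum is the SHA-1 digest Odoo computes for ir.attachment content (a sketch, not the module's code):

import hashlib


def existing_object_matches(fs, path, bin_data):
    """Return True if the object already at `path` has the expected content."""
    expected = hashlib.sha1(bin_data).hexdigest()
    try:
        with fs.open(path, "rb") as existing:
            actual = hashlib.sha1(existing.read()).hexdigest()
    except OSError:
        # Unreadable or half-written object: treat as a mismatch and overwrite.
        return False
    return actual == expected

Reading the object back costs an extra GET per duplicate, so it may be worth gating this check behind a storage option for large files.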
Also, per static analysis hint, using logging.exception at line 131 would include the stack trace automatically:
Suggested improvement
try:
with fs.open(path, "wb") as f:
f.write(bin_data)
except OSError as e:
- _logger.error("Failed to write file %s: %s", path, e)
+ _logger.exception("Failed to write file %s", path)
raise
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def _upload_to_storage(self, fs, path, bin_data): | |
| """Upload binary data to storage with content verification.""" | |
| dirname = "/".join(path.split("/")[:-1]) | |
| if dirname: | |
| try: | |
| fs.makedirs(dirname, exist_ok=True) | |
| except OSError as e: | |
| _logger.debug("Directory %s may already exist: %s", dirname, e) | |
| expected_size = len(bin_data) | |
| if fs.exists(path): | |
| try: | |
| existing_size = fs.info(path).get("size", -1) | |
| if existing_size == expected_size: | |
| return | |
| _logger.warning( | |
| "Existing file %s has mismatched size (%d vs %d), overwriting", | |
| path, | |
| existing_size, | |
| expected_size, | |
| ) | |
| except OSError as e: | |
| _logger.debug( | |
| "Cannot verify existing file %s: %s, overwriting", path, e | |
| ) | |
| try: | |
| with fs.open(path, "wb") as f: | |
| f.write(bin_data) | |
| except OSError as e: | |
| _logger.error("Failed to write file %s: %s", path, e) | |
| raise | |
| def _upload_to_storage(self, fs, path, bin_data): | |
| """Upload binary data to storage with content verification.""" | |
| dirname = "/".join(path.split("/")[:-1]) | |
| if dirname: | |
| try: | |
| fs.makedirs(dirname, exist_ok=True) | |
| except OSError as e: | |
| _logger.debug("Directory %s may already exist: %s", dirname, e) | |
| expected_size = len(bin_data) | |
| if fs.exists(path): | |
| try: | |
| existing_size = fs.info(path).get("size", -1) | |
| if existing_size == expected_size: | |
| return | |
| _logger.warning( | |
| "Existing file %s has mismatched size (%d vs %d), overwriting", | |
| path, | |
| existing_size, | |
| expected_size, | |
| ) | |
| except OSError as e: | |
| _logger.debug( | |
| "Cannot verify existing file %s: %s, overwriting", path, e | |
| ) | |
| try: | |
| with fs.open(path, "wb") as f: | |
| f.write(bin_data) | |
| except OSError as e: | |
| _logger.exception("Failed to write file %s", path) | |
| raise |
🧰 Tools
🪛 Ruff (0.14.10)
100-100: Missing return type annotation for private function _upload_to_storage
Add return type annotation: None
(ANN202)
131-131: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/ir_attachment.py around lines 100 - 132,
The _upload_to_storage function only checks existing files by size which can
miss corrupted or partially written files; update it to verify the checksum of
the existing file before skipping overwrite by computing a hash (e.g., SHA256)
of the opened existing file and comparing it to the expected checksum derived
from bin_data (or the path checksum) and only return early if they match,
otherwise overwrite; also replace the final OSError log call inside the except
block with logging.exception (or call _logger.exception) so the stack trace is
included when write fails.
| def test_enqueue_migration_respects_max_batches(self): | ||
| """Test that max_batches parameter limits the number of batches.""" | ||
| # Create 30 attachments | ||
| self.Attachment.create( | ||
| [ | ||
| { | ||
| "name": f"batch_test{i}.txt", | ||
| "raw": b"content", | ||
| } | ||
| for i in range(30) | ||
| ] | ||
| ) | ||
|
|
||
| with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable: | ||
| mock_delayable.return_value = MagicMock() | ||
|
|
||
| # Limit to 2 batches of 10 | ||
| total = self.Attachment.s3_enqueue_migration( | ||
| "test_s3", | ||
| batch_size=10, | ||
| max_batches=2, | ||
| ) | ||
|
|
||
| # Should stop at 20 (2 batches × 10) | ||
| self.assertEqual(total, 20) |
Fix ambiguous character and potential test flakiness.
- Line 176 uses Unicode × (MULTIPLICATION SIGN) instead of ASCII x.
- The assertion assertEqual(total, 20) may fail if pre-existing attachments in the test database match the migration domain, as they would be included in the batch count.
Proposed fix
- # Should stop at 20 (2 batches × 10)
- self.assertEqual(total, 20)
+ # Should stop at 20 (2 batches x 10)
+ self.assertLessEqual(total, 20)
Alternatively, if you need to assert exactly 20, consider filtering for only the test-created attachments or using assertGreaterEqual(total, 20) combined with assertLessEqual to account for database state variability.
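A variant that derives the expectation from what is actually eligible, instead of hard-coding 20, might look like this (a sketch; it assumes _s3_migration_domain accepts the storage code, as the domain tests suggest):

def test_enqueue_migration_respects_max_batches(self):
    self.Attachment.create(
        [{"name": f"batch_test{i}.txt", "raw": b"content"} for i in range(30)]
    )
    eligible = self.Attachment.search_count(
        self.Attachment._s3_migration_domain("test_s3")
    )
    total = self.Attachment.s3_enqueue_migration(
        "test_s3", batch_size=10, max_batches=2
    )
    # Exactly min(eligible, 2 batches x 10), regardless of pre-existing rows.
    self.assertEqual(total, min(eligible, 20))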
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def test_enqueue_migration_respects_max_batches(self): | |
| """Test that max_batches parameter limits the number of batches.""" | |
| # Create 30 attachments | |
| self.Attachment.create( | |
| [ | |
| { | |
| "name": f"batch_test{i}.txt", | |
| "raw": b"content", | |
| } | |
| for i in range(30) | |
| ] | |
| ) | |
| with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable: | |
| mock_delayable.return_value = MagicMock() | |
| # Limit to 2 batches of 10 | |
| total = self.Attachment.s3_enqueue_migration( | |
| "test_s3", | |
| batch_size=10, | |
| max_batches=2, | |
| ) | |
| # Should stop at 20 (2 batches × 10) | |
| self.assertEqual(total, 20) | |
| def test_enqueue_migration_respects_max_batches(self): | |
| """Test that max_batches parameter limits the number of batches.""" | |
| # Create 30 attachments | |
| self.Attachment.create( | |
| [ | |
| { | |
| "name": f"batch_test{i}.txt", | |
| "raw": b"content", | |
| } | |
| for i in range(30) | |
| ] | |
| ) | |
| with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable: | |
| mock_delayable.return_value = MagicMock() | |
| # Limit to 2 batches of 10 | |
| total = self.Attachment.s3_enqueue_migration( | |
| "test_s3", | |
| batch_size=10, | |
| max_batches=2, | |
| ) | |
| # Should stop at 20 (2 batches x 10) | |
| self.assertLessEqual(total, 20) |
🧰 Tools
🪛 Ruff (0.14.10)
176-176: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?
(RUF003)
177-177: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/tests/test_attachment_migration.py around lines
153 - 177, Replace the Unicode multiplication sign in the comment with ASCII "x"
and make the assertion robust by basing expected value on the attachments you
just created: capture the records returned by self.Attachment.create (use that
result to get created_count) and assert total == min(created_count, 2 * 10)
(i.e. min(created_count, max_batches * batch_size)) instead of assuming exactly
20, while still calling self.Attachment.s3_enqueue_migration with batch_size and
max_batches as shown.
| class TestS3MigrationBatch(TransactionCase): | ||
| """Test suite for batch migration functionality.""" | ||
|
|
||
| @classmethod | ||
| def setUpClass(cls): | ||
| super().setUpClass() | ||
| cls.Attachment = cls.env["ir.attachment"] | ||
|
|
||
| def test_migrate_batch_handles_empty_recordset(self): | ||
| """Test that empty recordset doesn't crash.""" | ||
| empty = self.Attachment.browse([]) | ||
|
|
||
| # Should return True without error | ||
| result = empty.s3_migrate_batch("test_s3") | ||
| # Empty recordset may return False if storage not found, or True | ||
| self.assertIn(result, [True, False]) | ||
|
|
||
| def test_migrate_batch_method_exists(self): | ||
| """Test that s3_migrate_batch method is callable.""" | ||
| self.assertTrue(hasattr(self.Attachment, "s3_migrate_batch")) | ||
| self.assertTrue(callable(self.Attachment.s3_migrate_batch)) |
🧹 Nitpick | 🔵 Trivial
Consider adding integration tests for actual migration flow.
The current test suite provides good unit test coverage for individual methods but lacks integration tests that:
- Configure a mock fs.storage record
- Create filestore-backed attachments
- Run s3_migrate_batch and verify attachments are correctly migrated (store_fname updated, fs_filename set)
- Verify checksum deduplication works end-to-end (multiple attachments with same content result in single upload)
Would you like me to help draft an integration test that covers the full migration workflow?
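If it helps, here is one possible skeleton for such a test. Everything about how the storage is resolved and written to is an assumption (get_by_code and _upload_to_storage are patched so no real S3 is needed), so treat it as a starting point rather than a working test:

from unittest.mock import MagicMock, patch


def test_migrate_batch_deduplicates_uploads(self):
    attachments = self.Attachment.create(
        [
            {"name": "a.bin", "raw": b"same-bytes"},
            {"name": "b.bin", "raw": b"same-bytes"},  # identical content
        ]
    )
    fake_storage = MagicMock()
    fake_storage.migration_use_storage_force_db_rules = False
    with patch.object(
        type(self.env["fs.storage"]), "get_by_code", return_value=fake_storage
    ), patch.object(type(self.Attachment), "_upload_to_storage") as mock_upload:
        attachments.s3_migrate_batch("test_s3")
    # Two attachments sharing one checksum should trigger a single upload.
    self.assertEqual(mock_upload.call_count, 1)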
🧰 Tools
🪛 Ruff (0.14.10)
209-209: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
220-220: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
224-224: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
225-225: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/tests/test_attachment_migration.py around lines
205 - 225, Add an integration test in TestS3MigrationBatch that configures a
mock fs.storage record, creates filestore-backed attachments via
self.Attachment.create with known content and checksums (including duplicates),
invokes the s3_migrate_batch method on the created recordset, and asserts the
migration side effects: store_fname updated to the S3 key, fs_filename
populated, and duplicate-content attachments deduplicated (only one upload
performed and other records point to the same S3 object); use the existing
setUpClass/TestS3MigrationBatch context to access self.Attachment, stub or spy
the S3 client or fs.storage upload method to verify upload call counts, and
ensure cleanup of created attachments and mock storage after the test.
| def test_compute_s3_path_short_checksum(self): | ||
| """Test path computation with short checksum.""" | ||
| # Should handle short checksums gracefully | ||
| checksum = "ab" | ||
| path = self.Attachment._compute_s3_path(checksum, optimize_path=True) | ||
| self.assertEqual(path, "ab//ab") |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the _compute_s3_path implementation
rg -n "_compute_s3_path" --type pyRepository: cetmix/oca-storage
Length of output: 1062
🏁 Script executed:
#!/bin/bash
# Examine the attachment model to understand checksum handling
fd "ir_attachment" --type py | head -5Repository: cetmix/oca-storage
Length of output: 231
🏁 Script executed:
#!/bin/bash
# Look for checksum length validation or generation
rg -n "checksum.*len\|len.*checksum\|SHA\|sha256" fs_attachment_s3_migration/ --type py | head -20Repository: cetmix/oca-storage
Length of output: 44
🏁 Script executed:
#!/bin/bash
# Read the implementation file to see _compute_s3_path and checksum handling
cat -n fs_attachment_s3_migration/models/ir_attachment.py | head -100Repository: cetmix/oca-storage
Length of output: 4388
🏁 Script executed:
#!/bin/bash
# Look at the area around line 264 where _compute_s3_path is called in production
sed -n '250,280p' fs_attachment_s3_migration/models/ir_attachment.pyRepository: cetmix/oca-storage
Length of output: 1043
🏁 Script executed:
#!/bin/bash
# Check if there's any checksum validation before _compute_s3_path is called
sed -n '60,80p' fs_attachment_s3_migration/models/ir_attachment.pyRepository: cetmix/oca-storage
Length of output: 884
🏁 Script executed:
#!/bin/bash
# Look for how checksums are generated/set in the codebase
rg -n "checksum\s*=" fs_attachment_s3_migration/ | head -15Repository: cetmix/oca-storage
Length of output: 449
🏁 Script executed:
#!/bin/bash
# Check if there are any other tests that use different checksum lengths
grep -n "checksum.*=" fs_attachment_s3_migration/tests/test_attachment_migration.py | head -20Repository: cetmix/oca-storage
Length of output: 448
🏁 Script executed:
#!/bin/bash
# Look at the test setup to see how checksums are created in tests
sed -n '1,120p' fs_attachment_s3_migration/tests/test_attachment_migration.pyRepository: cetmix/oca-storage
Length of output: 4945
🏁 Script executed:
#!/bin/bash
# Check Odoo documentation or code comments about checksum format
rg -n "checksum" fs_attachment_s3_migration/models/ir_attachment.py -B2 -A2 | head -30Repository: cetmix/oca-storage
Length of output: 1271
🏁 Script executed:
#!/bin/bash
# See if there are comments or docs about the test's intent
sed -n '285,305p' fs_attachment_s3_migration/tests/test_attachment_migration.pyRepository: cetmix/oca-storage
Length of output: 947
Remove unrealistic edge case test or add checksum length validation.
The test documents expected behavior for a 2-character checksum ("ab" → "ab//ab"), but this edge case cannot occur in production since Odoo attachment checksums are always 32+ characters (MD5 or SHA-256). Either remove this unrealistic test or add checksum length validation in _compute_s3_path to explicitly reject invalid checksums.
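If the validation route is taken, the guard plus the adjusted test stay very small (a sketch; the 32-character floor mirrors the shortest digest length mentioned above):

def compute_s3_path(checksum: str, optimize_path: bool = True) -> str:
    """Variant of the path helper with an explicit guard on the checksum."""
    if not checksum or len(checksum) < 32:
        raise ValueError(f"Unexpectedly short attachment checksum: {checksum!r}")
    if not optimize_path:
        return checksum
    return f"{checksum[:2]}/{checksum[2:4]}/{checksum}"


# the test then asserts the guard instead of the degenerate "ab//ab" path:
#     with self.assertRaises(ValueError):
#         self.Attachment._compute_s3_path("ab", optimize_path=True)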
🧰 Tools
🪛 Ruff (0.14.10)
296-296: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/tests/test_attachment_migration.py around lines
291 - 296, The test assumes a 2-char checksum which cannot occur in production;
add validation in Attachment._compute_s3_path to reject unrealistically short
checksums (e.g., raise ValueError if len(checksum) < 32) and update the tests
accordingly (either remove test_compute_s3_path_short_checksum or change it to
assert that calling Attachment._compute_s3_path("ab", optimize_path=True) raises
the ValueError).
| @api.depends("storage_id") | ||
| def _compute_batch_size(self): | ||
| for wizard in self: | ||
| if wizard.storage_id and wizard.storage_id.migration_batch_size: | ||
| wizard.batch_size = wizard.storage_id.migration_batch_size | ||
| elif not wizard.batch_size: | ||
| wizard.batch_size = 500 |
🧹 Nitpick | 🔵 Trivial
batch_size of 0 could cause issues.
The condition elif not wizard.batch_size treats 0 the same as False, so if a user clears the field to 0, it will be reset to 500. However, if batch_size=0 somehow persists, it would cause the migration to process 0 attachments per batch (effectively doing nothing). Consider adding validation in action_confirm:
Add validation for batch_size
def action_confirm(self):
self.ensure_one()
if not self.storage_code:
raise UserError(_("Storage code is required."))
+ if not self.batch_size or self.batch_size < 1:
+ raise UserError(_("Batch size must be at least 1."))
max_batches = self.max_batches or None🧰 Tools
🪛 Ruff (0.14.10)
54-54: Missing return type annotation for private function _compute_batch_size
Add return type annotation: None
(ANN202)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/wizard/migration_wizard.py around lines 53 - 59,
The compute method treats 0 as falsy and will overwrite a user-set 0; change the
fallback check in _compute_batch_size to test for None (e.g., use "elif
wizard.batch_size is None") so explicit 0 stays unchanged, and add validation in
action_confirm to reject non-positive batch_size (ensure batch_size is an int >
0, raise a UserError or similar if batch_size <= 0) before starting migration;
reference the batch_size field, storage_id.migration_batch_size,
_compute_batch_size and action_confirm when making these edits.
a96d971 to 4109f14 (Compare)
Actionable comments posted: 11
🤖 Fix all issues with AI agents
In @fs_attachment_s3_migration/models/fs_storage.py:
- Around line 11-14: migration_batch_size currently accepts zero or negative
values; add validation by (1) adding a Python constraint method (e.g.,
_check_migration_batch_size) decorated with
@api.constrains('migration_batch_size') that raises ValidationError from
odoo.exceptions when migration_batch_size < 1, and (2) add a DB-level safeguard
by appending an entry to _sql_constraints such as ("migration_batch_size_check",
"CHECK (migration_batch_size >= 1)", "migration_batch_size must be at least 1")
to ensure invalid values cannot be persisted.
In @fs_attachment_s3_migration/models/ir_attachment.py:
- Around line 243-253: The try/except around
self._get_binary_data_for_checksum(checksum) is too broad; replace the broad
"except Exception" with specific exception types that
_get_binary_data_for_checksum can raise (e.g., OSError, IOError, ValueError or
the library-specific exceptions) or, if truly intentional, add a clear comment
explaining why a broad catch is required; update the except clause to catch only
those specific exceptions, log the error as before using _logger.warning with
checksum and exception, then increment skipped and continue (references:
_get_binary_data_for_checksum, checksum, attachments, skipped).
- Around line 268-276: The except block currently calls _logger.error and drops
the traceback; replace the _logger.error call in that except Exception as e:
block with _logger.exception so the full stack trace is recorded (e.g.
_logger.exception("Upload failed for %s, skipping %d attachments", checksum,
len(attachments))). Keep the skipped += len(attachments) and continue logic
unchanged; referencing the except block handling 'e', '_logger', 'checksum', and
'attachments'.
- Around line 130-132: Replace the _logger.error call inside the except OSError
as e block with _logger.exception so the traceback is preserved; e.g., in the
except OSError handler in models/ir_attachment.py (the "except OSError as e:"
block that logs "Failed to write file %s: %s"), call _logger.exception("Failed
to write file %s", path) and then re-raise the exception.
- Around line 165-171: The pagination is broken because the search orders by
"checksum, id ASC" while the cursor uses ("id", ">", last_id); change the search
ordering to "id ASC" (replace the current order="checksum, id ASC") so cursor
pagination using cursor_domain, last_id and batch_size is reliable; you can keep
the checksum deduplication logic inside s3_migrate_batch as-is and continue
using the same cursor loop (the symbols to update are the search call that
currently sets order="checksum, id ASC" and the surrounding
cursor_domain/last_id logic). A sketch of an id-ordered cursor loop appears after this list.
- Line 26: The LIKE pattern uses raw storage_code in the domain tuple
("store_fname", "not like", f"{storage_code}://%"), which lets any % or _ in
storage_code be treated as wildcards; fix it by either validating storage_code
against a strict safe regex (e.g., only [A-Za-z0-9_]+) before using it, or
escape SQL LIKE metacharacters in storage_code (escape % and _ by prefixing with
the escape char and ensure the query uses the corresponding ESCAPE clause or
Odoo domain escaping) and then build the domain tuple with the escaped value
(replace the tuple ("store_fname", "not like", f"{storage_code}://%")
accordingly).
In @fs_attachment_s3_migration/tests/test_attachment_migration.py:
- Around line 127-151: The test test_enqueue_migration_returns_count is flaky
because it assumes total includes at least the 5 created attachments but does
not isolate those records; update the test to assert deterministically by either
calling Attachment.s3_enqueue_migration with a domain limited to the created
names (e.g., a name pattern or explicit list) or by querying
Attachment.search/count filtered by the created name prefix before/after the
call to verify exactly 5 were enqueued; target the
Attachment.s3_enqueue_migration call and the test function
test_enqueue_migration_returns_count to add the domain or an explicit filtered
assertion to ensure the test does not depend on pre-existing attachments.
- Around line 176-177: Replace the Unicode multiplication sign in the inline
comment with an ASCII "x": change the comment "2 batches × 10" to "2 batches x
10" (refer to the assertion line containing self.assertEqual(total, 20) in the
test_attachment_migration test).
In @fs_attachment_s3_migration/views/fs_storage_view.xml:
- Around line 19-25: The Migration Settings group is always visible but should
only show for S3 storage; update the XML group (the <group string="Migration
Settings"> that contains fields migration_batch_size, migration_channel,
migration_use_storage_force_db_rules) to include a visibility attrs so it's
invisible when protocol != 's3' (e.g., add attrs="{'invisible': [('protocol',
'!=', 's3')]}" to the group) so these S3-only settings match the same visibility
as the "Move existing attachments to S3" button.
In @fs_attachment_s3_migration/wizard/migration_wizard.py:
- Around line 25-29: The batch_size field can be left at zero which then
overwrites defaults in s3_enqueue_migration; add a non-zero default or validate
before enqueue. Either set a positive default on the fields.Integer (e.g.,
default=500) or add a pre-flight check in action_confirm that raises a UserError
if self.batch_size is falsy or <= 0; ensure this validation runs after
_compute_batch_size so the computed value is respected and references the
batch_size attribute, _compute_batch_size compute method, and the
s3_enqueue_migration call to prevent passing batch_size=0.
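For the pagination item above (order vs. cursor mismatch), a minimal sketch of an id-ordered keyset cursor. The method signature, helper names and channel name are taken from this review and are assumptions, not the module's final code:
from odoo import api, models


class IrAttachment(models.Model):
    _inherit = "ir.attachment"

    @api.model
    def s3_enqueue_migration(self, storage_code, batch_size=500, max_batches=None):
        """Enqueue migration jobs using a keyset cursor (id > last_id)."""
        domain = self._s3_migration_domain(storage_code)
        last_id = 0
        total = 0
        batches = 0
        while True:
            records = self.search(
                [*domain, ("id", ">", last_id)],
                # The order must match the cursor field: ordering by
                # "checksum, id ASC" while advancing on id would skip rows.
                order="id ASC",
                limit=batch_size,
            )
            if not records:
                break
            # Channel name is an assumption; use the one declared in
            # data/queue_job_channel_data.xml.
            records.with_delay(channel="root.s3_migration").s3_migrate_batch(
                storage_code
            )
            last_id = records[-1].id
            total += len(records)
            batches += 1
            if max_batches and batches >= max_batches:
                break
        return total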
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
fs_attachment_s3_migration/data/queue_job_channel_data.xml
fs_attachment_s3_migration/models/fs_storage.py
fs_attachment_s3_migration/models/ir_attachment.py
fs_attachment_s3_migration/tests/test_attachment_migration.py
fs_attachment_s3_migration/views/fs_storage_view.xml
fs_attachment_s3_migration/wizard/migration_wizard.py
🧰 Additional context used
🧬 Code graph analysis (3)
fs_attachment_s3_migration/tests/test_attachment_migration.py (2)
fs_attachment_s3_migration/models/ir_attachment.py (7)
_s3_migration_domain (20-35), _should_force_db (56-66), _compute_s3_path (69-73), s3_enqueue_migration (135-199), s3_migrate_batch (201-305), _get_binary_data_for_checksum (75-98), _upload_to_storage (100-132)
fs_attachment_s3_migration/models/fs_storage.py (1)
FsStorage (8-45)
fs_attachment_s3_migration/models/ir_attachment.py (3)
fs_storage/models/fs_storage.py (2)
get_by_code (203-209), fs (310-324)
fs_file/fields.py (1)
mimetype (101-116)
fs_attachment/models/ir_attachment.py (2)
_fs_mark_for_gc (553-581), _force_write_store_fname (491-498)
fs_attachment_s3_migration/wizard/migration_wizard.py (1)
fs_attachment_s3_migration/models/ir_attachment.py (1)
s3_enqueue_migration(135-199)
🪛 Ruff (0.14.10)
fs_attachment_s3_migration/tests/test_attachment_migration.py
14-14: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
23-23: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
24-24: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
25-25: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
35-35: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
37-37: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
39-39: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
41-41: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
43-43: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
60-60: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
61-61: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
68-68: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
75-75: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
81-81: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
82-82: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
83-83: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
90-90: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
91-91: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
94-94: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
95-95: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
101-101: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
104-104: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
110-110: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
116-116: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
123-123: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
151-151: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
176-176: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?
(RUF003)
177-177: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
202-202: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
209-209: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
220-220: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
224-224: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
225-225: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
232-232: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
250-250: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
255-255: Use a regular assert instead of unittest-style assertIsNone
Replace assertIsNone(...) with assert ...
(PT009)
262-262: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
269-269: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
276-276: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
286-286: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
289-289: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
296-296: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
fs_attachment_s3_migration/models/ir_attachment.py
20-20: Missing return type annotation for private function _s3_migration_domain
(ANN202)
22-28: Consider iterable unpacking instead of concatenation
Replace with iterable unpacking
(RUF005)
34-34: Consider ["!", *force_db_domain] instead of concatenation
Replace with ["!", *force_db_domain]
(RUF005)
38-38: Missing return type annotation for private function _s3_get_force_db_domain
(ANN202)
56-56: Missing return type annotation for private function _should_force_db
Add return type annotation: bool
(ANN202)
69-69: Missing return type annotation for private function _compute_s3_path
(ANN202)
75-75: Missing return type annotation for private function _get_binary_data_for_checksum
(ANN202)
100-100: Missing return type annotation for private function _upload_to_storage
Add return type annotation: None
(ANN202)
131-131: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
166-166: Consider [*domain, ("id", ">", last_id)] instead of concatenation
Replace with [*domain, ("id", ">", last_id)]
(RUF005)
245-245: Do not catch blind exception: Exception
(BLE001)
268-268: Do not catch blind exception: Exception
(BLE001)
269-274: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
fs_attachment_s3_migration/wizard/migration_wizard.py
43-43: Missing return type annotation for private function _get_default_channel
(ANN202)
49-49: Missing return type annotation for private function _compute_storage_code
Add return type annotation: None
(ANN202)
54-54: Missing return type annotation for private function _compute_batch_size
Add return type annotation: None
(ANN202)
62-62: Missing return type annotation for private function _compute_channel
Add return type annotation: None
(ANN202)
🔇 Additional comments (15)
fs_attachment_s3_migration/data/queue_job_channel_data.xml (1)
1-19: LGTM! The queue channel definition is well-structured with appropriate noupdate="1" to prevent overwrites on module upgrade. The documentation clearly explains the purpose and provides helpful guidance for configuring parallelism.
fs_attachment_s3_migration/models/fs_storage.py (1)
29-45: LGTM! The action_open_migration_wizard method is well-implemented with proper single-record validation via ensure_one() and meaningful error messaging when the storage code is missing.
fs_attachment_s3_migration/wizard/migration_wizard.py (2)
53-59: Potential logic issue in _compute_batch_size. When storage_id is set but migration_batch_size is falsy (0), the condition wizard.storage_id.migration_batch_size evaluates to False, and the wizard falls through to elif not wizard.batch_size, which only sets 500 if batch_size is also 0. This is likely fine but worth noting for clarity.
However, the current logic won't reset batch_size when switching to a storage that has migration_batch_size=0 configured (though the SQL constraint suggested earlier would prevent this scenario).
69-93: LGTM! The action_confirm method is well-implemented:
- Validates storage_code presence
- Correctly converts max_batches=0 to None for unlimited batches
- Returns a user-friendly notification with the enqueued count
- Properly closes the wizard after completion
fs_attachment_s3_migration/tests/test_attachment_migration.py (5)
1-7: LGTM! Good test file structure with clear imports and module docstring.
291-296: This test documents a potential edge case bug in _compute_s3_path. The test expects "ab//ab" for a 2-character checksum with optimize_path=True. This produces an empty string for checksum[2:4], resulting in a double slash. While the test correctly captures current behavior, this edge case may indicate that _compute_s3_path should validate checksum length or handle short checksums differently.
Consider whether short checksums are a realistic scenario in production. If not, this test documents defensive behavior. If checksums should always be a fixed length (e.g., SHA-256 hex = 64 chars), consider adding validation upstream.
10-62: LGTM - Domain tests provide good coverage. The tests verify that the migration domain correctly filters attachments by:
- Excluding already-migrated files (storage code prefix)
- Requiring checksum, type=binary, store_fname
- Excluding db_datas (DB-stored) and URL attachments
- Including res_field tautology
64-116: LGTM - Helper method tests are comprehensive. Good coverage of _should_force_db edge cases (empty rules, matching/non-matching MIME, size limits) and _compute_s3_path for both optimized and flat path structures.
298-311: LGTM - Upload test correctly verifies directory creation and file write. The mock setup properly tests that _upload_to_storage creates directories and writes the file.
fs_attachment_s3_migration/models/ir_attachment.py (6)
11-13: LGTM! Well-documented workaround for Odoo's default domain behavior. The tautology pattern correctly ensures field-linked attachments are included in searches by overriding Odoo's default ('res_field', '=', False) filter.
37-53: LGTM! The domain construction logic correctly builds OR-combined rules for force-DB exclusions. The =like operator with explicit % suffix is the appropriate Odoo pattern for prefix matching.
55-66: LGTM! The force-DB check logic correctly handles edge cases with None defaults and properly interprets limit == 0 as "no size limit" (all matching files stay in DB).
68-73: LGTM! The two-level directory optimization is a good practice for S3 key distribution. SHA checksums are always 40+ characters, making the slicing safe.
281-288: LGTM! Correct use of GC marking and store_fname update. The migration correctly marks old files for garbage collection before updating the store reference, and uses _force_write_store_fname to bypass ORM write protection. This follows the patterns from the fs_attachment module.
77-80: No changes required. The domain filter ("db_datas", "!=", False) is the standard and correct Odoo ORM pattern for checking non-empty binary fields. This follows Odoo's ir.attachment conventions and properly identifies attachments with data stored in the database column.
| migration_batch_size = fields.Integer( | ||
| default=500, | ||
| help="Number of attachments per background job batch.", | ||
| ) |
Add validation to prevent invalid batch sizes.
migration_batch_size allows zero or negative values, which could cause issues during migration (infinite loops or no-op batches). Consider adding a constraint or minimum bound.
🔧 Suggested fix using SQL constraint
migration_batch_size = fields.Integer(
default=500,
help="Number of attachments per background job batch.",
)
+
+ _sql_constraints = [
+ (
+ "migration_batch_size_positive",
+ "CHECK(migration_batch_size > 0)",
+ "Migration batch size must be greater than zero.",
+ ),
+ ]🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/fs_storage.py around lines 11 - 14,
migration_batch_size currently accepts zero or negative values; add validation
by (1) adding a Python constraint method (e.g., _check_migration_batch_size)
decorated with @api.constrains('migration_batch_size') that raises
ValidationError from odoo.exceptions when migration_batch_size < 1, and (2) add
a DB-level safeguard by appending an entry to _sql_constraints such as
("migration_batch_size_check", "CHECK (migration_batch_size >= 1)",
"migration_batch_size must be at least 1") to ensure invalid values cannot be
persisted.
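To complement the SQL CHECK suggested above, a minimal sketch of the Python-level constraint the prompt mentions; the class skeleton repeats the field definition only to keep the snippet self-contained:
from odoo import api, fields, models
from odoo.exceptions import ValidationError


class FsStorage(models.Model):
    _inherit = "fs.storage"

    migration_batch_size = fields.Integer(
        default=500,
        help="Number of attachments per background job batch.",
    )

    # DB-level safeguard, same intent as the CHECK constraint above.
    _sql_constraints = [
        (
            "migration_batch_size_positive",
            "CHECK(migration_batch_size > 0)",
            "Migration batch size must be greater than zero.",
        ),
    ]

    # ORM-level check so users get a clear error before hitting the database.
    @api.constrains("migration_batch_size")
    def _check_migration_batch_size(self):
        for storage in self:
            if storage.migration_batch_size < 1:
                raise ValidationError("Migration batch size must be at least 1.")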
| ("checksum", "!=", False), | ||
| ("type", "=", "binary"), | ||
| ("store_fname", "!=", False), | ||
| ("store_fname", "not like", f"{storage_code}://%"), |
🧩 Analysis chain
🏁 Script executed:
# First, examine the file and context around line 26
cat -n fs_attachment_s3_migration/models/ir_attachment.py | head -50
Repository: cetmix/oca-storage
Length of output: 2227
🏁 Script executed:
# Search for storage_code definition, validation, and usage
rg -n 'storage_code' --type=py -B2 -A2 | head -80
Repository: cetmix/oca-storage
Length of output: 6516
🏁 Script executed:
# Check Odoo domain usage patterns with 'not like' in the codebase
rg -n 'not like' --type=py -B1 -A1
Repository: cetmix/oca-storage
Length of output: 661
🏁 Script executed:
# Find fs.storage model definition to see code field constraints
rg -n 'class.*Storage|code.*=.*fields\.' --type=py fs_attachment_s3_migration/models/fs_storage.py -A2 | head -40
Repository: cetmix/oca-storage
Length of output: 128
🏁 Script executed:
# Search for fs.storage model in base modules to understand code field definition
fd -e py -exec rg -l "fs\.storage" {} \; | head -10
Repository: cetmix/oca-storage
Length of output: 257
🏁 Script executed:
# Look for code field definition in fs_storage.py
cat -n fs_attachment_s3_migration/models/fs_storage.py | head -100
Repository: cetmix/oca-storage
Length of output: 1868
🏁 Script executed:
# Check if there are any escaping utilities or patterns used with LIKE in the codebase
rg -n 'like.*%|escape|SQL' --type=py | grep -i 'like\|escape' | head -20
Repository: cetmix/oca-storage
Length of output: 1298
🏁 Script executed:
# Look in all Python files for fs.storage model or code field definition patterns
rg -n "class.*FsStorage|_name.*fs\.storage|code.*=" --type=py | grep -i "fs.storage\|code.*field\|_name" | head -20Repository: cetmix/oca-storage
Length of output: 1326
🏁 Script executed:
# Check Odoo domain documentation or similar patterns with escaping
rg -n "escape.*like|ESCAPE" --type=pyRepository: cetmix/oca-storage
Length of output: 44
🏁 Script executed:
# Search for storage code validation or naming rules
rg -n "storage.*code.*=" --type=py -B2 -A2 | grep -E "constraint|pattern|regex|valid" -i | head -20Repository: cetmix/oca-storage
Length of output: 124
🏁 Script executed:
# Check test files for expected storage code formats
cat -n fs_attachment_s3_migration/tests/test_attachment_migration.py
Repository: cetmix/oca-storage
Length of output: 13817
🏁 Script executed:
# Read the fs.storage model to understand code field definition
cat -n fs_storage/models/fs_storage.py | sed -n '64,120p'
Repository: cetmix/oca-storage
Length of output: 2752
🏁 Script executed:
# Search for any validation or constraints on storage code
rg -n "code.*=" fs_storage/models/fs_storage.py -A5 | head -30Repository: cetmix/oca-storage
Length of output: 701
🏁 Script executed:
# Search for SQL or LIKE escaping in the codebase
rg -n "escape|ESCAPE|sql_escape" --type=py | head -20Repository: cetmix/oca-storage
Length of output: 310
🏁 Script executed:
# Check how storage code is used in actual implementation
rg -n "storage_code" --type=py -B1 -A1 fs_attachment/models/ir_attachment.py | head -30Repository: cetmix/oca-storage
Length of output: 1396
🏁 Script executed:
# Look for any constraints or SQL_CONSTRAINTS on the code field
rg -n "_sql_constraints|_constraints" fs_storage/models/fs_storage.py -A5Repository: cetmix/oca-storage
Length of output: 223
🏁 Script executed:
# Search for how Odoo handles LIKE escaping in domains
rg -n "like.*escape|escape.*like" --type=pyRepository: cetmix/oca-storage
Length of output: 44
🏁 Script executed:
# Check if there are examples of storage code values in tests
rg -n "storage_code\s*=" --type=py -A1 | grep -E "=\s*['\"]" | head -20Repository: cetmix/oca-storage
Length of output: 242
🏁 Script executed:
# Check the actual store_fname format to understand the LIKE pattern better
rg -n "store_fname.*://" --type=py -B2 -A2 | head -30Repository: cetmix/oca-storage
Length of output: 2304
🏁 Script executed:
# Search for Odoo domain handling or escaping documentation in the codebase
rg -n "odoo.osv.expression|AND|OR" --type=py fs_attachment_s3_migration/models/ir_attachment.pyRepository: cetmix/oca-storage
Length of output: 430
🏁 Script executed:
# Look for any other uses of "like" operator to see patterns
rg -n '"like"|\'like\'' --type=py -B2 -A2 | head -40Repository: cetmix/oca-storage
Length of output: 130
🏁 Script executed:
# Check if there are any validation/constraints on storage code elsewhere
rg -n "code.*constraint|constraint.*code|code.*regex|_check_code" --type=pyRepository: cetmix/oca-storage
Length of output: 44
🏁 Script executed:
# Search for any documentation or usage of LIKE escaping in Odoo context
rg -n "like.*%" --type=py | grep -E "escape|sql|param" | head -10Repository: cetmix/oca-storage
Length of output: 44
🏁 Script executed:
# Check if there's a pattern or validation constraint we missed on the code field
rg -n "code.*=.*fields\." fs_storage/ --type=py -A10 | grep -E "pattern|regex|constraint|help" | head -20Repository: cetmix/oca-storage
Length of output: 178
🏁 Script executed:
# Verify by looking at actual store_fname examples to understand the pattern better
rg -n "store_fname\s*=" --type=py -A2 | head -30Repository: cetmix/oca-storage
Length of output: 2293
Add validation or escaping for storage_code in LIKE pattern.
The store_fname LIKE pattern on line 26 uses an unescaped storage_code. While storage codes are typically alphanumeric identifiers, if storage_code contains SQL LIKE wildcards (% or _), they will be interpreted as pattern characters rather than literals, causing incorrect filtering. Either validate that storage codes conform to a safe pattern (e.g., [a-zA-Z0-9_]+) or escape the pattern using Odoo's domain escaping where available.
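A possible shape for the escaping option, as a sketch: PostgreSQL's LIKE uses backslash as its default escape character, so escaping % and _ (and backslash itself) before building the pattern keeps the storage code literal. The helper name is illustrative, not an existing Odoo utility:
def _escape_like(value: str) -> str:
    """Escape SQL LIKE wildcards so the value matches literally
    (PostgreSQL's default LIKE escape character is a backslash)."""
    return value.replace("\\", "\\\\").replace("%", r"\%").replace("_", r"\_")


# In _s3_migration_domain, build the pattern from the escaped code:
# ("store_fname", "not like", f"{_escape_like(storage_code)}://%")
The simpler alternative is to validate storage codes against a strict pattern up front and raise a UserError for anything that contains wildcard characters.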
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/ir_attachment.py at line 26, The LIKE
pattern uses raw storage_code in the domain tuple ("store_fname", "not like",
f"{storage_code}://%"), which lets any % or _ in storage_code be treated as
wildcards; fix it by either validating storage_code against a strict safe regex
(e.g., only [A-Za-z0-9_]+) before using it, or escape SQL LIKE metacharacters in
storage_code (escape % and _ by prefixing with the escape char and ensure the
query uses the corresponding ESCAPE clause or Odoo domain escaping) and then
build the domain tuple with the escaped value (replace the tuple ("store_fname",
"not like", f"{storage_code}://%") accordingly).
| except OSError as e: | ||
| _logger.error("Failed to write file %s: %s", path, e) | ||
| raise |
🧹 Nitpick | 🔵 Trivial
Use logging.exception to preserve stack trace.
When logging an error before re-raising, use _logger.exception() to include the full traceback in the logs, which aids debugging.
Suggested fix
try:
with fs.open(path, "wb") as f:
f.write(bin_data)
except OSError as e:
- _logger.error("Failed to write file %s: %s", path, e)
+ _logger.exception("Failed to write file %s: %s", path, e)
raise📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| except OSError as e: | |
| _logger.error("Failed to write file %s: %s", path, e) | |
| raise | |
| except OSError as e: | |
| _logger.exception("Failed to write file %s: %s", path, e) | |
| raise |
🧰 Tools
🪛 Ruff (0.14.10)
131-131: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/ir_attachment.py around lines 130 - 132,
Replace the _logger.error call inside the except OSError as e block with
_logger.exception so the traceback is preserved; e.g., in the except OSError
handler in models/ir_attachment.py (the "except OSError as e:" block that logs
"Failed to write file %s: %s"), call _logger.exception("Failed to write file
%s", path) and then re-raise the exception.
| def s3_migrate_batch(self, storage_code): | ||
| """Migrate batch with checksum deduplication.""" | ||
| fs_storage = self.env["fs.storage"].sudo().get_by_code(storage_code) | ||
| if not fs_storage: | ||
| _logger.error("Storage not found: %s", storage_code) | ||
| return False |
🧹 Nitpick | 🔵 Trivial
Consider adding @api.model decorator or documenting batch invocation pattern.
s3_migrate_batch operates on self recordset (populated by browse(ids) in enqueue) but lacks the @api.model decorator that's on s3_enqueue_migration. This is correct for the current usage, but adding a docstring clarifying that this method expects to be called on a pre-populated recordset would improve maintainability.
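A possible wording for such a docstring, purely illustrative and based only on the behaviour described in this review:
def s3_migrate_batch(self, storage_code):
    """Migrate the attachments in ``self`` to the storage ``storage_code``.

    Meant to run as a queue_job on a recordset that was already selected
    and browse()-d by ``s3_enqueue_migration``; it is intentionally not an
    ``@api.model`` method because the records to migrate are the recordset
    itself. Uploads are deduplicated by checksum. Returns True when the
    batch finished and False when the target storage cannot be resolved.
    """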
| try: | ||
| bin_data = self._get_binary_data_for_checksum(checksum) | ||
| except Exception as e: # pylint: disable=broad-except | ||
| _logger.warning( | ||
| "Cannot read checksum %s: %s, skipping %d attachments", | ||
| checksum, | ||
| e, | ||
| len(attachments), | ||
| ) | ||
| skipped += len(attachments) | ||
| continue |
🧹 Nitpick | 🔵 Trivial
Consider narrowing exception types.
The broad except Exception catches every runtime error; KeyboardInterrupt and SystemExit still propagate because they derive from BaseException, not Exception, in Python 3. For production robustness, consider catching more specific exceptions such as OSError or ValueError (IOError is an alias of OSError in Python 3), or at minimum add a comment explaining why broad catching is intentional here.
🧰 Tools
🪛 Ruff (0.14.10)
245-245: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/ir_attachment.py around lines 243 - 253,
The try/except around self._get_binary_data_for_checksum(checksum) is too broad;
replace the broad "except Exception" with specific exception types that
_get_binary_data_for_checksum can raise (e.g., OSError, IOError, ValueError or
the library-specific exceptions) or, if truly intentional, add a clear comment
explaining why a broad catch is required; update the except clause to catch only
those specific exceptions, log the error as before using _logger.warning with
checksum and exception, then increment skipped and continue (references:
_get_binary_data_for_checksum, checksum, attachments, skipped).
| except Exception as e: | ||
| _logger.error( | ||
| "Upload failed for %s: %s, skipping %d attachments", | ||
| checksum, | ||
| e, | ||
| len(attachments), | ||
| ) | ||
| skipped += len(attachments) | ||
| continue |
🧹 Nitpick | 🔵 Trivial
Use logging.exception to capture full traceback.
Similar to the earlier upload error, use _logger.exception() here to preserve the stack trace for debugging failed uploads.
Suggested fix
try:
self._upload_to_storage(fs, path, bin_data)
except Exception as e:
- _logger.error(
+ _logger.exception(
"Upload failed for %s: %s, skipping %d attachments",
checksum,
e,
len(attachments),
)
skipped += len(attachments)
continue📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| except Exception as e: | |
| _logger.error( | |
| "Upload failed for %s: %s, skipping %d attachments", | |
| checksum, | |
| e, | |
| len(attachments), | |
| ) | |
| skipped += len(attachments) | |
| continue | |
| except Exception as e: | |
| _logger.exception( | |
| "Upload failed for %s: %s, skipping %d attachments", | |
| checksum, | |
| e, | |
| len(attachments), | |
| ) | |
| skipped += len(attachments) | |
| continue |
🧰 Tools
🪛 Ruff (0.14.10)
268-268: Do not catch blind exception: Exception
(BLE001)
269-274: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/ir_attachment.py around lines 268 - 276,
The except block currently calls _logger.error and drops the traceback; replace
the _logger.error call in that except Exception as e: block with
_logger.exception so the full stack trace is recorded (e.g.
_logger.exception("Upload failed for %s, skipping %d attachments", checksum,
len(attachments))). Keep the skipped += len(attachments) and continue logic
unchanged; referencing the except block handling 'e', '_logger', 'checksum', and
'attachments'.
| def test_enqueue_migration_returns_count(self): | ||
| """Test that enqueue_migration returns correct attachment count.""" | ||
| # Create test attachments | ||
| self.Attachment.create( | ||
| [ | ||
| { | ||
| "name": f"test{i}.txt", | ||
| "raw": b"test content", | ||
| } | ||
| for i in range(5) | ||
| ] | ||
| ) | ||
|
|
||
| # Mock DelayableRecordset to prevent actual queue_job creation | ||
| with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable: | ||
| mock_instance = MagicMock() | ||
| mock_delayable.return_value = mock_instance | ||
|
|
||
| total = self.Attachment.s3_enqueue_migration( | ||
| "test_s3", | ||
| batch_size=10, | ||
| ) | ||
|
|
||
| # Should return count >= 5 (our attachments + any existing ones) | ||
| self.assertGreaterEqual(total, 5) |
🧹 Nitpick | 🔵 Trivial
Test may be flaky due to pre-existing attachments.
The assertion self.assertGreaterEqual(total, 5) accounts for pre-existing attachments in the database, which is good. However, the test's reliability depends on the test database state. Consider filtering by name pattern in a follow-up assertion or using a mock domain to ensure determinism.
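One way to make the assertion deterministic, sketched under the assumption that s3_enqueue_migration enqueues exactly the records matching _s3_migration_domain (patch and MagicMock are already imported in this test file):
def test_enqueue_migration_returns_count(self):
    """Enqueued total equals the number of currently eligible attachments."""
    self.Attachment.create(
        [{"name": f"test{i}.txt", "raw": b"test content"} for i in range(5)]
    )
    # Snapshot the candidate set with the same domain the enqueue method
    # uses, so pre-existing attachments cannot make the assertion flaky.
    expected = self.Attachment.search_count(
        self.Attachment._s3_migration_domain("test_s3")
    )

    with patch("odoo.addons.queue_job.delay.DelayableRecordset") as mock_delayable:
        mock_delayable.return_value = MagicMock()
        total = self.Attachment.s3_enqueue_migration("test_s3", batch_size=10)

    self.assertEqual(total, expected)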
🧰 Tools
🪛 Ruff (0.14.10)
151-151: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/tests/test_attachment_migration.py around lines
127 - 151, The test test_enqueue_migration_returns_count is flaky because it
assumes total includes at least the 5 created attachments but does not isolate
those records; update the test to assert deterministically by either calling
Attachment.s3_enqueue_migration with a domain limited to the created names
(e.g., a name pattern or explicit list) or by querying Attachment.search/count
filtered by the created name prefix before/after the call to verify exactly 5
were enqueued; target the Attachment.s3_enqueue_migration call and the test
function test_enqueue_migration_returns_count to add the domain or an explicit
filtered assertion to ensure the test does not depend on pre-existing
attachments.
4109f14 to baad198 (Compare)
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In @fs_attachment_s3_migration/models/ir_attachment.py:
- Around line 144-149: The except block around the file write uses _logger.error
with the caught OSError but doesn't include the traceback; replace the
_logger.error call with _logger.exception to log the full traceback (e.g., in
the try/except around fs.open(path, "wb") and f.write(bin_data) swap
_logger.error("Failed to write file %s: %s", path, e) for
_logger.exception("Failed to write file %s", path)) and keep the existing raise
to re-raise the exception.
- Around line 260-293: The broad except blocks around calls to
_get_binary_data_for_checksum and _upload_to_storage should still allow
system-exiting exceptions to propagate; update each except Exception as e
handler to re-raise when e is a KeyboardInterrupt or SystemExit (i.e., if
isinstance(e, (KeyboardInterrupt, SystemExit)): raise) before logging/skipping
so normal runtime errors are swallowed for resilience but interrupts and exits
are not. Ensure you update both handlers that catch exceptions from
_get_binary_data_for_checksum and _upload_to_storage (and keep existing
logging/skipped counting behavior for other exceptions).
In @fs_attachment_s3_migration/tests/test_attachment_migration.py:
- Around line 176-177: Replace the Unicode multiplication sign '×' with the
ASCII letter 'x' in the inline comment above the assertion in
tests/test_attachment_migration.py (the comment "Should stop at 20 (2 batches ×
10)"); update it to "Should stop at 20 (2 batches x 10)" so editors/linters
don't choke on U+00D7 while leaving the assertion self.assertEqual(total, 20)
unchanged.
- Around line 231-235: The setUpClass defines an unused class attribute
cls.FsStorage; remove the unused assignment in setUpClass (inside the setUpClass
method that currently sets cls.Attachment and cls.FsStorage) so only
cls.Attachment is set, or alternatively replace the assignment by using
cls.env["fs.storage"] where actually needed; specifically remove the line
assigning cls.FsStorage to eliminate the dead reference.
- Around line 213-220: The test_migrate_batch_handles_empty_recordset currently
uses a weak assertion (self.assertIn(result, [True, False])) which always passes
for booleans; change it to assert the specific expected behavior for an empty
recordset by calling Attachment.browse([]) and asserting the precise return
value from s3_migrate_batch("test_s3") (e.g., use self.assertTrue(result) or
self.assertEqual(result, True) if the contract is a successful no-op returns
True), and update the test docstring to reflect the exact expected result.
In @fs_attachment_s3_migration/views/fs_storage_view.xml:
- Around line 19-25: Add an attrs-based visibility condition to the "Migration
Settings" group so it is only shown for S3 storages; locate the <group
string="Migration Settings"> element that contains fields migration_batch_size,
migration_channel, and migration_use_storage_force_db_rules and add
attrs="{'invisible':[('storage_protocol','!=','s3')]}" (or the project's
equivalent field name for protocol) so the group is hidden when storage_protocol
is not 's3'.
In @fs_attachment_s3_migration/wizard/migration_wizard.py:
- Around line 69-79: In action_confirm validate that self.batch_size is a
positive integer before calling ir.attachment.s3_enqueue_migration: check the
batch_size field on the wizard (self.batch_size) and if it is None, zero, or
negative raise a UserError with a clear message (e.g., "Batch size must be a
positive integer") so the enqueue loop never receives non-positive batch sizes;
update action_confirm to perform this check and only call s3_enqueue_migration
when the value is valid.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
fs_attachment_s3_migration/data/queue_job_channel_data.xml
fs_attachment_s3_migration/models/fs_storage.py
fs_attachment_s3_migration/models/ir_attachment.py
fs_attachment_s3_migration/tests/test_attachment_migration.py
fs_attachment_s3_migration/views/fs_storage_view.xml
fs_attachment_s3_migration/wizard/migration_wizard.py
🧰 Additional context used
🧬 Code graph analysis (2)
fs_attachment_s3_migration/models/ir_attachment.py (2)
fs_storage/models/fs_storage.py (2)
get_by_code (203-209), fs (310-324)
fs_attachment/models/ir_attachment.py (2)
_fs_mark_for_gc (553-581), _force_write_store_fname (491-498)
fs_attachment_s3_migration/wizard/migration_wizard.py (1)
fs_attachment_s3_migration/models/ir_attachment.py (1)
s3_enqueue_migration(152-216)
🪛 Ruff (0.14.10)
fs_attachment_s3_migration/models/ir_attachment.py
20-20: Missing return type annotation for private function _s3_migration_domain
(ANN202)
22-28: Consider iterable unpacking instead of concatenation
Replace with iterable unpacking
(RUF005)
34-34: Consider ["!", *force_db_domain] instead of concatenation
Replace with ["!", *force_db_domain]
(RUF005)
38-38: Missing return type annotation for private function _s3_get_force_db_domain
(ANN202)
56-56: Missing return type annotation for private function _should_force_db
Add return type annotation: bool
(ANN202)
69-69: Missing return type annotation for private function _compute_s3_path
(ANN202)
75-75: Missing return type annotation for private function _get_binary_data_for_checksum
(ANN202)
117-117: Missing return type annotation for private function _upload_to_storage
Add return type annotation: None
(ANN202)
148-148: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
183-183: Consider [*domain, ("id", ">", last_id)] instead of concatenation
Replace with [*domain, ("id", ">", last_id)]
(RUF005)
262-262: Do not catch blind exception: Exception
(BLE001)
285-285: Do not catch blind exception: Exception
(BLE001)
286-291: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
fs_attachment_s3_migration/wizard/migration_wizard.py
43-43: Missing return type annotation for private function _get_default_channel
(ANN202)
49-49: Missing return type annotation for private function _compute_storage_code
Add return type annotation: None
(ANN202)
54-54: Missing return type annotation for private function _compute_batch_size
Add return type annotation: None
(ANN202)
62-62: Missing return type annotation for private function _compute_channel
Add return type annotation: None
(ANN202)
fs_attachment_s3_migration/tests/test_attachment_migration.py
14-14: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
23-23: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
24-24: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
25-25: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
35-35: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
37-37: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
39-39: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
41-41: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
43-43: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
51-51: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
52-52: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
60-60: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
61-61: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
68-68: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
75-75: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
81-81: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
82-82: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
83-83: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
90-90: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
91-91: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
94-94: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
95-95: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
101-101: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
104-104: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
110-110: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
116-116: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
123-123: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
151-151: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
176-176: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?
(RUF003)
177-177: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
202-202: Use a regular assert instead of unittest-style assertGreaterEqual
Replace assertGreaterEqual(...) with assert ...
(PT009)
209-209: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
220-220: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
224-224: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
225-225: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
232-232: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
250-250: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
255-255: Use a regular assert instead of unittest-style assertIsNone
Replace assertIsNone(...) with assert ...
(PT009)
262-262: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
269-269: Use a regular assert instead of unittest-style assertTrue
Replace assertTrue(...) with assert ...
(PT009)
276-276: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
286-286: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
289-289: Use a regular assert instead of unittest-style assertFalse
Replace assertFalse(...) with assert ...
(PT009)
296-296: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🔇 Additional comments (19)
fs_attachment_s3_migration/data/queue_job_channel_data.xml (1)
1-19: LGTM! The queue job channel definition is well-structured. The noupdate="1" attribute correctly prevents overwriting operator customizations, and the documentation comment provides clear guidance on adjusting parallelism.
fs_attachment_s3_migration/models/ir_attachment.py (8)
11-13: LGTM! The RES_FIELD_DOMAIN tautology is a well-documented technique to bypass Odoo's default res_field = False filter in ir.attachment._search. The comment clearly explains its purpose.
19-35: LGTM! The domain construction correctly filters attachments eligible for migration. The negation pattern ["!"] + force_db_domain properly excludes attachments that should remain in DB storage per the configured rules.
37-54: LGTM! The domain construction for force-DB rules correctly builds an OR-combined domain from mimetype prefixes and optional size limits.
55-66: LGTM! The force-DB check logic correctly handles the case where limit == 0 means "no size limit" (all matching mimetypes stay in DB).
68-73: LGTM! The path optimization using two-level directory hashing (checksum[:2]/checksum[2:4]/checksum) is a standard pattern for distributing files across directories. Since Odoo uses SHA-256 checksums (64 hex characters), the slicing is safe.
75-115: LGTM! The multi-source fallback strategy (S3 donor → DB → local filestore) is well-designed. Using the prefetch_fields=False context prevents unnecessary field loading, and the error handling gracefully falls back through sources.
151-216: LGTM! The enqueue logic uses efficient cursor-based pagination and properly integrates with queue_job. Progress logging every 10 batches provides useful visibility without being excessive.
295-305: LGTM! The update pattern correctly uses _force_write_store_fname per-record (bypassing field protection) and efficiently batch-updates fs_filename. The GC marking ensures old filestore files are cleaned up.
fs_attachment_s3_migration/models/fs_storage.py (2)
11-27: LGTM! The migration configuration fields are well-defined with sensible defaults and clear help text. The defaults align with the migration workflow implementation.
29-45: LGTM! The wizard launcher correctly validates the storage code requirement and pre-populates the wizard with storage defaults via context.
fs_attachment_s3_migration/wizard/migration_wizard.py (3)
14-40: LGTM! The wizard fields use the editable computed pattern (compute=..., store=True, readonly=False) correctly, allowing defaults from storage while permitting user override.
42-67: LGTM! The compute methods correctly implement the fallback chain: storage setting → existing value → module default. The elif not wizard.field pattern preserves user overrides.
80-93: LGTM! The notification action follows Odoo's standard pattern for wizard completion feedback, with proper internationalization and window close behavior.
fs_attachment_s3_migration/views/fs_storage_view.xml (1)
11-17: No changes needed. The current attrs syntax is correct for Odoo 16.0. The invisible attribute with Python expressions (invisible="protocol != 's3'") is not supported until Odoo 17.0, so the suggested modernization would break compatibility with the target version.
Likely an incorrect or invalid review comment.
fs_attachment_s3_migration/tests/test_attachment_migration.py (4)
64-116: LGTM! The helper method tests are well-structured with good boundary testing for size limits (exact limit at 51200 and one byte over at 51201) and proper coverage of mime-type prefix matching.
258-269: LGTM! Simple integration test verifying the _fs_mark_for_gc method is available from the fs_attachment dependency.
298-311: LGTM! Good isolation of filesystem operations using mocks. The test properly verifies both directory creation (makedirs) and file writing (open).
291-296: Fix _compute_s3_path to handle short checksums properly. The double slash in "ab//ab" occurs because _compute_s3_path doesn't validate checksum length. With a 2-character checksum, the middle slice checksum[2:4] is empty, producing an unintended double slash. While real checksums are typically 32+ characters, the implementation should either validate minimum length or fall back to flat path storage for edge cases.
| try: | ||
| with fs.open(path, "wb") as f: | ||
| f.write(bin_data) | ||
| except OSError as e: | ||
| _logger.error("Failed to write file %s: %s", path, e) | ||
| raise |
🧹 Nitpick | 🔵 Trivial
Consider using logging.exception for error logging with traceback.
When catching and logging an exception, _logger.exception() automatically includes the full traceback, which aids debugging.
♻️ Suggested improvement
try:
with fs.open(path, "wb") as f:
f.write(bin_data)
except OSError as e:
- _logger.error("Failed to write file %s: %s", path, e)
+ _logger.exception("Failed to write file %s", path)
raise📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| try: | |
| with fs.open(path, "wb") as f: | |
| f.write(bin_data) | |
| except OSError as e: | |
| _logger.error("Failed to write file %s: %s", path, e) | |
| raise | |
| try: | |
| with fs.open(path, "wb") as f: | |
| f.write(bin_data) | |
| except OSError as e: | |
| _logger.exception("Failed to write file %s", path) | |
| raise |
🧰 Tools
🪛 Ruff (0.14.10)
148-148: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/ir_attachment.py around lines 144 - 149,
The except block around the file write uses _logger.error with the caught
OSError but doesn't include the traceback; replace the _logger.error call with
_logger.exception to log the full traceback (e.g., in the try/except around
fs.open(path, "wb") and f.write(bin_data) swap _logger.error("Failed to write
file %s: %s", path, e) for _logger.exception("Failed to write file %s", path))
and keep the existing raise to re-raise the exception.
| try: | ||
| bin_data = self._get_binary_data_for_checksum(checksum, storage_code) | ||
| except Exception as e: # pylint: disable=broad-except | ||
| _logger.warning( | ||
| "Cannot read checksum %s: %s, skipping %d attachments", | ||
| checksum, | ||
| e, | ||
| len(attachments), | ||
| ) | ||
| skipped += len(attachments) | ||
| continue | ||
|
|
||
| if not bin_data: | ||
| _logger.warning( | ||
| "No data for checksum %s, skipping %d attachments", | ||
| checksum, | ||
| len(attachments), | ||
| ) | ||
| skipped += len(attachments) | ||
| continue | ||
|
|
||
| path = self._compute_s3_path(checksum, optimize_path) | ||
|
|
||
| try: | ||
| self._upload_to_storage(fs, path, bin_data) | ||
| except Exception as e: | ||
| _logger.error( | ||
| "Upload failed for %s: %s, skipping %d attachments", | ||
| checksum, | ||
| e, | ||
| len(attachments), | ||
| ) | ||
| skipped += len(attachments) | ||
| continue |
🧹 Nitpick | 🔵 Trivial
Broad exception handling is intentional but verify error propagation needs.
The except Exception blocks at lines 262 and 285 catch all errors to allow batch processing to continue despite individual failures. This is a valid pattern for migration jobs where partial progress is preferable to complete failure.
Note that KeyboardInterrupt and SystemExit already propagate, since they derive from BaseException rather than Exception; the real question is whether unexpected programming errors should also be swallowed. If you want to be more precise while still catching most runtime errors:
♻️ More specific exception handling
- except Exception as e: # pylint: disable=broad-except
+ except (OSError, ValueError, RuntimeError) as e:
_logger.warning(
"Cannot read checksum %s: %s, skipping %d attachments",Alternatively, the current approach is acceptable if the intent is maximum resilience during migration.
🧰 Tools
🪛 Ruff (0.14.10)
262-262: Do not catch blind exception: Exception
(BLE001)
285-285: Do not catch blind exception: Exception
(BLE001)
286-291: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/models/ir_attachment.py around lines 260 - 293,
The broad except blocks around calls to _get_binary_data_for_checksum and
_upload_to_storage should still allow system-exiting exceptions to propagate;
update each except Exception as e handler to re-raise when e is a
KeyboardInterrupt or SystemExit (i.e., if isinstance(e, (KeyboardInterrupt,
SystemExit)): raise) before logging/skipping so normal runtime errors are
swallowed for resilience but interrupts and exits are not. Ensure you update
both handlers that catch exceptions from _get_binary_data_for_checksum and
_upload_to_storage (and keep existing logging/skipped counting behavior for
other exceptions).
| def test_migration_domain_excludes_already_migrated(self): | ||
| """Test that migration domain excludes already-migrated files.""" | ||
| domain = self.Attachment._s3_migration_domain("test_s3") | ||
|
|
||
| domain_str = str(domain) | ||
| self.assertIn("test_s3", domain_str) | ||
| self.assertIn("store_fname", domain_str) | ||
| self.assertIn("not like", domain_str) | ||
|
|
||
| def test_migration_domain_completeness(self): | ||
| """Verify domain includes all required filters.""" | ||
| domain = self.Attachment._s3_migration_domain("test_s3") | ||
|
|
||
| # Convert to string for easier checking | ||
| domain_str = str(domain) | ||
|
|
||
| # Must filter by checksum (has binary content) | ||
| self.assertIn("checksum", domain_str) | ||
| # Must filter by type=binary | ||
| self.assertIn("type", domain_str) | ||
| # Must filter by store_fname (not in target) | ||
| self.assertIn("store_fname", domain_str) | ||
| # Must filter by db_datas=False (not in database) | ||
| self.assertIn("db_datas", domain_str) | ||
| # Must include res_field tautology | ||
| self.assertIn("res_field", domain_str) |
🧹 Nitpick | 🔵 Trivial
String-based domain assertions are fragile.
The tests use assertIn on str(domain) which can lead to false positives. For example, assertIn("type", domain_str) would pass even if "type" appeared only as part of a field name like "attachment_type". The later tests (lines 50-61) demonstrate a better approach by filtering and checking actual tuples.
Consider using tuple-based assertions consistently for more reliable tests:
♻️ Suggested improvement
def test_migration_domain_completeness(self):
"""Verify domain includes all required filters."""
domain = self.Attachment._s3_migration_domain("test_s3")
- # Convert to string for easier checking
- domain_str = str(domain)
-
- # Must filter by checksum (has binary content)
- self.assertIn("checksum", domain_str)
- # Must filter by type=binary
- self.assertIn("type", domain_str)
- # Must filter by store_fname (not in target)
- self.assertIn("store_fname", domain_str)
- # Must filter by db_datas=False (not in database)
- self.assertIn("db_datas", domain_str)
- # Must include res_field tautology
- self.assertIn("res_field", domain_str)
+ # Extract field names from domain tuples
+ field_names = {d[0] for d in domain if isinstance(d, tuple)}
+
+ # Must filter by these fields
+ self.assertIn("checksum", field_names)
+ self.assertIn("type", field_names)
+ self.assertIn("store_fname", field_names)
+ self.assertIn("db_datas", field_names)
+ self.assertIn("res_field", field_names)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def test_migration_domain_excludes_already_migrated(self): | |
| """Test that migration domain excludes already-migrated files.""" | |
| domain = self.Attachment._s3_migration_domain("test_s3") | |
| domain_str = str(domain) | |
| self.assertIn("test_s3", domain_str) | |
| self.assertIn("store_fname", domain_str) | |
| self.assertIn("not like", domain_str) | |
| def test_migration_domain_completeness(self): | |
| """Verify domain includes all required filters.""" | |
| domain = self.Attachment._s3_migration_domain("test_s3") | |
| # Convert to string for easier checking | |
| domain_str = str(domain) | |
| # Must filter by checksum (has binary content) | |
| self.assertIn("checksum", domain_str) | |
| # Must filter by type=binary | |
| self.assertIn("type", domain_str) | |
| # Must filter by store_fname (not in target) | |
| self.assertIn("store_fname", domain_str) | |
| # Must filter by db_datas=False (not in database) | |
| self.assertIn("db_datas", domain_str) | |
| # Must include res_field tautology | |
| self.assertIn("res_field", domain_str) | |
| def test_migration_domain_completeness(self): | |
| """Verify domain includes all required filters.""" | |
| domain = self.Attachment._s3_migration_domain("test_s3") | |
| # Extract field names from domain tuples | |
| field_names = {d[0] for d in domain if isinstance(d, tuple)} | |
| # Must filter by these fields | |
| self.assertIn("checksum", field_names) | |
| self.assertIn("type", field_names) | |
| self.assertIn("store_fname", field_names) | |
| self.assertIn("db_datas", field_names) | |
| self.assertIn("res_field", field_names) |
🧰 Tools
🪛 Ruff (0.14.10)
23-23: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
24-24: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
25-25: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
35-35: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
37-37: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
39-39: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
41-41: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
43-43: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
# Should stop at 20 (2 batches × 10)
self.assertEqual(total, 20)
Replace ambiguous multiplication sign with ASCII 'x'.
The comment uses the Unicode multiplication sign × (U+00D7) which can cause issues with some editors and linters. Use the ASCII letter x instead.
🔧 Proposed fix
- # Should stop at 20 (2 batches × 10)
+ # Should stop at 20 (2 batches x 10)
🧰 Tools
🪛 Ruff (0.14.10)
176-176: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?
(RUF003)
177-177: Use a regular assert instead of unittest-style assertEqual
Replace assertEqual(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/tests/test_attachment_migration.py around lines
176 - 177, Replace the Unicode multiplication sign '×' with the ASCII letter 'x'
in the inline comment above the assertion in tests/test_attachment_migration.py
(the comment "Should stop at 20 (2 batches × 10)"); update it to "Should stop at
20 (2 batches x 10)" so editors/linters don't choke on U+00D7 while leaving the
assertion self.assertEqual(total, 20) unchanged.
def test_migrate_batch_handles_empty_recordset(self):
    """Test that empty recordset doesn't crash."""
    empty = self.Attachment.browse([])

    # Should return True without error
    result = empty.s3_migrate_batch("test_s3")
    # Empty recordset may return False if storage not found, or True
    self.assertIn(result, [True, False])
Weak assertion always passes for boolean results.
The assertion self.assertIn(result, [True, False]) will pass for any boolean value, making this test ineffective at verifying behavior. Consider asserting the expected specific behavior for an empty recordset.
♻️ Suggested improvement
def test_migrate_batch_handles_empty_recordset(self):
"""Test that empty recordset doesn't crash."""
empty = self.Attachment.browse([])
- # Should return True without error
+ # Empty recordset should complete gracefully (no-op)
result = empty.s3_migrate_batch("test_s3")
- # Empty recordset may return False if storage not found, or True
- self.assertIn(result, [True, False])
+ # Verify it returns a boolean without raising an exception
+ self.assertIsInstance(result, bool)
Alternatively, if the expected behavior is defined (e.g., empty recordset should return True as a successful no-op), assert that specific value.
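A minimal sketch of that stricter variant, assuming the contract is that an empty recordset is a successful no-op returning True:
result = empty.s3_migrate_batch("test_s3")
# Assumes the no-op contract described above; keep assertIsInstance if that contract is undefined
self.assertTrue(result)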
🧰 Tools
🪛 Ruff (0.14.10)
220-220: Use a regular assert instead of unittest-style assertIn
Replace assertIn(...) with assert ...
(PT009)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/tests/test_attachment_migration.py around lines
213 - 220, The test_migrate_batch_handles_empty_recordset currently uses a weak
assertion (self.assertIn(result, [True, False])) which always passes for
booleans; change it to assert the specific expected behavior for an empty
recordset by calling Attachment.browse([]) and asserting the precise return
value from s3_migrate_batch("test_s3") (e.g., use self.assertTrue(result) or
self.assertEqual(result, True) if the contract is a successful no-op returns
True), and update the test docstring to reflect the exact expected result.
@classmethod
def setUpClass(cls):
    super().setUpClass()
    cls.Attachment = cls.env["ir.attachment"]
    cls.FsStorage = cls.env["fs.storage"]
🧹 Nitpick | 🔵 Trivial
Remove unused FsStorage reference.
cls.FsStorage is defined but never used in any test method within this class.
♻️ Proposed fix
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.Attachment = cls.env["ir.attachment"]
- cls.FsStorage = cls.env["fs.storage"]
🧰 Tools
🪛 Ruff (0.14.10)
232-232: Missing return type annotation for classmethod setUpClass
Add return type annotation: None
(ANN206)
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/tests/test_attachment_migration.py around lines
231 - 235, The setUpClass defines an unused class attribute cls.FsStorage;
remove the unused assignment in setUpClass (inside the setUpClass method that
currently sets cls.Attachment and cls.FsStorage) so only cls.Attachment is set,
or alternatively replace the assignment by using cls.env["fs.storage"] where
actually needed; specifically remove the line assigning cls.FsStorage to
eliminate the dead reference.
| <xpath expr="//sheet" position="inside"> | ||
| <group string="Migration Settings"> | ||
| <field name="migration_batch_size" /> | ||
| <field name="migration_channel" /> | ||
| <field name="migration_use_storage_force_db_rules" /> | ||
| </group> | ||
| </xpath> |
🧹 Nitpick | 🔵 Trivial
Consider adding visibility control to the Migration Settings group.
The Migration Settings group is currently visible for all storage protocols, though the migration button is only available for S3. This could be intentional (pre-configure settings before switching protocol), but if not, consider adding an invisible condition.
♻️ Optional: Hide settings for non-S3 storages
<xpath expr="//sheet" position="inside">
- <group string="Migration Settings">
+ <group string="Migration Settings" invisible="protocol != 's3'">
<field name="migration_batch_size" />
<field name="migration_channel" />
<field name="migration_use_storage_force_db_rules" />
</group>
</xpath>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <xpath expr="//sheet" position="inside"> | |
| <group string="Migration Settings"> | |
| <field name="migration_batch_size" /> | |
| <field name="migration_channel" /> | |
| <field name="migration_use_storage_force_db_rules" /> | |
| </group> | |
| </xpath> | |
| <xpath expr="//sheet" position="inside"> | |
| <group string="Migration Settings" invisible="protocol != 's3'"> | |
| <field name="migration_batch_size" /> | |
| <field name="migration_channel" /> | |
| <field name="migration_use_storage_force_db_rules" /> | |
| </group> | |
| </xpath> |
🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/views/fs_storage_view.xml around lines 19 - 25,
Add an attrs-based visibility condition to the "Migration Settings" group so it
is only shown for S3 storages; locate the <group string="Migration Settings">
element that contains fields migration_batch_size, migration_channel, and
migration_use_storage_force_db_rules and add
attrs="{'invisible':[('storage_protocol','!=','s3')]}" (or the project's
equivalent field name for protocol) so the group is hidden when storage_protocol
is not 's3'.
def action_confirm(self):
    self.ensure_one()
    if not self.storage_code:
        raise UserError(_("Storage code is required."))
    max_batches = self.max_batches or None
    total = self.env["ir.attachment"].s3_enqueue_migration(
        self.storage_code,
        batch_size=self.batch_size,
        max_batches=max_batches,
        channel=self.channel,
    )
Consider validating batch_size before enqueuing.
The wizard doesn't validate that batch_size is positive. A zero or negative value could cause issues in the enqueue loop.
🐛 Proposed fix
def action_confirm(self):
self.ensure_one()
if not self.storage_code:
raise UserError(_("Storage code is required."))
+ if self.batch_size <= 0:
+ raise UserError(_("Batch size must be a positive number."))
max_batches = self.max_batches or None
total = self.env["ir.attachment"].s3_enqueue_migration(🤖 Prompt for AI Agents
In @fs_attachment_s3_migration/wizard/migration_wizard.py around lines 69 - 79,
In action_confirm validate that self.batch_size is a positive integer before
calling ir.attachment.s3_enqueue_migration: check the batch_size field on the
wizard (self.batch_size) and if it is None, zero, or negative raise a UserError
with a clear message (e.g., "Batch size must be a positive integer") so the
enqueue loop never receives non-positive batch sizes; update action_confirm to
perform this check and only call s3_enqueue_migration when the value is valid.
Module for migrating existing filestore attachments into the S3 backend with queue_job orchestration
Before this change, each attachment was processed individually, causing duplicate S3 uploads for files with identical content. The migration domain was incomplete (missing type, checksum and db_datas filters), and writing to the datas field risked data loss if source files were unreadable. After this change, attachments are grouped by checksum within each batch: each unique file is uploaded once and all related attachment records are updated to point to the same S3 path. The domain now properly excludes URL attachments, DB-stored files, and already-migrated files. Record updates use _force_write_store_fname instead of writing to datas, eliminating the data-loss risk. A configurable option to respect force-DB rules was added, and error logging was updated. Task: 5048
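A rough sketch of the checksum-grouping idea described above; upload_once and repoint_record are placeholder callables standing in for the module's real helpers (e.g. _upload_to_storage / _force_write_store_fname), so this shows the shape of the batch loop rather than the exact implementation:
from collections import defaultdict

def migrate_batch_sketch(attachments, storage_code, upload_once, repoint_record):
    """Illustrative only: one upload per unique checksum, many record updates."""
    groups = defaultdict(list)
    for att in attachments:
        groups[att.checksum].append(att)

    for group in groups.values():
        # Upload the binary content once per distinct checksum
        store_fname = upload_once(group[0], storage_code)
        # Point every attachment in the group at the same stored file,
        # without writing to the datas field
        for att in group:
            repoint_record(att, store_fname)
    return True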
baad198 to 0eb52f1
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@fs_attachment_s3_migration/__init__.py`:
- Around line 1-2: Add the same license header used in other package __init__.py
files to this module's __init__.py so headers are consistent; place the header
comment block at the top of fs_attachment_s3_migration/__init__.py above the
existing import statements (the lines "from . import models" and "from . import
wizard") so the file begins with the license header followed by the two imports.
from . import models
from . import wizard
🧹 Nitpick | 🔵 Trivial
Missing license header for consistency.
Other __init__.py files in this module include a license header (e.g., tests/__init__.py, models/__init__.py, wizard/__init__.py). Consider adding the same header here for consistency.
📝 Suggested fix
+# Copyright 2025 Cetmix OU
+# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
+
from . import models
from . import wizard
🤖 Prompt for AI Agents
In `@fs_attachment_s3_migration/__init__.py` around lines 1 - 2, Add the same
license header used in other package __init__.py files to this module's
__init__.py so headers are consistent; place the header comment block at the top
of fs_attachment_s3_migration/__init__.py above the existing import statements
(the lines "from . import models" and "from . import wizard") so the file begins
with the license header followed by the two imports.
Helper module that migrates existing attachments to S3.
Copy of OCA#534 for internal review.