
[BUG] Mixed type format inconsistency in iter1 when training DPA-1 model #1687

Open
@chenggoj

Description


Bug summary

When training a DPA-1 neural network potential with DP-GEN (v0.13.0) and the TensorFlow backend, I ran into a mixed type format inconsistency in iter1. iter0 (the training, exploration, and labeling stages) completed successfully, but training in iter1 failed with a data format mismatch: the labeled data collected in iter0.02.fp is kept in the standard format rather than being converted to the mixed type format of the initial data, so the iter1 training input mixes the two formats. I would expect DP-GEN to convert the iter0.02.fp data to the mixed type format automatically, keeping it consistent with the initial data.
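A minimal sketch of a manual workaround I am considering (untested; the iteration paths and the output directory name are just my local layout): collect the iter0.02.fp systems with dpdata and rewrite them in the mixed type format before iter1 training picks them up. I am assuming dpdata's "deepmd/npy/mixed" format and its to_deepmd_npy_mixed helper on MultiSystems here.

```python
# Untested workaround sketch: rewrite the iter0 labeled data (standard
# deepmd/npy format) in the mixed type format so that iter1 training sees
# a single consistent format. Paths follow my local DP-GEN layout.
import glob

import dpdata

ms = dpdata.MultiSystems()
for sys_dir in glob.glob("iter.000000/02.fp/data.*"):
    # Each collected system was written by DP-GEN in the standard deepmd/npy format.
    ms.append(dpdata.LabeledSystem(sys_dir, fmt="deepmd/npy"))

# "deepmd/npy/mixed" is dpdata's mixed type format (per-set real_atom_types.npy);
# to_deepmd_npy_mixed is the MultiSystems helper for it, as far as I know.
ms.to_deepmd_npy_mixed("iter.000000/02.fp/data_mixed")
```

The original data.* directories would then have to be replaced by (or the iter1 training systems pointed at) the converted copies; I have not verified how DP-GEN resolves these paths in iter1.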

Environment

  • DP-GEN version: 0.13.0
  • DeePMD-kit backend: TensorFlow
  • Model type: DPA-1
  • Data format: Multisystem mixed type

Error Message

AssertionError: if one of the system is of mixed_type format, then all of the systems should be of mixed_type format!
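To see which systems trigger this assertion, I used a quick check based on the fact that the mixed type format stores a real_atom_types.npy file in every set.* directory, while the standard deepmd/npy format does not (a sketch; the glob patterns are my guesses at the iter1 training data layout):

```python
# Sketch: report which data directories are in mixed type format and which
# are not, by looking for real_atom_types.npy inside each set.* directory.
# The glob patterns are guesses at my iter1 training data layout.
import glob
import os

candidates = glob.glob("iter.000001/00.train/data.init/*") + glob.glob(
    "iter.000001/00.train/data.iters/iter.000000/02.fp/data.*"
)
for sys_dir in candidates:
    sets = glob.glob(os.path.join(sys_dir, "set.*"))
    if not sets:
        continue  # not a deepmd data directory
    mixed = all(os.path.isfile(os.path.join(s, "real_atom_types.npy")) for s in sets)
    print(f"{sys_dir}: {'mixed type' if mixed else 'standard'} format")
```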

I would appreciate any guidance on resolving this issue, or confirmation that this is a bug that needs to be fixed.

DP-GEN Version

0.13.0

Platform, Python Version, Remote Platform, etc

No response

Input Files, Running Commands, Error Log, etc.

No inputs.

Steps to Reproduce

  1. Initialize the training data in the multisystem mixed type dp data format (see the sketch after this list)
  2. Run iter0 (completes successfully)
  3. Enter iter1, where the error occurs
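For step 1, the initial data was prepared along these lines (a sketch using dpdata; the OUTCAR paths and output directory are placeholders for my actual first-principles outputs):

```python
# Sketch of step 1: collect labeled first-principles data into a MultiSystems
# object and write it out in the mixed type format. Input paths are placeholders.
import glob

import dpdata

ms = dpdata.MultiSystems()
for outcar in glob.glob("init_calcs/*/OUTCAR"):
    ms.append(dpdata.LabeledSystem(outcar, fmt="vasp/outcar"))

# Produces one real_atom_types.npy per set.* directory, marking the data as mixed type.
ms.to_deepmd_npy_mixed("init_data_mixed")
```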

Further Information, Files, and Links

The error suggests that DP-GEN is not properly carrying over the mixed type format configuration from iter0 to iter1's training data.

    Labels

    bug (Something isn't working)
