Faster reading and writing of subsets of an HDF5 file can be enabled by chunking the datasets. To benefit from this, the chunk size has to be chosen well. Since the size of a snapshot is not known before the first snapshot in parameter space has been computed, the snapshot computation needs to determine the chunk size automatically once the first output is available. In a parallel run of the snapshot computation, chunks must not allocate much space that never gets filled with data, and results from different processes must remain mergeable. Currently, the snapshot computation uses a fixed chunk size.
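A minimal sketch of the idea, not the project's actual API: the chunk shape is derived from the first computed snapshot (targeting roughly 1 MiB per chunk, a common HDF5 guideline), and the dataset is created resizable along the snapshot axis so later snapshots only allocate new chunks. It assumes h5py; the names `chunk_shape_for`, `write_snapshots`, and `compute`-style placeholders are hypothetical.

```python
# Sketch only: derive an HDF5 chunk size from the first snapshot and
# write all snapshots into one resizable, chunked dataset. Assumes h5py.
import h5py
import numpy as np

TARGET_CHUNK_BYTES = 1024 * 1024  # ~1 MiB per chunk (common HDF5 guideline)


def chunk_shape_for(snapshot: np.ndarray) -> tuple:
    """Pack one or more whole snapshots per chunk, capped near the byte target."""
    per_chunk = max(1, TARGET_CHUNK_BYTES // max(snapshot.nbytes, 1))
    return (per_chunk, *snapshot.shape)


def write_snapshots(filename: str, snapshots) -> None:
    """Write an iterable of equally shaped snapshots into one chunked dataset."""
    it = iter(snapshots)
    first = np.asarray(next(it))  # chunk size is only known after the first result
    with h5py.File(filename, "w") as f:
        dset = f.create_dataset(
            "snapshots",
            shape=(1, *first.shape),
            maxshape=(None, *first.shape),  # unlimited along the snapshot axis
            chunks=chunk_shape_for(first),
            dtype=first.dtype,
        )
        dset[0] = first
        for i, snap in enumerate(it, start=1):
            dset.resize(i + 1, axis=0)  # only newly touched chunks allocate storage
            dset[i] = np.asarray(snap)


if __name__ == "__main__":
    # Hypothetical usage: ten random "snapshots" of length 1000.
    write_snapshots("snapshots.h5", (np.random.rand(1000) for _ in range(10)))
```

For a parallel run, the same chunk shape could be reused by every process so that per-process files stay chunk-aligned and can be merged (or written via a virtual dataset) without half-empty chunks; that part is deliberately left out of the sketch.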