Showing 53 changed files with 1,482 additions and 390 deletions.
10 changes: 10 additions & 0 deletions .travis.yml
Original file line number Diff line number Diff line change
@@ -31,6 +31,16 @@ matrix:
- CC=gcc-4.8
- CXX=g++-4.8
- TENSORFLOW_VERSION=1.14
- python: 3.6
env:
- CC=gcc-5
- CXX=g++-5
- TENSORFLOW_VERSION=1.14
- python: 3.6
env:
- CC=gcc-8
- CXX=g++-8
- TENSORFLOW_VERSION=1.14
- python: 3.7
env:
- CC=gcc-5
45 changes: 35 additions & 10 deletions README.md
@@ -9,8 +9,11 @@
- [License and credits](#license-and-credits)
- [Deep Potential in a nutshell](#deep-potential-in-a-nutshell)
- [Download and install](#download-and-install)
- [Easy installation methods](#easy-installation-methods)
- [With Docker](#with-docker)
- [With conda](#with-conda)
- [Offline packages](#offline-packages)
- [Install the python interface](#install-the-python-interface)
- [Install the Tensorflow's python interface](#install-the-tensorflows-python-interface)
- [Install the DeePMD-kit's python interface](#install-the-deepmd-kits-python-interface)
- [Install the C++ interface](#install-the-c-interface)
@@ -83,11 +86,29 @@ In addition to building up potential energy models, DeePMD-kit can also be used

Please follow our [GitHub](https://github.com/deepmodeling/deepmd-kit) webpage to see the latest released version and the development version.

## Easy installation methods
There are various easy methods to install DeePMD-kit. Choose the one you prefer. If you want to build it yourself, jump to the next two sections.

### With Docker
A Docker image for installing DeePMD-kit on CentOS 7 is available [here](https://github.com/frankhan91/deepmd-kit_docker).

### With conda
DeePMD-kit is available via [conda](https://github.com/conda/conda). Install [Anaconda](https://www.anaconda.com/distribution/#download-section) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html) first.

To install the CPU version:
```bash
conda install deepmd-kit=*=*cpu lammps-dp=*=*cpu -c deepmodeling
```

To install the GPU version containing [CUDA 10.0](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#binary-compatibility__table-toolkit-driver):
```bash
conda install deepmd-kit=*=*gpu lammps-dp=*=*gpu -c deepmodeling
```

### Offline packages
Offline packages for both the CPU and GPU versions are available on [the Releases page](https://github.com/deepmodeling/deepmd-kit/releases).

## Install the python interface
### Install the Tensorflow's python interface
First, check the python version and compiler version on your machine
```bash
@@ -102,6 +123,14 @@ source $tensorflow_venv/bin/activate
pip install --upgrade pip
pip install --upgrade tensorflow==1.14.0
```
Note that every time a new shell is started and one wants to use `DeePMD-kit`, the virtual environment should be activated by
```bash
source $tensorflow_venv/bin/activate
```
If one wants to exit the virtual environment, one can do
```bash
deactivate
```
If one has multiple python interpreters named like python3.x, the desired one can be specified by, for example
```bash
virtualenv -p python3.7 $tensorflow_venv
@@ -483,7 +512,7 @@ Running an MD simulation with LAMMPS is simpler. In the LAMMPS input file, one n
pair_style deepmd graph.pb
pair_coeff
```
where `graph.pb` is the file name of the frozen model. `pair_coeff` should be left blank. It should be noted that LAMMPS counts atom types starting from 1; therefore, each LAMMPS atom type is first decreased by 1 and then passed into the DeePMD-kit engine to compute the interactions. [A detailed documentation of this pair style is available.](doc/lammps-pair-style-deepmd.md)
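For illustration, a minimal LAMMPS input using this pair style might be sketched as follows; the data file name and the rest of the setup are hypothetical, only the two `pair_*` lines come from the text above:
```
units        metal
atom_style   atomic
read_data    water.data   # hypothetical data file; LAMMPS types 1 and 2 map to DeePMD-kit types 0 and 1
pair_style   deepmd graph.pb
pair_coeff
```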

### Long-range interaction
The reciprocal space part of the long-range interaction can be calculated by LAMMPS command `kspace_style`. To use it with DeePMD-kit, one writes
@@ -533,11 +562,7 @@ If other unexpected problems occur, you're welcome to contact us for help.

When the version of DeePMD-kit used to train the model differs from that of the DeePMD-kit running the MD simulation, one has a problem of model compatibility.

DeePMD-kit guarantees that codes with the same major and minor revisions are compatible. That is to say, v0.12.5 is compatible with v0.12.0, but is not compatible with v0.11.0 or v1.0.0. One way of fixing an incompatibility is to restart the training with the new revision and a slightly increased `stop_batch`, say from 1,000,000 to 1,001,000 if the `save_freq` was set to 1,000. Typically one runs
```bash
dp train --restart model.ckpt revised_input.json
```
and freezes the new model.
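The compatibility rule above can be expressed as a small check; this is only a sketch, assuming version strings of the form `v0.12.5` (the function name is hypothetical):

```python
# Minimal sketch of the stated compatibility rule: two DeePMD-kit versions
# are treated as compatible exactly when their major and minor revisions agree.
def versions_compatible(version_a, version_b):
    def major_minor(version):
        # "v0.12.5" -> (0, 12)
        parts = version.lstrip("v").split(".")
        return int(parts[0]), int(parts[1])
    return major_minor(version_a) == major_minor(version_b)
```

Under this rule, v0.12.5 and v0.12.0 are compatible, while v0.12.5 paired with v0.11.0 or v1.0.0 is not.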

## Installation: inadequate versions of gcc/g++
Sometimes the default gcc/g++ version is older than 4.9. If a gcc/g++ newer than 4.9 (say, 7.2.0) is available, you may choose to use it by doing
9 changes: 5 additions & 4 deletions deepmd/__init__.py
@@ -1,8 +1,9 @@
from .env import set_mkl
from .DeepEval import DeepEval
from .DeepPot import DeepPot
from .DeepPolar import DeepPolar
from .DeepWFC import DeepWFC
from .DeepEval import DeepEval
from .DeepPot import DeepPot
from .DeepDipole import DeepDipole
from .DeepPolar import DeepPolar
from .DeepWFC import DeepWFC

set_mkl()

12 changes: 7 additions & 5 deletions doc/lammps-pair-style-deepmd.md
@@ -35,13 +35,13 @@ This pair style takes the deep potential defined in a model file that usually ha

The model deviation evaluates the consistency of the force predictions from multiple models. By default, only the maximal, minimal and average model deviations are output. If the key `atomic` is set, then the model deviation of the force prediction of each atom will be output.

By default, the model deviation is output as an absolute value. If the keyword `relative` is set, then the relative model deviation will be output. The relative model deviation of the force on atom `i` is defined by
```math
|Df|
Ef = -------------
|f| + level
|Df_i|
Ef_i = -------------
|f_i| + level
```
where `Df_i` is the absolute model deviation of the force on atom `i`, `|f_i|` is the norm of the force and `level` is provided as the parameter of the keyword `relative`.
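As a concrete illustration, the quantity can be computed as follows. This is only a sketch, assuming `Df_i` is the root-mean-square deviation of the model forces from their mean (the actual DeePMD-kit definition may differ), and all names are hypothetical:

```python
import math

# Sketch: relative model deviation of the force on each atom, given the
# force predictions of several models.
# forces[m][i] = (fx, fy, fz) predicted by model m for atom i.
def relative_model_deviation(forces, level):
    n_models = len(forces)
    n_atoms = len(forces[0])
    result = []
    for i in range(n_atoms):
        # mean force on atom i over all models
        mean = [sum(forces[m][i][d] for m in range(n_models)) / n_models
                for d in range(3)]
        # Df_i: assumed root-mean-square deviation of forces from the mean
        msd = sum(sum((forces[m][i][d] - mean[d]) ** 2 for d in range(3))
                  for m in range(n_models)) / n_models
        df = math.sqrt(msd)
        # Ef_i = Df_i / (|f_i| + level)
        fnorm = math.sqrt(sum(c * c for c in mean))
        result.append(df / (fnorm + level))
    return result
```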


## Restrictions
@@ -50,6 +50,8 @@ where `Df` is the model deviation of a force, `|f|` is the norm of the force and

- The `atom_style` of the system should be `atomic`.

- When the `atomic` keyword of `deepmd` is set, this pair style should not be used with MPI parallelization.


[DP]:https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.143001
[DP-SE]:https://arxiv.org/abs/1805.09003
4 changes: 2 additions & 2 deletions examples/fparam/data/.gitignore
@@ -1,2 +1,2 @@
*raw

*.raw
convert_aparam.py
Binary file not shown.
1 change: 1 addition & 0 deletions examples/fparam/train/input.json
@@ -1,6 +1,7 @@
{
"_comment": " model parameters",
"model" : {
"data_stat_nbatch": 1,
"descriptor": {
"type": "se_a",
"sel": [60],
63 changes: 63 additions & 0 deletions examples/fparam/train/input_aparam.json
@@ -0,0 +1,63 @@
{
"_comment": " model parameters",
"model" : {
"data_stat_nbatch": 1,
"descriptor": {
"type": "se_a",
"sel": [60],
"rcut_smth": 1.80,
"rcut": 6.00,
"neuron": [25, 50, 100],
"resnet_dt": false,
"axis_neuron": 8,
"seed": 1
},
"fitting_net" : {
"neuron": [120, 120, 120],
"resnet_dt": true,
"numb_aparam": 1,
"seed": 1
}
},

"loss" : {
"start_pref_e": 0.02,
"limit_pref_e": 1,
"start_pref_f": 1000,
"limit_pref_f": 1,
"start_pref_v": 0,
"limit_pref_v": 0
},

"learning_rate" : {
"start_lr": 0.001,
"decay_steps": 5000,
"decay_rate": 0.95
},

"_comment": " training controls",
"training" : {
"systems": ["../data/e3000_i2000/", "../data/e8000_i2000/"],
"set_prefix": "set",
"stop_batch": 1000000,
"batch_size": 1,

"seed": 1,

"_comment": " display and restart",
"_comment": " frequencies counted in batch",
"disp_file": "lcurve.out",
"disp_freq": 100,
"numb_test": 10,
"save_freq": 1000,
"save_ckpt": "model.ckpt",
"load_ckpt": "model.ckpt",
"disp_training":true,
"time_training":true,
"profiling": false,
"profiling_file": "timeline.json"
},

"_comment": "that's all"
}

1 change: 1 addition & 0 deletions examples/water/train/polar.json
@@ -3,6 +3,7 @@
"_comment": " model parameters",
"model":{
"type_map": ["O", "H"],
"data_stat_nbatch": 1,
"descriptor": {
"type": "loc_frame",
"sel_a": [16, 32],
6 changes: 4 additions & 2 deletions examples/water/train/polar_se_a.json
@@ -3,6 +3,7 @@
"_comment": " model parameters",
"model":{
"type_map": ["O", "H"],
"data_stat_nbatch": 1,
"descriptor" :{
"type": "se_a",
"sel": [46, 92],
@@ -16,7 +17,8 @@
},
"fitting_net": {
"type": "polar",
"pol_type": [0],
"sel_type": [0],
"fit_diag": true,
"neuron": [100, 100, 100],
"resnet_dt": true,
"seed": 1,
@@ -27,7 +29,7 @@

"learning_rate" :{
"type": "exp",
"start_lr": 0.001,
"start_lr": 0.01,
"decay_steps": 5000,
"decay_rate": 0.95,
"_comment": "that's all"
1 change: 1 addition & 0 deletions examples/water/train/wannier.json
@@ -3,6 +3,7 @@
"_comment": " model parameters",
"model":{
"type_map": ["O", "H"],
"data_stat_nbatch": 1,
"descriptor": {
"type": "loc_frame",
"sel_a": [16, 32],
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,3 +1,3 @@
[build-system]
requires = ["setuptools", "wheel", "scikit-build", "cmake", "ninja"]
requires = ["setuptools", "wheel", "scikit-build", "cmake", "ninja", "m2r"]

15 changes: 14 additions & 1 deletion source/CMakeLists.txt
@@ -105,7 +105,11 @@ endif ()

# define USE_CUDA_TOOLKIT
if (DEFINED USE_CUDA_TOOLKIT)
find_package(CUDA REQUIRED)
if (USE_CUDA_TOOLKIT)
find_package(CUDA REQUIRED)
else()
message(STATUS "Will not build nv GPU support")
endif()
else()
find_package(CUDA QUIET)
if (CUDA_FOUND)
@@ -120,6 +124,15 @@ if (USE_CUDA_TOOLKIT)
add_definitions("-DUSE_CUDA_TOOLKIT")
endif()

# define USE_TTM
if (NOT DEFINED USE_TTM)
set(USE_TTM FALSE)
endif (NOT DEFINED USE_TTM)
if (USE_TTM)
message(STATUS "Use TTM")
set(TTM_DEF "-DUSE_TTM")
endif (USE_TTM)
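With the flag defined above, TTM support would presumably be switched on at configure time; the build directory below is an assumption based on a typical out-of-source CMake workflow:
```bash
cd source/build
cmake -DUSE_TTM=TRUE ..
```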

# define build type
if ((NOT DEFINED CMAKE_BUILD_TYPE) OR CMAKE_BUILD_TYPE STREQUAL "")
set (CMAKE_BUILD_TYPE release)
31 changes: 24 additions & 7 deletions source/lib/include/NNPInter.h
@@ -64,7 +64,8 @@ class NNPInter
const vector<int> & atype,
const vector<VALUETYPE> & box,
const int nghost = 0,
const vector<VALUETYPE> fparam = vector<VALUETYPE>());
const vector<VALUETYPE> & fparam = vector<VALUETYPE>(),
const vector<VALUETYPE> & aparam = vector<VALUETYPE>());
void compute (ENERGYTYPE & ener,
vector<VALUETYPE> & force,
vector<VALUETYPE> & virial,
@@ -73,7 +74,8 @@ class NNPInter
const vector<VALUETYPE> & box,
const int nghost,
const LammpsNeighborList & lmp_list,
const vector<VALUETYPE> fparam = vector<VALUETYPE>());
const vector<VALUETYPE> & fparam = vector<VALUETYPE>(),
const vector<VALUETYPE> & aparam = vector<VALUETYPE>());
void compute (ENERGYTYPE & ener,
vector<VALUETYPE> & force,
vector<VALUETYPE> & virial,
@@ -82,7 +84,8 @@ class NNPInter
const vector<VALUETYPE> & coord,
const vector<int> & atype,
const vector<VALUETYPE> & box,
const vector<VALUETYPE> fparam = vector<VALUETYPE>());
const vector<VALUETYPE> & fparam = vector<VALUETYPE>(),
const vector<VALUETYPE> & aparam = vector<VALUETYPE>());
void compute (ENERGYTYPE & ener,
vector<VALUETYPE> & force,
vector<VALUETYPE> & virial,
@@ -93,10 +96,12 @@ class NNPInter
const vector<VALUETYPE> & box,
const int nghost,
const LammpsNeighborList & lmp_list,
const vector<VALUETYPE> fparam = vector<VALUETYPE>());
const vector<VALUETYPE> & fparam = vector<VALUETYPE>(),
const vector<VALUETYPE> & aparam = vector<VALUETYPE>());
VALUETYPE cutoff () const {assert(inited); return rcut;};
int numb_types () const {assert(inited); return ntypes;};
int dim_fparam () const {assert(inited); return dfparam;};
int dim_aparam () const {assert(inited); return daparam;};
private:
Session* session;
int num_intra_nthreads, num_inter_nthreads;
@@ -109,6 +114,10 @@ class NNPInter
VALUETYPE cell_size;
int ntypes;
int dfparam;
int daparam;
void validate_fparam_aparam(const int & nloc,
const vector<VALUETYPE> &fparam,
const vector<VALUETYPE> &aparam)const ;
};

class NNPInterModelDevi
@@ -125,7 +134,8 @@ class NNPInterModelDevi
const vector<VALUETYPE> & coord,
const vector<int> & atype,
const vector<VALUETYPE> & box,
const vector<VALUETYPE> fparam = vector<VALUETYPE>());
const vector<VALUETYPE> & fparam = vector<VALUETYPE>(),
const vector<VALUETYPE> & aparam = vector<VALUETYPE>());
void compute (vector<ENERGYTYPE> & all_ener,
vector<vector<VALUETYPE> > & all_force,
vector<vector<VALUETYPE> > & all_virial,
@@ -134,7 +144,8 @@ class NNPInterModelDevi
const vector<VALUETYPE> & box,
const int nghost,
const LammpsNeighborList & lmp_list,
const vector<VALUETYPE> fparam = vector<VALUETYPE>());
const vector<VALUETYPE> & fparam = vector<VALUETYPE>(),
const vector<VALUETYPE> & aparam = vector<VALUETYPE>());
void compute (vector<ENERGYTYPE> & all_ener,
vector<vector<VALUETYPE> > & all_force,
vector<vector<VALUETYPE> > & all_virial,
@@ -145,10 +156,12 @@ class NNPInterModelDevi
const vector<VALUETYPE> & box,
const int nghost,
const LammpsNeighborList & lmp_list,
const vector<VALUETYPE> fparam = vector<VALUETYPE>());
const vector<VALUETYPE> & fparam = vector<VALUETYPE>(),
const vector<VALUETYPE> & aparam = vector<VALUETYPE>());
VALUETYPE cutoff () const {assert(inited); return rcut;};
int numb_types () const {assert(inited); return ntypes;};
int dim_fparam () const {assert(inited); return dfparam;};
int dim_aparam () const {assert(inited); return daparam;};
#ifndef HIGH_PREC
void compute_avg (ENERGYTYPE & dener,
const vector<ENERGYTYPE > & all_energy);
@@ -176,6 +189,10 @@ class NNPInterModelDevi
VALUETYPE cell_size;
int ntypes;
int dfparam;
int daparam;
void validate_fparam_aparam(const int & nloc,
const vector<VALUETYPE> &fparam,
const vector<VALUETYPE> &aparam)const ;
};
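The newly declared `validate_fparam_aparam` is not defined in this header. A plausible sketch of such a check, assuming `fparam` carries `dfparam` values per frame and `aparam` carries `daparam` values per local atom; the function name and the return-by-bool convention are illustrations, not the actual implementation, which may report errors differently:

```cpp
#include <vector>

// Sketch: check that user-supplied frame parameters (fparam) and atomic
// parameters (aparam) match the dimensions expected by the model.
// nloc: number of local atoms; dfparam/daparam: model parameter dimensions.
bool check_fparam_aparam(int nloc, int dfparam, int daparam,
                         const std::vector<double>& fparam,
                         const std::vector<double>& aparam)
{
  // fparam, if given, must provide exactly dfparam values for the frame
  if (!fparam.empty() && (int)fparam.size() != dfparam) return false;
  // aparam, if given, must provide daparam values for each local atom
  if (!aparam.empty() && (int)aparam.size() != daparam * nloc) return false;
  return true;
}
```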


0 comments on commit d735223