diff --git a/.github/workflows/ci-pypi-deploy.yml b/.github/workflows/ci-pypi-deploy.yml
new file mode 100644
index 000000000..926f8d6ff
--- /dev/null
+++ b/.github/workflows/ci-pypi-deploy.yml
@@ -0,0 +1,24 @@
+name: package-release
+
+on:
+  workflow_dispatch:
+  pull_request:
+  push:
+    branches:
+      - main
+  release:
+    types:
+      - published
+
+jobs:
+  build:
+    name: build and upload release to pypi
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+      - uses: casperdcl/deploy-pypi@v2
+        with:
+          password: ${{ secrets.PYPI_TOKEN }}
+          pip: wheel -w dist/ --no-deps .
+          upload: ${{ github.event_name == 'release' && github.event.action == 'published' }}
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6d36518c8..3fad868e9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -17,6 +17,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Add option to clamp output prediction using limits specified in config file [\#92](https://github.com/mllam/neural-lam/pull/92) @SimonKamuk
 
+- Add publication of releases to pypi.org. [\#71](https://github.com/mllam/neural-lam/pull/71) @leifdenby, @observingClouds
+
 ### Fixed
 
 - Only print on rank 0 to avoid duplicates of all print statements. [\#103](https://github.com/mllam/neural-lam/pull/103) @simonkamuk @sadamov
diff --git a/README.md b/README.md
index cf0cacff8..d755164e2 100644
--- a/README.md
+++ b/README.md
@@ -79,7 +79,15 @@ expects the most recent version of CUDA on your system. We cover all the
 installation options in our [github actions ci/cd setup](.github/workflows/)
 which you can use as a reference.
 
-## Using `pdm`
+### From pypi.org
+
+```
+python -m pip install neural_lam
+```
+
+### From source
+
+#### Using `pdm`
 
 1. Clone this repository and navigate to the root directory.
 2. Install `pdm` if you don't have it installed on your system (either with `pip install pdm` or [following the install instructions](https://pdm-project.org/latest/#installation)).
@@ -88,7 +96,7 @@ setup](.github/workflows/) which you can use as a reference.
 4. Install a specific version of `torch` with `pdm run python -m pip install torch --index-url https://download.pytorch.org/whl/cpu` for a CPU-only version or `pdm run python -m pip install torch --index-url https://download.pytorch.org/whl/cu111` for CUDA 11.1 support (you can find the correct URL for the variant you want on the [PyTorch webpage](https://pytorch.org/get-started/locally/)).
 5. Install the dependencies with `pdm install` (by default this includes the base dependencies). If you will be developing `neural-lam` we recommend installing the development dependencies with `pdm install --group dev`. By default `pdm` installs the `neural-lam` package in editable mode, so you can make changes to the code and see the effects immediately.
 
-## Using `pip`
+#### Using `pip`
 
 1. Clone this repository and navigate to the root directory.
 > If you are happy using the latest version of `torch` with GPU support (expecting that the latest version of CUDA is installed on your system) you can skip to step 3.
diff --git a/pyproject.toml b/pyproject.toml
index ab33a5a02..b700b4ce7 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -10,6 +10,7 @@ authors = [
     { name = "Kasper Hintz", email = "kah@dmi.dk" },
     { name = "Erik Larsson", email = "erik.larsson@liu.se" },
 ]
+readme = "README.md"
 
 # PEP 621 project metadata
 # See https://www.python.org/dev/peps/pep-0621/
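A note on the workflow's `upload` expression: the job runs (and builds the wheel) on pushes to `main`, on pull requests, and on releases, but only uploads to PyPI when the triggering event is a published release. A minimal Python sketch of that gate, for illustration only (the function name is hypothetical, not part of the workflow or package):

```python
from typing import Optional


def should_upload(event_name: str, event_action: Optional[str]) -> bool:
    """Mirror of the workflow expression:
    ${{ github.event_name == 'release' && github.event.action == 'published' }}
    """
    return event_name == "release" and event_action == "published"


# Pushes to main and pull requests build the wheel but never upload it
print(should_upload("push", None))              # False
print(should_upload("pull_request", "opened"))  # False
# Only a published release triggers the PyPI upload
print(should_upload("release", "published"))    # True
```

This way every CI run exercises the packaging step, while the credentials in `secrets.PYPI_TOKEN` are only used for actual releases.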