diff --git a/.github/workflows/common-workflow.yml b/.github/workflows/common-workflow.yml index 3ac9f50c2..e64bc5c27 100644 --- a/.github/workflows/common-workflow.yml +++ b/.github/workflows/common-workflow.yml @@ -109,7 +109,9 @@ jobs: ${{ inputs.build_command }} - name: Compile ISSM - run: make -j4 install + run: | + make -j4 + make install - name: Compress ISSM artifact run: | diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 000000000..f2daf625d --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,114 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## What is ISSM + +ISSM (Ice-sheet and Sea-level System Model) is a large-scale thermo-mechanical 2D/3D parallelized multi-purpose finite-element software for ice sheet and sea-level modeling. It is written in C++ (computational core) with MATLAB, Python, and JavaScript interfaces. + +## Environment Setup + +Before building or running anything, set `ISSM_DIR` in your shell profile (`.bashrc`/`.zshrc`) pointing to the repository root, then source the environment script: + +```sh +export ISSM_DIR=/path/to/ISSM +source $ISSM_DIR/etc/environment.sh +``` + +## Build + +External packages must be installed before configuring ISSM. The minimum required set for a basic build: + +```sh +cd $ISSM_DIR/externalpackages/triangle && ./install-linux.sh # or install-mac.sh +cd $ISSM_DIR/externalpackages/m1qn3 && ./install-linux.sh +cd $ISSM_DIR/externalpackages/petsc && ./install-3.22-linux.sh +``` + +Then configure and build: + +```sh +source $ISSM_DIR/etc/environment.sh +autoreconf -ivf +./configure.sh # local config script (adjust paths as needed), or run ./configure directly +make -j$(nproc) +make install +``` + +Key `configure` flags: `--prefix=$ISSM_DIR`, `--with-matlab-dir`, `--with-python`, `--with-petsc-dir`, `--with-triangle-dir`, `--with-m1qn3-dir`, `--enable-debugging`. + +## Running Tests + +Tests live in `test/NightlyRun/`. 
Each test is a numbered script (`test101.m` / `test101.py`). + +**Python** (from `test/NightlyRun/`): +```sh +export PYTHONPATH="$ISSM_DIR/src/m/dev:$PYTHONPATH" +export PYTHONSTARTUP="$ISSM_DIR/src/m/dev/devpath.py" +cd $ISSM_DIR/test/NightlyRun +./runme.py                  # run all nightly tests +./runme.py -i 101 102      # run specific tests by ID +./runme.py -i SquareShelf  # run by (partial) name +./runme.py --benchmark nightly  # benchmark filter +``` + +**MATLAB** (from within MATLAB): +```matlab +cd([getenv('ISSM_DIR') '/test/NightlyRun']) +addpath([getenv('ISSM_DIR') '/src/m/dev']); devpath; +runme                                    % run all +runme('id', [101 102])                   % run specific tests +runme('id', 102, 'procedure', 'update')  % update reference archive (developers only) +``` + +To update a test's reference archive (after an intentional result change), use `procedure='update'` (MATLAB) or `--procedure update` (Python). + +## Code Architecture + +### Dual-layer design + +ISSM has two layers that work together: + +1. **High-level interfaces** (`src/m/`) — MATLAB (`.m`), Python (`.py`), and JavaScript (`.js`) code for building and parameterizing models, running simulations, and post-processing results. The key object is `model` (defined in `src/m/classes/model.m` / `model.py`), which holds all simulation fields as properties (mesh, geometry, materials, boundary conditions, solver settings, results, etc.). + +2. **C++ computational core** (`src/c/`) — compiled finite-element parallel engine that does the actual solving. + +The high-level interface generates input files (`.bin`, `.queue`, and `.toolkits`) that are read by the computational core. In turn, the simulation results are saved in an `.outbin` file that is read by the high-level interface and added to the model (stored in `md.results`). + +### Typical model workflow + +``` +triangle/mesh → setmask → parameterize → setflowequation → solve → results +``` + +Each step corresponds to functions in `src/m/parameterization/` and `src/m/solve/`.
`parameterize()` runs a user-supplied `.par` file that fills the `model` object fields. `solve()` marshals model data to binary, calls the C++ executable (`bin/issm.exe`), and loads results back into `md.results`. + +### C++ core layout (`src/c/`) + +- **`analyses/`** — One `*Analysis` class per physics type (e.g., `StressbalanceAnalysis`, `ThermalAnalysis`). Each implements the abstract `Analysis` interface: element matrix/vector assembly (`CreateKMatrix`, `CreatePVector`), node/constraint creation, solution update. +- **`cores/`** — Top-level solution entry points (e.g., `stressbalance_core.cpp`, `transient_core.cpp`). These orchestrate which analyses to run and in what order. +- **`solutionsequences/`** — Linear/nonlinear/adjoint solvers that call PETSc (via `toolkits/`). +- **`classes/`** — C++ representations of FEM objects: `Elements/` (Tria, Penta, etc.), `Nodes/`, `Constraints/`, `Loads/`, `Inputs/`, `Params/`, `Materials/`. +- **`modules/`** — Compiled callable modules exposed as MEX/Python wrappers (e.g., mesh generation, interpolation, partitioning). +- **`toolkits/`** — Abstraction layer over PETSc, MPI, MUMPS, METIS for linear algebra and distributed computing. +- **`datastructures/`** — `DataSet` container and `Object` base class used throughout the core. + +### Wrappers (`src/wrappers/`) + +Glue code that compiles C++ modules as shared libraries loadable from MATLAB (`*_matlab.la`) and Python (`*_python.la`). The `io/` subdirectory handles binary serialization of the `model` object (marshalling) for communication between the interface and the executable. + +### External packages (`externalpackages/`) + +Each subdirectory has its own `install-linux.sh` / `install-mac.sh` / etc. ISSM only needs a handful of external packages installed depending on the desired configuration. 
The key dependencies are: +- **PETSc** (includes MPI/MPICH, BLAS/LAPACK, MUMPS, METIS/ParMETIS, ScaLAPACK) +- **Triangle** (mesh generation) +- **m1qn3** (L-BFGS optimizer for inversions) + +Some optional packages can be useful depending on the application: +- **Dakota** (UQ/sampling) +- **CoDiPack** (automatic differentiation) + +### Path setup for interfaces + +- **Python**: `src/m/dev/devpath.py` walks `src/m/` and adds all directories containing `.py` files to `sys.path`, plus `$ISSM_DIR/lib` and `$ISSM_DIR/src/wrappers/python/.libs`. +- **MATLAB**: `src/m/dev/devpath.m` does the equivalent using `addpath` recursively. diff --git a/configure.ac b/configure.ac index e077f5f84..2e64d6d98 100644 --- a/configure.ac +++ b/configure.ac @@ -1,13 +1,13 @@ #Initializing configure -AC_INIT([Ice-sheet and Sea-level System Model (ISSM)],[2026.1],[https://github.com/ISSMteam/ISSM/],[issm],[https://issmteam.github.io/ISSM-Documentation/]) +AC_INIT([Ice-sheet and Sea-level System Model (ISSM)],[2026.2],[https://github.com/ISSMteam/ISSM/],[issm],[https://issmteam.github.io/ISSM-Documentation/]) AC_CONFIG_AUX_DIR([./aux-config]) # Put config files in aux-config AC_CONFIG_MACRO_DIR([m4]) # m4 macros are located in m4 m4_include([m4/issm_options.m4]) #print header AC_MSG_NOTICE(============================================================================) -AC_MSG_NOTICE(= AC_PACKAGE_NAME AC_PACKAGE_VERSION configuration =) +AC_MSG_NOTICE(= AC_PACKAGE_NAME AC_PACKAGE_VERSION configuration                              =) AC_MSG_NOTICE(============================================================================) #Determine System type and OS diff --git a/externalpackages/autotools/install-mac.sh b/externalpackages/autotools/install-mac.sh index fa7ec10da..11f013838 100755 --- a/externalpackages/autotools/install-mac.sh +++ b/externalpackages/autotools/install-mac.sh @@ -4,10 +4,10 @@ set -eu ## Constants # -AUTOCONF_VER="2.69" -AUTOMAKE_VER="1.16.1" -LIBTOOL_VER="2.4.6" -M4_VER="1.4.19" +AUTOCONF_VER="2.73"
+AUTOMAKE_VER="1.18.1" +LIBTOOL_VER="2.5.4" +M4_VER="1.4.21" PREFIX="${ISSM_DIR}/externalpackages/autotools/install" # Set to location where external package should be installed diff --git a/externalpackages/chaco/install-linux.sh b/externalpackages/chaco/install-linux.sh index 23e455763..dbe850dbc 100755 --- a/externalpackages/chaco/install-linux.sh +++ b/externalpackages/chaco/install-linux.sh @@ -18,7 +18,7 @@ mkdir -p ${PREFIX} src # Download source $ISSM_DIR/scripts/DownloadExternalPackage.sh "https://github.com/sandialabs/Chaco/raw/refs/heads/main/Chaco-${VER}.tar.gz" "Chaco-${VER}.tar.gz" -$ISSM_DIR/scripts/DownloadExternalPackage.sh "https://github.com/sandialabs/Chaco/blob/main/chaco_user_guide.pdf" "chaco_user_guide.pdf" +#$ISSM_DIR/scripts/DownloadExternalPackage.sh "https://github.com/sandialabs/Chaco/blob/main/chaco_user_guide.pdf" "chaco_user_guide.pdf" # Unpack source tar -xvzf Chaco-${VER}.tar.gz diff --git a/externalpackages/chaco/install-mac.sh b/externalpackages/chaco/install-mac.sh index a08158fb5..4dd2093e8 100755 --- a/externalpackages/chaco/install-mac.sh +++ b/externalpackages/chaco/install-mac.sh @@ -18,7 +18,7 @@ mkdir -p ${PREFIX} src # Download source $ISSM_DIR/scripts/DownloadExternalPackage.sh "https://github.com/sandialabs/Chaco/raw/refs/heads/main/Chaco-${VER}.tar.gz" "Chaco-${VER}.tar.gz" -$ISSM_DIR/scripts/DownloadExternalPackage.sh "https://github.com/sandialabs/Chaco/blob/main/chaco_user_guide.pdf" "chaco_user_guide.pdf" +#$ISSM_DIR/scripts/DownloadExternalPackage.sh "https://github.com/sandialabs/Chaco/blob/main/chaco_user_guide.pdf" "chaco_user_guide.pdf" # Unpack source tar -xvzf Chaco-${VER}.tar.gz diff --git a/externalpackages/chaco/install-win-msys2-mingw.sh b/externalpackages/chaco/install-win-msys2-mingw.sh index 2978dc8b1..fa2f96a62 100755 --- a/externalpackages/chaco/install-win-msys2-mingw.sh +++ b/externalpackages/chaco/install-win-msys2-mingw.sh @@ -14,7 +14,7 @@ mkdir -p ${PREFIX} src # Download source 
$ISSM_DIR/scripts/DownloadExternalPackage.sh "https://github.com/sandialabs/Chaco/raw/refs/heads/main/Chaco-${VER}.tar.gz" "Chaco-${VER}.tar.gz" -$ISSM_DIR/scripts/DownloadExternalPackage.sh "https://github.com/sandialabs/Chaco/blob/main/chaco_user_guide.pdf" "chaco_user_guide.pdf" +#$ISSM_DIR/scripts/DownloadExternalPackage.sh "https://github.com/sandialabs/Chaco/blob/main/chaco_user_guide.pdf" "chaco_user_guide.pdf" # Unpack source tar -xvzf Chaco-${VER}.tar.gz diff --git a/externalpackages/gshhg/install.sh b/externalpackages/gshhg/install.sh index 59d059736..86d461ad7 100755 --- a/externalpackages/gshhg/install.sh +++ b/externalpackages/gshhg/install.sh @@ -4,7 +4,7 @@ set -eu ## Constants # -VER="2.3.4" +VER="2.3.7" PREFIX="${ISSM_DIR}/externalpackages/gshhg/install" # Set to location where external package should be installed @@ -13,7 +13,7 @@ rm -rf ${PREFIX} mkdir -p ${PREFIX} # Download source -$ISSM_DIR/scripts/DownloadExternalPackage.sh "https://www.ngdc.noaa.gov/mgg/shorelines/data/gshhg/oldversions/version${VER}/gshhg-gmt-${VER}.tar.gz" "gshhg-gmt-${VER}.tar.gz" +$ISSM_DIR/scripts/DownloadExternalPackage.sh "http://www.soest.hawaii.edu/pwessel/gshhg/gshhg-gmt-${VER}.tar.gz" "gshhg-gmt-${VER}.tar.gz" # Unpack source tar -zxvf gshhg-gmt-${VER}.tar.gz diff --git a/jenkins/javascript/karma/package.json b/jenkins/javascript/karma/package.json index b9ab26637..e1dc859bf 100644 --- a/jenkins/javascript/karma/package.json +++ b/jenkins/javascript/karma/package.json @@ -9,7 +9,7 @@ "karma-junit-reporter": "^1.0.0", "karma-phantomjs-launcher": "^1.0.1", "karma-requirejs": "^1.0.0", - "mathjs": "^13.0.0", + "mathjs": "^15.2.0", "phantomjs-prebuilt": "^2.1.7", "requirejs": "^2.2.0" } diff --git a/m4/analyses.m4 b/m4/analyses.m4 index fb85dbdc2..8e633b94c 100644 --- a/m4/analyses.m4 +++ b/m4/analyses.m4 @@ -387,6 +387,20 @@ fi AM_CONDITIONAL([HYDROLOGYARMAPW], [test x$HAVE_HYDROLOGYARMAPW = xyes]) AC_MSG_RESULT($HAVE_HYDROLOGYARMAPW) dnl }}} +dnl 
with-HydrologyPrescribe{{{ +AC_ARG_WITH([HydrologyPrescribe], + AS_HELP_STRING([--with-HydrologyPrescribe = YES], [compile with HydrologyPrescribe capabilities (default is yes)]), + [HYDROLOGYPRESCRIBE=$withval],[HYDROLOGYPRESCRIBE=yes]) +AC_MSG_CHECKING(for HydrologyPrescribe capability compilation) + +HAVE_HYDROLOGYPRESCRIBE=no +if test "x$HYDROLOGYPRESCRIBE" = "xyes"; then + HAVE_HYDROLOGYPRESCRIBE=yes + AC_DEFINE([_HAVE_HYDROLOGYPRESCRIBE_],[1],[with HydrologyPrescribe capability]) +fi +AM_CONDITIONAL([HYDROLOGYPRESCRIBE], [test x$HAVE_HYDROLOGYPRESCRIBE = xyes]) +AC_MSG_RESULT($HAVE_HYDROLOGYPRESCRIBE) +dnl }}} dnl with-L2ProjectionBase{{{ AC_ARG_WITH([L2ProjectionBase], AS_HELP_STRING([--with-L2ProjectionBase = YES], [compile with L2ProjectionBase capabilities (default is yes)]), diff --git a/m4/issm_options.m4 b/m4/issm_options.m4 index 984c98995..6adef12a2 100644 --- a/m4/issm_options.m4 +++ b/m4/issm_options.m4 @@ -2341,6 +2341,32 @@ AC_DEFUN([ISSM_OPTIONS],[ AC_MSG_RESULT([${GMSH_VERSION_MAJOR}]) AC_DEFINE_UNQUOTED([_GMSH_VERSION_MAJOR_], ${GMSH_VERSION_MAJOR}, [Gmsh major version]) fi + dnl }}} + dnl PyBind11{{{ + AC_MSG_CHECKING([for pybind11]) + AC_ARG_WITH( + [pybind11-include], + AS_HELP_STRING([--with-pybind11-include=DIR], [PyBind11 include directory, necessary for emulator integration]), + [PyBind11INCL=${withval}], + [PyBind11INCL=""] + ) + AC_ARG_WITH( + [pybind11-libflags], + AS_HELP_STRING([--with-pybind11-libflags=LIBS], [PyBind11 libraries to be used, necessary for emulator integration]), + [PyBind11LIB=${withval}], + [PyBind11LIB=""] + ) + if test -z "${PyBind11INCL}"; then + HAVE_PyBind11=no + else + HAVE_PyBind11=yes + + AC_DEFINE([_HAVE_PyBind11_], [1], [with PyBind11 in ISSM src]) + AC_SUBST([PyBind11INCL]) + AC_SUBST([PyBind11LIB]) + fi + AM_CONDITIONAL([PyBind11], [test "x${HAVE_PyBind11}" == "xyes"]) + AC_MSG_RESULT([${HAVE_PyBind11}]) dnl }}} dnl Capabilities dnl with-bamg{{{ diff --git a/src/c/Makefile.am b/src/c/Makefile.am index 
48afc8a09..c88150179 100644 --- a/src/c/Makefile.am +++ b/src/c/Makefile.am @@ -1,4 +1,4 @@ -AM_CPPFLAGS = @NEOPZINCL@ @DAKOTAINCL@ @TRIANGLEINCL@ @PETSCINCL@ @AMPIINCL@ @MEDIPACKINCL@ @MPIINCL@ @PARMETISINCL@ @METISINCL@ @CHACOINCL@ @PLAPACKINCL@ @MKLINCL@ @MUMPSINCL@ @SPAIINCL@ @HYPREINCL@ @PROMETHEUSINCL@ @SUPERLUINCL@ @SPOOLESINCL@ @PASTIXINCL@ @MLINCL@ @TAOINCL@ @ADOLCINCL@ @CODIPACKINCL@ @ADJOINTPETSCINCL@ @GSLINCL@ @BOOSTINCL@ @ESMFINCL@ @PROJINCL@ @MPLAPACKINCL@ +AM_CPPFLAGS = @NEOPZINCL@ @DAKOTAINCL@ @TRIANGLEINCL@ @PETSCINCL@ @AMPIINCL@ @MEDIPACKINCL@ @MPIINCL@ @PARMETISINCL@ @METISINCL@ @CHACOINCL@ @PLAPACKINCL@ @MKLINCL@ @MUMPSINCL@ @SPAIINCL@ @HYPREINCL@ @PROMETHEUSINCL@ @SUPERLUINCL@ @SPOOLESINCL@ @PASTIXINCL@ @MLINCL@ @TAOINCL@ @ADOLCINCL@ @CODIPACKINCL@ @ADJOINTPETSCINCL@ @GSLINCL@ @BOOSTINCL@ @ESMFINCL@ @PROJINCL@ @MPLAPACKINCL@ @PyBind11INCL@ AM_FCFLAGS = @SEMICINCL@ AUTOMAKE_OPTIONS = subdir-objects @@ -182,6 +182,7 @@ issm_sources += \ ./shared/Elements/PrintArrays.cpp \ ./shared/Elements/PddSurfaceMassBalance.cpp \ ./shared/Elements/PddSurfaceMassBalanceSicopolis.cpp \ + ./shared/Elements/PddSurfaceMassBalanceFast.cpp \ ./shared/Elements/ComputeDelta18oTemperaturePrecipitation.cpp \ ./shared/Elements/ComputeMungsmTemperaturePrecipitation.cpp \ ./shared/Elements/ComputeD18OTemperaturePrecipitationFromPD.cpp \ @@ -492,6 +493,9 @@ endif if HYDROLOGYARMAPW issm_sources += ./analyses/HydrologyArmapwAnalysis.cpp endif +if HYDROLOGYPRESCRIBE +issm_sources += ./analyses/HydrologyPrescribeAnalysis.cpp +endif if L2PROJECTIONEPL issm_sources += ./analyses/L2ProjectionEPLAnalysis.cpp endif @@ -581,6 +585,11 @@ if OCEAN issm_sources += ./modules/OceanExchangeDatax/OceanExchangeDatax.cpp endif #}}} +# PyBind11{{{ +if PyBind11 +issm_sources += ./classes/Params/EmulatorParam.cpp +endif +#}}} # Sampling sources {{{ if SAMPLING issm_sources += \ @@ -860,28 +869,37 @@ LDADD += $(OSLIBS) issm_SOURCES = main/issm.cpp issm_CXXFLAGS= $(CXXFLAGS) +issm_DEPENDENCIES = 
libISSMCore.la libISSMModules.la issm_slc_SOURCES = main/issm_slc.cpp issm_slc_CXXFLAGS= $(CXXFLAGS) +issm_slc_DEPENDENCIES = libISSMCore.la libISSMModules.la if OCEAN bin_PROGRAMS += issm_ocean issm_ocean_SOURCES = main/issm_ocean.cpp issm_ocean_CXXFLAGS= $(CXXFLAGS) +issm_ocean_DEPENDENCIES = libISSMCore.la libISSMModules.la endif if KRIGING bin_PROGRAMS += kriging kriging_SOURCES = main/kriging.cpp kriging_CXXFLAGS= $(CXXFLAGS) +kriging_DEPENDENCIES = libISSMCore.la libISSMModules.la endif if ISSM_DAKOTA bin_PROGRAMS += issm_dakota issm_dakota_SOURCES = main/issm_dakota.cpp issm_dakota_CXXFLAGS= $(CXXFLAGS) +issm_dakota_DEPENDENCIES = libISSMCore.la libISSMModules.la bin_PROGRAMS += issm_post issm_post_SOURCES = main/issm_post.cpp issm_post_CXXFLAGS= $(CXXFLAGS) +issm_post_DEPENDENCIES = libISSMCore.la libISSMModules.la endif #}}} + +# Ensure install does not run before build completes +install-exec-local: $(lib_LTLIBRARIES) $(bin_PROGRAMS) diff --git a/src/c/analyses/EnthalpyAnalysis.cpp b/src/c/analyses/EnthalpyAnalysis.cpp index e0d42b80e..276111f0c 100644 --- a/src/c/analyses/EnthalpyAnalysis.cpp +++ b/src/c/analyses/EnthalpyAnalysis.cpp @@ -45,6 +45,7 @@ void EnthalpyAnalysis::CreateConstraints(Constraints* constraints,IoModel* iomod if(smb_model==SMBpddEnum) isdynamic=true; if(smb_model==SMBd18opddEnum) isdynamic=true; if(smb_model==SMBpddSicopolisEnum) isdynamic=true; + if(smb_model==SMBpddFastEnum) isdynamic=true; } /*Convert spcs from temperatures to enthalpy*/ diff --git a/src/c/analyses/EnumToAnalysis.cpp b/src/c/analyses/EnumToAnalysis.cpp index 74b624eda..76417ad56 100644 --- a/src/c/analyses/EnumToAnalysis.cpp +++ b/src/c/analyses/EnumToAnalysis.cpp @@ -94,6 +94,9 @@ Analysis* EnumToAnalysis(int analysis_enum){ #ifdef _HAVE_HYDROLOGYSHREVE_ case HydrologyShreveAnalysisEnum : return new HydrologyShreveAnalysis(); #endif + #ifdef _HAVE_HYDROLOGYPRESCRIBE_ + case HydrologyPrescribeAnalysisEnum: return new HydrologyPrescribeAnalysis(); + #endif #ifdef 
_HAVE_L2PROJECTIONBASE_ case L2ProjectionBaseAnalysisEnum : return new L2ProjectionBaseAnalysis(); #endif diff --git a/src/c/analyses/HydrologyPrescribeAnalysis.cpp b/src/c/analyses/HydrologyPrescribeAnalysis.cpp new file mode 100644 index 000000000..558997226 --- /dev/null +++ b/src/c/analyses/HydrologyPrescribeAnalysis.cpp @@ -0,0 +1,170 @@ +#include "./HydrologyPrescribeAnalysis.h" +#include "../toolkits/toolkits.h" +#include "../classes/classes.h" +#include "../shared/shared.h" +#include "../modules/modules.h" + +/*Model processing*/ +void HydrologyPrescribeAnalysis::CreateConstraints(Constraints* constraints,IoModel* iomodel){/*{{{*/ + /*Nothing to be done*/ +}/*}}}*/ +void HydrologyPrescribeAnalysis::CreateLoads(Loads* loads, IoModel* iomodel){/*{{{*/ + /*Nothing to be done*/ +}/*}}}*/ +void HydrologyPrescribeAnalysis::CreateNodes(Nodes* nodes,IoModel* iomodel,bool isamr){/*{{{*/ + + /*Fetch parameters: */ + int hydrology_model; + iomodel->FindConstant(&hydrology_model,"md.hydrology.model"); + + /*Now, do we really want Prescribe?*/ + if(hydrology_model!=HydrologyprescribeEnum) return; + + if(iomodel->domaintype==Domain3DEnum) iomodel->FetchData(2,"md.mesh.vertexonbase","md.mesh.vertexonsurface"); + ::CreateNodes(nodes,iomodel,HydrologyPrescribeAnalysisEnum,P1Enum); + iomodel->DeleteData(2,"md.mesh.vertexonbase","md.mesh.vertexonsurface"); + +}/*}}}*/ +int HydrologyPrescribeAnalysis::DofsPerNode(int** doflist,int domaintype,int approximation){/*{{{*/ + return 1; +}/*}}}*/ +void HydrologyPrescribeAnalysis::UpdateElements(Elements* elements,Inputs* inputs,IoModel* iomodel,int analysis_counter,int analysis_type){/*{{{*/ + + /*Fetch data needed: */ + int hydrology_model,frictionlaw; + iomodel->FindConstant(&hydrology_model,"md.hydrology.model"); + + /*Now, do we really want Prescribe?*/ + if(hydrology_model!=HydrologyprescribeEnum) return; + + /*Update elements: */ + int counter=0; + for(int i=0;i<iomodel->numberofelements;i++){ + if(iomodel->my_elements[i]){ + Element*
element=(Element*)elements->GetObjectByOffset(counter); + element->Update(inputs,i,iomodel,analysis_counter,analysis_type,P1Enum); + counter++; + } + } + + /*Add input to elements*/ + iomodel->FetchDataToInput(inputs,elements,"md.mask.ice_levelset",MaskIceLevelsetEnum); + iomodel->FetchDataToInput(inputs,elements,"md.mask.ocean_levelset",MaskOceanLevelsetEnum); + iomodel->FetchDataToInput(inputs,elements,"md.hydrology.head",HydrologyHeadEnum); + + iomodel->FetchDataToInput(inputs,elements,"md.geometry.thickness",ThicknessEnum); + iomodel->FetchDataToInput(inputs,elements,"md.geometry.base",BaseEnum); + + /*Initialize requested outputs in case they are not defined later for this partition*/ + iomodel->ConstantToInput(inputs,elements,0.,EffectivePressureEnum,P1Enum); +}/*}}}*/ +void HydrologyPrescribeAnalysis::UpdateParameters(Parameters* parameters,IoModel* iomodel,int solution_enum,int analysis_enum){/*{{{*/ + + /*retrieve some parameters: */ + int hydrology_model; + int numoutputs; + char** requestedoutputs = NULL; + iomodel->FindConstant(&hydrology_model,"md.hydrology.model"); + + /*Now, do we really want Prescribe?*/ + if(hydrology_model!=HydrologyprescribeEnum) return; + parameters->AddObject(new IntParam(HydrologyModelEnum,hydrology_model)); + + /*Requested outputs*/ + iomodel->FindConstant(&requestedoutputs,&numoutputs,"md.hydrology.requested_outputs"); + parameters->AddObject(new IntParam(HydrologyNumRequestedOutputsEnum,numoutputs)); + if(numoutputs)parameters->AddObject(new StringArrayParam(HydrologyRequestedOutputsEnum,requestedoutputs,numoutputs)); + iomodel->DeleteData(&requestedoutputs,numoutputs,"md.hydrology.requested_outputs"); + + /*Nothing else to add for now*/ +}/*}}}*/ + +/*Finite Element Analysis*/ +void HydrologyPrescribeAnalysis::Core(FemModel* femmodel){/*{{{*/ + _error_("not implemented"); +}/*}}}*/ +void HydrologyPrescribeAnalysis::PreCore(FemModel* femmodel){/*{{{*/ + _error_("not implemented"); +}/*}}}*/ +ElementVector* 
HydrologyPrescribeAnalysis::CreateDVector(Element* element){/*{{{*/ + _error_("not implemented"); +}/*}}}*/ +ElementMatrix* HydrologyPrescribeAnalysis::CreateJacobianMatrix(Element* element){/*{{{*/ + _error_("Not implemented"); +}/*}}}*/ +ElementMatrix* HydrologyPrescribeAnalysis::CreateKMatrix(Element* element){/*{{{*/ + _error_("not implemented"); +}/*}}}*/ +ElementVector* HydrologyPrescribeAnalysis::CreatePVector(Element* element){/*{{{*/ + _error_("not implemented"); +}/*}}}*/ +void HydrologyPrescribeAnalysis::GetSolutionFromInputs(Vector<IssmDouble>* solution,Element* element){/*{{{*/ + _error_("not implemented"); +}/*}}}*/ +void HydrologyPrescribeAnalysis::GradientJ(Vector<IssmDouble>* gradient,Element* element,int control_type,int control_interp,int control_index){/*{{{*/ + _error_("Not implemented yet"); +}/*}}}*/ +void HydrologyPrescribeAnalysis::InputUpdateFromSolution(IssmDouble* solution,Element* element){/*{{{*/ + _error_("not implemented"); +}/*}}}*/ +void HydrologyPrescribeAnalysis::UpdateConstraints(FemModel* femmodel){/*{{{*/ + _error_("not implemented"); +}/*}}}*/ + +/*Additional methods*/ +void HydrologyPrescribeAnalysis::UpdateEffectivePressure(FemModel* femmodel){/*{{{*/ + + /*Loop over each element to compute Subglacial Water Pressure at vertices*/ + for(Object* &object:femmodel->elements->objects){ + Element* element = xDynamicCast<Element*>(object); + UpdateEffectivePressure(element); + } + +}/*}}}*/ +void HydrologyPrescribeAnalysis::UpdateEffectivePressure(Element* element){/*{{{*/ + + /*Skip if water or ice shelf element*/ + if(!element->IsOnBase()) return; + + /*Intermediaries*/ + IssmDouble bed,thickness,head; + + /*Fetch number of nodes and allocate output*/ + int numnodes = element->GetNumberOfNodes(); + IssmDouble* N = xNew<IssmDouble>(numnodes); + + /*Retrieve all inputs and parameters*/ + IssmDouble g = element->FindParam(ConstantsGEnum); + IssmDouble rho_ice = element->FindParam(MaterialsRhoIceEnum); + IssmDouble rho_water = element->FindParam(MaterialsRhoFreshwaterEnum);
Input* head_input = element->GetInput(HydrologyHeadEnum); _assert_(head_input); + Input* thickness_input = element->GetInput(ThicknessEnum); _assert_(thickness_input); + Input* base_input = element->GetInput(BaseEnum); _assert_(base_input); + + Gauss* gauss=element->NewGauss(); + for(int i=0;i<numnodes;i++){ + gauss->GaussVertex(i); + + base_input->GetInputValue(&bed,gauss); + thickness_input->GetInputValue(&thickness,gauss); + head_input->GetInputValue(&head,gauss); + + N[i] = rho_ice*g*thickness - rho_water*g*(head-bed); + } + + /*Set to 0 if inactive element*/ + if(element->IsAllFloating() || !element->IsIceInElement()){ + for(int iv=0;iv<numnodes;iv++) N[iv]=0.; + element->AddInput(EffectivePressureEnum,N,P1Enum); + xDelete<IssmDouble>(N); + delete gauss; + return; + } + + /*Add effective pressure as an input*/ + element->AddInput(EffectivePressureEnum,N,P1Enum); + + /*Clean up and return*/ + xDelete<IssmDouble>(N); + delete gauss; +}/*}}}*/ + diff --git a/src/c/analyses/HydrologyPrescribeAnalysis.h b/src/c/analyses/HydrologyPrescribeAnalysis.h new file mode 100644 index 000000000..c097fb152 --- /dev/null +++ b/src/c/analyses/HydrologyPrescribeAnalysis.h @@ -0,0 +1,38 @@ +/*!
\file HydrologyPrescribeAnalysis.h + * \brief: header file for the HydrologyPrescribe analysis + */ + +#ifndef _HydrologyPrescribeAnalysis_ +#define _HydrologyPrescribeAnalysis_ + +/*Headers*/ +#include "./Analysis.h" + +class HydrologyPrescribeAnalysis: public Analysis{ + + public: + /*Model processing*/ + void CreateConstraints(Constraints* constraints,IoModel* iomodel); + void CreateLoads(Loads* loads, IoModel* iomodel); + void CreateNodes(Nodes* nodes,IoModel* iomodel,bool isamr=false); + int DofsPerNode(int** doflist,int domaintype,int approximation); + void UpdateElements(Elements* elements,Inputs* inputs,IoModel* iomodel,int analysis_counter,int analysis_type); + void UpdateParameters(Parameters* parameters,IoModel* iomodel,int solution_enum,int analysis_enum); + + /*Finite element Analysis*/ + void Core(FemModel* femmodel); + void PreCore(FemModel* femmodel); + ElementVector* CreateDVector(Element* element); + ElementMatrix* CreateJacobianMatrix(Element* element); + ElementMatrix* CreateKMatrix(Element* element); + ElementVector* CreatePVector(Element* element); + void GetSolutionFromInputs(Vector<IssmDouble>* solution,Element* element); + void GradientJ(Vector<IssmDouble>* gradient,Element* element,int control_type,int control_interp,int control_index); + void InputUpdateFromSolution(IssmDouble* solution,Element* element); + void UpdateConstraints(FemModel* femmodel); + + /*Intermediaries*/ + void UpdateEffectivePressure(FemModel* femmodel); + void UpdateEffectivePressure(Element* element); +}; +#endif diff --git a/src/c/analyses/HydrologyShaktiAnalysis.cpp b/src/c/analyses/HydrologyShaktiAnalysis.cpp index de9674395..c7c8b9d5a 100644 --- a/src/c/analyses/HydrologyShaktiAnalysis.cpp +++ b/src/c/analyses/HydrologyShaktiAnalysis.cpp @@ -136,10 +136,14 @@ void HydrologyShaktiAnalysis::UpdateElements(Elements* elements,Inputs* inputs,I /*Initialize requested outputs in case they are not defined later for this partition*/
iomodel->ConstantToInput(inputs,elements,0.,HydrologyBasalFluxEnum,P0Enum); + iomodel->ConstantToInput(inputs,elements,0.,HydrologyWaterVxEnum,P0Enum); + iomodel->ConstantToInput(inputs,elements,0.,HydrologyWaterVyEnum,P0Enum); iomodel->ConstantToInput(inputs,elements,0.,DegreeOfChannelizationEnum,P0Enum); iomodel->ConstantToInput(inputs,elements,0.,HydrologyMeltRateEnum,P0Enum); iomodel->ConstantToInput(inputs,elements,0.,HydrologyFrictionHeatEnum,P0Enum); iomodel->ConstantToInput(inputs,elements,0.,HydrologyDissipationEnum,P0Enum); + /* FIXME: 'EffectivePressure' in md.hydrology.requested_outputs sometimes causes an error indicating that ISSM cannot write EffectivePressure. Therefore, this line was added to resolve this issue. This line should be tracked to determine whether it remains necessary in the future. */ + iomodel->ConstantToInput(inputs,elements,0.,EffectivePressureEnum,P1Enum); /*Friction*/ FrictionUpdateInputs(elements, inputs, iomodel); @@ -159,6 +163,7 @@ void HydrologyShaktiAnalysis::UpdateParameters(Parameters* parameters,IoModel* i parameters->AddObject(iomodel->CopyConstantObject("md.hydrology.relaxation",HydrologyRelaxationEnum)); parameters->AddObject(iomodel->CopyConstantObject("md.hydrology.gap_height_min",HydrologyGapHeightMinEnum)); parameters->AddObject(iomodel->CopyConstantObject("md.hydrology.gap_height_max",HydrologyGapHeightMaxEnum)); + parameters->AddObject(iomodel->CopyConstantObject("md.hydrology.melt_flag",HydrologyMeltFlagEnum)); /*Requested outputs*/ iomodel->FindConstant(&requestedoutputs,&numoutputs,"md.hydrology.requested_outputs"); @@ -282,6 +287,7 @@ ElementVector* HydrologyShaktiAnalysis::CreatePVector(Element* element){/*{{{*/ IssmDouble alpha2,frictionheat; IssmDouble PMPheat,dissipation,dpressure_water[2],dbed[2]; IssmDouble* xyz_list = NULL; + int meltflag; /*Fetch number of nodes and dof for this finite element*/ int numnodes = basalelement->GetNumberOfNodes(); @@ -308,6 +314,7 @@ ElementVector* 
HydrologyShaktiAnalysis::CreatePVector(Element* element){/*{{{*/ Input* br_input = basalelement->GetInput(HydrologyBumpHeightEnum); _assert_(br_input); Input* headold_input = basalelement->GetInput(HydrologyHeadOldEnum); _assert_(headold_input); Input* storage_input = basalelement->GetInput(HydrologyStorageEnum); _assert_(storage_input); + Input* meltrate_input = basalelement->GetInput(BasalforcingsGroundediceMeltingRateEnum); _assert_(meltrate_input); /*Get conductivity from inputs*/ IssmDouble conductivity = GetConductivity(basalelement); @@ -315,6 +322,7 @@ ElementVector* HydrologyShaktiAnalysis::CreatePVector(Element* element){/*{{{*/ /*Get Params*/ IssmDouble dt; basalelement->FindParam(&dt,TimesteppingTimeStepEnum); + basalelement->FindParam(&meltflag,HydrologyMeltFlagEnum); /*Build friction basalelement, needed later: */ Friction* friction=new Friction(basalelement,2); @@ -367,7 +375,17 @@ ElementVector* HydrologyShaktiAnalysis::CreatePVector(Element* element){/*{{{*/ dpressure_water[0] = rho_water*g*(dh[0] - dbed[0]); dpressure_water[1] = rho_water*g*(dh[1] - dbed[1]); - meltrate = 1/latentheat*(G+frictionheat+rho_water*g*conductivity*(dh[0]*dh[0]+dh[1]*dh[1])); + if (meltflag == 0){ + meltrate = 1/latentheat*(G+frictionheat+rho_water*g*conductivity*(dh[0]*dh[0]+dh[1]*dh[1])); + }else if (meltflag == 1){ + meltrate_input->GetInputValue(&meltrate,gauss); + /*Unit conversion: ice m s-1 to kg m-2 s-1*/ + meltrate = meltrate*rho_ice; + /*NOTE: Add dissipation melting due to subglacial flow*/ + //meltrate += dissipation; + }else{ + _error_("Not implemented yet."); + } IssmDouble factor = Jdet*gauss->weight* (meltrate*(1/rho_water-1/rho_ice) @@ -558,12 +576,16 @@ void HydrologyShaktiAnalysis::UpdateGapHeight(Element* element){/*{{{*/ IssmDouble alpha2,frictionheat; IssmDouble* xyz_list = NULL; IssmDouble dpressure_water[3],dbed[3],PMPheat,dissipation; - IssmDouble q = 0.; - IssmDouble channelization = 0.; + IssmDouble q = 0.; + IssmDouble qx = 0.; + IssmDouble qy 
= 0.; + IssmDouble channelization = 0.; + int meltflag; /*Retrieve all inputs and parameters*/ basalelement->GetVerticesCoordinates(&xyz_list); basalelement->FindParam(&dt,TimesteppingTimeStepEnum); + basalelement->FindParam(&meltflag,HydrologyMeltFlagEnum); IssmDouble latentheat = basalelement->FindParam(MaterialsLatentheatEnum); IssmDouble g = basalelement->FindParam(ConstantsGEnum); IssmDouble rho_ice = basalelement->FindParam(MaterialsRhoIceEnum); @@ -579,6 +601,7 @@ void HydrologyShaktiAnalysis::UpdateGapHeight(Element* element){/*{{{*/ Input* n_input = basalelement->GetInput(MaterialsRheologyNEnum); _assert_(n_input); Input* lr_input = basalelement->GetInput(HydrologyBumpSpacingEnum); _assert_(lr_input); Input* br_input = basalelement->GetInput(HydrologyBumpHeightEnum); _assert_(br_input); + Input* meltrate_input = basalelement->GetInput(BasalforcingsGroundediceMeltingRateEnum); _assert_(meltrate_input); /*Get conductivity from inputs*/ IssmDouble conductivity = GetConductivity(basalelement); @@ -631,7 +654,17 @@ void HydrologyShaktiAnalysis::UpdateGapHeight(Element* element){/*{{{*/ dpressure_water[1] = rho_water*g*(dh[1] - dbed[1]); dissipation=rho_water*g*conductivity*(dh[0]*dh[0]+dh[1]*dh[1]); - meltrate = 1/latentheat*(G+frictionheat+rho_water*g*conductivity*(dh[0]*dh[0]+dh[1]*dh[1])); + if (meltflag == 0){ + meltrate = 1/latentheat*(G+frictionheat+rho_water*g*conductivity*(dh[0]*dh[0]+dh[1]*dh[1])); + }else if (meltflag == 1){ + meltrate_input->GetInputValue(&meltrate,gauss); + /*Unit conversion: ice m s-1 to kg m-2 s-1*/ + meltrate = meltrate*rho_ice; + /*NOTE: Add dissipation melting due to subglacial flow*/ + //meltrate += dissipation; + }else{ + _error_("Not implemented yet."); + } element->AddBasalInput(HydrologyMeltRateEnum,&meltrate,P0Enum); element->AddBasalInput(HydrologyFrictionHeatEnum,&frictionheat,P0Enum); @@ -647,7 +680,9 @@ void HydrologyShaktiAnalysis::UpdateGapHeight(Element* element){/*{{{*/ totalweights +=gauss->weight*Jdet; /* 
Compute basal water flux */ - q += gauss->weight*Jdet*(conductivity*sqrt(dh[0]*dh[0]+dh[1]*dh[1])); + q += gauss->weight*Jdet*(conductivity*sqrt(dh[0]*dh[0]+dh[1]*dh[1])); + qx+= -gauss->weight*Jdet*(conductivity*dh[0]); + qy+= -gauss->weight*Jdet*(conductivity*dh[1]); /* Compute "degree of channelization" (ratio of melt opening to opening by sliding) */ channelization += gauss->weight*Jdet*(meltrate/rho_ice/(meltrate/rho_ice+beta*sqrt(vx*vx+vy*vy))); @@ -663,7 +698,11 @@ void HydrologyShaktiAnalysis::UpdateGapHeight(Element* element){/*{{{*/ /*Divide by connectivity, add basal flux as an input*/ q = q/totalweights; + qx= qx/totalweights; + qy= qy/totalweights; element->AddBasalInput(HydrologyBasalFluxEnum,&q,P0Enum); + element->AddBasalInput(HydrologyWaterVxEnum,&qx,P0Enum); + element->AddBasalInput(HydrologyWaterVyEnum,&qy,P0Enum); /* Divide by connectivity, add degree of channelization as an input */ channelization = channelization/totalweights; diff --git a/src/c/analyses/MasstransportAnalysis.cpp b/src/c/analyses/MasstransportAnalysis.cpp index 56614353c..0266cf39b 100644 --- a/src/c/analyses/MasstransportAnalysis.cpp +++ b/src/c/analyses/MasstransportAnalysis.cpp @@ -239,6 +239,42 @@ void MasstransportAnalysis::UpdateElements(Elements* elements,Inputs* inputs,IoM iomodel->FetchDataToInput(inputs,elements,"md.basalforcings.basin_id",BasalforcingsLinearBasinIdEnum); if(isstochastic) iomodel->FetchDataToInput(inputs,elements,"md.stochasticforcing.default_id",StochasticForcingDefaultIdEnum); break; + case BasalforcingsIsmip7Enum:{ + /*TODO: Update for ISMIP7*/ + iomodel->FetchDataToInput(inputs,elements,"md.basalforcings.coriolis_f",BasalforcingsCoriolisFEnum); + + /*Deal with tf...*/ + IssmDouble* array2d = NULL; int M,N,K; IssmDouble* temp = NULL; + iomodel->FetchData(&temp,&M,&K,"md.basalforcings.tf_depths"); xDelete(temp); + _assert_(M==1); _assert_(K>=1); + for(int kk=0;kkFetchData(&array2d, &M, &N, kk, "md.basalforcings.tf"); + if(!array2d) 
_error_("md.basalforcings.tf not found in binary file"); + for(Object* & object : elements->objects){ + Element* element = xDynamicCast(object); + if(iomodel->domaintype!=Domain2DhorizontalEnum && !element->IsOnBase()) continue; + element->DatasetInputAdd(BasalforcingsIsmip7TfEnum,array2d,inputs,iomodel,M,N,1,BasalforcingsIsmip7TfEnum,kk); + } + xDelete(array2d); + } + + /*Deal with salinity...*/ + for(int kk=0;kkFetchData(&array2d, &M, &N, kk, "md.basalforcings.salinity"); + if(!array2d) _error_("md.basalforcings.salinity not found in binary file"); + for(Object* & object : elements->objects){ + Element* element = xDynamicCast(object); + if(iomodel->domaintype!=Domain2DhorizontalEnum && !element->IsOnBase()) continue; + element->DatasetInputAdd(BasalforcingsIsmip7SalinityEnum,array2d,inputs,iomodel,M,N,1,BasalforcingsIsmip7SalinityEnum,kk); + } + xDelete(array2d); + } + } + break; default: _error_("Basal forcing model "<FindConstant(&ismappedforcing,"md.smb.ismappedforcing"); + iomodel->FindConstant(&isprecipforcingremapped,"md.smb.isprecipforcingremapped"); if (!ismappedforcing){ iomodel->FetchDataToInput(inputs,elements,"md.smb.Ta",SmbTaEnum); iomodel->FetchDataToInput(inputs,elements,"md.smb.V",SmbVEnum); @@ -65,6 +67,9 @@ void SmbAnalysis::UpdateElements(Elements* elements,Inputs* inputs,IoModel* iomo iomodel->FetchDataToInput(inputs,elements,"md.smb.Vz",SmbVzEnum); } else { iomodel->FetchDataToInput(inputs,elements,"md.smb.mappedforcingpoint",SmbMappedforcingpointEnum); + if(isprecipforcingremapped){ + iomodel->FetchDataToInput(inputs,elements,"md.smb.mappedforcingprecipscaling",SmbMappedforcingprecipscalingEnum); + } } iomodel->FetchDataToInput(inputs,elements,"md.smb.zTop",SmbZTopEnum); @@ -123,6 +128,16 @@ void SmbAnalysis::UpdateElements(Elements* elements,Inputs* inputs,IoModel* iomo iomodel->FetchDataToDatasetInput(inputs,elements,"md.smb.monthlytemperatures",SmbMonthlytemperaturesEnum); 
iomodel->FetchDataToDatasetInput(inputs,elements,"md.smb.precipitation",SmbPrecipitationEnum); break; + case SMBpddFastEnum: + iomodel->FetchDataToInput(inputs,elements,"md.smb.s0p",SmbS0pEnum); + iomodel->FetchDataToInput(inputs,elements,"md.smb.s0t",SmbS0tEnum); + iomodel->FindConstant(&isfirnwarming,"md.smb.isfirnwarming"); + iomodel->FetchDataToInput(inputs,elements,"md.smb.smb_corr",SmbSmbCorrEnum); + iomodel->FetchDataToInput(inputs,elements,"md.smb.precipitation_anomaly",SmbPrecipitationsAnomalyEnum); + iomodel->FetchDataToInput(inputs,elements,"md.smb.temperature_anomaly",SmbTemperaturesAnomalyEnum); + iomodel->FetchDataToDatasetInput(inputs,elements,"md.smb.monthlytemperatures",SmbMonthlytemperaturesEnum); + iomodel->FetchDataToDatasetInput(inputs,elements,"md.smb.precipitation",SmbPrecipitationEnum); + break; case SMBpddGCMEnum: iomodel->FetchDataToInput(inputs,elements,"md.smb.enhance_factor",SmbEnhanceFactorEnum); iomodel->FetchDataToInput(inputs,elements,"md.smb.lapserates",SmbGCMLapseratesEnum); @@ -341,6 +356,13 @@ void SmbAnalysis::UpdateParameters(Parameters* parameters,IoModel* iomodel,int s parameters->AddObject(iomodel->CopyConstantObject("md.smb.pdd_fac_ice",PddfacIceEnum)); parameters->AddObject(iomodel->CopyConstantObject("md.smb.pdd_fac_snow",PddfacSnowEnum)); break; + case SMBpddFastEnum: + parameters->AddObject(iomodel->CopyConstantObject("md.smb.isfirnwarming",SmbIsfirnwarmingEnum)); + parameters->AddObject(iomodel->CopyConstantObject("md.smb.desfac",SmbDesfacEnum)); + parameters->AddObject(iomodel->CopyConstantObject("md.smb.rlaps",SmbRlapsEnum)); + parameters->AddObject(iomodel->CopyConstantObject("md.smb.pdd_fac_ice",PddfacIceEnum)); + parameters->AddObject(iomodel->CopyConstantObject("md.smb.pdd_fac_snow",PddfacSnowEnum)); + break; case SMBdebrisEvattEnum: parameters->AddObject(iomodel->CopyConstantObject("md.smb.qlaps",SmbDesfacEnum)); parameters->AddObject(iomodel->CopyConstantObject("md.smb.rlaps",SmbRlapsEnum)); @@ -675,6 +697,10 
@@ void SmbAnalysis::Core(FemModel* femmodel){/*{{{*/ if(VerboseSolution()) _printf0_(" call SICOPOLIS positive degree day module\n"); PositiveDegreeDaySicopolisx(femmodel); break; + case SMBpddFastEnum: + if(VerboseSolution()) _printf0_(" call Fast positive degree day module\n"); + PositiveDegreeDayFastx(femmodel); + break; case SMBpddGCMEnum: if(VerboseSolution()) _printf0_(" call positive degree day module based on downsacling GCM data\n"); PositiveDegreeDayGCMx(femmodel); diff --git a/src/c/analyses/ThermalAnalysis.cpp b/src/c/analyses/ThermalAnalysis.cpp index 17e55acc5..12405777d 100644 --- a/src/c/analyses/ThermalAnalysis.cpp +++ b/src/c/analyses/ThermalAnalysis.cpp @@ -35,6 +35,7 @@ void ThermalAnalysis::CreateConstraints(Constraints* constraints,IoModel* iomode if(smb_model==SMBpddEnum) isdynamic=true; if(smb_model==SMBd18opddEnum) isdynamic=true; if(smb_model==SMBpddSicopolisEnum) isdynamic=true; + if(smb_model==SMBpddFastEnum) isdynamic=true; } else{ _error_("Solution "<solution_enum)<<" not supported yet"); diff --git a/src/c/analyses/analyses.h b/src/c/analyses/analyses.h index a3c3f2db1..990915c37 100644 --- a/src/c/analyses/analyses.h +++ b/src/c/analyses/analyses.h @@ -35,6 +35,7 @@ #include "./HydrologyGlaDSAnalysis.h" #include "./HydrologyShaktiAnalysis.h" #include "./HydrologyPismAnalysis.h" +#include "./HydrologyPrescribeAnalysis.h" #include "./LevelsetAnalysis.h" #include "./MasstransportAnalysis.h" #include "./MmemasstransportAnalysis.h" diff --git a/src/c/classes/BarystaticContributions.cpp b/src/c/classes/BarystaticContributions.cpp index 6cbd5a2d4..194d2e47c 100644 --- a/src/c/classes/BarystaticContributions.cpp +++ b/src/c/classes/BarystaticContributions.cpp @@ -17,38 +17,52 @@ /*Constructors and destructors:*/ BarystaticContributions::BarystaticContributions(IoModel* iomodel ){ /*{{{*/ - int nel; + /*Intermediaries*/ + int nel; - iomodel->FetchData(&nice,"md.solidearth.npartice"); - if(nice){ - 
iomodel->FetchData(&pice,&nel,NULL,"md.solidearth.partitionice"); - ice=new Vector<IssmDouble>(nice); - cumice=new Vector<IssmDouble>(nice); cumice->Set(0); cumice->Assemble(); + /*Allocate all pointers to NULL*/ + this->ice = NULL; //contributions to every ice partition (size nice x 1) + this->cumice = NULL; //cumulated contributions to every ice partition + this->pice = NULL; //ice partition (nel) + + this->hydro = NULL; //contributions to every hydro partition (size nhydro x 1) + this->cumhydro = NULL; //cumulated contributions to every hydro partition + this->phydro = NULL; //hydro partition (nel) + + this->ocean = NULL; //contributions to every ocean partition (size nocean x 1) + this->cumocean = NULL; //cumulated contributions to every ocean partition + this->pocean = NULL; //ocean partition (nel) + + iomodel->FetchData(&this->nice,"md.solidearth.npartice"); + if(this->nice){ + iomodel->FetchData(&this->pice,&nel,NULL,"md.solidearth.partitionice"); + this->ice=new Vector<IssmDouble>(nice); + this->cumice=new Vector<IssmDouble>(nice); this->cumice->Set(0); this->cumice->Assemble(); } else{ - ice=new Vector<IssmDouble>(1); - cumice=new Vector<IssmDouble>(1); + this->ice=new Vector<IssmDouble>(1); + this->cumice=new Vector<IssmDouble>(1); } - iomodel->FetchData(&nhydro,"md.solidearth.nparthydro"); - if(nhydro){ - iomodel->FetchData(&phydro,&nel,NULL,"md.solidearth.partitionhydro"); - hydro=new Vector<IssmDouble>(nhydro); - cumhydro=new Vector<IssmDouble>(nhydro); cumhydro->Set(0); cumhydro->Assemble(); + iomodel->FetchData(&this->nhydro,"md.solidearth.nparthydro"); + if(this->nhydro){ + iomodel->FetchData(&this->phydro,&nel,NULL,"md.solidearth.partitionhydro"); + this->hydro=new Vector<IssmDouble>(this->nhydro); + this->cumhydro=new Vector<IssmDouble>(this->nhydro); this->cumhydro->Set(0); this->cumhydro->Assemble(); } else{ - hydro=new Vector<IssmDouble>(1); - cumhydro=new Vector<IssmDouble>(1); + this->hydro=new Vector<IssmDouble>(1); + this->cumhydro=new Vector<IssmDouble>(1); } - iomodel->FetchData(&nocean,"md.solidearth.npartocean"); - if(nocean){ - iomodel->FetchData(&pocean,&nel,NULL,"md.solidearth.partitionocean"); - ocean=new Vector<IssmDouble>(nocean);
- cumocean=new Vector<IssmDouble>(nocean); cumocean->Set(0); cumocean->Assemble(); + iomodel->FetchData(&this->nocean,"md.solidearth.npartocean"); + if(this->nocean){ + iomodel->FetchData(&this->pocean,&nel,NULL,"md.solidearth.partitionocean"); + this->ocean=new Vector<IssmDouble>(this->nocean); + this->cumocean=new Vector<IssmDouble>(this->nocean); this->cumocean->Set(0); this->cumocean->Assemble(); } else{ - ocean=new Vector<IssmDouble>(1); - cumocean=new Vector<IssmDouble>(1); + this->ocean=new Vector<IssmDouble>(1); + this->cumocean=new Vector<IssmDouble>(1); } } /*}}}*/ @@ -163,7 +177,7 @@ void BarystaticContributions::Save(Results* results, Parameters* parameters, Iss ice->Sum(&sumice); hydro->Sum(&sumhydro); ocean->Sum(&sumocean); results->AddResult(new GenericExternalResult<IssmDouble>(results->Size()+1,BslcEnum,this->Total()/oceanarea/rho_water,step,time)); results->AddResult(new GenericExternalResult<IssmDouble>(results->Size()+1,BslcIceEnum,sumice/oceanarea/rho_water,step,time)); - results->AddResult(new GenericExternalResult<IssmDouble>(results->Size()+1,BslcHydroEnum,sumice/oceanarea/rho_water,step,time)); + results->AddResult(new GenericExternalResult<IssmDouble>(results->Size()+1,BslcHydroEnum,sumhydro/oceanarea/rho_water,step,time)); results->AddResult(new GenericExternalResult<IssmDouble>(results->Size()+1,BslcOceanEnum,sumocean/oceanarea/rho_water,step,time)); cumice->Sum(&sumice); cumhydro->Sum(&sumhydro); cumocean->Sum(&sumocean); diff --git a/src/c/classes/Elements/Element.cpp b/src/c/classes/Elements/Element.cpp index 2b7fcccd4..5002ff42f 100644 --- a/src/c/classes/Elements/Element.cpp +++ b/src/c/classes/Elements/Element.cpp @@ -2503,6 +2503,66 @@ void Element::Ismip6FloatingiceMeltingRate(){/*{{{*/ xDelete<IssmDouble>(mean_tf); xDelete<IssmDouble>(depths); +}/*}}}*/ +void Element::Ismip7FloatingiceMeltingRate(){/*{{{*/ + if(!this->IsIceInElement() || !this->IsAllFloating() || !this->IsOnBase()) return; + + int basinid,num_basins,M,N; + IssmDouble* xyz_list; + + IssmDouble tf,gamma0; + IssmDouble salinity; /*local salinity [psu]*/ + IssmDouble coriolis; /*Coriolis parameter*/ + IssmDouble dbase[2];
/*derivative of z_b*/ + IssmDouble theta, slope; + IssmDouble* depths = NULL; + + + /*Allocate some arrays*/ + const int numvertices = this->GetNumberOfVertices(); + IssmDouble basalmeltrate[MAXVERTICES]; + + /*Get variables*/ + this->GetVerticesCoordinates(&xyz_list); + + IssmDouble rhoi = this->FindParam(MaterialsRhoIceEnum); + IssmDouble rhow = this->FindParam(MaterialsRhoSeawaterEnum); + IssmDouble lf = this->FindParam(MaterialsLatentheatEnum); + IssmDouble cp = this->FindParam(MaterialsMixedLayerCapacityEnum); + IssmDouble betaS = 7.86e-4; /*Salinity expansion coefficient [psu-1]*/ + IssmDouble g = this->FindParam(ConstantsGEnum); + + /* Get parameters and inputs */ + this->parameters->FindParam(&gamma0,BasalforcingsIsmip7GammaEnum); + + Input* base_input = this->GetInput(BaseEnum); _assert_(base_input); + Input* tf_input = this->GetInput(BasalforcingsIsmip7TfShelfEnum); _assert_(tf_input); + Input* salinity_input = this->GetInput(BasalforcingsIsmip7SalinityShelfEnum); _assert_(salinity_input); + Input* coriolis_input = this->GetInput(BasalforcingsCoriolisFEnum); _assert_(coriolis_input); + + /*Compute melt rate for Local and Nonlocal parameterizations*/ + Gauss* gauss=this->NewGauss(); + for(int i=0;i<numvertices;i++){ + gauss->GaussVertex(i); + + tf_input->GetInputValue(&tf,gauss); + salinity_input->GetInputValue(&salinity,gauss); + coriolis_input->GetInputValue(&coriolis,gauss); + + base_input->GetInputDerivativeValue(&dbase[0],xyz_list,gauss); + slope = sqrt(pow(dbase[0],2)+pow(dbase[1],2)); + theta = atan(slope); + + basalmeltrate[i] = gamma0*sin(theta)*(rhow/rhoi)*pow(cp/lf,2)*betaS*salinity*g/2.0/fabs(coriolis)*fabs(tf)*tf; + } + + /*Return basal melt rate*/ + this->AddInput(BasalforcingsFloatingiceMeltingRateEnum,basalmeltrate,P1DGEnum); + + /*Cleanup and return*/ + delete gauss; + xDelete<IssmDouble>(depths); + }/*}}}*/ void Element::LapseRateBasinSMB(int numelevbins, IssmDouble* lapserates, IssmDouble* elevbins,IssmDouble* refelevation){/*{{{*/ @@ -3865,6 +3925,198 @@ void
Element::PositiveDegreeDaySicopolis(bool isfirnwarming){/*{{{*/ xDelete<IssmDouble>(melt_star); } /*}}}*/ +void Element::PositiveDegreeDayFast(bool isfirnwarming){/*{{{*/ + + /* General FIXMEs: get Tmelting point, pddicefactor, pddsnowfactor, sigma from parameters/user input */ + + const int NUM_VERTICES = this->GetNumberOfVertices(); + const int NUM_VERTICES_MONTHS_PER_YEAR = NUM_VERTICES * 12; + + int i,vertexlids[MAXVERTICES]; + IssmDouble* smb=xNew<IssmDouble>(NUM_VERTICES); // surface mass balance + IssmDouble* melt=xNew<IssmDouble>(NUM_VERTICES); // melting comp. of surface mass balance + IssmDouble* accu=xNew<IssmDouble>(NUM_VERTICES); // accumulation comp. of surface mass balance + IssmDouble* melt_star=xNew<IssmDouble>(NUM_VERTICES); + IssmDouble* monthlytemperatures=xNew<IssmDouble>(NUM_VERTICES_MONTHS_PER_YEAR); + IssmDouble* monthlyprec=xNew<IssmDouble>(NUM_VERTICES_MONTHS_PER_YEAR); + IssmDouble* yearlytemperatures=xNew<IssmDouble>(NUM_VERTICES); memset(yearlytemperatures, 0., NUM_VERTICES*sizeof(IssmDouble)); + IssmDouble* s=xNew<IssmDouble>(NUM_VERTICES); // actual surface height + IssmDouble* s0p=xNew<IssmDouble>(NUM_VERTICES); // reference elevation for precip.
+ IssmDouble* s0t=xNew<IssmDouble>(NUM_VERTICES); // reference elevation for temperature + IssmDouble* smbcorr=xNew<IssmDouble>(NUM_VERTICES); // surface mass balance correction; will be added after pdd call + IssmDouble* p_ampl=xNew<IssmDouble>(NUM_VERTICES); // precip anomaly + IssmDouble* t_ampl=xNew<IssmDouble>(NUM_VERTICES); // temperature anomaly + IssmDouble rho_water,rho_ice,desfac,rlaps; + IssmDouble pdd_fac_ice,pdd_fac_snow; + IssmDouble inv_twelve=1./12.; //factor for monthly average + IssmDouble time,yts,time_yr; + + /*Get vertex Lids for later*/ + this->GetVerticesLidList(&vertexlids[0]); + + /*Get material parameters :*/ + rho_water=this->FindParam(MaterialsRhoSeawaterEnum); + rho_ice=this->FindParam(MaterialsRhoIceEnum); + + /*Get parameters for height corrections*/ + desfac=this->FindParam(SmbDesfacEnum); + rlaps=this->FindParam(SmbRlapsEnum); + + /*Get pdd melt factors*/ + pdd_fac_ice=this->FindParam(PddfacIceEnum); + pdd_fac_snow=this->FindParam(PddfacSnowEnum); + + /* Get time */ + this->parameters->FindParam(&time,TimeEnum); + this->parameters->FindParam(&yts,ConstantsYtsEnum); + time_yr=floor(time/yts)*yts; + + /* Set parameters for firnwarming */ + IssmDouble MU_0 = 9.7155; //Firn-warming correction, in (d*deg C)/(mm WE) + IssmDouble mu = MU_0*(1000.0*86400.0)*(rho_ice/rho_water); // (d*deg C)/(mm WE) --> (s*deg C)/(m IE) + + /*Get inputs*/ + DatasetInput* dinput =this->GetDatasetInput(SmbMonthlytemperaturesEnum); _assert_(dinput); + DatasetInput* dinput2=this->GetDatasetInput(SmbPrecipitationEnum); _assert_(dinput2); + + /*loop over vertices: */ + Gauss* gauss=this->NewGauss(); + for(int month=0;month<12;month++){ + + for(int iv=0;iv<NUM_VERTICES;iv++){ + gauss->GaussVertex(iv); + dinput->GetInputValue(&monthlytemperatures[iv*12+month],gauss,month); + monthlytemperatures[iv*12+month]=monthlytemperatures[iv*12+month]-273.15; // conversion from Kelvin to Celsius for PDD module + dinput2->GetInputValue(&monthlyprec[iv*12+month],gauss,month); + monthlyprec[iv*12+month]=monthlyprec[iv*12+month]*yts; + } + } + + /*Recover
info at the vertices: */ + GetInputListOnVertices(&s[0],SurfaceEnum); + GetInputListOnVertices(&s0p[0],SmbS0pEnum); + GetInputListOnVertices(&s0t[0],SmbS0tEnum); + GetInputListOnVertices(&smbcorr[0],SmbSmbCorrEnum); + GetInputListOnVertices(&t_ampl[0],SmbTemperaturesAnomalyEnum); + GetInputListOnVertices(&p_ampl[0],SmbPrecipitationsAnomalyEnum); + + /*measure the surface mass balance*/ + for (int iv = 0; iv=melt[iv]){ + yearlytemperatures[iv]= yearlytemperatures[iv]+mu*(melt_star[iv]-melt[iv]); + } + else{ + yearlytemperatures[iv]= yearlytemperatures[iv]; + } + } + if (yearlytemperatures[iv]>273.15) yearlytemperatures[iv]=273.15; + } + + switch(this->ObjectEnum()){ + case TriaEnum: + //this->AddInput(TemperatureEnum,&yearlytemperatures[0],P1Enum); + this->AddInput(TemperaturePDDEnum,&yearlytemperatures[0],P1Enum); + this->AddInput(SmbMassBalanceEnum,&smb[0],P1Enum); + this->AddInput(SmbAccumulationEnum,&accu[0],P1Enum); + this->AddInput(SmbMeltEnum,&melt[0],P1Enum); + break; + case PentaEnum: + bool isthermal; + this->parameters->FindParam(&isthermal,TransientIsthermalEnum); + if(isthermal){ + bool isenthalpy; + this->parameters->FindParam(&isenthalpy,ThermalIsenthalpyEnum); + if(IsOnSurface()){ + /*Here, we want to change the BC of the thermal model, keep + * the temperatures as they are for the base of the penta and + * use yearlytemperatures for the top*/ + + /*FIXME: look at other function Element::PositiveDegreeDay and propagate change! 
Just assert for now*/ + PentaInput* temp_input = xDynamicCast<PentaInput*>(this->GetInput(TemperatureEnum)); _assert_(temp_input); + switch(temp_input->GetInputInterpolationType()){ + case P1Enum: + temp_input->element_values[3] = yearlytemperatures[3]; + temp_input->element_values[4] = yearlytemperatures[4]; + temp_input->element_values[5] = yearlytemperatures[5]; + temp_input->SetInput(P1Enum,NUM_VERTICES,&vertexlids[0],temp_input->element_values); + break; + case P1DGEnum: + case P1xP2Enum: + case P1xP3Enum: + case P1xP4Enum: + temp_input->element_values[3] = yearlytemperatures[3]; + temp_input->element_values[4] = yearlytemperatures[4]; + temp_input->element_values[5] = yearlytemperatures[5]; + temp_input->SetInput(temp_input->GetInputInterpolationType(),this->lid,this->GetNumberOfNodes(temp_input->GetInputInterpolationType()),temp_input->element_values); + break; + default: + _error_("Interpolation "<<EnumToStringx(temp_input->GetInputInterpolationType())<<" not supported yet"); + } + + if(isenthalpy){ + /*Convert that to enthalpy for the enthalpy model*/ + PentaInput* enth_input = xDynamicCast<PentaInput*>(this->GetInput(EnthalpyEnum)); _assert_(enth_input); + switch(enth_input->GetInputInterpolationType()){ + case P1Enum: + ThermalToEnthalpy(&enth_input->element_values[3],yearlytemperatures[3],0.,0.); + ThermalToEnthalpy(&enth_input->element_values[4],yearlytemperatures[4],0.,0.); + ThermalToEnthalpy(&enth_input->element_values[5],yearlytemperatures[5],0.,0.); + enth_input->SetInput(P1Enum,NUM_VERTICES,&vertexlids[0],enth_input->element_values); + break; + case P1DGEnum: + case P1xP2Enum: + case P1xP3Enum: + case P1xP4Enum: + ThermalToEnthalpy(&enth_input->element_values[3],yearlytemperatures[3],0.,0.); + ThermalToEnthalpy(&enth_input->element_values[4],yearlytemperatures[4],0.,0.); + ThermalToEnthalpy(&enth_input->element_values[5],yearlytemperatures[5],0.,0.); +
enth_input->SetInput(enth_input->GetInputInterpolationType(),this->lid,this->GetNumberOfNodes(enth_input->GetInputInterpolationType()),enth_input->element_values); + break; + default: + _error_("Interpolation "<<EnumToStringx(enth_input->GetInputInterpolationType())<<" not supported yet"); + } + } + } + } + this->AddInput(SmbMassBalanceEnum,&smb[0],P1Enum); + this->AddInput(TemperaturePDDEnum,&yearlytemperatures[0],P1Enum); + this->AddInput(SmbAccumulationEnum,&accu[0],P1Enum); + this->AddInput(SmbMeltEnum,&melt[0],P1Enum); + this->InputExtrude(TemperaturePDDEnum,-1); + this->InputExtrude(SmbMassBalanceEnum,-1); + this->InputExtrude(SmbAccumulationEnum,-1); + this->InputExtrude(SmbMeltEnum,-1); + break; + default: _error_("Not implemented yet"); + } + + /*clean-up*/ + delete gauss; + xDelete<IssmDouble>(monthlytemperatures); + xDelete<IssmDouble>(monthlyprec); + xDelete<IssmDouble>(smb); + xDelete<IssmDouble>(melt); + xDelete<IssmDouble>(accu); + xDelete<IssmDouble>(yearlytemperatures); + xDelete<IssmDouble>(s); + xDelete<IssmDouble>(s0t); + xDelete<IssmDouble>(s0p); + xDelete<IssmDouble>(t_ampl); + xDelete<IssmDouble>(p_ampl); + xDelete<IssmDouble>(smbcorr); + xDelete<IssmDouble>(melt_star); +} +/*}}}*/ void Element::PositiveDegreeDayGCM(){/*{{{*/ const int NUM_VERTICES = this->GetNumberOfVertices(); @@ -5680,8 +5932,12 @@ void Element::SmbGemb(IssmDouble timeinputs, int count, int steps){/*{{{*/ qsparam=fmax(0.622*esparam/(pparam/100 - 0.378*esparam),0); if ((isprecipmap) && (qsparam>0)){ - P=fmax(prparam*qs/qsparam,0.0); - C=fmax(C*qs/qsparam,0.0); + IssmDouble precipscaling = 1.0; + Input *pscaling_input = NULL; + pscaling_input = this->GetInput(SmbMappedforcingprecipscalingEnum); _assert_(pscaling_input); + pscaling_input->GetInputAverage(&precipscaling); + P=fmax(prparam*qs/qsparam*precipscaling,0.0); + C=fmax(C*qs/qsparam*precipscaling,0.0); } else P=prparam; @@ -6254,6 +6510,19 @@ IssmDouble Element::TotalGroundedBmb(IssmDouble* mask, bool scaled){/*{{{*/ return this->TotalGroundedBmb(scaled); } /*}}}*/ +IssmDouble Element::TotalHydrologyBasalFlux(IssmDouble* mask, bool scaled){/*{{{*/ + + /*Retrieve values of the mask defining the
element: */ + for(int i=0;i<this->GetNumberOfVertices();i++){ + if(mask[this->vertices[i]->Sid()]<=0.){ + return 0.; + } + } + + /*Return: */ + return this->TotalHydrologyBasalFlux(scaled); +} +/*}}}*/ IssmDouble Element::TotalSmb(IssmDouble* mask, bool scaled){/*{{{*/ /*Retrieve values of the mask defining the element: */ diff --git a/src/c/classes/Elements/Element.h b/src/c/classes/Elements/Element.h index 99ac38f6d..2507d72d9 100644 --- a/src/c/classes/Elements/Element.h +++ b/src/c/classes/Elements/Element.h @@ -160,6 +160,7 @@ class Element: public Object{ bool IsAllMinThicknessInElement(); bool IsLandInElement(); void Ismip6FloatingiceMeltingRate(); + void Ismip7FloatingiceMeltingRate(); void LapseRateBasinSMB(int numelevbins, IssmDouble* lapserates, IssmDouble* elevbins,IssmDouble* refelevation); void LinearFloatingiceMeltingRate(); void SpatialLinearFloatingiceMeltingRate(); @@ -179,6 +180,7 @@ class Element: public Object{ void PicoComputeBasalMelt(); void PositiveDegreeDay(IssmDouble* pdds,IssmDouble* pds,IssmDouble signorm,bool ismungsm,bool issetpddfac); void PositiveDegreeDaySicopolis(bool isfirnwarming); + void PositiveDegreeDayFast(bool isfirnwarming); void PositiveDegreeDayGCM(); void ProjectGridDataToMesh(IssmDouble* griddata,IssmDouble* x_grid,IssmDouble* y_grid,int Nx,int Ny,int input_enum); void SmbDebrisEvatt(); @@ -206,6 +208,7 @@ class Element: public Object{ void SubglacialWaterPressure(int output_enum); IssmDouble TotalFloatingBmb(IssmDouble* mask, bool scaled); IssmDouble TotalGroundedBmb(IssmDouble* mask, bool scaled); + IssmDouble TotalHydrologyBasalFlux(IssmDouble* mask, bool scaled); IssmDouble TotalSmb(IssmDouble* mask, bool scaled); IssmDouble TotalSmbMelt(IssmDouble* mask, bool scaled); IssmDouble TotalSmbRefreeze(IssmDouble* mask, bool scaled); @@ -395,6 +398,7 @@ class Element: public Object{ virtual IssmDouble TotalCalvingMeltingFluxLevelset(bool scaled){_error_("not implemented");}; virtual IssmDouble TotalFloatingBmb(bool scaled)=0;
virtual IssmDouble TotalGroundedBmb(bool scaled)=0; + virtual IssmDouble TotalHydrologyBasalFlux(bool scaled)=0; virtual IssmDouble TotalSmb(bool scaled)=0; virtual IssmDouble TotalSmbMelt(bool scaled)=0; virtual IssmDouble TotalSmbRefreeze(bool scaled)=0; diff --git a/src/c/classes/Elements/Penta.cpp b/src/c/classes/Elements/Penta.cpp index 2a724b96c..a6a5957eb 100644 --- a/src/c/classes/Elements/Penta.cpp +++ b/src/c/classes/Elements/Penta.cpp @@ -2354,7 +2354,12 @@ void Penta::InputDepthAverageAtBase(int original_enum,int average_enum){/* /*Now we only need to divide the depth integrated input by the total thickness!*/ for(int iv=0;iv<3;iv++){ - total[iv ] = total[iv]/intz[iv]; + if(intz[iv]<1e-50){/*If thickness is 0 we set the input as 0*/ + total[iv] = 0.; + } + else{ + total[iv] = total[iv]/intz[iv]; + } total[iv+3] = total[iv]; } GetVerticesLidList(&lidlist[0]); @@ -2439,9 +2444,6 @@ void Penta::ControlInputExtrude(int enum_type,int start){/*{{{*/ ElementInput* input = this->inputs->GetControlInputData(enum_type,"value"); if(input->ObjectEnum()!=PentaInputEnum) _error_("not supported yet"); PentaInput* pentainput = xDynamicCast(input); - ElementInput* input2 = this->inputs->GetControlInputData(enum_type,"savedvalues"); - if(input->ObjectEnum()!=PentaInputEnum) _error_("not supported yet"); - PentaInput* pentainput2= xDynamicCast(input2); /*FIXME: this should not be necessary*/ ElementInput* input3 = this->inputs->GetControlInputData(enum_type,"gradient"); if(input->ObjectEnum()!=PentaInputEnum) _error_("not supported yet"); @@ -2450,29 +2452,24 @@ void Penta::ControlInputExtrude(int enum_type,int start){/*{{{*/ int lidlist[NUMVERTICES]; this->GetVerticesLidList(&lidlist[0]); pentainput->Serve(NUMVERTICES,&lidlist[0]); - pentainput2->Serve(NUMVERTICES,&lidlist[0]); pentainput3->Serve(NUMVERTICES,&lidlist[0]); if(pentainput->GetInterpolation()==P1Enum){ /*Extrude values first*/ IssmDouble extrudedvalues[NUMVERTICES]; - IssmDouble extrudedvalues2[NUMVERTICES]; 
IssmDouble extrudedvalues3[NUMVERTICES]; this->GetInputListOnVertices(&extrudedvalues[0],pentainput,0.); - this->GetInputListOnVertices(&extrudedvalues2[0],pentainput2,0.); this->GetInputListOnVertices(&extrudedvalues3[0],pentainput3,0.); if(start==-1){ for(int i=0;iGetVerticesLidList(&vertexlids[0]); pentainput->SetInput(P1Enum,NUMVERTICES,&vertexlids[0],&extrudedvalues[0]); - pentainput2->SetInput(P1Enum,NUMVERTICES,&vertexlids[0],&extrudedvalues2[0]); if(start==-1 && !penta->IsOnBase()){ pentainput3->SetInput(P1Enum,NUMVERTICES,&vertexlids[0],&extrudedvalues3[0]); } diff --git a/src/c/classes/Elements/Penta.h b/src/c/classes/Elements/Penta.h index 11200df69..3b0976692 100644 --- a/src/c/classes/Elements/Penta.h +++ b/src/c/classes/Elements/Penta.h @@ -200,6 +200,7 @@ class Penta: public Element,public ElementHook,public PentaRef{ IssmDouble TotalCalvingMeltingFluxLevelset(bool scaled); IssmDouble TotalFloatingBmb(bool scaled); IssmDouble TotalGroundedBmb(bool scaled); + IssmDouble TotalHydrologyBasalFlux(bool scaled){_error_("not implemented yet");}; IssmDouble TotalSmb(bool scaled); IssmDouble TotalSmbMelt(bool scaled); IssmDouble TotalSmbRefreeze(bool scaled); diff --git a/src/c/classes/Elements/Seg.h b/src/c/classes/Elements/Seg.h index 5d14f99b2..2cb305628 100644 --- a/src/c/classes/Elements/Seg.h +++ b/src/c/classes/Elements/Seg.h @@ -156,6 +156,7 @@ class Seg: public Element,public ElementHook,public SegRef{ IssmDouble TimeAdapt(){_error_("not implemented yet");}; IssmDouble TotalFloatingBmb(bool scaled){_error_("not implemented yet");}; IssmDouble TotalGroundedBmb(bool scaled){_error_("not implemented yet");}; + IssmDouble TotalHydrologyBasalFlux(bool scaled){_error_("not implemented yet");}; IssmDouble TotalSmb(bool scaled){_error_("not implemented yet");}; IssmDouble TotalSmbMelt(bool scaled){_error_("not implemented yet");}; IssmDouble TotalSmbRefreeze(bool scaled){_error_("not implemented yet");}; diff --git a/src/c/classes/Elements/Tetra.h 
b/src/c/classes/Elements/Tetra.h index a4e67f3fa..8090686f8 100644 --- a/src/c/classes/Elements/Tetra.h +++ b/src/c/classes/Elements/Tetra.h @@ -165,6 +165,7 @@ class Tetra: public Element,public ElementHook,public TetraRef{ IssmDouble TimeAdapt(){_error_("not implemented yet");}; IssmDouble TotalFloatingBmb(bool scaled){_error_("not implemented yet");}; IssmDouble TotalGroundedBmb(bool scaled){_error_("not implemented yet");}; + IssmDouble TotalHydrologyBasalFlux(bool scaled){_error_("not implemented yet");}; IssmDouble TotalSmb(bool scaled){_error_("not implemented yet");}; IssmDouble TotalSmbMelt(bool scaled){_error_("not implemented yet");}; IssmDouble TotalSmbRefreeze(bool scaled){_error_("not implemented yet");}; diff --git a/src/c/classes/Elements/Tria.cpp b/src/c/classes/Elements/Tria.cpp index d8da0377d..0fd8272fd 100644 --- a/src/c/classes/Elements/Tria.cpp +++ b/src/c/classes/Elements/Tria.cpp @@ -6031,6 +6031,129 @@ IssmDouble Tria::TotalGroundedBmb(bool scaled){/*{{{*/ return Total_Gbmb; } /*}}}*/ +IssmDouble Tria::TotalHydrologyBasalFlux(bool scaled){/*{{{*/ + + /*Make sure there is a grounding line here*/ + if(!IsIceInElement()) return 0; + if(!IsZeroLevelset(MaskOceanLevelsetEnum)) return 0; + + /*Scaled not implemented yet...*/ + _assert_(!scaled); + + int domaintype,index1,index2; + const IssmPDouble epsilon = 1.e-15; + IssmDouble s1,s2; + IssmDouble gl[NUMVERTICES]; + IssmDouble xyz_front[2][3]; + + IssmDouble xyz_list[NUMVERTICES][3]; + ::GetVerticesCoordinates(&xyz_list[0][0],vertices,NUMVERTICES); + + /*Recover parameters and values*/ + parameters->FindParam(&domaintype,DomainTypeEnum); + Element::GetInputListOnVertices(&gl[0],MaskOceanLevelsetEnum); + + /*Be sure that values are not zero*/ + if(gl[0]==0.) gl[0]=gl[0]+epsilon; + if(gl[1]==0.) gl[1]=gl[1]+epsilon; + if(gl[2]==0.) 
gl[2]=gl[2]+epsilon; + + if(domaintype==Domain2DverticalEnum || domaintype==Domain3DEnum || domaintype==Domain3DsurfaceEnum){ + _error_("not implemented"); + } + else if(domaintype==Domain2DhorizontalEnum){ + int pt1 = 0; + int pt2 = 1; + if(gl[0]*gl[1]>0){ //Nodes 0 and 1 are similar, so points must be found on segment 0-2 and 1-2 + + /*Portion of the segments*/ + s1=gl[2]/(gl[2]-gl[1]); + s2=gl[2]/(gl[2]-gl[0]); + if(gl[2]<0.){ + pt1 = 1; pt2 = 0; + } + xyz_front[pt2][0]=xyz_list[2][0]+s1*(xyz_list[1][0]-xyz_list[2][0]); + xyz_front[pt2][1]=xyz_list[2][1]+s1*(xyz_list[1][1]-xyz_list[2][1]); + xyz_front[pt2][2]=xyz_list[2][2]+s1*(xyz_list[1][2]-xyz_list[2][2]); + xyz_front[pt1][0]=xyz_list[2][0]+s2*(xyz_list[0][0]-xyz_list[2][0]); + xyz_front[pt1][1]=xyz_list[2][1]+s2*(xyz_list[0][1]-xyz_list[2][1]); + xyz_front[pt1][2]=xyz_list[2][2]+s2*(xyz_list[0][2]-xyz_list[2][2]); + } + else if(gl[1]*gl[2]>0){ //Nodes 1 and 2 are similar, so points must be found on segment 0-1 and 0-2 + + /*Portion of the segments*/ + s1=gl[0]/(gl[0]-gl[1]); + s2=gl[0]/(gl[0]-gl[2]); + if(gl[0]<0.){ + pt1 = 1; pt2 = 0; + } + + xyz_front[pt1][0]=xyz_list[0][0]+s1*(xyz_list[1][0]-xyz_list[0][0]); + xyz_front[pt1][1]=xyz_list[0][1]+s1*(xyz_list[1][1]-xyz_list[0][1]); + xyz_front[pt1][2]=xyz_list[0][2]+s1*(xyz_list[1][2]-xyz_list[0][2]); + xyz_front[pt2][0]=xyz_list[0][0]+s2*(xyz_list[2][0]-xyz_list[0][0]); + xyz_front[pt2][1]=xyz_list[0][1]+s2*(xyz_list[2][1]-xyz_list[0][1]); + xyz_front[pt2][2]=xyz_list[0][2]+s2*(xyz_list[2][2]-xyz_list[0][2]); + } + else if(gl[0]*gl[2]>0){ //Nodes 0 and 2 are similar, so points must be found on segment 1-0 and 1-2 + + /*Portion of the segments*/ + s1=gl[1]/(gl[1]-gl[0]); + s2=gl[1]/(gl[1]-gl[2]); + if(gl[1]<0.){ + pt1 = 1; pt2 = 0; + } + + xyz_front[pt2][0]=xyz_list[1][0]+s1*(xyz_list[0][0]-xyz_list[1][0]); + xyz_front[pt2][1]=xyz_list[1][1]+s1*(xyz_list[0][1]-xyz_list[1][1]); + xyz_front[pt2][2]=xyz_list[1][2]+s1*(xyz_list[0][2]-xyz_list[1][2]); + 
xyz_front[pt1][0]=xyz_list[1][0]+s2*(xyz_list[2][0]-xyz_list[1][0]); + xyz_front[pt1][1]=xyz_list[1][1]+s2*(xyz_list[2][1]-xyz_list[1][1]); + xyz_front[pt1][2]=xyz_list[1][2]+s2*(xyz_list[2][2]-xyz_list[1][2]); + } + else{ + _error_("case not possible"); + } + + } + else _error_("mesh type "<<EnumToStringx(domaintype)<<" not supported yet"); + + _assert_(s1>=0 && s1<=1.); + _assert_(s2>=0 && s2<=1.); + + /*Get normal vector*/ + IssmDouble normal[3]; + this->NormalSection(&normal[0],&xyz_front[0][0]); + + /*Get inputs*/ + IssmDouble flux = 0.; + IssmDouble vx,vy,Jdet; + IssmDouble rho_water=FindParam(MaterialsRhoFreshwaterEnum); + Input* vx_input=NULL; + Input* vy_input=NULL; + if(domaintype==Domain2DhorizontalEnum){ + vx_input=this->GetInput(HydrologyWaterVxEnum); _assert_(vx_input); + vy_input=this->GetInput(HydrologyWaterVyEnum); _assert_(vy_input); + } + else{ + _error_("Not implemented yet."); + } + + /*Start looping on Gaussian points*/ + Gauss* gauss=this->NewGauss(&xyz_list[0][0],&xyz_front[0][0],3); + while(gauss->next()){ + vx_input->GetInputValue(&vx,gauss); + vy_input->GetInputValue(&vy,gauss); + this->JacobianDeterminantSurface(&Jdet,&xyz_front[0][0],gauss); + + flux += rho_water*Jdet*gauss->weight*(vx*normal[0] + vy*normal[1]); + } + + /*Cleanup and return*/ + delete gauss; + return flux; +}/*}}}*/ IssmDouble Tria::TotalSmb(bool scaled){/*{{{*/ /*The smb[kg yr-1] of one element is area[m2] * smb [kg m^-2 yr^-1]*/ diff --git a/src/c/classes/Elements/Tria.h b/src/c/classes/Elements/Tria.h index 560241449..7d4862069 100644 --- a/src/c/classes/Elements/Tria.h +++ b/src/c/classes/Elements/Tria.h @@ -158,6 +158,7 @@ class Tria: public Element,public ElementHook,public TriaRef{ IssmDouble TotalCalvingMeltingFluxLevelset(bool scaled); IssmDouble TotalFloatingBmb(bool scaled); IssmDouble TotalGroundedBmb(bool scaled); + IssmDouble TotalHydrologyBasalFlux(bool scaled); IssmDouble TotalSmb(bool scaled); IssmDouble TotalSmbMelt(bool scaled); IssmDouble TotalSmbRefreeze(bool scaled); diff --git
a/src/c/classes/ExternalResults/Results.cpp b/src/c/classes/ExternalResults/Results.cpp index 5653e6dc1..2b9c2c9fc 100644 --- a/src/c/classes/ExternalResults/Results.cpp +++ b/src/c/classes/ExternalResults/Results.cpp @@ -61,21 +61,6 @@ int Results::AddResult(ExternalResult* in_result){/*{{{*/ return 1; } /*}}}*/ -int Results::DeleteResult(int result_enum,int result_step){/*{{{*/ - - for(Object* &object : this->objects){ - ExternalResult* result=xDynamicCast<ExternalResult*>(object); - if(result->GetStep()==result_step){ - if(strcmp(result->GetResultName(),EnumToStringx(result_enum))==0){ - this->DeleteObject(result); - break; - } - } - } - - return 1; -} -/*}}}*/ ExternalResult* Results::FindResult(int result_enum){/*{{{*/ for(Object* &object : this->objects){ diff --git a/src/c/classes/ExternalResults/Results.h b/src/c/classes/ExternalResults/Results.h index accb09014..0ba919797 100644 --- a/src/c/classes/ExternalResults/Results.h +++ b/src/c/classes/ExternalResults/Results.h @@ -21,7 +21,6 @@ class Results: public DataSet{ /*Mehthos*/ int AddResult(ExternalResult* result); - int DeleteResult(int result_enum,int result_step); ExternalResult* FindResult(int result_enum); void Write(Parameters* parameters); }; diff --git a/src/c/classes/FemModel.cpp b/src/c/classes/FemModel.cpp index 53f3d1c06..976ec7206 100644 --- a/src/c/classes/FemModel.cpp +++ b/src/c/classes/FemModel.cpp @@ -32,7 +32,6 @@ #include "../modules/ModelProcessorx/ModelProcessorx.h" #include "../modules/SpcNodesx/SpcNodesx.h" #include "../modules/ConfigureObjectsx/ConfigureObjectsx.h" -#include "../modules/ParseToolkitsOptionsx/ParseToolkitsOptionsx.h" #include "../modules/GetVectorFromInputsx/GetVectorFromInputsx.h" #include "../modules/InputUpdateFromVectorx/InputUpdateFromVectorx.h" #include "../modules/NodesDofx/NodesDofx.h" @@ -803,6 +802,9 @@ void FemModel::SolutionAnalysesList(int** panalyses,int* pnumanalyses,IoModel* i if(hydrology_model==HydrologyarmapwEnum){
analyses_temp[numanalyses++]=HydrologyArmapwAnalysisEnum; } + if(hydrology_model==HydrologyprescribeEnum){ + analyses_temp[numanalyses++]=HydrologyPrescribeAnalysisEnum; + } } break; @@ -2504,6 +2506,7 @@ void FemModel::RequestedOutputsx(Results **presults,char** requested_outputs, in case TotalFloatingBmbScaledEnum: this->TotalFloatingBmbx(&double_result,true); break; case TotalGroundedBmbEnum: this->TotalGroundedBmbx(&double_result,false); break; case TotalGroundedBmbScaledEnum: this->TotalGroundedBmbx(&double_result,true); break; + case TotalHydrologyBasalFluxEnum: this->TotalHydrologyBasalFluxx(&double_result,false); break; case TotalSmbEnum: this->TotalSmbx(&double_result,false); break; case TotalSmbMeltEnum: this->TotalSmbMeltx(&double_result,false); break; case TotalSmbRefreezeEnum: this->TotalSmbRefreezex(&double_result,false); break; @@ -2779,6 +2782,7 @@ void FemModel::Responsex(IssmDouble* responses,int response_descriptor_enum){/*{ case TotalFloatingBmbScaledEnum: this->TotalFloatingBmbx(responses, true); break; case TotalGroundedBmbEnum: this->TotalGroundedBmbx(responses, false); break; case TotalGroundedBmbScaledEnum: this->TotalGroundedBmbx(responses, true); break; + case TotalHydrologyBasalFluxEnum: this->TotalHydrologyBasalFluxx(responses, false); break; case TotalSmbEnum: this->TotalSmbx(responses, false); break; case TotalSmbMeltEnum: this->TotalSmbMeltx(responses, false); break; case TotalSmbRefreezeEnum: this->TotalSmbRefreezex(responses, false); break; @@ -3159,6 +3163,20 @@ void FemModel::TotalGroundedBmbx(IssmDouble* pGbmb, bool scaled){/*{{{*/ *pGbmb=total_gbmb; }/*}}}*/ +void FemModel::TotalHydrologyBasalFluxx(IssmDouble* pM, bool scaled){/*{{{*/ + IssmDouble local_basalflux= 0.0; + IssmDouble total_basalflux; + + for(Object* & object : this->elements->objects){ + Element* element = xDynamicCast<Element*>(object); + local_basalflux+=element->TotalHydrologyBasalFlux(scaled); + } +
ISSM_MPI_Reduce(&local_basalflux,&total_basalflux,1,ISSM_MPI_DOUBLE,ISSM_MPI_SUM,0,IssmComm::GetComm() ); + ISSM_MPI_Bcast(&total_basalflux,1,ISSM_MPI_DOUBLE,0,IssmComm::GetComm()); + + /*Assign output pointers: */ + *pM=total_basalflux; +}/*}}}*/ void FemModel::TotalSmbx(IssmDouble* pSmb, bool scaled){/*{{{*/ IssmDouble local_smb = 0; diff --git a/src/c/classes/FemModel.h b/src/c/classes/FemModel.h index 52924ce43..14fbb28d2 100644 --- a/src/c/classes/FemModel.h +++ b/src/c/classes/FemModel.h @@ -143,6 +143,7 @@ class FemModel { void TotalCalvingMeltingFluxLevelsetx(IssmDouble* pGbmb, bool scaled); void TotalFloatingBmbx(IssmDouble* pFbmb, bool scaled); void TotalGroundedBmbx(IssmDouble* pGbmb, bool scaled); + void TotalHydrologyBasalFluxx(IssmDouble* pM, bool scaled); void TotalSmbx(IssmDouble* pSmb, bool scaled); void TotalSmbMeltx(IssmDouble* pSmbMelt, bool scaled); void TotalSmbRefreezex(IssmDouble* pSmbRefreeze, bool scaled); diff --git a/src/c/classes/GrdLoads.cpp b/src/c/classes/GrdLoads.cpp index 48efa4b3c..854d6a910 100644 --- a/src/c/classes/GrdLoads.cpp +++ b/src/c/classes/GrdLoads.cpp @@ -158,7 +158,7 @@ void GrdLoads::SHDegree2Coefficients(IssmDouble* deg2coeff, FemModel* femmodel, }; /*}}}*/ void GrdLoads::Combineloads(int nel,SealevelGeometry* slgeom){ /*{{{*/ - int e,l, nbar, ae; + int e, l, nbar, ae; //Determine loads /*{{{*/ nactiveloads=0; @@ -225,7 +225,7 @@ void GrdLoads::Combineloads(int nel,SealevelGeometry* slgeom){ /*{{{*/ ae=0; if(subsealevelloads && l==SLGEOM_OCEAN){ for (e=0;elayout_enum){ case TriaInputEnum: this->values =new TriaInput(nbe,nbv,interp); - this->savedvalues=new TriaInput(nbe,nbv,interp); this->minvalues =new TriaInput(nbe,nbv,interp); this->maxvalues =new TriaInput(nbe,nbv,interp); this->gradient =new TriaInput(nbe,nbv,interp); break; case PentaInputEnum: this->values =new PentaInput(nbe,nbv,interp); - this->savedvalues=new PentaInput(nbe,nbv,interp); this->minvalues =new PentaInput(nbe,nbv,interp); this->maxvalues =new 
PentaInput(nbe,nbv,interp); this->gradient =new PentaInput(nbe,nbv,interp); @@ -57,7 +54,6 @@ ControlInput::ControlInput(int enum_in,int nbe, int nbv,int id,IssmDouble* times this->layout_enum = TransientInputEnum; /*Tria or Penta?*/ this->values =new TransientInput(enum_in,nbe,nbv,times,numtimes); - this->savedvalues=new TransientInput(enum_in,nbe,nbv,times,numtimes); this->minvalues =new TransientInput(enum_in,nbe,nbv,times,numtimes); this->maxvalues =new TransientInput(enum_in,nbe,nbv,times,numtimes); this->gradient =new TransientInput(enum_in,nbe,nbv,times,numtimes); @@ -65,7 +61,6 @@ /*}}}*/ ControlInput::~ControlInput(){/*{{{*/ delete values; - delete savedvalues; delete minvalues; delete maxvalues; delete gradient; @@ -83,7 +78,6 @@ Input* ControlInput::copy() {/*{{{*/ output->layout_enum = this->layout_enum; if(values) output->values = this->values->copy(); - if(savedvalues) output->savedvalues = this->savedvalues->copy(); if(minvalues) output->minvalues = this->minvalues->copy(); if(maxvalues) output->maxvalues = this->maxvalues->copy(); if(gradient) output->gradient = this->gradient->copy(); @@ -93,7 +87,6 @@ /*}}}*/ void ControlInput::Configure(Parameters* params){/*{{{*/ this->values->Configure(params); - this->savedvalues->Configure(params); this->minvalues->Configure(params); this->maxvalues->Configure(params); this->gradient->Configure(params); @@ -105,7 +98,6 @@ void ControlInput::DeepEcho(void){/*{{{*/ _printf_(setw(15)<<" ControlInput "<<EnumToStringx(this->enum_type)<<"\n"); _printf_(setw(15)<<" Layout "<<EnumToStringx(this->layout_enum)<<"\n"); _printf_("---values: \n"); if (values) values->Echo(); - _printf_("---savedvalues: \n");if (savedvalues) savedvalues->Echo(); _printf_("---minvalues: \n"); if (minvalues) minvalues->Echo(); _printf_("---maxvalues: \n"); if (maxvalues) maxvalues->Echo(); _printf_("---gradient: \n"); if (gradient){ gradient->Echo();} else{_printf_("
Not set yet\n");} @@ -131,21 +123,18 @@ void ControlInput::Marshall(MarshallHandle* marshallhandle){ /*{{{*/ switch(this->layout_enum){ case TriaInputEnum: this->values =new TriaInput(); - this->savedvalues=new TriaInput(); this->minvalues =new TriaInput(); this->maxvalues =new TriaInput(); this->gradient =new TriaInput(); break; case PentaInputEnum: this->values =new PentaInput(); - this->savedvalues=new PentaInput(); this->minvalues =new PentaInput(); this->maxvalues =new PentaInput(); this->gradient =new PentaInput(); break; case TransientInputEnum: this->values =new TransientInput(); - this->savedvalues=new TransientInput(); this->minvalues =new TransientInput(); this->maxvalues =new TransientInput(); this->gradient =new TransientInput(); @@ -156,7 +145,6 @@ } this->values->Marshall(marshallhandle); - this->savedvalues->Marshall(marshallhandle); this->minvalues->Marshall(marshallhandle); this->maxvalues->Marshall(marshallhandle); this->gradient->Marshall(marshallhandle); @@ -243,10 +231,6 @@ ElementInput* ControlInput::GetInput(const char* data){/*{{{*/ _assert_(values); return xDynamicCast<ElementInput*>(values); } - else if(strcmp(data,"savedvalues")==0){ - _assert_(savedvalues); - return xDynamicCast<ElementInput*>(values); - } else if (strcmp(data,"lowerbound")==0){ _assert_(minvalues); return xDynamicCast<ElementInput*>(minvalues); @@ -277,10 +261,6 @@ TransientInput* ControlInput::GetTransientInput(const char* data){/*{{{*/ _assert_(values); return xDynamicCast<TransientInput*>(values); } - else if(strcmp(data,"savedvalues")==0){ - _assert_(savedvalues); - return xDynamicCast<TransientInput*>(values); - } else if (strcmp(data,"lowerbound")==0){ _assert_(minvalues); return xDynamicCast<TransientInput*>(minvalues); diff --git a/src/c/classes/Inputs/ControlInput.h b/src/c/classes/Inputs/ControlInput.h index fa0dd0125..ffda7560b 100644 --- a/src/c/classes/Inputs/ControlInput.h +++ b/src/c/classes/Inputs/ControlInput.h @@ -20,7 +20,6 @@ class ControlInput: public Input{ Input *gradient;
Input *maxvalues; Input *minvalues; - Input *savedvalues; Input *values; /*ControlInput constructors, destructors: {{{*/ diff --git a/src/c/classes/IoModel.cpp b/src/c/classes/IoModel.cpp index d65c76285..4cb9aa49f 100644 --- a/src/c/classes/IoModel.cpp +++ b/src/c/classes/IoModel.cpp @@ -427,6 +427,7 @@ Param* IoModel::CopyConstantObject(const char* constant_name,int param_enum){/*{ } } + this->PrintDebugMessage(); _error_("Constant \"" << constant_name << "\" not found in iomodel"); return NULL; } @@ -2779,15 +2780,7 @@ void IoModel::FindConstant(bool* pvalue,const char* constant_name){/*{{{*/ if(strcmp(ioconstant->name,constant_name)==0){ if(ioconstant->constant->ObjectEnum()!=BoolParamEnum){ - _printf0_("=========================================================================\n"); - _printf0_(" Marshalled file is not consistent with compiled code \n"); - _printf0_(" \n"); - _printf0_(" This problem typically happens when two different versions of ISSM \n"); - _printf0_(" are being used. 
Make sure that you are running the same version: \n"); - _printf0_(" - to marshall the model (i.e., MATLAB/python interface) \n"); - _printf0_(" - to run ISSM (i.e., the compiled code issm.exe) \n"); - _printf0_(" \n"); - _printf0_("=========================================================================\n\n"); + this->PrintDebugMessage(); _error_("\""<< constant_name <<"\" cannot return a bool, it is a " << EnumToStringx(ioconstant->constant->ObjectEnum())); } ioconstant->constant->GetParameterValue(pvalue); @@ -2809,15 +2802,7 @@ void IoModel::FindConstant(int* pvalue,const char* constant_name){/*{{{*/ if(strcmp(ioconstant->name,constant_name)==0){ if(ioconstant->constant->ObjectEnum()!=IntParamEnum){ - _printf0_("=========================================================================\n"); - _printf0_(" Marshalled file is not consistent with compiled code \n"); - _printf0_(" \n"); - _printf0_(" This problem typically happens when two different versions of ISSM \n"); - _printf0_(" are being used. 
Make sure that you are running the same version: \n"); - _printf0_(" - to marshall the model (i.e., MATLAB/python interface) \n"); - _printf0_(" - to run ISSM (i.e., the compiled code issm.exe) \n"); - _printf0_(" \n"); - _printf0_("=========================================================================\n\n"); + this->PrintDebugMessage(); _error_("\""<< constant_name <<"\" cannot return an int, it is a " << EnumToStringx(ioconstant->constant->ObjectEnum())); } ioconstant->constant->GetParameterValue(pvalue); @@ -2838,15 +2823,7 @@ void IoModel::FindConstant(IssmDouble* pvalue,const char* constant_name){/*{{{* if(strcmp(ioconstant->name,constant_name)==0){ if(ioconstant->constant->ObjectEnum()!=DoubleParamEnum){ - _printf0_("=========================================================================\n"); - _printf0_(" Marshalled file is not consistent with compiled code \n"); - _printf0_(" \n"); - _printf0_(" This problem typically happens when two different versions of ISSM \n"); - _printf0_(" are being used. 
Make sure that you are running the same version: \n"); - _printf0_(" - to marshall the model (i.e., MATLAB/python interface) \n"); - _printf0_(" - to run ISSM (i.e., the compiled code issm.exe) \n"); - _printf0_(" \n"); - _printf0_("=========================================================================\n\n"); + this->PrintDebugMessage(); _error_("\""<< constant_name <<"\" cannot return a double, it is a " << EnumToStringx(ioconstant->constant->ObjectEnum())); } ioconstant->constant->GetParameterValue(pvalue); @@ -2867,15 +2844,7 @@ void IoModel::FindConstant(char** pvalue,const char* constant_name){/*{{{*/ if(strcmp(ioconstant->name,constant_name)==0){ if(ioconstant->constant->ObjectEnum()!=StringParamEnum){ - _printf0_("=========================================================================\n"); - _printf0_(" Marshalled file is not consistent with compiled code \n"); - _printf0_(" \n"); - _printf0_(" This problem typically happens when two different versions of ISSM \n"); - _printf0_(" are being used. 
Make sure that you are running the same version: \n"); - _printf0_(" - to marshall the model (i.e., MATLAB/python interface) \n"); - _printf0_(" - to run ISSM (i.e., the compiled code issm.exe) \n"); - _printf0_(" \n"); - _printf0_("=========================================================================\n\n"); + this->PrintDebugMessage(); _error_("\""<< constant_name <<"\" cannot return a string, it is a " << EnumToStringx(ioconstant->constant->ObjectEnum())); } ioconstant->constant->GetParameterValue(pvalue); @@ -2896,15 +2865,7 @@ void IoModel::FindConstant(char*** pvalue,int* psize,const char* constant_name) if(strcmp(ioconstant->name,constant_name)==0){ if(ioconstant->constant->ObjectEnum()!=StringArrayParamEnum){ - _printf0_("=========================================================================\n"); - _printf0_(" Marshalled file is not consistent with compiled code \n"); - _printf0_(" \n"); - _printf0_(" This problem typically happens when two different versions of ISSM \n"); - _printf0_(" are being used. 
Make sure that you are running the same version: \n"); - _printf0_(" - to marshall the model (i.e., MATLAB/python interface) \n"); - _printf0_(" - to run ISSM (i.e., the compiled code issm.exe) \n"); - _printf0_(" \n"); - _printf0_("=========================================================================\n\n"); + this->PrintDebugMessage(); _error_("\""<< constant_name <<"\" cannot return a string array, it is a " << EnumToStringx(ioconstant->constant->ObjectEnum())); } ioconstant->constant->GetParameterValue(pvalue,psize); @@ -2938,6 +2899,19 @@ int IoModel::NumIndependents(void){/*{{{*/ return num_independents; } /*}}}*/ +void IoModel::PrintDebugMessage(void){/*{{{*/ + + _printf0_("=========================================================================\n"); + _printf0_(" Input file (.bin) is not consistent with compiled code \n"); + _printf0_(" \n"); + _printf0_(" This problem typically happens when two different versions of ISSM \n"); + _printf0_(" are being used. Make sure that you are running the same version: \n"); + _printf0_(" - to marshall the model (i.e., MATLAB/python interface) \n"); + _printf0_(" - to run ISSM (i.e., the compiled code issm.exe) \n"); + _printf0_(" \n"); + _printf0_("=========================================================================\n\n"); +} +/*}}}*/ fpos_t* IoModel::SetFilePointersToData(int** pcodes,int** pvector_types, int* pnum_instances,const char* data_name){/*{{{*/ int found = 0; @@ -3154,7 +3128,10 @@ FILE* IoModel::SetFilePointerToData(int* pcode,int* pvector_type,const char* dat } } ISSM_MPI_Bcast(&found,1,ISSM_MPI_INT,0,IssmComm::GetComm()); - if(!found) _error_("could not find data with name \"" << data_name << "\" in binary file"); + if(!found){ + this->PrintDebugMessage(); + _error_("could not find data with name \"" << data_name << "\" in binary file"); + } /*Broadcast code and vector type: */ ISSM_MPI_Bcast(&record_code,1,ISSM_MPI_INT,0,IssmComm::GetComm()); diff --git a/src/c/classes/IoModel.h 
b/src/c/classes/IoModel.h index 5c08514c5..6974a0a56 100644 --- a/src/c/classes/IoModel.h +++ b/src/c/classes/IoModel.h @@ -117,7 +117,8 @@ class IoModel { void FindConstant(IssmDouble* pvalue,const char* constant_name); void FindConstant(char **pvalue,const char* constant_name); void FindConstant(char ***pvalue,int* psize,const char* constant_name); - int NumIndependents(); + int NumIndependents(void); + void PrintDebugMessage(void); /*Input/Output*/ void CheckFile(void); diff --git a/src/c/classes/Loads/Friction.cpp b/src/c/classes/Loads/Friction.cpp index 76f8f8587..e0d3d5366 100644 --- a/src/c/classes/Loads/Friction.cpp +++ b/src/c/classes/Loads/Friction.cpp @@ -97,6 +97,13 @@ Friction::Friction(Element* element_in){/*{{{*/ _error_("not supported yet"); } } + + #ifdef _HAVE_PyBind11_ + Param* emulator_param = element_in->parameters->FindParamObject(FrictionEmulatorEnum); + if(emulator_param->ObjectEnum()!=EmulatorParamEnum) _error_("Parameter should be EmulatorParam"); + this->emulator = (EmulatorParam*)emulator_param; + #endif + } /*}}}*/ Friction::Friction(Element* element_in,int dim) : Friction(element_in) {/*{{{*/ @@ -108,7 +115,7 @@ Friction::Friction(Element* element_in,IssmPDouble dim) : Friction(element_in) { } /*}}}*/ Friction::~Friction(){/*{{{*/ - if(this->linearize){ + if(this->linearize!=0){ xDelete<IssmDouble>(this->alpha2_list); xDelete<IssmDouble>(this->alpha2_complement_list); } @@ -437,6 +444,11 @@ void Friction::GetAlpha2(IssmDouble* palpha2, Gauss* gauss){/*{{{*/ case 15: GetAlpha2RegCoulomb2(palpha2,gauss); break; + #ifdef _HAVE_PyBind11_ + case 20: + GetAlpha2Emulator(palpha2, gauss); + break; + #endif default: _error_("Friction law "<< this->law <<" not supported"); } @@ -1084,6 +1096,19 @@ void Friction::GetAlpha2RegCoulomb2(IssmDouble* palpha2, Gauss* gauss){/*{{{*/ /*Assign output pointers:*/ *palpha2=alpha2; }/*}}}*/ +#if _HAVE_PyBind11_ +void Friction::GetAlpha2Emulator(IssmDouble* palpha2, Gauss* gauss){/*{{{*/ + + /*Get velocity magnitude*/ + IssmDouble
ub = VelMag(gauss); + + /*Compute alpha^2*/ + IssmDouble alpha2 = 0.0; + + /*Assign output pointers:*/ + *palpha2=alpha2; +}/*}}}*/ +#endif IssmDouble Friction::EffectivePressure(Gauss* gauss){/*{{{*/ /*Get effective pressure as a function of flag */ @@ -1451,6 +1476,21 @@ void FrictionUpdateParameters(Parameters* parameters,IoModel* iomodel){/*{{{*/ parameters->AddObject(new IntParam(FrictionCouplingEnum,2)); parameters->AddObject(iomodel->CopyConstantObject("md.friction.effective_pressure_limit",FrictionEffectivePressureLimitEnum)); break; + #ifdef _HAVE_PyBind11_ + case 20:{ + /*Get path from iomodel*/ + char* module_dir = NULL; + char* pt_name = NULL; + char* py_name = NULL; + iomodel->FetchData(&module_dir, "md.friction.module_dir"); + iomodel->FetchData(&pt_name, "md.friction.pt_name"); + iomodel->FetchData(&py_name, "md.friction.py_name"); + parameters->AddObject(new EmulatorParam(FrictionEmulatorEnum, module_dir,pt_name, py_name)); + xDelete<char>(module_dir); + xDelete<char>(pt_name); + xDelete<char>(py_name); + break; + } + #endif default: _error_("Friction law "< +#include <config.h> +#else +#error "Cannot compile with HAVE_CONFIG_H symbol! run configure first!"
+#endif + /*Headers:*/ class Inputs; class Elements; @@ -12,6 +18,9 @@ class Parameters; class IoModel; class GaussPenta; class GaussTria; +#ifdef _HAVE_PyBind11_ +class EmulatorParam; +#endif class Friction{ @@ -26,6 +35,9 @@ class Friction{ Input *vz_input; IssmDouble *alpha2_list; IssmDouble *alpha2_complement_list; + #ifdef _HAVE_PyBind11_ + EmulatorParam* emulator; + #endif /*methods: */ Friction(); @@ -59,6 +71,9 @@ class Friction{ void GetAlpha2RegCoulomb(IssmDouble* palpha2,Gauss* gauss); void GetAlpha2RegCoulomb2(IssmDouble* palpha2,Gauss* gauss); void GetAlpha2Tsai(IssmDouble* palpha2,Gauss* gauss); + #if _HAVE_PyBind11_ + void GetAlpha2Emulator(IssmDouble* palpha2, Gauss* gauss); + #endif IssmDouble EffectivePressure(Gauss* gauss); IssmDouble IcePressure(Gauss* gauss); diff --git a/src/c/classes/Misfit.cpp b/src/c/classes/Misfit.cpp index 03545c2c6..9afff1786 100644 --- a/src/c/classes/Misfit.cpp +++ b/src/c/classes/Misfit.cpp @@ -205,7 +205,6 @@ IssmDouble Misfit::Response(FemModel* femmodel){/*{{{*/ misfit_t += pow(model[i]-observation[i],2)*weights[i]; if (weights[i]!=0)count++; } - misfit=sqrt(misfit_t/count); /*Add this time's contribution to curent misfit: */ misfit=sqrt(misfit_t)/count; diff --git a/src/c/classes/Node.cpp b/src/c/classes/Node.cpp index 174e2fb39..5cbf3d442 100644 --- a/src/c/classes/Node.cpp +++ b/src/c/classes/Node.cpp @@ -385,17 +385,17 @@ int Node::GetDof(int dofindex,int setenum){/*{{{*/ } /*}}}*/ void Node::GetDofList(int* outdoflist,int approximation_enum,int setenum,bool hideclones){/*{{{*/ + _assert_(!this->indexingupdate); - int i; int* doflistpointer = NULL; - if(setenum==GsetEnum) doflistpointer = gdoflist; - else if(setenum==FsetEnum)for(i=0;i<this->gsize;i++) doflistpointer = fdoflist; - else if(setenum==SsetEnum)for(i=0;i<this->gsize;i++) doflistpointer = sdoflist; + if(setenum==GsetEnum) doflistpointer = gdoflist; + else if(setenum==FsetEnum) doflistpointer = fdoflist; + else if(setenum==SsetEnum) doflistpointer = sdoflist; else
_error_("not supported"); if(approximation_enum==NoneApproximationEnum){ - for(i=0;i<this->gsize;i++){ + for(int i=0;i<this->gsize;i++){ if(hideclones && this->IsClone()){ outdoflist[i]=-1; } @@ -407,14 +407,14 @@ void Node::GetDofList(int* outdoflist,int approximation_enum,int setenum,bool hi else{ if(doftype){ int count = 0; - for(i=0;i<this->gsize;i++){ + for(int i=0;i<this->gsize;i++){ if(doftype[i]==approximation_enum){ outdoflist[count++]=doflistpointer[i]; } } } else{ - for(i=0;i<this->gsize;i++){ + for(int i=0;i<this->gsize;i++){ if(hideclones && this->IsClone()){ outdoflist[i]=-1; } diff --git a/src/c/classes/Params/EmulatorParam.cpp b/src/c/classes/Params/EmulatorParam.cpp new file mode 100644 index 000000000..2bb20cf72 --- /dev/null +++ b/src/c/classes/Params/EmulatorParam.cpp @@ -0,0 +1,105 @@ +/*!\file EmulatorParam.cpp + * \brief: implementation of the EmulatorParam object + */ + +/*header files: */ +/*{{{*/ +#ifdef HAVE_CONFIG_H + #include <config.h> +#else +#error "Cannot compile with HAVE_CONFIG_H symbol! run configure first!"
+#endif + +#include "../classes.h" +#include "shared/shared.h" +/*}}}*/ +#include <pybind11/embed.h> +namespace py = pybind11; + +/*EmulatorParam constructors and destructor*/ +EmulatorParam::EmulatorParam(){/*{{{*/ + return; +} +/*}}}*/ +EmulatorParam::EmulatorParam(int in_enum_type, char* module_dir_in, char* pt_name_in, char* py_name_in){/*{{{*/ + + this->enum_type=in_enum_type; + + /*Copy path to emulator*/ + this->module_dir = xNew<char>(strlen(module_dir_in)+1); + xMemCpy<char>(this->module_dir, module_dir_in,(strlen(module_dir_in)+1)); + this->pt_name = xNew<char>(strlen(pt_name_in)+1); + xMemCpy<char>(this->pt_name, pt_name_in,(strlen(pt_name_in)+1)); + this->py_name = xNew<char>(strlen(py_name_in)+1); + xMemCpy<char>(this->py_name, py_name_in,(strlen(py_name_in)+1)); + + /*Activate interpreter*/ + this->guard = NULL; + try{ + /*What if multiple emulators are activated?*/ + this->guard = new py::scoped_interpreter(); + + py::module_ sys = py::module_::import("sys"); + sys.attr("path").attr("append")(this->module_dir); + std::string pt_path(this->module_dir); + if(!pt_path.empty() && pt_path.back() != '/'){ + pt_path += "/"; + } + pt_path += this->pt_name; + std::string py_module_name(this->py_name); + std::size_t dot = py_module_name.rfind('.'); + if(dot != std::string::npos){ + py_module_name = py_module_name.substr(0,dot); + } + + this->mod = py::module_::import(py_module_name.c_str()); + this->mod.attr("init_model")(pt_path.c_str(), "auto"); + } + catch(...){ + delete this->guard; + this->guard = NULL; + throw; + } +} +/*}}}*/ +EmulatorParam::~EmulatorParam(){/*{{{*/ + xDelete<char>(this->module_dir); + xDelete<char>(this->pt_name); + xDelete<char>(this->py_name); + delete this->guard; +} +/*}}}*/ + +/*Object virtual functions definitions:*/ +Param* EmulatorParam::copy() {/*{{{*/ + + _error_("not implemented"); + +} +/*}}}*/ +void EmulatorParam::DeepEcho(void){/*{{{*/ + + _error_("not implemented"); + +} +/*}}}*/ +void EmulatorParam::Echo(void){/*{{{*/ + this->DeepEcho(); +} +/*}}}*/ +int EmulatorParam::Id(void){ return -1;
}/*{{{*/ +/*}}}*/ +void EmulatorParam::Marshall(MarshallHandle* marshallhandle){ /*{{{*/ + + _error_("Not implemented yet"); + +} +/*}}}*/ +int EmulatorParam::ObjectEnum(void){/*{{{*/ + + return EmulatorParamEnum; + +} +/*}}}*/ + +/*EmulatorParam virtual functions definitions: */ diff --git a/src/c/classes/Params/EmulatorParam.h b/src/c/classes/Params/EmulatorParam.h new file mode 100644 index 000000000..4e10236c7 --- /dev/null +++ b/src/c/classes/Params/EmulatorParam.h @@ -0,0 +1,84 @@ +/*! \file EmulatorParam.h + * \brief: header file for the EmulatorParam object + */ + +#ifndef _EMULATORPARAM_H_ +#define _EMULATORPARAM_H_ + +/*Headers:*/ +/*{{{*/ +#ifdef HAVE_CONFIG_H + #include <config.h> +#else +#error "Cannot compile with HAVE_CONFIG_H symbol! run configure first!" +#endif +#include "./Param.h" +#include "../../shared/shared.h" +/*}}}*/ +#include <pybind11/embed.h> +namespace py = pybind11; +class EmulatorParam: public Param{ + + private: + int enum_type; + + public: + char* module_dir; + char* pt_name; + char* py_name; + py::scoped_interpreter* guard; + py::module_ mod; + + /*EmulatorParam constructors, destructors: {{{*/ + EmulatorParam(); + EmulatorParam(int enum_type, char* module_dir_in, char* pt_name_in, char* py_name_in); + ~EmulatorParam(); + /*}}}*/ + /*Object virtual functions definitions:{{{ */ + Param* copy(); + void DeepEcho(); + void Echo(); + int Id(); + void Marshall(MarshallHandle* marshallhandle); + int ObjectEnum(); + /*}}}*/ + /*Param virtual function definitions: {{{*/ + void GetParameterValue(bool* pbool){ _error_("Param "<< EnumToStringx(enum_type) << " cannot return a bool");} + void GetParameterValue(int* pinteger){_error_("Param "<< EnumToStringx(enum_type) << " cannot return an int");} + void GetParameterValue(int** pintarray,int* pM){_error_("Param "<< EnumToStringx(enum_type) << " cannot return an int array");} + void GetParameterValue(int** pintarray,int* pM,int* pN){_error_("Param "<< EnumToStringx(enum_type) << " cannot return an int array");} + void
GetParameterValue(IssmDouble* pIssmDouble){_error_("Param "<< EnumToStringx(enum_type) << " cannot return an IssmDouble");} + void GetParameterValue(IssmDouble* pdouble,IssmDouble time){_error_("Param "<< EnumToStringx(enum_type) << " cannot return an IssmDouble for a given time");} + void GetParameterValue(IssmDouble* pdouble,IssmDouble time, int timestepping, IssmDouble dt){_error_("Param "<< EnumToStringx(enum_type) << " cannot return an IssmDouble for a given time");} + void GetParameterValue(FILE** pfile){_error_("Param "<< EnumToStringx(enum_type) << " cannot return a file pointer");} + void GetParameterValue(char** pstring){_error_("Param "<< EnumToStringx(enum_type) << " cannot return a string");} + void GetParameterValue(char*** pstringarray,int* pM){_error_("Param "<< EnumToStringx(enum_type) << " cannot return a string array");} + void GetParameterValue(IssmDouble** pIssmDoublearray,int* pM){_error_("Param "<< EnumToStringx(enum_type) << " cannot return an IssmDouble array");} + void GetParameterValue(IssmDouble** pIssmDoublearray,int* pM, int* pN){_error_("Param "<< EnumToStringx(enum_type) << " cannot return an IssmDouble array");} + void GetParameterValue(IssmDouble** pIssmDoublearray,int* pM, const char* data){_error_("Param "<< EnumToStringx(enum_type) << " cannot return an IssmDouble array");} + void GetParameterValue(IssmDouble*** parray, int* pM,int** pmdims, int** pndims){_error_("DataSet param of enum " << enum_type << " (" << EnumToStringx(enum_type) << ") cannot return a matrix array");} + void GetParameterValue(Vector<IssmDouble>** pvec){_error_("Param "<< EnumToStringx(enum_type) << " cannot return a Vec");} + void GetParameterValue(Matrix<IssmDouble>** pmat){_error_("Param "<< EnumToStringx(enum_type) << " cannot return a Mat");} + void GetParameterValue(DataSet** pdataset){_error_("Param "<< EnumToStringx(enum_type) << " cannot return a Dataset");} + int InstanceEnum(){return enum_type;} + + void SetEnum(int enum_in){this->enum_type = enum_in;}; + void SetValue(bool
boolean){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold a bool");} + void SetValue(int integer){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold an int");} + void SetValue(IssmDouble scalar){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold an IssmDouble");} + void SetValue(char* string){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold a string");} + void SetValue(FILE* fid){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold a file pointer");} + void SetValue(char** stringarray,int M){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold a string array");} + void SetValue(IssmDouble* IssmDoublearray){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold an IssmDouble array");} + void SetValue(IssmDouble* IssmDoublearray,int M){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold an IssmDouble array");} + void SetValue(IssmDouble* pIssmDoublearray,int M,int N){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold an IssmDouble array");} + void SetValue(int* intarray,int M){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold an int array");} + void SetValue(int* pintarray,int M,int N){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold an int array");} + void SetValue(Vector<IssmDouble>* vec){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold a Vec");} + void SetValue(Matrix<IssmDouble>* mat){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold a Mat");} + void SetValue(DataSet* dataset){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold a Dataset");} + void SetValue(IssmDouble** array, int M, int* mdim_array, int* ndim_array){_error_("DataSet param of enum " << enum_type << " (" << EnumToStringx(enum_type) << ") cannot hold an array of matrices");} + void SetGradient(IssmDouble* poutput, int M, int N){_error_("Param "<< EnumToStringx(enum_type) << " cannot hold an IssmDouble");}; + /*}}}*/ +}; +#endif diff --git a/src/c/classes/Regionaloutput.cpp
b/src/c/classes/Regionaloutput.cpp index caa15e86e..e9fd48b7a 100644 --- a/src/c/classes/Regionaloutput.cpp +++ b/src/c/classes/Regionaloutput.cpp @@ -139,6 +139,9 @@ IssmDouble Regionaloutput::Response(FemModel* femmodel){/*{{{*/ case TotalGroundedBmbScaledEnum: val_t+=element->TotalGroundedBmb(this->mask,true); break; + case TotalHydrologyBasalFluxEnum: + val_t+=element->TotalHydrologyBasalFlux(this->mask,false); + break; case TotalSmbEnum: val_t+=element->TotalSmb(this->mask,false); break; diff --git a/src/c/classes/Vertex.cpp b/src/c/classes/Vertex.cpp index 579d62877..e46b6375f 100644 --- a/src/c/classes/Vertex.cpp +++ b/src/c/classes/Vertex.cpp @@ -50,7 +50,9 @@ Vertex::Vertex(int vertex_id, int vertex_sid,bool vertex_clone, IoModel* iomodel switch(iomodel->domaintype){ case Domain3DEnum: _assert_(iomodel->Data("md.geometry.base") && iomodel->Data("md.geometry.thickness")); + _assert_(iomodel->Data("md.geometry.thickness")[vertex_sid]>0.); this->sigma = (iomodel->Data("md.mesh.z")[vertex_sid]-iomodel->Data("md.geometry.base")[vertex_sid])/(iomodel->Data("md.geometry.thickness")[vertex_sid]); + _assert_(!xIsNan(this->sigma)); break; case Domain3DsurfaceEnum: _assert_(iomodel->Data("md.mesh.lat") && iomodel->Data("md.mesh.long") && iomodel->Data("md.mesh.r")); @@ -186,7 +188,7 @@ void Vertex::UpdatePosition(Vector* vx,Vector* vy, return; case Domain2DverticalEnum: oldy = this->y; - newy = bed[this->pid]+sigma*(surface[this->pid] - bed[this->pid]); + newy = bed[this->pid]+this->sigma*(surface[this->pid] - bed[this->pid]); vely = (newy-oldy)/dt; this->y = newy; vy->SetValue(this->pid,vely,INS_VAL); @@ -194,7 +196,7 @@ void Vertex::UpdatePosition(Vector* vx,Vector* vy, return; case Domain3DEnum: oldz = this->z; - newz = bed[this->pid]+sigma*(surface[this->pid] - bed[this->pid]); + newz = bed[this->pid]+this->sigma*(surface[this->pid] - bed[this->pid]); velz = (newz-oldz)/dt; this->z = newz; vz->SetValue(this->pid,velz,INS_VAL); diff --git a/src/c/classes/classes.h 
b/src/c/classes/classes.h index 9ba3983a6..ba4af41d6 100644 --- a/src/c/classes/classes.h +++ b/src/c/classes/classes.h @@ -5,6 +5,12 @@ #ifndef _ALL_CLASSES_H_ #define _ALL_CLASSES_H_ +#ifdef HAVE_CONFIG_H + #include <config.h> +#else + #error "Cannot compile with HAVE_CONFIG_H symbol! run configure first!" +#endif + /*Objects: */ #include "./Contour.h" #include "./Vertices.h" @@ -111,6 +117,9 @@ #include "./Params/TransientArrayParam.h" #include "./Params/TransientGriddedFieldParam.h" #include "./Params/DataSetParam.h" +#ifdef _HAVE_PyBind11_ +#include "./Params/EmulatorParam.h" +#endif /*matrix: */ #include "./matrix/matrixobjects.h" diff --git a/src/c/cores/hydrology_core.cpp b/src/c/cores/hydrology_core.cpp index 2ae18aaac..493af1b4f 100644 --- a/src/c/cores/hydrology_core.cpp +++ b/src/c/cores/hydrology_core.cpp @@ -267,9 +267,19 @@ void hydrology_core(FemModel* femmodel){ /*{{{*/ analysis->UpdateSubglacialWaterPressure(femmodel); delete analysis; } + + /*Using the prescribed hydrology model*/ + else if (hydrology_model==HydrologyprescribeEnum){ + femmodel->SetCurrentConfiguration(HydrologyPrescribeAnalysisEnum); + if(VerboseSolution()) _printf0_(" updating effective pressure\n"); + HydrologyPrescribeAnalysis* analysis = new HydrologyPrescribeAnalysis(); + analysis->UpdateEffectivePressure(femmodel); + delete analysis; + } else{ _error_("Hydrology model "<< EnumToStringx(hydrology_model) <<" not supported yet"); } + if(save_results){ if(hydrology_model==HydrologydcEnum && ThawedNodes==0){ if(VerboseSolution()) _printf0_(" No thawed node hydro is skiped \n");} diff --git a/src/c/cores/sealevelchange_core.cpp b/src/c/cores/sealevelchange_core.cpp index ccacee65e..e508549bb 100644 --- a/src/c/cores/sealevelchange_core.cpp +++ b/src/c/cores/sealevelchange_core.cpp @@ -531,6 +531,7 @@ void coupleroutput_core(FemModel* femmodel){ /*{{{*/ femmodel->parameters->FindParam(&frequency,SolidearthSettingsRunFrequencyEnum); count++; + if(count>frequency) count=1;
femmodel->parameters->SetParam(count,SealevelchangeRunCountEnum); if(iscoupling){ diff --git a/src/c/cores/transient_core.cpp b/src/c/cores/transient_core.cpp index 65650ac1e..2ad17b7ec 100644 --- a/src/c/cores/transient_core.cpp +++ b/src/c/cores/transient_core.cpp @@ -24,6 +24,7 @@ void transient_core(FemModel* femmodel){/*{{{*/ /*parameters: */ IssmDouble finaltime,dt,yts; + bool save_final_results; bool iscontrol,isautodiff; int timestepping; int output_frequency,checkpoint_frequency; @@ -48,6 +49,7 @@ void transient_core(FemModel* femmodel){/*{{{*/ femmodel->parameters->FindParam(&amr_frequency,TransientAmrFrequencyEnum); femmodel->parameters->FindParam(&iscontrol,InversionIscontrolEnum); femmodel->parameters->FindParam(&isautodiff,AutodiffIsautodiffEnum); + femmodel->parameters->FindParam(&save_final_results,SaveFinalResultsEnum); /*call modules that are not dependent on time stepping:*/ transient_precore(femmodel); @@ -83,8 +85,9 @@ void transient_core(FemModel* femmodel){/*{{{*/ _printf0_("\e[92miteration " << step << "/" << ceil((finaltime-time)/dt)+step << \ " time [yr]: " <= finaltime - (yts*DBL_EPSILON)) || step==1) save_results=true; + const bool save_results = step==1 //save first step + || step%output_frequency==0 //save at regular intervals + || (save_final_results && time >= finaltime - (yts*DBL_EPSILON)); //save last step (optional) femmodel->parameters->SetParam(save_results,SaveResultsEnum); /*Run transient step!*/ @@ -174,13 +177,13 @@ void transient_step(FemModel* femmodel){/*{{{*/ femmodel->parameters->FindParam(&isenthalpy,ThermalIsenthalpyEnum); femmodel->parameters->FindParam(&smb_model,SmbEnum); if(isenthalpy){ - if(smb_model==SMBpddEnum || smb_model==SMBd18opddEnum || smb_model==SMBpddSicopolisEnum){ + if(smb_model==SMBpddEnum || smb_model==SMBd18opddEnum || smb_model==SMBpddSicopolisEnum || smb_model==SMBpddFastEnum){ femmodel->SetCurrentConfiguration(EnthalpyAnalysisEnum); ResetBoundaryConditions(femmodel,EnthalpyAnalysisEnum); } } 
else{ - if(smb_model==SMBpddEnum || smb_model==SMBd18opddEnum || smb_model==SMBpddSicopolisEnum){ + if(smb_model==SMBpddEnum || smb_model==SMBd18opddEnum || smb_model==SMBpddSicopolisEnum || smb_model==SMBpddFastEnum){ femmodel->SetCurrentConfiguration(ThermalAnalysisEnum); ResetBoundaryConditions(femmodel,ThermalAnalysisEnum); } diff --git a/src/c/modules/FloatingiceMeltingRatex/FloatingiceMeltingRatex.cpp b/src/c/modules/FloatingiceMeltingRatex/FloatingiceMeltingRatex.cpp index 40d25c839..3546e0abb 100644 --- a/src/c/modules/FloatingiceMeltingRatex/FloatingiceMeltingRatex.cpp +++ b/src/c/modules/FloatingiceMeltingRatex/FloatingiceMeltingRatex.cpp @@ -51,13 +51,17 @@ void FloatingiceMeltingRatex(FemModel* femmodel){/*{{{*/ FloatingiceMeltingRateIsmip6x(femmodel); break; case BeckmannGoosseFloatingMeltRateEnum: - if(VerboseSolution())_printf0_(" call BeckmannGoosse Floating melting rate module\n"); + if(VerboseSolution())_printf0_(" call BeckmannGoosse Floating melting rate module\n"); BeckmannGoosseFloatingiceMeltingRatex(femmodel); break; case LinearFloatingMeltRatearmaEnum: - if(VerboseSolution())_printf0_(" call Linear Floating melting rate ARMA module\n"); + if(VerboseSolution())_printf0_(" call Linear Floating melting rate ARMA module\n"); LinearFloatingiceMeltingRatearmax(femmodel); break; + case BasalforcingsIsmip7Enum: + if(VerboseSolution())_printf0_(" call ISMIP 7 Floating melting rate module\n"); + FloatingiceMeltingRateIsmip7x(femmodel); + break; default: _error_("Basal forcing model "<(tf_depths); } /*}}}*/ +void FloatingiceMeltingRateIsmip7x(FemModel* femmodel){/*{{{*/ + + IssmDouble time; + IssmDouble g; + IssmDouble* tf_depths=NULL; + int num_depths; + + femmodel->parameters->FindParam(&time,TimeEnum); + + femmodel->parameters->FindParam(&tf_depths,&num_depths,BasalforcingsIsmip7TfDepthsEnum); _assert_(tf_depths); + + /*Binary search works for vectors that are sorted in increasing order only, make depths positive*/ + 
//if(VerboseSolution())_printf0_("   ismip7: prepare binary search\n"); + for(int i=0;i<num_depths;i++) tf_depths[i]=-tf_depths[i]; + + for(Object* & object : femmodel->elements->objects){ + Element* element = xDynamicCast<Element*>(object); + int numvertices = element->GetNumberOfVertices(); + + /*Set melt to 0 if non floating*/ + if(!element->IsIceInElement() || !element->IsAllFloating() || !element->IsOnBase()){ + IssmDouble* values = xNewZeroInit<IssmDouble>(numvertices); + element->AddInput(BasalforcingsFloatingiceMeltingRateEnum,values,P1DGEnum); + element->AddInput(BasalforcingsIsmip7TfShelfEnum,values,P1DGEnum); + element->AddInput(BasalforcingsIsmip7SalinityShelfEnum,values,P1DGEnum); + xDelete<IssmDouble>(values); + continue; + } + + /*Get TF on all vertices*/ + IssmDouble* tf_test = xNew<IssmDouble>(numvertices); + IssmDouble* so_test = xNew<IssmDouble>(numvertices); + IssmDouble* depth_vertices = xNew<IssmDouble>(numvertices); + DatasetInput* tf_input = element->GetDatasetInput(BasalforcingsIsmip7TfEnum); _assert_(tf_input); + DatasetInput* so_input = element->GetDatasetInput(BasalforcingsIsmip7SalinityEnum); _assert_(so_input); + + element->GetInputListOnVertices(&depth_vertices[0],BaseEnum); + + Gauss* gauss=element->NewGauss(); + for(int iv=0;iv<numvertices;iv++){ + gauss->GaussVertex(iv); + + /*Find out where the ice shelf base is within tf_depths*/ + IssmDouble depth = -depth_vertices[iv]; /*NOTE: make sure we are dealing with depth>0*/ + int offset; + int found=binary_search(&offset,depth,tf_depths,num_depths); + if(!found) _error_("depth not found"); + + if (offset==-1){ + /*get values for the first depth: */ + _assert_(depth<=tf_depths[0]); + tf_input->GetInputValue(&tf_test[iv],gauss,0); + so_input->GetInputValue(&so_test[iv],gauss,0); + } + else if(offset==num_depths-1){ + /*get values for the last depth: */ + _assert_(depth>=tf_depths[num_depths-1]); + tf_input->GetInputValue(&tf_test[iv],gauss,num_depths-1); + so_input->GetInputValue(&so_test[iv],gauss,num_depths-1); + } + else { + /*get values between two depths [offset:offset+1], interpolate linearly*/ + _assert_(depth>=tf_depths[offset] && depth<tf_depths[offset+1]); + IssmDouble alpha2=(depth-tf_depths[offset])/(tf_depths[offset+1]-tf_depths[offset]); + IssmDouble alpha1=(1.-alpha2); + IssmDouble tf1,tf2,so1,so2; + tf_input->GetInputValue(&tf1,gauss,offset); + tf_input->GetInputValue(&tf2,gauss,offset+1); + tf_test[iv] = alpha1*tf1 + alpha2*tf2; + + so_input->GetInputValue(&so1,gauss,offset); + so_input->GetInputValue(&so2,gauss,offset+1); + so_test[iv] = alpha1*so1 + alpha2*so2; + } + } + + element->AddInput(BasalforcingsIsmip7TfShelfEnum,tf_test,P1DGEnum); + element->AddInput(BasalforcingsIsmip7SalinityShelfEnum,so_test,P1DGEnum); + xDelete<IssmDouble>(tf_test); + xDelete<IssmDouble>(so_test); + xDelete<IssmDouble>(depth_vertices); + delete gauss; + } + + /*Compute meltrates*/ + //if(VerboseSolution())_printf0_("   ismip7: compute melting rate\n"); + for(Object* & object : femmodel->elements->objects){ + Element* element = xDynamicCast<Element*>(object); + element->Ismip7FloatingiceMeltingRate(); + } + + /*Cleanup and return */ + xDelete<IssmDouble>(tf_depths); +} void BeckmannGoosseFloatingiceMeltingRatex(FemModel* femmodel){/*{{{*/ for(Object* & object : femmodel->elements->objects){ diff --git a/src/c/modules/FloatingiceMeltingRatex/FloatingiceMeltingRatex.h b/src/c/modules/FloatingiceMeltingRatex/FloatingiceMeltingRatex.h index ee93e5c37..ae1d72ba4 100644 --- a/src/c/modules/FloatingiceMeltingRatex/FloatingiceMeltingRatex.h +++ b/src/c/modules/FloatingiceMeltingRatex/FloatingiceMeltingRatex.h @@ -17,5 +17,6 @@ void MismipFloatingiceMeltingRatex(FemModel* femmodel); void FloatingiceMeltingRateIsmip6x(FemModel* femmodel); void BeckmannGoosseFloatingiceMeltingRatex(FemModel* femmodel); void LinearFloatingiceMeltingRatearmax(FemModel* femmodel); +void FloatingiceMeltingRateIsmip7x(FemModel* femmodel); #endif /* _FloatingiceMeltingRatex_H*/ diff --git a/src/c/modules/ModelProcessorx/CreateParameters.cpp b/src/c/modules/ModelProcessorx/CreateParameters.cpp index 134ea5417..66a7f990b 100644 --- a/src/c/modules/ModelProcessorx/CreateParameters.cpp +++ b/src/c/modules/ModelProcessorx/CreateParameters.cpp @@ -286,6 +286,14 @@ void CreateParameters(Parameters* parameters,IoModel* iomodel,char* rootpath,FIL parameters->AddObject(new
DoubleVecParam(BasalforcingsDeepwaterElevationEnum,transparam,N)); xDelete<IssmDouble>(transparam); break; + case BasalforcingsIsmip7Enum: + parameters->AddObject(iomodel->CopyConstantObject("md.basalforcings.num_basins",BasalforcingsIsmip7NumBasinsEnum)); + parameters->AddObject(iomodel->CopyConstantObject("md.basalforcings.gamma",BasalforcingsIsmip7GammaEnum)); + iomodel->FetchData(&transparam,&M,&N,"md.basalforcings.tf_depths"); + parameters->AddObject(new DoubleVecParam(BasalforcingsIsmip7TfDepthsEnum,transparam,N)); + xDelete<IssmDouble>(transparam); + break; + default: _error_("Basal forcing model "<AddObject(new BoolParam(SaveResultsEnum,true)); + + /*Option to not save results after the final time step, e.g. for external coupling*/ + parameters->AddObject(new BoolParam(SaveFinalResultsEnum,true)); /*Should we output results on nodes?*/ iomodel->FindConstant(&outputonnodes,&numoutputs,"md.settings.results_on_nodes"); @@ -484,6 +495,9 @@ parameters->AddObject(new DoubleMatParam(HydrologyarmaMonthlyFactorsEnum,transparam,M,N)); xDelete<IssmDouble>(transparam); } + else if(hydrology_model==HydrologyprescribeEnum){ + /*Nothing to add*/ + } else{ _error_("Hydrology model "<PositiveDegreeDaySicopolis(isfirnwarming); } +}/*}}}*/ +void PositiveDegreeDayFastx(FemModel* femmodel){/*{{{*/ + + bool isfirnwarming; + femmodel->parameters->FindParam(&isfirnwarming,SmbIsfirnwarmingEnum); + + for(Object* & object : femmodel->elements->objects){ + Element* element=xDynamicCast<Element*>(object); + element->PositiveDegreeDayFast(isfirnwarming); + } + }/*}}}*/ void PositiveDegreeDayGCMx(FemModel* femmodel){/*{{{*/ IssmDouble* x = NULL; diff --git a/src/c/modules/SurfaceMassBalancex/SurfaceMassBalancex.h b/src/c/modules/SurfaceMassBalancex/SurfaceMassBalancex.h index 5619c1906..7a16079fc 100644 --- a/src/c/modules/SurfaceMassBalancex/SurfaceMassBalancex.h +++ b/src/c/modules/SurfaceMassBalancex/SurfaceMassBalancex.h @@ -18,6 +18,7 @@
void MungsmtpParameterizationx(FemModel* femmodel); void Delta18opdParameterizationx(FemModel* femmodel); void PositiveDegreeDayx(FemModel* femmodel); void PositiveDegreeDaySicopolisx(FemModel* femmodel); +void PositiveDegreeDayFastx(FemModel* femmodel); void PositiveDegreeDayGCMx(FemModel* femmodel); void SmbHenningx(FemModel* femmodel); void SmbComponentsx(FemModel* femmodel); diff --git a/src/c/shared/Elements/PddSurfaceMassBalanceFast.cpp b/src/c/shared/Elements/PddSurfaceMassBalanceFast.cpp new file mode 100644 index 000000000..6d54a792f --- /dev/null +++ b/src/c/shared/Elements/PddSurfaceMassBalanceFast.cpp @@ -0,0 +1,135 @@ +/* file: PddSurfaceMassBalanceFast.cpp + Calculating the surface mass balance using the adapted PDD routine from SICOPOLIS, modified by Alicia + by borrowing code from Lev's original PDD calculation + */ + +#include "./elements.h" +#include "../Numerics/numerics.h" +#include "../Exceptions/exceptions.h" +#include <cmath> + +IssmDouble PddSurfaceMassBalanceFast(IssmDouble* monthlytemperatures, IssmDouble* monthlyprec, + IssmDouble* melt, IssmDouble* accu, IssmDouble* melt_star, IssmDouble* t_ampl, IssmDouble* p_ampl, + IssmDouble yts, IssmDouble s, IssmDouble desfac, + IssmDouble s0t, IssmDouble s0p, IssmDouble rlaps, + IssmDouble rho_water,IssmDouble rho_ice,IssmDouble pdd_fac_ice,IssmDouble pdd_fac_snow){ + + int imonth; // month counter + IssmDouble B; // output: surface mass balance (m/a IE), melt+accumulation + IssmDouble frac_solid, snowfall, rainfall, runoff; + IssmDouble saccu; // yearly surface accumulation (m/a IE) + IssmDouble smelt; // yearly melt (m/a IE) + IssmDouble smelt_star; // yearly ... + IssmDouble precip; // total precipitation during 1 year + IssmDouble sconv; //rhow_rain/rhoi / 12 months + IssmDouble st; // elevation difference between altitude of the temp record and current altitude + IssmDouble sp; // elevation difference between altitude of the prec record and current altitude + IssmDouble q; // q is desert/elev.
fact + IssmDouble pdd; // pdd factor (a * degC) + IssmDouble tstar; // monthly temp. after lapse rate correction (degC) + IssmDouble precip_star; // monthly precip after correction (m/a IE) + IssmDouble Pmax = 0.6; + IssmDouble inv_twelve=1./12.; + + sconv=(rho_water/rho_ice); //rhow_rain/rhoi + + pdd_fac_snow=pdd_fac_snow*(0.001*365)*sconv; // (mm WE)/(d*deg C) --> (m IE)/(a*deg C) + pdd_fac_ice=pdd_fac_ice*(0.001*365)*sconv; // (mm WE)/(d*deg C) --> (m IE)/(a*deg C) + + /* initialize fields */ + precip=0.0; + tstar=0.0; + snowfall=0.0; + pdd=0.0; + /* seasonal loop */ + for(imonth=0;imonth<12;imonth++){ + + /********* Surface temperature correction *******/ + st=(s-s0t)/1000.; + + /******** Monthly temperature correction *******/ + monthlytemperatures[imonth]=monthlytemperatures[imonth]-rlaps*st;//*max(st,1e-3); + tstar=monthlytemperatures[imonth]+t_ampl[0]; + + /********* Precipitation correction *************/ + /* Ref: Vizcaino et al 2010; DOI 10.1007/s00382-009-0591-y */ + if(s0p<2000.0) + q=exp(desfac*(max(s,2000.0)-2000.0)); + else + q=exp(desfac*(max(s,2000.0)-s0p)); + + precip_star=q*monthlyprec[imonth]*sconv*p_ampl[0]*yts; // convert precip from m/s -> m/a + precip=precip+precip_star*inv_twelve; + + /********* compute PDD **************************/ + /* Ref: Calov & Greve 2005 Journal of Glaciology, Vol. 51, No. 172, 2005, Correspondence */ + IssmDouble s_stat=5.0; + IssmDouble inv_sqrt2pi =1.0/sqrt(2.0*PI); + IssmDouble inv_s_stat =1.0/s_stat; + IssmDouble inv_sqrt2 =1.0/sqrt(2.0); + + #if !defined(_HAVE_ADOLC_) + pdd=pdd+(s_stat*inv_sqrt2pi*exp(-0.5*pow(tstar*inv_s_stat,2)) + +0.5*tstar*erfc(-tstar*inv_s_stat*inv_sqrt2))*inv_twelve; + #else + _error_("Cannot differentiate erfc, talk to ADOLC folks (http://functions.wolfram.com/GammaBetaErf/Erfc/20/01/)"); + #endif + + /*Partition of precip in solid and liquid parts, Bales et al.
(2009) */ + IssmDouble temp_rain=7.2; // Threshold monthly mean temperature for + // precipitation = 100% rain, in deg C + IssmDouble temp_snow=-11.6; // Threshold monthly mean temperature for + // precipitation = 100% snow, in deg C + + IssmDouble coeff1=5.4714e-01; // Coefficients + IssmDouble coeff2=-9.1603e-02; // of + IssmDouble coeff3=-3.314e-03; // the + IssmDouble coeff4= 4.66e-04; // fifth-order + IssmDouble coeff5=3.8e-05; // polynomial + IssmDouble coeff6=6.0e-07; // fit + + if(tstar>=temp_rain) + frac_solid = 0.0; + else if(tstar<=temp_snow) + frac_solid = 1.0; + else{ + frac_solid=coeff1+tstar*(coeff2 + +tstar*(coeff3+tstar*(coeff4+tstar*(coeff5+tstar*coeff6)))); + } + + snowfall=snowfall+precip_star*frac_solid*inv_twelve; + } + /* end of seasonal loop */ + + rainfall=precip-snowfall; + if(snowfall<0.0) snowfall=0.0; // correction of + if(rainfall<0.0) rainfall=0.0; // negative values + + if(rainfall<=(Pmax*snowfall)){ + if((rainfall+pdd_fac_snow*pdd)<=(Pmax*snowfall)) { + smelt_star = rainfall+pdd_fac_snow*pdd; + smelt = 0.0; + runoff = smelt; + } + else{ + smelt_star = Pmax*snowfall; + smelt = pdd_fac_ice*(pdd-(smelt_star-rainfall)/pdd_fac_snow); + runoff = smelt; + } + } + else{ + smelt_star = Pmax*snowfall; + smelt = pdd_fac_ice*pdd; + runoff = smelt+rainfall-Pmax*snowfall; + } + + saccu = precip; + + /* assign output*/ + melt[0]=runoff/yts; + accu[0]=saccu/yts; + melt_star[0]=smelt_star/yts; + B=(saccu-runoff)/yts; + + return B; +} diff --git a/src/c/shared/Elements/elements.h b/src/c/shared/Elements/elements.h index f11a8677b..9fd23ccbe 100644 --- a/src/c/shared/Elements/elements.h +++ b/src/c/shared/Elements/elements.h @@ -29,6 +29,10 @@ IssmDouble PddSurfaceMassBalanceSicopolis(IssmDouble* monthlytemperatures, Issm IssmDouble* melt, IssmDouble* accu, IssmDouble* melt_star, IssmDouble* t_ampl, IssmDouble* p_ampl, IssmDouble yts, IssmDouble s, IssmDouble desfac,IssmDouble s0t, IssmDouble s0p, IssmDouble rlaps, IssmDouble rho_water, IssmDouble
rho_ice, IssmDouble pdd_fac_ice, IssmDouble pdd_fac_snow); +IssmDouble PddSurfaceMassBalanceFast(IssmDouble* monthlytemperatures, IssmDouble* monthlyprec, + IssmDouble* melt, IssmDouble* accu, IssmDouble* melt_star, IssmDouble* t_ampl, IssmDouble* p_ampl, + IssmDouble yts, IssmDouble s, IssmDouble desfac,IssmDouble s0t, + IssmDouble s0p, IssmDouble rlaps, IssmDouble rho_water, IssmDouble rho_ice, IssmDouble pdd_fac_ice, IssmDouble pdd_fac_snow); void ComputeDelta18oTemperaturePrecipitation(IssmDouble Delta18oSurfacePresent, IssmDouble Delta18oSurfaceLgm, IssmDouble Delta18oSurfaceTime, IssmDouble Delta18oPresent, IssmDouble Delta18oLgm, IssmDouble Delta18oTime, IssmDouble* PrecipitationsPresentday, diff --git a/src/c/shared/Enum/Enum.vim b/src/c/shared/Enum/Enum.vim index bb95a37d1..cb530e0f7 100644 --- a/src/c/shared/Enum/Enum.vim +++ b/src/c/shared/Enum/Enum.vim @@ -85,6 +85,9 @@ syn keyword cConstant BasalforcingsIsmip6Gamma0Enum syn keyword cConstant BasalforcingsIsmip6IsLocalEnum syn keyword cConstant BasalforcingsIsmip6NumBasinsEnum syn keyword cConstant BasalforcingsIsmip6TfDepthsEnum +syn keyword cConstant BasalforcingsIsmip7TfDepthsEnum +syn keyword cConstant BasalforcingsIsmip7NumBasinsEnum +syn keyword cConstant BasalforcingsIsmip7GammaEnum syn keyword cConstant BasalforcingsLinearNumBasinsEnum syn keyword cConstant BasalforcingsLinearNumBreaksEnum syn keyword cConstant BasalforcingsLinearNumParamsEnum @@ -466,6 +469,7 @@ syn keyword cConstant SamplingRequestedOutputsEnum syn keyword cConstant SamplingRobinEnum syn keyword cConstant SamplingSeedEnum syn keyword cConstant SaveResultsEnum +syn keyword cConstant SaveFinalResultsEnum syn keyword cConstant SolidearthPartitionIceEnum syn keyword cConstant SolidearthPartitionHydroEnum syn keyword cConstant SolidearthPartitionOceanEnum @@ -774,6 +778,7 @@ syn keyword cConstant BalancethicknessOmegaEnum syn keyword cConstant BalancethicknessSpcthicknessEnum syn keyword cConstant BalancethicknessThickeningRateEnum 
syn keyword cConstant BasalCrevasseEnum +syn keyword cConstant BasalforcingsCoriolisFEnum syn keyword cConstant BasalforcingsDeepwaterMeltingRatearmaEnum syn keyword cConstant BasalforcingsDeepwaterMeltingRateNoiseEnum syn keyword cConstant BasalforcingsDeepwaterMeltingRateValuesAutoregressionEnum @@ -793,6 +798,10 @@ syn keyword cConstant BasalforcingsIsmip6BasinIdEnum syn keyword cConstant BasalforcingsIsmip6TfEnum syn keyword cConstant BasalforcingsIsmip6TfShelfEnum syn keyword cConstant BasalforcingsIsmip6MeltAnomalyEnum +syn keyword cConstant BasalforcingsIsmip7TfEnum +syn keyword cConstant BasalforcingsIsmip7TfShelfEnum +syn keyword cConstant BasalforcingsIsmip7SalinityEnum +syn keyword cConstant BasalforcingsIsmip7SalinityShelfEnum syn keyword cConstant BasalforcingsMeltrateFactorEnum syn keyword cConstant BasalforcingsOceanSalinityEnum syn keyword cConstant BasalforcingsOceanTempEnum @@ -1188,6 +1197,7 @@ syn keyword cConstant SmbHrefEnum syn keyword cConstant SmbIsInitializedEnum syn keyword cConstant SmbMAddEnum syn keyword cConstant SmbMappedforcingpointEnum +syn keyword cConstant SmbMappedforcingprecipscalingEnum syn keyword cConstant SmbMassBalanceEnum syn keyword cConstant SmbMassBalanceSnowEnum syn keyword cConstant SmbMassBalanceIceEnum @@ -3392,6 +3402,7 @@ syn keyword cConstant BalancethicknessSolutionEnum syn keyword cConstant BalancevelocityAnalysisEnum syn keyword cConstant BalancevelocitySolutionEnum syn keyword cConstant BasalforcingsIsmip6Enum +syn keyword cConstant BasalforcingsIsmip7Enum syn keyword cConstant BasalforcingsPicoEnum syn keyword cConstant BeckmannGoosseFloatingMeltRateEnum syn keyword cConstant BedSlopeSolutionEnum @@ -3465,6 +3476,7 @@ syn keyword cConstant DoubleVecParamEnum syn keyword cConstant ElementEnum syn keyword cConstant ElementHookEnum syn keyword cConstant ElementSIdEnum +syn keyword cConstant EmulatorParamEnum syn keyword cConstant EnthalpyAnalysisEnum syn keyword cConstant EsaAnalysisEnum syn keyword cConstant 
EsaSolutionEnum @@ -3486,6 +3498,7 @@ syn keyword cConstant FloatingMeltRateEnum syn keyword cConstant FreeEnum syn keyword cConstant FreeSurfaceBaseAnalysisEnum syn keyword cConstant FreeSurfaceTopAnalysisEnum +syn keyword cConstant FrictionEmulatorEnum syn keyword cConstant FrontalForcingsDefaultEnum syn keyword cConstant FrontalForcingsRignotEnum syn keyword cConstant FrontalForcingsRignotarmaEnum @@ -3521,6 +3534,7 @@ syn keyword cConstant HydrologyGlaDSEnum syn keyword cConstant HydrologyPismAnalysisEnum syn keyword cConstant HydrologyShaktiAnalysisEnum syn keyword cConstant HydrologyShreveAnalysisEnum +syn keyword cConstant HydrologyPrescribeAnalysisEnum syn keyword cConstant HydrologySolutionEnum syn keyword cConstant HydrologySubstepsEnum syn keyword cConstant HydrologySubTimeEnum @@ -3528,6 +3542,7 @@ syn keyword cConstant HydrologydcEnum syn keyword cConstant HydrologypismEnum syn keyword cConstant HydrologyshaktiEnum syn keyword cConstant HydrologyshreveEnum +syn keyword cConstant HydrologyprescribeEnum syn keyword cConstant IceMassEnum syn keyword cConstant IceMassScaledEnum syn keyword cConstant IceVolumeAboveFloatationEnum @@ -3689,6 +3704,7 @@ syn keyword cConstant SMBmeltcomponentsEnum syn keyword cConstant SMBpddEnum syn keyword cConstant SMBpddSicopolisEnum syn keyword cConstant SMBpddGCMEnum +syn keyword cConstant SMBpddFastEnum syn keyword cConstant SMBsemicEnum syn keyword cConstant SSAApproximationEnum syn keyword cConstant SSAFSApproximationEnum @@ -3748,6 +3764,7 @@ syn keyword cConstant TotalFloatingBmbEnum syn keyword cConstant TotalFloatingBmbScaledEnum syn keyword cConstant TotalGroundedBmbEnum syn keyword cConstant TotalGroundedBmbScaledEnum +syn keyword cConstant TotalHydrologyBasalFluxEnum syn keyword cConstant TotalSmbEnum syn keyword cConstant TotalSmbScaledEnum syn keyword cConstant TotalSmbRefreezeEnum @@ -3808,6 +3825,7 @@ syn keyword cType Cfsurfacelogvel syn keyword cType Cfsurfacesquare syn keyword cType 
Cfsurfacesquaretransient syn keyword cType Channel +syn keyword cType classes syn keyword cType Constraint syn keyword cType Constraints syn keyword cType Contour @@ -3815,8 +3833,8 @@ syn keyword cType Contours syn keyword cType ControlInput syn keyword cType ControlParam syn keyword cType Covertree -syn keyword cType DataSetParam syn keyword cType DatasetInput +syn keyword cType DataSetParam syn keyword cType Definition syn keyword cType DependentObject syn keyword cType DoubleInput @@ -3829,19 +3847,21 @@ syn keyword cType Element syn keyword cType ElementHook syn keyword cType ElementInput syn keyword cType ElementMatrix -syn keyword cType ElementVector syn keyword cType Elements +syn keyword cType ElementVector +syn keyword cType EmulatorParam syn keyword cType ExponentialVariogram syn keyword cType ExternalResult syn keyword cType FemModel syn keyword cType FileParam syn keyword cType Friction syn keyword cType Gauss +syn keyword cType GaussianVariogram +syn keyword cType gaussobjects syn keyword cType GaussPenta syn keyword cType GaussSeg syn keyword cType GaussTetra syn keyword cType GaussTria -syn keyword cType GaussianVariogram syn keyword cType GenericExternalResult syn keyword cType GenericOption syn keyword cType GenericParam @@ -3858,6 +3878,7 @@ syn keyword cType IntVecParam syn keyword cType IoModel syn keyword cType IssmDirectApplicInterface syn keyword cType IssmParallelDirectApplicInterface +syn keyword cType krigingobjects syn keyword cType Load syn keyword cType Loads syn keyword cType Masscon @@ -3868,6 +3889,7 @@ syn keyword cType Materials syn keyword cType Matestar syn keyword cType Matice syn keyword cType Matlitho +syn keyword cType matrixobjects syn keyword cType MatrixParam syn keyword cType Misfit syn keyword cType Moulin @@ -3894,13 +3916,13 @@ syn keyword cType Quadtree syn keyword cType Radar syn keyword cType Regionaloutput syn keyword cType Results -syn keyword cType RiftStruct syn keyword cType Riftfront +syn keyword cType 
RiftStruct syn keyword cType SealevelGeometry syn keyword cType Seg syn keyword cType SegInput -syn keyword cType SegRef syn keyword cType Segment +syn keyword cType SegRef syn keyword cType SpcDynamic syn keyword cType SpcStatic syn keyword cType SpcTransient @@ -3921,10 +3943,6 @@ syn keyword cType Variogram syn keyword cType VectorParam syn keyword cType Vertex syn keyword cType Vertices -syn keyword cType classes -syn keyword cType gaussobjects -syn keyword cType krigingobjects -syn keyword cType matrixobjects syn keyword cType AdjointBalancethickness2Analysis syn keyword cType AdjointBalancethicknessAnalysis syn keyword cType AdjointHorizAnalysis @@ -3951,6 +3969,7 @@ syn keyword cType HydrologyDCEfficientAnalysis syn keyword cType HydrologyDCInefficientAnalysis syn keyword cType HydrologyGlaDSAnalysis syn keyword cType HydrologyPismAnalysis +syn keyword cType HydrologyPrescribeAnalysis syn keyword cType HydrologyShaktiAnalysis syn keyword cType HydrologyShreveAnalysis syn keyword cType HydrologyTwsAnalysis diff --git a/src/c/shared/Enum/EnumDefinitions.h b/src/c/shared/Enum/EnumDefinitions.h index f941e36b5..2037722d7 100644 --- a/src/c/shared/Enum/EnumDefinitions.h +++ b/src/c/shared/Enum/EnumDefinitions.h @@ -79,6 +79,9 @@ enum definitions{ BasalforcingsIsmip6IsLocalEnum, BasalforcingsIsmip6NumBasinsEnum, BasalforcingsIsmip6TfDepthsEnum, + BasalforcingsIsmip7TfDepthsEnum, + BasalforcingsIsmip7NumBasinsEnum, + BasalforcingsIsmip7GammaEnum, BasalforcingsLinearNumBasinsEnum, BasalforcingsLinearNumBreaksEnum, BasalforcingsLinearNumParamsEnum, @@ -460,6 +463,7 @@ enum definitions{ SamplingRobinEnum, SamplingSeedEnum, SaveResultsEnum, + SaveFinalResultsEnum, SolidearthPartitionIceEnum, SolidearthPartitionHydroEnum, SolidearthPartitionOceanEnum, @@ -770,6 +774,7 @@ enum definitions{ BalancethicknessSpcthicknessEnum, BalancethicknessThickeningRateEnum, BasalCrevasseEnum, + BasalforcingsCoriolisFEnum, BasalforcingsDeepwaterMeltingRatearmaEnum, 
 BasalforcingsDeepwaterMeltingRateNoiseEnum,
 BasalforcingsDeepwaterMeltingRateValuesAutoregressionEnum,
@@ -789,6 +794,10 @@ enum definitions{
 BasalforcingsIsmip6TfEnum,
 BasalforcingsIsmip6TfShelfEnum,
 BasalforcingsIsmip6MeltAnomalyEnum,
+ BasalforcingsIsmip7TfEnum,
+ BasalforcingsIsmip7TfShelfEnum,
+ BasalforcingsIsmip7SalinityEnum,
+ BasalforcingsIsmip7SalinityShelfEnum,
 BasalforcingsMeltrateFactorEnum,
 BasalforcingsOceanSalinityEnum,
 BasalforcingsOceanTempEnum,
@@ -1184,6 +1193,7 @@ enum definitions{
 SmbIsInitializedEnum,
 SmbMAddEnum,
 SmbMappedforcingpointEnum,
+ SmbMappedforcingprecipscalingEnum,
 SmbMassBalanceEnum,
 SmbMassBalanceSnowEnum,
 SmbMassBalanceIceEnum,
@@ -3391,6 +3401,7 @@ enum definitions{
 BalancevelocityAnalysisEnum,
 BalancevelocitySolutionEnum,
 BasalforcingsIsmip6Enum,
+ BasalforcingsIsmip7Enum,
 BasalforcingsPicoEnum,
 BeckmannGoosseFloatingMeltRateEnum,
 BedSlopeSolutionEnum,
@@ -3464,6 +3475,7 @@ enum definitions{
 ElementEnum,
 ElementHookEnum,
 ElementSIdEnum,
+ EmulatorParamEnum,
 EnthalpyAnalysisEnum,
 EsaAnalysisEnum,
 EsaSolutionEnum,
@@ -3485,6 +3497,7 @@ enum definitions{
 FreeEnum,
 FreeSurfaceBaseAnalysisEnum,
 FreeSurfaceTopAnalysisEnum,
+ FrictionEmulatorEnum,
 FrontalForcingsDefaultEnum,
 FrontalForcingsRignotEnum,
 FrontalForcingsRignotarmaEnum,
@@ -3520,6 +3533,7 @@ enum definitions{
 HydrologyPismAnalysisEnum,
 HydrologyShaktiAnalysisEnum,
 HydrologyShreveAnalysisEnum,
+ HydrologyPrescribeAnalysisEnum,
 HydrologySolutionEnum,
 HydrologySubstepsEnum,
 HydrologySubTimeEnum,
@@ -3527,6 +3541,7 @@ enum definitions{
 HydrologypismEnum,
 HydrologyshaktiEnum,
 HydrologyshreveEnum,
+ HydrologyprescribeEnum,
 IceMassEnum,
 IceMassScaledEnum,
 IceVolumeAboveFloatationEnum,
@@ -3688,6 +3703,7 @@ enum definitions{
 SMBpddEnum,
 SMBpddSicopolisEnum,
 SMBpddGCMEnum,
+ SMBpddFastEnum,
 SMBsemicEnum,
 SSAApproximationEnum,
 SSAFSApproximationEnum,
@@ -3747,6 +3763,7 @@ enum definitions{
 TotalFloatingBmbScaledEnum,
 TotalGroundedBmbEnum,
 TotalGroundedBmbScaledEnum,
+ TotalHydrologyBasalFluxEnum,
 TotalSmbEnum,
 TotalSmbScaledEnum,
 TotalSmbRefreezeEnum,
diff --git a/src/c/shared/Enum/EnumToStringx.cpp b/src/c/shared/Enum/EnumToStringx.cpp
index 14e78bb32..9deac1eab 100644
--- a/src/c/shared/Enum/EnumToStringx.cpp
+++ b/src/c/shared/Enum/EnumToStringx.cpp
@@ -87,6 +87,9 @@ const char* EnumToStringx(int en){
 case BasalforcingsIsmip6IsLocalEnum : return "BasalforcingsIsmip6IsLocal";
 case BasalforcingsIsmip6NumBasinsEnum : return "BasalforcingsIsmip6NumBasins";
 case BasalforcingsIsmip6TfDepthsEnum : return "BasalforcingsIsmip6TfDepths";
+ case BasalforcingsIsmip7TfDepthsEnum : return "BasalforcingsIsmip7TfDepths";
+ case BasalforcingsIsmip7NumBasinsEnum : return "BasalforcingsIsmip7NumBasins";
+ case BasalforcingsIsmip7GammaEnum : return "BasalforcingsIsmip7Gamma";
 case BasalforcingsLinearNumBasinsEnum : return "BasalforcingsLinearNumBasins";
 case BasalforcingsLinearNumBreaksEnum : return "BasalforcingsLinearNumBreaks";
 case BasalforcingsLinearNumParamsEnum : return "BasalforcingsLinearNumParams";
@@ -468,6 +471,7 @@ const char* EnumToStringx(int en){
 case SamplingRobinEnum : return "SamplingRobin";
 case SamplingSeedEnum : return "SamplingSeed";
 case SaveResultsEnum : return "SaveResults";
+ case SaveFinalResultsEnum : return "SaveFinalResults";
 case SolidearthPartitionIceEnum : return "SolidearthPartitionIce";
 case SolidearthPartitionHydroEnum : return "SolidearthPartitionHydro";
 case SolidearthPartitionOceanEnum : return "SolidearthPartitionOcean";
@@ -776,6 +780,7 @@ const char* EnumToStringx(int en){
 case BalancethicknessSpcthicknessEnum : return "BalancethicknessSpcthickness";
 case BalancethicknessThickeningRateEnum : return "BalancethicknessThickeningRate";
 case BasalCrevasseEnum : return "BasalCrevasse";
+ case BasalforcingsCoriolisFEnum : return "BasalforcingsCoriolisF";
 case BasalforcingsDeepwaterMeltingRatearmaEnum : return "BasalforcingsDeepwaterMeltingRatearma";
 case BasalforcingsDeepwaterMeltingRateNoiseEnum : return "BasalforcingsDeepwaterMeltingRateNoise";
 case BasalforcingsDeepwaterMeltingRateValuesAutoregressionEnum : return "BasalforcingsDeepwaterMeltingRateValuesAutoregression";
@@ -795,6 +800,10 @@ const char* EnumToStringx(int en){
 case BasalforcingsIsmip6TfEnum : return "BasalforcingsIsmip6Tf";
 case BasalforcingsIsmip6TfShelfEnum : return "BasalforcingsIsmip6TfShelf";
 case BasalforcingsIsmip6MeltAnomalyEnum : return "BasalforcingsIsmip6MeltAnomaly";
+ case BasalforcingsIsmip7TfEnum : return "BasalforcingsIsmip7Tf";
+ case BasalforcingsIsmip7TfShelfEnum : return "BasalforcingsIsmip7TfShelf";
+ case BasalforcingsIsmip7SalinityEnum : return "BasalforcingsIsmip7Salinity";
+ case BasalforcingsIsmip7SalinityShelfEnum : return "BasalforcingsIsmip7SalinityShelf";
 case BasalforcingsMeltrateFactorEnum : return "BasalforcingsMeltrateFactor";
 case BasalforcingsOceanSalinityEnum : return "BasalforcingsOceanSalinity";
 case BasalforcingsOceanTempEnum : return "BasalforcingsOceanTemp";
@@ -1190,6 +1199,7 @@ const char* EnumToStringx(int en){
 case SmbIsInitializedEnum : return "SmbIsInitialized";
 case SmbMAddEnum : return "SmbMAdd";
 case SmbMappedforcingpointEnum : return "SmbMappedforcingpoint";
+ case SmbMappedforcingprecipscalingEnum : return "SmbMappedforcingprecipscaling";
 case SmbMassBalanceEnum : return "SmbMassBalance";
 case SmbMassBalanceSnowEnum : return "SmbMassBalanceSnow";
 case SmbMassBalanceIceEnum : return "SmbMassBalanceIce";
@@ -3394,6 +3404,7 @@ const char* EnumToStringx(int en){
 case BalancevelocityAnalysisEnum : return "BalancevelocityAnalysis";
 case BalancevelocitySolutionEnum : return "BalancevelocitySolution";
 case BasalforcingsIsmip6Enum : return "BasalforcingsIsmip6";
+ case BasalforcingsIsmip7Enum : return "BasalforcingsIsmip7";
 case BasalforcingsPicoEnum : return "BasalforcingsPico";
 case BeckmannGoosseFloatingMeltRateEnum : return "BeckmannGoosseFloatingMeltRate";
 case BedSlopeSolutionEnum : return "BedSlopeSolution";
@@ -3467,6 +3478,7 @@ const char* EnumToStringx(int en){
 case ElementEnum : return "Element";
 case ElementHookEnum : return "ElementHook";
 case ElementSIdEnum : return "ElementSId";
+ case EmulatorParamEnum : return "EmulatorParam";
 case EnthalpyAnalysisEnum : return "EnthalpyAnalysis";
 case EsaAnalysisEnum : return "EsaAnalysis";
 case EsaSolutionEnum : return "EsaSolution";
@@ -3488,6 +3500,7 @@ const char* EnumToStringx(int en){
 case FreeEnum : return "Free";
 case FreeSurfaceBaseAnalysisEnum : return "FreeSurfaceBaseAnalysis";
 case FreeSurfaceTopAnalysisEnum : return "FreeSurfaceTopAnalysis";
+ case FrictionEmulatorEnum : return "FrictionEmulator";
 case FrontalForcingsDefaultEnum : return "FrontalForcingsDefault";
 case FrontalForcingsRignotEnum : return "FrontalForcingsRignot";
 case FrontalForcingsRignotarmaEnum : return "FrontalForcingsRignotarma";
@@ -3523,6 +3536,7 @@ const char* EnumToStringx(int en){
 case HydrologyPismAnalysisEnum : return "HydrologyPismAnalysis";
 case HydrologyShaktiAnalysisEnum : return "HydrologyShaktiAnalysis";
 case HydrologyShreveAnalysisEnum : return "HydrologyShreveAnalysis";
+ case HydrologyPrescribeAnalysisEnum : return "HydrologyPrescribeAnalysis";
 case HydrologySolutionEnum : return "HydrologySolution";
 case HydrologySubstepsEnum : return "HydrologySubsteps";
 case HydrologySubTimeEnum : return "HydrologySubTime";
@@ -3530,6 +3544,7 @@ const char* EnumToStringx(int en){
 case HydrologypismEnum : return "Hydrologypism";
 case HydrologyshaktiEnum : return "Hydrologyshakti";
 case HydrologyshreveEnum : return "Hydrologyshreve";
+ case HydrologyprescribeEnum : return "Hydrologyprescribe";
 case IceMassEnum : return "IceMass";
 case IceMassScaledEnum : return "IceMassScaled";
 case IceVolumeAboveFloatationEnum : return "IceVolumeAboveFloatation";
@@ -3691,6 +3706,7 @@ const char* EnumToStringx(int en){
 case SMBpddEnum : return "SMBpdd";
 case SMBpddSicopolisEnum : return "SMBpddSicopolis";
 case SMBpddGCMEnum : return "SMBpddGCM";
+ case SMBpddFastEnum : return "SMBpddFast";
 case SMBsemicEnum : return "SMBsemic";
 case SSAApproximationEnum : return "SSAApproximation";
 case SSAFSApproximationEnum : return "SSAFSApproximation";
@@ -3750,6 +3766,7 @@ const char* EnumToStringx(int en){
 case TotalFloatingBmbScaledEnum : return "TotalFloatingBmbScaled";
 case TotalGroundedBmbEnum : return "TotalGroundedBmb";
 case TotalGroundedBmbScaledEnum : return "TotalGroundedBmbScaled";
+ case TotalHydrologyBasalFluxEnum : return "TotalHydrologyBasalFlux";
 case TotalSmbEnum : return "TotalSmb";
 case TotalSmbScaledEnum : return "TotalSmbScaled";
 case TotalSmbRefreezeEnum : return "TotalSmbRefreeze";
diff --git a/src/c/shared/Enum/Enumjl.vim b/src/c/shared/Enum/Enumjl.vim
index ce3c81591..107ea2bac 100644
--- a/src/c/shared/Enum/Enumjl.vim
+++ b/src/c/shared/Enum/Enumjl.vim
@@ -78,6 +78,9 @@ syn keyword juliaConstC BasalforcingsIsmip6Gamma0Enum
 syn keyword juliaConstC BasalforcingsIsmip6IsLocalEnum
 syn keyword juliaConstC BasalforcingsIsmip6NumBasinsEnum
 syn keyword juliaConstC BasalforcingsIsmip6TfDepthsEnum
+syn keyword juliaConstC BasalforcingsIsmip7TfDepthsEnum
+syn keyword juliaConstC BasalforcingsIsmip7NumBasinsEnum
+syn keyword juliaConstC BasalforcingsIsmip7GammaEnum
 syn keyword juliaConstC BasalforcingsLinearNumBasinsEnum
 syn keyword juliaConstC BasalforcingsLinearNumBreaksEnum
 syn keyword juliaConstC BasalforcingsLinearNumParamsEnum
@@ -459,6 +462,7 @@ syn keyword juliaConstC SamplingRequestedOutputsEnum
 syn keyword juliaConstC SamplingRobinEnum
 syn keyword juliaConstC SamplingSeedEnum
 syn keyword juliaConstC SaveResultsEnum
+syn keyword juliaConstC SaveFinalResultsEnum
 syn keyword juliaConstC SolidearthPartitionIceEnum
 syn keyword juliaConstC SolidearthPartitionHydroEnum
 syn keyword juliaConstC SolidearthPartitionOceanEnum
@@ -767,6 +771,7 @@ syn keyword juliaConstC BalancethicknessOmegaEnum
 syn keyword juliaConstC BalancethicknessSpcthicknessEnum
 syn keyword juliaConstC BalancethicknessThickeningRateEnum
 syn keyword juliaConstC BasalCrevasseEnum
+syn keyword juliaConstC BasalforcingsCoriolisFEnum
 syn keyword juliaConstC BasalforcingsDeepwaterMeltingRatearmaEnum
 syn keyword juliaConstC BasalforcingsDeepwaterMeltingRateNoiseEnum
 syn keyword juliaConstC BasalforcingsDeepwaterMeltingRateValuesAutoregressionEnum
@@ -786,6 +791,10 @@ syn keyword juliaConstC BasalforcingsIsmip6BasinIdEnum
 syn keyword juliaConstC BasalforcingsIsmip6TfEnum
 syn keyword juliaConstC BasalforcingsIsmip6TfShelfEnum
 syn keyword juliaConstC BasalforcingsIsmip6MeltAnomalyEnum
+syn keyword juliaConstC BasalforcingsIsmip7TfEnum
+syn keyword juliaConstC BasalforcingsIsmip7TfShelfEnum
+syn keyword juliaConstC BasalforcingsIsmip7SalinityEnum
+syn keyword juliaConstC BasalforcingsIsmip7SalinityShelfEnum
 syn keyword juliaConstC BasalforcingsMeltrateFactorEnum
 syn keyword juliaConstC BasalforcingsOceanSalinityEnum
 syn keyword juliaConstC BasalforcingsOceanTempEnum
@@ -1181,6 +1190,7 @@ syn keyword juliaConstC SmbHrefEnum
 syn keyword juliaConstC SmbIsInitializedEnum
 syn keyword juliaConstC SmbMAddEnum
 syn keyword juliaConstC SmbMappedforcingpointEnum
+syn keyword juliaConstC SmbMappedforcingprecipscalingEnum
 syn keyword juliaConstC SmbMassBalanceEnum
 syn keyword juliaConstC SmbMassBalanceSnowEnum
 syn keyword juliaConstC SmbMassBalanceIceEnum
@@ -3385,6 +3395,7 @@ syn keyword juliaConstC BalancethicknessSolutionEnum
 syn keyword juliaConstC BalancevelocityAnalysisEnum
 syn keyword juliaConstC BalancevelocitySolutionEnum
 syn keyword juliaConstC BasalforcingsIsmip6Enum
+syn keyword juliaConstC BasalforcingsIsmip7Enum
 syn keyword juliaConstC BasalforcingsPicoEnum
 syn keyword juliaConstC BeckmannGoosseFloatingMeltRateEnum
 syn keyword juliaConstC BedSlopeSolutionEnum
@@ -3458,6 +3469,7 @@ syn keyword juliaConstC DoubleVecParamEnum
 syn keyword juliaConstC ElementEnum
 syn keyword juliaConstC ElementHookEnum
 syn keyword juliaConstC ElementSIdEnum
+syn keyword juliaConstC EmulatorParamEnum
 syn keyword juliaConstC EnthalpyAnalysisEnum
 syn keyword juliaConstC EsaAnalysisEnum
 syn keyword juliaConstC EsaSolutionEnum
@@ -3479,6 +3491,7 @@ syn keyword juliaConstC FloatingMeltRateEnum
 syn keyword juliaConstC FreeEnum
 syn keyword juliaConstC FreeSurfaceBaseAnalysisEnum
 syn keyword juliaConstC FreeSurfaceTopAnalysisEnum
+syn keyword juliaConstC FrictionEmulatorEnum
 syn keyword juliaConstC FrontalForcingsDefaultEnum
 syn keyword juliaConstC FrontalForcingsRignotEnum
 syn keyword juliaConstC FrontalForcingsRignotarmaEnum
@@ -3514,6 +3527,7 @@ syn keyword juliaConstC HydrologyGlaDSEnum
 syn keyword juliaConstC HydrologyPismAnalysisEnum
 syn keyword juliaConstC HydrologyShaktiAnalysisEnum
 syn keyword juliaConstC HydrologyShreveAnalysisEnum
+syn keyword juliaConstC HydrologyPrescribeAnalysisEnum
 syn keyword juliaConstC HydrologySolutionEnum
 syn keyword juliaConstC HydrologySubstepsEnum
 syn keyword juliaConstC HydrologySubTimeEnum
@@ -3521,6 +3535,7 @@ syn keyword juliaConstC HydrologydcEnum
 syn keyword juliaConstC HydrologypismEnum
 syn keyword juliaConstC HydrologyshaktiEnum
 syn keyword juliaConstC HydrologyshreveEnum
+syn keyword juliaConstC HydrologyprescribeEnum
 syn keyword juliaConstC IceMassEnum
 syn keyword juliaConstC IceMassScaledEnum
 syn keyword juliaConstC IceVolumeAboveFloatationEnum
@@ -3682,6 +3697,7 @@ syn keyword juliaConstC SMBmeltcomponentsEnum
 syn keyword juliaConstC SMBpddEnum
 syn keyword juliaConstC SMBpddSicopolisEnum
 syn keyword juliaConstC SMBpddGCMEnum
+syn keyword juliaConstC SMBpddFastEnum
 syn keyword juliaConstC SMBsemicEnum
 syn keyword juliaConstC SSAApproximationEnum
 syn keyword juliaConstC SSAFSApproximationEnum
@@ -3741,6 +3757,7 @@ syn keyword juliaConstC TotalFloatingBmbEnum
 syn keyword juliaConstC TotalFloatingBmbScaledEnum
 syn keyword juliaConstC TotalGroundedBmbEnum
 syn keyword juliaConstC TotalGroundedBmbScaledEnum
+syn keyword juliaConstC TotalHydrologyBasalFluxEnum
 syn keyword juliaConstC TotalSmbEnum
 syn keyword juliaConstC TotalSmbScaledEnum
 syn keyword juliaConstC TotalSmbRefreezeEnum
diff --git a/src/c/shared/Enum/StringToEnumx.cpp b/src/c/shared/Enum/StringToEnumx.cpp
index b75f68d5c..e2415a672 100644
--- a/src/c/shared/Enum/StringToEnumx.cpp
+++ b/src/c/shared/Enum/StringToEnumx.cpp
@@ -87,6 +87,9 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"BasalforcingsIsmip6IsLocal")==0) return BasalforcingsIsmip6IsLocalEnum;
 else if (strcmp(name,"BasalforcingsIsmip6NumBasins")==0) return BasalforcingsIsmip6NumBasinsEnum;
 else if (strcmp(name,"BasalforcingsIsmip6TfDepths")==0) return BasalforcingsIsmip6TfDepthsEnum;
+ else if (strcmp(name,"BasalforcingsIsmip7TfDepths")==0) return BasalforcingsIsmip7TfDepthsEnum;
+ else if (strcmp(name,"BasalforcingsIsmip7NumBasins")==0) return BasalforcingsIsmip7NumBasinsEnum;
+ else if (strcmp(name,"BasalforcingsIsmip7Gamma")==0) return BasalforcingsIsmip7GammaEnum;
 else if (strcmp(name,"BasalforcingsLinearNumBasins")==0) return BasalforcingsLinearNumBasinsEnum;
 else if (strcmp(name,"BasalforcingsLinearNumBreaks")==0) return BasalforcingsLinearNumBreaksEnum;
 else if (strcmp(name,"BasalforcingsLinearNumParams")==0) return BasalforcingsLinearNumParamsEnum;
@@ -133,13 +136,13 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"CalvingXoffset")==0) return CalvingXoffsetEnum;
 else if (strcmp(name,"CalvingYoffset")==0) return CalvingYoffsetEnum;
 else if (strcmp(name,"CalvingVelLowerbound")==0) return CalvingVelLowerboundEnum;
- else if (strcmp(name,"CalvingVelMax")==0) return CalvingVelMaxEnum;
- else if (strcmp(name,"CalvingVelThreshold")==0) return CalvingVelThresholdEnum;
- else if (strcmp(name,"CalvingVelUpperbound")==0) return CalvingVelUpperboundEnum;
 else stage=2;
 }
 if(stage==2){
- if (strcmp(name,"CalvingRc")==0) return CalvingRcEnum;
+ if (strcmp(name,"CalvingVelMax")==0) return CalvingVelMaxEnum;
+ else if (strcmp(name,"CalvingVelThreshold")==0) return CalvingVelThresholdEnum;
+ else if (strcmp(name,"CalvingVelUpperbound")==0) return CalvingVelUpperboundEnum;
+ else if (strcmp(name,"CalvingRc")==0) return CalvingRcEnum;
 else if (strcmp(name,"CalvingNumberofBasins")==0) return CalvingNumberofBasinsEnum;
 else if (strcmp(name,"ConfigurationType")==0) return ConfigurationTypeEnum;
 else if (strcmp(name,"ConstantsG")==0) return ConstantsGEnum;
@@ -256,13 +259,13 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"FrontalForcingsSdmaOrder")==0) return FrontalForcingsSdmaOrderEnum;
 else if (strcmp(name,"FrontalForcingsSdMonthlyFrac")==0) return FrontalForcingsSdMonthlyFracEnum;
 else if (strcmp(name,"FrontalForcingsSdNumberofBreaks")==0) return FrontalForcingsSdNumberofBreaksEnum;
- else if (strcmp(name,"FrontalForcingsSdNumberofParams")==0) return FrontalForcingsSdNumberofParamsEnum;
- else if (strcmp(name,"FrontalForcingsSdpolyparams")==0) return FrontalForcingsSdpolyparamsEnum;
- else if (strcmp(name,"GrdModel")==0) return GrdModelEnum;
 else stage=3;
 }
 if(stage==3){
- if (strcmp(name,"GroundinglineFrictionInterpolation")==0) return GroundinglineFrictionInterpolationEnum;
+ if (strcmp(name,"FrontalForcingsSdNumberofParams")==0) return FrontalForcingsSdNumberofParamsEnum;
+ else if (strcmp(name,"FrontalForcingsSdpolyparams")==0) return FrontalForcingsSdpolyparamsEnum;
+ else if (strcmp(name,"GrdModel")==0) return GrdModelEnum;
+ else if (strcmp(name,"GroundinglineFrictionInterpolation")==0) return GroundinglineFrictionInterpolationEnum;
 else if (strcmp(name,"GroundinglineMeltInterpolation")==0) return GroundinglineMeltInterpolationEnum;
 else if (strcmp(name,"GroundinglineMigration")==0) return GroundinglineMigrationEnum;
 else if (strcmp(name,"GroundinglineNumRequestedOutputs")==0) return GroundinglineNumRequestedOutputsEnum;
@@ -379,13 +382,13 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"LoveIntStepsPerLayer")==0) return LoveIntStepsPerLayerEnum;
 else if (strcmp(name,"LoveMinIntegrationSteps")==0) return LoveMinIntegrationStepsEnum;
 else if (strcmp(name,"LoveMaxIntegrationdr")==0) return LoveMaxIntegrationdrEnum;
- else if (strcmp(name,"LoveIntegrationScheme")==0) return LoveIntegrationSchemeEnum;
- else if (strcmp(name,"LoveKernels")==0) return LoveKernelsEnum;
- else if (strcmp(name,"LoveMu0")==0) return LoveMu0Enum;
 else stage=4;
 }
 if(stage==4){
- if (strcmp(name,"LoveNfreq")==0) return LoveNfreqEnum;
+ if (strcmp(name,"LoveIntegrationScheme")==0) return LoveIntegrationSchemeEnum;
+ else if (strcmp(name,"LoveKernels")==0) return LoveKernelsEnum;
+ else if (strcmp(name,"LoveMu0")==0) return LoveMu0Enum;
+ else if (strcmp(name,"LoveNfreq")==0) return LoveNfreqEnum;
 else if (strcmp(name,"LoveNTemporalIterations")==0) return LoveNTemporalIterationsEnum;
 else if (strcmp(name,"LoveNYiEquations")==0) return LoveNYiEquationsEnum;
 else if (strcmp(name,"LoveR0")==0) return LoveR0Enum;
@@ -477,6 +480,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"SamplingRobin")==0) return SamplingRobinEnum;
 else if (strcmp(name,"SamplingSeed")==0) return SamplingSeedEnum;
 else if (strcmp(name,"SaveResults")==0) return SaveResultsEnum;
+ else if (strcmp(name,"SaveFinalResults")==0) return SaveFinalResultsEnum;
 else if (strcmp(name,"SolidearthPartitionIce")==0) return SolidearthPartitionIceEnum;
 else if (strcmp(name,"SolidearthPartitionHydro")==0) return SolidearthPartitionHydroEnum;
 else if (strcmp(name,"SolidearthPartitionOcean")==0) return SolidearthPartitionOceanEnum;
@@ -501,14 +505,14 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"SealevelchangeRequestedOutputs")==0) return SealevelchangeRequestedOutputsEnum;
 else if (strcmp(name,"RotationalAngularVelocity")==0) return RotationalAngularVelocityEnum;
 else if (strcmp(name,"RotationalEquatorialMoi")==0) return RotationalEquatorialMoiEnum;
- else if (strcmp(name,"RotationalPolarMoi")==0) return RotationalPolarMoiEnum;
- else if (strcmp(name,"LovePolarMotionTransferFunctionColinear")==0) return LovePolarMotionTransferFunctionColinearEnum;
- else if (strcmp(name,"LovePolarMotionTransferFunctionOrthogonal")==0) return LovePolarMotionTransferFunctionOrthogonalEnum;
- else if (strcmp(name,"TidalLoveH")==0) return TidalLoveHEnum;
 else stage=5;
 }
 if(stage==5){
- if (strcmp(name,"TidalLoveK")==0) return TidalLoveKEnum;
+ if (strcmp(name,"RotationalPolarMoi")==0) return RotationalPolarMoiEnum;
+ else if (strcmp(name,"LovePolarMotionTransferFunctionColinear")==0) return LovePolarMotionTransferFunctionColinearEnum;
+ else if (strcmp(name,"LovePolarMotionTransferFunctionOrthogonal")==0) return LovePolarMotionTransferFunctionOrthogonalEnum;
+ else if (strcmp(name,"TidalLoveH")==0) return TidalLoveHEnum;
+ else if (strcmp(name,"TidalLoveK")==0) return TidalLoveKEnum;
 else if (strcmp(name,"TidalLoveL")==0) return TidalLoveLEnum;
 else if (strcmp(name,"TidalLoveK2Secular")==0) return TidalLoveK2SecularEnum;
 else if (strcmp(name,"LoadLoveH")==0) return LoadLoveHEnum;
@@ -624,14 +628,14 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"SmbIsdensification")==0) return SmbIsdensificationEnum;
 else if (strcmp(name,"SmbIsdeltaLWup")==0) return SmbIsdeltaLWupEnum;
 else if (strcmp(name,"SmbIsfirnwarming")==0) return SmbIsfirnwarmingEnum;
- else if (strcmp(name,"SmbIsgraingrowth")==0) return SmbIsgraingrowthEnum;
- else if (strcmp(name,"SmbIsmappedforcing")==0) return SmbIsmappedforcingEnum;
- else if (strcmp(name,"SmbIsmelt")==0) return SmbIsmeltEnum;
- else if (strcmp(name,"SmbIsmungsm")==0) return SmbIsmungsmEnum;
 else stage=6;
 }
 if(stage==6){
- if (strcmp(name,"SmbIsprecipforcingremapped")==0) return SmbIsprecipforcingremappedEnum;
+ if (strcmp(name,"SmbIsgraingrowth")==0) return SmbIsgraingrowthEnum;
+ else if (strcmp(name,"SmbIsmappedforcing")==0) return SmbIsmappedforcingEnum;
+ else if (strcmp(name,"SmbIsmelt")==0) return SmbIsmeltEnum;
+ else if (strcmp(name,"SmbIsmungsm")==0) return SmbIsmungsmEnum;
+ else if (strcmp(name,"SmbIsprecipforcingremapped")==0) return SmbIsprecipforcingremappedEnum;
 else if (strcmp(name,"SmbIsprecipscaled")==0) return SmbIsprecipscaledEnum;
 else if (strcmp(name,"SmbIssetpddfac")==0) return SmbIssetpddfacEnum;
 else if (strcmp(name,"SmbIsshortwave")==0) return SmbIsshortwaveEnum;
@@ -747,14 +751,14 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"TransientIsdamageevolution")==0) return TransientIsdamageevolutionEnum;
 else if (strcmp(name,"TransientIsdebris")==0) return TransientIsdebrisEnum;
 else if (strcmp(name,"TransientIsesa")==0) return TransientIsesaEnum;
- else if (strcmp(name,"TransientIsgia")==0) return TransientIsgiaEnum;
- else if (strcmp(name,"TransientIsgroundingline")==0) return TransientIsgroundinglineEnum;
- else if (strcmp(name,"TransientIshydrology")==0) return TransientIshydrologyEnum;
- else if (strcmp(name,"TransientIsmasstransport")==0) return TransientIsmasstransportEnum;
 else stage=7;
 }
 if(stage==7){
- if (strcmp(name,"TransientIsmmemasstransport")==0) return TransientIsmmemasstransportEnum;
+ if (strcmp(name,"TransientIsgia")==0) return TransientIsgiaEnum;
+ else if (strcmp(name,"TransientIsgroundingline")==0) return TransientIsgroundinglineEnum;
+ else if (strcmp(name,"TransientIshydrology")==0) return TransientIshydrologyEnum;
+ else if (strcmp(name,"TransientIsmasstransport")==0) return TransientIsmasstransportEnum;
+ else if (strcmp(name,"TransientIsmmemasstransport")==0) return TransientIsmmemasstransportEnum;
 else if (strcmp(name,"TransientIsoceantransport")==0) return TransientIsoceantransportEnum;
 else if (strcmp(name,"TransientIsmovingfront")==0) return TransientIsmovingfrontEnum;
 else if (strcmp(name,"TransientIsoceancoupling")==0) return TransientIsoceancouplingEnum;
@@ -794,6 +798,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"BalancethicknessSpcthickness")==0) return BalancethicknessSpcthicknessEnum;
 else if (strcmp(name,"BalancethicknessThickeningRate")==0) return BalancethicknessThickeningRateEnum;
 else if (strcmp(name,"BasalCrevasse")==0) return BasalCrevasseEnum;
+ else if (strcmp(name,"BasalforcingsCoriolisF")==0) return BasalforcingsCoriolisFEnum;
 else if (strcmp(name,"BasalforcingsDeepwaterMeltingRatearma")==0) return BasalforcingsDeepwaterMeltingRatearmaEnum;
 else if (strcmp(name,"BasalforcingsDeepwaterMeltingRateNoise")==0) return BasalforcingsDeepwaterMeltingRateNoiseEnum;
 else if (strcmp(name,"BasalforcingsDeepwaterMeltingRateValuesAutoregression")==0) return BasalforcingsDeepwaterMeltingRateValuesAutoregressionEnum;
@@ -813,6 +818,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"BasalforcingsIsmip6Tf")==0) return BasalforcingsIsmip6TfEnum;
 else if (strcmp(name,"BasalforcingsIsmip6TfShelf")==0) return BasalforcingsIsmip6TfShelfEnum;
 else if (strcmp(name,"BasalforcingsIsmip6MeltAnomaly")==0) return BasalforcingsIsmip6MeltAnomalyEnum;
+ else if (strcmp(name,"BasalforcingsIsmip7Tf")==0) return BasalforcingsIsmip7TfEnum;
+ else if (strcmp(name,"BasalforcingsIsmip7TfShelf")==0) return BasalforcingsIsmip7TfShelfEnum;
+ else if (strcmp(name,"BasalforcingsIsmip7Salinity")==0) return BasalforcingsIsmip7SalinityEnum;
+ else if (strcmp(name,"BasalforcingsIsmip7SalinityShelf")==0) return BasalforcingsIsmip7SalinityShelfEnum;
 else if (strcmp(name,"BasalforcingsMeltrateFactor")==0) return BasalforcingsMeltrateFactorEnum;
 else if (strcmp(name,"BasalforcingsOceanSalinity")==0) return BasalforcingsOceanSalinityEnum;
 else if (strcmp(name,"BasalforcingsOceanTemp")==0) return BasalforcingsOceanTempEnum;
@@ -865,7 +874,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"DamageF")==0) return DamageFEnum;
 else if (strcmp(name,"DebrisThickness")==0) return DebrisThicknessEnum;
 else if (strcmp(name,"DegreeOfChannelization")==0) return DegreeOfChannelizationEnum;
- else if (strcmp(name,"DepthBelowSurface")==0) return DepthBelowSurfaceEnum;
+ else stage=8;
+ }
+ if(stage==8){
+ if (strcmp(name,"DepthBelowSurface")==0) return DepthBelowSurfaceEnum;
 else if (strcmp(name,"DeltaIceThickness")==0) return DeltaIceThicknessEnum;
 else if (strcmp(name,"DeltaTws")==0) return DeltaTwsEnum;
 else if (strcmp(name,"DeltaBottomPressure")==0) return DeltaBottomPressureEnum;
@@ -874,10 +886,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Dsl")==0) return DslEnum;
 else if (strcmp(name,"DeltaStr")==0) return DeltaStrEnum;
 else if (strcmp(name,"StrOld")==0) return StrOldEnum;
- else stage=8;
- }
- if(stage==8){
- if (strcmp(name,"Str")==0) return StrEnum;
+ else if (strcmp(name,"Str")==0) return StrEnum;
 else if (strcmp(name,"DeviatoricStresseffective")==0) return DeviatoricStresseffectiveEnum;
 else if (strcmp(name,"DeviatoricStressxx")==0) return DeviatoricStressxxEnum;
 else if (strcmp(name,"DeviatoricStressxy")==0) return DeviatoricStressxyEnum;
@@ -988,7 +997,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"HydrologySheetThicknessOld")==0) return HydrologySheetThicknessOldEnum;
 else if (strcmp(name,"HydrologyStorage")==0) return HydrologyStorageEnum;
 else if (strcmp(name,"HydrologyTws")==0) return HydrologyTwsEnum;
- else if (strcmp(name,"HydrologyTwsSpc")==0) return HydrologyTwsSpcEnum;
+ else stage=9;
+ }
+ if(stage==9){
+ if (strcmp(name,"HydrologyTwsSpc")==0) return HydrologyTwsSpcEnum;
 else if (strcmp(name,"HydrologyTwsAnalysis")==0) return HydrologyTwsAnalysisEnum;
 else if (strcmp(name,"HydrologyWatercolumnMax")==0) return HydrologyWatercolumnMaxEnum;
 else if (strcmp(name,"HydrologyWaterVx")==0) return HydrologyWaterVxEnum;
@@ -997,10 +1009,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"DebrisMaskNodeActivation")==0) return DebrisMaskNodeActivationEnum;
 else if (strcmp(name,"Ice")==0) return IceEnum;
 else if (strcmp(name,"IceMaskNodeActivation")==0) return IceMaskNodeActivationEnum;
- else stage=9;
- }
- if(stage==9){
- if (strcmp(name,"Input")==0) return InputEnum;
+ else if (strcmp(name,"Input")==0) return InputEnum;
 else if (strcmp(name,"InversionCostFunctionsCoefficients")==0) return InversionCostFunctionsCoefficientsEnum;
 else if (strcmp(name,"InversionSurfaceObs")==0) return InversionSurfaceObsEnum;
 else if (strcmp(name,"InversionThicknessObs")==0) return InversionThicknessObsEnum;
@@ -1111,7 +1120,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"SealevelchangeAzimuthIndex")==0) return SealevelchangeAzimuthIndexEnum;
 else if (strcmp(name,"SealevelchangeGrot")==0) return SealevelchangeGrotEnum;
 else if (strcmp(name,"SealevelchangeGSatGravirot")==0) return SealevelchangeGSatGravirotEnum;
- else if (strcmp(name,"SealevelchangeGUrot")==0) return SealevelchangeGUrotEnum;
+ else stage=10;
+ }
+ if(stage==10){
+ if (strcmp(name,"SealevelchangeGUrot")==0) return SealevelchangeGUrotEnum;
 else if (strcmp(name,"SealevelchangeGNrot")==0) return SealevelchangeGNrotEnum;
 else if (strcmp(name,"SealevelchangeGErot")==0) return SealevelchangeGErotEnum;
 else if (strcmp(name,"SealevelchangeAlphaIndexOcean")==0) return SealevelchangeAlphaIndexOceanEnum;
@@ -1120,10 +1132,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"SealevelchangeAzimuthIndexOcean")==0) return SealevelchangeAzimuthIndexOceanEnum;
 else if (strcmp(name,"SealevelchangeAzimuthIndexIce")==0) return SealevelchangeAzimuthIndexIceEnum;
 else if (strcmp(name,"SealevelchangeAzimuthIndexHydro")==0) return SealevelchangeAzimuthIndexHydroEnum;
- else stage=10;
- }
- if(stage==10){
- if (strcmp(name,"SealevelchangeViscousRSL")==0) return SealevelchangeViscousRSLEnum;
+ else if (strcmp(name,"SealevelchangeViscousRSL")==0) return SealevelchangeViscousRSLEnum;
 else if (strcmp(name,"SealevelchangeViscousSG")==0) return SealevelchangeViscousSGEnum;
 else if (strcmp(name,"SealevelchangeViscousU")==0) return SealevelchangeViscousUEnum;
 else if (strcmp(name,"SealevelchangeViscousN")==0) return SealevelchangeViscousNEnum;
@@ -1217,6 +1226,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"SmbIsInitialized")==0) return SmbIsInitializedEnum;
 else if (strcmp(name,"SmbMAdd")==0) return SmbMAddEnum;
 else if (strcmp(name,"SmbMappedforcingpoint")==0) return SmbMappedforcingpointEnum;
+ else if (strcmp(name,"SmbMappedforcingprecipscaling")==0) return SmbMappedforcingprecipscalingEnum;
 else if (strcmp(name,"SmbMassBalance")==0) return SmbMassBalanceEnum;
 else if (strcmp(name,"SmbMassBalanceSnow")==0) return SmbMassBalanceSnowEnum;
 else if (strcmp(name,"SmbMassBalanceIce")==0) return SmbMassBalanceIceEnum;
@@ -1233,7 +1243,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"SmbMonthlydsradiation")==0) return SmbMonthlydsradiationEnum;
 else if (strcmp(name,"SmbMonthlydlradiation")==0) return SmbMonthlydlradiationEnum;
 else if (strcmp(name,"SmbMonthlywindspeed")==0) return SmbMonthlywindspeedEnum;
- else if (strcmp(name,"SmbMonthlyairhumidity")==0) return SmbMonthlyairhumidityEnum;
+ else stage=11;
+ }
+ if(stage==11){
+ if (strcmp(name,"SmbMonthlyairhumidity")==0) return SmbMonthlyairhumidityEnum;
 else if (strcmp(name,"SmbMSurf")==0) return SmbMSurfEnum;
 else if (strcmp(name,"SmbMSurfSum")==0) return SmbMSurfSumEnum;
 else if (strcmp(name,"SmbNetLW")==0) return SmbNetLWEnum;
@@ -1243,10 +1256,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"SmbPddfacIce")==0) return SmbPddfacIceEnum;
 else if (strcmp(name,"SmbPddfacSnow")==0) return SmbPddfacSnowEnum;
 else if (strcmp(name,"SmbPrecipitation")==0) return SmbPrecipitationEnum;
- else stage=11;
- }
- if(stage==11){
- if (strcmp(name,"SmbPrecipitationSubstep")==0) return SmbPrecipitationSubstepEnum;
+ else if (strcmp(name,"SmbPrecipitationSubstep")==0) return SmbPrecipitationSubstepEnum;
 else if (strcmp(name,"SmbPrecipitationsAnomaly")==0) return SmbPrecipitationsAnomalyEnum;
 else if (strcmp(name,"SmbDsradiationAnomaly")==0) return SmbDsradiationAnomalyEnum;
 else if (strcmp(name,"SmbDlradiationAnomaly")==0) return SmbDlradiationAnomalyEnum;
@@ -1356,7 +1366,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"ThicknessAbsGradient")==0) return ThicknessAbsGradientEnum;
 else if (strcmp(name,"ThicknessAbsMisfit")==0) return ThicknessAbsMisfitEnum;
 else if (strcmp(name,"ThicknessAcrossGradient")==0) return ThicknessAcrossGradientEnum;
- else if (strcmp(name,"ThicknessAlongGradient")==0) return ThicknessAlongGradientEnum;
+ else stage=12;
+ }
+ if(stage==12){
+ if (strcmp(name,"ThicknessAlongGradient")==0) return ThicknessAlongGradientEnum;
 else if (strcmp(name,"Thickness")==0) return ThicknessEnum;
 else if (strcmp(name,"ThicknessOld")==0) return ThicknessOldEnum;
 else if (strcmp(name,"ThicknessPositive")==0) return ThicknessPositiveEnum;
@@ -1366,10 +1379,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"VxAverage")==0) return VxAverageEnum;
 else if (strcmp(name,"VxBase")==0) return VxBaseEnum;
 else if (strcmp(name,"VxDebris")==0) return VxDebrisEnum;
- else stage=12;
- }
- if(stage==12){
- if (strcmp(name,"Vx")==0) return VxEnum;
+ else if (strcmp(name,"Vx")==0) return VxEnum;
 else if (strcmp(name,"VxMesh")==0) return VxMeshEnum;
 else if (strcmp(name,"VxObs")==0) return VxObsEnum;
 else if (strcmp(name,"VxShear")==0) return VxShearEnum;
@@ -1479,7 +1489,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition76")==0) return Outputdefinition76Enum;
 else if (strcmp(name,"Outputdefinition77")==0) return Outputdefinition77Enum;
 else if (strcmp(name,"Outputdefinition78")==0) return Outputdefinition78Enum;
- else if (strcmp(name,"Outputdefinition79")==0) return Outputdefinition79Enum;
+ else stage=13;
+ }
+ if(stage==13){
+ if (strcmp(name,"Outputdefinition79")==0) return Outputdefinition79Enum;
 else if (strcmp(name,"Outputdefinition7")==0) return Outputdefinition7Enum;
 else if (strcmp(name,"Outputdefinition80")==0) return Outputdefinition80Enum;
 else if (strcmp(name,"Outputdefinition81")==0) return Outputdefinition81Enum;
@@ -1489,10 +1502,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition85")==0) return Outputdefinition85Enum;
 else if (strcmp(name,"Outputdefinition86")==0) return Outputdefinition86Enum;
 else if (strcmp(name,"Outputdefinition87")==0) return Outputdefinition87Enum;
- else stage=13;
- }
- if(stage==13){
- if (strcmp(name,"Outputdefinition88")==0) return Outputdefinition88Enum;
+ else if (strcmp(name,"Outputdefinition88")==0) return Outputdefinition88Enum;
 else if (strcmp(name,"Outputdefinition89")==0) return Outputdefinition89Enum;
 else if (strcmp(name,"Outputdefinition8")==0) return Outputdefinition8Enum;
 else if (strcmp(name,"Outputdefinition90")==0) return Outputdefinition90Enum;
@@ -1602,7 +1612,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition194")==0) return Outputdefinition194Enum;
 else if (strcmp(name,"Outputdefinition195")==0) return Outputdefinition195Enum;
 else if (strcmp(name,"Outputdefinition196")==0) return Outputdefinition196Enum;
- else if (strcmp(name,"Outputdefinition197")==0) return Outputdefinition197Enum;
+ else stage=14;
+ }
+ if(stage==14){
+ if (strcmp(name,"Outputdefinition197")==0) return Outputdefinition197Enum;
 else if (strcmp(name,"Outputdefinition198")==0) return Outputdefinition198Enum;
 else if (strcmp(name,"Outputdefinition199")==0) return Outputdefinition199Enum;
 else if (strcmp(name,"Outputdefinition109")==0) return Outputdefinition109Enum;
@@ -1612,10 +1625,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition211")==0) return Outputdefinition211Enum;
 else if (strcmp(name,"Outputdefinition212")==0) return Outputdefinition212Enum;
 else if (strcmp(name,"Outputdefinition213")==0) return Outputdefinition213Enum;
- else stage=14;
- }
- if(stage==14){
- if (strcmp(name,"Outputdefinition214")==0) return Outputdefinition214Enum;
+ else if (strcmp(name,"Outputdefinition214")==0) return Outputdefinition214Enum;
 else if (strcmp(name,"Outputdefinition215")==0) return Outputdefinition215Enum;
 else if (strcmp(name,"Outputdefinition216")==0) return Outputdefinition216Enum;
 else if (strcmp(name,"Outputdefinition217")==0) return Outputdefinition217Enum;
@@ -1725,7 +1735,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition321")==0) return Outputdefinition321Enum;
 else if (strcmp(name,"Outputdefinition322")==0) return Outputdefinition322Enum;
 else if (strcmp(name,"Outputdefinition323")==0) return Outputdefinition323Enum;
- else if (strcmp(name,"Outputdefinition324")==0) return Outputdefinition324Enum;
+ else stage=15;
+ }
+ if(stage==15){
+ if (strcmp(name,"Outputdefinition324")==0) return Outputdefinition324Enum;
 else if (strcmp(name,"Outputdefinition325")==0) return Outputdefinition325Enum;
 else if (strcmp(name,"Outputdefinition326")==0) return Outputdefinition326Enum;
 else if (strcmp(name,"Outputdefinition327")==0) return Outputdefinition327Enum;
@@ -1735,10 +1748,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition330")==0) return Outputdefinition330Enum;
 else if (strcmp(name,"Outputdefinition331")==0) return Outputdefinition331Enum;
 else if (strcmp(name,"Outputdefinition332")==0) return Outputdefinition332Enum;
- else stage=15;
- }
- if(stage==15){
- if (strcmp(name,"Outputdefinition333")==0) return Outputdefinition333Enum;
+ else if (strcmp(name,"Outputdefinition333")==0) return Outputdefinition333Enum;
 else if (strcmp(name,"Outputdefinition334")==0) return Outputdefinition334Enum;
 else if (strcmp(name,"Outputdefinition335")==0) return Outputdefinition335Enum;
 else if (strcmp(name,"Outputdefinition336")==0) return Outputdefinition336Enum;
@@ -1848,7 +1858,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition403")==0) return Outputdefinition403Enum;
 else if (strcmp(name,"Outputdefinition440")==0) return Outputdefinition440Enum;
 else if (strcmp(name,"Outputdefinition441")==0) return Outputdefinition441Enum;
- else if (strcmp(name,"Outputdefinition442")==0) return Outputdefinition442Enum;
+ else stage=16;
+ }
+ if(stage==16){
+ if (strcmp(name,"Outputdefinition442")==0) return Outputdefinition442Enum;
 else if (strcmp(name,"Outputdefinition443")==0) return Outputdefinition443Enum;
 else if (strcmp(name,"Outputdefinition444")==0) return Outputdefinition444Enum;
 else if (strcmp(name,"Outputdefinition445")==0) return Outputdefinition445Enum;
@@ -1858,10 +1871,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition449")==0) return Outputdefinition449Enum;
 else if (strcmp(name,"Outputdefinition404")==0) return Outputdefinition404Enum;
 else if (strcmp(name,"Outputdefinition450")==0) return Outputdefinition450Enum;
- else stage=16;
- }
- if(stage==16){
- if (strcmp(name,"Outputdefinition451")==0) return Outputdefinition451Enum;
+ else if (strcmp(name,"Outputdefinition451")==0) return Outputdefinition451Enum;
 else if (strcmp(name,"Outputdefinition452")==0) return Outputdefinition452Enum;
 else if (strcmp(name,"Outputdefinition453")==0) return Outputdefinition453Enum;
 else if (strcmp(name,"Outputdefinition454")==0) return Outputdefinition454Enum;
@@ -1971,7 +1981,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition558")==0) return Outputdefinition558Enum;
 else if (strcmp(name,"Outputdefinition559")==0) return Outputdefinition559Enum;
 else if (strcmp(name,"Outputdefinition505")==0) return Outputdefinition505Enum;
- else if (strcmp(name,"Outputdefinition560")==0) return Outputdefinition560Enum;
+ else stage=17;
+ }
+ if(stage==17){
+ if (strcmp(name,"Outputdefinition560")==0) return Outputdefinition560Enum;
 else if (strcmp(name,"Outputdefinition561")==0) return Outputdefinition561Enum;
 else if (strcmp(name,"Outputdefinition562")==0) return Outputdefinition562Enum;
 else if (strcmp(name,"Outputdefinition563")==0) return Outputdefinition563Enum;
@@ -1981,10 +1994,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition567")==0) return Outputdefinition567Enum;
 else if (strcmp(name,"Outputdefinition568")==0) return Outputdefinition568Enum;
 else if (strcmp(name,"Outputdefinition569")==0) return Outputdefinition569Enum;
- else stage=17;
- }
- if(stage==17){
- if (strcmp(name,"Outputdefinition506")==0) return Outputdefinition506Enum;
+ else if (strcmp(name,"Outputdefinition506")==0) return Outputdefinition506Enum;
 else if (strcmp(name,"Outputdefinition570")==0) return Outputdefinition570Enum;
 else if (strcmp(name,"Outputdefinition571")==0) return Outputdefinition571Enum;
 else if (strcmp(name,"Outputdefinition572")==0) return Outputdefinition572Enum;
@@ -2094,7 +2104,10 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition676")==0) return Outputdefinition676Enum;
 else if (strcmp(name,"Outputdefinition677")==0) return Outputdefinition677Enum;
 else if (strcmp(name,"Outputdefinition678")==0) return Outputdefinition678Enum;
- else if (strcmp(name,"Outputdefinition679")==0) return Outputdefinition679Enum;
+ else stage=18;
+ }
+ if(stage==18){
+ if (strcmp(name,"Outputdefinition679")==0) return Outputdefinition679Enum;
 else if (strcmp(name,"Outputdefinition607")==0) return Outputdefinition607Enum;
 else if (strcmp(name,"Outputdefinition680")==0) return Outputdefinition680Enum;
 else if (strcmp(name,"Outputdefinition681")==0) return Outputdefinition681Enum;
@@ -2104,10 +2117,7 @@ int StringToEnumx(const char* name,bool notfounderror){
 else if (strcmp(name,"Outputdefinition685")==0) return Outputdefinition685Enum;
 else if (strcmp(name,"Outputdefinition686")==0) return Outputdefinition686Enum;
 else if
(strcmp(name,"Outputdefinition687")==0) return Outputdefinition687Enum; - else stage=18; - } - if(stage==18){ - if (strcmp(name,"Outputdefinition688")==0) return Outputdefinition688Enum; + else if (strcmp(name,"Outputdefinition688")==0) return Outputdefinition688Enum; else if (strcmp(name,"Outputdefinition689")==0) return Outputdefinition689Enum; else if (strcmp(name,"Outputdefinition608")==0) return Outputdefinition608Enum; else if (strcmp(name,"Outputdefinition690")==0) return Outputdefinition690Enum; @@ -2217,7 +2227,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition794")==0) return Outputdefinition794Enum; else if (strcmp(name,"Outputdefinition795")==0) return Outputdefinition795Enum; else if (strcmp(name,"Outputdefinition796")==0) return Outputdefinition796Enum; - else if (strcmp(name,"Outputdefinition797")==0) return Outputdefinition797Enum; + else stage=19; + } + if(stage==19){ + if (strcmp(name,"Outputdefinition797")==0) return Outputdefinition797Enum; else if (strcmp(name,"Outputdefinition798")==0) return Outputdefinition798Enum; else if (strcmp(name,"Outputdefinition799")==0) return Outputdefinition799Enum; else if (strcmp(name,"Outputdefinition709")==0) return Outputdefinition709Enum; @@ -2227,10 +2240,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition811")==0) return Outputdefinition811Enum; else if (strcmp(name,"Outputdefinition812")==0) return Outputdefinition812Enum; else if (strcmp(name,"Outputdefinition813")==0) return Outputdefinition813Enum; - else stage=19; - } - if(stage==19){ - if (strcmp(name,"Outputdefinition814")==0) return Outputdefinition814Enum; + else if (strcmp(name,"Outputdefinition814")==0) return Outputdefinition814Enum; else if (strcmp(name,"Outputdefinition815")==0) return Outputdefinition815Enum; else if (strcmp(name,"Outputdefinition816")==0) return Outputdefinition816Enum; else if (strcmp(name,"Outputdefinition817")==0) return 
Outputdefinition817Enum; @@ -2340,7 +2350,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition921")==0) return Outputdefinition921Enum; else if (strcmp(name,"Outputdefinition922")==0) return Outputdefinition922Enum; else if (strcmp(name,"Outputdefinition923")==0) return Outputdefinition923Enum; - else if (strcmp(name,"Outputdefinition924")==0) return Outputdefinition924Enum; + else stage=20; + } + if(stage==20){ + if (strcmp(name,"Outputdefinition924")==0) return Outputdefinition924Enum; else if (strcmp(name,"Outputdefinition925")==0) return Outputdefinition925Enum; else if (strcmp(name,"Outputdefinition926")==0) return Outputdefinition926Enum; else if (strcmp(name,"Outputdefinition927")==0) return Outputdefinition927Enum; @@ -2350,10 +2363,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition930")==0) return Outputdefinition930Enum; else if (strcmp(name,"Outputdefinition931")==0) return Outputdefinition931Enum; else if (strcmp(name,"Outputdefinition932")==0) return Outputdefinition932Enum; - else stage=20; - } - if(stage==20){ - if (strcmp(name,"Outputdefinition933")==0) return Outputdefinition933Enum; + else if (strcmp(name,"Outputdefinition933")==0) return Outputdefinition933Enum; else if (strcmp(name,"Outputdefinition934")==0) return Outputdefinition934Enum; else if (strcmp(name,"Outputdefinition935")==0) return Outputdefinition935Enum; else if (strcmp(name,"Outputdefinition936")==0) return Outputdefinition936Enum; @@ -2463,7 +2473,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1003")==0) return Outputdefinition1003Enum; else if (strcmp(name,"Outputdefinition1040")==0) return Outputdefinition1040Enum; else if (strcmp(name,"Outputdefinition1041")==0) return Outputdefinition1041Enum; - else if (strcmp(name,"Outputdefinition1042")==0) return Outputdefinition1042Enum; + else stage=21; + } + if(stage==21){ + if 
(strcmp(name,"Outputdefinition1042")==0) return Outputdefinition1042Enum; else if (strcmp(name,"Outputdefinition1043")==0) return Outputdefinition1043Enum; else if (strcmp(name,"Outputdefinition1044")==0) return Outputdefinition1044Enum; else if (strcmp(name,"Outputdefinition1045")==0) return Outputdefinition1045Enum; @@ -2473,10 +2486,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1049")==0) return Outputdefinition1049Enum; else if (strcmp(name,"Outputdefinition1004")==0) return Outputdefinition1004Enum; else if (strcmp(name,"Outputdefinition1050")==0) return Outputdefinition1050Enum; - else stage=21; - } - if(stage==21){ - if (strcmp(name,"Outputdefinition1051")==0) return Outputdefinition1051Enum; + else if (strcmp(name,"Outputdefinition1051")==0) return Outputdefinition1051Enum; else if (strcmp(name,"Outputdefinition1052")==0) return Outputdefinition1052Enum; else if (strcmp(name,"Outputdefinition1053")==0) return Outputdefinition1053Enum; else if (strcmp(name,"Outputdefinition1054")==0) return Outputdefinition1054Enum; @@ -2586,7 +2596,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1158")==0) return Outputdefinition1158Enum; else if (strcmp(name,"Outputdefinition1159")==0) return Outputdefinition1159Enum; else if (strcmp(name,"Outputdefinition1105")==0) return Outputdefinition1105Enum; - else if (strcmp(name,"Outputdefinition1160")==0) return Outputdefinition1160Enum; + else stage=22; + } + if(stage==22){ + if (strcmp(name,"Outputdefinition1160")==0) return Outputdefinition1160Enum; else if (strcmp(name,"Outputdefinition1161")==0) return Outputdefinition1161Enum; else if (strcmp(name,"Outputdefinition1162")==0) return Outputdefinition1162Enum; else if (strcmp(name,"Outputdefinition1163")==0) return Outputdefinition1163Enum; @@ -2596,10 +2609,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1167")==0) return 
Outputdefinition1167Enum; else if (strcmp(name,"Outputdefinition1168")==0) return Outputdefinition1168Enum; else if (strcmp(name,"Outputdefinition1169")==0) return Outputdefinition1169Enum; - else stage=22; - } - if(stage==22){ - if (strcmp(name,"Outputdefinition1106")==0) return Outputdefinition1106Enum; + else if (strcmp(name,"Outputdefinition1106")==0) return Outputdefinition1106Enum; else if (strcmp(name,"Outputdefinition1170")==0) return Outputdefinition1170Enum; else if (strcmp(name,"Outputdefinition1171")==0) return Outputdefinition1171Enum; else if (strcmp(name,"Outputdefinition1172")==0) return Outputdefinition1172Enum; @@ -2709,7 +2719,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1276")==0) return Outputdefinition1276Enum; else if (strcmp(name,"Outputdefinition1277")==0) return Outputdefinition1277Enum; else if (strcmp(name,"Outputdefinition1278")==0) return Outputdefinition1278Enum; - else if (strcmp(name,"Outputdefinition1279")==0) return Outputdefinition1279Enum; + else stage=23; + } + if(stage==23){ + if (strcmp(name,"Outputdefinition1279")==0) return Outputdefinition1279Enum; else if (strcmp(name,"Outputdefinition1207")==0) return Outputdefinition1207Enum; else if (strcmp(name,"Outputdefinition1280")==0) return Outputdefinition1280Enum; else if (strcmp(name,"Outputdefinition1281")==0) return Outputdefinition1281Enum; @@ -2719,10 +2732,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1285")==0) return Outputdefinition1285Enum; else if (strcmp(name,"Outputdefinition1286")==0) return Outputdefinition1286Enum; else if (strcmp(name,"Outputdefinition1287")==0) return Outputdefinition1287Enum; - else stage=23; - } - if(stage==23){ - if (strcmp(name,"Outputdefinition1288")==0) return Outputdefinition1288Enum; + else if (strcmp(name,"Outputdefinition1288")==0) return Outputdefinition1288Enum; else if (strcmp(name,"Outputdefinition1289")==0) return 
Outputdefinition1289Enum; else if (strcmp(name,"Outputdefinition1208")==0) return Outputdefinition1208Enum; else if (strcmp(name,"Outputdefinition1290")==0) return Outputdefinition1290Enum; @@ -2832,7 +2842,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1394")==0) return Outputdefinition1394Enum; else if (strcmp(name,"Outputdefinition1395")==0) return Outputdefinition1395Enum; else if (strcmp(name,"Outputdefinition1396")==0) return Outputdefinition1396Enum; - else if (strcmp(name,"Outputdefinition1397")==0) return Outputdefinition1397Enum; + else stage=24; + } + if(stage==24){ + if (strcmp(name,"Outputdefinition1397")==0) return Outputdefinition1397Enum; else if (strcmp(name,"Outputdefinition1398")==0) return Outputdefinition1398Enum; else if (strcmp(name,"Outputdefinition1399")==0) return Outputdefinition1399Enum; else if (strcmp(name,"Outputdefinition1309")==0) return Outputdefinition1309Enum; @@ -2842,10 +2855,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1411")==0) return Outputdefinition1411Enum; else if (strcmp(name,"Outputdefinition1412")==0) return Outputdefinition1412Enum; else if (strcmp(name,"Outputdefinition1413")==0) return Outputdefinition1413Enum; - else stage=24; - } - if(stage==24){ - if (strcmp(name,"Outputdefinition1414")==0) return Outputdefinition1414Enum; + else if (strcmp(name,"Outputdefinition1414")==0) return Outputdefinition1414Enum; else if (strcmp(name,"Outputdefinition1415")==0) return Outputdefinition1415Enum; else if (strcmp(name,"Outputdefinition1416")==0) return Outputdefinition1416Enum; else if (strcmp(name,"Outputdefinition1417")==0) return Outputdefinition1417Enum; @@ -2955,7 +2965,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1521")==0) return Outputdefinition1521Enum; else if (strcmp(name,"Outputdefinition1522")==0) return Outputdefinition1522Enum; else if 
(strcmp(name,"Outputdefinition1523")==0) return Outputdefinition1523Enum; - else if (strcmp(name,"Outputdefinition1524")==0) return Outputdefinition1524Enum; + else stage=25; + } + if(stage==25){ + if (strcmp(name,"Outputdefinition1524")==0) return Outputdefinition1524Enum; else if (strcmp(name,"Outputdefinition1525")==0) return Outputdefinition1525Enum; else if (strcmp(name,"Outputdefinition1526")==0) return Outputdefinition1526Enum; else if (strcmp(name,"Outputdefinition1527")==0) return Outputdefinition1527Enum; @@ -2965,10 +2978,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1530")==0) return Outputdefinition1530Enum; else if (strcmp(name,"Outputdefinition1531")==0) return Outputdefinition1531Enum; else if (strcmp(name,"Outputdefinition1532")==0) return Outputdefinition1532Enum; - else stage=25; - } - if(stage==25){ - if (strcmp(name,"Outputdefinition1533")==0) return Outputdefinition1533Enum; + else if (strcmp(name,"Outputdefinition1533")==0) return Outputdefinition1533Enum; else if (strcmp(name,"Outputdefinition1534")==0) return Outputdefinition1534Enum; else if (strcmp(name,"Outputdefinition1535")==0) return Outputdefinition1535Enum; else if (strcmp(name,"Outputdefinition1536")==0) return Outputdefinition1536Enum; @@ -3078,7 +3088,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1603")==0) return Outputdefinition1603Enum; else if (strcmp(name,"Outputdefinition1640")==0) return Outputdefinition1640Enum; else if (strcmp(name,"Outputdefinition1641")==0) return Outputdefinition1641Enum; - else if (strcmp(name,"Outputdefinition1642")==0) return Outputdefinition1642Enum; + else stage=26; + } + if(stage==26){ + if (strcmp(name,"Outputdefinition1642")==0) return Outputdefinition1642Enum; else if (strcmp(name,"Outputdefinition1643")==0) return Outputdefinition1643Enum; else if (strcmp(name,"Outputdefinition1644")==0) return Outputdefinition1644Enum; else if 
(strcmp(name,"Outputdefinition1645")==0) return Outputdefinition1645Enum; @@ -3088,10 +3101,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1649")==0) return Outputdefinition1649Enum; else if (strcmp(name,"Outputdefinition1604")==0) return Outputdefinition1604Enum; else if (strcmp(name,"Outputdefinition1650")==0) return Outputdefinition1650Enum; - else stage=26; - } - if(stage==26){ - if (strcmp(name,"Outputdefinition1651")==0) return Outputdefinition1651Enum; + else if (strcmp(name,"Outputdefinition1651")==0) return Outputdefinition1651Enum; else if (strcmp(name,"Outputdefinition1652")==0) return Outputdefinition1652Enum; else if (strcmp(name,"Outputdefinition1653")==0) return Outputdefinition1653Enum; else if (strcmp(name,"Outputdefinition1654")==0) return Outputdefinition1654Enum; @@ -3201,7 +3211,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1758")==0) return Outputdefinition1758Enum; else if (strcmp(name,"Outputdefinition1759")==0) return Outputdefinition1759Enum; else if (strcmp(name,"Outputdefinition1705")==0) return Outputdefinition1705Enum; - else if (strcmp(name,"Outputdefinition1760")==0) return Outputdefinition1760Enum; + else stage=27; + } + if(stage==27){ + if (strcmp(name,"Outputdefinition1760")==0) return Outputdefinition1760Enum; else if (strcmp(name,"Outputdefinition1761")==0) return Outputdefinition1761Enum; else if (strcmp(name,"Outputdefinition1762")==0) return Outputdefinition1762Enum; else if (strcmp(name,"Outputdefinition1763")==0) return Outputdefinition1763Enum; @@ -3211,10 +3224,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1767")==0) return Outputdefinition1767Enum; else if (strcmp(name,"Outputdefinition1768")==0) return Outputdefinition1768Enum; else if (strcmp(name,"Outputdefinition1769")==0) return Outputdefinition1769Enum; - else stage=27; - } - if(stage==27){ - if 
(strcmp(name,"Outputdefinition1706")==0) return Outputdefinition1706Enum; + else if (strcmp(name,"Outputdefinition1706")==0) return Outputdefinition1706Enum; else if (strcmp(name,"Outputdefinition1770")==0) return Outputdefinition1770Enum; else if (strcmp(name,"Outputdefinition1771")==0) return Outputdefinition1771Enum; else if (strcmp(name,"Outputdefinition1772")==0) return Outputdefinition1772Enum; @@ -3324,7 +3334,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1876")==0) return Outputdefinition1876Enum; else if (strcmp(name,"Outputdefinition1877")==0) return Outputdefinition1877Enum; else if (strcmp(name,"Outputdefinition1878")==0) return Outputdefinition1878Enum; - else if (strcmp(name,"Outputdefinition1879")==0) return Outputdefinition1879Enum; + else stage=28; + } + if(stage==28){ + if (strcmp(name,"Outputdefinition1879")==0) return Outputdefinition1879Enum; else if (strcmp(name,"Outputdefinition1807")==0) return Outputdefinition1807Enum; else if (strcmp(name,"Outputdefinition1880")==0) return Outputdefinition1880Enum; else if (strcmp(name,"Outputdefinition1881")==0) return Outputdefinition1881Enum; @@ -3334,10 +3347,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1885")==0) return Outputdefinition1885Enum; else if (strcmp(name,"Outputdefinition1886")==0) return Outputdefinition1886Enum; else if (strcmp(name,"Outputdefinition1887")==0) return Outputdefinition1887Enum; - else stage=28; - } - if(stage==28){ - if (strcmp(name,"Outputdefinition1888")==0) return Outputdefinition1888Enum; + else if (strcmp(name,"Outputdefinition1888")==0) return Outputdefinition1888Enum; else if (strcmp(name,"Outputdefinition1889")==0) return Outputdefinition1889Enum; else if (strcmp(name,"Outputdefinition1808")==0) return Outputdefinition1808Enum; else if (strcmp(name,"Outputdefinition1890")==0) return Outputdefinition1890Enum; @@ -3447,7 +3457,10 @@ int StringToEnumx(const 
char* name,bool notfounderror){ else if (strcmp(name,"Outputdefinition1994")==0) return Outputdefinition1994Enum; else if (strcmp(name,"Outputdefinition1995")==0) return Outputdefinition1995Enum; else if (strcmp(name,"Outputdefinition1996")==0) return Outputdefinition1996Enum; - else if (strcmp(name,"Outputdefinition1997")==0) return Outputdefinition1997Enum; + else stage=29; + } + if(stage==29){ + if (strcmp(name,"Outputdefinition1997")==0) return Outputdefinition1997Enum; else if (strcmp(name,"Outputdefinition1998")==0) return Outputdefinition1998Enum; else if (strcmp(name,"Outputdefinition1999")==0) return Outputdefinition1999Enum; else if (strcmp(name,"Outputdefinition1909")==0) return Outputdefinition1909Enum; @@ -3457,10 +3470,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"AdaptiveTimestepping")==0) return AdaptiveTimesteppingEnum; else if (strcmp(name,"AdjointBalancethickness2Analysis")==0) return AdjointBalancethickness2AnalysisEnum; else if (strcmp(name,"AdjointBalancethicknessAnalysis")==0) return AdjointBalancethicknessAnalysisEnum; - else stage=29; - } - if(stage==29){ - if (strcmp(name,"AdjointHorizAnalysis")==0) return AdjointHorizAnalysisEnum; + else if (strcmp(name,"AdjointHorizAnalysis")==0) return AdjointHorizAnalysisEnum; else if (strcmp(name,"AgeAnalysis")==0) return AgeAnalysisEnum; else if (strcmp(name,"AggressiveMigration")==0) return AggressiveMigrationEnum; else if (strcmp(name,"AmrBamg")==0) return AmrBamgEnum; @@ -3478,6 +3488,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"BalancevelocityAnalysis")==0) return BalancevelocityAnalysisEnum; else if (strcmp(name,"BalancevelocitySolution")==0) return BalancevelocitySolutionEnum; else if (strcmp(name,"BasalforcingsIsmip6")==0) return BasalforcingsIsmip6Enum; + else if (strcmp(name,"BasalforcingsIsmip7")==0) return BasalforcingsIsmip7Enum; else if (strcmp(name,"BasalforcingsPico")==0) return BasalforcingsPicoEnum; else if 
(strcmp(name,"BeckmannGoosseFloatingMeltRate")==0) return BeckmannGoosseFloatingMeltRateEnum; else if (strcmp(name,"BedSlopeSolution")==0) return BedSlopeSolutionEnum; @@ -3551,6 +3562,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Element")==0) return ElementEnum; else if (strcmp(name,"ElementHook")==0) return ElementHookEnum; else if (strcmp(name,"ElementSId")==0) return ElementSIdEnum; + else if (strcmp(name,"EmulatorParam")==0) return EmulatorParamEnum; else if (strcmp(name,"EnthalpyAnalysis")==0) return EnthalpyAnalysisEnum; else if (strcmp(name,"EsaAnalysis")==0) return EsaAnalysisEnum; else if (strcmp(name,"EsaSolution")==0) return EsaSolutionEnum; @@ -3568,10 +3580,14 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"FixedTimestepping")==0) return FixedTimesteppingEnum; else if (strcmp(name,"FloatingArea")==0) return FloatingAreaEnum; else if (strcmp(name,"FloatingAreaScaled")==0) return FloatingAreaScaledEnum; - else if (strcmp(name,"FloatingMeltRate")==0) return FloatingMeltRateEnum; + else stage=30; + } + if(stage==30){ + if (strcmp(name,"FloatingMeltRate")==0) return FloatingMeltRateEnum; else if (strcmp(name,"Free")==0) return FreeEnum; else if (strcmp(name,"FreeSurfaceBaseAnalysis")==0) return FreeSurfaceBaseAnalysisEnum; else if (strcmp(name,"FreeSurfaceTopAnalysis")==0) return FreeSurfaceTopAnalysisEnum; + else if (strcmp(name,"FrictionEmulator")==0) return FrictionEmulatorEnum; else if (strcmp(name,"FrontalForcingsDefault")==0) return FrontalForcingsDefaultEnum; else if (strcmp(name,"FrontalForcingsRignot")==0) return FrontalForcingsRignotEnum; else if (strcmp(name,"FrontalForcingsRignotarma")==0) return FrontalForcingsRignotarmaEnum; @@ -3580,10 +3596,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"GLheightadvectionAnalysis")==0) return GLheightadvectionAnalysisEnum; else if (strcmp(name,"GaussPenta")==0) return GaussPentaEnum; else if 
(strcmp(name,"GaussSeg")==0) return GaussSegEnum; - else stage=30; - } - if(stage==30){ - if (strcmp(name,"GaussTetra")==0) return GaussTetraEnum; + else if (strcmp(name,"GaussTetra")==0) return GaussTetraEnum; else if (strcmp(name,"GaussTria")==0) return GaussTriaEnum; else if (strcmp(name,"GenericOption")==0) return GenericOptionEnum; else if (strcmp(name,"GenericParam")==0) return GenericParamEnum; @@ -3610,6 +3623,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"HydrologyPismAnalysis")==0) return HydrologyPismAnalysisEnum; else if (strcmp(name,"HydrologyShaktiAnalysis")==0) return HydrologyShaktiAnalysisEnum; else if (strcmp(name,"HydrologyShreveAnalysis")==0) return HydrologyShreveAnalysisEnum; + else if (strcmp(name,"HydrologyPrescribeAnalysis")==0) return HydrologyPrescribeAnalysisEnum; else if (strcmp(name,"HydrologySolution")==0) return HydrologySolutionEnum; else if (strcmp(name,"HydrologySubsteps")==0) return HydrologySubstepsEnum; else if (strcmp(name,"HydrologySubTime")==0) return HydrologySubTimeEnum; @@ -3617,6 +3631,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Hydrologypism")==0) return HydrologypismEnum; else if (strcmp(name,"Hydrologyshakti")==0) return HydrologyshaktiEnum; else if (strcmp(name,"Hydrologyshreve")==0) return HydrologyshreveEnum; + else if (strcmp(name,"Hydrologyprescribe")==0) return HydrologyprescribeEnum; else if (strcmp(name,"IceMass")==0) return IceMassEnum; else if (strcmp(name,"IceMassScaled")==0) return IceMassScaledEnum; else if (strcmp(name,"IceVolumeAboveFloatation")==0) return IceVolumeAboveFloatationEnum; @@ -3688,7 +3703,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"Matestar")==0) return MatestarEnum; else if (strcmp(name,"Matice")==0) return MaticeEnum; else if (strcmp(name,"Matlitho")==0) return MatlithoEnum; - else if (strcmp(name,"Mathydro")==0) return MathydroEnum; + else stage=31; + } + if(stage==31){ + 
if (strcmp(name,"Mathydro")==0) return MathydroEnum; else if (strcmp(name,"MatrixParam")==0) return MatrixParamEnum; else if (strcmp(name,"MaxAbsVx")==0) return MaxAbsVxEnum; else if (strcmp(name,"MaxAbsVy")==0) return MaxAbsVyEnum; @@ -3703,10 +3721,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"MeshElements")==0) return MeshElementsEnum; else if (strcmp(name,"MeshX")==0) return MeshXEnum; else if (strcmp(name,"MeshY")==0) return MeshYEnum; - else stage=31; - } - if(stage==31){ - if (strcmp(name,"MinVel")==0) return MinVelEnum; + else if (strcmp(name,"MinVel")==0) return MinVelEnum; else if (strcmp(name,"MinVx")==0) return MinVxEnum; else if (strcmp(name,"MinVy")==0) return MinVyEnum; else if (strcmp(name,"MinVz")==0) return MinVzEnum; @@ -3781,6 +3796,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"SMBpdd")==0) return SMBpddEnum; else if (strcmp(name,"SMBpddSicopolis")==0) return SMBpddSicopolisEnum; else if (strcmp(name,"SMBpddGCM")==0) return SMBpddGCMEnum; + else if (strcmp(name,"SMBpddFast")==0) return SMBpddFastEnum; else if (strcmp(name,"SMBsemic")==0) return SMBsemicEnum; else if (strcmp(name,"SSAApproximation")==0) return SSAApproximationEnum; else if (strcmp(name,"SSAFSApproximation")==0) return SSAFSApproximationEnum; @@ -3810,7 +3826,10 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"SpcStatic")==0) return SpcStaticEnum; else if (strcmp(name,"SpcTransient")==0) return SpcTransientEnum; else if (strcmp(name,"Sset")==0) return SsetEnum; - else if (strcmp(name,"StatisticsSolution")==0) return StatisticsSolutionEnum; + else stage=32; + } + if(stage==32){ + if (strcmp(name,"StatisticsSolution")==0) return StatisticsSolutionEnum; else if (strcmp(name,"SteadystateSolution")==0) return SteadystateSolutionEnum; else if (strcmp(name,"StressIntensityFactor")==0) return StressIntensityFactorEnum; else if (strcmp(name,"StressbalanceAnalysis")==0) return 
StressbalanceAnalysisEnum; @@ -3826,10 +3845,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"IntrusionMelt")==0) return IntrusionMeltEnum; else if (strcmp(name,"SubelementMelt1")==0) return SubelementMelt1Enum; else if (strcmp(name,"SubelementMelt2")==0) return SubelementMelt2Enum; - else stage=32; - } - if(stage==32){ - if (strcmp(name,"SubelementMigration")==0) return SubelementMigrationEnum; + else if (strcmp(name,"SubelementMigration")==0) return SubelementMigrationEnum; else if (strcmp(name,"SurfaceSlopeSolution")==0) return SurfaceSlopeSolutionEnum; else if (strcmp(name,"TaylorHood")==0) return TaylorHoodEnum; else if (strcmp(name,"Tetra")==0) return TetraEnum; @@ -3843,6 +3859,7 @@ int StringToEnumx(const char* name,bool notfounderror){ else if (strcmp(name,"TotalFloatingBmbScaled")==0) return TotalFloatingBmbScaledEnum; else if (strcmp(name,"TotalGroundedBmb")==0) return TotalGroundedBmbEnum; else if (strcmp(name,"TotalGroundedBmbScaled")==0) return TotalGroundedBmbScaledEnum; + else if (strcmp(name,"TotalHydrologyBasalFlux")==0) return TotalHydrologyBasalFluxEnum; else if (strcmp(name,"TotalSmb")==0) return TotalSmbEnum; else if (strcmp(name,"TotalSmbScaled")==0) return TotalSmbScaledEnum; else if (strcmp(name,"TotalSmbRefreeze")==0) return TotalSmbRefreezeEnum; diff --git a/src/c/shared/Enum/issmenums.jl b/src/c/shared/Enum/issmenums.jl index 98ba8ce2d..c18b02437 100644 --- a/src/c/shared/Enum/issmenums.jl +++ b/src/c/shared/Enum/issmenums.jl @@ -74,6 +74,9 @@ BasalforcingsIsmip6IsLocalEnum BasalforcingsIsmip6NumBasinsEnum BasalforcingsIsmip6TfDepthsEnum + BasalforcingsIsmip7TfDepthsEnum + BasalforcingsIsmip7NumBasinsEnum + BasalforcingsIsmip7GammaEnum BasalforcingsLinearNumBasinsEnum BasalforcingsLinearNumBreaksEnum BasalforcingsLinearNumParamsEnum @@ -455,6 +458,7 @@ SamplingRobinEnum SamplingSeedEnum SaveResultsEnum + SaveFinalResultsEnum SolidearthPartitionIceEnum SolidearthPartitionHydroEnum 
SolidearthPartitionOceanEnum @@ -763,6 +767,7 @@ BalancethicknessSpcthicknessEnum BalancethicknessThickeningRateEnum BasalCrevasseEnum + BasalforcingsCoriolisFEnum BasalforcingsDeepwaterMeltingRatearmaEnum BasalforcingsDeepwaterMeltingRateNoiseEnum BasalforcingsDeepwaterMeltingRateValuesAutoregressionEnum @@ -782,6 +787,10 @@ BasalforcingsIsmip6TfEnum BasalforcingsIsmip6TfShelfEnum BasalforcingsIsmip6MeltAnomalyEnum + BasalforcingsIsmip7TfEnum + BasalforcingsIsmip7TfShelfEnum + BasalforcingsIsmip7SalinityEnum + BasalforcingsIsmip7SalinityShelfEnum BasalforcingsMeltrateFactorEnum BasalforcingsOceanSalinityEnum BasalforcingsOceanTempEnum @@ -1177,6 +1186,7 @@ SmbIsInitializedEnum SmbMAddEnum SmbMappedforcingpointEnum + SmbMappedforcingprecipscalingEnum SmbMassBalanceEnum SmbMassBalanceSnowEnum SmbMassBalanceIceEnum @@ -3381,6 +3391,7 @@ BalancevelocityAnalysisEnum BalancevelocitySolutionEnum BasalforcingsIsmip6Enum + BasalforcingsIsmip7Enum BasalforcingsPicoEnum BeckmannGoosseFloatingMeltRateEnum BedSlopeSolutionEnum @@ -3454,6 +3465,7 @@ ElementEnum ElementHookEnum ElementSIdEnum + EmulatorParamEnum EnthalpyAnalysisEnum EsaAnalysisEnum EsaSolutionEnum @@ -3475,6 +3487,7 @@ FreeEnum FreeSurfaceBaseAnalysisEnum FreeSurfaceTopAnalysisEnum + FrictionEmulatorEnum FrontalForcingsDefaultEnum FrontalForcingsRignotEnum FrontalForcingsRignotarmaEnum @@ -3510,6 +3523,7 @@ HydrologyPismAnalysisEnum HydrologyShaktiAnalysisEnum HydrologyShreveAnalysisEnum + HydrologyPrescribeAnalysisEnum HydrologySolutionEnum HydrologySubstepsEnum HydrologySubTimeEnum @@ -3517,6 +3531,7 @@ HydrologypismEnum HydrologyshaktiEnum HydrologyshreveEnum + HydrologyprescribeEnum IceMassEnum IceMassScaledEnum IceVolumeAboveFloatationEnum @@ -3678,6 +3693,7 @@ SMBpddEnum SMBpddSicopolisEnum SMBpddGCMEnum + SMBpddFastEnum SMBsemicEnum SSAApproximationEnum SSAFSApproximationEnum @@ -3737,6 +3753,7 @@ TotalFloatingBmbScaledEnum TotalGroundedBmbEnum TotalGroundedBmbScaledEnum + TotalHydrologyBasalFluxEnum 
TotalSmbEnum TotalSmbScaledEnum TotalSmbRefreezeEnum @@ -3853,6 +3870,9 @@ function EnumToString(enum::IssmEnum) if(enum==BasalforcingsIsmip6IsLocalEnum) return "BasalforcingsIsmip6IsLocal" end if(enum==BasalforcingsIsmip6NumBasinsEnum) return "BasalforcingsIsmip6NumBasins" end if(enum==BasalforcingsIsmip6TfDepthsEnum) return "BasalforcingsIsmip6TfDepths" end + if(enum==BasalforcingsIsmip7TfDepthsEnum) return "BasalforcingsIsmip7TfDepths" end + if(enum==BasalforcingsIsmip7NumBasinsEnum) return "BasalforcingsIsmip7NumBasins" end + if(enum==BasalforcingsIsmip7GammaEnum) return "BasalforcingsIsmip7Gamma" end if(enum==BasalforcingsLinearNumBasinsEnum) return "BasalforcingsLinearNumBasins" end if(enum==BasalforcingsLinearNumBreaksEnum) return "BasalforcingsLinearNumBreaks" end if(enum==BasalforcingsLinearNumParamsEnum) return "BasalforcingsLinearNumParams" end @@ -4234,6 +4254,7 @@ function EnumToString(enum::IssmEnum) if(enum==SamplingRobinEnum) return "SamplingRobin" end if(enum==SamplingSeedEnum) return "SamplingSeed" end if(enum==SaveResultsEnum) return "SaveResults" end + if(enum==SaveFinalResultsEnum) return "SaveFinalResults" end if(enum==SolidearthPartitionIceEnum) return "SolidearthPartitionIce" end if(enum==SolidearthPartitionHydroEnum) return "SolidearthPartitionHydro" end if(enum==SolidearthPartitionOceanEnum) return "SolidearthPartitionOcean" end @@ -4542,6 +4563,7 @@ function EnumToString(enum::IssmEnum) if(enum==BalancethicknessSpcthicknessEnum) return "BalancethicknessSpcthickness" end if(enum==BalancethicknessThickeningRateEnum) return "BalancethicknessThickeningRate" end if(enum==BasalCrevasseEnum) return "BasalCrevasse" end + if(enum==BasalforcingsCoriolisFEnum) return "BasalforcingsCoriolisF" end if(enum==BasalforcingsDeepwaterMeltingRatearmaEnum) return "BasalforcingsDeepwaterMeltingRatearma" end if(enum==BasalforcingsDeepwaterMeltingRateNoiseEnum) return "BasalforcingsDeepwaterMeltingRateNoise" end 
if(enum==BasalforcingsDeepwaterMeltingRateValuesAutoregressionEnum) return "BasalforcingsDeepwaterMeltingRateValuesAutoregression" end @@ -4561,6 +4583,10 @@ function EnumToString(enum::IssmEnum) if(enum==BasalforcingsIsmip6TfEnum) return "BasalforcingsIsmip6Tf" end if(enum==BasalforcingsIsmip6TfShelfEnum) return "BasalforcingsIsmip6TfShelf" end if(enum==BasalforcingsIsmip6MeltAnomalyEnum) return "BasalforcingsIsmip6MeltAnomaly" end + if(enum==BasalforcingsIsmip7TfEnum) return "BasalforcingsIsmip7Tf" end + if(enum==BasalforcingsIsmip7TfShelfEnum) return "BasalforcingsIsmip7TfShelf" end + if(enum==BasalforcingsIsmip7SalinityEnum) return "BasalforcingsIsmip7Salinity" end + if(enum==BasalforcingsIsmip7SalinityShelfEnum) return "BasalforcingsIsmip7SalinityShelf" end if(enum==BasalforcingsMeltrateFactorEnum) return "BasalforcingsMeltrateFactor" end if(enum==BasalforcingsOceanSalinityEnum) return "BasalforcingsOceanSalinity" end if(enum==BasalforcingsOceanTempEnum) return "BasalforcingsOceanTemp" end @@ -4956,6 +4982,7 @@ function EnumToString(enum::IssmEnum) if(enum==SmbIsInitializedEnum) return "SmbIsInitialized" end if(enum==SmbMAddEnum) return "SmbMAdd" end if(enum==SmbMappedforcingpointEnum) return "SmbMappedforcingpoint" end + if(enum==SmbMappedforcingprecipscalingEnum) return "SmbMappedforcingprecipscaling" end if(enum==SmbMassBalanceEnum) return "SmbMassBalance" end if(enum==SmbMassBalanceSnowEnum) return "SmbMassBalanceSnow" end if(enum==SmbMassBalanceIceEnum) return "SmbMassBalanceIce" end @@ -7160,6 +7187,7 @@ function EnumToString(enum::IssmEnum) if(enum==BalancevelocityAnalysisEnum) return "BalancevelocityAnalysis" end if(enum==BalancevelocitySolutionEnum) return "BalancevelocitySolution" end if(enum==BasalforcingsIsmip6Enum) return "BasalforcingsIsmip6" end + if(enum==BasalforcingsIsmip7Enum) return "BasalforcingsIsmip7" end if(enum==BasalforcingsPicoEnum) return "BasalforcingsPico" end if(enum==BeckmannGoosseFloatingMeltRateEnum) return 
"BeckmannGoosseFloatingMeltRate" end if(enum==BedSlopeSolutionEnum) return "BedSlopeSolution" end @@ -7233,6 +7261,7 @@ function EnumToString(enum::IssmEnum) if(enum==ElementEnum) return "Element" end if(enum==ElementHookEnum) return "ElementHook" end if(enum==ElementSIdEnum) return "ElementSId" end + if(enum==EmulatorParamEnum) return "EmulatorParam" end if(enum==EnthalpyAnalysisEnum) return "EnthalpyAnalysis" end if(enum==EsaAnalysisEnum) return "EsaAnalysis" end if(enum==EsaSolutionEnum) return "EsaSolution" end @@ -7254,6 +7283,7 @@ function EnumToString(enum::IssmEnum) if(enum==FreeEnum) return "Free" end if(enum==FreeSurfaceBaseAnalysisEnum) return "FreeSurfaceBaseAnalysis" end if(enum==FreeSurfaceTopAnalysisEnum) return "FreeSurfaceTopAnalysis" end + if(enum==FrictionEmulatorEnum) return "FrictionEmulator" end if(enum==FrontalForcingsDefaultEnum) return "FrontalForcingsDefault" end if(enum==FrontalForcingsRignotEnum) return "FrontalForcingsRignot" end if(enum==FrontalForcingsRignotarmaEnum) return "FrontalForcingsRignotarma" end @@ -7289,6 +7319,7 @@ function EnumToString(enum::IssmEnum) if(enum==HydrologyPismAnalysisEnum) return "HydrologyPismAnalysis" end if(enum==HydrologyShaktiAnalysisEnum) return "HydrologyShaktiAnalysis" end if(enum==HydrologyShreveAnalysisEnum) return "HydrologyShreveAnalysis" end + if(enum==HydrologyPrescribeAnalysisEnum) return "HydrologyPrescribeAnalysis" end if(enum==HydrologySolutionEnum) return "HydrologySolution" end if(enum==HydrologySubstepsEnum) return "HydrologySubsteps" end if(enum==HydrologySubTimeEnum) return "HydrologySubTime" end @@ -7296,6 +7327,7 @@ function EnumToString(enum::IssmEnum) if(enum==HydrologypismEnum) return "Hydrologypism" end if(enum==HydrologyshaktiEnum) return "Hydrologyshakti" end if(enum==HydrologyshreveEnum) return "Hydrologyshreve" end + if(enum==HydrologyprescribeEnum) return "Hydrologyprescribe" end if(enum==IceMassEnum) return "IceMass" end if(enum==IceMassScaledEnum) return "IceMassScaled" 
end if(enum==IceVolumeAboveFloatationEnum) return "IceVolumeAboveFloatation" end @@ -7457,6 +7489,7 @@ function EnumToString(enum::IssmEnum) if(enum==SMBpddEnum) return "SMBpdd" end if(enum==SMBpddSicopolisEnum) return "SMBpddSicopolis" end if(enum==SMBpddGCMEnum) return "SMBpddGCM" end + if(enum==SMBpddFastEnum) return "SMBpddFast" end if(enum==SMBsemicEnum) return "SMBsemic" end if(enum==SSAApproximationEnum) return "SSAApproximation" end if(enum==SSAFSApproximationEnum) return "SSAFSApproximation" end @@ -7516,6 +7549,7 @@ function EnumToString(enum::IssmEnum) if(enum==TotalFloatingBmbScaledEnum) return "TotalFloatingBmbScaled" end if(enum==TotalGroundedBmbEnum) return "TotalGroundedBmb" end if(enum==TotalGroundedBmbScaledEnum) return "TotalGroundedBmbScaled" end + if(enum==TotalHydrologyBasalFluxEnum) return "TotalHydrologyBasalFlux" end if(enum==TotalSmbEnum) return "TotalSmb" end if(enum==TotalSmbScaledEnum) return "TotalSmbScaled" end if(enum==TotalSmbRefreezeEnum) return "TotalSmbRefreeze" end @@ -7632,6 +7666,9 @@ function StringToEnum(name::String) if(name=="BasalforcingsIsmip6IsLocal") return BasalforcingsIsmip6IsLocalEnum end if(name=="BasalforcingsIsmip6NumBasins") return BasalforcingsIsmip6NumBasinsEnum end if(name=="BasalforcingsIsmip6TfDepths") return BasalforcingsIsmip6TfDepthsEnum end + if(name=="BasalforcingsIsmip7TfDepths") return BasalforcingsIsmip7TfDepthsEnum end + if(name=="BasalforcingsIsmip7NumBasins") return BasalforcingsIsmip7NumBasinsEnum end + if(name=="BasalforcingsIsmip7Gamma") return BasalforcingsIsmip7GammaEnum end if(name=="BasalforcingsLinearNumBasins") return BasalforcingsLinearNumBasinsEnum end if(name=="BasalforcingsLinearNumBreaks") return BasalforcingsLinearNumBreaksEnum end if(name=="BasalforcingsLinearNumParams") return BasalforcingsLinearNumParamsEnum end @@ -8013,6 +8050,7 @@ function StringToEnum(name::String) if(name=="SamplingRobin") return SamplingRobinEnum end if(name=="SamplingSeed") return SamplingSeedEnum end 
if(name=="SaveResults") return SaveResultsEnum end + if(name=="SaveFinalResults") return SaveFinalResultsEnum end if(name=="SolidearthPartitionIce") return SolidearthPartitionIceEnum end if(name=="SolidearthPartitionHydro") return SolidearthPartitionHydroEnum end if(name=="SolidearthPartitionOcean") return SolidearthPartitionOceanEnum end @@ -8321,6 +8359,7 @@ function StringToEnum(name::String) if(name=="BalancethicknessSpcthickness") return BalancethicknessSpcthicknessEnum end if(name=="BalancethicknessThickeningRate") return BalancethicknessThickeningRateEnum end if(name=="BasalCrevasse") return BasalCrevasseEnum end + if(name=="BasalforcingsCoriolisF") return BasalforcingsCoriolisFEnum end if(name=="BasalforcingsDeepwaterMeltingRatearma") return BasalforcingsDeepwaterMeltingRatearmaEnum end if(name=="BasalforcingsDeepwaterMeltingRateNoise") return BasalforcingsDeepwaterMeltingRateNoiseEnum end if(name=="BasalforcingsDeepwaterMeltingRateValuesAutoregression") return BasalforcingsDeepwaterMeltingRateValuesAutoregressionEnum end @@ -8340,6 +8379,10 @@ function StringToEnum(name::String) if(name=="BasalforcingsIsmip6Tf") return BasalforcingsIsmip6TfEnum end if(name=="BasalforcingsIsmip6TfShelf") return BasalforcingsIsmip6TfShelfEnum end if(name=="BasalforcingsIsmip6MeltAnomaly") return BasalforcingsIsmip6MeltAnomalyEnum end + if(name=="BasalforcingsIsmip7Tf") return BasalforcingsIsmip7TfEnum end + if(name=="BasalforcingsIsmip7TfShelf") return BasalforcingsIsmip7TfShelfEnum end + if(name=="BasalforcingsIsmip7Salinity") return BasalforcingsIsmip7SalinityEnum end + if(name=="BasalforcingsIsmip7SalinityShelf") return BasalforcingsIsmip7SalinityShelfEnum end if(name=="BasalforcingsMeltrateFactor") return BasalforcingsMeltrateFactorEnum end if(name=="BasalforcingsOceanSalinity") return BasalforcingsOceanSalinityEnum end if(name=="BasalforcingsOceanTemp") return BasalforcingsOceanTempEnum end @@ -8735,6 +8778,7 @@ function StringToEnum(name::String) 
if(name=="SmbIsInitialized") return SmbIsInitializedEnum end if(name=="SmbMAdd") return SmbMAddEnum end if(name=="SmbMappedforcingpoint") return SmbMappedforcingpointEnum end + if(name=="SmbMappedforcingprecipscaling") return SmbMappedforcingprecipscalingEnum end if(name=="SmbMassBalance") return SmbMassBalanceEnum end if(name=="SmbMassBalanceSnow") return SmbMassBalanceSnowEnum end if(name=="SmbMassBalanceIce") return SmbMassBalanceIceEnum end @@ -10939,6 +10983,7 @@ function StringToEnum(name::String) if(name=="BalancevelocityAnalysis") return BalancevelocityAnalysisEnum end if(name=="BalancevelocitySolution") return BalancevelocitySolutionEnum end if(name=="BasalforcingsIsmip6") return BasalforcingsIsmip6Enum end + if(name=="BasalforcingsIsmip7") return BasalforcingsIsmip7Enum end if(name=="BasalforcingsPico") return BasalforcingsPicoEnum end if(name=="BeckmannGoosseFloatingMeltRate") return BeckmannGoosseFloatingMeltRateEnum end if(name=="BedSlopeSolution") return BedSlopeSolutionEnum end @@ -11012,6 +11057,7 @@ function StringToEnum(name::String) if(name=="Element") return ElementEnum end if(name=="ElementHook") return ElementHookEnum end if(name=="ElementSId") return ElementSIdEnum end + if(name=="EmulatorParam") return EmulatorParamEnum end if(name=="EnthalpyAnalysis") return EnthalpyAnalysisEnum end if(name=="EsaAnalysis") return EsaAnalysisEnum end if(name=="EsaSolution") return EsaSolutionEnum end @@ -11033,6 +11079,7 @@ function StringToEnum(name::String) if(name=="Free") return FreeEnum end if(name=="FreeSurfaceBaseAnalysis") return FreeSurfaceBaseAnalysisEnum end if(name=="FreeSurfaceTopAnalysis") return FreeSurfaceTopAnalysisEnum end + if(name=="FrictionEmulator") return FrictionEmulatorEnum end if(name=="FrontalForcingsDefault") return FrontalForcingsDefaultEnum end if(name=="FrontalForcingsRignot") return FrontalForcingsRignotEnum end if(name=="FrontalForcingsRignotarma") return FrontalForcingsRignotarmaEnum end @@ -11068,6 +11115,7 @@ function 
StringToEnum(name::String) if(name=="HydrologyPismAnalysis") return HydrologyPismAnalysisEnum end if(name=="HydrologyShaktiAnalysis") return HydrologyShaktiAnalysisEnum end if(name=="HydrologyShreveAnalysis") return HydrologyShreveAnalysisEnum end + if(name=="HydrologyPrescribeAnalysis") return HydrologyPrescribeAnalysisEnum end if(name=="HydrologySolution") return HydrologySolutionEnum end if(name=="HydrologySubsteps") return HydrologySubstepsEnum end if(name=="HydrologySubTime") return HydrologySubTimeEnum end @@ -11075,6 +11123,7 @@ function StringToEnum(name::String) if(name=="Hydrologypism") return HydrologypismEnum end if(name=="Hydrologyshakti") return HydrologyshaktiEnum end if(name=="Hydrologyshreve") return HydrologyshreveEnum end + if(name=="Hydrologyprescribe") return HydrologyprescribeEnum end if(name=="IceMass") return IceMassEnum end if(name=="IceMassScaled") return IceMassScaledEnum end if(name=="IceVolumeAboveFloatation") return IceVolumeAboveFloatationEnum end @@ -11236,6 +11285,7 @@ function StringToEnum(name::String) if(name=="SMBpdd") return SMBpddEnum end if(name=="SMBpddSicopolis") return SMBpddSicopolisEnum end if(name=="SMBpddGCM") return SMBpddGCMEnum end + if(name=="SMBpddFast") return SMBpddFastEnum end if(name=="SMBsemic") return SMBsemicEnum end if(name=="SSAApproximation") return SSAApproximationEnum end if(name=="SSAFSApproximation") return SSAFSApproximationEnum end @@ -11295,6 +11345,7 @@ function StringToEnum(name::String) if(name=="TotalFloatingBmbScaled") return TotalFloatingBmbScaledEnum end if(name=="TotalGroundedBmb") return TotalGroundedBmbEnum end if(name=="TotalGroundedBmbScaled") return TotalGroundedBmbScaledEnum end + if(name=="TotalHydrologyBasalFlux") return TotalHydrologyBasalFluxEnum end if(name=="TotalSmb") return TotalSmbEnum end if(name=="TotalSmbScaled") return TotalSmbScaledEnum end if(name=="TotalSmbRefreeze") return TotalSmbRefreezeEnum end diff --git a/src/c/shared/MemOps/MemOps.h 
b/src/c/shared/MemOps/MemOps.h index fd53a9cc5..088d9f587 100644 --- a/src/c/shared/MemOps/MemOps.h +++ b/src/c/shared/MemOps/MemOps.h @@ -91,7 +91,7 @@ template void xDelete(T**& aT_pp) {/*{{{*/ delete [](*aT_pp); delete [](aT_pp); #else - free((void*)*aT_pp) + free((void*)*aT_pp); free((void**)aT_pp); #endif } diff --git a/src/c/shared/io/Marshalling/IoCodeConversions.cpp b/src/c/shared/io/Marshalling/IoCodeConversions.cpp index 5fa02a506..ae7cb6912 100644 --- a/src/c/shared/io/Marshalling/IoCodeConversions.cpp +++ b/src/c/shared/io/Marshalling/IoCodeConversions.cpp @@ -268,6 +268,7 @@ int IoCodeToEnumSMB(int enum_in){/*{{{*/ case 13: return SMBarmaEnum; case 14: return SMBdebrisEvattEnum; case 15: return SMBpddGCMEnum; + case 16: return SMBpddFastEnum; default: _error_("Marshalled SMB code \""<',0); + if prod(size(self.mappedforcingprecipscaling))==1 + disp('WARNING:smb.mappedforcingprecipscaling is now a vector of mapped elements. Set to md.smb.mappedforcingprecipscaling*ones(size(md.smb.mappedforcingelevation)).'); + end + end end md = checkfield(md,'fieldname','smb.aIdx','NaN',1,'Inf',1,'values',[0,1,2,3,4]); @@ -618,6 +632,9 @@ function marshall(self,prefix,md,fid) % {{{ WriteData(fid,prefix,'object',self,'class','smb','fieldname','mappedforcingelevation','format','DoubleMat','mattype',3); WriteData(fid,prefix,'object',self,'class','smb','fieldname','lapseTaValue','format','DoubleMat','mattype',3); WriteData(fid,prefix,'object',self,'class','smb','fieldname','lapsedlwrfValue','format','DoubleMat','mattype',3); + if (self.isprecipforcingremapped) + WriteData(fid,prefix,'object',self,'class','smb','fieldname','mappedforcingprecipscaling','format','DoubleMat','mattype',2); + end end %figure out dt from forcings: diff --git a/src/m/classes/SMBgemb.py b/src/m/classes/SMBgemb.py index ed05e7407..3ad480cf3 100644 --- a/src/m/classes/SMBgemb.py +++ b/src/m/classes/SMBgemb.py @@ -56,6 +56,8 @@ def __init__(self, *args): # {{{ self.dulwrfValue = np.nan #Delta with 
which to perturb the long wave radiation upwards. Use if isdeltaLWup is true. self.mappedforcingpoint = np.nan #Mapping of which forcing point will map to each mesh element (integer). Of size number of elements. Use if ismappedforcing is true. self.mappedforcingelevation = np.nan #The elevation of each mapped forcing location (m above sea level). Of size number of forcing points. Use if ismappedforcing is true. + self.mappedforcingprecipscaling = np.nan #Map of a precipitation multiplier correction term to be applied to forcing P. Of size number of elements. Use if ismappedforcing is true and isprecipforcingremapped is true. (Default value is 1) + self.lapseTaValue = np.nan #Temperature lapse rate if forcing has different grid and should be remapped. Use if ismappedforcing is true. (Default value is -0.006 K m-1., vector of mapping points) self.lapsedlwrfValue = np.nan #Longwave down lapse rate if forcing has different grid and should be remapped. Use if ismappedforcing is true. Where set to 0, dlwrf will scale with a constant effective atmospheric emissivity. (Default value is -0.032 W m-2 m-1., vector of mapping points) @@ -224,6 +226,7 @@ def __repr__(self): # {{{ s += '{}\n'.format(fielddisplay(self,'mappedforcingpoint','Mapping of which forcing point will map to each mesh element for ismappedforcing option (integer). Size number of elements.')) s += '{}\n'.format(fielddisplay(self,'mappedforcingelevation','The elevation of each mapped forcing location (m above sea level) for ismappedforcing option. Size number of forcing points.')) + s += '{}\n'.format(fielddisplay(self,'mappedforcingprecipscaling','Map of a precipitation multiplier correction term to be applied to forcing P when ismappedforcing and isprecipforcingremapped options are true. Size number of elements. 
(Default is 1)')) s += '{}\n'.format(fielddisplay(self,'lapseTaValue','Temperature lapse rate of each mapped forcing location, if forcing has different grid and should be remapped for ismappedforcing option. (Default value is -0.006 K m-1, vector of mapping points)')) s += '{}\n'.format(fielddisplay(self,'lapsedlwrfValue','Longwave down lapse rate of each mapped forcing location if forcing has different grid and should be remapped for ismappedforcing option. Where set to 0, dlwrf will scale with a constant effective atmospheric emissivity. (Default value is -0.032 W m-2 m-1., vector of mapping points)')) @@ -330,6 +333,8 @@ def extrude(self, md): # {{{ self.teValue = project3d(md, 'vector', self.teValue, 'type', 'element') if not np.isnan(self.mappedforcingpoint): self.mappedforcingpoint = project3d(md, 'vector', self.mappedforcingpoint, 'type', 'element') + if not np.isnan(self.mappedforcingprecipscaling): + self.mappedforcingprecipscaling = project3d(md, 'vector', self.mappedforcingprecipscaling, 'type', 'element') return self # }}} @@ -387,6 +392,7 @@ def setdefaultparameters(self, mesh): # {{{ self.dulwrfValue = np.zeros((mesh.numberofelements,)) self.lapseTaValue = -0.006 self.lapsedlwrfValue = -0.032 + self.mappedforcingprecipscaling = 1.0 self.dswdiffrf = 0.0 * np.ones(mesh.numberofelements,) self.szaValue = 0.0 * np.ones(mesh.numberofelements,) @@ -458,6 +464,10 @@ def checkconsistency(self, md, solution, analyses): # {{{ md = checkfield(md, 'fieldname', 'smb.lapseTaValue', 'size',[sizeta[0]-1],'NaN',1,'Inf',1); md = checkfield(md, 'fieldname', 'smb.lapsedlwrfValue', 'size',[sizeta[0]-1], 'NaN',1,'Inf',1); + if self.isprecipforcingremapped: + md = checkfield(md,'fieldname','smb.mappedforcingprecipscaling','size',[md.mesh.numberofelements],'NaN',1,'Inf',1,'>',0) + if np.prod(np.shape(self.mappedforcingprecipscaling))==1: + print("WARNING:smb.mappedforcingprecipscaling is now a vector of mapped elements. 
Set to md.smb.mappedforcingprecipscaling*ones(size(md.smb.mappedforcingelevation)).") md = checkfield(md, 'fieldname', 'smb.aIdx', 'NaN', 1, 'Inf', 1, 'values', [0, 1, 2, 3, 4]) md = checkfield(md, 'fieldname', 'smb.eIdx', 'NaN', 1, 'Inf', 1, 'values', [0, 1, 2]) @@ -592,6 +602,8 @@ def marshall(self, prefix, md, fid): # {{{ WriteData(fid,prefix,'object',self,'class','smb','fieldname','mappedforcingelevation','format','DoubleMat','mattype',3) WriteData(fid,prefix,'object',self,'class','smb','fieldname','lapseTaValue','format','DoubleMat','mattype',3) WriteData(fid,prefix,'object',self,'class','smb','fieldname','lapsedlwrfValue','format','DoubleMat','mattype',3) + if self.isprecipforcingremapped: + WriteData(fid,prefix,'object',self,'class','smb','fieldname','mappedforcingprecipscaling','format','DoubleMat','mattype',2) # Figure out dt from forcings if (np.any(self.P[-1] - self.Ta[-1] != 0) | np.any(self.V[-1] - self.Ta[-1] != 0) | np.any(self.dswrf[-1] - self.Ta[-1] != 0) | np.any(self.dlwrf[-1] - self.Ta[-1] != 0) | np.any(self.eAir[-1] - self.Ta[-1] != 0) | np.any(self.pAir[-1] - self.Ta[-1] != 0)): diff --git a/src/m/classes/SMBpddFast.m b/src/m/classes/SMBpddFast.m new file mode 100644 index 000000000..e414650b9 --- /dev/null +++ b/src/m/classes/SMBpddFast.m @@ -0,0 +1,159 @@ +%SMBpddFast Class definition +% +% Usage: +% SMBpddFast=SMBpddFast(); + +classdef SMBpddFast + properties (SetAccess=public) + + precipitation = NaN; + monthlytemperatures = NaN; + temperature_anomaly = NaN; + precipitation_anomaly = NaN; + smb_corr = NaN; + desfac = 0; + s0p = NaN; + s0t = NaN; + rlaps = 0; + isfirnwarming = 0; + pdd_fac_ice = 0; + pdd_fac_snow = 0; + steps_per_step = 1 + averaging = 0 + requested_outputs = {}; + end + methods + function self = SMBpddFast(varargin) % {{{ + switch nargin + case 0 + self=setdefaultparameters(self); + case 1 + self=structtoobj(SMBpddFast(), varargin{1}); + otherwise + error('constructor not supported'); + end + end % }}} + function self = 
extrude(self,md) % {{{ + self.precipitation=project3d(md,'vector',self.precipitation,'type','node'); + self.monthlytemperatures=project3d(md,'vector',self.monthlytemperatures,'type','node'); + self.temperature_anomaly=project3d(md,'vector',self.temperature_anomaly,'type','node'); + self.precipitation_anomaly=project3d(md,'vector',self.precipitation_anomaly,'type','node'); + self.smb_corr=project3d(md,'vector',self.smb_corr,'type','node'); + self.s0p=project3d(md,'vector',self.s0p,'type','node'); + self.s0t=project3d(md,'vector',self.s0t,'type','node'); + + end % }}} + function list = defaultoutputs(self,md) % {{{ + list = {'SmbMassBalance'}; + end % }}} + function self = initialize(self,md) % {{{ + + if isnan(self.s0p), + self.s0p=zeros(md.mesh.numberofvertices,1); + disp(' no SMBpddFast.s0p specified: values set as zero'); + end + if isnan(self.s0t), + self.s0t=zeros(md.mesh.numberofvertices,1); + disp(' no SMBpddFast.s0t specified: values set as zero'); + end + if isnan(self.temperature_anomaly), + self.temperature_anomaly=zeros(md.mesh.numberofvertices,1); + disp(' no SMBpddFast.temperature_anomaly specified: values set as zero'); + end + if isnan(self.precipitation_anomaly), + self.precipitation_anomaly=ones(md.mesh.numberofvertices,1); + disp(' no SMBpddFast.precipitation_anomaly specified: values set as ones'); + end + if isnan(self.smb_corr), + self.smb_corr=zeros(md.mesh.numberofvertices,1); + disp(' no SMBpddFast.smb_corr specified: values set as zero'); + end + + end % }}} + function self = setdefaultparameters(self) % {{{ + + self.isfirnwarming = 1; + self.desfac = -log(2.0)/1000; + self.rlaps = 7.4; + self.pdd_fac_ice = 7.28; + self.pdd_fac_snow = 2.73; + self.requested_outputs={'default'}; + + end % }}} + function md = checkconsistency(self,md,solution,analyses) % {{{ + + if (strcmp(solution,'TransientSolution') & md.transient.issmb == 0), return; end + + if ismember('MasstransportAnalysis',analyses), + md = 
checkfield(md,'fieldname','smb.desfac','<=',1,'numel',1); + md = checkfield(md,'fieldname','smb.s0p','>=',0,'NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 1]); + md = checkfield(md,'fieldname','smb.s0t','>=',0,'NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 1]); + md = checkfield(md,'fieldname','smb.rlaps','>=',0,'numel',1); + md = checkfield(md,'fieldname','smb.monthlytemperatures','NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 12],'>' ,0., '<', 300.); + md = checkfield(md,'fieldname','smb.precipitation','NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 12],'>=' ,0.); + md = checkfield(md,'fieldname','smb.pdd_fac_ice','>',0,'numel',1); + md = checkfield(md,'fieldname','smb.pdd_fac_snow','>',0,'numel',1); + + end + md = checkfield(md,'fieldname','smb.steps_per_step','>=',1,'numel',[1]); + md = checkfield(md,'fieldname','smb.averaging', 'numel', [1], 'values', [0, 1, 2]); + md = checkfield(md,'fieldname','smb.requested_outputs','stringrow',1); + + end % }}} + function disp(self) % {{{ + disp(sprintf(' surface forcings parameters:')); + + disp(sprintf('\n MODIFIED SICOPOLIS PDD scheme (After Calov & Greve, 2005, modified by Alicia Bråtner) :')); + fielddisplay(self,'monthlytemperatures','monthly surface temperatures [K]'); + fielddisplay(self,'precipitation','monthly surface precipitation [m/yr water eq]'); + fielddisplay(self,'temperature_anomaly','anomaly to monthly reference temperature (additive; [K])'); + fielddisplay(self,'precipitation_anomaly','anomaly to monthly precipitation (multiplicative, e.g. 
q=q0*exp(0.070458*DeltaT) after Huybrechts (2002)); [no unit])'); + fielddisplay(self,'smb_corr','correction of smb after PDD call [m/a]'); + fielddisplay(self,'s0p','should be set to elevation from precip source (between 0 and a few 1000s m, default is 0) [m]'); + fielddisplay(self,'s0t','should be set to elevation from temperature source (between 0 and a few 1000s m, default is 0) [m]'); + fielddisplay(self,'rlaps','present day lapse rate (default is 7.4 degree/km)'); + fielddisplay(self,'desfac','desertification elevation factor (default is -log(2.0)/1000)'); + fielddisplay(self,'isfirnwarming','is firnwarming (Reeh 1991) activated (0 or 1, default is 1)'); + fielddisplay(self,'steps_per_step', 'number of smb steps per time step'); + fielddisplay(self,'averaging','averaging methods from short to long steps'); + fielddisplay(self,'pdd_fac_ice','Pdd factor for ice for all the domain [mm water equiv/day/degree C]'); + fielddisplay(self,'pdd_fac_snow','Pdd factor for snow for all the domain [mm water equiv/day/degree C]'); + disp(sprintf('%51s 0: Arithmetic (default)',' ')); + disp(sprintf('%51s 1: Geometric',' ')); + disp(sprintf('%51s 2: Harmonic',' ')); + fielddisplay(self,'requested_outputs','additional outputs requested (TemperaturePDD, SmbAccumulation, SmbMelt)'); + end % }}} + function marshall(self,prefix,md,fid) % {{{ + + yts=md.constants.yts; + + WriteData(fid,prefix,'name','md.smb.model','data',16,'format','Integer'); + + WriteData(fid,prefix,'object',self,'class','smb','fieldname','isfirnwarming','format','Boolean'); + WriteData(fid,prefix,'object',self,'class','smb','fieldname','desfac','format','Double'); + WriteData(fid,prefix,'object',self,'class','smb','fieldname','s0p','format','DoubleMat','mattype',1); + WriteData(fid,prefix,'object',self,'class','smb','fieldname','s0t','format','DoubleMat','mattype',1); + WriteData(fid,prefix,'object',self,'class','smb','fieldname','rlaps','format','Double'); + 
WriteData(fid,prefix,'object',self,'class','smb','fieldname','pdd_fac_ice','format','Double'); + WriteData(fid,prefix,'object',self,'class','smb','fieldname','pdd_fac_snow','format','Double'); + + WriteData(fid,prefix,'object',self,'class','smb','fieldname','monthlytemperatures','format','DoubleMat','mattype',1,'timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts); + WriteData(fid,prefix,'object',self,'class','smb','fieldname','precipitation','format','DoubleMat','mattype',1,'scale',1./yts,'timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts); + WriteData(fid,prefix,'object',self,'class','smb','fieldname','temperature_anomaly','format','DoubleMat','mattype',1,'timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts); + WriteData(fid,prefix,'object',self,'class','smb','fieldname','precipitation_anomaly','format','DoubleMat','mattype',1,'scale',1./yts,'timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts); + WriteData(fid,prefix,'object',self,'class','smb','fieldname','smb_corr','format','DoubleMat','mattype',1,'scale',1./yts,'timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts); + WriteData(fid, prefix, 'object', self, 'fieldname', 'steps_per_step', 'format', 'Integer'); + WriteData(fid, prefix, 'object', self, 'fieldname', 'averaging', 'format', 'Integer') + + %process requested outputs + outputs = self.requested_outputs; + pos = find(ismember(outputs,'default')); + if ~isempty(pos), + outputs(pos) = []; %remove 'default' from outputs + outputs = [outputs defaultoutputs(self,md)]; %add defaults + end + WriteData(fid,prefix,'data',outputs,'name','md.smb.requested_outputs','format','StringArray'); + + end % }}} + end +end diff --git a/src/m/classes/SMBpddSicopolis.m b/src/m/classes/SMBpddSicopolis.m index a94cee8a4..5799758d7 100644 --- a/src/m/classes/SMBpddSicopolis.m +++ b/src/m/classes/SMBpddSicopolis.m @@ -87,16 +87,17 @@ md = 
checkfield(md,'fieldname','smb.s0p','>=',0,'NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 1]); md = checkfield(md,'fieldname','smb.s0t','>=',0,'NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 1]); md = checkfield(md,'fieldname','smb.rlaps','>=',0,'numel',1); - md = checkfield(md,'fieldname','smb.monthlytemperatures','NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 12]); - md = checkfield(md,'fieldname','smb.precipitation','NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 12]); + md = checkfield(md,'fieldname','smb.monthlytemperatures','NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 12],'>' ,0., '<', 300.); + md = checkfield(md,'fieldname','smb.precipitation','NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 12],'>=' ,0.); md = checkfield(md,'fieldname','smb.pdd_fac_ice','>',0,'numel',1); md = checkfield(md,'fieldname','smb.pdd_fac_snow','>',0,'numel',1); - + md = checkfield(md,'fieldname','smb.smb_corr','NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 1]); + md = checkfield(md,'fieldname','smb.temperature_anomaly','NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 1]); + md = checkfield(md,'fieldname','smb.precipitation_anomaly','NaN',1,'Inf',1,'size',[md.mesh.numberofvertices 1]); end md = checkfield(md,'fieldname','smb.steps_per_step','>=',1,'numel',[1]); md = checkfield(md,'fieldname','smb.averaging', 'numel', [1], 'values', [0, 1, 2]); md = checkfield(md,'fieldname','smb.requested_outputs','stringrow',1); - end % }}} function disp(self) % {{{ disp(sprintf(' surface forcings parameters:')); @@ -112,10 +113,10 @@ function disp(self) % {{{ fielddisplay(self,'rlaps','present day lapse rate (default is 7.4 degree/km)'); fielddisplay(self,'desfac','desertification elevation factor (default is -log(2.0)/1000)'); fielddisplay(self,'isfirnwarming','is firnwarming (Reeh 1991) activated (0 or 1, default is 1)'); - fielddisplay(self, 'steps_per_step', 'number of smb steps per time step'); + fielddisplay(self,'steps_per_step', 'number of smb steps per time step'); 
fielddisplay(self,'averaging','averaging methods from short to long steps'); - fielddisplay(self,'pdd_fac_ice','Pdd factor for ice for all the domain [mm ice equiv/day/degree C]'); - fielddisplay(self,'pdd_fac_snow','Pdd factor for snow for all the domain [mm ice equiv/day/degree C]'); + fielddisplay(self,'pdd_fac_ice','Pdd factor for ice for all the domain [mm water equiv/day/degree C]'); + fielddisplay(self,'pdd_fac_snow','Pdd factor for snow for all the domain [mm water equiv/day/degree C]'); disp(sprintf('%51s 0: Arithmetic (default)',' ')); disp(sprintf('%51s 1: Geometric',' ')); disp(sprintf('%51s 2: Harmonic',' ')); diff --git a/src/m/classes/SMBpddSicopolis.py b/src/m/classes/SMBpddSicopolis.py index 502f1a785..0264ff7c9 100644 --- a/src/m/classes/SMBpddSicopolis.py +++ b/src/m/classes/SMBpddSicopolis.py @@ -53,8 +53,8 @@ def __repr__(self): # {{{ s += '{}\n'.format(fielddisplay(self, 'isfirnwarming', 'is firnwarming (Reeh 1991) activated (0 or 1, default is 1)')) s += '{}\n'.format(fielddisplay(self, 'steps_per_step', 'number of smb steps per time step')) s += '{}\n'.format(fielddisplay(self, 'averaging', 'averaging methods from short to long steps')) - s += '{}\n'.format(fielddisplay(self, 'pdd_fac_ice', 'Pdd factor for ice for all the domain [mm ice equiv/day/degree C]')) - s += '{}\n'.format(fielddisplay(self, 'pdd_fac_snow', 'Pdd factor for snow for all the domain [mm ice equiv/day/degree C]')) + s += '{}\n'.format(fielddisplay(self, 'pdd_fac_ice', 'Pdd factor for ice for all the domain [mm water equiv/day/degree C]')) + s += '{}\n'.format(fielddisplay(self, 'pdd_fac_snow', 'Pdd factor for snow for all the domain [mm water equiv/day/degree C]')) s += '\t\t{}\n'.format('0: Arithmetic (default)') s += '\t\t{}\n'.format('1: Geometric') s += '\t\t{}\n'.format('2: Harmonic') @@ -118,8 +118,8 @@ def checkconsistency(self, md, solution, analyses): # {{{ md = checkfield(md, 'fieldname', 'smb.s0p', '>=', 0, 'NaN', 1, 'Inf', 1, 'size', [md.mesh.numberofvertices, 
1]) md = checkfield(md, 'fieldname', 'smb.s0t', '>=', 0, 'NaN', 1, 'Inf', 1, 'size', [md.mesh.numberofvertices, 1]) md = checkfield(md, 'fieldname', 'smb.rlaps', '>=', 0, 'numel', 1) - md = checkfield(md, 'fieldname', 'smb.monthlytemperatures', 'NaN', 1, 'Inf', 1, 'size', [md.mesh.numberofvertices, 12]) - md = checkfield(md, 'fieldname', 'smb.precipitation', 'NaN', 1, 'Inf', 1, 'size', [md.mesh.numberofvertices, 12]) + md = checkfield(md, 'fieldname', 'smb.monthlytemperatures', 'NaN', 1, 'Inf', 1, 'size', [md.mesh.numberofvertices, 12], '>' ,0., '<', 300.) + md = checkfield(md, 'fieldname', 'smb.precipitation', 'NaN', 1, 'Inf', 1, 'size', [md.mesh.numberofvertices, 12],'>=' ,0.) md = checkfield(md, 'fieldname', 'smb.pdd_fac_ice', '>', 0, 'numel', 1) md = checkfield(md, 'fieldname', 'smb.pdd_fac_snow', '>', 0, 'numel', 1) md = checkfield(md, 'fieldname', 'smb.steps_per_step', '>=', 1, 'numel', [1]) diff --git a/src/m/classes/basalforcingsismip7.m b/src/m/classes/basalforcingsismip7.m new file mode 100644 index 000000000..4e0bfe16e --- /dev/null +++ b/src/m/classes/basalforcingsismip7.m @@ -0,0 +1,120 @@ +%ISMIP7 BASAL FORCINGS class definition +% +% Usage: +% basalforcingsismip7=basalforcingsismip7(); + +classdef basalforcingsismip7 + properties (SetAccess=public) + num_basins = 0; + basin_id = 0; + gamma = 0; + coriolis_f = NaN; + + salinity = NaN; + tf = NaN; + tf_depths = NaN; + + geothermalflux = NaN; + groundedice_melting_rate = NaN; + end + methods + function self = extrude(self,md) % {{{ + %self.tf=project3d(md,'vector',self.tf,'type','element','layer',1); + %self.delta_t=project3d(md,'vector',self.delta_t,'type','element','layer',1); + self.tf=project3d(md,'vector',self.tf,'type','node'); + self.salinity=project3d(md,'vector',self.salinity,'type','node'); + self.geothermalflux=project3d(md,'vector',self.geothermalflux,'type','element','layer',1); %bedrock only gets geothermal flux + 
self.groundedice_melting_rate=project3d(md,'vector',self.groundedice_melting_rate,'type','node','layer',1); + end % }}} + function self = basalforcingsismip7(varargin) % {{{ + switch nargin + case 0 + self=setdefaultparameters(self); + case 1 + self=setdefaultparameters(self); + self=structtoobj(self,varargin{1}); + otherwise + error('constructor not supported'); + end + end % }}} + function self = initialize(self,md) % {{{ + %Update fixed-coriolis parameter + if isempty(md.mesh.lat) || any(isnan(md.mesh.lat)), + disp(' no md.mesh.lat specified.'); + if md.mesh.epsg == 3031 % For Antarctica + [lat, lon] = xy2ll(md.mesh.x,md.mesh.y,-1); + elseif md.mesh.epsg == 3413 % For Greenland + [lat, lon] = xy2ll(md.mesh.x,md.mesh.y,1); + else + error(' md.mesh.lat not specified and cannot be calculated from md.mesh.epsg'); + end + else + lat = md.mesh.lat; + end + omega=7.2921e-5; %angular velocity of the Earth (rad/s) + self.coriolis_f=2*omega*sin(lat/180*pi); + + if self.gamma == 0, + self.gamma = 14477; + disp(' no basalforcings.gamma specified: value set to 14477 m/yr'); + end + if isnan(self.groundedice_melting_rate), + self.groundedice_melting_rate=zeros(md.mesh.numberofvertices,1); + disp(' no basalforcings.groundedice_melting_rate specified: values set as zero'); + end + + end % }}} + function self = setdefaultparameters(self) % {{{ + self.gamma = 0.0; % ?
+ end % }}} + function md = checkconsistency(self,md,solution,analyses) % {{{ + + md = checkfield(md,'fieldname','basalforcings.num_basins','numel',1,'NaN',1,'Inf',1,'>',0); + md = checkfield(md,'fieldname','basalforcings.basin_id','Inf',1,'>=',0,'<=',md.basalforcings.num_basins,'size',[md.mesh.numberofelements 1]); + + md = checkfield(md,'fieldname','basalforcings.gamma','numel',1,'NaN',1,'Inf',1,'>',0); + + md = checkfield(md,'fieldname','basalforcings.coriolis_f','size',[md.mesh.numberofvertices, 1],'NaN',1,'Inf',1); + md = checkfield(md,'fieldname','basalforcings.tf_depths','NaN',1,'Inf',1,'size',[1,NaN],'<=',0); + md = checkfield(md,'fieldname','basalforcings.geothermalflux','NaN',1,'Inf',1,'>=',0,'timeseries',1); + md = checkfield(md,'fieldname','basalforcings.groundedice_melting_rate','NaN',1,'Inf',1,'timeseries',1); + + md = checkfield(md,'fieldname','basalforcings.tf','size',[1,1,numel(md.basalforcings.tf_depths)]); + md = checkfield(md,'fieldname','basalforcings.salinity','size',[1,1,numel(md.basalforcings.tf_depths)]); + for i=1:numel(md.basalforcings.tf_depths) + md = checkfield(md,'fieldname',['basalforcings.tf{' num2str(i) '}'],'field',md.basalforcings.tf{i},'size',[md.mesh.numberofvertices+1 NaN],'NaN',1,'Inf',1,'>=',0,'timeseries',1); + md = checkfield(md,'fieldname',['basalforcings.salinity{' num2str(i) '}'],'field',md.basalforcings.salinity{i},'size',[md.mesh.numberofvertices+1 NaN],'NaN',1,'Inf',1,'>=',0,'timeseries',1); + end + + end % }}} + function disp(self) % {{{ + disp(sprintf(' ISMIP7 basal melt rate parameterization:')); + fielddisplay(self,'num_basins','[TODO] number of basins the model domain is partitioned into [unitless]'); + fielddisplay(self,'basin_id','[TODO] basin number assigned to each node (unitless)'); + fielddisplay(self,'gamma','melt rate coefficient (m/yr)'); + fielddisplay(self,'tf_depths','elevation of vertical layers in ocean thermal forcing dataset'); + fielddisplay(self,'tf','thermal forcing (ocean temperature minus 
freezing point) (degrees C)'); + fielddisplay(self,'salinity','salinity (psu)'); + fielddisplay(self,'coriolis_f','Coriolis parameter (s^-1)'); + fielddisplay(self,'geothermalflux','geothermal heat flux (W/m^2)'); + fielddisplay(self,'groundedice_melting_rate','basal melting rate (positive if melting) (m/yr)'); + + end % }}} + function marshall(self,prefix,md,fid) % {{{ + + yts=md.constants.yts; + + WriteData(fid,prefix,'name','md.basalforcings.model','data',10,'format','Integer'); + WriteData(fid,prefix,'object',self,'fieldname','num_basins','format','Integer'); + WriteData(fid,prefix,'object',self,'fieldname','basin_id','data',self.basin_id-1,'name','md.basalforcings.basin_id','format','IntMat','mattype',2); %0-indexed + WriteData(fid,prefix,'object',self,'fieldname','gamma','format','Double','scale',1./yts); + WriteData(fid,prefix,'object',self,'fieldname','coriolis_f','format','DoubleMat','name','md.basalforcings.coriolis_f','mattype',1); + WriteData(fid,prefix,'object',self,'fieldname','tf_depths','format','DoubleMat','name','md.basalforcings.tf_depths'); + WriteData(fid,prefix,'object',self,'fieldname','tf','format','MatArray','name','md.basalforcings.tf','timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts); + WriteData(fid,prefix,'object',self,'fieldname','salinity','format','MatArray','name','md.basalforcings.salinity','timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts); + WriteData(fid,prefix,'object',self,'fieldname','geothermalflux','format','DoubleMat','name','md.basalforcings.geothermalflux','mattype',1,'timeserieslength',md.mesh.numberofelements+1,'yts',md.constants.yts); + WriteData(fid,prefix,'object',self,'fieldname','groundedice_melting_rate','format','DoubleMat','mattype',1,'scale',1./yts,'timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts); + + end % }}} + end +end diff --git a/src/m/classes/basalforcingsismip7.py b/src/m/classes/basalforcingsismip7.py new file mode 100644 index 
000000000..8caed0655 --- /dev/null +++ b/src/m/classes/basalforcingsismip7.py @@ -0,0 +1,128 @@ +#!/usr/bin/env python3 +import numpy as np + +from checkfield import checkfield +from fielddisplay import fielddisplay +from project3d import project3d +from WriteData import WriteData +from xy2ll import xy2ll + +class basalforcingsismip7(object): + """ISMIP7 BASAL FORCINGS class definition + + Usage: + basalforcings = basalforcingsismip7() + """ + + def __init__(self,*args): # {{{ + self.num_basins = 0 + self.basin_id = 0 + self.gamma = 0 + self.coriolis_f = np.nan + + self.salinity = np.nan + self.tf = np.nan + self.tf_depths = np.nan + + self.geothermalflux = np.nan + self.groundedice_melting_rate = np.nan + + if len(args) == 0: + self.setdefaultparameters() + elif len(args) == 1: + self.setdefaultparameters() + + constructor = args[0] + for field in vars(self).keys(): + if field in constructor.__dict__.keys(): + setattr(self,field,getattr(constructor,field)) + else: + raise Exception('constructor not supported') + # }}} + def __repr__(self): # {{{ + s = ' ISMIP7 basal melt rate parameterization:\n' + s += '{}\n'.format(fielddisplay(self,'num_basins','[TODO] number of basins the model domain is partitioned into [unitless]')) + s += '{}\n'.format(fielddisplay(self,'basin_id','[TODO] basin number assigned to each node (unitless)')) + s += '{}\n'.format(fielddisplay(self,'gamma','melt rate coefficient (m/yr)')) + s += '{}\n'.format(fielddisplay(self,'tf_depths','elevation of vertical layers in ocean thermal forcing dataset')) + s += '{}\n'.format(fielddisplay(self,'tf','thermal forcing (ocean temperature minus freezing point) (degrees C)')) + s += '{}\n'.format(fielddisplay(self,'salinity','salinity (psu)')) + s += '{}\n'.format(fielddisplay(self,'coriolis_f','Coriolis parameter (s^-1)')) + s += '{}\n'.format(fielddisplay(self,'geothermalflux','geothermal heat flux (W/m^2)')) + s += '{}\n'.format(fielddisplay(self,'groundedice_melting_rate','basal melting rate 
(positive if melting) (m/yr)')) + return s + # }}} + def extrude(self, md): # {{{ + + self.tf = project3d(md,'vector',self.tf,'type','node') + self.salinity = project3d(md,'vector',self.salinity,'type','node') + + self.geothermalflux = project3d(md, 'vector', self.geothermalflux, 'type', 'node', 'layer', 1) # Bedrock only gets geothermal flux + self.groundedice_melting_rate = project3d(md, 'vector', self.groundedice_melting_rate, 'type', 'node', 'layer', 1) + return self + # }}} + def initialize(self, md): # {{{ + # Update fixed-coriolis parameter + if ~np.any(md.mesh.lat): + print(' no md.mesh.lat specified.') + if md.mesh.epsg == 3031: # For Antarctica + lat, lon = xy2ll(md.mesh.x,md.mesh.y,-1) + elif md.mesh.epsg == 3413: # For Greenland + lat, lon = xy2ll(md.mesh.x,md.mesh.y,1) + else: + raise Exception(' md.mesh.lat not specified and cannot be calculated from md.mesh.epsg') + else: + lat = md.mesh.lat[:] + + omega=7.2921e-5 #angular velocity of the Earth (rad/s) + self.coriolis_f=2*omega*np.sin(lat/180*np.pi) + + if self.gamma == 0: + self.gamma = 14477 + print(' no basalforcings.gamma specified: value set to 14477 m/yr') + if np.all(np.isnan(self.groundedice_melting_rate)): + self.groundedice_melting_rate = np.zeros((md.mesh.numberofvertices, )) + print(' no basalforcings.groundedice_melting_rate specified: values set as zero') + return self + # }}} + def setdefaultparameters(self): # {{{ + self.gamma = 0.0 + return self + # }}} + def checkconsistency(self, md, solution, analyses): # {{{ + + md = checkfield(md,'fieldname','basalforcings.num_basins','numel',1,'NaN',1,'Inf',1,'>',0) + md = checkfield(md,'fieldname','basalforcings.basin_id','Inf',1,'>=',0,'<=',md.basalforcings.num_basins,'size',[md.mesh.numberofelements, 1]) + + md = checkfield(md,'fieldname','basalforcings.gamma','numel',1,'NaN',1,'Inf',1,'>',0) + + md = checkfield(md,'fieldname','basalforcings.coriolis_f','size',[md.mesh.numberofvertices, 1],'NaN',1,'Inf',1) + md = 
checkfield(md,'fieldname','basalforcings.tf_depths','NaN',1,'Inf',1,'size',[1,np.nan],'<=',0) + md = checkfield(md,'fieldname','basalforcings.geothermalflux','NaN',1,'Inf',1,'>=',0,'timeseries',1) + md = checkfield(md,'fieldname','basalforcings.groundedice_melting_rate','NaN',1,'Inf',1,'timeseries',1) + + ndepths = np.shape(self.tf_depths)[1] + md = checkfield(md,'fieldname','basalforcings.tf','size',[ndepths]) + md = checkfield(md,'fieldname','basalforcings.salinity','size',[ndepths]) + for i in range(ndepths): + md = checkfield(md,'fieldname','basalforcings.tf[' + str(i) + ']','field',md.basalforcings.tf[i],'size',[md.mesh.numberofvertices+1, np.nan],'NaN',1,'Inf',1,'>=',0,'timeseries',1) + md = checkfield(md,'fieldname','basalforcings.salinity[' + str(i) + ']','field',md.basalforcings.salinity[i],'size',[md.mesh.numberofvertices+1, np.nan],'NaN',1,'Inf',1,'>=',0,'timeseries',1) + + return md + # }}} + def marshall(self, prefix, md, fid): # {{{ + + yts = md.constants.yts + + WriteData(fid,prefix,'name','md.basalforcings.model','data',10,'format','Integer') + WriteData(fid,prefix,'object',self,'fieldname','num_basins','format','Integer') + WriteData(fid,prefix,'object',self,'fieldname','basin_id','data',self.basin_id-1,'name','md.basalforcings.basin_id','format','IntMat','mattype',2) #0-indexed + WriteData(fid,prefix,'object',self,'fieldname','gamma','format','Double','scale',1/yts) + WriteData(fid,prefix,'object',self,'fieldname','coriolis_f','format','DoubleMat','name','md.basalforcings.coriolis_f','mattype',1) + WriteData(fid,prefix,'object',self,'fieldname','tf_depths','format','DoubleMat','name','md.basalforcings.tf_depths') + WriteData(fid,prefix,'object',self,'fieldname','tf','format','MatArray','name','md.basalforcings.tf','timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts) + 
WriteData(fid,prefix,'object',self,'fieldname','salinity','format','MatArray','name','md.basalforcings.salinity','timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts) + WriteData(fid,prefix,'object',self,'fieldname','geothermalflux','format','DoubleMat','name','md.basalforcings.geothermalflux','mattype',1,'timeserieslength',md.mesh.numberofelements+1,'yts',md.constants.yts) + WriteData(fid,prefix,'object',self,'fieldname','groundedice_melting_rate','format','DoubleMat','mattype',1,'scale',1./yts,'timeserieslength',md.mesh.numberofvertices+1,'yts',md.constants.yts) + + # }}} diff --git a/src/m/classes/basalforcingspico.py b/src/m/classes/basalforcingspico.py index f1e729480..5071dfcb6 100755 --- a/src/m/classes/basalforcingspico.py +++ b/src/m/classes/basalforcingspico.py @@ -6,6 +6,7 @@ from checkfield import checkfield from project3d import project3d from WriteData import WriteData +from fielddisplay import fielddisplay class basalforcingspico(object): """PICO BASAL FORCINGS class definition diff --git a/src/m/classes/cfsurfacelogvel.m b/src/m/classes/cfsurfacelogvel.m index a5f72a368..12053cac2 100644 --- a/src/m/classes/cfsurfacelogvel.m +++ b/src/m/classes/cfsurfacelogvel.m @@ -82,6 +82,8 @@ fielddisplay(self,'definitionstring','string that identifies this output definition uniquely, from ''Outputdefinition[1-10]'''); fielddisplay(self,'vxobs','observed field that we compare the model against'); fielddisplay(self,'vxobs_string','observation string'); + fielddisplay(self,'vyobs','observed field that we compare the model against'); + fielddisplay(self,'vyobs_string','observation string'); fielddisplay(self,'weights','weights (at vertices) to apply to the cfsurfacelogvel'); fielddisplay(self,'weights_string','string for weights for identification purposes'); fielddisplay(self,'datatime','time to compare data to model for misfit'); diff --git a/src/m/classes/clusters/acenet.m b/src/m/classes/clusters/acenet.m index 65ea2b14a..4b2a8f4b3 100644 --- 
a/src/m/classes/clusters/acenet.m +++ b/src/m/classes/clusters/acenet.m @@ -8,19 +8,14 @@ classdef acenet properties (SetAccess=public) % {{{ - %name='glacdyn.ace-net.ca' name='placentia.ace-net.ca' - %name='brasdor.ace-net.ca' - login='klemorza'; + login=''; np=10; port=0; queue='longq'; time=10; - % codepath='/usr/local/issm-r11321/bin'; % this one is for issm on acenet global - codepath='/home/klemorza/issm/trunk-jpl/bin'; % this one is for issm on my acenet directory - %executionpath='/home/klemorza/issm/trunk-jpl/execution'; - %executionpath='/home/klemorza/scratch/issmres.dir'; - executionpath='/net/glacdyn-data/glacdyn/1/klemorza/issm.dir'; + codepath=''; + executionpath=''; %}}} end methods @@ -55,24 +50,28 @@ function disp(cluster) % {{{ QueueRequirements(available_queues,queue_requirements_time,queue_requirements_np,cluster.queue,cluster.np,cluster.time) end %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ - - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end + function BuildQueueScript(cluster, md, filename) % {{{ + + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; + + %checks + if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end + if(isgprof) disp('gprof not supported by cluster, ignoring...'); end %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/bash\n'); fprintf(fid,'#$ -cwd\n'); fprintf(fid,'#$ -N issm\n'); - % fprintf(fid,'#$ -l h_rt=00:15:00\n'); - % fprintf(fid,'#$ -l h_rt=5:00:0\n'); - % fprintf(fid,'#$ -l h_rt=25:00:0\n'); - % 
fprintf(fid,'#$ -l h_rt=47:59:00\n'); - % fprintf(fid,'#$ -l h_rt=72:00:0\n'); - % fprintf(fid,'#$ -l h_rt=96:00:0\n'); - % fprintf(fid,'#$ -l h_rt=336:00:0\n'); tstr = sprintf('#$ -l h_rt=%i:00:00\n',cluster.time); fprintf(fid,tstr); @@ -81,12 +80,6 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin if strcmp(cluster.executionpath,'/home/klemorza/scratch/issmres.dir') % ---- Which acent queue to use ---- fprintf(fid,'#$ -q short.q@*,medium.q@*\n'); - %fprintf(fid,'#$ -q medium.q@*,long.q@*\n'); - %fprintf(fid,'#$ -q medium.q@*\n'); - %fprintf(fid,'#$ -q short.q@*\n'); - % Acenet nodes with 16cpus and more than 60G mem - % fprintf(fid,'#$ -l h=cl001|cl002|cl003|cl004|cl005|cl006|cl007|cl008|cl009|cl010|cl011|cl012|cl021|cl022|cl023|cl024 \n'); - % ---- cpus on different nodes ---- if cluster.np==4 % -------- All cpus in the same node -------- fprintf(fid,'#$ -pe openmp %i\n',cluster.np); @@ -95,42 +88,28 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin end elseif strcmp(cluster.executionpath,'/net/glacdyn-data/glacdyn/1/klemorza/issm.dir') - % ---- Which node for Lev's queue are selected ---- fprintf(fid,'#$ -q tarasov.q\n'); fprintf(fid,'#$ -l h=cl27*|cl28*|cl29*|cl30*|cl31*|cl320|cl267|cl268|cl269|cl338 \n'); - %fprintf(fid,'#$ -l h=cl27*|cl28*|cl29*|cl30*|cl31*|cl320|cl267|cl268|cl269 \n'); - %fprintf(fid,'#$ -l h=cl0* \n'); - % fprintf(fid,'#$ -l h=cl338 \n'); if cluster.np==4 % -------- All cpus in the same node -------- fprintf(fid,'#$ -pe openmp %i\n',cluster.np); else fprintf(fid,'#$ -pe ompi* %i\n',cluster.np); - %fprintf(fid,'#$ -pe 4per %i\n',cluster.np); - %fprintf(fid,'#$ -pe 8per %i\n',cluster.np); end end - % ---- misc ---- fprintf(fid,'#$ -j y\n'); - fprintf(fid,'module purge\n'); - %fprintf(fid,'module load gcc openmpi/gcc\n'); - %fprintf(fid,'module unload openmpi\n'); fprintf(fid,'module load intel/12.1.7.367\n'); fprintf(fid,'module load openmpi/intel/1.2.9\n'); - 
fprintf(fid,'module load gsl\n'); - %fprintf(fid,'module load issm\n'); fprintf(fid,'export ISSM_DIR="%s/../"\n',cluster.codepath); %FIXME fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME fprintf(fid,'\n'); fprintf(fid,'mpiexec %s/issm.exe %s %s %s 2> %s.errlog >%s.outlog\n',... cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname,modelname,modelname); - %fprintf(fid,'echo $HOSTNAME >>%s.outlog',modelname); fclose(fid); - end %}}} function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{ @@ -146,7 +125,6 @@ function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{ issmscpout(cluster.name,cluster.executionpath,cluster.login,cluster.port,{[dirname '.tar.gz']}); end %}}} - function LaunchQueueJob(cluster,modelname,dirname,filelist,restart) % {{{ %Execute Queue job @@ -158,7 +136,6 @@ function LaunchQueueJob(cluster,modelname,dirname,filelist,restart) % {{{ end issmssh(cluster.name,cluster.login,cluster.port,launchcommand); end %}}} - function Download(cluster,dirname,filelist) % {{{ %copy files from cluster to current directory diff --git a/src/m/classes/clusters/aci.m b/src/m/classes/clusters/aci.m index 816dbc5a2..787f3e54c 100644 --- a/src/m/classes/clusters/aci.m +++ b/src/m/classes/clusters/aci.m @@ -59,13 +59,24 @@ function disp(cluster) % {{{ if isempty(cluster.executionpath), md = checkmessage(md,'executionpath empty'); end end %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ + function BuildQueueScript(cluster, md, filename) % {{{ - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; 
+ isoceancoupling = md.transient.isoceancoupling; + + %checks + if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end + if(isgprof) disp('gprof not supported by cluster, ignoring...'); end %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#PBS -A %s\n', cluster.queue); %open or brp106.... fprintf(fid,'#PBS -l nodes=%i:ppn=%i:stmem\n',cluster.nodes,cluster.ppn); fprintf(fid,'#PBS -l walltime=%i\n',cluster.time*60); % walltime is in seconds diff --git a/src/m/classes/clusters/andes.m b/src/m/classes/clusters/andes.m index 90ea5eba7..341992785 100644 --- a/src/m/classes/clusters/andes.m +++ b/src/m/classes/clusters/andes.m @@ -58,13 +58,24 @@ function disp(cluster) % {{{ if isempty(cluster.executionpath), md = checkmessage(md,'executionpath empty'); end end %}}} - function BuildKrigingQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ + function BuildKrigingQueueScript(cluster, md, filename) % {{{ - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; + + %checks + if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end + if(isgprof) disp('gprof not supported by cluster, ignoring...'); end %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/bash\n'); fprintf(fid,'#SBATCH --job-name=%s\n',modelname); fprintf(fid,'#SBATCH --account=ice\n'); %Make sure we use the ICE account for this run @@ -90,13 +101,25 @@ function 
BuildKrigingQueueScript(cluster,dirname,modelname,solution,io_gather,is fclose(fid); end %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ + function BuildQueueScript(cluster, md, filename) % {{{ + + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; + + %checks - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end + if(isvalgrind); disp('valgrind not supported by cluster, ignoring...'); end + if(isgprof); disp('gprof not supported by cluster, ignoring...'); end %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/bash -l\n'); fprintf(fid,'#SBATCH --job-name=%s\n',modelname); fprintf(fid,'#SBATCH --account=ice\n'); %Make sure we use the ICE account for this run @@ -154,7 +177,7 @@ function LaunchQueueJob(cluster,modelname,dirname,filelist,restart,batch) % {{{ launchcommand=['cd ' cluster.executionpath ' && cd ' dirname ' && hostname && sbatch ' modelname '.queue ']; else launchcommand=['cd ' cluster.executionpath ' && rm -rf ./' dirname ' && mkdir ' dirname ... 
- ' && cd ' dirname ' && mv ../' dirname '.tar.gz ./ && tar -zxf ' dirname '.tar.gz && hostname && sbatch ' modelname '.queue ']; + ' && cd ' dirname ' && mv ../' dirname '.tar.gz ./ && tar -zxf ' dirname '.tar.gz && sbatch ' modelname '.queue ']; end issmssh(cluster.name,cluster.login,0,launchcommand); end %}}} @@ -162,7 +185,7 @@ function Download(cluster,dirname,filelist) % {{{ %copy files from cluster to current directory directory=[cluster.executionpath '/' dirname '/']; - issmscpin(cluster.name,cluster.login,0,directory,filelist); + issmscpin(cluster.name,cluster.login,0,directory,filelist, 2); %use {} and not \{\} end %}}} end diff --git a/src/m/classes/clusters/aurora.m b/src/m/classes/clusters/aurora.m index cb0742bbf..3cb55396f 100644 --- a/src/m/classes/clusters/aurora.m +++ b/src/m/classes/clusters/aurora.m @@ -61,23 +61,34 @@ function disp(cluster) % {{{ QueueRequirements(available_queues,queue_requirements_time,queue_requirements_np,cluster.queue,cluster.numnodes.*cluster.cpuspernode,cluster.time) end %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ - - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end - executable='issm.exe'; + function BuildQueueScript(cluster, md, filename) % {{{ + + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; + + %checks + if(isvalgrind); disp('valgrind not supported by cluster, ignoring...'); end + if(isgprof); disp('gprof not supported by cluster, ignoring...'); end + + executable='issm.exe'; + if isdakota, - version=IssmConfig('_DAKOTA_VERSION_'); version=str2num(version(1:3)); - if (version>=6), 
- executable='issm_dakota.exe'; - end + version=IssmConfig('_DAKOTA_VERSION_'); + version=str2num(version(1:3)); + if(version>=6) executable='issm_dakota.exe'; end end - if isoceancoupling, + if isoceancoupling executable='issm_ocean.exe'; end %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/bash\n'); fprintf(fid,'#PBS -l select=%i:ncpus=%i\n',cluster.numnodes,cluster.cpuspernode); fprintf(fid,'#PBS -N %s\n',modelname); @@ -97,7 +108,6 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin fprintf(fid,'cd $PBS_O_WORKDIR\n'); fprintf(fid,'mpirun -n %i %s/%s %s %s %s',cluster.nprocs(),cluster.codepath,executable,solution,[cluster.executionpath '/' dirname],modelname); fclose(fid); - end %}}} function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{ diff --git a/src/m/classes/clusters/aws_issm_solution_server.m b/src/m/classes/clusters/aws_issm_solution_server.m index 183861aaa..d1bd2308a 100644 --- a/src/m/classes/clusters/aws_issm_solution_server.m +++ b/src/m/classes/clusters/aws_issm_solution_server.m @@ -86,23 +86,34 @@ function disp(cluster) % {{{ end %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ + function BuildQueueScript(cluster, md, filename) % {{{ - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; + + %checks + if(isgprof) disp('gprof not supported by cluster, ignoring...'); end executable='issm.exe'; if isdakota, version=IssmConfig('_DAKOTA_VERSION_'); version=str2num(version(1:3)); - if (version>=6), + if (version>=6) executable='issm_dakota.exe'; end 
end - if isoceancoupling, + if isoceancoupling executable='issm_ocean.exe'; end %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/bash\n'); fprintf(fid,'export PATH="${PATH}:."\n'); fprintf(fid,'export MPI_LAUNCH_TIMEOUT=520\n'); @@ -143,7 +154,7 @@ function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{ for i=1:numel(filelist), compressstring = [compressstring ' ' filelist{i}]; end - if cluster.interactive, + if cluster.interactive compressstring = [compressstring ' ' modelname '.run ' modelname '.errlog ' modelname '.outlog ']; end system(compressstring); diff --git a/src/m/classes/clusters/camhpc.m b/src/m/classes/clusters/camhpc.m index 721df760f..9aa11d4c4 100644 --- a/src/m/classes/clusters/camhpc.m +++ b/src/m/classes/clusters/camhpc.m @@ -70,14 +70,24 @@ function disp(cluster) % {{{ end %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ - - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end + function BuildQueueScript(cluster, md, filename) % {{{ + + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; + + %checks + if(isvalgrind); disp('valgrind not supported by cluster, ignoring...'); end + if(isgprof); disp('gprof not supported by cluster, ignoring...'); end %write queuing script - disp(modelname) - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/bash\n'); fprintf(fid,'#SBATCH --job-name=%s\n',modelname); fprintf(fid,'#SBATCH -p %s \n',cluster.partition); @@ -98,7 +108,7 @@ function 
BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin fclose(fid); %in interactive mode, create a run file, and errlog and outlog file - if cluster.interactive, + if cluster.interactive fid=fopen([modelname '.run'],'w'); fprintf(fid,'mpirun -np %i %s/issm.exe %s %s %s\n',cluster.nprocs(),cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname); if ~io_gather, %concatenate the output files: diff --git a/src/m/classes/clusters/castor.m b/src/m/classes/clusters/castor.m index 0fcd22556..a19862cf7 100644 --- a/src/m/classes/clusters/castor.m +++ b/src/m/classes/clusters/castor.m @@ -45,18 +45,29 @@ function disp(cluster) % {{{ QueueRequirements(available_queues,queue_requirements_time,queue_requirements_np,cluster.queue,cluster.np,cluster.time) end %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ + function BuildQueueScript(cluster, md, filename) % {{{ - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; + + %checks + if(isvalgrind); disp('valgrind not supported by cluster, ignoring...'); end + if(isgprof); disp('gprof not supported by cluster, ignoring...'); end %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/sh\n'); fprintf(fid,'#PBS -l walltime=%i\n',cluster.time*60); %walltime is in seconds. 
 			fprintf(fid,'#PBS -N %s\n',modelname);
 			fprintf(fid,'#PBS -l ncpus=%i\n',cluster.np);
-			if ~isempty(queue),
+			if ~isempty(cluster.queue)
 				fprintf(fid,'#PBS -q %s\n',cluster.queue);
 			end
 			fprintf(fid,'#PBS -o %s.outlog \n',modelname);
@@ -66,7 +77,6 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fprintf(fid,'export OMP_NUM_THREADS=1\n');
 			fprintf(fid,'dplace -s1 -c0-%i mpiexec -np %i %s/issm.exe %s %s %s',cluster.np-1,cluster.np,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
 			fclose(fid);
-
 		end
 		%}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
diff --git a/src/m/classes/clusters/cloud.m b/src/m/classes/clusters/cloud.m
index 50522f434..d6d92c2a3 100644
--- a/src/m/classes/clusters/cloud.m
+++ b/src/m/classes/clusters/cloud.m
@@ -47,10 +47,20 @@ function disp(cluster) % {{{
 			end
 		end
 		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'source %s%s\n',cluster.codepath,'/../etc/environment.sh');
 			fprintf(fid,'cd %s\n',[cluster.executionpath '/' dirname]);
diff --git a/src/m/classes/clusters/cloud.py b/src/m/classes/clusters/cloud.py
index dc263b7a0..6ec674069 100644
--- a/src/m/classes/clusters/cloud.py
+++ b/src/m/classes/clusters/cloud.py
@@ -54,9 +54,20 @@ def checkconsistency(self, md, solution, analyses): # {{{
         return self
     # }}}
-    def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{
+    def BuildQueueScript(self, md, filename): # {{{
+
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+
         # Write queuing script
-        fid = open(modelname + '.queue', 'w')
+        fid = open(filename, 'w')
         fid.write('#/bin/bash\n')
         fid.write('source {}{}\n'.format(self.codepath, '/../etc/environment.sh'))
diff --git a/src/m/classes/clusters/computecanada.m b/src/m/classes/clusters/computecanada.m
index fab6408a1..3f5037873 100644
--- a/src/m/classes/clusters/computecanada.m
+++ b/src/m/classes/clusters/computecanada.m
@@ -70,17 +70,24 @@ function disp(cluster) % {{{
 			if ~(cluster.memory > 0), md = checkmessage(md,'memory must be > 0'); end
 		end
 		%}}}
-		function BuildKrigingQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-			error('not implemented yet');
-		end
-		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#SBATCH --job-name=%s\n',modelname);
 			fprintf(fid,'#SBATCH --account=%s \n',cluster.projectaccount);
@@ -97,7 +104,6 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fprintf(fid,'srun -n %i %s/issm.exe %s %s %s\n',cluster.np(),cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
 			if ~io_gather, %concatenate the output files:
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
-
 			end
 			fclose(fid);
 		end
 		%}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist)% {{{
@@ -110,7 +116,7 @@ function UploadQueueJob(cluster,modelname,dirname,filelist)% {{{
 			system(compressstring);
 			%upload input files
-			issmscpout(cluster.name,cluster.executionpath,cluster.login,cluster.port,{[dirname '.tar.gz']}, 0, 2);
+			issmscpout(cluster.name,cluster.executionpath,cluster.login,cluster.port,{[dirname '.tar.gz']}, 2);
 		end
 		%}}}
 		function LaunchQueueJob(cluster,modelname,dirname,filelist,restart,batch)% {{{
diff --git a/src/m/classes/clusters/cosmos.m b/src/m/classes/clusters/cosmos.m
index 227b6a899..93fb17e73 100644
--- a/src/m/classes/clusters/cosmos.m
+++ b/src/m/classes/clusters/cosmos.m
@@ -45,18 +45,29 @@ function disp(cluster) % {{{
 			QueueRequirements(available_queues,queue_requirements_time,queue_requirements_np,cluster.queue,cluster.np,cluster.time)
 		end
 		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#PBS -l select=%i:ncpus=1\n',cluster.np);
 			fprintf(fid,'#PBS -N %s\n',modelname);
-			fprintf(fid,'#PBS -l walltime=%i\n',time*60); %walltime is in seconds.
+			fprintf(fid,'#PBS -l walltime=%i\n',cluster.time*60); %walltime is in seconds.
-			fprintf(fid,'#PBS -q %s\n',queue);
+			fprintf(fid,'#PBS -q %s\n',cluster.queue);
 			fprintf(fid,'#PBS -o %s.outlog \n',modelname);
 			fprintf(fid,'#PBS -e %s.errlog \n',modelname);
 			fprintf(fid,'export PBS_O_WORKDIR=%s\n',[cluster.executionpath '/' dirname]);
diff --git a/src/m/classes/clusters/cyclone.py b/src/m/classes/clusters/cyclone.py
index b507434e8..65ba2632d 100644
--- a/src/m/classes/clusters/cyclone.py
+++ b/src/m/classes/clusters/cyclone.py
@@ -70,10 +70,22 @@ def checkconsistency(self, md, solution, analyses): # {{{
         return self
     # }}}
-    def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{
+    def BuildQueueScript(self, md, filename): # {{{
+
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+        executable = 'issm.exe'
+
         # Write queuing script
-        fid = open(modelname + '.queue', 'w')
+        fid = open(filename, 'w')
         fid.write('export ISSM_DIR="%s/../ "\n' % self.codepath)
         fid.write('source $ISSM_DIR/etc/environment.sh\n')
         fid.write('INTELLIBS = "/opt/intel/intelcompiler-12.04/composerxe-2011.4.191/compiler/lib/intel64"\n')
@@ -85,6 +97,7 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind,
         fid.write('mpiexec -np %i %s/%s %s %s %s>%s.outlog 2>%s.errlog\n' % (self.np, self.codepath, executable, str(solution), rundir, modelname, runfile, runfile))
         fid.close()
     # }}}
+
     def UploadQueueJob(self, modelname, dirname, filelist): # {{{
         #compress the files into one zip.
         compressstring = 'tar -zcf %s.tar.gz ' % dirname
@@ -95,6 +108,7 @@ def UploadQueueJob(self, modelname, dirname, filelist): # {{{
         #upload input files
         issmscpout(self.name, self.executionpath, self.login, self.port, [dirname + '.tar.gz'])
     # }}}
+
    def LaunchQueueJob(self, modelname, dirname, filelist, restart, batch): # {{{
         #Execute Queue job
         if not isempty(restart):
@@ -103,6 +117,7 @@ def LaunchQueueJob(self, modelname, dirname, filelist, restart, batch): # {{{
         launchcommand = 'cd %s && rm -rf ./%s && mkdir %s && cd %s && mv ../%s.tar.gz ./ && tar -zxf %s.tar.gz && chmod +x ./%s.queue && ./%s.queue' % (self.executionpath, dirname, dirname, dirname, dirname, dirname, modelname, modelname)
         issmssh(self.name, self.login, self.port, launchcommand)
     # }}}
+
     def Download(self, dirname, filelist): # {{{
         # Copy files from cluster to current directory
         directory = '%s/%s/' % (self.executionpath, dirname)
diff --git a/src/m/classes/clusters/discover.m b/src/m/classes/clusters/discover.m
index a199c8125..eb736cb1a 100644
--- a/src/m/classes/clusters/discover.m
+++ b/src/m/classes/clusters/discover.m
@@ -94,23 +94,34 @@ function disp(cluster) % {{{
 		end
 		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 			executable='issm.exe';
-			if isdakota,
+			if isdakota
 				version=IssmConfig('_DAKOTA_VERSION_');
 				version=str2num(version(1:3));
 				if (version>=6),
 					executable='issm_dakota.exe';
 				end
 			end
-			if isoceancoupling,
+			if isoceancoupling
 				executable='issm_ocean.exe';
 			end
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#SBATCH -J %s \n',modelname);
@@ -150,30 +161,28 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			else
 				fprintf(fid,'mpiexec -np %i valgrind --leak-check=full %s/%s %s %s/%s %s\n',cluster.nprocs(),cluster.codepath,executable,solution,cluster.executionpath,dirname,modelname);
 			end
-			if ~io_gather, %concatenate the output files:
+			if ~io_gather %concatenate the output files:
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 			end
 			fclose(fid);
-			fid=fopen([modelname '.errlog'],'w'); % TODO: Change this to system call (touch )?
-			fclose(fid);
-			fid=fopen([modelname '.outlog'],'w'); % TODO: Change this to system call (touch )?
-			fclose(fid);
+			fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+			fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end
 		%}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
 			%compress the files into one zip.
 			compressstring=['tar -zcf ' dirname '.tar.gz'];
-			for i=1:numel(filelist),
+			for i=1:numel(filelist)
 				compressstring = [compressstring ' ' filelist{i}];
 			end
-			if cluster.interactive,
+			if cluster.interactive
 				compressstring = [compressstring ' ' modelname '.run ' modelname '.errlog ' modelname '.outlog '];
 			end
 			system(compressstring);
 			%upload input files
-			if cluster.interactive,
+			if cluster.interactive
 				directory=[cluster.executionpath '/Interactive' num2str(cluster.interactive)];
 			else
 				directory=cluster.executionpath;
diff --git a/src/m/classes/clusters/discover.py b/src/m/classes/clusters/discover.py
index 05d7d637a..f3fa25fb8 100644
--- a/src/m/classes/clusters/discover.py
+++ b/src/m/classes/clusters/discover.py
@@ -111,7 +111,18 @@ def checkconsistency(self, md, solution, analyses): # {{{
         return self
     # }}}
-    def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{
+    def BuildQueueScript(self, md, filename): # {{{
+
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+
         if isgprof:
             print('gprof not supported by cluster, ignoring...')
@@ -125,8 +136,7 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind,
             executable = 'issm_ocean.exe'
         # Write queuing script
-        fid = open(modelname + '.queue', 'w')
-
+        fid = open(filename, 'w')
         fid.write('#!/bin/bash\n')
         fid.write('#SBATCH -J {} \n'.format(modelname))
         fid.write('#SBATCH --qos={} \n'.format(self.queue))
diff --git a/src/m/classes/clusters/discovery.m b/src/m/classes/clusters/discovery.m
index a9a2b3a18..23acecc88 100644
--- a/src/m/classes/clusters/discovery.m
+++ b/src/m/classes/clusters/discovery.m
@@ -60,13 +60,24 @@ function disp(cluster) % {{{
 			if isempty(cluster.executionpath), md = checkmessage(md,'executionpath empty'); end
 		end
 		%}}}
-		function BuildKrigingQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildKrigingQueueScript(cluster, md, filename) % {{{
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
-			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
+
+			%write queueing script
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#SBATCH --job-name=%s\n',modelname);
 			fprintf(fid,'#SBATCH --account=ice\n'); %Make sure we use the ICE account for this run
@@ -91,13 +102,24 @@ function BuildKrigingQueueScript(cluster,dirname,modelname,solution,io_gather,is
 			fclose(fid);
 		end
 		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid = fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#SBATCH --job-name=%s\n',modelname);
 			fprintf(fid,'#SBATCH --account=ice\n'); %Make sure we use the ICE account for this run
@@ -118,27 +140,24 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n');
 			fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,dirname);
 			fprintf(fid,'mpirun -n %i %s/issm.exe %s %s %s\n',cluster.nprocs(), cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
-			if ~io_gather, %concatenate the output files:
+			if ~io_gather %concatenate the output files:
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 			end
-
-			if (cluster.deleteckptdata)
+			if(cluster.deleteckptdata)
 				fprintf(fid,'rm -rf *.rst *.ckpt\n');
 			end
 			fclose(fid);
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.run'],'w');
 				fprintf(fid,'mpirun -n %i %s/issm.exe %s %s %s\n',cluster.nprocs(), cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
-				if ~io_gather, %concatenate the output files:
+				if ~io_gather %concatenate the output files:
 					fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 				end
 				fclose(fid);
-				fid=fopen([modelname '.errlog'],'w');
-				fclose(fid);
-				fid=fopen([modelname '.outlog'],'w');
-				fclose(fid);
+				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end
 		%}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
diff --git a/src/m/classes/clusters/eis_nasa_smce.py b/src/m/classes/clusters/eis_nasa_smce.py
index e41339515..23ee022ad 100644
--- a/src/m/classes/clusters/eis_nasa_smce.py
+++ b/src/m/classes/clusters/eis_nasa_smce.py
@@ -104,7 +104,18 @@ def checkconsistency(self, md, solution, analyses): # {{{
         return self
     # }}}
-    def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{
+    def BuildQueueScript(self, md, filename): # {{{
+
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+
         if isgprof:
             print('gprof not supported by cluster, ignoring...')
@@ -120,7 +131,7 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind,
             issmexec = 'issm_ocean.exe'
         # Write queuing script
-        fid = open(modelname + '.queue', 'w')
+        fid = open(filename, 'w')
         fid.write('#!/bin/bash\n')
         fid.write('#SBATCH --partition={} \n'.format(self.partition))
diff --git a/src/m/classes/clusters/fram.py b/src/m/classes/clusters/fram.py
index 9f612fa10..3dff8b37e 100644
--- a/src/m/classes/clusters/fram.py
+++ b/src/m/classes/clusters/fram.py
@@ -94,7 +94,18 @@ def checkconsistency(self, md, solution, analyses): # {{{
         return self
     # }}}
-    def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{
+    def BuildQueueScript(self, md, filename): # {{{
+
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+
         executable = 'issm.exe'
         if isdakota:
             version = IssmConfig('_DAKOTA_VERSION_')[0:2]
@@ -105,7 +116,7 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind,
             executable = 'issm_ocean.exe'
         # Write queuing script
         shortname = modelname[0:min(12, len(modelname))]
-        fid = open(modelname + '.queue', 'w')
+        fid = open(filename, 'w')
         fid.write('#!/bin/bash -l\n')
         fid.write('#SBATCH --job-name=%s \n' % shortname)
@@ -147,6 +158,7 @@ def UploadQueueJob(self, modelname, dirname, filelist): # {{{
         issmscpout(self.name, self.executionpath, self.login, self.port, [dirname + '.tar.gz'])
     # }}}
+
     def LaunchQueueJob(self, modelname, dirname, filelist, restart, batch): # {{{
         #Execute Queue job
         if not isempty(restart):
@@ -155,6 +167,7 @@ def LaunchQueueJob(self, modelname, dirname, filelist, restart, batch): # {{{
         launchcommand = 'cd %s && rm -rf ./%s && mkdir %s && cd %s && mv ../%s.tar.gz ./ && tar -zxf %s.tar.gz && sbatch %s.queue' % (self.executionpath, dirname, dirname, dirname, dirname, dirname, modelname)
         issmssh(self.name, self.login, self.port, launchcommand)
     # }}}
+
     def Download(self, dirname, filelist): # {{{
         # Copy files from cluster to current directory
         directory = '%s/%s/' % (self.executionpath, dirname)
diff --git a/src/m/classes/clusters/frontera.m b/src/m/classes/clusters/frontera.m
index 9b1cc3727..cdd4f63bd 100644
--- a/src/m/classes/clusters/frontera.m
+++ b/src/m/classes/clusters/frontera.m
@@ -72,13 +72,24 @@ function disp(cluster) % {{{
 			if isempty(cluster.executionpath), md = checkmessage(md,'executionpath empty'); end
 		end
 		%}}}
-		function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind,isgprof) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildKrigingQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#$ -N %s\n',modelname);
 			fprintf(fid,'#$ -q %s \n',cluster.queue);
@@ -97,25 +108,35 @@ function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind
 			fclose(fid);
 		end
 		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 			executable='issm.exe';
-			if isdakota,
+			if isdakota
 				version=IssmConfig('_DAKOTA_VERSION_');
 				version=str2num(version(1:3));
-				if (version>=6),
+				if (version>=6)
 					executable='issm_dakota.exe';
 				end
 			end
-			if isoceancoupling,
+			if isoceancoupling
 				executable='issm_ocean.exe';
 			end
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
-
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#SBATCH -J %s \n',modelname);
 			fprintf(fid,'#SBATCH -p %s \n',cluster.queue);
@@ -132,11 +153,10 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 				fprintf(fid,['module load ' cluster.modules{i} '\n']);
 			end
-			if isdakota,
+			if isdakota
 				fprintf(fid,'export KMP_AFFINITY="granularity=fine,compact,verbose" \n\n');
 			end
-
 			fprintf(fid,'export PATH="$PATH:."\n\n');
 			fprintf(fid,'export ISSM_DIR="%s/../"\n',cluster.codepath); %FIXME
 			fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME
@@ -145,21 +165,18 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			if ~io_gather, %concatenate the output files:
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 			end
-
 			fclose(fid);
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.run'],'w');
 				fprintf(fid,'ibrun -np %i %s/%s %s %s %s\n',cluster.nprocs(),executable,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
 				if ~io_gather, %concatenate the output files:
 					fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 				end
 				fclose(fid);
-				fid=fopen([modelname '.errlog'],'w');
-				fclose(fid);
-				fid=fopen([modelname '.outlog'],'w');
-				fclose(fid);
+				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end
 		%}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
@@ -193,7 +210,7 @@ function Download(cluster,dirname,filelist) % {{{
 			%copy files from cluster to current directory
 			directory=[cluster.executionpath '/' dirname '/'];
-			issmscpin(cluster.name,cluster.login,cluster.port,directory,filelist);
+			issmscpin(cluster.name,cluster.login,cluster.port,directory,filelist, 2);%use {} instead of \{\}
 		end
 	%}}}
end
diff --git a/src/m/classes/clusters/gadi.py b/src/m/classes/clusters/gadi.py
index d2e9211d7..52d9a30ae 100644
--- a/src/m/classes/clusters/gadi.py
+++ b/src/m/classes/clusters/gadi.py
@@ -119,10 +119,7 @@ def checkconsistency(self, md, solution, analyses): # {{{
         return self
     # }}}
-    def BuildQueueScript(
-        self, dirname, modelname, solution,
-        io_gather, isvalgrind, isgprof, isdakota, isoceancoupling
-    ): # {{{
+    def BuildQueueScript(self, md, filename): # {{{
         """
         Create a PBS script for Gadi.
         Gadi typically uses #PBS lines like:
@@ -133,6 +130,16 @@ def BuildQueueScript(
         - #PBS -j oe
         """
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+
         if isgprof:
             print('gprof not typically used on Gadi via this script, ignoring...')
@@ -152,7 +159,7 @@ def BuildQueueScript(
         walltime_str = '{:02d}:{:02d}:00'.format(hours, minutes)
         # Write queue script
-        fid = open(modelname + '.queue', 'w')
+        fid = open(filename, 'w')
         fid.write('#!/bin/bash\n')
         fid.write('#PBS -P {}\n'.format(self.project))
         fid.write('#PBS -q {}\n'.format(self.queue))
diff --git a/src/m/classes/clusters/gemini.m b/src/m/classes/clusters/gemini.m
index fd8aa7972..8cad92912 100644
--- a/src/m/classes/clusters/gemini.m
+++ b/src/m/classes/clusters/gemini.m
@@ -45,18 +45,29 @@ function disp(cluster) % {{{
 			QueueRequirements(available_queues,queue_requirements_time,queue_requirements_np,cluster.queue,cluster.np,cluster.time)
 		end
 		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/sh\n');
 			fprintf(fid,'#PBS -l walltime=%i\n',cluster.time*60); %walltime is in seconds.
 			fprintf(fid,'#PBS -N %s\n',modelname);
 			fprintf(fid,'#PBS -l ncpus=%i\n',cluster.np);
-			if ~isempty(queue),
+			if ~isempty(cluster.queue)
 				fprintf(fid,'#PBS -q %s\n',cluster.queue);
 			end
 			fprintf(fid,'#PBS -o %s.outlog \n',modelname);
@@ -67,7 +78,6 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fprintf(fid,'export OMP_NUM_THREADS=1\n');
 			fprintf(fid,'dplace -s1 -c0-%i mpiexec -np %i %s/issm.exe %s %s %s',cluster.np-1,cluster.np,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
 			fclose(fid);
-
 		end
 		%}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
diff --git a/src/m/classes/clusters/generic.m b/src/m/classes/clusters/generic.m
index fcb04c83d..5ec163e16 100644
--- a/src/m/classes/clusters/generic.m
+++ b/src/m/classes/clusters/generic.m
@@ -32,7 +32,7 @@ function cluster=generic(varargin) % {{{
 			%Change the defaults if ispc
-			if ispc,
+			if ispc
 				cluster.codepath = [issmdir() '\bin'];
 				cluster.etcpath = [issmdir() '\etc'];
 				cluster.executionpath = [issmdir() '\execution'];
@@ -83,22 +83,31 @@ function disp(cluster) % {{{
 			end
 		end
 		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
 			% Which executable are we calling?
 			executable='issm.exe'; % default
-			if isdakota,
+			if isdakota
 				version=IssmConfig('_DAKOTA_VERSION_');
 				version=str2num(version(1:3));
-				if (version>=6),
-					executable='issm_dakota.exe';
-				end
+				if(version>=6) executable='issm_dakota.exe'; end
 			end
-			if isoceancoupling,
+			if isoceancoupling
 				executable='issm_ocean.exe';
 			end
-			if ~ispc(),
+			if ~ispc()
 				% Check that executable exists at the right path
 				if ~exist([cluster.codepath '/' executable],'file'),
 					error(['File ' cluster.codepath '/' executable ' does not exist']);
@@ -108,13 +117,13 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			codepath=strrep(cluster.codepath,' ','\ ');
 			% Write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!%s\n',cluster.shell);
-			if isvalgrind,
+			if isvalgrind
 				%Add --gen-suppressions=all to get suppression lines
 				%fprintf(fid,'LD_PRELOAD=%s \\\n',cluster.valgrindlib); it could be deleted
-				if ismac,
-					if IssmConfig('_HAVE_MPI_'),
+				if ismac
+					if IssmConfig('_HAVE_MPI_')
 						fprintf(fid,'mpiexec -np %i %s --leak-check=full --leak-check=full --show-leak-kinds=all --error-limit=no --dsymutil=yes --suppressions=%s %s/%s %s %s %s 2> %s.errlog > %s.outlog ',...
 							cluster.np,cluster.valgrind,cluster.valgrindsup,cluster.codepath,executable,solution,[cluster.executionpath '/' dirname], modelname,modelname,modelname);
 					else
@@ -165,29 +174,33 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			end
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
 				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end
 		%}}}
-		function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirnames,modelnames,nps) % {{{
+		function BuildQueueScriptMultipleModels(cluster, slm, dirnames, modelnames, nps, filename) % {{{
-			%some checks:
-			if isempty(modelname), error('BuildQueueScriptMultipleModels error message: need a non empty model name!');end
+			%Get variables from slm
+			dirname = slm.private.runtimename;
+			modelname = slm.miscellaneous.name;
+			solution = slm.private.solution;
+
+			%checks
+			if(isempty(modelname)) error('BuildQueueScriptMultipleModels error message: need a non empty model name!');end
+			if(ispc()) error('BuildQueueScriptMultipleModels not supported yet on windows machines');end;
 			%what is the executable being called?
 			executable='issm_slc.exe';
-			if ispc(), error('BuildQueueScriptMultipleModels not support yet on windows machines');end;
-
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!%s\n',cluster.shell);
 			%number of cpus:
-			mpistring=sprintf('mpiexec -np %i ',cluster.np);
+			mpistring = sprintf('mpiexec -np %i ',cluster.np);
 			%executable:
 			mpistring=[mpistring sprintf('%s/%s ',cluster.codepath,executable)];
@@ -207,7 +220,7 @@ function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirna
 			end
 			%log files:
-			if ~cluster.interactive,
+			if ~cluster.interactive
 				mpistring=[mpistring sprintf('2> %s.errlog> %s.outlog',modelname,modelname)];
 			end
@@ -216,21 +229,30 @@ function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirna
 			fclose(fid);
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
 				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end
 		%}}}
-		function BuildQueueScriptIceOcean(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota) % {{{
+		function BuildQueueScriptIceOcean(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
-			%write queuing script
 			%what is the executable being called?
 			executable='issm_ocean.exe';
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!%s\n',cluster.shell);
-			if ~isvalgrind,
+			if ~isvalgrind
 				fprintf(fid,'mpiexec -np %i %s/%s %s %s %s : -np %i ./mitgcmuv\n',cluster.np,cluster.codepath,executable,solution,cluster.executionpath,modelname,cluster.npocean);
 			else
@@ -240,18 +262,28 @@ function BuildQueueScriptIceOcean(cluster,dirname,modelname,solution,io_gather,i
 			fclose(fid);
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
 				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end
 		%}}}
-		function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind,isgprof) % {{{
+		function BuildKrigingQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
 			%write queuing script
-			if ~ispc(),
+			if ~ispc()
-				fid=fopen([modelname '.queue'],'w');
+				fid=fopen(filename, 'w');
 				fprintf(fid,'#!/bin/sh\n');
 				if ~isvalgrind,
 					if cluster.interactive
@@ -267,26 +299,13 @@ function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind
 					fprintf(fid,'mpiexec -np %i %s --leak-check=full --suppressions=%s %s/kriging.exe %s %s 2> %s.errlog >%s.outlog ',...
cluster.np,cluster.valgrind,cluster.valgrindsup,cluster.codepath,[cluster.executionpath '/' modelname],modelname,modelname,modelname); end - if ~io_gather, %concatenate the output files: - fprintf(fid,'\ncat %s.outbin.* > %s.outbin',modelname,modelname); - end fclose(fid); - else % Windows - - fid=fopen([modelname '.bat'],'w'); - fprintf(fid,'@echo off\n'); - if cluster.interactive - fprintf(fid,'"%s/issm.exe" %s "%s" %s ',cluster.codepath,solution,[cluster.executionpath '/' modelname],modelname); - else - fprintf(fid,'"%s/issm.exe" %s "%s" %s 2> %s.errlog >%s.outlog',... - cluster.codepath,solution,[cluster.executionpath '/' modelname],modelname,modelname,modelname); - end - fclose(fid); + error('not supported'); end %in interactive mode, create a run file, and errlog and outlog file - if cluster.interactive, + if cluster.interactive fid=fopen([modelname '.errlog'],'w'); fclose(fid); fid=fopen([modelname '.outlog'],'w'); fclose(fid); end @@ -314,7 +333,7 @@ function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{ end %}}} function LaunchQueueJob(cluster,modelname,dirname,filelist,restart,batch) % {{{ - if ~ispc, + if ~ispc %figure out what shell extension we will use: if isempty(strfind(cluster.shell,'csh')), shellext='sh'; @@ -325,7 +344,7 @@ function LaunchQueueJob(cluster,modelname,dirname,filelist,restart,batch) % {{{ if ~isempty(restart) launchcommand=['source ' cluster.etcpath '/environment.' shellext ' && cd ' cluster.executionpath ' && cd ' dirname ' && source ' modelname '.queue ']; else - if ~batch, + if ~batch launchcommand=['source ' cluster.etcpath '/environment.' shellext ' && cd ' cluster.executionpath ' && rm -rf ./' dirname ' && mkdir ' dirname ... 
' && cd ' dirname ' && mv ../' dirname '.tar.gz ./ && tar -zxf ' dirname '.tar.gz && source ' modelname '.queue ']; else @@ -368,7 +387,7 @@ function LaunchQueueJobIceOcean(cluster,modelname,dirname,filelist,restart,batch end %}}} function Download(cluster,dirname,filelist) % {{{ - if ispc(), + if ispc() %do nothing return; end diff --git a/src/m/classes/clusters/generic.py b/src/m/classes/clusters/generic.py index eb0a1a730..31e8fe003 100644 --- a/src/m/classes/clusters/generic.py +++ b/src/m/classes/clusters/generic.py @@ -86,21 +86,30 @@ def checkconsistency(self, md, solution, analyses): # {{{ return md # }}} - def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{ + def BuildQueueScript(self, md, filename): # {{{ + + # Get variables from md + dirname = md.private.runtimename + modelname = md.miscellaneous.name + solution = md.private.solution + io_gather = md.settings.io_gather + isvalgrind = md.debug.valgrind + isgprof = md.debug.gprof + isdakota = md.qmu.isdakota + isoceancoupling = md.transient.isoceancoupling + # Which executable are we calling? 
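The change running through all of these cluster classes is the same refactor: `BuildQueueScript` and its siblings drop their long positional argument lists in favor of the model object `md` plus an explicit output `filename`, and unpack the fields they need at the top of the method. A minimal Python sketch of the new calling convention — the `SimpleNamespace` stand-ins and the field values below are hypothetical, not ISSM's real `model`/cluster classes, but the attribute paths (`md.private.solution`, `md.qmu.isdakota`, ...) follow this diff:

```python
import os
import tempfile
from types import SimpleNamespace


def build_queue_script(cluster, md, filename):
    """Sketch of the refactored BuildQueueScript(cluster, md, filename)."""
    # Get variables from md -- mirrors the unpacking block this diff adds
    modelname = md.miscellaneous.name
    solution = md.private.solution
    isdakota = md.qmu.isdakota

    executable = 'issm.exe'  # default
    if isdakota:
        executable = 'issm_dakota.exe'

    lines = [
        '#!{}'.format(cluster.shell),
        'mpiexec -np {} {}/{} {} {} {}'.format(
            cluster.np, cluster.codepath, executable, solution, './', modelname),
    ]
    with open(filename, 'w') as fid:
        fid.write('\n'.join(lines) + '\n')
    return lines


# Hypothetical stand-in objects; only the attribute names come from the diff
cluster = SimpleNamespace(shell='/bin/sh', np=4, codepath='/opt/issm/bin')
md = SimpleNamespace(
    miscellaneous=SimpleNamespace(name='square'),
    private=SimpleNamespace(solution='StressbalanceSolution'),
    qmu=SimpleNamespace(isdakota=False))
queue_file = os.path.join(tempfile.mkdtemp(), 'square.queue')
script = build_queue_script(cluster, md, queue_file)
```

The caller now only needs the cluster, the model, and where to write the script; everything else travels inside `md`.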
         executable = 'issm.exe' # default
-
         if isdakota:
             version = IssmConfig('_DAKOTA_VERSION_')
             version = float(version[0])
-            if version >= 6:
-                executable = 'issm_dakota.exe'
+            if version >= 6: executable = 'issm_dakota.exe'
         if isoceancoupling:
             executable = 'issm_ocean.exe'

         # Write queuing script
         if not ispc():
-            fid = open(modelname + '.queue', 'w')
+            fid = open(filename, 'w')
             fid.write('#!/bin/sh\n')
             if not isvalgrind:
                 if self.interactive:
@@ -153,10 +162,21 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind,
         fid.close()
     # }}}
-    def BuildKrigingQueueScript(self, modelname, solution, io_gather, isvalgrind, isgprof): # {{{
+    def BuildKrigingQueueScript(self, md, filename): # {{{
+
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+
         #write queuing script
         if not ispc():
-            fid = open(modelname + '.queue', 'w')
+            fid = open(filename, 'w')
             fid.write('#!/bin/sh\n')
             if not isvalgrind:
                 if self.interactive:
diff --git a/src/m/classes/clusters/generic_static.m b/src/m/classes/clusters/generic_static.m
index 56b1f908c..0b471d215 100755
--- a/src/m/classes/clusters/generic_static.m
+++ b/src/m/classes/clusters/generic_static.m
@@ -60,24 +60,32 @@ function disp(cluster) % {{{
 			end
 		end %}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			% Which executable are we calling?
 			executable='issm.exe'; % default
-
-			if isdakota,
+			if isdakota
 				version=IssmConfig('_DAKOTA_VERSION_');
 				version=str2num(version(1:3));
-				if (version>=6),
-					executable='issm_dakota.exe';
-				end
+				if(version>=6) executable='issm_dakota.exe'; end
 			end
-			if isoceancoupling,
+			if isoceancoupling
 				executable='issm_ocean.exe';
 			end
-			if ~ispc(),
+			if ~ispc()
 				% Check that executable exists at the right path
-				if ~exist([cluster.codepath '/' executable],'file'),
+				if ~exist([cluster.codepath '/' executable],'file')
 					error(['File ' cluster.codepath '/' executable ' does not exist']);
 				end
@@ -85,15 +93,14 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 				codepath=strrep(cluster.codepath,' ','\ ');
 				% Write queuing script
-				fid=fopen([modelname '.queue'],'w');
+				fid=fopen(filename, 'w');
 				fprintf(fid,'#!%s\n',cluster.shell);
 				fprintf(fid,['%s/mpiexec -np %i %s/%s %s %s %s \n'],codepath,cluster.np,codepath,executable,solution,'./',modelname);
 				fclose(fid);
 			else % Windows
-				fid=fopen([modelname '.bat'],'w');
+				fid=fopen([modelname '.bat'], 'w');
 				fprintf(fid,'@echo off\n');
-
-				if cluster.np>1,
+				if cluster.np>1
 					fprintf(fid,'"%s\\mpiexec.exe" -n %i "%s/%s" %s ./ %s',cluster.codepath,cluster.np,cluster.codepath,executable,solution,modelname);
 				else
 					fprintf(fid,'"%s\\%s" %s ./ %s',cluster.codepath,executable,solution,modelname);
@@ -102,10 +109,8 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			end
 			%Create an errlog and outlog file
-			fid=fopen([modelname '.errlog'],'w');
-			fclose(fid);
-			fid=fopen([modelname '.outlog'],'w');
-			fclose(fid);
+			fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+			fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 		end %}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
@@ -113,9 +118,9 @@ function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
 			return;
 		end %}}}
 		function LaunchQueueJob(cluster,modelname,dirname,filelist,restart,batch) % {{{
-			if ~ispc,
+			if ~ispc
 				% Figure out which file extension to use
-				if isempty(strfind(cluster.shell,'csh')),
+				if isempty(strfind(cluster.shell,'csh'))
 					shellext='sh';
 				else
 					shellext='csh';
diff --git a/src/m/classes/clusters/generic_static.py b/src/m/classes/clusters/generic_static.py
index a7897f446..08254cbc0 100755
--- a/src/m/classes/clusters/generic_static.py
+++ b/src/m/classes/clusters/generic_static.py
@@ -65,44 +65,21 @@ def checkconsistency(self, md, solution, analyses): # {{{
         return md
     # }}}
-    def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{
-        # Which executable are we calling?
-        executable = 'issm.exe' # default
-
-        if isdakota:
-            version = IssmConfig('_DAKOTA_VERSION_')
-            version = float(version[0])
-            if version >= 6:
-                executable = 'issm_dakota.exe'
-        if isoceancoupling:
-            executable = 'issm_ocean.exe'
-
-        # Check that executable exists at the right path
-        if not os.path.isfile(self.codepath + '/' + executable):
-            raise RuntimeError('File ' + self.codepath + '/' + executable + ' does not exist')
-
-        # Process codepath and prepend empty spaces with \ to avoid errors in queuing script
-        codepath = self.codepath.replace(' ', r'\ ')
-
-        # Write queuing script
-        fid = open(modelname + '.queue', 'w')
-        fid.write('#!{}'.format(self.shell) + '\n')
-        fid.write('{}/mpiexec -np {} {}/{} {} {} {}'.format(codepath, self.np, codepath, executable, solution, './', modelname))
-        fid.close()
-        # Set permissions on queue script so that it can be run
-        subprocess.call(['chmod', '0755', modelname + '.queue'])
+    def BuildQueueScript(self, md, filename): # {{{
-        # Create an errlog and outlog file
-        fid = open(modelname + '.errlog', 'w')
-        fid.close()
-        fid = open(modelname + '.outlog', 'w')
-        fid.close()
-        # }}}
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
-    def BuildKrigingQueueScript(self, modelname, solution, io_gather, isvalgrind, isgprof): # {{{
         # Which executable are we calling?
-        executable = 'kriging.exe' # default
+        executable = 'issm.exe' # default
         if isdakota:
             version = IssmConfig('_DAKOTA_VERSION_')
@@ -120,9 +97,9 @@ def BuildKrigingQueueScript(self, modelname, solution, io_gather, isvalgrind, is
         codepath = self.codepath.replace(' ', r'\ ')
         # Write queuing script
-        fid = open(modelname + '.queue', 'w')
+        fid = open(filename, 'w')
         fid.write('#!{}'.format(self.shell) + '\n')
-        fid.write('{}/mpiexec -np {} {}/{} {} {} {}'.format(codepath, self.np, codepath, executable, solution, './', modelname) + '\n')
+        fid.write('{}/mpiexec -np {} {}/{} {} {} {}'.format(codepath, self.np, codepath, executable, solution, './', modelname))
         fid.close()
         # Set permissions on queue script so that it can be run
diff --git a/src/m/classes/clusters/greenplanet.m b/src/m/classes/clusters/greenplanet.m
index 2a418f2ba..d3c991f5c 100644
--- a/src/m/classes/clusters/greenplanet.m
+++ b/src/m/classes/clusters/greenplanet.m
@@ -68,38 +68,24 @@ function disp(cluster) % {{{
 		end
 		%}}}
-		function BuildKrigingQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
-			fprintf(fid,'#!/bin/bash\n');
-			fprintf(fid,'#SBATCH --job-name=%s\n',modelname);
-			fprintf(fid,'#SBATCH -p %s \n',cluster.queue);
-			fprintf(fid,'#SBATCH -N %i -n %i\n',cluster.numnodes,cluster.cpuspernode);
-			fprintf(fid,'#SBATCH --time=%i\n',cluster.time*60); %walltime is in seconds.
-			fprintf(fid,'#SBATCH --mem-per-cpu=%igb\n',cluster.memory);
-			fprintf(fid,'#SBATCH -o %s.outlog \n',modelname);
-			fprintf(fid,'#SBATCH -e %s.errlog \n\n',modelname);
-			fprintf(fid,'export ISSM_DIR="%s/../"\n',cluster.codepath); %FIXME
-			fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME
-			fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,dirname);
-			fprintf(fid,'mpiexec -np %i %s/kriging.exe %s %s\n',cluster.nprocs(),cluster.codepath,[cluster.executionpath '/' modelname],modelname);
-			if ~io_gather, %concatenate the output files:
-				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
-			end
-			fclose(fid);
-		end
-		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
-
-			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#SBATCH --job-name=%s\n',modelname);
 			fprintf(fid,'#SBATCH --partition=%s',cluster.queue{1});
@@ -133,10 +119,8 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 			end
 			fclose(fid);
-			fid=fopen([modelname '.errlog'],'w');
-			fclose(fid);
-			fid=fopen([modelname '.outlog'],'w');
-			fclose(fid);
+			fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+			fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 		end
 		end
 		%}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
diff --git a/src/m/classes/clusters/hpc.m b/src/m/classes/clusters/hpc.m
index ca5d318c2..c84b7d1bf 100644
--- a/src/m/classes/clusters/hpc.m
+++ b/src/m/classes/clusters/hpc.m
@@ -64,13 +64,24 @@ function disp(cluster) % {{{
 		end
 		%}}}
-		function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind,isgprof) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildKrigingQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#$ -N %s\n',modelname);
 			fprintf(fid,'#$ -q %s \n',cluster.queue);
@@ -89,13 +100,24 @@ function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind
 			fclose(fid);
 		end
 		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#$ -N %s\n',modelname);
 			fprintf(fid,'#$ -q %s \n',cluster.queue);
@@ -114,17 +136,15 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fclose(fid);
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.run'],'w');
 				fprintf(fid,'mpiexec -np %i %s/issm.exe %s %s %s\n',cluster.nprocs(),cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
 				if ~io_gather, %concatenate the output files:
 					fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 				end
 				fclose(fid);
-				fid=fopen([modelname '.errlog'],'w');
-				fclose(fid);
-				fid=fopen([modelname '.outlog'],'w');
-				fclose(fid);
+				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end
 		%}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
diff --git a/src/m/classes/clusters/hpc_simba.m b/src/m/classes/clusters/hpc_simba.m
index 46a676083..b5a1a1e23 100644
--- a/src/m/classes/clusters/hpc_simba.m
+++ b/src/m/classes/clusters/hpc_simba.m
@@ -60,13 +60,24 @@ function disp(cluster) % {{{
 		end
 		%}}}
-		function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind,isgprof) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildKrigingQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#$ -N %s\n',modelname);
 			fprintf(fid,'#$ -q %s \n',cluster.queue);
@@ -79,19 +90,30 @@ function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind
 			fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME
 			fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,modelname);
 			fprintf(fid,'mpiexec -np %i %s/kriging.exe %s %s\n',cluster.np,cluster.codepath,[cluster.executionpath '/' modelname],modelname);
-			if ~io_gather, %concatenate the output files:
+			if ~io_gather %concatenate the output files:
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 			end
 			fclose(fid);
 		end
 		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			if 0
 				fprintf(fid,'#!/bin/bash\n');
 				fprintf(fid,'#$ -N %s\n',modelname);
@@ -107,8 +129,7 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 				fprintf(fid,'#PBS -N %s\n',modelname);
 				%fprintf(fid,'#PBS -l nodes=simba01:ppn=%d\n',...
 				%	ceil(cluster.np/cluster.cpuspernode), cluster.np);
-				fprintf(fid,'#PBS -l nodes=simba01:ppn=%d\n',...
-					cluster.np);
+				fprintf(fid,'#PBS -l nodes=simba01:ppn=%d\n', cluster.np);
 				fprintf(fid,'#PBS -o %s.outlog \n',modelname);
 				fprintf(fid,'#PBS -e %s.errlog \n\n',modelname);
 				fprintf(fid,'\n');
@@ -126,17 +147,15 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fclose(fid);
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.run'],'w');
 				fprintf(fid,'mpiexec -np %i %s/issm.exe %s %s %s\n',cluster.np,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
 				if ~io_gather, %concatenate the output files:
 					fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 				end
 				fclose(fid);
-				fid=fopen([modelname '.errlog'],'w');
-				fclose(fid);
-				fid=fopen([modelname '.outlog'],'w');
-				fclose(fid);
+				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end
 		%}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
diff --git a/src/m/classes/clusters/hpc_simba2.m b/src/m/classes/clusters/hpc_simba2.m
deleted file mode 100755
index 145f4b1bc..000000000
--- a/src/m/classes/clusters/hpc_simba2.m
+++ /dev/null
@@ -1,277 +0,0 @@
-%HPC class definition
-%
-% Usage:
-%	cluster=hpc_simba();
-%	cluster=hpc_simba('np',3);
-%	cluster=hpc_simba('np',3,'login','username');
-
-classdef hpc_simba2
-	properties (SetAccess=public)
-		% {{{
-		name='simba20'
-		login='inwoo';
-		numnodes=18;    % number of nodes at 2019-11 installation
-		cpuspernode=36; % default number of cpus at each node
-		node=1;         % number of nodes for calculating
-		np=4;           % number of cpus for calculating at each node
-		port=0;         % not know open port
-		queue='pub64';
-		codepath= [];
-		executionpath=[];
-		interactive=0;
-		verbose=0; % show process of downloading
-		isqsub=1;
-	end
-	%}}}
-	methods
-		function cluster=hpc_simba2(varargin) % {{{
-
-			%initialize cluster using default settings if provided
-			if (exist('hpc_settings')==2), hpc_settings; end
-
-			%use provided options to change fields
-			cluster=AssignObjectFields(pairoptions(varargin{:}),cluster);
-
-			% where is "ISSM_DIR"?
-			if strcmpi(cluster.name,'simba00')
-				ISSM_DIR=getenv('ISSM_DIR');
-			elseif strcmpi(cluster.name,'simba20')
-				ISSM_DIR='/home/inwoo/issm/trunk-jpl/';
-			else
-				error(sprintf('ERROR: %s is not supported cluster name...'))
-			end
-			cluster.codepath=sprintf('%s/bin',ISSM_DIR);
-
-			% define specific user
-			[~, s] = system('whoami'); % get user name
-			s = s(1:end-1);
-			cluster.login=s;
-			if strcmpi(s,'inwoo'),
-				%cluster.executionpath=sprintf('%s/execution/',ISSM_DIR);
-				cluster.executionpath='/data2/inwoo/execution';
-			elseif strcmpi(s,'emilia'),
-				cluster.executionpath='/home/DATA/emilia-model/executionlog';
-			end
-		end
-		%}}}
-		function disp(cluster) % {{{
-			% display the object
-			disp(sprintf('class ''%s'' object ''%s'' = ',class(cluster),inputname(1)));
-			disp(sprintf('    name: %s',cluster.name));
-			disp(sprintf('    login: %s',cluster.login));
-			disp(sprintf('    port: %i',cluster.port));
-			disp(sprintf('    numnodes: %i',cluster.numnodes));
-			disp(sprintf('    node: %i',cluster.node));
-			disp(sprintf('    cpuspernode: %i',cluster.cpuspernode));
-			disp(sprintf('    np: %i',cluster.np));
-			disp(sprintf('    queue: %s',cluster.queue));
-			disp(sprintf('    codepath: %s',cluster.codepath));
-			disp(sprintf('    executionpath: %s',cluster.executionpath));
-			disp(sprintf('    interactive: %i',cluster.interactive));
-			disp(sprintf('    isqub: %i',cluster.isqsub));
-		end
-		%}}}
-		function md = checkconsistency(cluster,md,solution,analyses) % {{{
-
-			available_queues={'pub64','free64','free48','free*,pub64','free*'};
-			queue_requirements_time=[Inf Inf Inf Inf Inf];
-			queue_requirements_np=[64 64 48 48 48];
-
-			QueueRequirements(available_queues,queue_requirements_time,queue_requirements_np,cluster.queue,cluster.np,1)
-
-			%Miscelaneous
-			if isempty(cluster.login), md = checkmessage(md,'login empty'); end
-			if isempty(cluster.codepath), md = checkmessage(md,'codepath empty'); end
-			if isempty(cluster.executionpath), md = checkmessage(md,'executionpath empty'); end
-
-		end
-		%}}}
-		function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind,isgprof) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
-
-			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
-			fprintf(fid,'#!/bin/bash\n');
-			fprintf(fid,'#$ -N %s\n',modelname);
-			fprintf(fid,'#$ -q %s \n',cluster.queue);
-			fprintf(fid,'#$ -pe one-node-mpi 2-64\n');
-			fprintf(fid,'#$ -R y\n');
-			fprintf(fid,'#$ -m beas\n');
-			fprintf(fid,'#$ -o %s.outlog \n',modelname);
-			fprintf(fid,'#$ -e %s.errlog \n\n',modelname);
-			fprintf(fid,'export ISSM_DIR="%s/../"\n',cluster.codepath); %FIXME
-			fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME
-			fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,modelname);
-			fprintf(fid,'mpiexec -np %i %s/kriging.exe %s %s\n',cluster.np,cluster.codepath,[cluster.executionpath '/' modelname],modelname);
-			if ~io_gather, %concatenate the output files:
-				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
-			end
-			fclose(fid);
-		end
-		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
-
-			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
-			%fprintf(fid,'#PBS -q workq\n');
-			fprintf(fid,'#PBS -S /bin/bash\n'); % set shell name
-			fprintf(fid,'#PBS -N %s\n',modelname); % set job name
-			% 8 node is available at simba
-			if (cluster.node >= cluster.numnodes),
-				fprintf('cluster node : simba%02\n', cluster.node);
-				fprintf('number of node : %d\n', cluster.numnodes);
-				error('ERROR: cluster node is higher than number of nodes');
-			end
-			% node -> how many nodes do you use?
-			% ppn (cluster.np) -> how many cpus do you use?
-			fprintf(fid,'#PBS -l nodes=%d:ppn=%d\n',cluster.node,cluster.np);
-			fprintf(fid,'#PBS -o %s.outlog \n',modelname);
-			fprintf(fid,'#PBS -e %s.errlog \n',modelname);
-			fprintf(fid,'\n');
-			fprintf(fid,'source ~/.bashrc\n'); % FIXME
-			fprintf(fid,'export ISSM_DIR="%s/../"\n',cluster.codepath); %FIXME
-			fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME
-			fprintf(fid,'\n');
-			fprintf(fid,'module load intel18/compiler-18\n');
-			fprintf(fid,'module load intel18/mvapich2-2.2\n');
-			fprintf(fid,'\n');
-			fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,dirname);
-
-			if true, % HACK
-				% NOTE: old version
-				% fprintf(fid,'mpiexec -genv MV2_ENABLE_AFFINITY 0 -np %i %s/issm.exe %s %s %s\n',cluster.np*cluster.node,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
-				% NOTE: new version
-				% not requires np processor and machine file. Execution of ISSM is operated under PBS.
-				fprintf(fid,'mpiexec -genv MV2_ENABLE_AFFINITY 0 -np %d %s/issm.exe %s %s %s\n',...
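The classes above (greenplanet.m with SLURM, hpc.m and hpc_simba.m with SGE/PBS) differ mainly in the scheduler preamble they write before the `mpiexec` line. A sketch of the SLURM variant, with flags mirroring what greenplanet.m emits — the walltime-in-seconds convention follows that file's comment, and the partition/node values below are illustrative, not a real cluster configuration:

```python
def slurm_header(jobname, partition, nodes, ntasks, minutes, mem_gb):
    """Return the #SBATCH preamble lines, in the style of greenplanet.m.

    Exact flags and units vary between sites; greenplanet.m multiplies
    cluster.time (minutes) by 60 and comments that walltime is in seconds,
    so this sketch does the same.
    """
    return [
        '#!/bin/bash',
        '#SBATCH --job-name={}'.format(jobname),
        '#SBATCH -p {}'.format(partition),
        '#SBATCH -N {} -n {}'.format(nodes, ntasks),
        '#SBATCH --time={}'.format(minutes * 60),  # walltime in seconds, as in greenplanet.m
        '#SBATCH --mem-per-cpu={}gb'.format(mem_gb),
        '#SBATCH -o {}.outlog'.format(jobname),
        '#SBATCH -e {}.errlog'.format(jobname),
    ]


# Illustrative values only
header = slurm_header('square', 'pub64', 1, 8, 30, 4)
```

After the preamble, each class appends the environment setup (`source $ISSM_DIR/etc/environment.sh`), a `cd` into the execution directory, and the `mpiexec` line; only the header differs per scheduler.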
- cluster.np,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname); - else - % use machine file for obtaining cluster nodes - fprintf(fid,'mpiexec -machinefile %s -np %i %s/issm.exe %s %s %s\n',[cluster.executionpath '/' dirname '/simba.host'],cluster.np*cluster.node,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname); - end - if ~io_gather, %concatenate the output files: - fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname); - end - fclose(fid); - - %% generate machinefile "simba.host" file for SIMBA - %fid = fopen('simba.host','w'); - %nodeorder = [1:cluster.numnodes, 0]; - - %% generate host of simba - %for i = 1:cluster.numnodes-1, - % fprintf(fid,'simba%02d:%2d\n',nodeorder(i),cluster.cpuspernode); - %end - %fclose(fid); - - %in interactive mode, create a run file, and errlog and outlog file - if cluster.interactive, - fid=fopen([modelname '.run'],'w'); - fprintf(fid,'mpiexec -np %i %s/issm.exe %s %s %s\n',cluster.np,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname); - if ~io_gather, %concatenate the output files: - fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname); - end - fclose(fid); - fid=fopen([modelname '.errlog'],'w'); - fclose(fid); - fid=fopen([modelname '.outlog'],'w'); - fclose(fid); - end - end %}}} - function UploadQueueJob(cluster,modelname,dirname,filelist)% {{{ - %compress the files into one zip. 
- compressstring=['tar -zcf ' dirname '.tar.gz ']; - for i=1:numel(filelist), - compressstring = [compressstring ' ' filelist{i}]; - end - if cluster.interactive, - compressstring = [compressstring ' ' modelname '.errlog ' modelname '.outlog ']; - end - - % add machine file for SIMBA operation - %compressstring = [compressstring ' ' 'simba.host']; - system(compressstring); - - if cluster.verbose - disp('hpc_simba2: uploading input file and queueing script'); - if exist(sprintf('%s.tar.gz',dirname)) - fprintf('hpc_simba2: -- we find %s.tar.gz\n',dirname); - end - fprintf('hpc_simba2: -- remote hostname: %s\n',cluster.name); - end - - % use rclone for upload model. - isrclone = 0; - if isrclone, - command = ['ssh simba20 "rclone copy simba00:' pwd '/' dirname '.tar.gz ' cluster.executionpath '"']; - system(command); - else - if strfind(cluster.executionpath,'/data2/') - launchcommand = ['cp ' pwd '/' dirname '.tar.gz ' cluster.executionpath]; - issmssh(oshostname,cluster.login,cluster.port,launchcommand); - else % using scpout for exporting data... - issmscpout(cluster.name,cluster.executionpath,cluster.login,cluster.port,{[dirname '.tar.gz']}); - end - end - end %}}} - function LaunchQueueJob(cluster,modelname,dirname,filelist,restart,batch)% {{{ - if ~isempty(restart) - if cluster.isqsub - launchcommand=['cd ' cluster.executionpath ' && cd ' dirname ' && hostname && qsub ' modelname '.queue ']; - else - launchcommand=['cd ' cluster.executionpath ' && cd ' dirname ' && hostname && source' modelname '.queue ']; - end - else - if cluster.isqsub - launchcommand=['cd ' cluster.executionpath ' && rm -rf ./' dirname ' && mkdir ' dirname ... - ' && cd ' dirname ' && mv ../' dirname '.tar.gz ./ && tar -zxf ' dirname '.tar.gz && hostname && qsub ' modelname '.queue ']; - else - launchcommand=['cd ' cluster.executionpath ' && rm -rf ./' dirname ' && mkdir ' dirname ... 
-			' && cd ' dirname ' && mv ../' dirname '.tar.gz ./ && tar -zxf ' dirname '.tar.gz && hostname && source ' modelname '.queue '];
-				end
-			end
-
-			if cluster.verbose
-				fprintf('check simulation at\n');
-				fprintf('%s/%s\n',cluster.executionpath,dirname);
-			end
-
-			% launch simulation
-			issmssh(cluster.name,cluster.login,cluster.port,launchcommand);
-		end %}}}
-		function Download(cluster,dirname,filelist)% {{{
-			%copy files from cluster to current directory
-			directory=[cluster.executionpath '/' dirname '/'];
-
-			isrclone = 0;
-			if isrclone,
-				command = ['ssh simba20 "rclone copy ' directory ' simba00:' pwd '/'];
-				for i = 1:length(filelist)
-					command = [command ' --include ' filelist{i}];
-				end
-				if cluster.verbose,
-					command = [command ' --progress '];
-				end
-				command = [command '"'];
-				%assignin('base','command',command);
-
-				disp('download outputs....');
-				system(command);
-				disp('download outputs.... done...');
-			else
-				% In case of "SIMBA" machine, it shares the data storage. If following directory is set we do not need to download load file with using "issmscpin". Directly read file...
-				if 1==strfind(cluster.executionpath,'/data2/')
-					% tricky part for assign "cluster.name" as "simba00".
-					disp('hpc_simba2: download file from /data2/ directory!!! Not use scpin.');
-					issmscpin('simba00',cluster.login,cluster.port,directory,filelist);
-				else
-					issmscpin(cluster.name,cluster.login,cluster.port,directory,filelist);
-				end
-			end
-		end %}}}
-	end
-end
diff --git a/src/m/classes/clusters/local.m b/src/m/classes/clusters/local.m
index 4344cf178..c54282036 100644
--- a/src/m/classes/clusters/local.m
+++ b/src/m/classes/clusters/local.m
@@ -50,15 +50,25 @@ function disp(cluster) % {{{
 			end
 		end %}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
 			% Which executable are we calling?
 			executable='issm.exe'; % default
-
-			if isdakota,
+			if isdakota
 				executable='issm_dakota.exe';
 			end
 
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!%s\n',cluster.shell);
 			fprintf(fid,'mpiexec -np %i %s/%s %s %s %s \n',cluster.np,cluster.codepath,executable,solution,'./',modelname);
 			fclose(fid);
@@ -69,7 +79,6 @@ function UploadQueueJob(cluster,modelname,dirname,filelist)% {{{
 		end %}}}
 		function LaunchQueueJob(cluster,modelname,dirname,filelist,restart,batch)% {{{
 			system(['source ' modelname '.queue']);
-
 		end %}}}
 		function Download(cluster,dirname,filelist)% {{{
 		end %}}}
diff --git a/src/m/classes/clusters/local.py b/src/m/classes/clusters/local.py
index 25f17c21c..ad8617de9 100644
--- a/src/m/classes/clusters/local.py
+++ b/src/m/classes/clusters/local.py
@@ -32,6 +32,7 @@ def __init__(self, *args):  # {{{
 
         # OK get other fields
         self = options.AssignObjectFields(self)
+    #}}}
 
     def __repr__(cluster):  # {{{
         # Display the object
@@ -57,17 +58,29 @@ def checkconsistency(self, md, solution, analyses): # {{{
 
         return md
     # }}}
-    def BuildQueueScript(cluster, dirname, modelname, solution, io_gather, isvalgrind, isgporf, isdakota, isoceancoupling): # {{{
+    def BuildQueueScript(self, md, filename): # {{{
+
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+
         # Which executable are we calling?
         executable = 'issm.exe'  # Default
 
        if isdakota:
            executable = 'issm_dakota.exe'
 
-        fid = open(modelname + '.queue', 'w')
+        fid = open(filename, 'w')
         fid.write('#!{}\n'.format(cluster.shell))
         fid.write('mpiexec -np {} {}/{} {} {} {}\n',cluster.np,cluster.codepath,executable,solution,'./',modelname)
         fid.close()
+    #}}}
 
     def UploadQueueJob(cluster, modelname, dirname, filelist): # {{{
         # Do nothing really
diff --git a/src/m/classes/clusters/localpfe.m b/src/m/classes/clusters/localpfe.m
index cb62136a3..772970741 100644
--- a/src/m/classes/clusters/localpfe.m
+++ b/src/m/classes/clusters/localpfe.m
@@ -74,7 +74,17 @@ function disp(cluster) % {{{
 			end
 		end %}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
 
 			%write queuing script
 			%what is the executable being called?
@@ -86,26 +96,26 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 				end
 			end
 
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!%s\n',cluster.shell);
 			fprintf(fid,'mpiexec -np %i %s/%s %s %s %s \n',cluster.np,cluster.codepath,executable,solution,cluster.executionpath,modelname);
 			fclose(fid);
 
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
-				fid=fopen([modelname '.run'],'w');
-				if cluster.interactive==10,
+			if cluster.interactive
+				fid=fopen([filename '.run'],'w');
+				if cluster.interactive==10
 					fprintf(fid,'module unload mpi-mvapich2/1.4.1/gcc\n');
 					fprintf(fid,'mpiexec -np %i %s/%s %s %s %s\n',cluster.np,cluster.codepath,executable,solution,[pwd() '/run'],modelname);
 				else
-					if ~isvalgrind,
+					if ~isvalgrind
 						fprintf(fid,'mpiexec -np %i %s/%s %s %s %s\n',cluster.np,cluster.codepath,executable,solution,cluster.executionpath,modelname);
 						%fprintf(fid,'mpiexec -np %i %s/%s %s %s %s\n',cluster.nprocs(),cluster.codepath,executable,solution,[cluster.executionpath '/Interactive' num2str(cluster.interactive)],modelname);
 					else
 						fprintf(fid,'mpiexec -np %i valgrind --leak-check=full %s/%s %s %s %s\n',cluster.np,cluster.codepath,executable,solution,[cluster.executionpath '/Interactive' num2str(cluster.interactive)],modelname);
 					end
 				end
-				if ~io_gather, %concatenate the output files:
+				if ~io_gather %concatenate the output files:
 					fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 				end
 				fclose(fid);
@@ -115,7 +125,12 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			end
 		end %}}}
-		function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirnames,modelnames,nps) % {{{
+		function BuildQueueScriptMultipleModels(cluster, slm, dirnames, modelnames, nps, filename) % {{{
+
+			%Get variables from slm
+			dirname = slm.private.runtimename;
+			modelname = slm.miscellaneous.name;
+			solution = slm.private.solution;
 
 			%some checks:
 			if isempty(modelname), error('BuildQueueScriptMultipleModels error message: need a non empty model name!');end
@@ -126,11 +141,8 @@ function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirna
 			if ispc & ~ismingw, error('BuildQueueScriptMultipleModels not support yet on windows machines');end;
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
-
+			fid=fopen(filename,'w');
 			fprintf(fid,'#!%s\n',cluster.shell);
-
-			%number of cpus:
 			mpistring=sprintf('mpiexec -np %i ',cluster.np);
 
 			%executable:
@@ -147,11 +159,11 @@ function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirna
 			%icecaps, glaciers and earth location, names and number of processors associated:
 			for i=1:length(dirnames),
-				mpistring=[mpistring sprintf(' %s/%s %s %i ',cluster.executionpath,dirnames{i},modelnames{i},nps{i})];
+				mpistring=[mpistring sprintf(' %s/%s %s %i ',cluster.executionpath,dirnames{i},modelnames{i},nps{i})];
 			end
 
 			%log files:
-			if ~cluster.interactive,
+			if ~cluster.interactive
 				mpistring=[mpistring sprintf('2> %s.errlog> %s.outlog',modelname,modelname)];
 			end
@@ -160,78 +172,82 @@ function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirna
 			fclose(fid);
 
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.errlog'],'w');
 				fclose(fid);
 				fid=fopen([modelname '.outlog'],'w');
 				fclose(fid);
 			end
 		end %}}}
-		function BuildQueueScriptIceOcean(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota) % {{{
+		function BuildQueueScriptIceOcean(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
 
 			%write queuing script
 			%what is the executable being called?
 			executable='issm_ocean.exe';
 
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!%s\n',cluster.shell);
 			fprintf(fid,'mpiexec -np %i %s/%s %s %s %s : -np %i ./mitgcmuv\n',cluster.np,cluster.codepath,executable,solution,cluster.executionpath,modelname,cluster.npocean);
 			fclose(fid);
 
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.errlog'],'w');
 				fclose(fid);
 				fid=fopen([modelname '.outlog'],'w');
 				fclose(fid);
 			end
 		end %}}}
-		function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind,isgprof) % {{{
-
-			%write queuing script
-			if ~ispc,
-
-				fid=fopen([modelname '.queue'],'w');
-				fprintf(fid,'#!/bin/sh\n');
-				if ~isvalgrind,
-					if cluster.interactive
-						fprintf(fid,'mpiexec -np %i %s/kriging.exe %s %s ',cluster.np,cluster.codepath,[cluster.executionpath '/' modelname],modelname);
-					else
-						fprintf(fid,'mpiexec -np %i %s/kriging.exe %s %s 2> %s.errlog >%s.outlog ',cluster.np,cluster.codepath,[cluster.executionpath '/' modelname],modelname,modelname,modelname);
-					end
-				elseif isgprof,
-					fprintf(fid,'\n gprof %s/kriging.exe gmon.out > %s.performance',cluster.codepath,modelname);
-				else
-					%Add --gen-suppressions=all to get suppression lines
-					fprintf(fid,'LD_PRELOAD=%s \\\n',cluster.valgrindlib);
-					fprintf(fid,'mpiexec -np %i %s --leak-check=full --suppressions=%s %s/kriging.exe %s %s 2> %s.errlog >%s.outlog ',...
-						cluster.np,cluster.valgrind,cluster.valgrindsup,cluster.codepath,[cluster.executionpath '/' modelname],modelname,modelname,modelname);
-				end
-				if ~io_gather, %concatenate the output files:
-					fprintf(fid,'\ncat %s.outbin.* > %s.outbin',modelname,modelname);
-				end
-				fclose(fid);
-
-			else % Windows
-
-				fid=fopen([modelname '.bat'],'w');
-				fprintf(fid,'@echo off\n');
+		function BuildKrigingQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			fid=fopen(filename, 'w');
+			fprintf(fid,'#!/bin/sh\n');
+			if ~isvalgrind
 				if cluster.interactive
-					fprintf(fid,'"%s/issm.exe" %s "%s" %s ',cluster.codepath,solution,[cluster.executionpath '/' modelname],modelname);
+					fprintf(fid,'mpiexec -np %i %s/kriging.exe %s %s ',cluster.np,cluster.codepath,[cluster.executionpath '/' modelname],modelname);
 				else
-					fprintf(fid,'"%s/issm.exe" %s "%s" %s 2> %s.errlog >%s.outlog',...
-						cluster.codepath,solution,[cluster.executionpath '/' modelname],modelname,modelname,modelname);
+					fprintf(fid,'mpiexec -np %i %s/kriging.exe %s %s 2> %s.errlog >%s.outlog ',cluster.np,cluster.codepath,[cluster.executionpath '/' modelname],modelname,modelname,modelname);
 				end
-				fclose(fid);
+			elseif isgprof
+				fprintf(fid,'\n gprof %s/kriging.exe gmon.out > %s.performance',cluster.codepath,modelname);
+			else
+				%Add --gen-suppressions=all to get suppression lines
+				fprintf(fid,'LD_PRELOAD=%s \\\n',cluster.valgrindlib);
+				fprintf(fid,'mpiexec -np %i %s --leak-check=full --suppressions=%s %s/kriging.exe %s %s 2> %s.errlog >%s.outlog ',...
+					cluster.np,cluster.valgrind,cluster.valgrindsup,cluster.codepath,[cluster.executionpath '/' modelname],modelname,modelname,modelname);
 			end
+			if ~io_gather, %concatenate the output files:
+				fprintf(fid,'\ncat %s.outbin.* > %s.outbin',modelname,modelname);
+			end
+			fclose(fid);
 
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.errlog'],'w');
 				fclose(fid);
 				fid=fopen([modelname '.outlog'],'w');
 				fclose(fid);
 			end
 		end %}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist)% {{{
-			if ~ispc | ismingw,
+			if ~ispc || ismingw
 
 				%compress the files into one zip.
 				compressstring=['tar -zcf ' dirname '.tar.gz '];
diff --git a/src/m/classes/clusters/lonestar.m b/src/m/classes/clusters/lonestar.m
index 1b3e39b97..3832d60b7 100644
--- a/src/m/classes/clusters/lonestar.m
+++ b/src/m/classes/clusters/lonestar.m
@@ -71,13 +71,24 @@ function disp(cluster) % {{{
 			if isempty(cluster.executionpath), md = checkmessage(md,'executionpath empty'); end
 		end %}}}
-		function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind,isgprof) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildKrigingQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#$ -N %s\n',modelname);
 			fprintf(fid,'#$ -q %s \n',cluster.queue);
@@ -90,30 +101,41 @@ function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind
 			fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME
 			fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,modelname);
 			fprintf(fid,'mpiexec -np %i %s/kriging.exe %s %s\n',cluster.nprocs(),cluster.codepath,[cluster.executionpath '/' modelname],modelname);
-			if ~io_gather, %concatenate the output files:
+			if ~io_gather %concatenate the output files:
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 			end
 			fclose(fid);
 		end %}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			executable='issm.exe';
-			if isdakota,
+			if isdakota
 				version=IssmConfig('_DAKOTA_VERSION_'); version=str2num(version(1:3));
 				if (version>=6), executable='issm_dakota.exe'; end
 			end
-			if isoceancoupling,
+			if isoceancoupling
 				executable='issm_ocean.exe';
 			end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename,'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#SBATCH -J %s \n',modelname);
@@ -127,7 +149,7 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 				fprintf(fid,['module load ' cluster.modules{i} '\n']);
 			end
 
-			if isdakota,
+			if isdakota
 				fprintf(fid,'export KMP_AFFINITY="granularity=fine,compact,verbose" \n\n');
 			end
@@ -143,24 +165,22 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME
 			fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,dirname);
 			fprintf(fid,'ibrun -np %i %s/%s %s %s %s\n',cluster.nprocs(),cluster.codepath,executable,solution,[cluster.executionpath '/' dirname],modelname);
-			if ~io_gather, %concatenate the output files:
+			if ~io_gather %concatenate the output files:
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 			end
 			fclose(fid);
 
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.run'],'w');
 				fprintf(fid,'ibrun -np %i %s/%s %s %s %s\n',cluster.nprocs(),executable,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
 				if ~io_gather, %concatenate the output files:
 					fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 				end
 				fclose(fid);
-				fid=fopen([modelname '.errlog'],'w');
-				fclose(fid);
-				fid=fopen([modelname '.outlog'],'w');
-				fclose(fid);
+				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end %}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
diff --git a/src/m/classes/clusters/maui.m b/src/m/classes/clusters/maui.m
index 97a557bc0..d221a16bb 100644
--- a/src/m/classes/clusters/maui.m
+++ b/src/m/classes/clusters/maui.m
@@ -69,17 +69,24 @@ function disp(cluster) % {{{
 			if isempty(cluster.executionpath), md = checkmessage(md,'executionpath empty'); end
 		end %}}}
-		function BuildKrigingQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-			error('not implemented yet');
-		end
-		%}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
 
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/bash\n');
 			fprintf(fid,'#SBATCH --job-name=%s\n',modelname);
 			fprintf(fid,'#SBATCH --account=%s \n',cluster.projectaccount);
@@ -95,22 +102,20 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fprintf(fid,'module swap PrgEnv-cray PrgEnv-intel\n');
 			fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,dirname);
 			fprintf(fid,'srun -n %i %s/issm.exe %s %s %s\n',cluster.nprocs(),cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
-			if ~io_gather, %concatenate the output files:
+			if ~io_gather %concatenate the output files:
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 			end
 			fclose(fid);
 
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.run'],'w');
 				fprintf(fid,'mpiexec -n %i %s/issm.exe %s %s %s\n',cluster.nprocs(),cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
 				if ~io_gather, %concatenate the output files:
 					fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 				end
 				fclose(fid);
-				fid=fopen([modelname '.errlog'],'w');
-				fclose(fid);
-				fid=fopen([modelname '.outlog'],'w');
-				fclose(fid);
+				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end %}}}
diff --git a/src/m/classes/clusters/pace.m b/src/m/classes/clusters/pace.m
index 73fd52046..e2afcc490 100644
--- a/src/m/classes/clusters/pace.m
+++ b/src/m/classes/clusters/pace.m
@@ -51,15 +51,26 @@ function disp(cluster) % {{{
 		end %}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
 
-			executable = 'issm.exe';
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			executable = 'issm.exe';
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/sh\n');
 			fprintf(fid,'#SBATCH -t%i\n',cluster.time);
@@ -77,9 +88,7 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fprintf(fid,'cd $SLURM_SUBMIT_DIR\n');
 			fprintf(fid,'export LD_LIBRARY_PATH=/opt/slurm/current/lib:/opt/pmix/current/lib:$LD_LIBRARY_PATH \n');
 			fprintf(fid,'srun --mpi=pmi2 -n %i %s/%s %s %s %s \n',cluster.np,cluster.codepath,executable,solution,[cluster.executionpath '/' dirname],modelname);
-
 			fclose(fid);
-
 		end %}}}
 		function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{
diff --git a/src/m/classes/clusters/pfe.m b/src/m/classes/clusters/pfe.m
index 51ccc7a33..4835be184 100644
--- a/src/m/classes/clusters/pfe.m
+++ b/src/m/classes/clusters/pfe.m
@@ -153,23 +153,34 @@ function disp(cluster) % {{{
 		end %}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
 
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			executable='issm.exe';
-			if isdakota,
+			if isdakota
 				version=IssmConfig('_DAKOTA_VERSION_'); version=str2num(version(1:3));
 				if (version>=6), executable='issm_dakota.exe'; end
 			end
-			if isoceancoupling,
+			if isoceancoupling
 				executable='issm_ocean.exe';
 			end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#PBS -S /bin/bash\n');
 %			fprintf(fid,'#PBS -N %s\n',modelname);
 			fprintf(fid,'#PBS -l select=%i:ncpus=%i:model=%s\n',cluster.numnodes,cluster.cpuspernode,cluster.processor);
@@ -190,53 +201,54 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			end
 			fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME
 			fprintf(fid,'cd %s/%s/\n\n',cluster.executionpath,dirname);
-			if ~isvalgrind,
+			if ~isvalgrind
 				%fprintf(fid,'/u/scicon/tools/bin/several_tries mpiexec -np %i mbind.x -cs -n%i %s/%s %s %s/%s %s\n',cluster.nprocs(),cluster.cpuspernode,cluster.codepath,executable,solution,cluster.executionpath,dirname,modelname);
 				fprintf(fid,'mpiexec -np %i %s/%s %s %s/%s %s\n',cluster.nprocs(),cluster.codepath,executable,solution,cluster.executionpath,dirname,modelname);
 			else
 				fprintf(fid,'mpiexec -np %i valgrind --leak-check=full %s/%s %s %s %s\n',cluster.nprocs(),cluster.codepath,executable,solution,[cluster.executionpath '/' dirname],modelname);
 			end
-			if ~io_gather, %concatenate the output files:
+			if ~io_gather %concatenate the output files:
 				fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 			end
 			fclose(fid);
 
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
-				fid=fopen([modelname '.run'],'w');
+			if cluster.interactive
+				fid=fopen([filename '.run'],'w');
 				if cluster.interactive==10,
 					fprintf(fid,'module unload mpi-mvapich2/1.4.1/gcc\n');
 					fprintf(fid,'mpiexec -np %i %s/%s %s %s %s\n',cluster.nprocs(),cluster.codepath,executable,solution,[pwd() '/run'],modelname);
 				else
-					if ~isvalgrind,
+					if ~isvalgrind
 						fprintf(fid,'mpiexec -np %i %s/%s %s %s %s\n',cluster.nprocs(),cluster.codepath,executable,solution,[cluster.executionpath '/Interactive' num2str(cluster.interactive)],modelname);
 					else
 						fprintf(fid,'mpiexec -np %i valgrind --leak-check=full %s/%s %s %s %s\n',cluster.nprocs(),cluster.codepath,executable,solution,[cluster.executionpath '/Interactive' num2str(cluster.interactive)],modelname);
 					end
 				end
-				if ~io_gather, %concatenate the output files:
+				if ~io_gather %concatenate the output files:
 					fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname);
 				end
 				fclose(fid);
-				fid=fopen([modelname '.errlog'],'w');
-				fclose(fid);
-				fid=fopen([modelname '.outlog'],'w');
-				fclose(fid);
+				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end %}}}
-		function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirnames,modelnames,nps) % {{{
+		function BuildQueueScriptMultipleModels(cluster, slm, dirnames, modelnames, nps, filename) % {{{
+
+			%Get variables from slm
+			dirname = slm.private.runtimename;
+			modelname = slm.miscellaneous.name;
+			solution = slm.private.solution;
 
 			%some checks:
-			if isempty(modelname), error('BuildQueueScriptMultipleModels error message: need a non empty model name!');end
+			if isempty(modelname) error('BuildQueueScriptMultipleModels error message: need a non empty model name!');end
 
 			%what is the executable being called?
 			executable='issm_slc.exe';
-
-			if ispc & ~ismingw, error('BuildQueueScriptMultipleModels not support yet on windows machines');end;
+			if ispc && ~ismingw, error('BuildQueueScriptMultipleModels not support yet on windows machines');end;
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
-
+			fid=fopen(filename,'w');
 			fprintf(fid,'#PBS -S /bin/bash\n');
 			fprintf(fid,'#PBS -N %s\n',modelname);
 			fprintf(fid,'#PBS -l select=%i:ncpus=%i:model=%s\n',cluster.numnodes,cluster.cpuspernode,cluster.processor);
@@ -278,8 +290,8 @@ function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirna
 			fprintf(fid,mpistring);
 			fclose(fid);
 
-			if cluster.interactive,
-				fid=fopen([modelname '.run'],'w');
+			if cluster.interactive
+				fid=fopen([filename '.run'],'w');
 
 				%number of cpus:
 				mpistring=sprintf('mpiexec -np %i ',cluster.numnodes*cluster.cpuspernode);
@@ -297,7 +309,7 @@ function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirna
 				mpistring=[mpistring sprintf(' %i ',length(dirnames))];
 
 				%icecaps, glaciers and earth location, names and number of processors associated:
-				for i=1:length(dirnames),
+				for i=1:length(dirnames)
 					mpistring=[mpistring sprintf(' %s/Interactive%i %s %i ',cluster.executionpath,cluster.interactive,modelnames{i},nps{i})];
 				end
@@ -305,19 +317,28 @@ function BuildQueueScriptMultipleModels(cluster,dirname,modelname,solution,dirna
 				fprintf(fid,mpistring);
 				fclose(fid);
 
-				fid=fopen([modelname '.errlog'],'w');
-				fclose(fid);
-				fid=fopen([modelname '.outlog'],'w');
-				fclose(fid);
+				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end %}}}
-		function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind,isgprof) % {{{
+		function BuildKrigingQueueScript(cluster, md, filename) % {{{
 
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#PBS -S /bin/bash\n');
 %			fprintf(fid,'#PBS -N %s\n',modelname);
 			fprintf(fid,'#PBS -l select=%i:ncpus=%i:model=%s\n',cluster.numnodes,cluster.cpuspernode,cluster.processor);
@@ -358,10 +379,20 @@ function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind
 				fclose(fid);
 			end
 		end %}}}
-		function BuildOceanQueueScript(np,cluster,modelname) % {{{
+		function BuildOceanQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#PBS -S /bin/bash\n');
 			fprintf(fid,'#PBS -l select=1:ncpus=%i:model=%s\n',np,cluster.processor);
 			fprintf(fid,'#PBS -l walltime=%i\n',cluster.time*60); %walltime is in seconds.
@@ -385,16 +416,14 @@ function BuildOceanQueueScript(np,cluster,modelname) % {{{
 			fclose(fid);
 
 			%in interactive mode, create a run file, and errlog and outlog file
-			if cluster.interactive,
+			if cluster.interactive
 				fid=fopen([modelname '.run'],'w');
 				fprintf(fid,'module load mpi-sgi/mpt.2.15r20\n');
 				fprintf(fid,['mpiexec -np %i ./mitgcmuv \n'],np);
 				fprintf(fid,['touch ' modelname '.lock %s\n']);
 				fclose(fid);
-				fid=fopen([modelname '.errlog'],'w');
-				fclose(fid);
-				fid=fopen([modelname '.outlog'],'w');
-				fclose(fid);
+				fid=fopen([modelname '.errlog'],'w'); fclose(fid);
+				fid=fopen([modelname '.outlog'],'w'); fclose(fid);
 			end
 		end %}}}
diff --git a/src/m/classes/clusters/pfe.py b/src/m/classes/clusters/pfe.py
index 8be8edca5..8627b9579 100644
--- a/src/m/classes/clusters/pfe.py
+++ b/src/m/classes/clusters/pfe.py
@@ -148,7 +148,18 @@ def checkconsistency(self, md, solution, analyses): # {{{
 
         return self
     # }}}
-    def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{
+    def BuildQueueScript(self, md, filename): # {{{
+
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+
         if isgprof:
             print('gprof not supported by cluster, ignoring...')
@@ -162,7 +173,7 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind,
             executable = 'issm_ocean.exe'
 
         # Write queuing script
-        fid = open(modelname + '.queue', 'w')
+        fid = open(filename, 'w')
         fid.write('#PBS -S /bin/bash\n')
         fid.write('#PBS -l select={}:ncpus={}:model={}\n'.format(self.numnodes, self.cpuspernode, self.processor))
         fid.write('#PBS -l walltime={}\n'.format(self.time * 60))  # walltime is in seconds
@@ -199,6 +210,7 @@ def UploadQueueJob(self, modelname, dirname, filelist): # {{{
         issmscpout(self.name, directory, self.login, self.port, [dirname + '.tar.gz'])
     # }}}
+
     def LaunchQueueJob(self, modelname, dirname, filelist, restart, batch): # {{{
         # Launch command, to be executed via ssh
         if self.interactive:
@@ -219,6 +231,7 @@ def LaunchQueueJob(self, modelname, dirname, filelist, restart, batch): # {{{
 
         issmssh(self.name, self.login, self.port, launchcommand)
     # }}}
+
     def Download(self, dirname, filelist): # {{{
         # Copy files from cluster to current directory
         directory = '{}/{}/'.format(self.executionpath, dirname)
diff --git a/src/m/classes/clusters/pollux.m b/src/m/classes/clusters/pollux.m
index ed9bbf6a1..9247d64c0 100644
--- a/src/m/classes/clusters/pollux.m
+++ b/src/m/classes/clusters/pollux.m
@@ -45,13 +45,24 @@ function disp(cluster) % {{{
 			QueueRequirements(available_queues,queue_requirements_time,queue_requirements_np,cluster.queue,cluster.np,cluster.time)
 		end %}}}
-		function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
+		function BuildQueueScript(cluster, md, filename) % {{{
 
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
+			fid=fopen(filename, 'w');
 			fprintf(fid,'#!/bin/sh\n');
 			fprintf(fid,'#PBS -l walltime=%i\n',cluster.time*60); %walltime is in seconds.
 			fprintf(fid,'#PBS -N %s\n',modelname);
@@ -66,7 +77,6 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin
 			fprintf(fid,'export OMP_NUM_THREADS=1\n');
 			fprintf(fid,'dplace -s1 -c0-%i mpiexec -np %i %s/issm.exe %s %s %s',cluster.np-1,cluster.np,cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname);
 			fclose(fid);
-
 		end %}}}
 		function LaunchQueueJob(cluster,modelname,dirname,filelist,restart) % {{{
diff --git a/src/m/classes/clusters/saga.py b/src/m/classes/clusters/saga.py
index fd0b8ab8d..342e3ab3c 100644
--- a/src/m/classes/clusters/saga.py
+++ b/src/m/classes/clusters/saga.py
@@ -91,7 +91,18 @@ def checkconsistency(self, md, solution, analyses): # {{{
 
         return self
     # }}}
-    def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{
+    def BuildQueueScript(self, md, filename): # {{{
+
+        # Get variables from md
+        dirname = md.private.runtimename
+        modelname = md.miscellaneous.name
+        solution = md.private.solution
+        io_gather = md.settings.io_gather
+        isvalgrind = md.debug.valgrind
+        isgprof = md.debug.gprof
+        isdakota = md.qmu.isdakota
+        isoceancoupling = md.transient.isoceancoupling
+
         executable = 'issm.exe'
         if isdakota:
             version = IssmConfig('_DAKOTA_VERSION_')[0:2]
@@ -107,8 +118,8 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind,
         h, m = divmod(m, 60)
         d, h = divmod(h, 24)
         timestring = "%02d-%02d:%02d:%02d" % (d, h, m, s)
-        print('timestring')
-        fid = open(modelname + '.queue', 'w')
+
+        fid = open(filename, 'w')
         fid.write('#!/bin/bash -l\n')
         fid.write('#SBATCH --job-name=%s \n' % shortname)
         if self.queue in ['devel']:
@@ -145,6 +156,7 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind,
 
         fid.close()
     # }}}
+
     def UploadQueueJob(self, modelname, dirname, filelist): # {{{
         # Compress the files into one zip
         compressstring = 'tar -zcf %s.tar.gz ' % dirname
diff --git a/src/m/classes/clusters/sherlock.m b/src/m/classes/clusters/sherlock.m
index df87f457b..06d8ab724 100644
--- a/src/m/classes/clusters/sherlock.m
+++ b/src/m/classes/clusters/sherlock.m
@@ -60,38 +60,24 @@ function disp(cluster) % {{{
 
 		end %}}}
-		function BuildKrigingQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{
-
-			if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end
-			if(isgprof), disp('gprof not supported by cluster, ignoring...'); end
+		function BuildQueueScript(cluster, md, filename) % {{{
+
+			%Get variables from md
+			dirname = md.private.runtimename;
+			modelname = md.miscellaneous.name;
+			solution = md.private.solution;
+			io_gather = md.settings.io_gather;
+			isvalgrind = md.debug.valgrind;
+			isgprof = md.debug.gprof;
+			isdakota = md.qmu.isdakota;
+			isoceancoupling = md.transient.isoceancoupling;
+
+			%checks
+			if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end
+			if(isgprof) disp('gprof not supported by cluster, ignoring...'); end
 
 			%write queuing script
-			fid=fopen([modelname '.queue'],'w');
-			fprintf(fid,'#!/bin/bash\n');
-			fprintf(fid,'#SBATCH --job-name=%s\n',mdelname);
-			fprintf(fid,'#SBATCH -p %s \n',cluster.queue);
-			fprintf(fid,'#SBATCH -N %i -n %i\n',cluster.numnodes,cluster.cpuspernode);
-			fprintf(fid,'#SBATCH --time=%i\n',cluster.time*60); %walltime is in seconds.
- fprintf(fid,'#SBATCH --mem-per-cpu=%igb\n',cluster.memory); - fprintf(fid,'#SBATCH -o %s.outlog \n',modelname); - fprintf(fid,'#SBATCH -e %s.errlog \n\n',modelname); - fprintf(fid,'export ISSM_DIR="%s/../"\n',cluster.codepath); %FIXME - fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME - fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,dirname); - fprintf(fid,'mpiexec -np %i %s/kriging.exe %s %s\n',cluster.nprocs(),cluster.codepath,[cluster.executionpath '/' modelname],modelname); - if ~io_gather, %concatenate the output files: - fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname); - end - fclose(fid); - end - %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ - - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end - - %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/bash\n'); fprintf(fid,'#SBATCH --job-name=%s\n',modelname); fprintf(fid,'#SBATCH -N %i -n %i\n',cluster.numnodes,cluster.cpuspernode); @@ -103,23 +89,21 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,dirname); fprintf(fid,'mpiexec -n %i %s/issm.exe %s %s %s\n',cluster.nprocs(),cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname); - if ~io_gather, %concatenate the output files: + if ~io_gather %concatenate the output files: fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname); end fclose(fid); %in interactive mode, create a run file, and errlog and outlog file - if cluster.interactive, + if cluster.interactive fid=fopen([modelname '.run'],'w'); fprintf(fid,'mpiexec -n %i %s/issm.exe %s %s %s\n',cluster.nprocs(),cluster.codepath,solution,[cluster.executionpath '/' 
dirname],modelname); if ~io_gather, %concatenate the output files: fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname); end fclose(fid); - fid=fopen([modelname '.errlog'],'w'); - fclose(fid); - fid=fopen([modelname '.outlog'],'w'); - fclose(fid); + fid=fopen([modelname '.errlog'],'w'); fclose(fid); + fid=fopen([modelname '.outlog'],'w'); fclose(fid); end end %}}} function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{ diff --git a/src/m/classes/clusters/stanage.m b/src/m/classes/clusters/stanage.m index 21abdf72b..0fa2921f7 100644 --- a/src/m/classes/clusters/stanage.m +++ b/src/m/classes/clusters/stanage.m @@ -60,44 +60,24 @@ function disp(cluster) % {{{ if isempty(cluster.executionpath), md = checkmessage(md,'executionpath empty'); end end %}}} - function BuildKrigingQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ - - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end - - %write queuing script - fid=fopen([modelname '.queue'],'w'); - fprintf(fid,'#!/bin/bash\n'); - fprintf(fid,'#SBATCH --job-name=%s\n',modelname); - fprintf(fid,'#SBATCH --output=%s.outlog \n',modelname); - fprintf(fid,'#SBATCH --error=%s.errlog \n',modelname); - fprintf(fid,'#SBATCH --nodes=%i\n',cluster.numnodes); - fprintf(fid,'#SBATCH --ntasks-per-node=%i\n',cluster.cpuspernode); - fprintf(fid,'#SBATCH --time=%s\n',datestr(cluster.time/24,'HH:MM:SS')); %walltime is in HH:MM:SS format. 
cluster.time is in hour - fprintf(fid,'#SBATCH --mem=%iG\n',cluster.memory); - if ~isempty(cluster.email) - fprintf(fid,'#SBATCH --mail-type=%s\n',cluster.email); - fprintf(fid,'#SBATCH --mail-user=%s@%s\n',cluster.login, cluster.email_domain); - end - fprintf(fid,'\n'); - - fprintf(fid,'export ISSM_DIR="%s/../"\n',cluster.codepath); - fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); - fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,dirname); - fprintf(fid,'srun %s/kriging.exe %s %s\n', cluster.codepath,[cluster.executionpath '/' modelname],modelname); - if ~io_gather, %concatenate the output files: - fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname); - end - fclose(fid); - end - %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ - - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end + function BuildQueueScript(cluster, md, filename) % {{{ + + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; + + %checks + if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end + if(isgprof) disp('gprof not supported by cluster, ignoring...'); end %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/bash\n'); fprintf(fid,'#SBATCH --job-name=%s\n',modelname); fprintf(fid,'#SBATCH --output=%s.outlog \n',modelname); @@ -125,17 +105,15 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin fclose(fid); %in interactive mode, create a run file, and errlog and outlog file - if cluster.interactive, - fid=fopen([modelname '.run'],'w'); + if 
cluster.interactive + fid=fopen([filename '.run'],'w'); fprintf(fid,'mpirun -n %i %s/issm.exe %s %s %s\n',cluster.nprocs(), cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname); if ~io_gather, %concatenate the output files: fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname); end fclose(fid); - fid=fopen([modelname '.errlog'],'w'); - fclose(fid); - fid=fopen([modelname '.outlog'],'w'); - fclose(fid); + fid=fopen([modelname '.errlog'],'w'); fclose(fid); + fid=fopen([modelname '.outlog'],'w'); fclose(fid); end end %}}} function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{ diff --git a/src/m/classes/clusters/tetralith.m b/src/m/classes/clusters/tetralith.m index 32dbdb675..771472188 100644 --- a/src/m/classes/clusters/tetralith.m +++ b/src/m/classes/clusters/tetralith.m @@ -71,53 +71,21 @@ function disp(cluster) % {{{ numprocs=self.numnodes*self.cpuspernode; end %}}} - function BuildKrigingQueueScript(cluster,modelname,solution,io_gather,isvalgrind,isgprof) % {{{ - - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end - - %compute number of processors -% cluster.np=cluster.numnodes*cluster.cpuspernode; -% nprocs(cluster);%=cluster.numnodes*cluster.cpuspernode; - - %write queuing script - fid=fopen([modelname '.queue'],'w'); - fprintf(fid,'#!/bin/bash\n'); - fprintf(fid,'#\n'); - fprintf(fid,'#SBATCH --job-name=%s\n',modelname); -% fprintf(fid,'#SBATCH -p %s \n',cluster.partition); - fprintf(fid,'#SBATCH -A %s \n',cluster.accountname); -% fprintf(fid,'#SBATCH --mail-type=ALL\n'); - fprintf(fid,'#SBATCH -N %i -n %i\n',cluster.numnodes,cluster.cpuspernode); - %calculate walltime in hh:mm:ss format - walltime=datestr(cluster.time/(60*24),'HH:MM:SS') - fprintf(fid,'#SBATCH -t %s\n',walltime); %walltime should be in hh:mm:ss - fprintf(fid,'#SBATCH --mem=%i\n',cluster.mem); - fprintf(fid,'#SBATCH -o %s.outlog \n',modelname); - 
fprintf(fid,'#SBATCH -e %s.errlog \n\n',modelname); -% fprintf(fid,'module load intelcomp/17.0.0\n') %module load not recommended within job script at Tetralith -% fprintf(fid,'module load mpt/2.14\n') -% fprintf(fid,'module load petsc/3.7.4d\n') -% fprintf(fid,'module load parmetis/4.0.3\n') -% fprintf(fid,'module load mumps/5.0.2\n') -% fprintf(fid,'export ISSM_DIR="%s"\n',cluster.codepath); %FIXME - fprintf(fid,'export ISSM_DIR="%s/../"\n',cluster.codepath); %FIXME - fprintf(fid,'source $ISSM_DIR/etc/environment.sh\n'); %FIXME - fprintf(fid,'cd %s/%s\n\n',cluster.executionpath,dirname); -% fprintf(fid,'mpiexec -np %i %s/%s %s %s %s\n',cluster.nprocs(),cluster.codepath,executable,solution,[cluster.executionpath '/' dirname],modelname); -% fprintf(fid,'mpiexec_mpt -np %i %s/%s %s %s %s\n',cluster.np,cluster.codepath,executable,solution,[cluster.executionpath '/' dirname],modelname); - fprintf(fid,'mpiexec -np %i %s/issm.exe %s %s %s\n',cluster.nprocs(),cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname); -% fprintf(fid,'mpirun -np %i %s/issm.exe %s %s %s\n',cluster.nprocs(),cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname); - if ~io_gather, %concatenate the output files: - fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname); - end - fclose(fid); - end - %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ - - if(isvalgrind), disp('valgrind not supported by cluster, ignoring...'); end - if(isgprof), disp('gprof not supported by cluster, ignoring...'); end + function BuildQueueScript(cluster, md, filename) % {{{ + + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; + + %checks + 
if(isvalgrind) disp('valgrind not supported by cluster, ignoring...'); end + if(isgprof) disp('gprof not supported by cluster, ignoring...'); end executable='issm.exe'; if isdakota, @@ -136,7 +104,7 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin % shortname = substring(modelname,1,min(12,length(modelname))); %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/bash\n'); fprintf(fid,'#\n'); fprintf(fid,'#SBATCH --job-name=%s\n',modelname); @@ -170,17 +138,15 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin fclose(fid); %in interactive mode, create a run file, and errlog and outlog file - if cluster.interactive, - fid=fopen([modelname '.run'],'w'); + if cluster.interactive + fid=fopen([filename '.run'],'w'); fprintf(fid,'mpiexec_mpt -np %i %s/issm.exe %s %s %s\n',cluster.nprocs(),cluster.codepath,solution,[cluster.executionpath '/' dirname],modelname); if ~io_gather, %concatenate the output files: fprintf(fid,'cat %s.outbin.* > %s.outbin',modelname,modelname); end fclose(fid); - fid=fopen([modelname '.errlog'],'w'); - fclose(fid); - fid=fopen([modelname '.outlog'],'w'); - fclose(fid); + fid=fopen([modelname '.errlog'],'w'); fclose(fid); + fid=fopen([modelname '.outlog'],'w'); fclose(fid); end end %}}} function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{ diff --git a/src/m/classes/clusters/ub_ccr.py b/src/m/classes/clusters/ub_ccr.py index 0692f0c5c..e0d3f1092 100644 --- a/src/m/classes/clusters/ub_ccr.py +++ b/src/m/classes/clusters/ub_ccr.py @@ -135,7 +135,18 @@ def checkconsistency(self, md, solution, analyses): # {{{ return self # }}} - def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{ + def BuildQueueScript(self, md, filename): # {{{ + + # Get variables from md + dirname = md.private.runtimename + modelname = md.miscellaneous.name + solution = 
md.private.solution + io_gather = md.settings.io_gather + isvalgrind = md.debug.valgrind + isgprof = md.debug.gprof + isdakota = md.qmu.isdakota + isoceancoupling = md.transient.isoceancoupling + if isgprof: print('gprof not supported by cluster, ignoring...') @@ -149,7 +160,7 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, executable = 'issm_ocean.exe' # Write queuing script - fid = open(modelname + '.queue', 'w') + fid = open(filename, 'w') fid.write('#!/bin/bash -l\n') fid.write('#SBATCH --time {:02d}:{:02d}:00\n'.format(int(floor(self.time / 3600)), int(floor(self.time % 3600) / 60))) diff --git a/src/m/classes/clusters/ub_ccr_from_ghub.py b/src/m/classes/clusters/ub_ccr_from_ghub.py index c002477ca..0ac8155cd 100644 --- a/src/m/classes/clusters/ub_ccr_from_ghub.py +++ b/src/m/classes/clusters/ub_ccr_from_ghub.py @@ -88,7 +88,18 @@ def checkconsistency(self, md, solution, analyses): # {{{ return self # }}} - def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling): # {{{ + def BuildQueueScript(self, md, filename): # {{{ + + # Get variables from md + dirname = md.private.runtimename + modelname = md.miscellaneous.name + solution = md.private.solution + io_gather = md.settings.io_gather + isvalgrind = md.debug.valgrind + isgprof = md.debug.gprof + isdakota = md.qmu.isdakota + isoceancoupling = md.transient.isoceancoupling + if isgprof: print('gprof not supported by cluster, ignoring...') @@ -102,7 +113,7 @@ def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, executable = 'issm_ocean.exe' # Write queuing script - fid = open(modelname + '.queue', 'w') + fid = open(filename, 'w') partition = 'general-compute' qos = 'general-compute' diff --git a/src/m/classes/clusters/yellowstone.m b/src/m/classes/clusters/yellowstone.m index 2ded6948a..a9ab5d111 100644 --- a/src/m/classes/clusters/yellowstone.m +++ b/src/m/classes/clusters/yellowstone.m @@ 
-71,22 +71,32 @@ function disp(cluster) % {{{ end %}}} - function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrind,isgprof,isdakota,isoceancoupling) % {{{ + function BuildQueueScript(cluster, md, filename) % {{{ + + %Get variables from md + dirname = md.private.runtimename; + modelname = md.miscellaneous.name; + solution = md.private.solution; + io_gather = md.settings.io_gather; + isvalgrind = md.debug.valgrind; + isgprof = md.debug.gprof; + isdakota = md.qmu.isdakota; + isoceancoupling = md.transient.isoceancoupling; executable='issm.exe'; - if isdakota, + if isdakota version=IssmConfig('_DAKOTA_VERSION_'); version=str2num(version(1:3)); if (version>=6), executable='issm_dakota.exe'; end end - if isoceancoupling, + if isoceancoupling executable='issm_ocean.exe'; end %write queuing script - fid=fopen([modelname '.queue'],'w'); + fid=fopen(filename, 'w'); fprintf(fid,'#!/bin/tcsh\n'); fprintf(fid,'#BSUB -P %s\n',cluster.grouplist); fprintf(fid,'#BSUB -W %i:%i\n',floor(cluster.time/60),cluster.time-floor(cluster.time/60)*60); @@ -116,7 +126,6 @@ function BuildQueueScript(cluster,dirname,modelname,solution,io_gather,isvalgrin fprintf(fid,'mpirun.lsf %s/%s %s %s %s\n',cluster.codepath,executable,solution,[cluster.executionpath '/' dirname],modelname); fclose(fid); - end %}}} function UploadQueueJob(cluster,modelname,dirname,filelist) % {{{ diff --git a/src/m/classes/frictionemulator.m b/src/m/classes/frictionemulator.m new file mode 100644 index 000000000..2fe777c4e --- /dev/null +++ b/src/m/classes/frictionemulator.m @@ -0,0 +1,39 @@ +%FRICTIONEMULATOR class definition +% +% Usage: +% frictionemulator=frictionemulator(); + +classdef frictionemulator + properties (SetAccess=public) + module_dir = ''; + pt_name = ''; + py_name = ''; + end + methods + function self = extrude(self,md) % {{{ + end % }}} + function self = frictionemulator(varargin) % {{{ + end % }}} + function md = checkconsistency(self,md,solution,analyses) % {{{ + md = 
checkfield(md,'fieldname','friction.module_dir','filepath',1); + md = checkfield(md,'fieldname','friction.py_name','stringrow',1); + md = checkfield(md,'fieldname','friction.pt_name','stringrow',1); + end % }}} + function disp(self) % {{{ + disp(sprintf('Basal shear stress parameters for pre-trained python emulator')); + fielddisplay(self,'module_dir', 'directory of the emulator module'); + fielddisplay(self,'pt_name', 'name of the checkpoint file for pre-trained ML model'); + fielddisplay(self,'py_name', 'name of the python file that defines ML architecture'); + end % }}} + function marshall(self,prefix,md,fid) % {{{ + yts=md.constants.yts; + WriteData(fid,prefix,'name','md.friction.law','data',20,'format','Integer'); + WriteData(fid,prefix,'class','friction','object',self,'fieldname','module_dir','format','String') + WriteData(fid,prefix,'class','friction','object',self,'fieldname','pt_name','format','String'); + WriteData(fid,prefix,'class','friction','object',self,'fieldname','py_name','format','String'); + end % }}} + function savemodeljs(self,fid,modelname) % {{{ + error('not implemented yet!'); + end % }}} + end +end diff --git a/src/m/classes/hydrologyprescribe.m b/src/m/classes/hydrologyprescribe.m new file mode 100644 index 000000000..0e9b87f55 --- /dev/null +++ b/src/m/classes/hydrologyprescribe.m @@ -0,0 +1,67 @@ +%HYDROLOGYPRESCRIBE class definition +% +% Usage: +% hydrologyprescribe(); + +classdef hydrologyprescribe + properties (SetAccess=public) + head = NaN; + requested_outputs = {}; + end + methods + function self = extrude(self,md) % {{{ + end % }}} + function self = hydrologyprescribe(varargin) % {{{ + switch nargin + case 0 + self=setdefaultparameters(self); + case 1 + self=structtoobj(self,varargin{1}); + otherwise + error('constructor not supported'); + end + end % }}} + function list = defaultoutputs(self,md) % {{{ + list = {'HydrologyHead','EffectivePressure'}; + end % }}} + + function self = setdefaultparameters(self) % {{{ + 
self.requested_outputs={'default'}; + end % }}} + function md = checkconsistency(self,md,solution,analyses) % {{{ + + %Early return + if ~ismember('HydrologyPrescribeAnalysis',analyses) + return; + end + + if ~isempty(md.initialization.hydraulic_potential) + warning('WARN: md.initialization.hydraulic_potential is defined. However, this is not used for "hydrologyprescribe" model.') + end + md = checkfield(md,'fieldname','hydrology.head','size',[md.mesh.numberofvertices 1],'NaN',1,'Inf',1); + end % }}} + function disp(self) % {{{ + disp(sprintf(' hydrologyprescribe solution parameters:')); + disp(sprintf(' This module is to simulate effective pressure Neff using hydraulic head from external subglacial hydrology model.')); + disp(sprintf(' Neff = rho_i g H - Pw')); + disp(sprintf(' Pw = rho_w g (head - z_b)')); + disp(sprintf(' H: ice thickness [m] / head: hydraulic head [m] / z_b: bedrock elevation')); + fielddisplay(self,'head','subglacial hydrology water head (m)'); + end % }}} + function marshall(self,prefix,md,fid) % {{{ + + yts=md.constants.yts; + + WriteData(fid,prefix,'name','md.hydrology.model','data',10,'format','Integer'); + WriteData(fid,prefix,'object',self,'class','hydrology','fieldname','head','format','DoubleMat','mattype',1); + + outputs = self.requested_outputs; + pos = find(ismember(outputs,'default')); + if ~isempty(pos) + outputs(pos) = []; %remove 'default' from outputs + outputs = [outputs, defaultoutputs(self,md)]; %add defaults + end + WriteData(fid,prefix,'data',outputs,'name','md.hydrology.requested_outputs','format','StringArray'); + end % }}} + end +end diff --git a/src/m/classes/hydrologyprescribe.py b/src/m/classes/hydrologyprescribe.py new file mode 100644 index 000000000..6f9240694 --- /dev/null +++ b/src/m/classes/hydrologyprescribe.py @@ -0,0 +1,73 @@ +#!/usr/bin/env python3 +import warnings +import numpy as np +from fielddisplay import fielddisplay +from checkfield import checkfield +from WriteData import WriteData + +class
hydrologyprescribe(object): + """ + HydrologyPrescribe class definition + + Usage: + hydrologyprescribe = hydrologyprescribe() + """ + + def __init__(self): # {{{ + self.head = np.nan + + #set defaults + self.setdefaultparameters() + # }}} + + def __repr__(self): # {{{ + string = ' hydrologyprescribe solution parameters:\n' + string+= ' This module is to simulate effective pressure Neff using hydraulic head from external subglacial hydrology model\n' + string+= ' Neff = rho_i g H - Pw\n' + string+= ' Pw = rho_w g (head - z_b)\n' + string+= ' H: ice thickness (m) / head: hydraulic head (m) / z_b: bedrock elevation' + + string+= "{}\n".format(fielddisplay(self, 'head', 'subglacial hydrology water head (m)')) + return string + # }}} + + def extrude(self,md): # {{{ + return self + # }}} + + def defaultoutputs(self,md): # {{{ + lists=['HydrologyHead','EffectivePressure'] + return lists + # }}} + + def setdefaultparameters(self): # {{{ + self.requested_outputs = ['default'] + return self + # }}} + + def checkconsistency(self, md, solution, analyses): # {{{ + + #Early return + if 'HydrologyPrescribeAnalysis' not in analyses: + return md + + if np.size(md.initialization.hydraulic_potential) > 0: + warnings.warn('WARN: md.initialization.hydraulic_potential is defined.
However, this is not used for "hydrologyprescribe" model.') + + md = checkfield(md, 'fieldname', 'hydrology.head','size',[md.mesh.numberofvertices,1],'NaN',1,'Inf',1) + return md + # }}} + + def marshall(self, prefix, md, fid): # {{{ + yts = md.constants.yts + + WriteData(fid, prefix, 'name', 'md.hydrology.model', 'data', 10, 'format', 'Integer') + WriteData(fid, prefix, 'object', self, 'class', 'hydrology', 'fieldname', 'head', 'format', 'DoubleMat', 'mattype', 1) + + # Process requested outputs + outputs = self.requested_outputs + indices = [i for i, x in enumerate(outputs) if x == 'default'] + if len(indices) > 0: + outputscopy = outputs[0:indices[0]] + self.defaultoutputs(md) + outputs[indices[0] + 1:] + outputs = outputscopy + WriteData(fid, prefix, 'data', outputs, 'name', 'md.hydrology.requested_outputs', 'format', 'StringArray') + # }}} + diff --git a/src/m/classes/hydrologyshakti.m b/src/m/classes/hydrologyshakti.m index ddbffe201..f6a036353 100644 --- a/src/m/classes/hydrologyshakti.m +++ b/src/m/classes/hydrologyshakti.m @@ -18,6 +18,7 @@ neumannflux = NaN; relaxation = 0; storage = NaN; + melt_flag = 0; requested_outputs = {}; end methods @@ -52,6 +53,7 @@ self.gap_height_max = 1.; self.relaxation=1; self.storage=0; + self.melt_flag=0; self.requested_outputs={'default'}; end % }}} function md = checkconsistency(self,md,solution,analyses) % {{{ @@ -74,6 +76,7 @@ md = checkfield(md,'fieldname','hydrology.spchead','Inf',1,'timeseries',1); md = checkfield(md,'fieldname','hydrology.relaxation','>=',0); md = checkfield(md,'fieldname','hydrology.storage','>=',0,'size','universal','NaN',1,'Inf',1); + md = checkfield(md,'fieldname','hydrology.melt_flag','numel',[1],'NaN',1,'Inf',1,'values',[0,1]); md = checkfield(md,'fieldname','hydrology.requested_outputs','stringrow',1); end % }}} function disp(self) % {{{ @@ -91,6 +94,7 @@ function disp(self) % {{{ fielddisplay(self,'spchead','water head constraints (NaN means no constraint) (m)');
fielddisplay(self,'relaxation','under-relaxation coefficient for nonlinear iteration'); fielddisplay(self,'storage','englacial storage coefficient (void ratio)'); + fielddisplay(self,'melt_flag','User specified basal melt? 0: no (default, Sommers et al. 2018), 1: use md.basalforcings.grounded_melting_rate'); fielddisplay(self,'requested_outputs','additional outputs requested'); end % }}} function marshall(self,prefix,md,fid) % {{{ @@ -116,6 +120,7 @@ function marshall(self,prefix,md,fid) % {{{ mattype=2; tsl = md.mesh.numberofelements; end WriteData(fid,prefix,'object',self,'class','hydrology','fieldname','storage','format','DoubleMat','mattype',mattype,'timeserieslength',tsl+1,'yts',md.constants.yts); + WriteData(fid,prefix,'object',self,'class','hydrology','fieldname','melt_flag','format','Integer'); outputs = self.requested_outputs; pos = find(ismember(outputs,'default')); diff --git a/src/m/classes/hydrologyshakti.py b/src/m/classes/hydrologyshakti.py index 4a0629f43..d93b2e9a6 100644 --- a/src/m/classes/hydrologyshakti.py +++ b/src/m/classes/hydrologyshakti.py @@ -25,12 +25,14 @@ def __init__(self): # {{{ self.neumannflux = np.nan self.relaxation = 0 self.storage = np.nan + self.melt_flag = 0 self.requested_outputs = [] - #set defaults + #set defaults self.setdefaultparameters() # }}} + def __repr__(self): # {{{ s = ' hydrologyshakti solution parameters:' s += '{}\n'.format(fielddisplay(self, 'head', 'subglacial hydrology water head (m)')) @@ -46,6 +48,7 @@ def __repr__(self): # {{{ s += '{}\n'.format(fielddisplay(self, 'spchead', 'water head constraints (NaN means no constraint) (m)')) s += '{}\n'.format(fielddisplay(self, 'relaxation', 'under - relaxation coefficient for nonlinear iteration')) s += '{}\n'.format(fielddisplay(self, 'storage', 'englacial storage coefficient (void ratio)')) + s += '{}\n'.format(fielddisplay(self,'melt_flag','User specified basal melt? 0: no (default, Sommers et al. 
2018), 1: use md.basalforcings.grounded_melting_rate')) s += '{}\n'.format(fielddisplay(self, 'requested_outputs', 'additional outputs requested')) return s # }}} @@ -60,6 +63,7 @@ def setdefaultparameters(self): # {{{ self.gap_height_max = 1.; self.relaxation = 1 self.storage = 0 + self.melt_flag = 0 self.requested_outputs = ['default'] return self # }}} @@ -87,6 +91,7 @@ def checkconsistency(self, md, solution, analyses): # {{{ md = checkfield(md, 'fieldname', 'hydrology.spchead', 'Inf', 1, 'timeseries', 1) md = checkfield(md, 'fieldname', 'hydrology.relaxation', '>=', 0) md = checkfield(md, 'fieldname', 'hydrology.storage', '>=', 0, 'size', 'universal', 'NaN', 1, 'Inf', 1) + md = checkfield(md, 'fieldname', 'hydrology.melt_flag', 'numel', 1, 'NaN', 1, 'Inf', 1, 'values', [0,1]) md = checkfield(md, 'fieldname', 'hydrology.requested_outputs', 'stringrow', 1) return md # }}} @@ -115,6 +120,7 @@ def marshall(self, prefix, md, fid): # {{{ mattype = 2 tsl = md.mesh.numberofelements WriteData(fid, prefix, 'object', self, 'class', 'hydrology', 'fieldname', 'storage', 'format', 'DoubleMat', 'mattype', mattype, 'timeserieslength', tsl + 1, 'yts', md.constants.yts) + WriteData(fid, prefix, 'object', self, 'class', 'hydrology', 'fieldname', 'melt_flag', 'format', 'Integer') # Process requested outputs outputs = self.requested_outputs diff --git a/src/m/classes/model.m b/src/m/classes/model.m index 8623f0eaf..432cf3c3a 100644 --- a/src/m/classes/model.m +++ b/src/m/classes/model.m @@ -403,6 +403,14 @@ function disp(self) % {{{ md.smb.mass_balance=project2d(md,md.smb.mass_balance,md.mesh.numberoflayers); elseif isa(md.smb,'SMBhenning') & ~isnan(md.smb.smbref), md.smb.smbref=project2d(md,md.smb.smbref,md.mesh.numberoflayers); + elseif isa(md.smb, 'SMBpddSicopolis') || isa(md.smb, 'SMBpddFast'); + md.smb.s0p = project2d(md, md.smb.s0p, md.mesh.numberoflayers); + md.smb.s0t = project2d(md, md.smb.s0t, md.mesh.numberoflayers); + md.smb.smb_corr = project2d(md, md.smb.smb_corr, 
md.mesh.numberoflayers); + md.smb.monthlytemperatures = project2d(md, md.smb.monthlytemperatures, md.mesh.numberoflayers); + md.smb.temperature_anomaly = project2d(md, md.smb.temperature_anomaly, md.mesh.numberoflayers); + md.smb.precipitation = project2d(md, md.smb.precipitation, md.mesh.numberoflayers); + md.smb.precipitation_anomaly = project2d(md, md.smb.precipitation_anomaly, md.mesh.numberoflayers); end %results @@ -443,7 +451,6 @@ function disp(self) % {{{ md.initialization.debris=project2d(md,md.initialization.debris,1); end - %elementstype if ~isnan(md.flowequation.element_equation) md.flowequation.element_equation=project2d(md,md.flowequation.element_equation,1); @@ -500,6 +507,12 @@ function disp(self) % {{{ if isprop(md.basalforcings,'floatingice_melting_rate') & ~isnan(md.basalforcings.floatingice_melting_rate), md.basalforcings.floatingice_melting_rate=project2d(md,md.basalforcings.floatingice_melting_rate,1); end + if isprop(md.basalforcings,'deepwater_melting_rate') + md.basalforcings.deepwater_melting_rate = project2d(md,md.basalforcings.deepwater_melting_rate,1); + md.basalforcings.deepwater_elevation = project2d(md,md.basalforcings.deepwater_elevation,1); + md.basalforcings.upperwater_melting_rate = project2d(md,md.basalforcings.upperwater_melting_rate,1); + md.basalforcings.upperwater_elevation = project2d(md,md.basalforcings.upperwater_elevation,1); + end md.basalforcings.geothermalflux=project2d(md,md.basalforcings.geothermalflux,1); %bedrock only gets geothermal flux if isprop(md.calving,'coeff') & ~isnan(md.calving.coeff), @@ -1824,7 +1837,7 @@ function xylim(self) % {{{ save('id','md'); %Now, upload the file: - issmscpout(md.settings.upload_server,md.settings.upload_path,md.settings.upload_login,md.settings.upload_port,{id},1); + issmscpout(md.settings.upload_server,md.settings.upload_path,md.settings.upload_login,md.settings.upload_port,{id}); %Now, empty this model of everything except settings, and record name of file we just uploaded! 
settings_back=md.settings; @@ -1834,7 +1847,6 @@ function xylim(self) % {{{ %get locally rid of file that was uploaded delete(id); - end % }}} function md=download(md) % {{{ diff --git a/src/m/classes/organizer.m b/src/m/classes/organizer.m index 1aec02944..5d52e3e48 100644 --- a/src/m/classes/organizer.m +++ b/src/m/classes/organizer.m @@ -40,8 +40,15 @@ %Get repository repository=getfieldvalue(options,'repository','./'); - if ~ischar(repository), error('repository is not a string'); end - if exist(repository,'dir')~=7, error(['Directory ' repository ' not found']), end + if ~ischar(repository) error('repository is not a string'); end + if exist(repository,'dir')~=7 + choice=input(['Directory ' repository ' not found, would you like to create it (y/n)\n'],'s'); + if ~strcmp(choice,'y') + error('interrupting process') + else + mkdir(repository); + end + end org.repository=repository; %Color @@ -230,24 +237,37 @@ function loaddatanoprefix(org,string),% {{{ function savemodel(org,md) % {{{ %check - if (org.currentstep==0), error('Cannot save model because organizer (org) is empty! Make sure you did not skip any perform call'); end - if (org.currentstep>length(org.steps)), error('Cannot save model because organizer (org) is not up to date!'); end + if (org.currentstep==0) + error('Cannot save model because organizer (org) is empty! 
Make sure you did not skip any perform call'); + end + if (org.currentstep>length(org.steps)) + error('Cannot save model because organizer (org) is not up to date!'); + end + if ~isa(md,'model') & ~isa(md,'sealevelmodel') + warning('second argument is not a model'); + end + + %File name to save the model name=[org.repository '/' org.prefix org.steps(org.currentstep).string ]; - disp(['saving model as: ' name]); %Skip if requested - if org.skipio, + if org.skipio disp(['WARNING: Skipping saving ' name]); return; end - %check that md is a model - if ~isa(md,'model') & ~isa(md,'sealevelmodel'), warning('second argument is not a model'); end - if (org.currentstep>length(org.steps)), error(['organizer error message: element with id ' num2str(org.currentstep) ' not found']); end + % which format needs to be used? + % v7.3: zlib compression (HDF5, slower read/write) >2Gb + % v6 : No compression → fastest read/write, 2Gb limit + mdinfo = whos('md'); %assign first: MATLAB does not allow dot-indexing a function result + mdSizeGB = mdinfo.bytes / (1024^3); + if mdSizeGB > 2 + disp(['saving model (' num2str(mdSizeGB, '%.2f') ' GB → v7.3) as ' name]); + save(name,'md','-v7.3'); + else + disp(['saving model (' num2str(mdSizeGB, '%.2f') ' GB → v6) as ' name]); + save(name,'md'); + end - %save model - save(name,'md','-v7.3'); end%}}} function savedata(org,varargin) % {{{ diff --git a/src/m/classes/solidearth.js b/src/m/classes/solidearth.js index 431eb0267..0af6cbbc5 100644 --- a/src/m/classes/solidearth.js +++ b/src/m/classes/solidearth.js @@ -72,7 +72,7 @@ class solidearth {//{{{ } //}}} defaultoutputs(md) {//{{{ - return ['Sealevel']; + return ['Sealevel','Bed']; } //}}} marshall(md, prefix, fid) {//{{{ diff --git a/src/m/classes/solidearth.m b/src/m/classes/solidearth.m index b0b00847b..eb4bc5b9a 100644 --- a/src/m/classes/solidearth.m +++ b/src/m/classes/solidearth.m @@ -105,7 +105,7 @@ function disp(self) % {{{ end % }}} function list=defaultoutputs(self,md) % {{{ - list = {'Sealevel'}; + list = {'Sealevel', 'Bed'}; end % }}} function
marshall(self,prefix,md,fid) % {{{ diff --git a/src/m/classes/solidearth.py b/src/m/classes/solidearth.py index d1661995c..134f33c96 100644 --- a/src/m/classes/solidearth.py +++ b/src/m/classes/solidearth.py @@ -101,7 +101,7 @@ def checkconsistency(self, md, solution, analyses): # {{{ # }}} def defaultoutputs(self, md): # {{{ - return ['Sealevel'] + return ['Sealevel', 'Bed'] # }}} def marshall(self, prefix, md, fid): # {{{ diff --git a/src/m/consistency/checkfield.m b/src/m/consistency/checkfield.m index ffcc6ed1b..0fc73181c 100644 --- a/src/m/consistency/checkfield.m +++ b/src/m/consistency/checkfield.m @@ -19,6 +19,7 @@ % - numel: list of acceptable number of elements % - cell: 1 if check that is cell % - empty: 1 if check that non empty +% - filepath: 1 if check file exists % - message: overloaded error message % % Usage: @@ -308,3 +309,16 @@ ['field ''' fieldname ''' columns must not contain duplicate timesteps'])); end end + +%Check filepath +if getfieldvalue(options,'filepath',0) + if ~ischar(field) + md = checkmessage(md,getfieldvalue(options,'message',... + ['field ''' fieldname ''' should be a file path (char)'])); + else + if ~exist(field, 'file') + md = checkmessage(md,getfieldvalue(options,'message',... 
+ ['field ''' fieldname ''' file does not exist'])); + end + end +end diff --git a/src/m/consistency/ismodelselfconsistent.m b/src/m/consistency/ismodelselfconsistent.m index 0cce9c071..ff54c8caa 100644 --- a/src/m/consistency/ismodelselfconsistent.m +++ b/src/m/consistency/ismodelselfconsistent.m @@ -72,11 +72,11 @@ function ismodelselfconsistent(md) elseif strcmp(solutiontype,'EsaSolution') analyses={'EsaAnalysis'}; elseif strcmp(solutiontype,'TransientSolution') - analyses={'StressbalanceAnalysis','StressbalanceVerticalAnalysis','StressbalanceSIAAnalysis','L2ProjectionBaseAnalysis','ThermalAnalysis','MeltingAnalysis','EnthalpyAnalysis','MasstransportAnalysis','OceantransportAnalysis','HydrologyShaktiAnalysis','HydrologyGladsAnalysis','HydrologyShreveAnalysis','HydrologyTwsAnalysis','HydrologyDCInefficientAnalysis','HydrologyDCEfficientAnalysis','SealevelchangeAnalysis','AgeAnalysis','HydrologyArmapwAnalysis','AgeAnalysis','DebrisAnalysis'}; + analyses={'StressbalanceAnalysis','StressbalanceVerticalAnalysis','StressbalanceSIAAnalysis','L2ProjectionBaseAnalysis','ThermalAnalysis','MeltingAnalysis','EnthalpyAnalysis','MasstransportAnalysis','OceantransportAnalysis','HydrologyShaktiAnalysis','HydrologyGladsAnalysis','HydrologyShreveAnalysis','HydrologyTwsAnalysis','HydrologyDCInefficientAnalysis','HydrologyDCEfficientAnalysis','HydrologyPrescribeAnalysis','SealevelchangeAnalysis','AgeAnalysis','HydrologyArmapwAnalysis','AgeAnalysis','DebrisAnalysis'}; elseif strcmp(solutiontype,'SealevelchangeSolution') analyses={'SealevelchangeAnalysis'}; elseif strcmp(solutiontype,'HydrologySolution') - analyses={'L2ProjectionBaseAnalysis','HydrologyShreveAnalysis','HydrologyDCInefficientAnalysis','HydrologyDCEfficientAnalysis','HydrologyGladsAnalysis','HydrologyShaktiAnalysis','HydrologyTwsAnalysis','HydrologyArmapwAnalysis'}; + 
analyses={'L2ProjectionBaseAnalysis','HydrologyShreveAnalysis','HydrologyDCInefficientAnalysis','HydrologyDCEfficientAnalysis','HydrologyGladsAnalysis','HydrologyShaktiAnalysis','HydrologyTwsAnalysis','HydrologyArmapwAnalysis','HydrologyPrescribeAnalysis'};
 elseif strcmp(solutiontype,'DamageEvolutionSolution')
 	analyses={'DamageEvolutionAnalysis'};
 elseif strcmp(solutiontype,'SamplingSolution')
diff --git a/src/m/contrib/inwoo/ismip7/interpISMIP7AntarcticaOcn.m b/src/m/contrib/inwoo/ismip7/interpISMIP7AntarcticaOcn.m
new file mode 100644
index 000000000..6ca9ce1a2
--- /dev/null
+++ b/src/m/contrib/inwoo/ismip7/interpISMIP7AntarcticaOcn.m
@@ -0,0 +1,184 @@
+function basalforcings = interpISMIP7AntarcticaOcn(md, modelname, scenario, start_end)
+	%interpISMIP7AntarcticaOcn - interpolate chosen ISMIP7 ocean forcing to model
+	%
+	%   Input:
+	%     - md (model object)
+	%     - modelname (string): name of the climate model and scenario
+	%     - scenario (string): scenario (i.e., ssp126, ssp585)
+	%     - start_end (optional int array): two entry array of [start_year end_year]
+	%
+	%   Output:
+	%     - basalforcings: prepared to be input directly into md.basalforcings
+	%       time series from 1995-2100
+	%
+	%   Examples:
+	%      # Get observation dataset
+	%      md.basalforcings = interpISMIP7AntarcticaOcn(md,'obs')
+	%      md.basalforcings = interpISMIP7AntarcticaOcn(md,'miroc-esm-chem_rcp8.5');
+	%      md.basalforcings = interpISMIP7AntarcticaOcn(md,'miroc-esm-chem_rcp8.5', [2007 2050]);
+
+	% Parse inputs
+	if nargin==2 % for observation
+		scenario = '';
+		start_time = 1996;
+		end_time = 1996;
+	elseif nargin==3
+		start_time = 1995;
+		end_time = 2100;
+	elseif nargin==4
+		start_time = start_end(1);
+		end_time = start_end(2);
+	else
+		error('not supported');
+	end
+
+	% Find appropriate directory
+	% NOTE: data directory for ISMIP7 follows Globus repository...
+	% Globus repository for ISMIP7.
+	% https://app.globus.org/file-manager?origin_id=ccc9bbd2-4091-4e35-addd-eeb639cf5332&origin_path=%2FISMIP7%2F
+	switch oshostname()
+		case {'totten'}
+			error('set default machine settings');
+		case {'amundsen.thayer.dartmouth.edu'}
+			error('set default machine settings');
+		case {'simba00'}
+			datadir='/data2/msmg/DATA/ISMIP7/AIS/';
+		otherwise
+			error('machine not supported yet, please provide your own path');
+	end
+
+	% Searching forcing files
+	[tf_file, so_file] = search_forcing_file(datadir, modelname, scenario);
+
+	% Load TF and salinity data
+	disp(' == loading TF');
+	x_n = double(ncread(tf_file,'x'));
+	y_n = double(ncread(tf_file,'y'));
+	% dimension (x, y, z, time) for tf and so files.
+	tf_data = double(ncread(tf_file,'tf'));
+	so_data = double(ncread(so_file,'tf')); % FIXME: really "tf" variable in "so" (salinity)?
+	z_data = double(ncread(tf_file,'z'));
+
+	%Build tf and salinity cell array
+	tf = cell(1,1,size(tf_data,3));
+	so = cell(1,1,size(so_data,3));
+	if strcmp(modelname,'obs')
+		start_idx = 1;
+		final_idx = 1;
+		time = 1996; % set default starting time for observation.
+	else
+		start_idx = start_time - 1994;
+		final_idx = end_time - 1994;
+		time = start_time:end_time;
+	end
+	for i=1:size(tf_data,3) %Iterate over depths
+		disp([' == Interpolating over depth ' num2str(i) '/' num2str(size(tf_data,3))]);
+
+		temp_matrix_tf=[];
+		temp_matrix_so=[];
+		for ii=start_idx:final_idx %Iterate over time steps
+			%temp_tfdata=InterpFromGridToMesh(x_n,y_n,tf_data(:,:,i,ii)',md.mesh.x,md.mesh.y,0);
+			temp_data=InterpFromGrid(x_n,y_n,tf_data(:,:,i,ii)',md.mesh.x,md.mesh.y);
+			temp_matrix_tf = [temp_matrix_tf temp_data];
+
+			temp_data=InterpFromGrid(x_n,y_n,so_data(:,:,i,ii)',md.mesh.x,md.mesh.y);
+			temp_matrix_so = [temp_matrix_so temp_data];
+		end
+		tf{:,:,i} = [temp_matrix_tf; time];
+		so{:,:,i} = [temp_matrix_so; time];
+	end
+
+	clear temp_matrix_tf temp_matrix_so;
+
+	% TODO:
+	% Wait for calibrated dataset
+	%load Delta and gamma data
+	%deltatnc_median = fullfile(datadir,'parameterizations/coeff_gamma0_DeltaT_quadratic_non_local_median.nc');
+	basin_datanc = fullfile(datadir,'obs/ocean/IMBIE-basins/v3/IMBIE-basins_AIS_obs_ocean_v3.nc');
+	%deltaT_median = double(ncread(deltatnc_median,'deltaT_basin'));
+	%gamma0_median = double(ncread(deltatnc_median,'gamma0'));
+	basinid_data = double(ncread(basin_datanc,'basinNumber'));
+
+	disp(' == Interpolating basin Id');
+	num_basins = length(unique(basinid_data));
+	%deltat_median = NaN(1,length(unique(basinid_data)));
+
+	%for i=0:num_basins-1
+	%	pos = find(basinid_data==i);
+	%	deltat_temp = deltaT_median(pos);
+	%	deltat_temp = deltat_temp(1);
+	%	deltat_median(i+1) = deltat_temp;
+	%end
+
+	%Deal with basins ID
+	x_el = mean(md.mesh.x(md.mesh.elements),2);
+	y_el = mean(md.mesh.y(md.mesh.elements),2);
+	basinid = InterpFromGrid(x_n,y_n,basinid_data',x_el, y_el, 'nearest')+1;
+
+	%Set ISMIP7 basal melt rate parameters
+	basalforcings = basalforcingsismip7(md.basalforcings);
+	basalforcings = initialize(basalforcings,md);
+	basalforcings.basin_id = basinid;
+	basalforcings.num_basins = num_basins;
+ %basalforcings.delta_t = deltat_median; + basalforcings.tf_depths = z_data'; + %basalforcings.gamma_0 = gamma0_median; + basalforcings.tf = tf; + basalforcings.salinity = so; + + disp(['Info: forcings cover ' num2str(start_time),' to ', num2str(end_time)]); +end + +function [tf_file, so_file] = search_forcing_file(datadir, modelname, scenario) + %{ + %Explain + %------- + % Return specific file names... + % + %Example + %------- + %.. code-block:: python + % [tf_file, so_file] = search_filenames(datadir, 'cesm2-waccm', 'ssp585') + % + %Parameters + %---------- + %datadir: str + % + %modelname: str + % + %scenario: str + % + %Returns + %------- + %tf_file, so_file: str + % thermal (tf_file) and salinity (so_file) forcing files, respectively. + %} + + modelname = lower(modelname); + switch modelname + case 'obs' + tf_file = fullfile(datadir,'obs/ocean/climatology/zhou_annual_06_nov/tf/v3/tf_AIS_obs_ocean_climatology_zhou_annual_06_nov_v3_1972-2024.nc'); + so_file = fullfile(datadir,'obs/ocean/climatology/zhou_annual_06_nov/so/v3/so_AIS_obs_ocean_climatology_zhou_annual_06_nov_v3_1972-2024.nc'); + case 'cesm2-waccm' + tf_file = ''; + so_file = ''; + otherwise + error('Error: not implemented yet.'); + end + + assert(exist(tf_file,'file')==2, ['Error: we cannot find filename: ' tf_file]); + assert(exist(so_file,'file')==2, ['Error: we cannot find filename: ' so_file]); +end + +function model_time_mapping(modelname, scenario, time_end) + modelname = upper(modelname); + switch modelname + case 'CESM2-WACCM' + historical=[[1850, 1859],... + [1860, 1869],... 
+ [1870, 1879]]; + otherwise + error('Error: not implemented yet.'); + end +end diff --git a/src/m/contrib/inwoo/ismip7/interpISMIP7AntarcticaOcn.py b/src/m/contrib/inwoo/ismip7/interpISMIP7AntarcticaOcn.py new file mode 100644 index 000000000..9c8222088 --- /dev/null +++ b/src/m/contrib/inwoo/ismip7/interpISMIP7AntarcticaOcn.py @@ -0,0 +1,196 @@ +#!/usr/bin/env python3 +import numpy as np +import os, sys, platform +import socket +try: + import netCDF4 +except: + raise Exception('Error: netCDF4 is not installed. Install ''netCDF4'' module.') +try: + import scipy + import scipy.interpolate +except: + raise Exception('Error: scipy is not installed. Install ''scipy'' module.') + +from basalforcingsismip7 import basalforcingsismip7 +from InterpFromGridToMesh import InterpFromGridToMesh + +def interpISMIP7AntarcticaOcn(*args): + ''' + interpISMIP7AntarcticaOcn - interpolate chosen ISMIP7 ocean forcing to model + + Input: + - md (model object) + - modelname (string): name of the climate model + - scenario (string): scenario (e.g., ssp126, ssp370, ssp585) + - start_end (optional int array): two array of [start_year, end_year] + + Output: + - basalforcings: prepared to be input directly into md.basalforcings time series from 1995-2100. + + Example + # Get observation dataset + md.basalforcings = interpISMIP7AntarcticaOcn(md,'obs') + + # GCM forcings + md.basalforcings = interpISMIP7AntarcticaOcn(md,'cesm2-waccm','ssp585',[1995, 2100]) + ''' + + # Parse inputs + if len(args) == 2: + md, modelname = args + scenario = '' + start_time= 1996 + end_time = 1996 + elif len(args) == 3: + md, modelname, scenario = args + start_time = 1995 + end_time = 2100 + elif len(args) == 4: + md, modelname, scenario, start_end = args + else: + raise Exception('not supported.') + + # Find appropriate directory + # NOTE: data directory for ISMIP7 follows Globus repository... + # Globus repository for ISMIP7. 
+ # https://app.globus.org/file-manager?origin_id=ccc9bbd2-4091-4e35-addd-eeb639cf5332&origin_path=#2FISMIP7#2F + hostname = socket.gethostname().lower().replace('-','') + if hostname == 'totten': + raise Exception('set default machine settings') + elif hostname == 'amundsen.thayer.dartmouth.edu': + raise Exception('set default machine settings') + elif hostname == 'simba00': + datadir='/data2/msmg/DATA/ISMIP7/AIS/' + else: + raise Exception('machine not supported yet, please provide your own path') + + # Search forcing files + tf_file, so_file = search_forcing_file(datadir, modelname, scenario) + + # Load TF and salinity data + nc_tf = netCDF4.Dataset(tf_file,'r') + nc_so = netCDF4.Dataset(so_file,'r') + + x_n = nc_tf['x'][:] + y_n = nc_tf['y'][:] + # Python: dimension (time, z, y, x) for tf and so file + tf_data = nc_tf['tf'][:] + so_data = nc_so['tf'][:] # FIXME: really "tf" variable in "so" (salinity) ? + z_data = nc_tf['z'][:] + if modelname == 'obs': + #NOTE: observation dataset contains (z, y, x). 
The observation dataset therefore needs an additional time axis.
+        tf_data = tf_data[np.newaxis,:,:,:]
+        so_data = so_data[np.newaxis,:,:,:]
+
+    nc_tf.close()
+    nc_so.close()
+    del nc_tf, nc_so
+
+    # Build tf and salinity array
+    tf = []
+    so = []
+    if modelname == 'obs':
+        start_idx = 0
+        final_idx = 1
+        time = [[1996]]
+    else:
+        start_idx = start_time - 1995
+        final_idx = end_time - 1994
+        time = np.arange(start_time, end_time+1).reshape(1,-1)
+
+    for i in range(len(z_data)):
+        print(' == Interpolating over depth ' + str(i+1) + '/' + str(len(z_data)))
+
+        temp_matrix_tf=[]
+        temp_matrix_so=[]
+        for ii in range(start_idx,final_idx):
+            temp_data=InterpFromGridToMesh(x_n,y_n,tf_data[ii,i,:,:],md.mesh.x,md.mesh.y,np.nan)
+            temp_data=np.reshape(temp_data,(-1,1))
+            temp_matrix_tf.append(temp_data[:])
+
+            temp_data=InterpFromGridToMesh(x_n,y_n,so_data[ii,i,:,:],md.mesh.x,md.mesh.y,np.nan)
+            temp_data=np.reshape(temp_data,(-1,1))
+            temp_matrix_so.append(temp_data[:])
+
+        temp_matrix_tf = np.concatenate(temp_matrix_tf,axis=1)
+        temp_matrix_so = np.concatenate(temp_matrix_so,axis=1)
+
+        tf.append(np.concatenate((temp_matrix_tf,time),axis=0))
+        so.append(np.concatenate((temp_matrix_so,time),axis=0))
+
+    del temp_matrix_tf, temp_matrix_so
+
+    # TODO:
+    # Wait for calibrated dataset
+    # load Delta and gamma data
+    basin_datanc = netCDF4.Dataset(os.path.join(datadir,'obs/ocean/IMBIE-basins/v3/IMBIE-basins_AIS_obs_ocean_v3.nc'),'r')
+    basinid_data = basin_datanc['basinNumber'][:]
+
+    print(' == Interpolating basin Id')
+    num_basins = len(np.unique(basinid_data))
+
+    # Deal with basins ID
+    x_el = np.mean(md.mesh.x[md.mesh.elements-1],axis=1)
+    y_el = np.mean(md.mesh.y[md.mesh.elements-1],axis=1)
+    # Use interpolator in "scipy.interpolate"
+    interpolator = scipy.interpolate.RegularGridInterpolator((x_n,y_n),np.transpose(basinid_data),
+                                                             method='nearest',
+                                                             bounds_error=False,fill_value=np.nan)
+    basinid = interpolator(np.vstack((x_el,y_el)).T)
+    del interpolator
+
+    # Set ISMIP7 basal melt rate parameters
+    basalforcings =
basalforcingsismip7(md.basalforcings)
+    basalforcings = basalforcings.initialize(md)
+    basalforcings.basin_id = basinid
+    basalforcings.num_basins = num_basins
+    basalforcings.tf_depths = np.reshape(z_data,(1,-1))
+    basalforcings.tf = tf
+    basalforcings.salinity = so
+
+    print('Info: forcings cover ' + str(start_time) + ' to ' + str(end_time))
+
+    return basalforcings
+
+def search_forcing_file(datadir, modelname, scenario):
+    """
+    Explain
+    -------
+    Return specific file names.
+
+    Example
+    -------
+    .. code-block:: python
+       tf_file, so_file = search_forcing_file(datadir, 'cesm2-waccm', 'ssp585')
+
+    Parameters
+    ----------
+    datadir: str
+
+    modelname: str
+
+    scenario: str
+
+    Returns
+    -------
+    tf_file, so_file: str
+        thermal (tf_file) and salinity (so_file) forcing files, respectively.
+    """
+
+    assert isinstance(modelname,str)
+    modelname = modelname.lower()
+    if modelname == 'obs':
+        tf_file = os.path.join(datadir,'obs/ocean/climatology/zhou_annual_06_nov/tf/v3/tf_AIS_obs_ocean_climatology_zhou_annual_06_nov_v3_1972-2024.nc')
+        so_file = os.path.join(datadir,'obs/ocean/climatology/zhou_annual_06_nov/so/v3/so_AIS_obs_ocean_climatology_zhou_annual_06_nov_v3_1972-2024.nc')
+    elif modelname == 'cesm2-waccm':
+        raise Exception('Error: given %s is not supported yet.'%(modelname))
+    else:
+        raise Exception('Error: not implemented yet.')
+
+    assert os.path.isfile(tf_file), 'Error: We cannot find filename: ' + tf_file
+    assert os.path.isfile(so_file), 'Error: We cannot find filename: ' + so_file
+
+    return tf_file, so_file
diff --git a/src/m/contrib/morlighem/gslib/pkriging.m b/src/m/contrib/morlighem/gslib/pkriging.m
index b848f5e54..f65a17a9b 100644
--- a/src/m/contrib/morlighem/gslib/pkriging.m
+++ b/src/m/contrib/morlighem/gslib/pkriging.m
@@ -9,7 +9,7 @@
 options=removefield(options,'cluster',0);
 name = ['krig' num2str(feature('GetPid'))];
 
-if 1,
+if 1
 % ========================================= MARSHALL.m
================================================= disp(['marshalling file ' name '.bin']); fid=fopen([name '.bin'],'wb'); @@ -30,13 +30,15 @@ %Last, write "md.EOF" to make sure that the binary file is not corrupt WriteData(fid,'','name','md.EOF','data',true,'format','Boolean'); +%Fake md as a place holder +md=model; md.cluster=cluster; md.settings.waitonlock=Inf; md.private.runtimename=name;md.miscellaneous.name=name; + %Launch job on remote cluster -BuildKrigingQueueScript(cluster,name,'',1,0,0); %gather, valgrind, gprof +BuildKrigingQueueScript(cluster, md, [name '.queue']); UploadQueueJob(cluster,name,name,{[name '.bin'] [name '.queue']}) LaunchQueueJob(cluster,name,name,{[name '.bin'] [name '.queue']},'',0); %Call waitonlock -md=model; md.cluster=cluster; md.settings.waitonlock=Inf; md.private.runtimename=name;md.miscellaneous.name=name; waitonlock(md); %Download @@ -47,9 +49,7 @@ delete([name '.errlog']); delete([name '.outbin']); delete([name '.bin']); -if ~ispc(), - delete([name '.tar.gz']); -end +delete([name '.tar.gz']); %Process results B=structure.predictions; diff --git a/src/m/geometry/slope.m b/src/m/geometry/slope.m index d6a0e0ffd..76daf1f98 100644 --- a/src/m/geometry/slope.m +++ b/src/m/geometry/slope.m @@ -1,36 +1,45 @@ -function [sx,sy,s]=slope(md,surf) -%SLOPE - compute the surface slope +function [dfdx, dfdy, df] = slope(md, f) +%SLOPE - compute the gradient of any field % % Usage: -% [sx,sy,s]=slope(md) -% [sx,sy,s]=slope(md,md.results.TransientSolution(1).Surface) +% [dfdx, dfdy, df] = slope(md) +% [dfdx, dfdy, df] = slope(md, md.results.TransientSolution(1).Surface) +% df = slope(md, md.geometry.surface); +% +% where df = sqrt(dfdx^2 + dfdy^2) %load some variables (it is much faster if the variab;es are loaded from md once for all) -if dimension(md.mesh)==2, - numberofelements=md.mesh.numberofelements; - numberofnodes=md.mesh.numberofvertices; - index=md.mesh.elements; - x=md.mesh.x; y=md.mesh.y; +if dimension(md.mesh)==2 + numberofelements 
= md.mesh.numberofelements; + numberofnodes = md.mesh.numberofvertices; + index = md.mesh.elements; + x = md.mesh.x; + y = md.mesh.y; else - numberofelements=md.mesh.numberofelements2d; - numberofnodes=md.mesh.numberofvertices2d; - index=md.mesh.elements2d; - x=md.mesh.x2d; y=md.mesh.y2d; + numberofelements = md.mesh.numberofelements2d; + numberofnodes = md.mesh.numberofvertices2d; + index = md.mesh.elements2d; + x = md.mesh.x2d; + y = md.mesh.y2d; end -if nargin==1, - surf=md.geometry.surface; +if nargin==1 + f = md.geometry.surface; end + %compute nodal functions coefficients N(x,y)=alpha x + beta y + gamma [alpha beta]=GetNodalFunctionsCoeff(index,x,y); summation=[1;1;1]; -sx=(surf(index).*alpha)*summation; -sy=(surf(index).*beta)*summation; -s=sqrt(sx.^2+sy.^2); - -if dimension(md.mesh)==3, - sx=project3d(md,'vector',sx,'type','element'); - sy=project3d(md,'vector',sy,'type','element'); - s=sqrt(sx.^2+sy.^2); +dfdx = (f(index).*alpha)*summation; +dfdy = (f(index).*beta)*summation; +if dimension(md.mesh)==3 + dfdx = project3d(md,'vector',dfdx,'type','element'); + dfdy = project3d(md,'vector',dfdy,'type','element'); end + +%Compute magnitude +df = sqrt(dfdx.^2+dfdy.^2); + +%return magnitude only +if nargout==1; dfdx = df; end diff --git a/src/m/geometry/slope.py b/src/m/geometry/slope.py index 5f55825d7..96bfd5f99 100644 --- a/src/m/geometry/slope.py +++ b/src/m/geometry/slope.py @@ -2,14 +2,13 @@ from GetNodalFunctionsCoeff import GetNodalFunctionsCoeff from project3d import project3d - def slope(md, *args): """ - SLOPE - compute the surface slope + SLOPE - compute the gradient of any field Usage: - sx, sy, s = slope(md) - sx, sy, s = slope(md, md.results.TransientSolution(1).Surface) + dfdx, dfdy, ds = slope(md) + dfdx, dfdy, ds = slope(md, md.results.TransientSolution(1).Surface) """ #load some variables (it is much faster if the variables are loaded from md once for all) @@ -23,9 +22,9 @@ def slope(md, *args): y = md.mesh.y2d if len(args) == 0: - surf = 
md.geometry.surface + f = md.geometry.surface elif len(args) == 1: - surf = args[0] + f = args[0] else: raise RuntimeError("slope.py usage error") @@ -33,14 +32,13 @@ def slope(md, *args): alpha, beta = GetNodalFunctionsCoeff(index, x, y)[0:2] summation = np.array([[1], [1], [1]]) - sx = np.dot(surf[index - 1] * alpha, summation).reshape(-1, ) - sy = np.dot(surf[index - 1] * beta, summation).reshape(-1, ) - - s = np.sqrt(sx**2 + sy**2) - + dfdx = np.dot(f[index - 1] * alpha, summation).reshape(-1, ) + dfdy = np.dot(f[index - 1] * beta, summation).reshape(-1, ) if md.mesh.dimension() == 3: - sx = project3d(md, 'vector', sx, 'type', 'element') - sy = project3d(md, 'vector', sy, 'type', 'element') - s = np.sqrt(sx**2 + sy**2) + dfdx = project3d(md, 'vector', dfdx, 'type', 'element') + dfdy = project3d(md, 'vector', dfdy, 'type', 'element') + + #Compute magnitude + ds = np.sqrt(dfdx**2 + dfdy**2) - return (sx, sy, s) + return (dfdx, dfdy, ds) diff --git a/src/m/modeldata/interpAdusumilliIceShelfMelt.m b/src/m/modeldata/interpAdusumilliIceShelfMelt.m index 047d66414..830d2be7d 100644 --- a/src/m/modeldata/interpAdusumilliIceShelfMelt.m +++ b/src/m/modeldata/interpAdusumilliIceShelfMelt.m @@ -14,7 +14,7 @@ switch (oshostname()), case {'totten'} filename = '/totten_1/ModelData/Antarctica/Adusumilli2020IceShelfMelt/ANT_iceshelf_melt_rates_CS2_2010-2018_v0.h5'; - case {'thwaites','larsen','astrid'} + case {'thwaites','larsen','astrid','wilkins.jpl.nasa.gov'} filename = '/u/astrid-r1b/ModelData/Adusumilli2020IceShelfMelt/ANT_iceshelf_melt_rates_CS2_2010-2018_v0.h5'; otherwise error('hostname not supported yet'); diff --git a/src/m/modeldata/interpBamber2001.m b/src/m/modeldata/interpBamber2001.m index 315a96d7e..65892187b 100644 --- a/src/m/modeldata/interpBamber2001.m +++ b/src/m/modeldata/interpBamber2001.m @@ -1,7 +1,7 @@ function [bedout thicknessout] = interpBamber2001(X,Y), switch oshostname(), - case {'murdo','thwaites','astrid'} + case 
{'murdo','thwaites','astrid','wilkins.jpl.nasa.gov'} bamber2001bedpath ='/u/astrid-r1b/ModelData/BamberDEMGreenland5km/bedrock.mat'; bamber2001thxpath ='/u/astrid-r1b/ModelData/BamberDEMGreenland5km/thickness.mat'; case {'ronne'} diff --git a/src/m/modeldata/interpBedmachineGreenland.m b/src/m/modeldata/interpBedmachineGreenland.m index a7e201622..be7404e53 100644 --- a/src/m/modeldata/interpBedmachineGreenland.m +++ b/src/m/modeldata/interpBedmachineGreenland.m @@ -25,9 +25,7 @@ ncdate='2022-03-17'; ncdate='2022-05-18'; ncdate='2022-07-28'; - ncdate='v6.0'; - ncdate='v6.1'; - ncdate='v6.6'; + ncdate='v6.6'; %BedMachine v6 end if nargin<4 if strcmp(string,'mask') | strcmp(string,'source') diff --git a/src/m/modeldata/interpRignotIceShelfMelt.m b/src/m/modeldata/interpRignotIceShelfMelt.m index b94359fb3..056fda19d 100644 --- a/src/m/modeldata/interpRignotIceShelfMelt.m +++ b/src/m/modeldata/interpRignotIceShelfMelt.m @@ -11,7 +11,7 @@ rignotmelt='/totten_1/ModelData/Antarctica/RignotMeltingrate/Ant_MeltingRate.nc'; case {'amundsen.thayer.dartmouth.edu'} rignotmelt='/local/ModelData/AntarcticMeltRignot/Ant_MeltingRate.nc'; - case {'thwaites','larsen','murdo','astrid'} + case {'thwaites','larsen','murdo','astrid','wilkins.jpl.nasa.gov'} rignotmelt=['/u/astrid-r1b/ModelData/RignotAntarcticaMeltRates/Ant_MeltingRate.v2.nc']; otherwise error('hostname not supported yet'); diff --git a/src/m/modeldata/interpSeaRISE.m b/src/m/modeldata/interpSeaRISE.m index e03a7299e..c165debd5 100644 --- a/src/m/modeldata/interpSeaRISE.m +++ b/src/m/modeldata/interpSeaRISE.m @@ -53,7 +53,7 @@ elseif hemisphere==-1, searisenc='/home/ModelData/SeaRISE/Antarctica_5km_dev1.0.nc'; end - case {'thwaites','larsen','murdo','astrid'} + case {'thwaites','larsen','murdo','astrid','wilkins.jpl.nasa.gov'} if hemisphere==1, searisenc='/u/astrid-r1b/ModelData/SeaRISE/Greenland5km_v1.2/Greenland_5km_dev1.2.nc'; elseif hemisphere==-1, diff --git a/src/m/modeldata/interpStal2020.m 
b/src/m/modeldata/interpStal2020.m
index a3fc33643..343c2ddd4 100644
--- a/src/m/modeldata/interpStal2020.m
+++ b/src/m/modeldata/interpStal2020.m
@@ -7,7 +7,7 @@
 switch oshostname(),
 	case {'amundsen.thayer.dartmouth.edu'}
 		gtfpath='/local/ModelData/GeothermalFluxAntarcticaStal/aq1_01_20.nc';
-	case {'thwaites','larsen','murdo','astrid'}
+	case {'thwaites','larsen','murdo','astrid','wilkins.jpl.nasa.gov'}
 		gtfpath='/u/astrid-r1b/ModelData/StalGeothermalFlux2020/aq1_01_20.nc';
 	otherwise
 		error('machine not supported yet');
diff --git a/src/m/modules/InterpFromGrid.m b/src/m/modules/InterpFromGrid.m
new file mode 100644
index 000000000..9ce869e2f
--- /dev/null
+++ b/src/m/modules/InterpFromGrid.m
@@ -0,0 +1,24 @@
+function dataout = InterpFromGrid(x, y, data, x_interp,y_interp, method)
+%INTERPFROMGRID - Interpolation from a grid onto a list of points (faster than InterpFromGridToMesh)
+%
+%   Usage:
+%      dataout = InterpFromGrid(x, y, data, x_interp,y_interp)
+%      dataout = InterpFromGrid(x, y, data, x_interp,y_interp, method)
+%
+%      data: matrix holding the data to be interpolated onto the mesh
+%      x,y: coordinates of matrix data (x and y must be in increasing order)
+%      x_interp,y_interp: coordinates of the points onto which we interpolate
+%      method: interpolation method ('linear'/default, 'nearest', 'cubic')
+%
+%   Example:
+%      md.inversion.vx_obs=InterpFromGrid(x, y, vx, md.mesh.x,md.mesh.y);
+
+% Call mex module
+if nargin==5
+	dataout = InterpFromGrid_matlab(x, y, data, x_interp, y_interp);
+elseif nargin==6
+	dataout = InterpFromGrid_matlab(x, y, data, x_interp, y_interp, method);
+else
+	help InterpFromGrid
+	error('Wrong usage (see above)');
+end
diff --git a/src/m/os/issmscpin.m b/src/m/os/issmscpin.m
index ffc62c65e..aa2c2057d 100644
--- a/src/m/os/issmscpin.m
+++ b/src/m/os/issmscpin.m
@@ -46,13 +46,14 @@ function issmscpin(host, login, port, path, packages, bracketstyle)
 		[status]=system(['scp ' login '@' host ':' path '/' fileliststr ' ./']);
 		if status ~= 0
 			%List
expansion is a bashism. Try again with '-OT'. + disp('issmscp info: standard scp command did not work, trying again with -OT flag'); [status,cmdout]=system(['scp -OT ' login '@' host ':' path '/' fileliststr ' ./']); end end %check scp worked if status ~= 0 - error(['issmscpin error message: ' cmdout]) + error(['scp error message: ' cmdout]) end for i=1:numel(packages), if ~exist(['./' packages{i}]), diff --git a/src/m/os/issmscpout.m b/src/m/os/issmscpout.m index d3d62f1b0..008faab4c 100644 --- a/src/m/os/issmscpout.m +++ b/src/m/os/issmscpout.m @@ -1,8 +1,8 @@ -function issmscpout(host, path, login, port, packages, no_symlinks, bracketstyle) +function issmscpout(host, path, login, port, packages, bracketstyle) %ISSMSCPOUT send files to host % % usage: -% issmscpout(host,path,login,port,packages,no_symlinks,bracketstyle) +% issmscpout(host,path,login,port,packages,bracketstyle) % % bracketstyle: 1 - \{\} (escaped; default) % 2 - {} (not escaped) @@ -10,16 +10,12 @@ function issmscpout(host, path, login, port, packages, no_symlinks, bracketstyle %get hostname hostname=oshostname(); -%disable symbolic links? -if nargin<6 - no_symlinks=0; -end %does machine require escaped brackets? 
-if nargin<7 +if nargin<6 bracketstyle = 1; end -%if hostname and host are the same, do a simple copy or symlinks +%if hostname and host are the same, do a simple symlinks if strcmpi(host,hostname) %Process both paths and add \ if there are any white spaces @@ -27,46 +23,43 @@ function issmscpout(host, path, login, port, packages, no_symlinks, bracketstyle for i=1:numel(packages) system(['rm -rf ' path '/' packages{i} ]); - if no_symlinks - system(['cp ' packages{i} ' ' path]); - else - system(['ln -s ' here '/' packages{i} ' ' path]); - end + system(['ln -s ' here '/' packages{i} ' ' path]); end + return; +end %General case: this is not a local machine +if numel(packages)==1 + fileliststr=packages{1}; else - if numel(packages)==1 - fileliststr=packages{1}; - else - fileliststr='\{'; - for i=1:numel(packages)-1, - fileliststr=[fileliststr packages{i} ',']; - end - fileliststr=[fileliststr packages{end} '\}']; - - %remove backslashes if bracketstyle is 2 - if bracketstyle==2 - fileliststr = [fileliststr(2:end-2) fileliststr(end)]; - end + fileliststr='\{'; + for i=1:numel(packages)-1, + fileliststr=[fileliststr packages{i} ',']; end - if port - disp(['scp -P ' num2str(port) ' ' fileliststr ' ' login '@localhost:' path]) - [status]=system(['scp -P ' num2str(port) ' ' fileliststr ' ' login '@localhost:' path]); - if status~=0 - %List expansion is a bashism. Try again with '-OT'. - [status,cmdout]=system(['scp -OT -P ' num2str(port) ' ' fileliststr ' ' login '@localhost:' path]); - end - else - [status]=system(['scp ' fileliststr ' ' login '@' host ':' path]); - if status~=0 - %List expansion is a bashism. Try again with '-OT'. 
-		[status,cmdout]=system(['scp -OT ' fileliststr ' ' login '@' host ':' path]);
-	end
+	fileliststr=[fileliststr packages{end} '\}'];
+
+	%remove backslashes if bracketstyle is 2
+	if bracketstyle==2
+		fileliststr = [fileliststr(2:end-2) fileliststr(end)];
 	end
+end
-
-	%check scp worked
+if port
+	disp(['scp -P ' num2str(port) ' ' fileliststr ' ' login '@localhost:' path])
+	[status]=system(['scp -P ' num2str(port) ' ' fileliststr ' ' login '@localhost:' path]);
+	if status~=0
+		%List expansion is a bashism. Try again with '-OT'.
+		[status,cmdout]=system(['scp -OT -P ' num2str(port) ' ' fileliststr ' ' login '@localhost:' path]);
+	end
+else
+	[status]=system(['scp ' fileliststr ' ' login '@' host ':' path]);
 	if status~=0
-		error(['issmscpin error message: ' cmdout])
+		%List expansion is a bashism. Try again with '-OT'.
+		[status,cmdout]=system(['scp -OT ' fileliststr ' ' login '@' host ':' path]);
 	end
 end
+
+%check scp worked
+if status~=0
+	error(['issmscpout error message: ' cmdout])
+end
diff --git a/src/m/os/issmscpout.py b/src/m/os/issmscpout.py
index 53bf37591..dd67b60a7 100644
--- a/src/m/os/issmscpout.py
+++ b/src/m/os/issmscpout.py
@@ -3,11 +3,11 @@
 from MatlabFuncs import *
 
-def issmscpout(host, path, login, port, packages, no_symlinks=0, bracketstyle=1):
+def issmscpout(host, path, login, port, packages, bracketstyle=1):
     """issmscpout send files to host
 
     Usage:
-        issmscpout(host, path, login, port, packages, no_symlinks, bracketstyle)
+        issmscpout(host, path, login, port, packages, bracketstyle)
 
     bracketstyle:   1 - \\{\\} (escaped; default)
                     2 - {} (not escaped)
@@ -24,42 +24,39 @@ def issmscpout(host, path, login, port, packages, no_symlinks=0, bracketstyle=1)
                 os.remove(os.path.join(path, package))
             except OSError:
                 pass
-            if no_symlinks:
-                subprocess.call('cp {} {}'.format(package, path), shell=True)
-            else:
-                subprocess.call('ln -s {} {}'.format(os.path.join(here, package), path), shell=True)
+            subprocess.call('ln -s {} {}'.format(os.path.join(here, package),
path), shell=True) + return 0 # General case: this is not a local machine + if len(packages) == 1: + fileliststr = packages[0] else: - if len(packages) == 1: - fileliststr = packages[0] - else: - fileliststr = r'\{' - fileliststr += ','.join([package for package in packages]) - fileliststr += r'\}' - - # Remove backslashes if bracketstyle is 2 - if bracketstyle == 2: - fileliststr = fileliststr[1:-2] + fileliststr[-1] - if port: - subproc_cmd = 'scp -P {} {} {}@localhost:{}'.format(port, fileliststr, login, path) + fileliststr = r'\{' + fileliststr += ','.join([package for package in packages]) + fileliststr += r'\}' + + # Remove backslashes if bracketstyle is 2 + if bracketstyle == 2: + fileliststr = fileliststr[1:-2] + fileliststr[-1] + if port: + subproc_cmd = 'scp -P {} {} {}@localhost:{}'.format(port, fileliststr, login, path) + subproc = subprocess.Popen(subproc_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) + outs, errs = subproc.communicate() + if errs != '': + # List expansion is a bashism. Try again with '-OT'. + subproc_cmd = 'scp -OT -P {} {} {}@localhost:{}'.format(port, fileliststr, login, path) subproc = subprocess.Popen(subproc_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) outs, errs = subproc.communicate() - if errs != '': - # List expansion is a bashism. Try again with '-OT'. 
- subproc_cmd = 'scp -OT -P {} {} {}@localhost:{}'.format(port, fileliststr, login, path) - subproc = subprocess.Popen(subproc_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) - outs, errs = subproc.communicate() - else: - subproc_cmd = 'scp {} {}@{}:{}'.format(fileliststr, login, host, path) + else: + subproc_cmd = 'scp {} {}@{}:{}'.format(fileliststr, login, host, path) + subproc = subprocess.Popen(subproc_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) + outs, errs = subproc.communicate() + if errs != '': + # List expansion is a bashism. Try again with '-OT'. + subproc_cmd = 'scp -OT {} {}@{}:{}'.format(fileliststr, login, host, path) subproc = subprocess.Popen(subproc_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) outs, errs = subproc.communicate() - if errs != '': - # List expansion is a bashism. Try again with '-OT'. - subproc_cmd = 'scp -OT {} {}@{}:{}'.format(fileliststr, login, host, path) - subproc = subprocess.Popen(subproc_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) - outs, errs = subproc.communicate() - # Check scp worked - if errs != '': - raise OSError('issmscpin error message: {}'.format(errs)) + # Check scp worked + if errs != '': + raise OSError('issmscpout error message: {}'.format(errs)) diff --git a/src/m/parameterization/killberg.py b/src/m/parameterization/killberg.py index 8403874da..6811f8ead 100644 --- a/src/m/parameterization/killberg.py +++ b/src/m/parameterization/killberg.py @@ -24,6 +24,12 @@ def killberg_fast(md): ice_ls = md.mask.ice_levelset ocean_ls = md.mask.ocean_levelset + if np.ndim(ice_ls) == 2: # Levelsets may come in with shape (nV, 1) + ice_ls = np.ravel(ice_ls) + if np.ndim(ocean_ls) == 2: + ocean_ls = np.ravel(ocean_ls) + assert np.shape(ice_ls) == (nV,) + assert np.shape(ocean_ls) == (nV,) print("Looking for isolated patches of floating ice (icebergs)
[fast]") diff --git a/src/m/plot/colormaps/bluewhitered.m b/src/m/plot/colormaps/bluewhitered.m index a5a54f506..68f925e32 100644 --- a/src/m/plot/colormaps/bluewhitered.m +++ b/src/m/plot/colormaps/bluewhitered.m @@ -1,4 +1,4 @@ -function newmap = bluewhitered(m,CAXIS) +function newmap = bluewhitered(m,CAXIS,reverse) %BLUEWHITERED Blue, white, and red color map. % BLUEWHITERED(M) returns an M-by-3 matrix containing a blue to white % to red colormap, with white corresponding to the CAXIS value closest @@ -32,12 +32,22 @@ if nargin < 1 m = size(get(gcf,'colormap'),1); end +if nargin < 2 + reverse = 0; +end -bottom = [0 0 0.5]; -botmiddle = [0 0.5 1]; +if reverse + top = [0 0 0.5]; + topmiddle = [0 0.5 1]; + botmiddle = [1 0 0]; + bottom = [0.5 0 0]; +else + bottom = [0 0 0.5]; + botmiddle = [0 0.5 1]; + topmiddle = [1 0 0]; + top = [0.5 0 0]; +end middle = [1 1 1]; -topmiddle = [1 0 0]; -top = [0.5 0 0]; % Find middle if nargin < 2 diff --git a/src/m/plot/colormaps/vik.m b/src/m/plot/colormaps/vik.m new file mode 100644 index 000000000..c7f9d135f --- /dev/null +++ b/src/m/plot/colormaps/vik.m @@ -0,0 +1,270 @@ +function cmap = vik(n) +% VIK - colormap from https://www.fabiocrameri.ch/colourmaps/ +% +% Usage: +% cmap = vik(n) + +J=[... 
+ 0.0027 0.0696 0.3790 + 0.0035 0.0763 0.3831 + 0.0041 0.0830 0.3871 + 0.0048 0.0896 0.3912 + 0.0053 0.0960 0.3953 + 0.0058 0.1023 0.3993 + 0.0062 0.1086 0.4033 + 0.0066 0.1148 0.4074 + 0.0069 0.1210 0.4114 + 0.0071 0.1272 0.4153 + 0.0074 0.1333 0.4193 + 0.0075 0.1394 0.4233 + 0.0077 0.1455 0.4272 + 0.0078 0.1515 0.4312 + 0.0080 0.1575 0.4351 + 0.0081 0.1635 0.4390 + 0.0081 0.1695 0.4429 + 0.0082 0.1755 0.4468 + 0.0082 0.1815 0.4507 + 0.0083 0.1875 0.4545 + 0.0083 0.1935 0.4584 + 0.0084 0.1995 0.4623 + 0.0084 0.2055 0.4662 + 0.0084 0.2116 0.4701 + 0.0084 0.2176 0.4739 + 0.0085 0.2236 0.4778 + 0.0085 0.2297 0.4817 + 0.0085 0.2358 0.4856 + 0.0086 0.2419 0.4895 + 0.0087 0.2480 0.4935 + 0.0088 0.2542 0.4974 + 0.0089 0.2603 0.5014 + 0.0091 0.2665 0.5053 + 0.0093 0.2728 0.5093 + 0.0096 0.2790 0.5133 + 0.0100 0.2853 0.5174 + 0.0104 0.2916 0.5214 + 0.0112 0.2980 0.5255 + 0.0122 0.3043 0.5296 + 0.0132 0.3108 0.5337 + 0.0145 0.3172 0.5379 + 0.0161 0.3238 0.5421 + 0.0180 0.3303 0.5463 + 0.0204 0.3369 0.5506 + 0.0233 0.3436 0.5549 + 0.0268 0.3503 0.5592 + 0.0310 0.3571 0.5637 + 0.0361 0.3639 0.5681 + 0.0419 0.3708 0.5726 + 0.0481 0.3777 0.5772 + 0.0548 0.3848 0.5818 + 0.0620 0.3919 0.5865 + 0.0696 0.3990 0.5912 + 0.0773 0.4063 0.5960 + 0.0855 0.4136 0.6009 + 0.0942 0.4209 0.6058 + 0.1031 0.4284 0.6108 + 0.1123 0.4359 0.6158 + 0.1217 0.4434 0.6209 + 0.1315 0.4511 0.6260 + 0.1415 0.4588 0.6312 + 0.1517 0.4665 0.6364 + 0.1623 0.4743 0.6417 + 0.1730 0.4821 0.6470 + 0.1839 0.4900 0.6523 + 0.1950 0.4980 0.6577 + 0.2063 0.5059 0.6631 + 0.2177 0.5139 0.6685 + 0.2293 0.5219 0.6740 + 0.2410 0.5299 0.6794 + 0.2528 0.5380 0.6849 + 0.2647 0.5460 0.6904 + 0.2768 0.5540 0.6959 + 0.2889 0.5621 0.7013 + 0.3011 0.5701 0.7068 + 0.3134 0.5782 0.7123 + 0.3257 0.5862 0.7178 + 0.3381 0.5942 0.7232 + 0.3505 0.6022 0.7287 + 0.3629 0.6102 0.7341 + 0.3754 0.6182 0.7396 + 0.3879 0.6262 0.7450 + 0.4005 0.6341 0.7504 + 0.4130 0.6420 0.7558 + 0.4256 0.6499 0.7612 + 0.4381 0.6578 0.7666 + 0.4507 0.6657 
0.7719 + 0.4633 0.6735 0.7773 + 0.4758 0.6813 0.7826 + 0.4884 0.6891 0.7879 + 0.5010 0.6969 0.7932 + 0.5135 0.7046 0.7985 + 0.5261 0.7124 0.8038 + 0.5386 0.7201 0.8090 + 0.5512 0.7278 0.8143 + 0.5637 0.7355 0.8195 + 0.5762 0.7431 0.8248 + 0.5887 0.7508 0.8300 + 0.6012 0.7584 0.8352 + 0.6137 0.7661 0.8404 + 0.6262 0.7737 0.8456 + 0.6387 0.7813 0.8507 + 0.6512 0.7888 0.8559 + 0.6637 0.7964 0.8610 + 0.6761 0.8039 0.8661 + 0.6886 0.8114 0.8712 + 0.7010 0.8189 0.8763 + 0.7134 0.8264 0.8813 + 0.7258 0.8338 0.8862 + 0.7381 0.8412 0.8910 + 0.7504 0.8485 0.8958 + 0.7627 0.8557 0.9004 + 0.7748 0.8628 0.9049 + 0.7869 0.8698 0.9091 + 0.7988 0.8766 0.9132 + 0.8106 0.8832 0.9169 + 0.8221 0.8896 0.9203 + 0.8334 0.8957 0.9233 + 0.8444 0.9014 0.9259 + 0.8550 0.9068 0.9279 + 0.8652 0.9118 0.9293 + 0.8748 0.9162 0.9301 + 0.8840 0.9202 0.9302 + 0.8924 0.9235 0.9296 + 0.9002 0.9262 0.9281 + 0.9072 0.9282 0.9258 + 0.9134 0.9295 0.9227 + 0.9187 0.9301 0.9187 + 0.9232 0.9300 0.9139 + 0.9269 0.9291 0.9083 + 0.9296 0.9276 0.9019 + 0.9315 0.9253 0.8947 + 0.9326 0.9225 0.8869 + 0.9330 0.9190 0.8784 + 0.9326 0.9150 0.8694 + 0.9315 0.9106 0.8599 + 0.9299 0.9057 0.8500 + 0.9277 0.9004 0.8397 + 0.9251 0.8948 0.8291 + 0.9221 0.8889 0.8183 + 0.9187 0.8828 0.8073 + 0.9150 0.8765 0.7962 + 0.9112 0.8700 0.7849 + 0.9071 0.8635 0.7736 + 0.9029 0.8568 0.7622 + 0.8985 0.8501 0.7508 + 0.8941 0.8433 0.7393 + 0.8896 0.8366 0.7279 + 0.8851 0.8297 0.7164 + 0.8805 0.8229 0.7050 + 0.8759 0.8161 0.6936 + 0.8713 0.8093 0.6821 + 0.8666 0.8024 0.6708 + 0.8620 0.7956 0.6594 + 0.8574 0.7888 0.6481 + 0.8528 0.7820 0.6368 + 0.8482 0.7753 0.6255 + 0.8436 0.7685 0.6142 + 0.8390 0.7617 0.6030 + 0.8344 0.7550 0.5918 + 0.8298 0.7483 0.5807 + 0.8253 0.7416 0.5695 + 0.8207 0.7349 0.5584 + 0.8162 0.7282 0.5473 + 0.8116 0.7216 0.5363 + 0.8071 0.7149 0.5252 + 0.8026 0.7083 0.5142 + 0.7980 0.7017 0.5032 + 0.7935 0.6950 0.4923 + 0.7890 0.6884 0.4813 + 0.7846 0.6818 0.4704 + 0.7801 0.6753 0.4595 + 0.7756 0.6687 0.4487 + 0.7711 
0.6621 0.4378 + 0.7666 0.6555 0.4270 + 0.7621 0.6490 0.4161 + 0.7577 0.6424 0.4053 + 0.7532 0.6358 0.3946 + 0.7487 0.6293 0.3838 + 0.7442 0.6227 0.3730 + 0.7397 0.6161 0.3623 + 0.7352 0.6095 0.3515 + 0.7307 0.6029 0.3408 + 0.7262 0.5962 0.3300 + 0.7216 0.5896 0.3193 + 0.7171 0.5829 0.3086 + 0.7124 0.5762 0.2978 + 0.7078 0.5694 0.2871 + 0.7032 0.5626 0.2764 + 0.6985 0.5557 0.2656 + 0.6937 0.5487 0.2549 + 0.6889 0.5417 0.2442 + 0.6841 0.5347 0.2334 + 0.6792 0.5275 0.2228 + 0.6743 0.5203 0.2121 + 0.6693 0.5130 0.2014 + 0.6642 0.5056 0.1908 + 0.6591 0.4982 0.1802 + 0.6540 0.4906 0.1698 + 0.6488 0.4830 0.1593 + 0.6435 0.4753 0.1490 + 0.6382 0.4675 0.1388 + 0.6329 0.4596 0.1288 + 0.6275 0.4517 0.1189 + 0.6221 0.4438 0.1092 + 0.6167 0.4358 0.0996 + 0.6113 0.4278 0.0903 + 0.6058 0.4198 0.0813 + 0.6005 0.4118 0.0724 + 0.5951 0.4038 0.0639 + 0.5898 0.3958 0.0557 + 0.5845 0.3879 0.0477 + 0.5792 0.3801 0.0402 + 0.5740 0.3722 0.0331 + 0.5689 0.3645 0.0272 + 0.5638 0.3568 0.0222 + 0.5588 0.3492 0.0180 + 0.5539 0.3417 0.0146 + 0.5490 0.3342 0.0117 + 0.5441 0.3268 0.0093 + 0.5393 0.3194 0.0074 + 0.5346 0.3121 0.0059 + 0.5299 0.3049 0.0048 + 0.5252 0.2978 0.0040 + 0.5206 0.2906 0.0034 + 0.5160 0.2836 0.0030 + 0.5114 0.2765 0.0027 + 0.5068 0.2695 0.0026 + 0.5023 0.2625 0.0027 + 0.4978 0.2556 0.0028 + 0.4932 0.2486 0.0030 + 0.4887 0.2417 0.0033 + 0.4842 0.2348 0.0037 + 0.4797 0.2279 0.0040 + 0.4752 0.2211 0.0043 + 0.4707 0.2142 0.0045 + 0.4662 0.2074 0.0047 + 0.4617 0.2005 0.0048 + 0.4572 0.1937 0.0049 + 0.4527 0.1868 0.0049 + 0.4482 0.1800 0.0049 + 0.4437 0.1732 0.0049 + 0.4391 0.1663 0.0049 + 0.4346 0.1594 0.0048 + 0.4300 0.1526 0.0047 + 0.4255 0.1457 0.0046 + 0.4209 0.1387 0.0044 + 0.4163 0.1318 0.0042 + 0.4117 0.1247 0.0040 + 0.4071 0.1176 0.0038 + 0.4024 0.1104 0.0035 + 0.3978 0.1031 0.0032 + 0.3931 0.0956 0.0029 + 0.3885 0.0880 0.0025 + 0.3838 0.0802 0.0021 + 0.3791 0.0721 0.0017 +]; + +l = length(J); +if nargin < 1 + n = 256; +end +cmap = interp1(1:l, J, linspace(1,l,n), 
'*linear'); diff --git a/src/m/plot/plot_unit.py b/src/m/plot/plot_unit.py index 0706d18a9..318b676d7 100644 --- a/src/m/plot/plot_unit.py +++ b/src/m/plot/plot_unit.py @@ -118,7 +118,7 @@ def plot_unit(x, y, z, elements, data, is2d, isplanet, datatype, options, fig, a else: triangles = mpl.tri.Triangulation(x, y, elements) - tri = ax.tripcolor(triangles, data, colorlevels, cmap=cmap, norm=norm, alpha=alpha, edgecolors=edgecolor) + tri = ax.tripcolor(triangles, data, cmap=cmap, norm=norm, alpha=alpha, edgecolors=edgecolor) else: #first deal with colormap loccmap = plt.cm.ScalarMappable(cmap=cmap) diff --git a/src/m/plot/plotmodel.py b/src/m/plot/plotmodel.py index 436002497..5dd64101d 100644 --- a/src/m/plot/plotmodel.py +++ b/src/m/plot/plotmodel.py @@ -75,62 +75,89 @@ def plotmodel(md, *args): if plotnum == 1: plotnum = None - # NOTE: The inline comments for each of the following parameters are - # taken from https://matplotlib.org/api/_as_gen/mpl_toolkits.axes_grid1.axes_grid.ImageGrid.html - # - direction = options.list[0].getfieldvalue('direction', 'row') # {"row", "column"}, default: "row" - axes_pad = options.list[0].getfieldvalue('axes_pad', 0.25) # float or (float, float), default : 0.02; Padding or (horizonal padding, vertical padding) between axes, in inches - add_all = options.list[0].getfieldvalue('add_all', True) # bool, default: True - share_all = options.list[0].getfieldvalue('share_all', True) # bool, default: False - label_mode = options.list[0].getfieldvalue('label_mode', 'L') # {"L", "1", "all"}, default: "L"; Determines which axes will get tick labels: "L": All axes on the left column get vertical tick labels; all axes on the bottom row get horizontal tick labels;. "1": Only the bottom left axes is labelled. "all": all axes are labelled. - - # Translate MATLAB colorbar mode to matplotlib - # - # TODO: - # - Add 'edge' option (research if there is a corresponding option in - # MATLAB)? 
- # - colorbar = options.list[0].getfieldvalue('colorbar', 'on') # on, off (single) - - if colorbar == 'on': - colorbar = 'each' - elif colorbar == 'one': - colorbar = 'single' - elif colorbar == 'off': - colorbar = None + if options.list[0].getfieldvalue('use_subplots',0): + #TODO: Make this section behave exactly the same as "ImageGrid". + + #NOTE: This section forces "plotmodel" to work like the original code. + share_all = options.list[0].getfieldvalue('share_all',True) + + #Make axgrid using "subplots" + axgrid = fig.subplots(nrows, ncols, + sharex=share_all,sharey=share_all, + width_ratios=options.list[0].getfieldvalue('width_ratios',[1]*ncols), + height_ratios=options.list[0].getfieldvalue('height_ratios',[1]*nrows), + squeeze=True) + if (nrows == 1) & (ncols == 1): + axgrid = np.array([axgrid]) # wrap single Axes in an array so axgrid is iterable + axgrid = axgrid.flatten() # flattening... + + for ax in axgrid: + ax.set_aspect('equal') + + # Control axes vertical and horizontal spaces. + if options.list[0].exist('axes_pad'): + axes_pad = options.list[0].getfieldvalue('axes_pad', 0.25) + if len(axes_pad) == 1: axes_pad = 2*axes_pad + fig.subplots_adjust(wspace=axes_pad[0],hspace=axes_pad[1]) else: - raise RuntimeError('plotmodel error: colorbar mode \'{}\' is not a valid option'.format(colorbar)) - - cbar_mode = colorbar # {"each", "single", "edge", None }, default: None - cbar_location = options.list[0].getfieldvalue('colorbarpos', 'right') # {"left", "right", "bottom", "top"}, default: "right" - cbar_pad = options.list[0].getfieldvalue('colorbarpad', 0.025) # float, default: None - cbar_size = options.list[0].getfieldvalue('colorbarsize', '5%') # size specification (see Size.from_any), default: "5%" - - # NOTE: Second parameter is: - # - # rect(float, float, float, float) or int - # - # The axes position, as a (left, bottom, width, height) tuple or as a - # three-digit subplot position code (e.g., "121").
- # - axgrid = ImageGrid( - fig, - 111, - nrows_ncols=(nrows, ncols), - direction=direction, - axes_pad=axes_pad, - share_all=share_all, - label_mode=label_mode, - cbar_mode=cbar_mode, - cbar_location=cbar_location, - cbar_size=cbar_size, - cbar_pad=cbar_pad - ) - - if cbar_mode == 'None': - for ax in axgrid.cbar_axes: - fig._axstack.remove(ax) - for i, ax in enumerate(axgrid.axes_all): + # NOTE: The inline comments for each of the following parameters are + # taken from https://matplotlib.org/api/_as_gen/mpl_toolkits.axes_grid1.axes_grid.ImageGrid.html + # + direction = options.list[0].getfieldvalue('direction', 'row') # {"row", "column"}, default: "row" + axes_pad = options.list[0].getfieldvalue('axes_pad', 0.25) # float or (float, float), default: 0.02; Padding or (horizontal padding, vertical padding) between axes, in inches + add_all = options.list[0].getfieldvalue('add_all', True) # bool, default: True + share_all = options.list[0].getfieldvalue('share_all', True) # bool, default: False + label_mode = options.list[0].getfieldvalue('label_mode', 'L') # {"L", "1", "all"}, default: "L"; Determines which axes will get tick labels: "L": All axes on the left column get vertical tick labels; all axes on the bottom row get horizontal tick labels. "1": Only the bottom left axes is labelled. "all": all axes are labelled. + + # Translate MATLAB colorbar mode to matplotlib + # + # TODO: + # - Add 'edge' option (research if there is a corresponding option in + # MATLAB)?
+ # + colorbar = options.list[0].getfieldvalue('colorbar', 'on') # on, off (single) + + if colorbar == 'on': + colorbar = 'each' + elif colorbar == 'one': + colorbar = 'single' + elif colorbar == 'off': + colorbar = None + else: + raise RuntimeError('plotmodel error: colorbar mode \'{}\' is not a valid option'.format(colorbar)) + + cbar_mode = colorbar # {"each", "single", "edge", None }, default: None + cbar_location = options.list[0].getfieldvalue('colorbarpos', 'right') # {"left", "right", "bottom", "top"}, default: "right" + cbar_pad = options.list[0].getfieldvalue('colorbarpad', 0.025) # float, default: None + cbar_size = options.list[0].getfieldvalue('colorbarsize', '5%') # size specification (see Size.from_any), default: "5%" + + # NOTE: Second parameter is: + # + # rect(float, float, float, float) or int + # + # The axes position, as a (left, bottom, width, height) tuple or as a + # three-digit subplot position code (e.g., "121"). + # + axgrid = ImageGrid( + fig, + 111, + nrows_ncols=(nrows, ncols), + direction=direction, + axes_pad=axes_pad, + share_all=share_all, + label_mode=label_mode, + cbar_mode=cbar_mode, + cbar_location=cbar_location, + cbar_size=cbar_size, + cbar_pad=cbar_pad + ) + + if cbar_mode is None: + for ax in axgrid.cbar_axes: + fig._axstack.remove(ax) + + #for i, ax in enumerate(axgrid.axes_all): + for i in range(len(axgrid)): try: plot_manager(options.list[i].getfieldvalue('model', md), options.list[i], fig, axgrid, i) except KeyError: diff --git a/src/m/solve/listoutputs.m b/src/m/solve/listoutputs.m index f9b2e4d11..1721d0903 100644 --- a/src/m/solve/listoutputs.m +++ b/src/m/solve/listoutputs.m @@ -31,6 +31,7 @@ BalancethicknessSpcthickness BalancethicknessThickeningRate BasalCrevasse +BasalforcingsCoriolisF BasalforcingsDeepwaterMeltingRatearma BasalforcingsDeepwaterMeltingRateNoise BasalforcingsDeepwaterMeltingRateValuesAutoregression @@ -50,6 +51,10 @@ BasalforcingsIsmip6Tf BasalforcingsIsmip6TfShelf BasalforcingsIsmip6MeltAnomaly
+BasalforcingsIsmip7Tf +BasalforcingsIsmip7TfShelf +BasalforcingsIsmip7Salinity +BasalforcingsIsmip7SalinityShelf BasalforcingsMeltrateFactor BasalforcingsOceanSalinity BasalforcingsOceanTemp @@ -445,6 +450,7 @@ SmbIsInitialized SmbMAdd SmbMappedforcingpoint +SmbMappedforcingprecipscaling SmbMassBalance SmbMassBalanceSnow SmbMassBalanceIce @@ -673,6 +679,7 @@ TotalFloatingBmbScaled TotalGroundedBmb TotalGroundedBmbScaled +TotalHydrologyBasalFlux TotalSmb TotalSmbMelt TotalSmbRefreeze diff --git a/src/m/solve/loadresultsfromcluster.m b/src/m/solve/loadresultsfromcluster.m index 12926b340..2376bbe7b 100644 --- a/src/m/solve/loadresultsfromcluster.m +++ b/src/m/solve/loadresultsfromcluster.m @@ -18,12 +18,12 @@ cluster=md.cluster; %Download outputs from the cluster -if ~nolog, +if ~nolog filelist={[md.miscellaneous.name '.outlog'],[md.miscellaneous.name '.errlog']}; else filelist={}; end -if md.qmu.isdakota, +if md.qmu.isdakota filelist{end+1}=[md.miscellaneous.name '.qmu.err']; filelist{end+1}=[md.miscellaneous.name '.qmu.out']; if isfield(md.qmu.params,'tabular_graphics_data'), @@ -56,7 +56,7 @@ delete(filename) end end -if exist([md.private.runtimename '.tar.gz']) & ~ispc(), +if exist([md.private.runtimename '.tar.gz']) & ~ispc() delete([md.private.runtimename '.tar.gz']); end @@ -68,7 +68,7 @@ if md.qmu.isdakota delete([md.miscellaneous.name '.qmu.in']); end - if ~ispc(), + if ~ispc() delete([md.miscellaneous.name '.queue']); else delete([md.miscellaneous.name '.bat']); diff --git a/src/m/solve/loadresultsfromdisk.m b/src/m/solve/loadresultsfromdisk.m index 3ca3abca4..a90a5a8df 100644 --- a/src/m/solve/loadresultsfromdisk.m +++ b/src/m/solve/loadresultsfromdisk.m @@ -56,19 +56,31 @@ %read log files onto fields (only keep the first 1000 lines!) 
if exist([md.miscellaneous.name '.errlog'],'file') - md.results.(structure(1).SolutionType)(1).errlog=char(textread([md.miscellaneous.name '.errlog'],'%s',1000,'delimiter','\n')); + if ~verLessThan('matlab', '9.9') % R2020b = version 9.9 + errlog = readlines([md.miscellaneous.name '.errlog'],'EmptyLineRule','skip'); + errlog = errlog(1:min(1000, end)); + else + errlog = char(textread([md.miscellaneous.name '.errlog'],'%s',1000,'delimiter','\n')); + end + md.results.(structure(1).SolutionType)(1).errlog= errlog; else md.results.(structure(1).SolutionType)(1).errlog=''; end if exist([md.miscellaneous.name '.outlog'],'file') - md.results.(structure(1).SolutionType)(1).outlog=char(textread([md.miscellaneous.name '.outlog'],'%c',4000,'delimiter','\n')); + if ~verLessThan('matlab', '9.9') % R2020b = version 9.9 + outlog = readlines([md.miscellaneous.name '.outlog']); + outlog = outlog(1:min(4000,end)); + else + outlog = char(textread([md.miscellaneous.name '.outlog'],'%s',4000,'delimiter','\n')); + end + md.results.(structure(1).SolutionType)(1).outlog= outlog; else md.results.(structure(1).SolutionType)(1).outlog=''; end if ~isempty(md.results.(structure(1).SolutionType)(1).errlog) - disp(['loadresultsfromdisk info message: error during solution. Check your errlog and outlog model fields']); + disp(['WARNING: possible error during solution. Check the errlog and outlog']); end %postprocess qmu results if necessary diff --git a/src/m/solve/loadresultsfromdisk.py b/src/m/solve/loadresultsfromdisk.py index 15b7594d1..6c88ef76d 100644 --- a/src/m/solve/loadresultsfromdisk.py +++ b/src/m/solve/loadresultsfromdisk.py @@ -62,7 +62,7 @@ def loadresultsfromdisk(md, filename): setattr(getattr(md.results, structure[0].SolutionType)[0], 'outlog', '') if getattr(md.results, structure[0].SolutionType)[0].errlog: - print('loadresultsfromdisk info message: error during solution. Check your errlog and outlog model fields.') + print('WARNING: possible error during solution. 
Check the errlog and outlog') # If only one solution, extract it from list for user friendliness if len(structure) == 1 and structure[0].SolutionType != 'TransientSolution': diff --git a/src/m/solve/marshall.m b/src/m/solve/marshall.m index d8dc842b2..d7518aaf4 100644 --- a/src/m/solve/marshall.m +++ b/src/m/solve/marshall.m @@ -1,20 +1,19 @@ -function marshall(md) -%MARSHALL - outputs a compatible binary file from @model md, for certain solution type. +function marshall(md, filename) +%MARSHALL - writes a binary file from a model % -% The routine creates a compatible binary file from @model md -% This binary file will be used for parallel runs in JPL-package +% The function creates a binary file from model md % % Usage: -% marshall(md) +% marshall(md, filename) if md.verbose.solution - disp(['marshalling file ' md.miscellaneous.name '.bin']); + disp(['marshalling file ' filename]); end %open file for binary writing -fid=fopen([ md.miscellaneous.name '.bin'],'wb'); +fid=fopen(filename, 'wb'); if fid==-1 - error(['marshall error message: could not open ' [md.miscellaneous.name '.bin'],' file for binary writing']); + error(['marshall error message: could not open ' filename ' file for binary writing']); end % Go through all model fields: check that it is a class and call checkconsistency @@ -42,9 +41,8 @@ function marshall(md) %close file st=fclose(fid); - -if st==-1, - error(['marshall error message: could not close file ' [md.miscellaneous.name '.bin']]); +if st==-1 + error(['marshall error message: could not close file ' filename]); end % Uncomment the following to make a copy of the binary input file for debugging diff --git a/src/m/solve/marshall.py b/src/m/solve/marshall.py index 7c874d5e8..63a7d8ec1 100644 --- a/src/m/solve/marshall.py +++ b/src/m/solve/marshall.py @@ -2,25 +2,23 @@ from WriteData import WriteData +def marshall(md, filename): + """marshall - writes a binary file from a model -def marshall(md): - """marshall - outputs a compatible binary file 
from @model md, for certain solution type. - - The routine creates a compatible binary file from @model md - his binary file will be used for parallel runs in JPL-package + The routine creates a binary file from model md Usage: - marshall(md) + marshall(md, filename) """ if md.verbose.solution: - print('marshalling file \'{}\'.bin'.format(md.miscellaneous.name)) + print('marshalling file \'{}\''.format(filename)) # Open file for binary writing try: - fid = open(md.miscellaneous.name + '.bin', 'wb') + fid = open(filename, 'wb') except IOError as e: - print('marshall error message: could not open \'{}.bin\' file for binary writing due to: {}'.format(md.miscellaneous.name, e)) + print('marshall error message: could not open \'{}\' file for binary writing due to: {}'.format(filename, e)) fields = md.properties() fields.sort() # sort fields so that comparison of binary files is easier @@ -45,7 +43,7 @@ def marshall(md): fid.close() except IOError as e: - print('marshall error message: could not close \'{}.bin\' file for binary writing due to: {}'.format(md.miscellaneous.name, e)) + print('marshall error message: could not close \'{}\' file for binary writing due to: {}'.format(filename, e)) # Uncomment the following to make a copy of the binary input file for # debugging purposes (can be fed into scripts/BinRead.py).
diff --git a/src/m/solve/solve.m b/src/m/solve/solve.m index cc0a58b3a..4312e35ca 100644 --- a/src/m/solve/solve.m +++ b/src/m/solve/solve.m @@ -83,8 +83,28 @@ end options=pairoptions(varargin{:},'solutionstring',solutionstring); +%If we are restarting, actually use the provided runtime name: +restart=getfieldvalue(options,'restart',''); +if restart==1 + %Leave the runtimename as is +else + if ~isempty(restart) + md.private.runtimename=restart; + elseif getfieldvalue(options,'runtimename',true) + c=clock; + md.private.runtimename=sprintf('%s-%02i-%02i-%04i-%02i-%02i-%02i-%i',md.miscellaneous.name,c(2),c(3),c(1),c(4),c(5),floor(c(6)),feature('GetPid')); + else + md.private.runtimename=md.miscellaneous.name; + end +end + +%Do we load results only? +if getfieldvalue(options,'loadonly',false) + md=loadresultsfromcluster(md); + return; +end + %recover some fields -md.private.solution=solutionstring; cluster=md.cluster; if strcmpi(getfieldvalue(options,'batch','no'),'yes') batch=1; @@ -93,49 +113,30 @@ end %check model consistency -if strcmpi(getfieldvalue(options,'checkconsistency','yes'),'yes'), - if md.verbose.solution, +if strcmpi(getfieldvalue(options,'checkconsistency','yes'),'yes') + md.private.solution=solutionstring; + if md.verbose.solution disp('checking model consistency'); end ismodelselfconsistent(md); end -%If we are restarting, actually use the provided runtime name: -restart=getfieldvalue(options,'restart',''); -%First, build a runtime name that is unique -if restart==1 - %Leave the runtimename as is -else - if ~isempty(restart), - md.private.runtimename=restart; - elseif getfieldvalue(options,'runtimename',true), - c=clock; - md.private.runtimename=sprintf('%s-%02i-%02i-%04i-%02i-%02i-%02i-%i',md.miscellaneous.name,c(2),c(3),c(1),c(4),c(5),floor(c(6)),feature('GetPid')); - else - md.private.runtimename=md.miscellaneous.name; - end -end - %if running QMU analysis, some preprocessing of Dakota files using model fields needs to be carried out. 
-if md.qmu.isdakota, +if md.qmu.isdakota md=preqmu(md,options); end -%Do we load results only? -if getfieldvalue(options,'loadonly',false), - md=loadresultsfromcluster(md); - return; -end +%Prepare directory in execution %Write all input files -marshall(md); % bin file -ToolkitsFile(md.toolkits,[md.miscellaneous.name '.toolkits']); % toolkits file -BuildQueueScript(cluster,md.private.runtimename,md.miscellaneous.name,md.private.solution,md.settings.io_gather,md.debug.valgrind,md.debug.gprof,md.qmu.isdakota,md.transient.isoceancoupling); % queue file +marshall(md, [md.miscellaneous.name '.bin']); % bin file +ToolkitsFile(md.toolkits,[md.miscellaneous.name '.toolkits']); % toolkits file +BuildQueueScript(cluster, md, [md.miscellaneous.name '.queue']); % queue file %Upload all required files modelname = md.miscellaneous.name; filelist = {[modelname '.bin'] [modelname '.toolkits']}; -if ispc, +if ispc filelist{end+1}=[modelname '.bat']; else filelist{end+1}=[modelname '.queue']; @@ -145,7 +146,7 @@ filelist{end+1} = [modelname '.qmu.in']; end -if isempty(restart), +if isempty(restart) disp('uploading input files') UploadQueueJob(cluster,md.miscellaneous.name,md.private.runtimename,filelist); end @@ -154,32 +155,30 @@ disp('launching solution sequence') LaunchQueueJob(cluster,md.miscellaneous.name,md.private.runtimename,filelist,restart,batch); -%return if batch: +%return if batch: if batch - if md.verbose.solution - disp('batch mode requested: not launching job interactively'); - disp('launch solution sequence on remote cluster by hand'); - end + disp('batch mode requested: not launching job interactively'); + disp('launch solution sequence on remote cluster by hand'); return; end %wait on lock -if isnan(md.settings.waitonlock), - %load when user enters 'y' - disp('solution launched on remote cluster. 
log in to detect job completion.'); +if md.settings.waitonlock==0 + disp('Model results must be loaded manually with md=loadresultsfromcluster(md);'); + return +elseif isnan(md.settings.waitonlock) + disp('solution launched on remote cluster. Log in to detect job completion.'); choice=input('Is the job successfully completed? (y/n)','s'); - if ~strcmp(choice,'y'), + if ~strcmp(choice,'y') disp('Results not loaded... exiting'); + return; else - md=loadresultsfromcluster(md); - end -elseif md.settings.waitonlock>0, - %wait for done file - done=waitonlock(md); - if md.verbose.solution, - disp('loading results from cluster'); end - md=loadresultsfromcluster(md); -elseif md.settings.waitonlock==0, - disp('Model results must be loaded manually with md=loadresultsfromcluster(md);'); +elseif md.settings.waitonlock>0 + done = waitonlock(md); +else + error('not supported'); end + +if md.verbose.solution; disp('loading results from cluster'); end +md=loadresultsfromcluster(md); diff --git a/src/m/solve/solve.py b/src/m/solve/solve.py index 8e937573a..95951db19 100644 --- a/src/m/solve/solve.py +++ b/src/m/solve/solve.py @@ -86,23 +86,8 @@ def solve(md, solutionstring, *args): raise ValueError('solutionstring {} not supported!'.format(solutionstring)) options = pairoptions('solutionstring', solutionstring, *args) - # Recover some fields - md.private.solution = solutionstring - cluster = md.cluster - if options.getfieldvalue('batch', 'no') == 'yes': - batch = 1 - else: - batch = 0 - - # Check model consistency - if options.getfieldvalue('checkconsistency', 'yes') == 'yes': - if md.verbose.solution: - print('checking model consistency') - ismodelselfconsistent(md) - # If we are restarting, actually use the provided runtime name restart = options.getfieldvalue('restart', '') - # First, build a runtime name that is unique if restart == 1: pass # Leave the runtimename as is else: @@ -115,20 +100,35 @@ def solve(md, solutionstring, *args): else: md.private.runtimename = 
 md.miscellaneous.name
-    # If running QMU analysis, some preprocessing of Dakota files using model
-    # fields needs to be carried out
-    if md.qmu.isdakota:
-        md = preqmu(md, options)
-
     # Do we load results only?
     if options.getfieldvalue('loadonly', False):
         md = loadresultsfromcluster(md)
         return md

+    # Recover some fields
+    cluster = md.cluster
+    if options.getfieldvalue('batch', 'no') == 'yes':
+        batch = 1
+    else:
+        batch = 0
+
+    # Check model consistency
+    if options.getfieldvalue('checkconsistency', 'yes') == 'yes':
+        md.private.solution = solutionstring
+        if md.verbose.solution:
+            print('checking model consistency')
+        ismodelselfconsistent(md)
+
+
+    # If running QMU analysis, some preprocessing of Dakota files using model
+    # fields needs to be carried out
+    if md.qmu.isdakota:
+        md = preqmu(md, options)
+
     # Write all input files
-    marshall(md)  # bin file
-    md.toolkits.ToolkitsFile(md.miscellaneous.name + '.toolkits')  # toolkits file
-    cluster.BuildQueueScript(md.private.runtimename, md.miscellaneous.name, md.private.solution, md.settings.io_gather, md.debug.valgrind, md.debug.gprof, md.qmu.isdakota, md.transient.isoceancoupling)  # queue file
+    marshall(md, md.miscellaneous.name + '.bin')  # bin file
+    md.toolkits.ToolkitsFile(md.miscellaneous.name + '.toolkits')  # toolkits file
+    cluster.BuildQueueScript(md, md.miscellaneous.name + '.queue')  # queue file

     # Upload all required files
     modelname = md.miscellaneous.name
@@ -152,19 +152,19 @@ def solve(md, solutionstring, *args):
     # Return if batch
     if batch:
-        if md.verbose.solution:
-            print('batch mode requested: not launching job interactively')
-            print('launch solution sequence on remote cluster by hand')
+        print('batch mode requested: not launching job interactively')
+        print('launch solution sequence on remote cluster by hand')
         return md

     # Wait on lock
     if md.settings.waitonlock > 0:
-        # Wait for done file
         done = waitonlock(md)
-        if md.verbose.solution:
-            print('loading results from cluster')
-        md = loadresultsfromcluster(md)
+    elif md.settings.waitonlock == 0:
+        print('Model results must be loaded manually with md = loadresultsfromcluster(md).')
+        return md
+
+    #load results
+    if md.verbose.solution: print('loading results from cluster')
+    md = loadresultsfromcluster(md)

     return md
diff --git a/src/m/solve/solveslm.m b/src/m/solve/solveslm.m
index b14e27533..7e3c6073d 100644
--- a/src/m/solve/solveslm.m
+++ b/src/m/solve/solveslm.m
@@ -67,10 +67,10 @@
 miscellaneousnames{end+1}=slm.earth.miscellaneous.name;
 nps{end+1}=slm.earth.cluster.np;

-BuildQueueScriptMultipleModels(cluster,slm.private.runtimename,slm.miscellaneous.name,slm.private.solution,privateruntimenames,miscellaneousnames,nps);
+filelist={[slm.miscellaneous.name '.queue']};
+BuildQueueScriptMultipleModels(cluster, slm, privateruntimenames, miscellaneousnames, nps, filelist{1});

 %Upload all required files, given that each individual solution for icecaps and earth model already did:
-filelist={[slm.miscellaneous.name '.queue']};
 UploadQueueJob(cluster,slm.miscellaneous.name,slm.private.runtimename,filelist);

 %launch queue job:
diff --git a/src/m/solve/solveslm.py b/src/m/solve/solveslm.py
index da2b70086..590385650 100644
--- a/src/m/solve/solveslm.py
+++ b/src/m/solve/solveslm.py
@@ -1,13 +1,10 @@
 from datetime import datetime
 import os
-
 import numpy as np
-
 from loadresultsfromcluster import loadresultsfromcluster
 from pairoptions import pairoptions
 from waitonlock import waitonlock
-

 def solveslm(slm, solutionstringi, *args):
     """solveslm - apply solution sequence for this sealevel model
diff --git a/src/m/solve/waitonlock.m b/src/m/solve/waitonlock.m
index 376507f9e..5435f78ef 100644
--- a/src/m/solve/waitonlock.m
+++ b/src/m/solve/waitonlock.m
@@ -28,7 +28,6 @@
 	logfilename = [executionpath '/' md.private.runtimename '/' md.miscellaneous.name '.outlog'];
 end

-%If we are using the generic cluster in interactive mode, job is already complete
 if (isa(cluster,'generic') & cluster.interactive) | isa(cluster,'generic_static'),
 	%We are in interactive mode, no need to check for job completion
diff --git a/src/wrappers/matlab/Makefile.am b/src/wrappers/matlab/Makefile.am
index 7a7a57ed4..be0e6b836 100644
--- a/src/wrappers/matlab/Makefile.am
+++ b/src/wrappers/matlab/Makefile.am
@@ -47,8 +47,8 @@ lib_LTLIBRARIES += \
 	ElementConnectivity_matlab.la \
 	ExpSimplify_matlab.la \
 	ExpToLevelSet_matlab.la \
+	InterpFromGrid_matlab.la \
 	InterpFromGridToMesh_matlab.la \
-	InterpFromGrid.la \
 	InterpFromMesh2d_matlab.la \
 	InterpFromMeshToGrid_matlab.la \
 	InterpFromMeshToMesh2d_matlab.la \
@@ -228,9 +228,9 @@ ExpSimplify_matlab_la_CXXFLAGS = ${AM_CXXFLAGS}
 ExpSimplify_matlab_la_LIBADD = ${deps}

 #Very specific case for this mex: do not compile like other modules
-InterpFromGrid_la_SOURCES = ../InterpFromGrid/InterpFromGrid.cpp
-InterpFromGrid_la_CXXFLAGS = ${MEXLIB}
-InterpFromGrid_la_LIBADD =
+InterpFromGrid_matlab_la_SOURCES = ../InterpFromGrid/InterpFromGrid.cpp
+InterpFromGrid_matlab_la_CXXFLAGS = ${AM_CXXFLAGS}
+InterpFromGrid_matlab_la_LIBADD = ${LIBADD_FOR_MEX}

 InterpFromGridToMesh_matlab_la_SOURCES = ../InterpFromGridToMesh/InterpFromGridToMesh.cpp
 InterpFromGridToMesh_matlab_la_CXXFLAGS = ${AM_CXXFLAGS}
diff --git a/test/Archives/Archive258.arch b/test/Archives/Archive258.arch
index 92461413d..bcb71f605 100644
Binary files a/test/Archives/Archive258.arch and b/test/Archives/Archive258.arch differ
diff --git a/test/NightlyRun/test119.m b/test/NightlyRun/test119.m
index 84325808f..8a840dcac 100644
--- a/test/NightlyRun/test119.m
+++ b/test/NightlyRun/test119.m
@@ -23,7 +23,7 @@

 %Fields and tolerances to track changes
 field_names ={'x1','y1','x2','y2','nbelements','elapsed time'};
-field_tolerances={2e-9,2e-9,1e-13,1e-13,1e-13,8.5};
+field_tolerances={2e-9,2e-9,1e-13,1e-13,1e-13,10};
 field_values={...
 	x1, y1,...
 	x2, y2,...
diff --git a/test/NightlyRun/test119.py b/test/NightlyRun/test119.py
index 72acd868f..ee12552ad 100644
--- a/test/NightlyRun/test119.py
+++ b/test/NightlyRun/test119.py
@@ -27,5 +27,5 @@

 #Fields and tolerances to track changes
 field_names = ['x1', 'y1', 'x2', 'y2', 'nbelements', 'elapsed time']
-field_tolerances = [2e-9, 2e-9, 1e-13, 1e-13, 1e-13, 8.5]
+field_tolerances = [2e-9, 2e-9, 1e-13, 1e-13, 1e-13, 10]
 field_values = [x1, y1, x2, y2, nbewithinrange, elapsedtime]
diff --git a/test/NightlyRun/test258.m b/test/NightlyRun/test258.m
index f44b26caf..27eaab87b 100644
--- a/test/NightlyRun/test258.m
+++ b/test/NightlyRun/test258.m
@@ -44,6 +44,7 @@
 md.smb.mappedforcingelevation=mean(md2.geometry.surface(md2.mesh.elements),2);
 md.smb.lapseTaValue=md.smb.lapseTaValue*ones(size(md.smb.mappedforcingelevation));
 md.smb.lapsedlwrfValue=md.smb.lapsedlwrfValue*ones(size(md.smb.mappedforcingelevation));
+md.smb.mappedforcingprecipscaling=(1:md.mesh.numberofelements)'/md.mesh.numberofelements;

 %smb settings
 md.smb.requested_outputs={'SmbDz','SmbT','SmbD','SmbRe','SmbGdn','SmbGsp','SmbEC',...
@@ -74,7 +75,7 @@

 %Fields and tolerances to track changes
 field_names ={'Layers','SmbDz','SmbT','SmbD','SmbRe','SmbGdn','SmbGsp','SmbA' ,'SmbEC','SmbMassBalance','SmbMAdd','SmbDzAdd','SmbFAC','SmbMeanSHF','SmbMeanLHF','SmbMeanULW','SmbNetLW','SmbNetSW','SmbTs','SmbT10','SmbT30','SmbT50','SmbAccumulatedMassBalance','SmbAccumulatedRunoff','SmbAccumulatedMelt','SmbAccumulatedEC','SmbAccumulatedPrecipitation','SmbAccumulatedRain','SmbAccumulatedRefreeze','SmbRunoff','SmbMelt','SmbEC','SmbPrecipitation','SmbRain','SmbRefreeze','SmbWAdd'};
-field_tolerances ={1e-12,4e-11,2e-11,3e-11,6e-11,8e-11,8e-11,1e-12,5e-11,2e-12,1e-12,1e-12,4e-11,2e-11,5e-11,1e-11,9e-10,2e-11,2e-11,2e-11,2e-11,2e-11,1e-11,9e-10,2e-11,2e-09,1e-11,1e-11,1e-11,8e-10,2e-11,2e-11,1e-11,1e-11,2e-11,1e-11};
+field_tolerances ={1e-12,4e-11,2e-11,3e-11,6e-11,8e-11,8e-11,1e-12,5e-11,3e-12,1e-12,1e-12,4e-11,2e-11,5e-11,1e-11,9e-10,2e-11,2e-11,2e-11,2e-11,2e-11,1e-11,9e-10,2e-11,2e-09,1e-11,1e-11,1e-11,8e-10,2e-11,2e-11,1e-11,1e-11,2e-11,1e-11};
 field_values={...
 	(nlayers),...
diff --git a/test/NightlyRun/test258.py b/test/NightlyRun/test258.py
index 7a865b70b..629cccb2b 100644
--- a/test/NightlyRun/test258.py
+++ b/test/NightlyRun/test258.py
@@ -62,6 +62,7 @@
 md.smb.mappedforcingelevation=np.mean(md2.geometry.surface[md2.mesh.elements-1],axis=1)
 md.smb.lapseTaValue=md.smb.lapseTaValue*np.ones(np.shape(md.smb.mappedforcingelevation))
 md.smb.lapsedlwrfValue=md.smb.lapsedlwrfValue*np.ones(np.shape(md.smb.mappedforcingelevation))
+md.smb.mappedforcingprecipscaling=np.arange(1,md.mesh.numberofelements+1)/md.mesh.numberofelements

 #smb settings
 md.smb.requested_outputs = ['SmbDz','SmbT','SmbD','SmbRe','SmbGdn','SmbGsp','SmbEC',
@@ -91,7 +92,7 @@

 #Fields and tolerances to track changes
 field_names = ['Layers', 'SmbDz', 'SmbT', 'SmbD', 'SmbRe', 'SmbGdn', 'SmbGsp', 'SmbA', 'SmbEC', 'SmbMassBalance', 'SmbMAdd', 'SmbDzAdd', 'SmbFAC', 'SmbMeanSHF', 'SmbMeanLHF', 'SmbMeanULW', 'SmbNetLW', 'SmbNetSW', 'SmbTs', 'SmbT10', 'SmbT30', 'SmbT50', 'SmbAccumulatedMassBalance', 'SmbAccumulatedRunoff', 'SmbAccumulatedMelt', 'SmbAccumulatedEC', 'SmbAccumulatedPrecipitation', 'SmbAccumulatedRain', 'SmbAccumulatedRefreeze', 'SmbRunoff', 'SmbMelt', 'SmbEC', 'SmbPrecipitation', 'SmbRain', 'SmbRefreeze', 'SmbWAdd']
-field_tolerances = [1e-12, 4e-11, 2e-11, 3e-11, 6e-11, 8e-11, 8e-11, 1e-12, 5e-11, 2e-12, 1e-12, 1e-12, 4e-11, 2e-11, 5e-11, 1e-11, 9e-10, 2e-11, 2e-11, 2e-11, 2e-11, 2e-11, 1e-11, 9e-10, 2e-11, 2e-09, 1e-11, 1e-11, 1e-11, 8e-10, 2e-11, 2e-11, 1e-11, 1e-11, 2e-11, 1e-11]
+field_tolerances = [1e-12, 4e-11, 2e-11, 3e-11, 6e-11, 8e-11, 8e-11, 1e-12, 5e-11, 3e-12, 1e-12, 1e-12, 4e-11, 2e-11, 5e-11, 1e-11, 9e-10, 2e-11, 2e-11, 2e-11, 2e-11, 2e-11, 1e-11, 9e-10, 2e-11, 2e-09, 1e-11, 1e-11, 1e-11, 8e-10, 2e-11, 2e-11, 1e-11, 1e-11, 2e-11, 1e-11]
 # Shape is different in python solution (fixed using reshape) which can cause test failure
 field_values = [
diff --git a/test/NightlyRun/test291.py b/test/NightlyRun/test291.py
index 5300d15a7..2b84b688d 100644
--- a/test/NightlyRun/test291.py
+++ b/test/NightlyRun/test291.py
@@ -20,7 +20,7 @@

 #Fields and tolerances to track changes
 field_names = ['Vx', 'Vy', 'Vz', 'Vel', 'Pressure']
-field_tolerances = [5e-5, 5e-5, 8e-5, 5e-5, 1e-7]
+field_tolerances = [5e-5, 5e-5, 1e-4, 9e-5, 1e-7]
 field_values = [md.results.StressbalanceSolution.Vx,
                 md.results.StressbalanceSolution.Vy,
                 md.results.StressbalanceSolution.Vz,
diff --git a/test/NightlyRun/test404.m b/test/NightlyRun/test404.m
index 04e6fd0c6..c6c6c5614 100644
--- a/test/NightlyRun/test404.m
+++ b/test/NightlyRun/test404.m
@@ -9,7 +9,7 @@

 %Fields and tolerances to track changes
 field_names ={'Vx','Vy','Vz','Vel','Pressure'};
-field_tolerances={2e-05,6e-06,2e-06,1e-06,8e-07};
+field_tolerances={2e-05,2e-05,2e-06,1e-06,8e-07};
 field_values={...
 	(md.results.StressbalanceSolution.Vx),...
 	(md.results.StressbalanceSolution.Vy),...
diff --git a/test/NightlyRun/test404.py b/test/NightlyRun/test404.py
index 2b38c29f3..fff87b81f 100644
--- a/test/NightlyRun/test404.py
+++ b/test/NightlyRun/test404.py
@@ -18,7 +18,7 @@

 #Fields and tolerances to track changes
 field_names = ['Vx', 'Vy', 'Vz', 'Vel', 'Pressure']
-field_tolerances = [2e-05, 6e-06, 2e-06, 1e-06, 8e-07]
+field_tolerances = [2e-05, 2e-05, 2e-06, 1e-06, 8e-07]
 field_values = [md.results.StressbalanceSolution.Vx,
                 md.results.StressbalanceSolution.Vy,
                 md.results.StressbalanceSolution.Vz,
diff --git a/test/NightlyRun/test437.m b/test/NightlyRun/test437.m
index 2e6ceecda..65b4b6956 100644
--- a/test/NightlyRun/test437.m
+++ b/test/NightlyRun/test437.m
@@ -52,7 +52,7 @@
 	'Enthalpy3','Temperature3','Waterfraction3','BasalMeltingRate3','Watercolumn3',...
 	'Enthalpy4','Temperature4','Waterfraction4','BasalMeltingRate4','Watercolumn4'};
 field_tolerances={1.e-10,1.e-10,1.e-10,1.e-9,1.e-10,...
-	1.e-10,1.e-10,1.e-10,2.e-9,2.e-10,...
+	1.e-10,1.e-10,1.e-10,2.e-9,3.e-10,...
 	1.e-10,1.e-10,1.e-10,2.e-9,1.e-10,...
 	1.e-10,1.e-10,1.e-10,2.e-9,1.e-10};
 i1=1;
 i2=ceil(t2/md.timestepping.time_step)+2;
 i3=ceil(md.timestepping.final_time/(2.*md.timestepping.time_step));
 i4=size(md.results.TransientSolution,2);
diff --git a/test/NightlyRun/test437.py b/test/NightlyRun/test437.py
index 5b9d3e56e..cc65bfcd1 100644
--- a/test/NightlyRun/test437.py
+++ b/test/NightlyRun/test437.py
@@ -65,7 +65,7 @@
 	'Enthalpy3', 'Temperature3', 'Waterfraction3', 'BasalMeltingRate3', 'Watercolumn3',
 	'Enthalpy4', 'Temperature4', 'Waterfraction4', 'BasalMeltingRate4', 'Watercolumn4']
 field_tolerances = [1.e-10, 1.e-10, 1.e-10, 1.e-9, 1.e-10,
-	1.e-10, 1.e-10, 1.e-10, 2.e-9, 2.e-10,
+	1.e-10, 1.e-10, 1.e-10, 2.e-9, 3.e-10,
 	1.e-10, 1.e-10, 1.e-10, 2.e-9, 1.e-10,
 	1.e-10, 1.e-10, 1.e-10, 2.e-9, 1.e-10]
 i1 = 0