From 41eab61b41e79a3e24316b356666d6cfdce4c0da Mon Sep 17 00:00:00 2001
From: joyjxu
Date: Thu, 29 Aug 2019 09:55:55 +0800
Subject: [PATCH] update docs

---
 docs/algo/afm_sona_en.md      | 6 +++---
 docs/algo/daw_sona_en.md      | 6 +++---
 docs/algo/dcn_sona_en.md      | 6 +++---
 docs/algo/deepfm_sona_en.md   | 6 +++---
 docs/algo/dnn_sona_en.md      | 6 +++---
 docs/algo/fm_sona_en.md       | 6 +++---
 docs/algo/kcore_sona_en.md    | 6 +++---
 docs/algo/line_sona_en.md     | 6 +++---
 docs/algo/linreg_sona_en.md   | 6 +++---
 docs/algo/louvain_sona_en.md  | 6 +++---
 docs/algo/lr_sona_en.md       | 6 +++---
 docs/algo/mlr_sona_en.md      | 6 +++---
 docs/algo/nfm_sona_en.md      | 6 +++---
 docs/algo/pnn_sona_en.md      | 6 +++---
 docs/algo/robust_sona_en.md   | 6 +++---
 docs/algo/softmax_sona_en.md  | 6 +++---
 docs/algo/svm_sona_en.md      | 6 +++---
 docs/algo/word2vec_sona_en.md | 6 +++---
 18 files changed, 54 insertions(+), 54 deletions(-)

diff --git a/docs/algo/afm_sona_en.md b/docs/algo/afm_sona_en.md
index 7f7c6f3..c0ac81b 100644
--- a/docs/algo/afm_sona_en.md
+++ b/docs/algo/afm_sona_en.md
@@ -117,9 +117,9 @@ ParamSharedFC layer is a fully connected layer with shared parameters, as explai
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/daw_sona_en.md b/docs/algo/daw_sona_en.md
index f55d319..8edc58c 100644
--- a/docs/algo/daw_sona_en.md
+++ b/docs/algo/daw_sona_en.md
@@ -110,9 +110,9 @@ When Deep and wide have more parameters, they need to be specified in the form o
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/dcn_sona_en.md b/docs/algo/dcn_sona_en.md
index 2f0ca6d..a68831f 100644
--- a/docs/algo/dcn_sona_en.md
+++ b/docs/algo/dcn_sona_en.md
@@ -117,9 +117,9 @@ Outputs of deep network and cross network are simply concatenated.
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/deepfm_sona_en.md b/docs/algo/deepfm_sona_en.md
index f8a2a17..15234f5 100644
--- a/docs/algo/deepfm_sona_en.md
+++ b/docs/algo/deepfm_sona_en.md
@@ -152,9 +152,9 @@ There are many parameters of DeepFM, which need to be specified by Json configur
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/dnn_sona_en.md b/docs/algo/dnn_sona_en.md
index 205d79b..a14f1aa 100644
--- a/docs/algo/dnn_sona_en.md
+++ b/docs/algo/dnn_sona_en.md
@@ -173,9 +173,9 @@ Refer to [Json definition](../basic/json_conf_en.md) for the meaning of the deta
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/fm_sona_en.md b/docs/algo/fm_sona_en.md
index c13ad9c..8c6e4ea 100644
--- a/docs/algo/fm_sona_en.md
+++ b/docs/algo/fm_sona_en.md
@@ -52,9 +52,9 @@ The model of the FM algorithm consists of two parts, wide and embedding, where w
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/kcore_sona_en.md b/docs/algo/kcore_sona_en.md
index c32acff..e62fb94 100644
--- a/docs/algo/kcore_sona_en.md
+++ b/docs/algo/kcore_sona_en.md
@@ -25,9 +25,9 @@ The algorithm stops until none of corenesses of nodes are updated last round.
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/line_sona_en.md b/docs/algo/line_sona_en.md
index 34cbe1e..6bac718 100644
--- a/docs/algo/line_sona_en.md
+++ b/docs/algo/line_sona_en.md
@@ -79,9 +79,9 @@ The model is divided by node id range, it means that each partition contains par
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 ```
diff --git a/docs/algo/linreg_sona_en.md b/docs/algo/linreg_sona_en.md
index 7709630..e775a87 100644
--- a/docs/algo/linreg_sona_en.md
+++ b/docs/algo/linreg_sona_en.md
@@ -65,9 +65,9 @@ The LR algorithm supports three types of models: DoubleDense, DoubleSparse, Doub
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/louvain_sona_en.md b/docs/algo/louvain_sona_en.md
index 44d0844..47ed815 100644
--- a/docs/algo/louvain_sona_en.md
+++ b/docs/algo/louvain_sona_en.md
@@ -29,9 +29,9 @@ We maintain the community id of the node and the weight information correspondin
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/lr_sona_en.md b/docs/algo/lr_sona_en.md
index 7d0f7f2..7cc5926 100644
--- a/docs/algo/lr_sona_en.md
+++ b/docs/algo/lr_sona_en.md
@@ -52,9 +52,9 @@ As shown in the result above, Spark on Angel has improved speed for training of
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/mlr_sona_en.md b/docs/algo/mlr_sona_en.md
index 83d1ee6..77971b0 100644
--- a/docs/algo/mlr_sona_en.md
+++ b/docs/algo/mlr_sona_en.md
@@ -89,9 +89,9 @@ Each line of text represents a sample in the form of "y index 1: value 1 index 2
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/nfm_sona_en.md b/docs/algo/nfm_sona_en.md
index 0d45996..440c736 100644
--- a/docs/algo/nfm_sona_en.md
+++ b/docs/algo/nfm_sona_en.md
@@ -142,9 +142,9 @@ There are many parameters of NFM, which need to be specified by Json configurati
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/pnn_sona_en.md b/docs/algo/pnn_sona_en.md
index 53999e3..4138c85 100644
--- a/docs/algo/pnn_sona_en.md
+++ b/docs/algo/pnn_sona_en.md
@@ -161,9 +161,9 @@ There are many parameters of PNN, which need to be specified by Json configurati
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/robust_sona_en.md b/docs/algo/robust_sona_en.md
index bac9521..2ae90b6 100644
--- a/docs/algo/robust_sona_en.md
+++ b/docs/algo/robust_sona_en.md
@@ -53,9 +53,9 @@ The learning rate decays along iterations as ![](../imgs/LR_lr_ecay.gif), where:
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/softmax_sona_en.md b/docs/algo/softmax_sona_en.md
index 81d26ee..d3d769e 100644
--- a/docs/algo/softmax_sona_en.md
+++ b/docs/algo/softmax_sona_en.md
@@ -52,9 +52,9 @@ where the "libsvm" format is as follows:
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/svm_sona_en.md b/docs/algo/svm_sona_en.md
index 0282f63..70fd8f2 100644
--- a/docs/algo/svm_sona_en.md
+++ b/docs/algo/svm_sona_en.md
@@ -38,9 +38,9 @@ Angel MLLib uses mini-batch gradient descent optimization method for solving SVM
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 
diff --git a/docs/algo/word2vec_sona_en.md b/docs/algo/word2vec_sona_en.md
index f9e5af5..80261ec 100644
--- a/docs/algo/word2vec_sona_en.md
+++ b/docs/algo/word2vec_sona_en.md
@@ -39,9 +39,9 @@ The Word2Vec algorithm used for Network Embedding needs to handle network with
 
 Several steps must be done before editing the submitting script and running.
 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel--bin.zip to local directory (ANGEL_HOME)
-3. upload angel--bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona--bin.zip to local directory (SONA_HOME)
+3. upload sona--bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION
 
 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
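
The same three-line rename (ANGEL_HOME → SONA_HOME, ANGEL_HDFS_HOME → SONA_HDFS_HOME) repeats across all 18 docs. As a minimal sketch of the setup those renamed instructions describe: every path, hostname, and the `<version>` placeholder below are illustrative assumptions, not values from the patch, and the variable assignments belong in `$SONA_HOME/bin/spark-on-angel-env.sh` as step 4 says.

```shell
# Step 2 (assumed paths): unzip the SONA release to a local directory.
unzip sona-<version>-bin.zip -d /opt
export SONA_HOME=/opt/sona-<version>-bin

# Step 3 (assumed HDFS URI): upload the unpacked directory to HDFS.
hdfs dfs -put "$SONA_HOME" hdfs://namenode:9000/sona
export SONA_HDFS_HOME=hdfs://namenode:9000/sona

# Step 4: the variables the patch tells you to set in
# $SONA_HOME/bin/spark-on-angel-env.sh (values are placeholders).
export SPARK_HOME=/opt/spark
export ANGEL_VERSION=<version>
```

Note that ANGEL_VERSION keeps its old name in the patch even though the other variables were renamed to the SONA_ prefix.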