Commit 41eab61

Author: joyjxu
Message: update docs
Parent: 5119631

18 files changed: 54 additions, 54 deletions

docs/algo/afm_sona_en.md (+3 -3)

@@ -117,9 +117,9 @@ ParamSharedFC layer is a fully connected layer with shared parameters, as explai
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/daw_sona_en.md (+3 -3)

@@ -110,9 +110,9 @@ When Deep and wide have more parameters, they need to be specified in the form o
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/dcn_sona_en.md (+3 -3)

@@ -117,9 +117,9 @@ Outputs of deep network and cross network are simply concatenated.
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/deepfm_sona_en.md (+3 -3)

@@ -152,9 +152,9 @@ There are many parameters of DeepFM, which need to be specified by Json configur
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/dnn_sona_en.md (+3 -3)

@@ -173,9 +173,9 @@ Refer to [Json definition](../basic/json_conf_en.md) for the meaning of the deta
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/fm_sona_en.md (+3 -3)

@@ -52,9 +52,9 @@ The model of the FM algorithm consists of two parts, wide and embedding, where w
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/kcore_sona_en.md (+3 -3)

@@ -25,9 +25,9 @@ The algorithm stops until none of corenesses of nodes are updated last round.
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/line_sona_en.md (+3 -3)

@@ -79,9 +79,9 @@ The model is divided by node id range, it means that each partition contains par
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
 ```

docs/algo/linreg_sona_en.md (+3 -3)

@@ -65,9 +65,9 @@ The LR algorithm supports three types of models: DoubleDense, DoubleSparse, Doub
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/louvain_sona_en.md (+3 -3)

@@ -29,9 +29,9 @@ We maintain the community id of the node and the weight information correspondin
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/lr_sona_en.md (+3 -3)

@@ -52,9 +52,9 @@ As shown in the result above, Spark on Angel has improved speed for training of
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/mlr_sona_en.md (+3 -3)

@@ -89,9 +89,9 @@ Each line of text represents a sample in the form of "y index 1: value 1 index 2
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/nfm_sona_en.md (+3 -3)

@@ -142,9 +142,9 @@ There are many parameters of NFM, which need to be specified by Json configurati
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/pnn_sona_en.md (+3 -3)

@@ -161,9 +161,9 @@ There are many parameters of PNN, which need to be specified by Json configurati
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/robust_sona_en.md (+3 -3)

@@ -53,9 +53,9 @@ The learning rate decays along iterations as ![](../imgs/LR_lr_ecay.gif), where:
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/softmax_sona_en.md (+3 -3)

@@ -52,9 +52,9 @@ where the "libsvm" format is as follows:
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/svm_sona_en.md (+3 -3)

@@ -38,9 +38,9 @@ Angel MLLib uses mini-batch gradient descent optimization method for solving SVM
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.

docs/algo/word2vec_sona_en.md (+3 -3)

@@ -39,9 +39,9 @@ The Word2Vec algorithm used for Network Embedding needs to handle network with
 Several steps must be done before editing the submitting script and running.

 1. confirm Hadoop and Spark have ready in your environment
-2. unzip angel-<version>-bin.zip to local directory (ANGEL_HOME)
-3. upload angel-<version>-bin directory to HDFS (ANGEL_HDFS_HOME)
-4. Edit $ANGEL_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, ANGEL_HOME, ANGEL_HDFS_HOME and ANGEL_VERSION
+2. unzip sona-<version>-bin.zip to local directory (SONA_HOME)
+3. upload sona-<version>-bin directory to HDFS (SONA_HDFS_HOME)
+4. Edit $SONA_HOME/bin/spark-on-angel-env.sh, set SPARK_HOME, SONA_HOME, SONA_HDFS_HOME and ANGEL_VERSION

 Here's an example of submitting scripts, remember to adjust the parameters and fill in the paths according to your own task.
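
All 18 hunks make the same substitution: the deployment package and its environment variables move from angel-<version>-bin / ANGEL_HOME / ANGEL_HDFS_HOME to sona-<version>-bin / SONA_HOME / SONA_HDFS_HOME, with ANGEL_VERSION keeping its name. As a rough illustration of what the step-4 edit to spark-on-angel-env.sh looks like after this rename, here is a minimal sketch of the four exports; the variable names come from step 4 of the docs, but every path, the HDFS URI, the helper SONA_VERSION variable, and the version value are illustrative placeholders, not values taken from this commit.

```bash
# Minimal sketch of the step-4 exports after the ANGEL_* -> SONA_* rename.
# All paths, the HDFS URI, and the version string are placeholders;
# substitute values that match your own cluster and release.

SONA_VERSION=0.0.0                              # placeholder release number
export SPARK_HOME=/opt/spark                    # Spark installation confirmed in step 1
export SONA_HOME=/opt/sona-${SONA_VERSION}-bin  # locally unzipped package from step 2
export SONA_HDFS_HOME=hdfs://namenode:9000/deploy/sona-${SONA_VERSION}-bin  # HDFS copy from step 3
export ANGEL_VERSION=${SONA_VERSION}            # version variable, name unchanged by this commit
```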
