- Create a fresh conda/mamba virtualenv ([learn more](https://github.com/networkx/networkx/blob/main/CONTRIBUTING.rst#development-workflow))
## Testing nx-parallel
First, install the testing dependencies:
```.sh
pip install -e ".[test]"
```
Then run the following command, which executes all the tests in networkx's test suite with a `ParallelGraph` object, falling back to networkx's sequential implementations for algorithms not in nx-parallel. This ensures that the parallel backend follows the same API as networkx:
```.sh
PYTHONPATH=. \
NETWORKX_FALLBACK_TO_NX=True \
pytest --pyargs networkx "$@"
```
Refer to the [NetworkX backend testing docs](https://networkx.org/documentation/latest/reference/backends.html#testing-the-custom-backend) to learn more about the testing mechanisms in networkx.
To run additional tests specific to nx-parallel, run the following command:
```.sh
pytest nx_parallel
```
- The algorithm that you are considering adding to nx-parallel should be in the main networkx repository, and it should have the `_dispatchable` decorator. If not, consider adding a sequential implementation in networkx first.
- Checklist for adding a new function:
- [ ] Add the parallel implementation (make sure the API doesn't break); the file structure should be the same as in networkx.
- [ ] Add the function to the `Dispatcher` class in [interface.py](https://github.com/networkx/nx-parallel/blob/main/nx_parallel/interface.py) (take care of the `name` parameter in `_dispatchable`; ref. [docs](https://networkx.org/documentation/latest/reference/backends.html))
- [ ] Update the `__init__.py` files accordingly
- [ ] Add a docstring following the above format
- [ ] Run the [timing script](https://github.com/networkx/nx-parallel/blob/main/timing/timing_individual_function.py) to get the performance heatmap
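To make the first checklist item concrete, here is a minimal sketch of what a joblib-based parallel implementation can look like. The function name mirrors `nx.number_of_isolates`, but the round-robin chunking and the `n_jobs`/`n_chunks` knobs are illustrative assumptions, not nx-parallel's actual code:

```python
# Minimal sketch of a joblib-based parallel implementation (illustrative;
# nx-parallel's real code and chunking strategy differ).
import networkx as nx
from joblib import Parallel, delayed


def _isolates_in_chunk(G, chunk):
    # Worker: count degree-0 nodes in its chunk of the node list.
    return sum(1 for node in chunk if G.degree(node) == 0)


def number_of_isolates(G, n_jobs=-1, n_chunks=4):
    """Count isolated nodes by examining node degrees in parallel."""
    nodes = list(G.nodes)
    # Round-robin split of the nodes into n_chunks chunks.
    chunks = [nodes[i::n_chunks] for i in range(n_chunks)]
    counts = Parallel(n_jobs=n_jobs)(
        delayed(_isolates_in_chunk)(G, chunk) for chunk in chunks
    )
    return sum(counts)


G = nx.path_graph(3)
G.add_nodes_from([10, 11])  # two isolated nodes
print(number_of_isolates(G))  # prints 2
```

Keep the signature identical to the networkx function (apart from backend-only keyword arguments) so that dispatching does not break.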
# nx-parallel
nx-parallel is a NetworkX backend that uses joblib for parallelization. This project aims to provide parallelized implementations of various NetworkX functions to improve performance. Refer to the [NetworkX backends documentation](https://networkx.org/documentation/latest/reference/backends.html) to learn more about the backend architecture in NetworkX.
## Algorithms in nx-parallel
</details>
## Installation
It is recommended to first refer to [NetworkX's INSTALL.rst](https://github.com/networkx/networkx/blob/main/INSTALL.rst).
nx-parallel requires Python >=3.10. Right now, the only dependencies of nx-parallel are networkx and joblib.
### Install the released version
You can install the stable version of nx-parallel using pip:
```sh
$ pip install nx-parallel
```
The above command also installs the two main dependencies of nx-parallel, i.e. networkx and joblib. To upgrade to a newer release, use the `--upgrade` flag:
```sh
$ pip install --upgrade nx-parallel
```
### Install the development version
Before installing the development version, you may need to uninstall the standard version of `nx-parallel` and the other two dependencies using `pip`:
Note that for all functions inside `nx_code.py` that do not have an nx-parallel implementation, their original networkx implementation will be executed. You can also use the nx-parallel backend in your code for only some specific function calls in the following ways:
```.py
import networkx as nx
import nx_parallel as nxp
G = nx.path_graph(4)
H = nxp.ParallelGraph(G)
# method 1 : passing ParallelGraph object in networkx function (Type-based dispatching)
nx.betweenness_centrality(H)
# method 2 : using the 'backend' kwarg
```
### Notes
1. Some functions in networkx have the same name but different implementations, so to avoid name conflicts at the time of dispatching, networkx differentiates them by specifying the `name` parameter in the `_dispatchable` decorator of such algorithms. So, `method 3` and `method 4` are not recommended, but you can use them if you know the correct `name`. For example:
```.py
# using `name` parameter - nx-parallel as an independent package
```
Feel free to contribute to nx-parallel. You can find the contributing guidelines [here](https://github.com/networkx/nx-parallel/blob/main/CONTRIBUTING.md). If you'd like to implement a feature or fix a bug, we'd be happy to review a pull request. Please make sure to explain the changes you made in the pull request description. And feel free to open issues for any problems you face, or for new features you'd like to see implemented.
This project is managed under the NetworkX organisation, so the [code of conduct of NetworkX](https://github.com/networkx/networkx/blob/main/CODE_OF_CONDUCT.rst) applies here as well.
All code in this repository is available under the Berkeley Software Distribution (BSD) 3-Clause License (see LICENSE).