I was trying to reproduce the result of line 49 in 1c72b06:

> # it looks like that's actually slower than parallelizing over corpora, for some
I found that pooling roughly halved the user CPU time (866–905s down to 405–418s), though the total elapsed time stayed around 14–15 minutes.
Without parallel:
python run_mindep.py run en fr 866.40s user 0.48s system 99% cpu 14:28.04 total
python run_mindep.py run en fr 893.17s user 0.53s system 99% cpu 14:55.14 total
python run_mindep.py run en fr 905.34s user 0.56s system 99% cpu 15:08.00 total
With parallel (pmap):
python run_mindep.py run en fr 404.78s user 13.91s system 48% cpu 14:23.18 total
python run_mindep.py run en fr 410.19s user 14.25s system 47% cpu 15:01.91 total
python run_mindep.py run en fr 418.29s user 14.64s system 54% cpu 13:09.16 total
These runs were on an "Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz" (2 cores, 4 threads).
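For reference, the pmap-over-corpora approach timed above can be sketched with the standard library's `multiprocessing.Pool`. The function name and workload here are hypothetical stand-ins; the real per-corpus entry point in run_mindep.py differs.

```python
from multiprocessing import Pool

def process_corpus(corpus_id):
    # Hypothetical stand-in for the expensive per-corpus
    # dependency-length computation; placeholder workload only.
    return sum(i * i for i in range(10000))

if __name__ == "__main__":
    corpora = ["en", "fr", "de", "es"]
    # Pool.map distributes the corpora across worker processes,
    # one corpus per task, mirroring parallelization over corpora.
    with Pool() as pool:
        results = pool.map(process_corpus, corpora)
    print(len(results))  # prints 4
```

With only a handful of corpora, scaling is bounded by the slowest corpus, which may explain why the wall-clock totals above barely move even as user time drops.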
I think the run could be roughly an order of magnitude faster by adding several numba @jit decorators in deptransform/depgraph. So far I have tested @jit-ing gen_row but did not observe any speedup.
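To illustrate the kind of change proposed: numba's @njit pays off on tight numeric loops over arrays. The kernel below is a hypothetical example (the actual contents of gen_row are not shown here), with a fallback so the sketch runs even without numba installed.

```python
import numpy as np

try:
    from numba import njit
except ImportError:  # hedge: numba may not be available
    def njit(func):
        return func

@njit
def total_dependency_length(heads):
    # Hypothetical inner loop: sum |dependent - head| over one
    # sentence, where heads[i] is the head index of token i and
    # -1 marks the root. Loops like this are where @njit helps.
    total = 0
    for i in range(heads.shape[0]):
        if heads[i] >= 0:
            total += abs(i - heads[i])
    return total

heads = np.array([1, -1, 1, 2], dtype=np.int64)
print(total_dependency_length(heads))  # prints 3
```

Note that @jit-ing a function that mostly manipulates Python objects (lists, dicts, strings) gains little, which could explain why decorating gen_row alone showed no speedup.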