This repository was archived by the owner on Dec 8, 2023. It is now read-only.
Workflow for extracellular array electrophysiology data acquired with a polytrode probe
(e.g. [Neuropixels](https://www.neuropixels.org), Neuralynx) using the [SpikeGLX](https://github.com/billkarsh/SpikeGLX) or
[OpenEphys](https://open-ephys.org/gui) acquisition software and processed with [MATLAB-based Kilosort](https://github.com/MouseLand/Kilosort) or [python-based Kilosort](https://github.com/MouseLand/pykilosort) spike sorting software.

A complete electrophysiology workflow can be built using the DataJoint Elements.
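Before running such a workflow, DataJoint itself needs database credentials and a raw-data root directory. A minimal configuration sketch for DataJoint for Python follows; it assumes a local MySQL server and a custom `ephys_root_data_dir` key, and every value shown is a placeholder to be replaced with your own settings:

```python
import datajoint as dj

# Placeholder connection settings -- replace with your own database details.
dj.config['database.host'] = 'localhost'
dj.config['database.user'] = '<username>'
dj.config['database.password'] = '<password>'

# Assumed custom key pointing at the directory holding the raw ephys data.
dj.config['custom'] = {'ephys_root_data_dir': '/tmp/test_data'}

dj.config.save_local()  # persists these settings to dj_local_conf.json
```

Saving the configuration locally lets every notebook in the workflow pick up the same settings without repeating them.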
4. Export of `no_curation` schema to NWB and DANDI (see [notebooks/09-NWB-export.ipynb](notebooks/09-NWB-export.ipynb)).
See the [Element Array Electrophysiology documentation](https://elements.datajoint.org/description/array_ephys/) for background information and the development timeline.
For more information on the DataJoint Elements project, please visit <https://elements.datajoint.org>. This work is supported by the National Institutes of Health.
## Workflow architecture
The electrophysiology workflow presented here uses components from 4 DataJoint Elements.
If your work uses DataJoint and DataJoint Elements, please cite the respective Research Resource Identifiers (RRIDs) and manuscripts.

* DataJoint for Python or MATLAB

  * Yatsenko D, Reimer J, Ecker AS, Walker EY, Sinz F, Berens P, Hoenselaar A, Cotton RJ, Siapas AS, Tolias AS. DataJoint: managing big scientific data using MATLAB or Python. bioRxiv. 2015 Jan 1:031658. doi: <https://doi.org/10.1101/031658>

  * DataJoint ([RRID:SCR_014543](https://scicrunch.org/resolver/SCR_014543)) - DataJoint for `<Select Python or MATLAB>` (version `<Enter version number>`)

* DataJoint Elements

  * Yatsenko D, Nguyen T, Shen S, Gunalan K, Turner CA, Guzman R, Sasaki M, Sitonic D, Reimer J, Walker EY, Tolias AS. DataJoint Elements: Data Workflows for Neurophysiology. bioRxiv. 2021 Jan 1. doi: <https://doi.org/10.1101/2021.03.30.437358>

  * DataJoint Elements ([RRID:SCR_021894](https://scicrunch.org/resolver/SCR_021894)) - Element Array Electrophysiology (version `<Enter version number>`)
`notebooks/00-data-download-optional.ipynb`:

# Download example data

This workflow needs ephys data collected with either SpikeGLX or OpenEphys, along with the output of Kilosort2. We provide an example dataset to download and run through the pipeline; this notebook walks through the download process.
```python
import os

import djarchive_client
```
```python
client = djarchive_client.client()
```

To browse the datasets that are available in djarchive:
```python
list(client.datasets())
```

Each dataset has revisions that correspond to versions of the workflow package. To browse the revisions:
```python
list(client.revisions())
```

To download the dataset, let's prepare a root directory, for example in `/tmp`:

```python
os.mkdir('/tmp/test_data')
```
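Note that `os.mkdir` raises `FileExistsError` if `/tmp/test_data` already exists, so re-running the notebook fails at this step. A more robust sketch (my suggestion, not part of the original notebook) uses `os.makedirs` with `exist_ok=True`; the commented download call assumes a `client.download` signature with placeholder dataset and revision names:

```python
import os
import tempfile

# Idempotent root-directory setup: safe to re-run, unlike os.mkdir.
root_dir = os.path.join(tempfile.gettempdir(), 'test_data')
os.makedirs(root_dir, exist_ok=True)
os.makedirs(root_dir, exist_ok=True)  # second call is a no-op, no FileExistsError

# The download itself would then be (assumed djarchive_client call with
# placeholder names; requires network access):
# client.download('<dataset>', '<revision>', root_dir, create_target=True)
```

Using `tempfile.gettempdir()` also keeps the example portable across operating systems where `/tmp` may not exist.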