This repository was archived by the owner on Dec 8, 2023. It is now read-only.

Commit 0e073ae

Add jupytext paired script
1 parent 738f427 commit 0e073ae

1 file changed: 127 additions, 0 deletions
@@ -0,0 +1,127 @@
# ---
# jupyter:
#   jupytext:
#     formats: ipynb,py
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.0
#   kernelspec:
#     display_name: Python 3.7.9 ('workflow-calcium-imaging')
#     language: python
#     name: python3
# ---
# # Allen Institute Ephys Workshop
# September 22, 2022
# + In this notebook, we will show how to interact with a database in Python and how to export data into a Neurodata Without Borders (NWB) file.
#
# + Other notebooks in this directory describe the process for running the analysis steps in more detail.
#
# + This notebook is meant to be run on CodeBook (`https://codebook.datajoint.io`), which contains example data.
#
# + First run the `01-configure` and `04-automate` notebooks to set up your environment and load example data into the database, respectively.

# ## Configuration

import datajoint as dj
import numpy as np
from matplotlib import pyplot

# Enter database credentials. A DataJoint workflow requires a connection to an existing relational database. The connection setup parameters are defined in the `dj.config` Python dictionary.
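# A minimal sketch of that credential setup is below (the host is a placeholder, and on CodeBook these values are typically pre-configured for you).

# +
import getpass

dj.config["database.host"] = "<database-host>"      # placeholder; address of the MySQL-compatible server
dj.config["database.user"] = "<username>"           # placeholder; your database user name
dj.config["database.password"] = getpass.getpass()  # prompt instead of hard-coding the password
# dj.conn()  # optionally verify the connection
# -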
# + tags=[]
dj.config['custom'] = {'database.prefix': '<username>_allen_ephys_',
                       'ephys_root_data_dir': ["/tmp/test_data/workflow_ephys_data1/",
                                               "/tmp/test_data/workflow_ephys_data2/",
                                               "/tmp/test_data/workflow_localization/",
                                               "/home/inbox/0.1.0a4/workflow_ephys_data1/",
                                               "/home/inbox/0.1.0a4/workflow_ephys_data2/",
                                               "/home/inbox/0.1.0a4/workflow_localization/"
                                               ]}
# -
# Import the workflow. The current workflow is composed of multiple database schemas, each corresponding to a module within the `workflow_array_ephys.pipeline` file.

from workflow_array_ephys.pipeline import lab, subject, session, probe, ephys

# ## Workflow diagram
#
# Plot the workflow diagram. In relational databases, the entities (i.e. rows) in different tables are connected to each other. Visualizing these relationships helps one write accurate queries. For the array ephys workflow, the connections are as follows:

# + tags=[]
dj.Diagram(lab.Lab) + dj.Diagram(subject.Subject) + dj.Diagram(session.Session) + \
    dj.Diagram(probe) + dj.Diagram(ephys)
# -

subject.Subject()

ephys.EphysRecording()

ephys.CuratedClustering.Unit()

# ## Fetch data from the database and generate a raster plot
subset = ephys.CuratedClustering.Unit & 'unit IN ("6","7","9","14","15","17","19")'
subset

# Fetch the spike times from the database for the units above.

units, unit_spiketimes = subset.fetch("unit", "spike_times")

# Generate the raster plot.

# +
x = np.hstack(unit_spiketimes)
y = np.hstack([np.full_like(s, u) for u, s in zip(units, unit_spiketimes)])

pyplot.plot(x, y, "|")
pyplot.xlabel("Time (s)")
pyplot.ylabel("Unit")
# -
# ## Export to NWB
#
# The Element's `ecephys_session_to_nwb` function provides a full export mechanism, returning an NWB file with raw data, spikes, and LFP. Optional arguments determine which pieces are exported. For demonstration purposes, we recommend limiting `end_frame`.

from workflow_array_ephys.export import ecephys_session_to_nwb, write_nwb

help(ecephys_session_to_nwb)

# Select an experimental session to export.
dj.Diagram(subject.Subject) + dj.Diagram(session.Session) + \
    dj.Diagram(probe) + dj.Diagram(ephys)

session_key = dict(subject="subject5",
                   session_datetime="2018-07-03 20:32:28")

# Return the NWBFile object for the selected experimental session.
nwbfile = ecephys_session_to_nwb(session_key=session_key,
                                 raw=True,
                                 spikes=True,
                                 lfp="dj",
                                 end_frame=100,
                                 lab_key=dict(lab='LabA'),
                                 project_key=dict(project='ProjA'),
                                 protocol_key=dict(protocol='ProtA'),
                                 nwbfile_kwargs=None)

nwbfile
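# As a quick check, the spiking data carried by the returned `NWBFile` can be inspected through its `units` table (a sketch; it assumes the `spikes=True` export above populated that table and that `pandas` is available).

# +
units_table = nwbfile.units.to_dataframe()  # convert the pynwb units DynamicTable to a pandas DataFrame
units_table.head()
# -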
# `write_nwb` can then be used to write this file to disk.

# +
import time
nwb_filename = f"/home/{dj.config['database.user']}/" + time.strftime("_test_%Y%m%d-%H%M%S.nwb")

write_nwb(nwbfile, nwb_filename)
# -

# Next, the NWB file can be uploaded to DANDI. See the `09-NWB-Export` notebook for more details.
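# As a rough outline of that upload (a sketch only; the dandiset ID and paths are placeholders, and it assumes the `dandi` command-line client is installed with a `DANDI_API_KEY` set):
#
#     dandi validate <directory_with_nwb_files>
#     dandi download https://dandiarchive.org/dandiset/<dandiset_id>/draft
#     dandi organize -d <local_dandiset_directory> <directory_with_nwb_files>
#     dandi upload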
# ## Summary and next steps
#
# In this notebook we explored how to query and fetch data from the database, and how to export an experimental ephys session to an NWB file. Next, please explore more of the features of the DataJoint Elements in the other notebooks. Once you are ready to begin setting up your pipeline, fork this repository on GitHub and begin adapting it for your project's requirements.
