
Commit 0dd566e: added notebooks and data for ucla workshop

1 parent 32fda32 commit 0dd566e


42 files changed: +107742 -0 lines

tutorial/UCLAWorkshop/.DS_Store (6 KB): binary file not shown.
Lines changed: 277 additions & 0 deletions
@@ -0,0 +1,277 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6b3e57da-a5b2-485f-9986-6c6af4793aa3",
"metadata": {},
"source": [
"# Notebook 1: Introduction to Hyperscanning Analysis with HyPyP\n",
"\n",
"In this notebook, we introduce the basics of hyperscanning analysis using the HyPyP library. We will:\n",
"- Load epoch data for two participants.\n",
"- Construct a dyad (by combining the data into a single array).\n",
"- Compute a synchronization metric (circular correlation) using a connectivity analysis function.\n",
"- Visualize the resulting inter-brain synchrony connectivity matrix."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "afeec199-af57-4ae7-9f43-e37209b49810",
"metadata": {},
"outputs": [],
"source": [
"import mne\n",
"import numpy as np\n",
"from collections import OrderedDict\n",
"\n",
"# HyPyP modules for I/O, analyses, and visualization\n",
"import hypyp.io as io # For loading and constructing dyads\n",
"import hypyp.analyses as analyses # For computing synchronization metrics\n",
"import hypyp.prep as prep # Preprocessing module (for ICA and other cleaning routines)\n",
"import hypyp.viz as viz # For visualizing results\n",
"\n",
"# Confirm successful import of libraries\n",
"print(\"Libraries imported successfully.\")"
]
},
{
"cell_type": "markdown",
"id": "4055eb95-0f79-4ad7-8f32-dbe3440ae2f6",
"metadata": {},
"source": [
"## Loading the Data\n",
"\n",
"We load the epoch files for two participants from:\n",
"- `./data/participant1-epo.fif`\n",
"- `./data/participant2-epo.fif`\n",
"\n",
"Each file contains the epoched recording for one participant. After loading, we equalize the number of epochs between participants and print summaries for verification."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1627f0e9-1e3c-4681-8db7-13c528a7b61c",
"metadata": {},
"outputs": [],
"source": [
"# Load epochs for participant 1 and participant 2\n",
"epo1 = mne.read_epochs(\"./data/participant1-epo.fif\", preload=True)\n",
"epo2 = mne.read_epochs(\"./data/participant2-epo.fif\", preload=True)\n",
"\n",
"# Equalize the number of epochs between participants to ensure consistent analysis\n",
"mne.epochs.equalize_epoch_counts([epo1, epo2])\n",
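"\n",
"# Illustrative sanity check (added, not in the original notebook): dyadic\n",
"# analysis assumes both recordings share the same channel layout and sfreq.\n",
"assert epo1.info['ch_names'] == epo2.info['ch_names'], 'Channel names differ between participants'\n",
"assert epo1.info['sfreq'] == epo2.info['sfreq'], 'Sampling rates differ between participants'\n",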
"\n",
"# Print summaries to verify that the epochs have been loaded correctly\n",
"print(\"Participant 1 Epochs:\")\n",
"print(epo1)\n",
"print(\"\\nParticipant 2 Epochs:\")\n",
"print(epo2)"
]
},
{
"cell_type": "markdown",
"id": "1af0aa41-2e5e-449c-bdb6-ac8fcf51ef97",
"metadata": {},
"source": [
"## Preprocessing with ICA\n",
"\n",
"Before computing connectivity, we perform additional preprocessing to remove artifacts such as eye blinks. \n",
"Here we apply ICA using functions from `hypyp.prep`. Adjust parameters (e.g., method, number of components) as needed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3c7975f2-bb30-42e4-a63f-6bff7255b37b",
"metadata": {},
"outputs": [],
"source": [
"# Compute ICA for each participant with 15 components\n",
"icas = prep.ICA_fit(\n",
"    [epo1, epo2],\n",
"    n_components=15,\n",
"    method='infomax',\n",
"    fit_params=dict(extended=True),\n",
"    random_state=42\n",
")\n",
"\n",
"# Select the relevant independent components for artefact rejection\n",
"cleaned_epochs_ICA = prep.ICA_choice_comp(icas, [epo1, epo2])\n",
"print('ICA correction completed.')\n",
"\n",
"# Apply local AutoReject on the ICA-cleaned epochs\n",
"cleaned_epochs_AR, dic_AR = prep.AR_local(\n",
"    cleaned_epochs_ICA,\n",
"    strategy=\"union\",\n",
"    threshold=50.0,\n",
"    verbose=True\n",
")\n",
"print('AutoReject completed.')\n",
"\n",
"# Assign cleaned epochs to individual participant variables\n",
"epo1_clean = cleaned_epochs_AR[0]\n",
"epo2_clean = cleaned_epochs_AR[1]\n",
"print('Preprocessed epochs for both participants are ready.')\n",
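"\n",
"# Illustrative check (added, not in the original notebook): with the 'union'\n",
"# strategy, AutoReject drops the union of bad epochs across participants, so\n",
"# the cleaned epoch counts should still match.\n",
"assert len(epo1_clean) == len(epo2_clean), 'Epoch counts differ after cleaning'\n",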
"\n",
"# Update dyad with cleaned data for subsequent analysis\n",
"dyad_clean = [epo1_clean.get_data(copy=True), epo2_clean.get_data(copy=True)]"
]
},
{
"cell_type": "markdown",
"id": "85efe31a-f9dc-4ef3-aaa1-21ea8d243974",
"metadata": {},
"source": [
"## Computing the Inter-Brain Synchrony (Circular Correlation)\n",
"\n",
"In this section, we compute a synchronization metric using the circular correlation coefficient (\"ccorr\") rather than PLV. The steps are as follows:\n",
"\n",
"1. **Determine Sampling Rate:** \n",
"   We extract the sampling rate from one of the epochs.\n",
"\n",
"2. **Define Frequency Bands:** \n",
"   We define two frequency bands as an OrderedDict. Here, we focus on the \"Alpha-Low\" band for further analysis.\n",
"\n",
"3. **Prepare Data:** \n",
"   We combine the epochs from both participants into a single 4D array with shape *(2, n_epochs, n_channels, n_times)*.\n",
"\n",
"4. **Compute Analytic Signal:** \n",
"   The function `compute_freq_bands` filters the data and applies the Hilbert transform for each frequency band.\n",
"\n",
"5. **Compute Connectivity:** \n",
"   Using the `compute_sync` function with mode `'ccorr'`, we compute the inter-brain connectivity and then slice out the inter-brain connectivity matrix for the Alpha-Low band.\n",
"\n",
"6. **Normalization:** \n",
"   Finally, we compute a Z-score normalized connectivity matrix."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bd0c8f25-0376-4a10-9296-cfcf237727f2",
"metadata": {},
"outputs": [],
"source": [
"# Extract the sampling rate from the epoch (assumes both participants share the same sfreq)\n",
"sampling_rate = epo1.info['sfreq']\n",
"\n",
"# Define frequency bands as a dictionary (here two alpha sub-bands)\n",
"freq_bands = {\n",
"    'Alpha-Low': [7.5, 11],\n",
"    'Alpha-High': [11.5, 13]\n",
"}\n",
"# Convert to an OrderedDict to maintain the order\n",
"freq_bands = OrderedDict(freq_bands)\n",
"\n",
"# Prepare data for connectivity analysis by combining both participants' epochs.\n",
"# The resulting dyad_clean array will have shape: (2, n_epochs, n_channels, n_times)\n",
"dyad_clean = np.array([epo1_clean.get_data(copy=True), epo2_clean.get_data(copy=True)])\n",
"\n",
"# Compute the analytic signal in each frequency band using FIR filtering and Hilbert transform.\n",
"complex_signal = analyses.compute_freq_bands(\n",
"    dyad_clean,\n",
"    sampling_rate,\n",
"    freq_bands,\n",
"    filter_length=int(sampling_rate), # Adjust filter length based on sampling rate\n",
"    l_trans_bandwidth=5.0, # Reduced transition bandwidth for sharper filtering\n",
"    h_trans_bandwidth=5.0\n",
")\n",
"\n",
"# Compute connectivity using the circular correlation ('ccorr') metric and average across epochs.\n",
"result = analyses.compute_sync(complex_signal, mode='ccorr', epochs_average=True)\n",
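"\n",
"# Illustrative addition (not in the original notebook): the same analytic\n",
"# signal can be reused to compare metrics; compute_sync also supports the\n",
"# phase-locking value via mode='plv'.\n",
"result_plv = analyses.compute_sync(complex_signal, mode='plv', epochs_average=True)\n",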
"\n",
"# Determine the number of channels per participant\n",
"n_ch = len(epo1_clean.info['ch_names'])\n",
"\n",
"# Slice the connectivity matrix to extract inter-brain connectivity.\n",
"# The matrix 'result' has shape (n_freq, 2*n_channels, 2*n_channels).\n",
"# We slice to get connectivity values between channels of participant 1 (first n_ch)\n",
"# and participant 2 (last n_ch) for each frequency band.\n",
"# Unpacking along the first axis yields one matrix per defined band.\n",
"alpha_low, alpha_high = result[:, 0:n_ch, n_ch:2*n_ch]\n",
"\n",
"# For further analysis, choose the Alpha-Low band values.\n",
"values = alpha_low\n",
"\n",
"# Compute a Z-score normalized connectivity matrix for improved comparability.\n",
"C = (values - np.mean(values)) / np.std(values)"
]
},
{
"cell_type": "markdown",
"id": "db571218-40ab-4053-b2a0-5c590db04863",
"metadata": {},
"source": [
"## Visualizing the Results\n",
"\n",
"We now visualize the computed inter-brain connectivity using both 2D and 3D representations. \n",
"- The **2D topographic plot** helps identify regions with stronger inter-brain synchrony.\n",
"- The **3D visualization** provides a spatial representation of the connectivity.\n",
"\n",
"The functions `viz.viz_2D_topomap_inter` and `viz.viz_3D_inter` handle the visualization."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4635da92-b7da-4c36-99b1-515702cec4bf",
"metadata": {},
"outputs": [],
"source": [
"# Plot the 2D topographic map of the normalized connectivity matrix\n",
"viz.viz_2D_topomap_inter(epo1_clean, epo2_clean, C, threshold='auto', steps=10, lab=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dea11dcc-502d-4d91-9d8b-554ec8e51b0f",
"metadata": {},
"outputs": [],
"source": [
"# Plot the 3D visualization of the inter-brain connectivity\n",
"viz.viz_3D_inter(epo1_clean, epo2_clean, C, threshold='auto', steps=10, lab=False)\n",
"print('3D inter-brain connectivity visualization completed.')"
]
},
{
"cell_type": "markdown",
"id": "755de848-71e5-4320-b5ec-ffd6b8aaddbe",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"In this notebook, we have:\n",
"- Loaded epoch data for two participants.\n",
"- Constructed a dyad by combining the data arrays.\n",
"- Computed a synchronization metric (circular correlation, \"ccorr\") to assess inter-brain synchrony across defined frequency bands.\n",
"- Visualized the resulting connectivity matrix using both 2D and 3D plots.\n",
"\n",
"This foundational analysis prepares us for further hyperscanning investigations using HyPyP. In upcoming notebooks, we will explore more advanced preprocessing techniques, compare different synchronization metrics, and implement detailed statistical analyses."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
