diff --git a/JOSS/Joss_MEG_chicken.md b/JOSS/Joss_MEG_chicken.md index 36c9e08..d66b9cb 100644 --- a/JOSS/Joss_MEG_chicken.md +++ b/JOSS/Joss_MEG_chicken.md @@ -42,14 +42,14 @@ bibliography: references.bib Detecting and processing signal artifacts is crucial for analyzing neural time-series data from magnetoencephalography (MEG) or electroencephalography (EEG) recordings [@luck_electroencephalography_2017]. While numerous automated and semi-automated artifact detection algorithms exist (e.g., [@jas_autoreject_2017]), visual inspection and manual labeling remain the most widely used methods for identifying components contaminated by eye movements, muscle activity, or electrical noise. Despite excellent resources describing common physiological and electrical artifacts in MEG and EEG data [@burgess_recognizing_2020;@uriguen_eeg_2015], decisions about segment or component rejection are ultimately subjective. This subjectivity often results in inconsistencies, particularly when training new lab members. -Implicit or procedural learning refers to the acquisition of skills and knowledge through repeated exposure and practice, without explicit instruction. A well established example of implicit rule learning is chicken sexing, where workers learn to distinguish the sex of day-old chicks based on subtle visual cues. Despite often not being able to articulate which exact features distinguish between male and female chicks, experienced chicken sexers can classify chicks accurately and reliably through extensive experience and feedback [@horsey_art_nodate]. +Implicit or procedural learning refers to the acquisition of skills and knowledge through repeated exposure and practice, without explicit instruction. A well-established example of implicit rule learning is chicken sexing, where workers learn to distinguish the sex of day-old chicks based on subtle visual cues. 
Despite often not being able to articulate which exact features distinguish between male and female chicks, experienced chicken sexers can classify chicks accurately and reliably through extensive experience and feedback [@horsey_art_2002]. -`MEG Chicken` uses the principle of implicit learning through immediate feedback to streamline and standardize the process of learning to detect artifacts in electrophysiology data. The open source software tool presents trainees with data containing various types of artifacts and provides immediate feedback on their decisions, enabling consistent rejection criteria to be learned implicitly. -Labs can customize the training with their own annotated datasets to ensure alignment with lab-specific standards. +`MEG Chicken` uses the principle of implicit learning to streamline and standardize the process of learning to detect artifacts in electrophysiology data. The open-source software tool presents trainees with data containing various types of artifacts and provides immediate feedback on their decisions, enabling consistent rejection criteria to be learned implicitly. +Labs can customize the training with their own annotated datasets to ensure alignment with lab-specific standards. In a second step, annotated data can be made available to others to establish a reference library that facilitates the development of community standards. # Functionality -`MEG Chicken` is built on the MNE-Python library and employs MNE's user-friendly graphical interface for visualizing time-series data and sensor topographies. +`MEG Chicken` is built on the MNE-Python library [@gramfort_mne_2014] and employs MNE's user-friendly graphical interface for visualizing time-series data and sensor topographies. 
The software includes labeled example datasets available for download via Zenodo, and labs can import or create their own labeled datasets through the interactive interface. The training program includes modules for: @@ -64,7 +64,8 @@ Immediate feedback after each decision supports implicit learning in the absense ### Evaluation and Performance -Testing on 5 observes, naive to MEG and EEG artefacts, showed: +Testing on 5 observers, naive to MEG and EEG artifacts, showed: + - Participants required an average of `xx` minutes to achieve `xx%` accuracy in bad channel selection. - Participants required an average of `xx` minutes to achieve `xx%` accuracy in ICA component selection. - Performance remained consistent in follow-up tests conducted several days later. @@ -102,4 +103,4 @@ We thank Laetitia Grabot for providing the example dataset, and Coline Haro for - `playsound` - `copy` -# References +# References diff --git a/JOSS/paper.pdf b/JOSS/paper.pdf index a1b826e..0afe69c 100644 Binary files a/JOSS/paper.pdf and b/JOSS/paper.pdf differ diff --git a/JOSS/references.bib b/JOSS/references.bib index e6d9bb5..08a4c81 100644 --- a/JOSS/references.bib +++ b/JOSS/references.bib @@ -1,8 +1,13 @@ -@unpublished{horsey_art_nodate, - title = {The art of chicken sexing}, - url = {https://web-archive.southampton.ac.uk/cogprints.org/3255/1/chicken.pdf}, - author = {Horsey, Richard}, +@article{gramfort_mne_2014, + title = {{MNE} software for processing {MEG} and {EEG} data}, + volume = {86}, + issn = {1053-8119}, + journal = {NeuroImage}, + author = {Gramfort, Alexandre and Luessi, Martin and Larson, Eric and Engemann, Denis A and Strohmeier, Daniel and Brodbeck, Christian and Parkkonen, Lauri and Hämäläinen, Matti S}, + year = {2014}, + note = {Publisher: Elsevier}, + pages = {446--460}, } @article{luck_electroencephalography_2017, @@ -46,4 +51,12 @@ @article{jas_autoreject_2017 year = {2017}, note = {Publisher: Elsevier}, pages = {417--429}, -} \ No newline at end of file +} + 
+@article{horsey_art_2002, + title = {The art of chicken sexing}, + volume = {14}, + journal = {UCL Working Papers in Linguistics}, + author = {Horsey, Richard}, + year = {2002}, +} diff --git a/README.md b/README.md index 426daba..30f593a 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ # EEG and MEG artifact detection -This repository contains code and resources for an EEG and MEG artifact detection training program. Trainees can review and annotate data and receive immediate feedback on their choices. Trainers are encouraged to upload their own annotated data. +This repository contains code and resources for an EEG and MEG artifact detection training program. Trainees can review and annotate data and receive immediate feedback on their choices. Trainers are encouraged to upload their own annotated data. ## Table of Contents