- Install Miniconda or Anaconda.
- Create the environment:

```bash
conda env create -f environment.yml
conda activate cloudspace
```
- (Optional) Update later:

```bash
conda env update -f environment.yml --name cloudspace
```
Lightning Studios gives you one default Conda environment (often called `cloudspace`). Update that active env in place:

```bash
# from the repo root
conda env update -f environment.yml
```

The datasets used in this project are not included in this repository.
You can access them through the following link:
Alternatively, you may collect the audio files directly from their original sources if you prefer.
Please follow these guidelines when preparing your local dataset structure:
- Folder location: place all datasets inside the `Original_datasets` folder located in the project root.
- Folder organization: within `Original_datasets`, create a separate folder for each species and store the corresponding WAV files inside it.
- File naming: do not rename audio files. Keep the filenames as distributed in the Zenodo `Datasets/` folder so they match the provided metadata and notebooks.
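As a quick sanity check before running the notebooks, the expected layout can be verified with a short stdlib sketch (the function name `check_dataset_layout` is illustrative, not part of the repository):

```python
from pathlib import Path

def check_dataset_layout(root="Original_datasets"):
    """Return {species_folder: wav_file_count} for the expected dataset layout."""
    root = Path(root)
    if not root.is_dir():
        raise FileNotFoundError(f"Expected dataset folder at {root.resolve()}")
    report = {}
    for species_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        # Collect .wav files regardless of extension case, deduplicated
        wavs = {p for ext in ("*.wav", "*.WAV") for p in species_dir.glob(ext)}
        report[species_dir.name] = len(wavs)
    return report
```

Any species folder reporting zero WAV files likely indicates a misplaced or misnamed download.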
Once your dataset is in place, you can start running the Jupyter notebooks.
The metadata files for the datasets used in this project are not included in this repository. You can access them through the following shared folder:
Then, paste the downloaded files into the Output_metadata folder using the following structure:
```
Output_metadata
├── GreatTit_metadata
│   ├── final_greatTit_metadata.csv
│   ├── test_metadata.csv
│   ├── train_metadata.csv
│   └── val_metadata.csv
├── chiffchaff-fg
│   ├── chiffchaff-withinyear-fg-trn.csv
│   └── chiffchaff-withinyear-fg-tst.csv
├── KiwiTrimmed
│   └── kiwi_metadata.csv
├── littleowl-fg
│   ├── littleowl-acrossyear-fg-trn.csv
│   └── littleowl-acrossyear-fg-tst.csv
├── littlepenguin_metadata
│   └── littlepenguin_metadata_corrected.csv
├── pipit-fg
│   ├── pipit-withinyear-fg-trn.csv
│   └── pipit-withinyear-fg-tst.csv
└── rtbc_metadata
    └── rtbc_metadata.csv
```
Before extracting embeddings, each vocalization must be padded so its duration is a multiple of 3 seconds.
Run the following notebook first:
```
Notebooks/3_Adding silence/Adding_silence_to_audios.ipynb
```
This notebook adds the necessary silence and outputs audio files ready to be processed by BirdNET.
For large datasets, this step can be time-consuming, so please be patient.
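The notebook's implementation is not reproduced here, but the core padding operation can be sketched as follows, assuming 1-D audio arrays (the function name `pad_to_multiple` is illustrative):

```python
import numpy as np

def pad_to_multiple(audio: np.ndarray, sr: int, block_s: float = 3.0) -> np.ndarray:
    """Zero-pad a 1-D signal so its duration is a whole multiple of block_s seconds."""
    block = int(round(block_s * sr))          # samples per 3 s block
    remainder = len(audio) % block
    if remainder == 0 and len(audio) > 0:
        return audio                           # already a clean multiple
    pad = block - remainder if len(audio) > 0 else block
    return np.concatenate([audio, np.zeros(pad, dtype=audio.dtype)])
```

For example, a 100-sample clip at 48 kHz is padded up to 144,000 samples (one full 3 s block).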
Next, extract the embeddings with:
```
Notebooks/4_gettingEmbeddings/1_gettingEmbeddings_parquet.ipynb
```
This notebook uses the BirdNETlib library to process the padded audio datasets, extract embeddings, and save the results in Parquet format.
Make sure to adjust the file paths and parameters inside the notebook to match your specific dataset and requirements.
Embeddings are extracted using BirdNET v2.4 via birdnetlib (1024-dimensional embeddings, classification head removed). Audio is processed in non-overlapping 3 s windows after zero-padding to the next 3 s multiple; birdnetlib handles resampling to 48 kHz and spectrogram generation internally.
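The windowing itself is handled inside birdnetlib, but conceptually it amounts to reshaping the padded signal into non-overlapping 3 s frames; a minimal sketch (function name `split_into_windows` is illustrative):

```python
import numpy as np

def split_into_windows(audio: np.ndarray, sr: int, window_s: float = 3.0) -> np.ndarray:
    """Split a padded 1-D signal into non-overlapping windows of window_s seconds.

    Returns an array of shape (n_windows, window_samples).
    """
    window = int(round(window_s * sr))
    if len(audio) % window != 0:
        raise ValueError("pad the audio to a 3 s multiple first")
    return audio.reshape(-1, window)
```

Each row then corresponds to one 1024-D embedding in the output Parquet files.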
Each dataset will produce a set of Parquet parts, saved under:
```
Output_files/Embeddings_from_3sPadding/<dataset_name>_parquet_parts/
```

Example:

```
Output_files/Embeddings_from_3sPadding/littleowl_parquet_parts/part_0000.parquet
Output_files/Embeddings_from_3sPadding/littleowl_parquet_parts/littleowl_processed_files.parquet
```
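When loading the results downstream, the part files need to be collected in order; a small stdlib sketch of that step, assuming the directory layout above (the helper name `list_parquet_parts` is illustrative):

```python
from pathlib import Path

def list_parquet_parts(dataset, base="Output_files/Embeddings_from_3sPadding"):
    """Return a dataset's Parquet part files, sorted by part index."""
    parts_dir = Path(base) / f"{dataset}_parquet_parts"
    # Sorting the zero-padded names (part_0000, part_0001, ...) preserves order;
    # the *_processed_files.parquet bookkeeping file is excluded by the glob.
    return sorted(parts_dir.glob("part_*.parquet"))
```

The returned list can then be read and concatenated with any Parquet reader.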
Once the embeddings are extracted, they can be used as input for the embedding-to-individual repository.
This project is licensed under the MIT License.