# CIDDA RMIT IR participation at the NTCIR FairWeb-1 task
- Open the terminal on your Linux machine and go to your home directory:

  ```
  cd ~
  ```
- Generate an SSH key (if you haven't already) by using the command:

  ```
  ssh-keygen -t rsa -b 4096 -C "[email protected]"
  ```

  Follow the prompts to generate the key pair. Note: instead of overwriting the existing `id_rsa` file, it is recommended to create a new one.
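As the note suggests, one way to avoid overwriting an existing key is to give the new one its own filename. A minimal sketch, assuming the illustrative filename `id_rsa_github` (the empty passphrase is only to keep the example non-interactive; you would normally set one at the prompt):

```shell
# Create ~/.ssh if it does not exist yet.
mkdir -p ~/.ssh

# Generate a new RSA key pair under an assumed custom filename so that
# any existing ~/.ssh/id_rsa is left untouched. -N "" sets an empty
# passphrase purely for this non-interactive sketch.
ssh-keygen -t rsa -b 4096 -C "[email protected]" -f ~/.ssh/id_rsa_github -N ""
```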
- Add your SSH key to your GitHub account by copying the contents of the public key (the newly created `.pub` file) to your clipboard. You can use the command:

  ```
  cat ~/.ssh/id_rsa.pub
  ```

  This command outputs the contents of your public key to the terminal. Copy the entire output to your clipboard.
- Then go to your GitHub account settings, click on "SSH and GPG keys", and click on "New SSH key". Give the key a descriptive title and paste the contents of your public key into the "Key" field. Click "Add SSH key" to save it.
- In the terminal, go to the directory where you want the repository cloned and enter the following command to clone:

  ```
  git clone -b dev [email protected]:rmit-ir/fairweb-1.git
  ```

  Enter your passphrase if/when prompted. Note: make sure you have appropriate access permissions and authentication credentials (such as an SSH key) before attempting to clone a private repository.
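If you generated the key under a non-default filename, SSH will not offer it to GitHub automatically. A hedged sketch, assuming the hypothetical key file `~/.ssh/id_rsa_github` from earlier:

```shell
# Point SSH at the assumed custom key for github.com so that git clone
# over SSH picks it up without any extra flags.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host github.com
    User git
    IdentityFile ~/.ssh/id_rsa_github
    IdentitiesOnly yes
EOF
chmod 600 ~/.ssh/config
```

You can then verify the connection with `ssh -T [email protected]` before cloning.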
- Go to the `NTCIREVAL` folder and enter the following command:

  ```
  make
  ```

  You should see

  ```
  gcc -o ntcir_eval ntcir_eval.o -lm
  ```

  on your terminal as a result. Now, to test that the evaluation scripts are working, let's go to the folder named `toy`.
- Run the following command:

  ```
  ./run_eval_script.sh
  ```

  This should generate files with the suffixes `.tid` and `.GFRnev`, folders `P001`, `P002`, `P003`, `P004` (i.e., one per topic), and a `results` folder containing all the `.tsm` files, which hold the scores. More details about the scores will be explained in a later section. Please open one of the `.tsm` files in the `results` folder and ensure it is not empty and the values are not 0. If you want to clear all the generated files, you may use the following command:

  ```
  ./clear_generations.sh
  ```

  Note: please make sure the above-mentioned shell scripts have execute permissions. For example, this can be done with the following commands:

  ```
  chmod +x run_eval_script.sh
  chmod +x clear_generations.sh
  ```

  You have successfully run the evaluation on the `toy` dataset.
- Go to the folder `FW1pilotpack`.
- The organizers have provided the following baseline run files, which are available in this folder:

  ```
  run.qld-depThre6
  run.qljm-depThre6
  run.bm25-depThre6
  ```

- The evaluation script runs on all the runs named in the file `runlist`.
- Run the following command to evaluate all the runs named in the file `runlist`:

  ```
  ./run_eval_script.sh
  ```

- If all the files are generated successfully for the existing runs, you may add your own runs to this folder for evaluation.
- Add the run name to the `runlist` file.
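For example, assuming a hypothetical run file named `run.myteam-bm25`, the entry can be appended from the shell:

```shell
# Append a hypothetical run name to the runlist file; the evaluation
# script will pick it up on the next ./run_eval_script.sh invocation.
echo "run.myteam-bm25" >> runlist
```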
- After running `./run_eval_script.sh`, run the following command to aggregate the results and create visualisations:

  ```
  python eval_test.py
  ```

- After running the Python script you should be able to see `.pdf` and `.csv` files created inside the `results` folder that was generated earlier in the same path.