Parameter problem #3

@zjd2024

Description

After changing the `--method` and `--max_capacity_prompts` settings, the generated JSON outputs and the scores are exactly the same. Is this expected? How should these parameters be tuned? 🤦‍♂️
Could you provide a detailed parameter configuration guide?

Below are the three commands I ran; all of them produced identical JSON outputs and identical scores.
Any clarification would be appreciated. Thanks!
(1)

```sh
bash ./scripts/scripts_longBench/eval.sh \
    --max_capacity_prompts 2048 \
    --method fullkv \
    --attn_implementation eager \
    --source_path ./results/ \
    --model_path /home/jdzhou/exper/uncomp/models/tinyllama \
    --eval_batch_size 1 \
    --name tinyllama_fullkv \
    --gpu_id 0 \
    --fp16 1 \
    --seed 43 \
    --logger_pattern info \
    --port 2236
```

(2)

```sh
bash ./scripts/scripts_longBench/eval.sh \
    --max_capacity_prompts 384 \
    --method uncomp \
    --attn_implementation eager \
    --source_path ./results/ \
    --model_path /home/jdzhou/exper/uncomp/models/tinyllama \
    --eval_batch_size 1 \
    --name tinyllama_uncomp \
    --gpu_id 0 \
    --fp16 1 \
    --seed 43 \
    --logger_pattern info \
    --port 2237
```

(3)

```sh
bash ./scripts/scripts_longBench/eval.sh \
    --max_capacity_prompts 384 \
    --method uncomp_stage \
    --attn_implementation eager \
    --source_path ./results/ \
    --model_path /home/jdzhou/exper/uncomp/models/tinyllama \
    --eval_batch_size 1 \
    --name tinyllama_uncomp \
    --gpu_id 0 \
    --fp16 1 \
    --seed 43 \
    --logger_pattern info \
    --port 2238
```
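Before tuning anything, it may help to rule out that the three runs are simply writing to (or reading from) the same output file, and to check whether the JSON files are truly content-identical rather than just producing similar scores. Below is a minimal sketch of such a check; the file paths and JSON fields are hypothetical stand-ins, since I don't know exactly where `eval.sh` writes its results — point it at the actual files under `./results/`.

```python
import json
import os
import tempfile


def results_identical(path_a, path_b):
    """Load two result JSON files and compare their contents.

    Key order and whitespace are ignored, so this detects genuine
    content equality rather than byte equality.
    """
    with open(path_a) as fa, open(path_b) as fb:
        return json.load(fa) == json.load(fb)


# Demo with throwaway files standing in for the real outputs
# (e.g. results from the fullkv and uncomp runs above).
tmp = tempfile.mkdtemp()
a = os.path.join(tmp, "fullkv.json")
b = os.path.join(tmp, "uncomp.json")
with open(a, "w") as f:
    json.dump({"score": 31.2, "method": "fullkv"}, f)
with open(b, "w") as f:
    json.dump({"score": 31.2, "method": "uncomp"}, f)

print(results_identical(a, b))  # the "method" fields differ -> False
```

If the real files compare equal like this even across `--method fullkv` and `--method uncomp`, that would suggest the flags are not reaching the evaluation code (or one run is overwriting the other), rather than a genuine tie in scores.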
