@xiaoqian-shen Hello, I ran the eval code, but the accuracy on the MVBench dataset is low. Is this normal? Thanks.
command:
num_gpus=1 torchrun --standalone --nnodes 1 --nproc_per_node $num_gpus eval/eval_mvbench.py --data_path ./data/opendata/MVBench/ --version llama3 --model_path checkpoints/LongVU_Llama3_2_3B
results:
Accuracy: 34.225
Task accuracy: {'Action Sequence': 25.0, 'Action Prediction': 27.5, 'Action Antonym': 62.5, 'Fine-grained Action': 22.5, 'Unexpected Action': 34.0, 'Object Existence': 52.0, 'Object Interaction': 29.5, 'Object Shuffle': 30.5, 'Moving Direction': 24.0, 'Action Localization': 23.5, 'Scene Transition': 28.5, 'Action Count': 41.5, 'Moving Count': 36.5, 'Moving Attribute': 42.5, 'State Change': 38.5, 'Fine-grained Pose': 26.5, 'Character Order': 41.5, 'Egocentric Navigation': 33.5, 'Episodic Reasoning': 30.5, 'Counterfactual Inference': 34.0}
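For what it's worth, the overall number is at least internally consistent with the per-task breakdown: MVBench has 20 tasks with 200 questions each, so the overall accuracy should equal the unweighted mean of the per-task accuracies. A quick standalone check (not code from the repo) confirms this, which suggests the aggregation is fine and the per-task scores themselves are what's low:

```python
# Consistency check: with 200 questions per MVBench task, the overall
# accuracy equals the unweighted mean of the 20 per-task accuracies.
task_acc = {
    'Action Sequence': 25.0, 'Action Prediction': 27.5,
    'Action Antonym': 62.5, 'Fine-grained Action': 22.5,
    'Unexpected Action': 34.0, 'Object Existence': 52.0,
    'Object Interaction': 29.5, 'Object Shuffle': 30.5,
    'Moving Direction': 24.0, 'Action Localization': 23.5,
    'Scene Transition': 28.5, 'Action Count': 41.5,
    'Moving Count': 36.5, 'Moving Attribute': 42.5,
    'State Change': 38.5, 'Fine-grained Pose': 26.5,
    'Character Order': 41.5, 'Egocentric Navigation': 33.5,
    'Episodic Reasoning': 30.5, 'Counterfactual Inference': 34.0,
}
print(round(sum(task_acc.values()) / len(task_acc), 3))  # 34.225, matches
```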
I can run eval_mvbench.py on a single 20GB GPU, but it reports an out-of-memory error when running on eight 80GB GPUs.
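One common cause of this symptom with torchrun launches (single GPU fine, multi-GPU OOM) is that every spawned rank allocates the model on cuda:0 instead of its own device, so eight full copies pile onto one card. Whether eval_mvbench.py handles this is an assumption on my part, but the standard per-rank setup looks like this minimal sketch:

```python
import os
import torch

# Hypothetical sketch of the usual torchrun per-process setup; this is
# NOT taken from eval_mvbench.py. torchrun sets LOCAL_RANK, RANK, and
# WORLD_SIZE in each process's environment.
local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.cuda.set_device(local_rank)  # pin this rank to its own GPU

rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

# Each rank should also evaluate a disjoint slice of the benchmark,
# e.g. samples = samples[rank::world_size], so work is split rather
# than duplicated across GPUs.
```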