ImportError: cannot import name 'url_to_fs' from 'fsspec' #507

Closed
Nevermetyou65 opened this issue Jan 22, 2025 · 2 comments · Fixed by #512

Comments

@Nevermetyou65

Hi

I am trying lighteval for the first time and got stuck immediately.

I tried to run this example:

lighteval accelerate \
     "pretrained=gpt2" \
     "leaderboard|truthfulqa:mc|0|0"

And I got this error:
ImportError: cannot import name 'url_to_fs' from 'fsspec' (/root/miniconda3/envs/lightevals/lib/python3.11/site-packages/fsspec/__init__.py)

Steps to reproduce

git clone https://github.com/huggingface/lighteval.git
cd lighteval
pip install -e .
@LoserCheems
Contributor

I ran into the same problem; my error log is below.

[2025-01-24 12:41:06,387] [    INFO]: PyTorch version 2.4.1+cpu available. (config.py:54)
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ E:\conda\envs\lighteval\lib\site-packages\lighteval\main_accelerate.py:109 in accelerate         │
│                                                                                                  │
│   106 │   import yaml                                                                            │
│   107 │   from accelerate import Accelerator, InitProcessGroupKwargs                             │
│   108 │                                                                                          │
│ ❱ 109 │   from lighteval.logging.evaluation_tracker import EvaluationTracker                     │
│   110 │   from lighteval.models.model_input import GenerationParameters                          │
│   111 │   from lighteval.models.transformers.adapter_model import AdapterModelConfig             │
│   112 │   from lighteval.models.transformers.delta_model import DeltaModelConfig                 │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │                 cache_dir = None                                                             │ │
│ │              custom_tasks = None                                                             │ │
│ │ dataset_loading_processes = 1                                                                │ │
│ │                    job_id = 0                                                                │ │
│ │               max_samples = None                                                             │ │
│ │                model_args = 'pretrained=JingzeShi/Doge-160M-checkpoint,max_length=2048,trus… │ │
│ │         num_fewshot_seeds = 1                                                                │ │
│ │                output_dir = './lighteval_results'                                            │ │
│ │       override_batch_size = 16                                                               │ │
│ │                public_run = False                                                            │ │
│ │               push_to_hub = False                                                            │ │
│ │       push_to_tensorboard = False                                                            │ │
│ │               results_org = None                                                             │ │
│ │              save_details = False                                                            │ │
│ │             system_prompt = None                                                             │ │
│ │                     tasks = 'lighteval|triviaqa|0|0'                                         │ │
│ │                     torch = <module 'torch' from                                             │ │
│ │                             'E:\\conda\\envs\\lighteval\\lib\\site-packages\\torch\\__init_… │ │
│ │         use_chat_template = False                                                            │ │
│ │                      yaml = <module 'yaml' from                                              │ │
│ │                             'E:\\conda\\envs\\lighteval\\lib\\site-packages\\yaml\\__init__… │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                  │
│ E:\conda\envs\lighteval\lib\site-packages\lighteval\logging\evaluation_tracker.py:37 in <module> │
│                                                                                                  │
│    34 import torch                                                                               │
│    35 from datasets import Dataset, load_dataset                                                 │
│    36 from datasets.utils.metadata import MetadataConfigs                                        │
│ ❱  37 from fsspec import url_to_fs                                                               │
│    38 from huggingface_hub import DatasetCard, DatasetCardData, HfApi, HFSummaryWriter, hf_hub   │
│    39                                                                                            │
│    40 from lighteval.logging.info_loggers import (                                               │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │    json = <module 'json' from 'E:\\conda\\envs\\lighteval\\lib\\json\\__init__.py'>          │ │
│ │ logging = <module 'logging' from 'E:\\conda\\envs\\lighteval\\lib\\logging\\__init__.py'>    │ │
│ │      os = <module 'os' from 'E:\\conda\\envs\\lighteval\\lib\\os.py'>                        │ │
│ │      re = <module 're' from 'E:\\conda\\envs\\lighteval\\lib\\re.py'>                        │ │
│ │    time = <module 'time' (built-in)>                                                         │ │
│ │   torch = <module 'torch' from                                                               │ │
│ │           'E:\\conda\\envs\\lighteval\\lib\\site-packages\\torch\\__init__.py'>              │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'url_to_fs' from 'fsspec' (E:\conda\envs\lighteval\lib\site-packages\fsspec\__init__.py)

@LoserCheems
Contributor

I will try to fix this bug.

LoserCheems added a commit to LoserCheems/lighteval that referenced this issue Jan 24, 2025
clefourrier added a commit that referenced this issue Jan 24, 2025
* Fixed bug importing url_to_fs from fsspec (#507)

* Added a try/except, since the import depends on the fsspec version

* Fixed code quality

---------

Co-authored-by: Clémentine Fourrier <[email protected]>
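
For reference, a minimal sketch of the version-tolerant import that the fix describes (the exact code in #512 may differ), assuming older fsspec releases still expose the function as fsspec.core.url_to_fs:

# Sketch of the import guard: newer fsspec releases re-export url_to_fs
# at the package top level, while older ones only provide it in fsspec.core.
try:
    from fsspec import url_to_fs
except ImportError:  # older fsspec without the top-level export
    from fsspec.core import url_to_fs

This keeps the import working across fsspec versions without having to pin a minimum fsspec release.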