add requirements.txt for no-deps for timm and open-clip packages
hsubramony committed Jan 28, 2025
1 parent bbcc1fb commit 3052785
Showing 6 changed files with 23 additions and 5 deletions.
4 changes: 4 additions & 0 deletions examples/image-classification/README.md
@@ -309,6 +309,10 @@ python run_image_classification.py \

This directory contains an example script that demonstrates using FastViT with graph mode.

```bash
pip install --no-deps -r requirements_no_deps.txt
```
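
With `--no-deps`, pip installs `timm` itself but skips its declared dependencies, which would otherwise pull a stock `torch`/`torchvision` over the Habana-enabled PyTorch build already in the environment (a rationale inferred from the commit message, not stated in the diff). A quick sanity check after installing:

```bash
python -c "import timm; print(timm.__version__)"  # timm should import against the already-installed torch
```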

### Single-HPU inference

1 change: 1 addition & 0 deletions examples/image-classification/requirements_no_deps.txt
@@ -0,0 +1 @@
timm
8 changes: 6 additions & 2 deletions examples/visual-question-answering/README.md
@@ -16,6 +16,10 @@ limitations under the License.

# Visual Question Answering Examples

```bash
pip install -r requirements.txt
```

## Single-HPU inference

The `run_pipeline.py` script showcases how to use the Transformers pipeline API to run the visual question answering task on HPUs.
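
An invocation might look like the following sketch; the model name, image, question, and flags are illustrative assumptions rather than contents of this diff:

```bash
python run_pipeline.py \
  --model_name_or_path Salesforce/blip-vqa-capfilt-large \
  --image_path "http://images.cocodataset.org/val2017/000000039769.jpg" \
  --question "How many cats are there?" \
  --use_hpu_graphs \
  --bf16
```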
@@ -34,7 +38,7 @@ The `run_openclip_vqa.py` can be used to run zero shot image classification with
The requirements for `run_openclip_vqa.py` can be installed with `openclip_requirements.txt` as follows:

```bash
-pip install -r openclip_requirements.txt
+pip install --no-deps -r openclip_requirements.txt
```
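
Taken together with the new `requirements.txt`, the intended setup appears to be two installs, ordinary dependencies first and the OpenCLIP packages without theirs (a sketch inferred from this diff):

```bash
pip install -r requirements.txt                     # matplotlib and ftfy, with their dependencies
pip install --no-deps -r openclip_requirements.txt  # open_clip_torch and timm only, keeping the existing torch
```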

By default, the script runs the sample outlined in the [BiomedCLIP-PubMedBERT_256-vit_base_patch16_224 notebook](https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/blob/main/biomed_clip_example.ipynb). One can also run other OpenCLIP models by specifying the model, classifier labels, and image URL(s) like so:
@@ -46,4 +50,4 @@ python run_openclip_vqa.py \
--image_path "http://images.cocodataset.org/val2017/000000039769.jpg" \
--use_hpu_graphs \
--bf16
```
3 changes: 1 addition & 2 deletions examples/visual-question-answering/openclip_requirements.txt
@@ -1,3 +1,2 @@
open_clip_torch==2.23.0
-matplotlib
-
+timm
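
Since `--no-deps` skips dependency resolution entirely, it can leave declared requirements unsatisfied; a generic way to surface any that matter (not part of this commit) is pip's own checker:

```bash
pip check  # reports installed packages whose declared dependencies are missing or conflicting
```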
2 changes: 2 additions & 0 deletions examples/visual-question-answering/requirements.txt
@@ -0,0 +1,2 @@
matplotlib
ftfy
10 changes: 9 additions & 1 deletion tests/test_openclip_vqa.py
@@ -30,8 +30,16 @@

def _install_requirements():
    PATH_TO_EXAMPLE_DIR = Path(__file__).resolve().parent.parent / "examples"

    cmd_line = (
        f"pip install -r {PATH_TO_EXAMPLE_DIR / 'visual-question-answering' / 'requirements.txt'}".split()
    )
    p = subprocess.Popen(cmd_line)
    return_code = p.wait()
    assert return_code == 0

    cmd_line = (
-        f"pip install -r {PATH_TO_EXAMPLE_DIR / 'visual-question-answering' / 'openclip_requirements.txt'}".split()
+        f"pip install --no-deps -r {PATH_TO_EXAMPLE_DIR / 'visual-question-answering' / 'openclip_requirements.txt'}".split()
    )
    p = subprocess.Popen(cmd_line)
    return_code = p.wait()
