Hi,
I'm currently using model predictions to filter a large amount of data. I've been using the `filter` method from the Hugging Face `datasets` library, but it's taking too long given the size of the dataset. I'm considering running the model on multiple GPUs to speed up the process.
Any suggestions? What would be the easiest and most straightforward way to run the model on multiple GPUs?
I'm using something like this:

```python
def tox_filter_list(x):
    # Keep a row only if every toxicity score for it is below 0.2
    detox_r = model_tox.predict(x['text'])
    result = [max(col) < 0.2 for col in zip(*detox_r.values())]
    return result

df.filter(tox_filter_list, batched=True, batch_size=300)
```