EasyNMT #79

Open
abidikarima opened this issue Sep 12, 2022 · 0 comments

Hi, thank you for this useful library. I tried to install it on my machine, and it gave me this error.
Any help, please?
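
For context, this is roughly the code I was running. The traceback goes through OpusMT.py, so I'm showing the opus-mt model here; the exact setup on my machine may differ slightly:

```python
from easynmt import EasyNMT

# Load the Opus-MT model; EasyNMT moves it to the GPU automatically when CUDA is available
model = EasyNMT('opus-mt')

# This call produces the CUDA error below
print(model.translate('This is a sentence we want to translate to German', target_lang='de'))
```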

```
    print(model.translate('This is a sentence we want to translate to German', target_lang='de'))
  File "/home/karima/.local/lib/python3.8/site-packages/easynmt/EasyNMT.py", line 154, in translate
    raise e
  File "/home/karima/.local/lib/python3.8/site-packages/easynmt/EasyNMT.py", line 149, in translate
    translated = self.translate(**method_args)
  File "/home/karima/.local/lib/python3.8/site-packages/easynmt/EasyNMT.py", line 181, in translate
    translated_sentences = self.translate_sentences(splitted_sentences, target_lang=target_lang, source_lang=source_lang, show_progress_bar=show_progress_bar, beam_size=beam_size, batch_size=batch_size, **kwargs)
  File "/home/karima/.local/lib/python3.8/site-packages/easynmt/EasyNMT.py", line 278, in translate_sentences
    output.extend(self.translator.translate_sentences(sentences_sorted[start_idx:start_idx+batch_size], source_lang=source_lang, target_lang=target_lang, beam_size=beam_size, device=self.device, **kwargs))
  File "/home/karima/.local/lib/python3.8/site-packages/easynmt/models/OpusMT.py", line 49, in translate_sentences
    translated = model.generate(**inputs, num_beams=beam_size, **kwargs)
  File "/home/karima/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/karima/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 1182, in generate
    model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
  File "/home/karima/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 525, in _prepare_encoder_decoder_kwargs_for_generation
    model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
  File "/home/karima/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/karima/.local/lib/python3.8/site-packages/transformers/models/marian/modeling_marian.py", line 749, in forward
    inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
  File "/home/karima/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/karima/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
    return F.embedding(
  File "/home/karima/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 2199, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
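
As far as I understand, this error usually means the installed PyTorch wheel was not built with kernels for my GPU's compute capability. For reference, this is the kind of check I can run to see which CUDA build of PyTorch is installed and what the GPU reports (the output obviously depends on the machine):

```python
import torch

# PyTorch version and the CUDA version it was compiled against
print(torch.__version__, torch.version.cuda)

# Whether CUDA is usable and which compute capability the GPU reports
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))

# The GPU architectures the installed wheel was compiled for
print(torch.cuda.get_arch_list())
```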
