
wip: comment out prepend full_text #3079


Draft
jrc2139 wants to merge 1 commit into main

Conversation


@jrc2139 jrc2139 commented Mar 7, 2025

What does this PR do?

This is an investigation into why passing "return_full_text": true as a parameter when hitting the /generate endpoint produces a valid translation while running madlad400.

Background: #1416 (comment)

Fixes #1416
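
For reference, a minimal sketch of the request being investigated, assuming a local text-generation-inference server at localhost:8080 serving a madlad400 checkpoint (the URL, model, and language tag are placeholder assumptions; the payload shape follows TGI's /generate API):

    // Minimal sketch: calling TGI's /generate endpoint with return_full_text.
    // Assumes a server at http://localhost:8080 serving a madlad400 checkpoint,
    // plus reqwest (with the "blocking" and "json" features) and serde_json.
    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let body = json!({
            // madlad400 expects a target-language tag such as <2de> (German)
            "inputs": "<2de> The quick brown fox jumps over the lazy dog.",
            "parameters": {
                "return_full_text": true,  // the parameter under investigation
                "max_new_tokens": 128
            }
        });
        let resp: serde_json::Value = reqwest::blocking::Client::new()
            .post("http://localhost:8080/generate")
            .json(&body)
            .send()?
            .json()?;
        println!("{}", resp["generated_text"]);
        Ok(())
    }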

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@jrc2139 jrc2139 mentioned this pull request Mar 7, 2025
    // review context: ApiBuilder construction and the optional auth token
    let mut builder = ApiBuilder::new();
    if let Some(token) = authorization_token {
        builder = builder.with_token(Some(token));
    }
Author

This chunk will be reverted, sorry it got in here.

@Narsil
Collaborator

Narsil commented Mar 10, 2025

return_full_text is a legacy option tied to the initial transformers.pipelines implementation (something like 4+ years ago).
We had API dependencies on that behavior and therefore implemented it here; it can mostly be disregarded nowadays.

What you did here effectively deactivates its job, which is really just to decode all of the decoder tokens instead of only the new ones.
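
For illustration, a conceptual sketch of that distinction using the tokenizers crate (this is not TGI's actual code; prompt_tokens and new_tokens are hypothetical names):

    // Conceptual sketch: return_full_text decodes the prompt plus the newly
    // generated tokens, while the default decodes only the new tokens.
    use tokenizers::Tokenizer;

    fn decode_output(
        tokenizer: &Tokenizer,
        prompt_tokens: &[u32],
        new_tokens: &[u32],
        return_full_text: bool,
    ) -> String {
        if return_full_text {
            // decode the full sequence: prompt + generated tokens
            let all: Vec<u32> = prompt_tokens.iter().chain(new_tokens).copied().collect();
            tokenizer.decode(&all, true).unwrap()
        } else {
            // default behavior: decode only the newly generated tokens
            tokenizer.decode(new_tokens, true).unwrap()
        }
    }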

I was under the impression that T5 was an encoder/decoder model, so I'm surprised to see a decoder-only model here.

I'm pretty sure the "bug" will simply turn out to be a tokenizer issue, where possibly you're hitting a very old non-flash T5 version that doesn't support some flags used in those tokenizers.

Thanks for the fix for your use case. I'll leave it up for others to see, but if we're going to fix it, we need to find the root cause and fix that instead.

@jrc2139
Author

jrc2139 commented Mar 10, 2025

@Narsil thanks for looking into this and explaining it; that gives me a sense of what the root of the problem could be. It was nice to see madlad being served with good results, and I may just use this for the time being. Looking forward to tracking this solution and learning more about this great project 💪

Development

Successfully merging this pull request may close these issues.

Madlad400 support
2 participants