multiple requests according to model type #544
base: master
Conversation
default_request = service_pb2.PostModelOutputsRequest(
    model=resources_pb2.Model(model_version=model_version_proto),
    inputs=[
        resources_pb2.Input(
Don't need a separate default request; can we just use _build_text_to_text_requests as the default request?
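For illustration, a minimal sketch (not the PR's actual code) of how the default path could just reuse the text-to-text builder; the explicit model_version_proto parameter, the prompt text, and the placeholder model version are assumptions:

from clarifai_grpc.grpc.api import resources_pb2, service_pb2

def _build_text_to_text_requests(model_version_proto, prompt="Test prompt"):
    # Same request shape as the diff above, built in one place.
    return [
        service_pb2.PostModelOutputsRequest(
            model=resources_pb2.Model(model_version=model_version_proto),
            inputs=[
                resources_pb2.Input(
                    data=resources_pb2.Data(text=resources_pb2.Text(raw=prompt))
                )
            ],
        )
    ]

# Unknown / text model types fall back to the same builder instead of a
# separately constructed default_request.
model_version_proto = resources_pb2.ModelVersion(id="test-model-version")  # placeholder
requests = _build_text_to_text_requests(model_version_proto)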
def _build_text_to_text_requests():
    requests = [
        service_pb2.PostModelOutputsRequest(
Nit: I think we can reduce duplication by creating a function, CreateRequest(model_version_proto, image_input, text_input), where you pass the Image, Text, and ModelVersion proto as params. It would also be reusable elsewhere.
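A rough sketch of the suggested helper, assuming the clarifai-grpc proto layout (Data holds optional image and text parts); the name CreateRequest and its keyword arguments come from the comment above and are otherwise hypothetical:

from clarifai_grpc.grpc.api import resources_pb2, service_pb2

def CreateRequest(model_version_proto, image_input=None, text_input=None):
    # Assemble a Data proto from whichever parts the caller supplies.
    data = resources_pb2.Data()
    if image_input is not None:
        data.image.CopyFrom(image_input)  # resources_pb2.Image(url=... / base64=...)
    if text_input is not None:
        data.text.CopyFrom(text_input)    # resources_pb2.Text(raw=...)
    return service_pb2.PostModelOutputsRequest(
        model=resources_pb2.Model(model_version=model_version_proto),
        inputs=[resources_pb2.Input(data=data)],
    )

# The per-model-type builders then collapse to one-liners, e.g.:
# _build_text_to_text_requests -> [CreateRequest(mv, text_input=resources_pb2.Text(raw="Hi"))]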
except Exception as e:
    logger.error(f"Model Generation failed: {e}")
    traceback.print_exc()
    generate_response = service_pb2.MultiOutputResponse(
        status=status_pb2.Status(
Unrelated to this PR, but shouldn't the predict / generate / stream functions themselves perform this fallback to a MultiOutputResponse with a failure status?
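For context, a sketch of what pushing the fallback into such a wrapper could look like; status_code_pb2.FAILURE and the Status / MultiOutputResponse protos are from clarifai-grpc, while safe_generate and model.generate are assumed names:

import logging
import traceback

from clarifai_grpc.grpc.api import service_pb2
from clarifai_grpc.grpc.api.status import status_code_pb2, status_pb2

logger = logging.getLogger(__name__)

def safe_generate(model, request):
    # Run the model and degrade to a failed MultiOutputResponse instead of raising.
    try:
        return model.generate(request)  # assumed model interface
    except Exception as e:
        logger.error(f"Model Generation failed: {e}")
        traceback.print_exc()
        return service_pb2.MultiOutputResponse(
            status=status_pb2.Status(
                code=status_code_pb2.FAILURE,
                description="Model Generation failed",
                details=str(e),
            )
        )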
command = [
    self.python_executable,
    "-m",
Wasn't the previous formatting better?
[
    "docker",
Same as the other one: wasn't the single-line version better?
Test-locally will work differently in the pythonic interface, so I'm not sure we need this.
Ticket1: https://clarifai.atlassian.net/browse/PR-468
Ticket2: https://clarifai.atlassian.net/browse/PR-471
For multimodal models, the image and the text in the test were unrelated. Made them related to make sure the model actually takes the image into account.
Only the image URL seemed to be considered. Enabled passing image bytes too, since some models do image preprocessing before prediction (see the sketch below).
Also added a test for multimodal models with only text in the input.
Logged the last token of the streamed response.
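A minimal sketch of the multimodal test request described above: image bytes plus a prompt that refers to the image, and logging the final chunk of a streamed response. The Data / Image / Text proto layout is from clarifai-grpc; the file name, prompt, placeholder model version, and the model_generate streaming call are illustrative assumptions:

import logging

from clarifai_grpc.grpc.api import resources_pb2, service_pb2

logger = logging.getLogger(__name__)

model_version_proto = resources_pb2.ModelVersion(id="test-model-version")  # placeholder

# Image bytes instead of only a URL, so models that preprocess the image
# before prediction are exercised too.
with open("red_truck.png", "rb") as f:
    image_bytes = f.read()

multimodal_request = service_pb2.PostModelOutputsRequest(
    model=resources_pb2.Model(model_version=model_version_proto),
    inputs=[
        resources_pb2.Input(
            data=resources_pb2.Data(
                image=resources_pb2.Image(base64=image_bytes),
                # Text related to the image, so the test fails if the image is ignored.
                text=resources_pb2.Text(raw="What color is the truck in this image?"),
            )
        )
    ],
)

# Keep only the final streamed chunk and log it (model_generate is an assumed
# streaming generator over MultiOutputResponse chunks).
last_chunk = None
for chunk in model_generate(multimodal_request):
    last_chunk = chunk
if last_chunk is not None and last_chunk.outputs:
    logger.info(f"Last streamed token: {last_chunk.outputs[0].data.text.raw}")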