multiple requests according to model type #544

Open
wants to merge 7 commits into base: master

Conversation

mansi-k
Contributor

@mansi-k mansi-k commented Mar 27, 2025

Ticket1: https://clarifai.atlassian.net/browse/PR-468

Ticket2: https://clarifai.atlassian.net/browse/PR-471

For multimodal models, the image and the text in the test were unrelated. Made them related to ensure the image is actually taken into consideration by the model.

Only the image URL seems to be considered. Enabled passing image bytes too, since some models do image processing before prediction.

Also added a test for multimodal models with only text in the input.

Logged the last token in the streamed response.
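A minimal sketch of the last change above, logging the final token of a streamed response. The wrapper name and chunk shape here are hypothetical stand-ins, not the actual runner API:

```python
import logging

logger = logging.getLogger(__name__)

def log_last_token(stream):
    """Consume a token stream, yielding each chunk and logging the final one."""
    last = None
    for chunk in stream:
        last = chunk
        yield chunk
    # After the stream is exhausted, record the final token for debugging.
    logger.info("Last streamed token: %r", last)
```

Wrapping the stream keeps the caller's iteration unchanged while still capturing the last token once the generator is exhausted.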

@mansi-k mansi-k requested review from luv-bansal and mogith-pn March 27, 2025 10:48
Comment on lines +103 to +106
default_request = service_pb2.PostModelOutputsRequest(
model=resources_pb2.Model(model_version=model_version_proto),
inputs=[
resources_pb2.Input(


Don't need a default request; can we just use _build_text_to_text_requests as the default request?

Comment on lines +130 to +132
def _build_text_to_text_requests():
requests = [
service_pb2.PostModelOutputsRequest(


Nit: I think we can reduce duplication by creating a function CreateRequest(model_version_proto, image_input, text_input), where you pass in the image, text, and model version proto as params.

Would make it reusable elsewhere too
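A minimal sketch of the suggested factoring. Plain dicts stand in for the real service_pb2 / resources_pb2 protos, and the function names are hypothetical; the shape mirrors PostModelOutputsRequest(model=..., inputs=[...]):

```python
def create_request(model_version_proto, image_input=None, text_input=None):
    """Build a single outputs request from optional image and text parts.

    Dicts stand in for the protos: the real version would construct
    service_pb2.PostModelOutputsRequest with resources_pb2.Input/Data.
    """
    data = {}
    if image_input is not None:
        data["image"] = image_input
    if text_input is not None:
        data["text"] = text_input
    return {
        "model": {"model_version": model_version_proto},
        "inputs": [{"data": data}],
    }

# The existing per-modality builders then collapse to thin wrappers:
def build_text_to_text_request(model_version_proto, text):
    return create_request(model_version_proto, text_input={"raw": text})
```

With one factory, the text-only, image-only, and multimodal test cases differ only in which arguments they pass.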

Comment on lines +254 to +258
except Exception as e:
logger.error(f"Model Generation failed: {e}")
traceback.print_exc()
generate_response = service_pb2.MultiOutputResponse(
status=status_pb2.Status(


Unrelated to this PR, but shouldn't the predict / generate stream functions themselves perform this fallback and return a failure MultiOutputResponse?
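The fallback in the excerpt could be hoisted into a generic wrapper, sketched below. The function names and the failure code are hypothetical stand-ins; the real code builds a service_pb2.MultiOutputResponse with a status_pb2.Status:

```python
import logging
import traceback

logger = logging.getLogger(__name__)

FAILURE_CODE = 99999  # stand-in for the real failure status code constant

def generate_with_fallback(generate_fn, request):
    """Run the model's generate function, converting any exception into a
    failure response instead of propagating it to the caller."""
    try:
        return generate_fn(request)
    except Exception as e:
        logger.error("Model Generation failed: %s", e)
        traceback.print_exc()
        # Stand-in for MultiOutputResponse(status=Status(code=..., description=...))
        return {"status": {"code": FAILURE_CODE, "description": str(e)}}
```

Putting the try/except in one place means each caller no longer needs to duplicate the logging and fallback-response construction.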

Comment on lines +365 to +367
command = [
self.python_executable,
"-m",


Wasn't the previous version better?

Comment on lines +447 to +448
[
"docker",


Same as the other comment: wasn't the single line better?

Contributor

@luv-bansal luv-bansal left a comment


Test-locally will work differently in the Pythonic interface, so I'm not sure we need this.

3 participants