LitTool.from_model method to create LitTool from Pydantic #57
base: main
Conversation
Added a class method 'from_model' to create a LitTool from a Pydantic model, including setup and run methods for validation.
src/litai/tools.py (Outdated)
```python
def run(self, *args, **kwargs) -> Any:
    # Default implementation: validate & return an instance
    return model(*args, **kwargs)
```
Curious: how is this meant to be invoked? In the `run` method, it looks like it would just return a Pydantic model instance, right?
check out the shell output in the description. when invoked it does just return the model. if I didn't implement this, then calling tools would break, even though it isn't helpful to invoke. kind of just pass-through behavior
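(For illustration, a minimal sketch of that pass-through behavior; the `Person` model is made up and the import path is assumed, since neither appears in this thread.)

```python
from pydantic import BaseModel

from litai import LitTool  # import path assumed


class Person(BaseModel):  # made-up model for illustration
    name: str
    age: int


tool = LitTool.from_model(Person)
# Invoking the tool just validates the arguments and hands back the model instance,
# i.e. the pass-through behavior described above.
result = tool.run(name="Ada", age=36)
print(result)  # Person(name='Ada', age=36)
```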
Codecov Report

✅ All modified and coverable lines are covered by tests.

```diff
@@           Coverage Diff           @@
##            main     #57    +/-   ##
=======================================
  Coverage     84%     85%
=======================================
  Files          8       8
  Lines        431     443    +12
=======================================
+ Hits         364     376    +12
  Misses        67      67
```
this almost feels like it could be part of ...
upon some further testing, i came across an issue when trying to use that.

(1) modify the default `run`:

```python
def run(self, *args, **kwargs) -> Any:  # type: ignore
    # Default implementation: validate & return an instance
    return model(*args, **kwargs).model_dump()  # <-- change here to make it json-serializable
```

(2) modify `call_tool`:

```python
@staticmethod
def call_tool(
    response: Union[List[dict], dict, str], tools: Optional[Sequence[Union[LitTool, "StructuredTool"]]] = None
) -> Optional[Union[str, BaseModel, list[BaseModel]]]:
    ...
    try:
        return json.dumps(results) if len(results) > 1 else results[0]
    except TypeError:
        return results if len(results) > 1 else results[0]
```

my preference is (2) so that the invocation of `run` still returns the model instance as-is.

however, option (3) is also available: move the logic to a dedicated method such as `predict`.

for example:

```python
def predict(  # noqa: D417
    self,
    prompt: str,
    contracts: Sequence[type[BaseModel]],
    system_prompt: Optional[str] = None,
    model: Optional[str] = None,
    max_tokens: int = 500,
    images: Optional[Union[List[str], str]] = None,
    conversation: Optional[str] = None,
    metadata: Optional[Dict[str, str]] = None,
    stream: bool = False,
    auto_call_tools: bool = False,
    **kwargs: Any,
) -> Optional[Union[BaseModel, list[BaseModel]]]:
    """Sends a message to the LLM and retrieves a structured response based on the provided Pydantic models."""
    tools = [LitTool.from_model(c) for c in contracts]
    response = self.chat(
        prompt=prompt,
        system_prompt=system_prompt,
        model=model,
        max_tokens=max_tokens,
        images=images,
        conversation=conversation,
        metadata=metadata,
        stream=stream,
        tools=tools,
        auto_call_tools=auto_call_tools,
        **kwargs,
    )
    # Call tool(s) with the given response.
    if isinstance(response, str):
        try:
            response = json.loads(response)
        except json.JSONDecodeError:
            raise ValueError("Tool response is not a valid JSON string")
    results = []
    if isinstance(response, dict):
        response = [response]
    for tool_response in response:
        if not isinstance(tool_response, dict):
            continue
        tool_name = tool_response.get("function", {}).get("name")
        if not tool_name:
            continue
        tool_args = tool_response.get("function", {}).get("arguments", {})
        if isinstance(tool_args, str):
            try:
                tool_args = json.loads(tool_args)
            except json.JSONDecodeError:
                print(f"❌ Failed to parse tool arguments: {tool_args}")
                return None
        if isinstance(tool_args, dict):
            tool_args = {k: v for k, v in tool_args.items() if v is not None}
        for tool in tools:
            if tool.name == tool_name:
                results.append(tool.run(**tool_args))
    if len(results) == 0:
        return None
    return results if len(results) > 1 else results[0]
```

upside of this is a dedicated method and avoidance of the user needing to call `call_tool` themselves.

let me know which path is suitable and I'll push up another commit. @bhimrazy
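(A usage sketch of the hypothetical `predict` from option (3), purely for illustration; `LLM` as the client class, its constructor defaults, and the `Person` contract are all assumptions, not part of the current litai API.)

```python
from pydantic import BaseModel

from litai import LLM  # class name and import path assumed


class Person(BaseModel):  # hypothetical extraction contract
    name: str
    age: int


llm = LLM()  # constructor arguments omitted; defaults assumed
# With option (3), text-to-model extraction would be a single call:
person = llm.predict("Ada Lovelace, age 36, was a mathematician.", contracts=[Person])
print(person)  # expected: a validated Person instance, or None if no tool call came back
```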
Hi @mathematicalmichael, thanks for the updates. I’m a bit unsure about the purpose here — this feels more like structured data extraction than a tool implementation. Let’s hear what the maintainers think, and you can proceed accordingly.
that is correct @bhimrazy, structured extraction is the goal, tool use is almost identical under the hood though. semantics aside (what to call the method), I did want to put the functionality forward (it's 95% of the business use cases I encounter).
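(To make the "almost identical under the hood" point concrete, here is a litai-independent sketch: the JSON schema Pydantic already generates for a contract is essentially the parameters block of a function-style tool definition. The `Invoice` model and the OpenAI-style tool shape are illustrative assumptions, not litai internals.)

```python
import json

from pydantic import BaseModel


class Invoice(BaseModel):  # made-up contract
    vendor: str
    total: float


# A structured-extraction contract and a tool definition share the same wire format:
# the model's JSON schema becomes the tool's "parameters".
tool_definition = {
    "type": "function",
    "function": {
        "name": "Invoice",
        "description": "Extract an invoice from free text.",
        "parameters": Invoice.model_json_schema(),
    },
}
print(json.dumps(tool_definition, indent=2))
```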
@mathematicalmichael I agree with you—option 2 feels like the best way forward. Option 3 has some interesting points, but it might be a bit harder to maintain.
re (3): I've been putting option (3) through its paces (hundreds of API calls via `predict`).

i'll push an update with (2) shortly. thank you!
This PR adds a class method `from_model` to create a `LitTool` from a Pydantic model, including setup and run methods for validation.

Before submitting
What does this PR do?
Adds a convenience method which allows converting existing classes which inherit from `pydantic.BaseModel` into `LitTool` classes.

Example:
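(The shell output referenced in this thread did not survive extraction, so below is a reconstructed sketch of the intended usage; the `WeatherQuery` model, import paths, and the exact `chat`/`call_tool` signatures are assumptions based on this conversation, not verified API.)

```python
from pydantic import BaseModel, Field

from litai import LLM, LitTool  # import paths assumed


class WeatherQuery(BaseModel):  # made-up example model
    """Extract the city a user is asking about."""

    city: str = Field(description="Name of the city")


# Convert the existing Pydantic class into a tool the LLM can call.
tool = LitTool.from_model(WeatherQuery)

llm = LLM()  # constructor arguments omitted
response = llm.chat(prompt="What's the weather like in Paris?", tools=[tool])
# Running the tool with the returned arguments validates them into a WeatherQuery instance.
result = llm.call_tool(response, tools=[tool])
print(result)  # e.g. WeatherQuery(city='Paris')
```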
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues there's a high chance it will not be merged.
Did you have fun?
yes!
Additional Information
what feels wrong to me ergonomics-wise with the example above is this use-case being served by the `.chat` interface instead of something dedicated for this purpose. open to suggestions.

in theory, the goal is to just go FROM text TO json that adheres to the pydantic model.
That question is distinct from the contribution in the PR, though.
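(A small, litai-independent sketch of that text-to-JSON goal, showing only the validation half with a made-up `Address` model; the LLM call that produces the string is omitted.)

```python
from pydantic import BaseModel, ValidationError


class Address(BaseModel):  # made-up target schema
    street: str
    city: str
    zip_code: str


# Pretend this string came back from an LLM asked to extract an address.
llm_output = '{"street": "1 Rue de Rivoli", "city": "Paris", "zip_code": "75001"}'

try:
    # Pydantic v2's model_validate_json parses and validates in one step.
    address = Address.model_validate_json(llm_output)
    print(address)
except ValidationError as exc:
    # The output did not adhere to the schema; surface the errors to retry or repair.
    print(exc.errors())
```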