Cost: Reduce LLM Token Usage in Log/Event Analysis #739
Labels
- area/ai — AI-related features
- help wanted — extra attention is needed
- medium — requires a moderate level of project knowledge and skills, but does not require deep core technical knowledge
- priority/important-longterm — P2: important over the long term, but may not be staffed and/or may need multiple releases to complete
What would you like to be added?
Implement frontend pre-processing for logs and events to extract key information before sending to LLM API, reducing token consumption and improving cost efficiency.
Why is this needed?
Currently, the log and event aggregators send their full content to LLM APIs (bounded only by length limits), causing unnecessary token consumption. Pre-processing the content on the frontend will make our AI features more cost-effective while maintaining analysis quality.
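As a rough illustration of the kind of pre-processing proposed here, the sketch below filters raw log text down to high-signal lines, collapses duplicates, and enforces a character budget before anything is sent to the LLM API. The function name, the signal regex, and the budget are all hypothetical choices for this example, not part of the existing codebase.

```typescript
// Hypothetical log pre-processor (not existing project code).
// Keeps only high-signal lines (errors/warnings), collapses exact
// duplicates with a repeat count, and caps output size so the prompt
// sent to the LLM stays within a predictable token budget.
export function preprocessLogs(raw: string, budgetChars = 2000): string {
  const keyLine = /\b(error|warn|fatal|exception|fail)/i; // assumed signal heuristic
  const counts = new Map<string, number>();
  for (const line of raw.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || !keyLine.test(trimmed)) continue; // drop low-signal lines
    counts.set(trimmed, (counts.get(trimmed) ?? 0) + 1);
  }
  const out: string[] = [];
  let used = 0;
  for (const [line, n] of counts) {
    const rendered = n > 1 ? `${line} (x${n})` : line; // collapse duplicates
    if (used + rendered.length > budgetChars) break;    // enforce budget
    out.push(rendered);
    used += rendered.length + 1; // +1 for the joining newline
  }
  return out.join("\n");
}
```

A character budget is only a crude proxy for tokens; a real implementation might use the provider's tokenizer to count precisely, and could extend the same idea to event payloads by extracting key fields instead of whole objects.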