add Agentic #9245
Conversation
I fixed messages.json and renamed the commands and custom document settings so they are all prefixed with "agentic_...".
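For reference, Sublime derives command names from class names, so the prefixing works out roughly like the hypothetical sketch below (AgenticExampleActionCommand is made up for illustration, not one of Agentic's real commands):

```python
import sublime_plugin

# Hypothetical example, not an actual Agentic command: Sublime exposes a class
# named AgenticExampleActionCommand as "agentic_example_action", so the
# "agentic_" class-name prefix keeps every command under one namespace.
class AgenticExampleActionCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        # Insert a marker at the top of the active view.
        self.view.insert(edit, 0, "agentic_example_action ran\n")
```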
I also tested the plugin with Sublime Text 3211 and it runs fine. I think Agentic should mostly work on any build >= 3000. I am only using Python 3.3 features and the Python standard library, along with basic Sublime features for file manipulation.
That's fine. I would actually recommend opting in to the Python 3.8 runtime if you can, and testing on Python 3.13+, as that runtime version is just around the corner for ST. Future-proofing is more important than worrying about versions from more than 6 years ago 😄 People who don't upgrade their ST installation for that long probably aren't looking for the latest LLM implementation 😉
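(For reference, opting in to the 3.8 plugin host just means shipping a .python-version file at the root of the package; its entire content is the one line below.)

```
3.8
```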
Please check for prints to the console (e.g. https://github.com/alecGraves/Agentic/blob/master/chat_stream.py#L228) that can be removed (or put behind a debug mode). It looks like your package needs quite a bit of setup to work, but after reading through the readme and settings file I think I've got most of it figured out... It could perhaps use a getting-started guide, since unless you're lucky and have models running at the exact URLs in your examples, not much will work out of the box. Really interesting package though, nice 👍🏻
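A minimal sketch of one way to gate that output, assuming a hypothetical "debug" key in an Agentic.sublime-settings file (both names are placeholders, not the package's actual settings):

```python
import sublime

def debug_print(*args):
    # Sketch only: "Agentic.sublime-settings" and its "debug" key are
    # assumptions for illustration. Console output is emitted only when
    # the user has turned the flag on.
    if sublime.load_settings("Agentic.sublime-settings").get("debug", False):
        print("[Agentic]", *args)

# Usage: replace bare print(...) calls with debug_print(...).
```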
Thanks for the review. I appreciate the suggestions.
oops, fixed name
"context"key that looks for file settings set by the plugin.My package is Agentic, a minimal OpenAI API
v1/chat/completionsinterface for launching multiple custom LLM 'actions' on a file or code snippet in a project and streaming results.My package is similar to others such as OpenAI completion however it should still be added because it focuses on achieving a minimal interface for rapidly launching repetitive user-defined custom agentic actions like 'simplify this code' or 'comment this code'. My package further focuses on streaming from multiple LLMs concurrently to enable multi-LLM workflows, and it supports configuration of pools of models for specific actions through package settings.
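To make the description concrete, here is a rough standard-library-only sketch of streaming from an OpenAI-compatible v1/chat/completions endpoint; the URL, model name, and function below are placeholders rather than Agentic's actual code:

```python
import json
import urllib.request

def stream_chat(prompt, url="http://localhost:8080/v1/chat/completions",
                model="local-model"):
    # Placeholder URL and model; add an Authorization header if your server
    # requires an API key.
    payload = json.dumps({
        "model": model,
        "stream": True,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # server-sent events arrive one "data: ..." line at a time
            line = raw.decode("utf-8").strip()
            if not line.startswith("data: ") or line == "data: [DONE]":
                continue
            chunk = json.loads(line[len("data: "):])
            delta = chunk["choices"][0]["delta"].get("content", "")
            if delta:
                yield delta

# e.g. print tokens as they arrive:
# for token in stream_chat("Comment this code"):
#     print(token, end="", flush=True)
```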