Hey @jbdatascience! I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this together!
I couldn't find specific information on integrating paper-qa with GROQ LLMs. However, paper-qa is designed to be configurable with different language models through its Settings class. You can customize the LLM configuration by setting parameters like llm, llm_config, summary_llm, and summary_llm_config to specify the models and configurations you want to use [1].
To integrate a non-default LLM like GROQ, you would typically need to ensure that the LLM is compatible with the interfaces expected by paper-qa. This might involve implementing or adapting an interface that allows paper-qa to communicate with the GROQ LLM, similar to how it interacts with other LLMs via LiteLLM.
If GROQ LLMs have a specific API or SDK, you would need to configure paper-qa to use that API for embedding, summarization, and answer generation tasks. This might require custom code to handle the specifics of the GROQ LLM API.
Our docs here are pretty suboptimal, but hopefully you can get 90% of the way there by reading our issues and docs. Basically you'll want to configure the LLM stuff here to align with Groq.
Please feel free to let us know where you get stuck, and we can try to help.
I would like to use paper-qa with GROQ LLMs. Is this possible and if so, how exactly to do that?
For reference see:
https://console.groq.com/docs/models
Etc.
Python code examples using GROQ LLMs would also be appreciated.