Replies: 4 comments 8 replies
-
Hi @Nniol, as the prompt grows in complexity to produce better results, unfortunately some models cannot keep pace. I wonder whether it even makes sense to keep supporting this workaround. @mspronesti, what are your thoughts?
-
I saw no reference to Llama 2 in the documentation.
Do you have an example please?
On Tue, 5 Sept 2023, 13:02 Massimiliano Pronesti wrote:
@gventuri @Nniol
I'm subjectively skeptical of the capabilities of Starcoder and Falcon. I
only use OpenAI and Meta's models (Llama 2 and CodeLlama), which work
pretty well for pandasai.
As the number of new issues and discussions reporting poor performance
when using Starcoder and Falcon has not dropped, I'd personally deprecate
them for good.
One can still use these models via langchain, at their own "risk", whereas
"official" support implies (in my opinion) some guarantees on
performance :)
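The "via langchain, at their own risk" route mentioned above can be pictured as a thin adapter around any LangChain-style LLM. This is a minimal sketch; the class and method names are hypothetical, not pandasai's actual interface:

```python
class LangchainLLMAdapter:
    """Wrap a LangChain-style LLM (a callable taking a prompt string)
    behind the minimal call interface a pandasai-like wrapper needs.
    Class and method names here are hypothetical, for illustration."""

    def __init__(self, langchain_llm):
        self._llm = langchain_llm

    def call(self, prompt: str) -> str:
        # Delegate straight to the wrapped LLM; no retries or output
        # validation -- hence "at your own risk".
        return self._llm(prompt)

# Stand-in for e.g. a LangChain HuggingFaceHub LLM:
fake_llm = lambda prompt: "result = df.head()"
adapter = LangchainLLMAdapter(fake_llm)
print(adapter.call("Show the first rows"))  # result = df.head()
```

The point of the indirection is that the wrapper stays agnostic about which backend generates the code, which is exactly why deprecated models remain reachable this way.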
-
@Nniol Falcon and Starcoder have been deprecated and will be removed in a future release.
-
Hi there, I worked on StarCoder and would be happy to understand what limitations you are facing. Note that StarCoder is a base model trained on code and as such cannot chat. If you need a chatty model for your API, StarChat might solve your issues.
Also note that these models run on the Inference API (backed by TGI) and you can generate more than 10 tokens. You can find the chat demo code using the Inference API here: https://huggingface.co/spaces/HuggingFaceH4/starchat-playground/blob/main/app.py
CodeLlama and Llama 2 are also available via the Inference API. We are working on the next generation of StarCoder, so if you have specific feedback on the model format or limitations, let us know.
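The base-model-vs-chat-model distinction above comes down to prompt formatting: StarCoder completes raw text, while StarChat expects a structured dialogue. A rough sketch of assembling such a prompt; the `<|system|>`/`<|user|>`/`<|assistant|>`/`<|end|>` special tokens follow the linked playground code, but treat the exact format as an assumption:

```python
def build_starchat_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a StarChat-style dialogue prompt.

    The special tokens below are an assumption based on the linked
    starchat-playground demo; a base model like StarCoder would
    instead expect a plain code/text prefix to continue.
    """
    parts = [f"<|system|>\n{system}\n<|end|>"]
    for role, content in turns:  # role is "user" or "assistant"
        parts.append(f"<|{role}|>\n{content}\n<|end|>")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|assistant|>")
    return "\n".join(parts)

prompt = build_starchat_prompt(
    "You are a helpful coding assistant.",
    [("user", "Count the rows of a pandas DataFrame.")],
)
print(prompt)
```

Feeding a dialogue-formatted prompt to a base model (or a raw prompt to a chat model) is a common source of the poor results discussed in this thread.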
-
Hi,
We are using SmartDataframe with OpenAI and it works well; however, we want to showcase other LLMs, so we are trying to use the HuggingFace LLMs.
Every time we run the code we get:
Unfortunately, I was not able to answer your question, because of the following error:
No code found in the response
If we swap Starcoder or Falcon back out for OpenAI, it all works fine.
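The "No code found in the response" error suggests the model's reply contained no extractable Python block, which matches the point above that base models return prose or raw continuations rather than fenced code. A rough sketch of the kind of extraction a pandasai-style wrapper performs; the function and regex are illustrative, not pandasai's actual implementation:

```python
import re

# The backtick fence is built programmatically to keep this example
# readable inside a fenced snippet.
FENCE = "`" * 3

def extract_code(response: str) -> str:
    """Return the first fenced Python block in an LLM response,
    or raise -- roughly the failure mode reported above."""
    pattern = FENCE + r"(?:python)?\n(.*?)" + FENCE
    match = re.search(pattern, response, re.DOTALL)
    if match is None:
        raise ValueError("No code found in the response")
    return match.group(1).strip()

# A chat-tuned model usually wraps its answer in a fence:
good = "Sure!\n" + FENCE + "python\nresult = df.shape[0]\n" + FENCE
print(extract_code(good))  # result = df.shape[0]

# A base model often replies with prose or a raw continuation,
# so extraction finds nothing and the error above is raised:
# extract_code("The dataframe has 10 rows.")  -> ValueError
```

This is why swapping the backend to a chat-capable model makes the same pipeline work: the extraction step finds a code block to run.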