Conversation
@tobySolutions Great work on this! I'm loving it. @multipletwigs, can you check that the code in the run-thread flow is aligned with your architecture for that code?
Thanks Gui, really appreciate it!!!
@tobySolutions I'm actually a bit confused about what we're trying to achieve with this, since:

```typescript
if (assistantData.model === "llama-2-7b-chat-int8") {
  const validatorAdapterRes: any = await validatorAdapter(
    everyRoleAndContent,
    "llama-2-7b-chat-int8",
    assistantData.instruction
  );
  const assistantResponse: string =
    validatorAdapterRes.choices[0].message.content;
  // add assistant response to the thread
  await createMessage(assistant_id, thread_id, assistantResponse);
  const threadRunResponse: ThreadRun = {
    id: uuidv4(),
    assistant_id: assistant_id,
    thread_id: thread_id,
    created_at: new Date(),
  };
  return threadRunResponse;
}
```

This can get messy very quickly once we have a lot of models to validate.

@GuiBibeau In a way, yes, the architecture is roughly how a thread run should work. I ran through the validator code, and it seems the validator takes the original thread content, finds the fastest miner, and makes a request to it. Just to confirm my understanding: Akeru can make a request to a validator and ask "hey, I have a thread with these messages, can anyone give me a response?", and the validator decides which miner to request from and returns the response to Akeru. That's different from our existing assistants, right? So it makes sense to keep them in two different domains anyway.
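One way to avoid the per-model if-branches @multipletwigs is worried about is a registry that maps model names to adapter functions. This is only a sketch under that idea, not the actual Akeru code: the `Adapter` signature, `adapterRegistry`, and both entries are hypothetical stand-ins.

```typescript
// Hypothetical sketch of registry-based dispatch: each model name maps to
// an adapter function, so adding a model is one entry, not another branch.
type Message = { role: string; content: string };
type Adapter = (messages: Message[], instruction: string) => Promise<string>;

const adapterRegistry: Record<string, Adapter> = {
  // stand-in for the validatorAdapter call shown above
  "llama-2-7b-chat-int8": async (messages, _instruction) =>
    `validator response to ${messages.length} message(s)`,
  // stand-in for the existing GPT adapter
  "gpt-3.5-turbo": async (_messages, _instruction) => "gpt response",
};

async function runAdapter(
  model: string,
  messages: Message[],
  instruction: string
): Promise<string> {
  const adapter = adapterRegistry[model];
  if (!adapter) {
    throw new Error(`No adapter registered for model: ${model}`);
  }
  return adapter(messages, instruction);
}
```

With this shape, the run-thread service stays the same size no matter how many models the validators support.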
If we were to still make Bittensor validators part of our assistants, then the best course is to just differentiate the models with
@tobySolutions @multipletwigs Just to confirm: I love the idea of prefixing the models coming from the subnet with something like

On the main issue at hand: the idea is that the validator/miner combo can act fully on the same level as the gpt-3.5/4.0 adapter. So if somebody creates an assistant with ex:

Does that make sense?
@GuiBibeau In that case, yes, this makes sense now! So the models available should be
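The prefixing idea discussed above could be sketched as follows. The `bittensor/` prefix is an assumption for illustration only, since the thread does not show the exact prefix that was agreed on.

```typescript
// Hypothetical sketch: namespace subnet-backed models with a prefix so the
// run service can tell Bittensor-backed models apart from hosted ones and
// route them to the validator adapter. "bittensor/" is an assumed prefix.
const SUBNET_PREFIX = "bittensor/";

function isSubnetModel(model: string): boolean {
  return model.startsWith(SUBNET_PREFIX);
}

function baseModelName(model: string): string {
  // strip the namespace before passing the model name to the validator
  return isSubnetModel(model) ? model.slice(SUBNET_PREFIX.length) : model;
}
```

A namespaced model list keeps the two domains visible to users while letting both kinds of assistants share one creation flow.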
packages/akeru-server/src/core/application/services/runService.ts (review thread resolved)
packages/akeru-server/src/infrastructure/adaptaters/validators/validatorAdapter.ts (review thread resolved)
Made updates with all your feedback @multipletwigs!! Thanks for everything.
LGTM
Awesome, thanks!
Description of the pull request
Changes made
Fixes #93
Related issues
Testing done
Screenshots (if any)
Checklist