Automatic and continuous indexing of existing data sources #608
We needed this feature in my company because we have some wiki-style resources that are updated regularly, and we wanted Kernel Memory to stay in sync with the latest changes. Right now, for this use case, I had to add some custom logic to my project, which works like this: when a user adds new knowledge from a URL and checks a checkbox to keep it up to date, I create a record in my project. Now, when we provide a parent URL of our wiki, it also checks all child links recursively, and if they are of the same origin it adds them as documents; we also have to track which document was the parent of each of them. When a user deletes the parent, we also want to delete all the children. It would be great to integrate these features into Kernel Memory itself when it runs as a service.
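To make the idea concrete, here is a minimal sketch of the tracking logic described above: a same-origin check for child links, plus a parent-to-children registry that supports cascade deletion. This is not Kernel Memory code; all names here are hypothetical, and the actual crawling and document import are left out.

```python
from urllib.parse import urlparse


def is_same_origin(parent_url: str, child_url: str) -> bool:
    """True when both URLs share the same scheme and host."""
    p, c = urlparse(parent_url), urlparse(child_url)
    return (p.scheme, p.netloc) == (c.scheme, c.netloc)


class ParentChildTracker:
    """Hypothetical registry of which documents were imported under
    which parent URL, so deleting the parent can cascade to children."""

    def __init__(self) -> None:
        self._children: dict[str, set[str]] = {}

    def add_child(self, parent_url: str, child_url: str) -> bool:
        # Only track child links that belong to the same origin as the parent.
        if not is_same_origin(parent_url, child_url):
            return False
        self._children.setdefault(parent_url, set()).add(child_url)
        return True

    def delete_parent(self, parent_url: str) -> set[str]:
        # Returns the set of child documents that should also be deleted.
        return self._children.pop(parent_url, set())
```

A recursive crawler would call `add_child` for every discovered link and import only those for which it returns `True`; a delete handler would call `delete_parent` and remove the returned documents from memory.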
Context / Scenario
A common use case for AI is to provide answers based on an existing knowledge base (for example a company wiki, a git repository, or a system database).
The problem
Right now, the only way to fill Kernel Memory is to import all documents one by one.
Proposed solution
It would be nice to have a mechanism to synchronize Kernel Memory with an external data source (e.g. Azure Blob Storage, a git repository, a wiki, SharePoint). The idea is to provide an abstraction that people could use to implement their own data sources. The concepts it has to provide are:
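As a rough illustration of what such an abstraction could look like, here is a hypothetical sketch (none of these names come from Kernel Memory): a data-source interface that a connector would implement, plus a pure diff step a sync job could run to decide what to (re)import and what to delete.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class SourceDocument:
    """A document as seen in the external source."""
    id: str
    content: bytes
    last_modified: float  # e.g. a POSIX timestamp


class SyncableDataSource(ABC):
    """Hypothetical interface a connector (blob storage, git repository,
    wiki, SharePoint, ...) would implement for a background sync job."""

    @abstractmethod
    def list_documents(self) -> list[SourceDocument]:
        """Enumerate all documents currently present in the source."""


def diff_since_last_sync(
    current: list[SourceDocument],
    known: dict[str, float],
) -> tuple[list[SourceDocument], set[str]]:
    """Given the documents now in the source and the id -> last_modified
    map recorded at the previous sync, return (documents to import or
    re-import, ids of documents deleted upstream)."""
    to_import = [d for d in current if known.get(d.id) != d.last_modified]
    to_delete = set(known) - {d.id for d in current}
    return to_import, to_delete
```

A sync job could then periodically call `list_documents()`, run the diff against its stored state, import the changed documents, and delete the removed ones.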
Importance
None