Replies: 21 comments
-
<@338099417620152320> please continue the discussion here.
-
Thanks for sending the full lock output from the logs. The delete isn't crashing; it's waiting. Postgres is refusing to drop the table because another connection is still open and hasn't finished its work, and that open connection is blocking the delete. That's why your request hangs. Can you try this on a completely fresh setup (brand new containers, no other requests, no UI calls) and then immediately run the delete?
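To confirm which open connection is holding the lock, you can ask Postgres directly. A minimal sketch, not part of the original discussion: it assumes `psycopg2` is installed, and the DSN is a placeholder for your pgvector container. `pg_blocking_pids()` is built into Postgres 9.6+.

```python
# Diagnostic query: list blocked backends alongside the backends blocking them.
BLOCKING_QUERY = """
SELECT blocked.pid     AS blocked_pid,
       blocked.query   AS blocked_query,
       blocking.pid    AS blocking_pid,
       blocking.query  AS blocking_query,
       blocking.state  AS blocking_state
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY(pg_blocking_pids(blocked.pid));
"""

def find_blockers(dsn: str):
    """Run the query above; returns one row per (blocked, blocking) pair."""
    import psycopg2  # assumed installed: pip install psycopg2-binary
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(BLOCKING_QUERY)
            return cur.fetchall()
```

If the blocker turns out to be an idle-in-transaction session, `SELECT pg_terminate_backend(<pid>);` frees the lock so the drop can proceed.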
-
The only thing accessing cognee is the cognee container itself, as I have taken down my UI and API. The delete script runs as a shell script, and all it does is log in to get a token, check the dataset name to find the ID, and then try to delete it.
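For reference, that flow can be sketched with the standard library only. The delete path matches the URL quoted in the original post; the auth scheme is an assumption, so adjust it to your setup. Setting a timeout makes a held Postgres lock surface as an error instead of hanging the script indefinitely:

```python
import urllib.request

BASE = "http://localhost:9081/api/v1"

def dataset_url(dataset_id: str) -> str:
    """DELETE target for one dataset (path taken from the original post)."""
    return f"{BASE}/datasets/{dataset_id}"

def delete_dataset(token: str, dataset_id: str, timeout: float = 30.0) -> int:
    """Issue the DELETE; with a timeout, a held lock raises an error
    rather than blocking forever."""
    req = urllib.request.Request(
        dataset_url(dataset_id),
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},  # assumed bearer auth
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```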
-
Okay, that helps. Since it's just the cognee container running, can you try one quick thing: restart everything and then call the delete endpoint?
-
No, same issue. Nothing else is running at all, and the delete still hangs when trying to delete, via the API, the only dataset I have in the DB: e154b314-db06-5018-9515-bfd90f230597
-
Same kind of lock logs.
-
And the same line in the cognee logs:
-
After restarting both containers and making only a single request, same result. <@&1459132796818821192> could you take a look? Full lock trace above.
-
Hey <@338099417620152320>, I'll try to take a look at this today. Thanks for flagging the issue!
-
Thanks <@506030217710665738>
-
Is it possible to use my Postgres DB as the graph database provider instead of Kuzu? Maybe that would fix it?
-
I am using Postgres for storage at the moment, but I'm wondering about using it for the graphs too (which I thought it was already doing, to be honest, until I saw the "Deleted Kuzu database files" line in the logs).
-
No. We tried looking into the Apache AGE extension for Postgres to enable graphs there, but the performance for graph operations was not good enough for our needs.
-
It is possible to use Qdrant for the vector storage instead of PGVector while we look into and resolve the delete issue you are having.
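A hypothetical `.env` fragment for that switch. The variable names below are assumptions, not confirmed in this thread; check the Cognee configuration docs for the exact keys:

```shell
# Assumed Cognee vector-store settings; verify key names against the docs.
VECTOR_DB_PROVIDER="qdrant"
VECTOR_DB_URL="http://localhost:6333"   # your Qdrant instance
VECTOR_DB_KEY=""                        # API key, if your instance requires one
```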
-
So you suggest for now just leaving it as Kuzu, or looking into Neptune or something, if the software does everything we need? I'm testing it at the moment, you see, so if Neptune only increases performance we can leave that till last?
-
In that case I'd stick with Kuzu for the graph DB, as there is no need for multiple users to read, write, and access data at the same time; other options would require hosting, which would not be cheap. Kuzu is a simple, cheap, file-based graph DB that performs well in a single-user-per-DB scenario.
-
So Neo4j or something similar is better for multi-tenant scenarios, and Kuzu for single-user access?
-
Kuzu works fine in a multi-tenant/multi-user scenario as well, as long as users are not working on the same dataset (the same Kuzu graph DB). When there is a write operation on a Kuzu DB it takes a lock, and other users can't access the data until the write operation is finished (they have to wait until the lock is released).
-
I have switched to using a local Neo4j instance for dev and it seems to be working quite well at the moment. It also allows deleting datasets now, since switching.
-
We don't yet support local Neo4j for the multi-tenant/multi-user scenario in Cognee, though, only Neo4j Aura hosting, as each dataset has to be a new Neo4j database and Neo4j doesn't allow creating new databases in an instance in their local Community Edition. Note that setting ENABLE_BACKEND_ACCESS_CONTROL to False in Cognee also disables the datasets functionality, not just the permission system.
-
<@338099417620152320> we found and resolved the issue with dataset deletion for PGVector; it will be part of our next release.
-
I am wondering if anyone has had a similar issue with database locking when calling the delete-dataset API. I am calling DELETE on http://localhost:9081/api/v1/datasets/{dataset_id} against a locally running Docker image of image: cognee/cognee:latest. But every time I call it, the image: pgvector/pgvector:pg17 database locks up, and I have to restart the cognee container and the Postgres container for cognee to start working again. This means I can never actually delete any data; instead I have to completely delete all the data from the volume and re-upload my main dataset again.
This discussion was automatically pulled from Discord.