
Conversation

@KarthikeyaKollu
Contributor

Fixes #214 - Error: cannot pickle '_thread.RLock' object when using OpenAIChat with Memori + Agno in streaming mode.

The issue occurred because InvokeAsync was trying to call handle_post_response() on the raw stream object, which contains unpicklable _thread.RLock objects.

The fix detects when stream=True is in kwargs and returns a wrapped async generator that:

  1. Iterates through the stream and yields each chunk
  2. Collects chunks into raw_response using merge_chunk
  3. Only calls handle_post_response after the stream is fully consumed

This matches the behavior of InvokeAsyncStream but allows InvokeAsync to handle both streaming and non-streaming scenarios dynamically.
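A minimal sketch of that wrapping pattern, with simplified signatures — handle_post_response and merge_chunk stand in for the real Memori internals, and invoke_model is a hypothetical placeholder for the underlying LLM call:

```python
from typing import Any, AsyncIterator, Awaitable, Callable


async def invoke_async(
    invoke_model: Callable[..., Awaitable[Any]],   # hypothetical underlying LLM call
    handle_post_response: Callable[[Any], None],   # named in this PR; signature assumed
    merge_chunk: Callable[[dict, Any], None],      # named in this PR; signature assumed
    **kwargs: Any,
) -> Any:
    response = await invoke_model(**kwargs)

    if not kwargs.get("stream"):
        # Non-streaming: the response is a fully materialized object,
        # safe to post-process (and pickle) immediately.
        handle_post_response(response)
        return response

    async def stream_wrapper() -> AsyncIterator[Any]:
        raw_response: dict = {}
        async for chunk in response:
            yield chunk                        # 1. forward each chunk to the caller
            merge_chunk(raw_response, chunk)   # 2. accumulate chunks into raw_response
        # 3. Post-process only after the stream is fully consumed, so the
        #    live stream object (which holds unpicklable _thread.RLock
        #    instances) is never handed to handle_post_response.
        handle_post_response(raw_response)

    return stream_wrapper()
```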

@devwdave
Collaborator

devwdave commented Dec 8, 2025

@KarthikeyaKollu Thanks for the contribution! The team will review this PR later in the week and provide feedback if necessary.

@devwdave
Collaborator

@KarthikeyaKollu Can you confirm whether this issue persists if you state that you're using streaming in the Memori initialization?

mem = Memori(conn=conn_factory).llm.register(my_llm_client, stream=True)

@KarthikeyaKollu
Contributor Author

Hi @devwdave, I've tested this thoroughly, and here are the results:

  1. When I tried using mem = Memori(conn=conn_factory).llm.register(my_llm_client, stream=True) on the main branch, it failed with a TypeError, as the register method on main doesn't accept a stream argument.

  2. I confirmed that the original issue still persists on the main branch when streaming is enabled correctly in the agent.arun(..., stream=True) call. It produces the cannot pickle '_thread.RLock' object error (repro sketch below).

  3. This branch (fix/agno-openai-streaming-pickle-error-214) completely resolves the issue: streaming works as expected without any pickle errors.

Everything seems to be working on the fix branch; let me know if you need anything else!
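For reference, a hedged sketch of the repro from point 2 above — the Agno wiring, import paths, and the body of conn_factory are assumptions pieced together from this thread, not verbatim test code:

```python
import asyncio
import sqlite3

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from memori import Memori  # import path assumed


def conn_factory():
    # Stand-in connection factory; the thread only references it by name.
    return sqlite3.connect("memori.db")


agent = Agent(model=OpenAIChat(id="gpt-4o"))
mem = Memori(conn=conn_factory).llm.register(agent.model)  # registration pattern from this thread


async def main() -> None:
    # On main, consuming the stream raised:
    #   TypeError: cannot pickle '_thread.RLock' object
    # On fix/agno-openai-streaming-pickle-error-214, chunks stream through cleanly.
    response = await agent.arun("Tell me a joke.", stream=True)
    async for chunk in response:
        print(chunk.content or "", end="", flush=True)


asyncio.run(main())
```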

@KarthikeyaKollu
Contributor Author

@devwdave

@devwdave
Collaborator

@KarthikeyaKollu Thanks again for this contribution. We’ve been working through a few related fixes and believe this may have been addressed in a more recent release.

I noticed you were on v3.0.6 when the issue was opened; would you mind upgrading to the latest (v3.1.2) and letting us know if you can still reproduce the problem?
