Conversation

@devbyteai
Contributor

Summary

Fixes #3847, where cancelling release() during execution could leave the async lock in an inconsistent state that results in a permanent deadlock.

Root Cause

The release() method cleared self.local.token synchronously before returning the do_release() awaitable. If the caller cancelled that await, the token was already gone but the Redis key still existed, causing the following (sketched in the example after this list):

  1. lock.owned() returns False (no token)
  2. lock.locked() returns True (Redis key exists)
  3. Lock cannot be released or acquired - permanent deadlock
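
To make the failure mode concrete, here is a minimal, self-contained sketch of the pre-fix flow. The class and attribute names are simplified stand-ins for the real redis-py Lock; this is not the actual implementation:

```python
# Illustrative sketch of the pre-fix behaviour, not the real redis-py code.
# The token is cleared synchronously, so cancelling the returned awaitable
# leaves the Redis key behind with no local token left to release it with.
import asyncio


class BuggyLock:
    def __init__(self):
        self.token = "abc123"         # stands in for self.local.token
        self.redis_key_exists = True  # stands in for the key in Redis

    def release(self):
        expected = self.token
        self.token = None             # cleared before any I/O happens
        return self.do_release(expected)

    async def do_release(self, expected_token):
        await asyncio.sleep(0.1)      # pretend network round trip to Redis
        self.redis_key_exists = False


async def main():
    lock = BuggyLock()
    task = asyncio.create_task(lock.release())
    await asyncio.sleep(0)            # let the release start, then cancel it
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    # owned() would now be False, locked() would be True: the deadlock above.
    print(lock.token, lock.redis_key_exists)  # prints: None True


asyncio.run(main())
```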

Solution

Convert release() to an async method and clear the local token only after the release succeeds; a sketch of the resulting control flow follows the edge-case list below.

Edge Cases Handled

  1. Normal release - clear token after success
  2. LockNotOwnedError - clear token (lock gone from Redis)
  3. Network error - preserve token (lock might still exist)
  4. CancelledError - preserve token (release didn't complete)
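
A sketch of how these cases map onto the fixed control flow. The exception names follow redis.exceptions, but the body is illustrative rather than a copy of the merged implementation, and is meant to live on the asyncio Lock class (self.local.token and do_release() come from that class):

```python
from redis.exceptions import LockError, LockNotOwnedError


async def release(self) -> None:
    """Illustrative sketch of the fixed release(); not the merged code."""
    expected_token = self.local.token
    if expected_token is None:
        raise LockError("Cannot release an unlocked lock")
    try:
        await self.do_release(expected_token)  # the actual Redis round trip
    except LockNotOwnedError:
        # Case 2: the key is already gone from Redis, so drop the local
        # token to keep both sides consistent, then re-raise.
        self.local.token = None
        raise
    # Cases 3 and 4: network errors and CancelledError propagate from the
    # await without touching the token, so the lock can be released later.
    else:
        # Case 1: the release succeeded; only now is it safe to clear the token.
        self.local.token = None
```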

Testing

  1. Added the test_release_cancellation_preserves_lock_state regression test (the idea is sketched after this list)
  2. All 60 existing async lock tests pass
  3. Tested against a local Redis instance
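
The merged test differs in its details; this hypothetical sketch (different name, simplified setup, pytest-asyncio assumed) shows the idea of cancelling release() mid-flight and asserting the lock is still usable:

```python
# Hypothetical regression-test sketch; the merged
# test_release_cancellation_preserves_lock_state differs in its details.
import asyncio

import pytest
from redis.asyncio import Redis


@pytest.mark.asyncio
async def test_cancelled_release_keeps_lock_usable():
    client = Redis()  # assumes a local Redis instance, as in the manual testing above
    lock = client.lock("example-lock", timeout=10)
    assert await lock.acquire()

    task = asyncio.create_task(lock.release())
    task.cancel()
    with pytest.raises(asyncio.CancelledError):
        await task

    # The token must survive the cancelled release...
    assert await lock.owned()
    # ...so a second release still succeeds and the key disappears.
    await lock.release()
    assert not await lock.locked()
    await client.aclose()
```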

Backward Compatibility

  1. All existing callers already use await lock.release() - no changes needed (see the snippet after this list)
  2. Return type effectively unchanged (awaitable that returns None)
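
For illustration, a call site looks the same before and after the change, since await handled the old awaitable and now awaits the coroutine directly (hypothetical snippet, assuming a local Redis instance):

```python
import asyncio

from redis.asyncio import Redis


async def main():
    client = Redis()                      # assumes a local Redis instance
    lock = client.lock("resource", timeout=10)
    if await lock.acquire(blocking=True):
        try:
            pass                          # critical section goes here
        finally:
            await lock.release()          # same call site before and after the fix
    await client.aclose()


asyncio.run(main())
```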

Fixes redis#3847 where cancelling release() during execution could leave
the async lock in an inconsistent deadlock state.

Root cause:
The release() method cleared self.local.token synchronously before
returning the awaitable do_release(). If the caller cancelled the await,
the token was already gone but the Redis key still existed, causing:
- lock.owned() returns False (no token)
- lock.locked() returns True (Redis key exists)
- Lock cannot be released or acquired - permanent deadlock

Solution:
Convert release() to async and clear token only AFTER successful release.
Special case: Also clear token on LockNotOwnedError since the lock
doesn't exist in Redis anyway.

Edge cases handled:
1. Normal release: clear token after success
2. LockNotOwnedError: clear token (lock gone from Redis)
3. Network error: preserve token (lock might still exist)
4. CancelledError: preserve token (release didn't complete)

Testing:
- Added test_release_cancellation_preserves_lock_state regression test
- All 60 existing async lock tests pass

Signed-off-by: Devbyteai <[email protected]>
@jit-ci

jit-ci bot commented Dec 30, 2025

Hi, I’m Jit, a friendly security platform designed to help developers build secure applications from day zero with an MVS (Minimal viable security) mindset.

In case there are security findings, they will be communicated to you as a comment inside the PR.

Hope you’ll enjoy using Jit.

Questions? Comments? Want to learn more? Get in touch with us.

@petyaslavova
Collaborator

Hi @devbyteai, thank you for your contribution! We will review it soon.

Contributor

Copilot AI left a comment

Pull request overview

This PR fixes a deadlock issue (#3847) where cancelling lock.release() during execution could leave the async lock in an inconsistent state with the local token cleared but the Redis key still present.

Key changes:

  • Converted release() from returning an awaitable to being an async method that clears the token only after successful Redis release
  • Added proper exception handling to clear token on LockNotOwnedError but preserve it on network errors or cancellation
  • Added a regression test to verify lock state consistency when release is cancelled

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

Files reviewed:

  • redis/asyncio/lock.py: modified release() into an async method with deferred token clearing and proper error handling to prevent the deadlock
  • tests/test_asyncio/test_lock.py: added regression test test_release_cancellation_preserves_lock_state to verify cancellation handling


@devbyteai
Contributor Author

@petyaslavova Thank you :)

Collaborator

@petyaslavova petyaslavova left a comment

LGTM.

@petyaslavova petyaslavova merged commit 334cb3b into redis:master Jan 9, 2026
104 of 106 checks passed

Successfully merging this pull request may close these issues.

Async lock can enter a deadlock if it is cancelled while being released
