[ISSUE #36] Fix ConcurrentHashMap memory leak and improve aspect cache performance #41
Conversation
Pull Request Overview
This PR addresses memory leak risks and improves performance by replacing unbounded ConcurrentHashMap instances with bounded LRU caches and fixing race conditions in lock operations.
- Introduces LRUCache utility class with O(1) operations using LinkedHashMap
- Replaces ConcurrentHashMap with LRUCache in Spring aspect classes to prevent memory leaks
- Fixes race condition in KeyLocalLock unlock operations using atomic compute operations
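The bounded LRU design described above can be sketched as follows. This is an illustrative, runnable Java sketch (the repository is Kotlin, but the JVM `LinkedHashMap` semantics are identical); the class and parameter names (`BoundedLruCache`, `maxSize`) are hypothetical, not the PR's actual `LRUCache` API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of a bounded LRU cache built on LinkedHashMap.
// Names (BoundedLruCache, maxSize) are hypothetical, not the PR's API.
public class BoundedLruCache<K, V> {
    private final int maxSize;
    private final LinkedHashMap<K, V> cache;

    public BoundedLruCache(int maxSize) {
        this.maxSize = maxSize;
        // accessOrder=true: get() moves an entry to the tail (most recently used).
        this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Evict the least-recently-used entry once capacity is exceeded.
                return size() > BoundedLruCache.this.maxSize;
            }
        };
    }

    public synchronized V get(K key) { return cache.get(key); }
    public synchronized void put(K key, V value) { cache.put(key, value); }
    public synchronized int size() { return cache.size(); }
}
```

`removeEldestEntry` gives eviction in O(1) with no separate bookkeeping, which matches the "O(1) operations using LinkedHashMap" claim above; the coarse `synchronized` wrapper is what the later review comment about `@Synchronized` is questioning.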
Reviewed Changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 2 comments.
File | Description
---|---
LRUCache.kt | New thread-safe LRU cache implementation with configurable max size |
LRUCacheTest.kt | Comprehensive test suite covering LRU behavior and thread safety |
LinkedHashMapLRUTest.kt | Demonstration tests showing LinkedHashMap LRU mechanics |
ReqShieldAspectProperties.kt | Configuration properties for cache sizing |
ReqShieldAspect.kt (3 variants) | Updated to use LRUCache instead of ConcurrentHashMap |
KeyLocalLock.kt (3 variants) | Fixed race condition in unlock using atomic compute operation |
KeyLocalLockTest.kt | Code formatting improvements for test readability |
@Synchronized
fun size(): Int = cache.size

val size: Int get() = cache.size
Copilot AI (Sep 8, 2025)
The size property exposes an unsynchronized getter while all other operations are synchronized. This is inconsistent: size() returns a synchronized value, but the size property returns an unsynchronized one, potentially leading to race conditions. Either remove this property or make it synchronized by delegating to size().
Suggested change:
- val size: Int get() = cache.size
+ val size: Int get() = size()
@Component
class ReqShieldAspect<T>(
    private val reqShieldCache: ReqShieldCache<T>,
    private val aspectProperties: ReqShieldAspectProperties = ReqShieldAspectProperties(),
Copilot AI (Sep 8, 2025)
Using a default parameter with direct constructor instantiation bypasses Spring's dependency injection mechanism, so the configuration properties won't be injected from application properties. Consider making this parameter required or using @Autowired to ensure proper Spring configuration.
Suggested change:
- private val aspectProperties: ReqShieldAspectProperties = ReqShieldAspectProperties(),
+ private val aspectProperties: ReqShieldAspectProperties,
} | ||
} | ||
|
||
@Synchronized |
Most methods use @Synchronized. Could this cause a performance issue?
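One reason every operation, including reads, takes the lock: with accessOrder=true, a plain get() is a structural mutation, because it moves the accessed entry to the most-recently-used position. So unlike a ConcurrentHashMap (which uses fine-grained internal locking), an access-ordered LinkedHashMap cannot safely be read without synchronization, and the coarse @Synchronized is the price of O(1) LRU ordering. A minimal sketch demonstrating the reordering:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder=true: iteration order is least- to most-recently used.
        Map<String, Integer> lru = new LinkedHashMap<>(16, 0.75f, true);
        lru.put("a", 1);
        lru.put("b", 2);
        lru.put("c", 3);
        lru.get("a"); // get() structurally reorders: "a" becomes most recent
        System.out.println(lru.keySet()); // prints [b, c, a]
    }
}
```

Whether the coarse lock is an actual bottleneck depends on contention; under light concurrent access the monitor is uncontended and cheap.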
🐛 Problem
Fixes #36 - ConcurrentHashMap entry eviction policy confirmation
Issues Identified:
- Cache maps (keyGeneratorMap, reqShieldMap) use unbounded ConcurrentHashMap
- The unLock() method has a non-atomic read-remove operation

✨ Solution
1. LRU Cache with Automatic Eviction
- Replaces ConcurrentHashMap with LRUCache (default: 1000 entries)
- Uses LinkedHashMap with accessOrder=true for O(1) LRU operations

2. Race Condition Prevention
// Before: Non-atomic operation
// After: Atomic operation
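The before/after code itself is elided above, but the pattern the PR names (an atomic compute operation replacing a read-then-remove) can be sketched as follows. All names here (lockMap, unlockRacy, unlockAtomic) are hypothetical illustrations, not KeyLocalLock's actual API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a read-then-remove race and its compute() fix;
// names are illustrative, not KeyLocalLock's actual API.
public class UnlockSketch {
    private final ConcurrentHashMap<String, AtomicInteger> lockMap = new ConcurrentHashMap<>();

    public void lock(String key) {
        lockMap.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
    }

    // Before: non-atomic. Another thread can re-acquire the lock between
    // the get() and the remove(), and its entry gets removed anyway.
    public void unlockRacy(String key) {
        AtomicInteger holders = lockMap.get(key);
        if (holders != null && holders.decrementAndGet() == 0) {
            lockMap.remove(key); // may discard an entry another thread just took
        }
    }

    // After: atomic. The decrement, the zero check, and the removal all
    // happen inside a single compute() invocation on the key's bin.
    public void unlockAtomic(String key) {
        lockMap.compute(key, (k, holders) -> {
            if (holders == null || holders.decrementAndGet() == 0) {
                return null; // returning null removes the mapping atomically
            }
            return holders;
        });
    }

    public boolean isLocked(String key) {
        return lockMap.containsKey(key);
    }
}
```

ConcurrentHashMap.compute guarantees the remapping function runs atomically for that key, which is exactly the property the non-atomic get/remove pair lacks.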
📈 Performance Impact
📋 Migration Notes
For Library Users:
Configuration Options:
// Optional: Customize cache size for your environment
🎯 Resolves