
memory fragments are very very large #4418

Open
kissingtiger opened this issue Jan 7, 2025 · 8 comments
Labels
bug Something isn't working

Comments

@kissingtiger

Current running instance version: dragonfly v1.20.1-501b7f7b4fb049de2a8a5fff15d945cd7da1046a
However, memory fragmentation is very high: the stored keys take up 260GB, yet the entire instance occupies up to 400GB of operating system memory.

Memory

used_memory:276957747272
used_memory_human:257.94GiB
used_memory_peak:359901227288
used_memory_peak_human:335.18GiB
fibers_stack_vms:694637168
fibers_count:21209
used_memory_rss:330937298944
used_memory_rss_human:308.21GiB
used_memory_peak_rss:371803025408
maxmemory:429496729600
maxmemory_human:400.00GiB
used_memory_lua:4856048
object_used_memory:276486516808
type_used_memory_STRING:135704128
type_used_memory_HASH:276350812680
table_used_memory:457400296
num_buckets:491520
num_entries:4240754
inline_keys:0
listpack_blobs:0
listpack_bytes:0
small_string_bytes:135704128
pipeline_cache_bytes:549977579
dispatch_queue_bytes:1147574
dispatch_queue_subscriber_bytes:0
dispatch_queue_peak_bytes:549240309
client_read_buffer_peak_bytes:946024960
tls_bytes:5664
snapshot_serialization_bytes:0
cache_mode:cache
maxmemory_policy:eviction

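For reference, the fragmentation ratio can be derived from the raw byte counters in the INFO output above (a minimal sketch; the values are copied verbatim from this report):

```python
# Raw byte counters copied from the INFO MEMORY output above.
used_memory = 276_957_747_272      # used_memory (257.94 GiB)
used_memory_rss = 330_937_298_944  # used_memory_rss (308.21 GiB)

GIB = 1024 ** 3

# RSS / used-memory ratio, analogous to Redis' mem_fragmentation_ratio.
frag_ratio = used_memory_rss / used_memory
gap_gib = (used_memory_rss - used_memory) / GIB

print(f"fragmentation ratio: {frag_ratio:.2f}")  # ~1.19
print(f"rss - used gap: {gap_gib:.1f} GiB")      # ~50.3 GiB
```

So roughly 50GiB of RSS is not accounted for by live object memory, which is what the rest of the thread discusses.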
@kissingtiger kissingtiger added the bug Something isn't working label Jan 7, 2025
@adiholden
Collaborator

Hi @kissingtiger
Memory defragmentation logic is currently applied only when reaching 70% of maxmemory.
You can run the command memory defragment to force execution of the defragmentation logic.
Moreover, we made many improvements to defragmentation after v1.20 so that it applies to many more object types. I suggest moving to our latest version.
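The 70% threshold can be checked against the counters posted above (a hedged sketch; it assumes the check compares used_memory against maxmemory, which is not spelled out in this thread):

```python
# Counters copied from the INFO MEMORY output earlier in this issue.
used_memory = 276_957_747_272  # 257.94 GiB
maxmemory = 429_496_729_600    # 400.00 GiB

# Assumed form of the check: defragmentation runs only above 70% of maxmemory.
threshold = 0.7 * maxmemory    # 280.00 GiB
print(used_memory >= threshold)  # False: 257.94 GiB < 280.00 GiB
```

Under that assumption the instance sits just below the threshold, which would explain why automatic defragmentation has not kicked in.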

@kissingtiger
Author

kissingtiger commented Jan 8, 2025

@adiholden
I executed the memory defragment command, but I noticed no change in operating system memory usage.
Is there any optimization method available in the current version (dragonfly v1.20.1-501b7f7b4fb049de2a8a5fff15d945cd7da1046a)?
Upgrading the version would require restarting the service, which has a significant impact on us.

@romange
Collaborator

romange commented Jan 8, 2025

You can try replicating the data into a replica and failing over to it.

@adiholden
Collaborator

I now see in the stats that you attached that the process does not take 400GB but about 300GB:
used_memory_rss_human:308.21GiB
used_memory_human:257.94GiB

In this state the defragmentation logic will not be executed.
I suggest the following options:

  1. You can try running memory decommit; this will free up some memory if it is not used.
  2. Consider what Roman suggested: use a replica for the version upgrade.

@kissingtiger
Author

Is there any other way to optimize the currently running instance? Creating a new replica node would require restarting all of our applications.

@adiholden
Collaborator

Did you try running memory decommit? That is the only option I can think of that can help you reduce memory usage.
Also, since the RSS used is about 308GB and used memory is about 258GB (a ~50GB gap) while maxmemory is 400GB, the instance is not at risk of OOM.
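To quantify that headroom (a quick sketch using the counters posted at the top of this issue; note this measures distance to maxmemory, not to the host's physical RAM limit):

```python
# Counters from the INFO MEMORY output at the top of this issue.
used_memory_rss = 330_937_298_944  # 308.21 GiB
maxmemory = 429_496_729_600        # 400.00 GiB

GIB = 1024 ** 3
headroom_gib = (maxmemory - used_memory_rss) / GIB
print(f"headroom below maxmemory: {headroom_gib:.1f} GiB")  # ~91.8 GiB
```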

@kissingtiger
Author

I have tried the memory decommit command, but the effect was not significant; the memory used by the operating system has not decreased.
The current Dragonfly instance stores 200GB of key memory, but the operating system shows the Dragonfly process occupying 400GB of memory. If that memory cannot be actively released, I am concerned that the operating system may OOM-kill the Dragonfly process.

@romange
Collaborator

romange commented Jan 12, 2025

I am afraid we cannot provide solutions beyond those that were already suggested.
