Leak due to ByteBuffer not being released before garbage collection #36475

Closed
tkaesler opened this issue Jul 20, 2023 · 2 comments
Labels
for: external-project  For an external project and not something we can fix
status: invalid  An issue that we don't feel is valid

Comments

@tkaesler

Repository with application that reproduces the issue: https://github.com/tkaesler/spring-leak-reproducer

Spring Boot Starter Parent Version: 3.1.1

When continuously calling a function that reduces a Flux, a leak is detected after a certain time (3 minutes in the case of the reproducer):

2023-07-20T13:02:13.629+02:00 ERROR 51552 --- [tor-tcp-epoll-1] io.netty.util.ResourceLeakDetector       : LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
#1:
	io.netty.buffer.AbstractPooledDerivedByteBuf.deallocate(AbstractPooledDerivedByteBuf.java:87)
	io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:111)
	io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:101)
	io.netty.buffer.WrappedByteBuf.release(WrappedByteBuf.java:1037)
	io.netty.buffer.SimpleLeakAwareByteBuf.release(SimpleLeakAwareByteBuf.java:102)
	io.netty.buffer.AdvancedLeakAwareByteBuf.release(AdvancedLeakAwareByteBuf.java:942)
	io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:90)
        ...
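
For illustration, the calling pattern described above is roughly the following. This is a minimal sketch only; the service, table, and column names are hypothetical and not taken from the reproducer:

import org.springframework.r2dbc.core.DatabaseClient;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Mono;

@Service
class SumService {

    private final DatabaseClient client;

    SumService(DatabaseClient client) {
        this.client = client;
    }

    // Each call streams rows from the database and folds them into a single
    // value; invoking this repeatedly is the "continuously calling a function
    // that reduces a Flux" pattern described above.
    Mono<Long> sumValues() {
        return client.sql("SELECT value FROM demo")
                .map((row, metadata) -> row.get("value", Long.class))
                .all()                      // Flux<Long>, one element per row
                .reduce(0L, Long::sum);     // the reduce operator implicated in the leak report
    }
}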

What I haven't tested properly:

  • Whether the entity has to come from a database; I couldn't reproduce it without one so far
  • Whether newer versions of Reactor fix this issue
  • Whether it's a problem with Reactor itself (it seems like it, but my knowledge there is still somewhat limited)
@spring-projects-issues spring-projects-issues added the status: waiting-for-triage label Jul 20, 2023
@wilkinsona wilkinsona self-assigned this Jul 21, 2023
@wilkinsona
Member

Thanks for the sample. FWIW, it took almost 6 minutes for the problem to occur on my machine (an Intel Mac running macOS 13.4.1 (c)) using Java 17.0.5.
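
As a side note not taken from this thread: Netty's leak detector only samples a fraction of allocations at its default SIMPLE level, so raising the level can make the leak surface more reliably while reproducing. This can be done with the JVM flag -Dio.netty.leakDetection.level=PARANOID or programmatically before any buffers are allocated; the class below is just an example:

import io.netty.util.ResourceLeakDetector;

public class LeakDetectionConfig {

    public static void main(String[] args) {
        // PARANOID tracks every allocated buffer instead of sampling, at the
        // cost of extra overhead, so leaks are reported sooner.
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
        // ... then bootstrap the application as usual
    }
}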

The complete error was the following:

2023-07-21T10:01:23.379+01:00 ERROR 1972 --- [ctor-tcp-nio-14] io.netty.util.ResourceLeakDetector       : LEAK: DataRow.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
Created at:
	io.r2dbc.postgresql.message.backend.DataRow.<init>(DataRow.java:37)
	io.r2dbc.postgresql.message.backend.DataRow.decode(DataRow.java:141)
	io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decodeBody(BackendMessageDecoder.java:65)
	io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decode(BackendMessageDecoder.java:39)
	reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:208)
	reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:224)
	reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:292)
	reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:401)
	reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:411)
	reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:113)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
	io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:333)
	io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:454)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	java.base/java.lang.Thread.run(Thread.java:833)

I agree that this seems like a Reactor problem, particularly as you have identified that it's caused in some way by the reduce operator. As such, I think that it would be best for the Reactor team to investigate in the first instance. Please open a Reactor issue so that they can do so.
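
For background on why the reduce operator is a plausible culprit (purely illustrative; this is not a fix confirmed anywhere in this thread): reference-counted elements that a Reactor operator drops, for example on cancellation, are only released where a discard hook is in place, and an application chain can register such a hook explicitly with doOnDiscard. The helper below is hypothetical:

import io.netty.buffer.ByteBuf;
import io.netty.util.ReferenceCountUtil;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

class DiscardSketch {

    // Hypothetical helper that sums the readable bytes of a stream of buffers.
    Mono<Integer> totalReadableBytes(Flux<ByteBuf> buffers) {
        return buffers
                .map(buf -> {
                    int readable = buf.readableBytes();
                    buf.release();          // release buffers that were consumed
                    return readable;
                })
                .reduce(0, Integer::sum)
                // Placed at the end of the chain, the hook covers elements
                // discarded by the operators upstream of it, so dropped
                // buffers are released instead of being left to the GC.
                .doOnDiscard(ByteBuf.class, ReferenceCountUtil::safeRelease);
    }
}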

@wilkinsona wilkinsona closed this as not planned Jul 21, 2023
@wilkinsona wilkinsona added the status: invalid and for: external-project labels and removed the status: waiting-for-triage label Jul 21, 2023
@tkaesler
Author

Thanks for the info/feedback. For anyone curious, here's the Reactor issue: reactor/reactor-core#3541
