Out of memory issue #526
Comments
I'm using kafkajs 1.11.0 with kafkajs-lz4 1.2.1.
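For context, registering kafkajs-lz4 with kafkajs usually looks like this (a minimal sketch; client id and broker addresses are placeholders, not the reporter's setup):

```js
const { Kafka, CompressionTypes, CompressionCodecs } = require('kafkajs')
const LZ4 = require('kafkajs-lz4')

// Register the LZ4 codec so kafkajs can decompress LZ4-encoded batches.
CompressionCodecs[CompressionTypes.LZ4] = new LZ4().codec

const kafka = new Kafka({ clientId: 'oom-repro', brokers: ['localhost:9092'] })
```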
Normally I'd say any out-of-memory issue comes from breaking back-pressure, but that doesn't hold up when memory grows 100x between two messages 😅. I don't really have any useful intuition about where else this might be coming from, but there are a few ways to try to narrow down the root cause; one generic approach is sketched after this comment.
@tulios and @Nevon have more intimate knowledge of the internals, so perhaps they can be of more concrete help!
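For instance (a generic illustration, with hypothetical intervals), periodically writing V8 heap snapshots and diffing consecutive snapshots in Chrome DevTools shows which objects are accumulating:

```js
const v8 = require('v8')

// Write a heap snapshot every 30 seconds; load the files in the
// Chrome DevTools Memory tab and compare snapshots to see what grows.
setInterval(() => {
  const file = v8.writeHeapSnapshot() // available since Node 11.13
  console.log(`heap snapshot written to ${file}`)
}, 30000)
```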
I would strongly suspect the LZ4 codec. While it's not impossible that it's something on our side, perhaps related to a specific Node version, it feels more likely to be the compression library. It would be good if you could try without compression and see if you have the same issue, just to verify whether or not the leak is coming from us.
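Since kafkajs chooses compression per produce request, "trying without compression" could be approximated on the producing side roughly like this (a sketch with hypothetical topic names, not code from this thread):

```js
const { Kafka, CompressionTypes, CompressionCodecs } = require('kafkajs')
const LZ4 = require('kafkajs-lz4')

CompressionCodecs[CompressionTypes.LZ4] = new LZ4().codec
const kafka = new Kafka({ clientId: 'compression-test', brokers: ['localhost:9092'] })

async function produceBoth() {
  const producer = kafka.producer()
  await producer.connect()

  // LZ4-compressed batch, matching what the affected topic receives today.
  await producer.send({
    topic: 'test-topic-lz4',
    compression: CompressionTypes.LZ4,
    messages: [{ value: 'payload' }],
  })

  // Uncompressed control batch: if consuming this topic still blows up
  // memory, the leak is unlikely to be in the LZ4 codec.
  await producer.send({
    topic: 'test-topic-plain',
    messages: [{ value: 'payload' }],
  })

  await producer.disconnect()
}

produceBoth().catch(console.error)
```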
Thanks. Unfortunately, I'm unable to disable LZ4 on the topic and wasn't able to spend any more time debugging this. I've since migrated to a different library.
Mostly for the record: I saw this issue today and found the idea of a leak in the LZ4 bindings quite plausible. But after some playing around, it looks like the lz4 leak in the linked issue is actually a problem in the benchmark: there simply weren't any GCs running. See my comment over there for details.

For us this did bring up something else, though: looking at our batch processing logic, I found the same issue the test had! We would starve our event loop because our batch processing was essentially a tight loop of async/await code, and we never gave Node.js time to process anything else until the batch was eventually finished. We've now fixed this by artificially introducing event loop runs (essentially we wrap our per-message processing so that control yields back to the event loop between messages).
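A sketch of that kind of fix, assuming an already-connected kafkajs consumer and a hypothetical handleMessage function: yield back to the event loop after each message inside an eachBatch handler.

```js
await consumer.run({
  eachBatch: async ({ batch, resolveOffset, heartbeat, isRunning, isStale }) => {
    for (const message of batch.messages) {
      if (!isRunning() || isStale()) break

      await handleMessage(message) // hypothetical per-message handler
      resolveOffset(message.offset)
      await heartbeat()

      // Yield back to the event loop so GC and other callbacks get a chance
      // to run instead of being starved by this tight async/await loop.
      await new Promise(setImmediate)
    }
  },
})
```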
@sathibault Hey, I'm facing the same memory leak in my Node app and want to find out whether it's coming from Kafka alone. I'd like to print the process size in the error log like you are doing, but I couldn't find how to enable this. Can you please help me with that?
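One way to print the process size in your own logs (a generic Node.js approach, not necessarily what @sathibault did) is process.memoryUsage():

```js
// Log resident set size (what the OS sees) alongside V8 heap numbers.
function logMemory(tag) {
  const { rss, heapTotal, heapUsed, external } = process.memoryUsage()
  const mb = (n) => (n / 1024 / 1024).toFixed(1) + ' MB'
  console.error(
    `[${tag}] rss=${mb(rss)} heapTotal=${mb(heapTotal)} heapUsed=${mb(heapUsed)} external=${mb(external)}`
  )
}

logMemory('before-batch')
```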
I have a consumer that is consuming massive amounts of memory and being killed by the OS.
I've commented the actual processing out, so the consumer is empty.
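A sketch of what such an empty consumer typically looks like, assuming the LZ4 codec registration from the earlier sketch and placeholder topic/group names (not the reporter's exact code):

```js
const consumer = kafka.consumer({ groupId: 'oom-debug' })

async function run() {
  await consumer.connect()
  await consumer.subscribe({ topic: 'my-lz4-topic', fromBeginning: true })

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      // Actual processing commented out; the handler does nothing,
      // yet the process still grows until the OS kills it.
    },
  })
}

run().catch(console.error)
```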
From strace I can see the process size growing from 0x57e4000 to 0x1610dc000 (about 92 MB to about 5.9 GB) between two successive log messages.
Any leads on how I can further troubleshoot this?