next/og causes memory leak in production standalone build #65451
Comments
Attachment: 2024-05-07T12-30-03-488Z.heapsnapshot.zip
Hi @chipcop106, which metrics/tracing platform is that from?
I'm using AWS ECS; the metrics are for the Docker container.
Thanks!
Possible fix already sitting on your doorstep for free since March 13: https://github.com/orgs/vercel/discussions/6117#discussioncomment-8776252
This leak is still not fixed in the latest 15-canary.102. Memory is stable around 200 MB until a request to generate a dynamic OG image comes in.
Self-assigned - will take a look!
Thanks @shuding! This approach worked well for us: https://github.com/orgs/vercel/discussions/6117#discussioncomment-8776252. After applying the patch, our memory stopped growing as aggressively.
@frankharkins My other apps that run on
(Coworker of Frank.) No, I don't think it's normal. Our usage stats suggest it is not due to more usage over time; rather, we suspect a memory leak. We're still trying to figure out what causes it, and we have two leading hypotheses.
Hey @Eric-Arellano, I've seen a few issues about next/image optimization leaking memory too. But my app's memory continues to grow over time: when it does garbage collect, the memory drops but always settles a little higher than after the previous GC, so over time it keeps growing. I have a
@khuezy If you dig a little deeper, there is an open issue in We have applied the above patch to
Sorry if it's the wrong thread, but just to share the experience: I'm having the same memory increase issue. The memory keeps increasing without being cleared. I've removed all usage of next/image and I'm not using resvg-wasm, but the issue still persists. I have a suspicion that it might be related to the cache being stored in memory rather than in files, but I have to look at my Docker image in more detail.
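If the in-memory cache is the suspicion, one way to test it (my own sketch, not something suggested in this thread; it assumes Next.js 14.1+, where these config options are available) is to turn off the default in-memory incremental cache and see whether memory still grows:

```js
// next.config.js - hypothetical configuration for testing the
// "cache is kept in memory instead of files" hypothesis.
module.exports = {
  output: 'standalone',
  // Setting this to 0 disables the default in-memory incremental cache
  // (Next.js 14.1+); cached entries then live on disk instead. If memory
  // still grows with this set, the cache is probably not the culprit.
  cacheMaxMemorySize: 0,
};
```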
Which version of Node are you using?
Version:
Try with 18-alpine. Starting from version 19 I have the same memory leak issue, but everything is fine with version 18. 🤷♂️
@QuentinScDS Does the Node team know about the leakage? I thought they fixed the
Discovered this yesterday after a lot of testing... Indeed, I have seen mentions of memory leaks in version 18, but I haven't found anything about more recent versions. I haven't had time to create a reproducible case. I have the impression that it only concerns Docker images.
I don't think it's related to Docker images - I'm deploying on fly.io, and they only use the Dockerfile to pull down the dependencies, then run the stack raw on Firecracker. I have a TypeScript Temporal worker stack running on
That fixed the memory leak for me too. Thanks!
Not sure the suggested patch completely fixes the issue. Still getting the same error, but less frequently.
The fix is included in v14.2.13; please upgrade to the new version 🙏
@huozhi I'm not sure if I should comment here or open a new issue. Please redirect me if needed! We bumped up to v14.2.13 but continue to see the memory leak running Next.js in a long-lived Docker container. The upgrade did slow the memory leak a bit, but it still appears to be present. This app only has a few server-side routes, all of which are using the
@ethos-seth please import from
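The comment above is truncated; presumably (an assumption on my part, not confirmed in the thread) the advice is to import ImageResponse from next/og, which ships inside Next.js and therefore contains the v14.2.13 fix, rather than from the separate @vercel/og package:

```jsx
// Hypothetical route handler illustrating the assumed advice; the file name
// and markup are examples only (e.g. app/og/route.jsx).
import { ImageResponse } from 'next/og'; // bundled with Next.js, instead of '@vercel/og'

export async function GET() {
  return new ImageResponse(
    <div style={{ width: '100%', height: '100%', display: 'flex', fontSize: 64 }}>
      Hello
    </div>,
    { width: 1200, height: 630 }
  );
}
```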
This closed issue has been automatically locked because it had no new activity for 2 weeks. If you are running into a similar issue, please create a new issue with the steps to reproduce. Thank you.
Link to the code that reproduces this issue
https://github.com/Innei/next-og-oom-repro
To Reproduce
1. Run node server.js from the standalone build output.
2. Open /og. Initial memory usage is about 50 MB.
3. Refresh /og about 10 times; memory grows to roughly 300 MB.
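A small helper (hypothetical, not part of the linked repro repo) that drives the /og route repeatedly so the growth in the server process's memory is easy to observe; it assumes the standalone server listens on http://localhost:3000 and Node 18+ for the global fetch:

```js
// hit-og.js - hypothetical load script; run `node hit-og.js` while the
// standalone server is running, then watch the server process's RSS.
const url = process.argv[2] ?? 'http://localhost:3000/og';

(async () => {
  for (let i = 1; i <= 10; i++) {
    const res = await fetch(url);  // global fetch is available in Node 18+
    await res.arrayBuffer();       // drain the body so the request fully completes
    console.log(`request ${i}: HTTP ${res.status}`);
  }
})();
```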
I can provide some ways to troubleshoot memory issues. Add memory-dump code to .next/standalone/server.js, then refresh the page several times and observe the app's memory usage. When memory grows and is not freed, capture a heap snapshot of the process at that point with kill -SIGUSR2 <pid>.
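A minimal sketch of such a memory-dump hook (my own illustration of the approach described above, not the reporter's exact code): add it near the top of .next/standalone/server.js so that sending SIGUSR2 writes a heap snapshot you can open in Chrome DevTools' Memory tab.

```js
// Sketch of a SIGUSR2-triggered heap dump for .next/standalone/server.js.
const v8 = require('v8');

process.on('SIGUSR2', () => {
  // Produces a file such as 2024-05-07T12-30-03-488Z.heapsnapshot in the
  // working directory, matching the attachment name format in this issue.
  const file = `${new Date().toISOString().replace(/[:.]/g, '-')}.heapsnapshot`;
  v8.writeHeapSnapshot(file);
  console.log(`heap snapshot written to ${file}`);
});
```

Running kill -SIGUSR2 <pid> against the server's process id then triggers the dump without stopping the server.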
As you can see from the attached heap dump, it's the ImageResponse-related modules that are leaking memory. ImageResponse and FigmaImageResponse are retained; does that mean @vercel/og causes the memory leak?
Link to #44685 (comment).
Current vs. Expected behavior
Current: memory grows with each /og request and is not released.
Expected: memory can be freed up.
Provide environment information
Operating System:
  Platform: darwin / linux
  Arch: arm64 / amd64
  Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:49 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6020 / Linux
  Available memory (MB): 32768
  Available CPU cores: 12
Binaries:
  Node: 18.18.0 / 20.x
  npm: 10.2.4
  Yarn: 1.22.21
  pnpm: 9.1.0
Relevant Packages:
  next: 14.2.3 // Latest available version is detected (14.2.3).
  eslint-config-next: N/A
  react: 18.3.1
  react-dom: 18.3.1
  typescript: 5.4.5
Next.js Config:
  output: standalone
Which area(s) are affected? (Select all that apply)
Image (next/image)
Which stage(s) are affected? (Select all that apply)
Other (Deployed)
Additional context
No response