x/tools/gopls: poor performance with very large packages in import graph #61207
Can you please explain more? The latest gopls should not be appreciably slower than v0.11.0. If you are observing this, it would really help us if you could share more of what you see. Are you working in an open-source project? What do you perceive as being slow? When do you notice the high CPU usage?
I'm not working on an open-source project. I rely on gopls for auto-formatting and auto-complete. With version v0.12.3, when I save, it takes about 10-15 seconds to complete the save. Upon save, a notice pops up saying: Saving xxx.go: Getting code actions from "Go".
@sff2578 what you describe is a bug, plain and simple. We want [email protected]+ to perform approximately the same as v0.11.0 while using much less memory (and in the future, we want it to perform better than v0.11.0 across the board). I know of certain areas where we could reduce CPU in v0.12.x (#60926), but those would be, say, a 50% reduction in CPU while typing; I wasn't aware of any order-of-magnitude regressions. We would greatly appreciate it if you would work with us to help us understand the problems you are facing, as this could help many other users. To prevent the auto-upgrade, you can use the setting shown below, though as mentioned above we hope that you will work with us to resolve your problems with v0.12.x. Can you please also run the gopls stats -anon command and share its output?
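For reference, a minimal settings.json sketch of that setting, assuming the standard VS Code Go extension tool-management keys (adjust as needed for your setup):

```jsonc
{
  // Do not silently upgrade tools such as gopls to the latest version.
  "go.toolsManagement.autoUpdate": false,
  // Optionally, stop checking for tool updates altogether.
  "go.toolsManagement.checkForUpdates": "off"
}
```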
CC @adonovan, who can also help track down this regression.
Thanks for the pointer. It now works fine without upgrading gopls. Here's the output of the gopls stats -anon command:
Thanks @sff2578, that's really useful. I notice something about your workspace which may contribute to the problem: according to that stats command, you have a package with 505 files in it. There are certain aspects of our new implementation that may perform very poorly on such huge packages. Are you often working in that package? Did you notice the performance problems more significantly when working in that package?
Thanks for the reply. I do heavily rely on this package; I use its interfaces and functions a lot. I tried manually creating a simple package, and it does not hit this issue.
I just ran an ecosystem metrics query for large packages, and quickly found several with >500 files. For example, this one has 672: https://github.com/oracle/oci-go-sdk/blob/master/dataintegration/. I ran gopls v0.12 on it and observed that it was indeed slow, especially to save a file, though still quite usable. A CPU profile showed time spent in parsing, type-checking, and analysis, but not typerefs. (It also showed the usual exaggerated hotspot of filecache.gc because of macOS setitimer syscall skew; I need to try it again on Linux.)
I'm going to repurpose this issue to track fixing performance with large packages.
My colleagues and I are experiencing the same problem: gopls v0.12.x is very slow when working on any large Go package. We switched back to gopls v0.11.0, which solved the problem. We have tested on macOS and Linux; gopls produces the same behavior on both.
@findleyr here is my output of gopls stats -anon:
Change https://go.dev/cl/509558 mentions this issue:
This repo has a package with a large number of files, which has interesting performance characteristics.

For golang/go#61207

Change-Id: Id313762019ca85dc2c03c7dc23b005a81d900419
Reviewed-on: https://go-review.googlesource.com/c/tools/+/509558
TryBot-Result: Gopher Robot <[email protected]>
Run-TryBot: Robert Findley <[email protected]>
gopls-CI: kokoro <[email protected]>
Reviewed-by: Alan Donovan <[email protected]>
Change https://go.dev/cl/511337 mentions this issue:
Change https://go.dev/cl/503440 mentions this issue:
Our arbitrary choice of caching 200 recent files has proven to be problematic when editing very large packages (golang/go#61207). Fix this by switching to time-based eviction, but with a minimum cache size to optimize the case where editing is resumed after a while.

This caused an increase in high-water mark during initial workspace load, but that is mitigated by avoiding the parse cache when type checking for import (i.e. non-workspace files): such parsed files are almost never cache hits as they are only ever parsed once, and would instead benefit more from avoiding ParseComments.

Move the ownership of the parseCache to the cache.Session (and pass it to each View) to make its lifecycle clearer and avoid passing it around to each snapshot.

For golang/go#61207

Change-Id: I357d8b1fa36eabb516dbb7147266df0e5153ac11
Reviewed-on: https://go-review.googlesource.com/c/tools/+/511337
TryBot-Result: Gopher Robot <[email protected]>
Reviewed-by: Alan Donovan <[email protected]>
Run-TryBot: Robert Findley <[email protected]>
gopls-CI: kokoro <[email protected]>
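As a rough illustration of that eviction strategy (a minimal sketch; the names and structure here are assumptions, not the actual gopls parse cache):

```go
package parsecache

import (
	"sync"
	"time"
)

// entry is an illustrative cache entry recording when it was last used.
type entry struct {
	value    any
	lastUsed time.Time
}

// cache evicts by age rather than by a fixed count of files, but never
// shrinks below minEntries so that resuming an edit session after a pause
// still hits the cache.
type cache struct {
	mu         sync.Mutex
	entries    map[string]*entry
	maxAge     time.Duration // evict entries unused for longer than this
	minEntries int           // floor below which nothing is evicted
}

// gc removes entries older than maxAge while preserving the minimum size.
func (c *cache) gc(now time.Time) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for key, e := range c.entries {
		if len(c.entries) <= c.minEntries {
			return
		}
		if now.Sub(e.lastUsed) > c.maxAge {
			delete(c.entries, key)
		}
	}
}
```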
When working in a package we must repeatedly re-build package handles, which requires parsing with purged func bodies. Although purging func bodies leads to faster parsing, it is still expensive enough to warrant caching. Move the 'purgeFuncBodies' field into the parse cache.

For golang/go#61207

Change-Id: I90575e5b6be7181743e8376c24312115a1029188
Reviewed-on: https://go-review.googlesource.com/c/tools/+/503440
gopls-CI: kokoro <[email protected]>
Reviewed-by: Alan Donovan <[email protected]>
TryBot-Result: Gopher Robot <[email protected]>
Run-TryBot: Robert Findley <[email protected]>
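Conceptually, the purge flag becomes part of the cache key, roughly like this (illustrative types only, not the real gopls code):

```go
package parsecache

// parseKey distinguishes a full parse from a parse with purged function
// bodies, so both variants of the same file can be cached independently.
type parseKey struct {
	uri             string // identity of the file being parsed
	purgeFuncBodies bool   // true for the cheaper declaration-only parse
}
```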
@sff2578 @crclz we'd love it if you'd try the v0.13.0 prerelease, and let us know if it improves things for you.
Also, do any of you have …? Thanks.
We believe this is largely mitigated, based on our own benchmarking. If this is not the case for your repos, please comment with more information.
@findleyr Thank you for the new version of gopls! Compared with v0.12.2, code formatting in v0.13.0 is much faster.
@crclz that's a surprising difference, and unexpected. Out of curiosity, does this reproduce when only a single file is open? (There are reasons why intellisense may be slower with many files open.) It would be helpful if we could dig into this more by grabbing a profile.
@findleyr Only a single file is open. I have recorded my screen; I hope it helps.
Re-opening due to the above report of slow completion in large packages.
I suspect this may be a result of our change to the meaning of "completionBudget": previously, we would start the timer before type checking; now we start it after (and we ensure that the depth=0 pass completes). In that case, I think the fix here is to support streaming completion results, which will significantly reduce the perceived latency. Then we don't have to care so much about e.g. how many unimported completion items we compute.
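A hedged sketch of what streaming would look like conceptually (this is not gopls's actual completion API; the names are illustrative):

```go
package completion

// Item stands in for a completion candidate.
type Item struct{ Label string }

// stream runs each completion pass in order and hands its items to deliver
// as soon as they are ready, so a client that supports partial results can
// render the depth=0 candidates before unimported-package passes finish.
func stream(passes []func() []Item, deliver func([]Item)) {
	for _, pass := range passes {
		if items := pass(); len(items) > 0 {
			deliver(items) // partial results reach the client immediately
		}
	}
}
```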
Change https://go.dev/cl/516678 mentions this issue:
Change https://go.dev/cl/511435 mentions this issue:
@sff2578 I'm having trouble reproducing the behavior you observe, even in very large packages. However, I have theories. I'm curious if the CL above improves completion performance for you. If you don't mind, could you test it out?
That should significantly reduce CPU during autocompletion in large packages, but doesn't have a large impact on latency for me. It may be different on your computer if you have less processing power available. Could you also try reducing your completion budget, using a setting like the example below? Thanks.
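For example, something like the following in settings.json, assuming the gopls completionBudget option (its default is 100ms; a smaller value trades candidate coverage for latency):

```jsonc
{
  "gopls": {
    // Cap the time gopls may spend computing completion candidates.
    // A shorter budget returns fewer (especially unimported) candidates sooner.
    "ui.completion.completionBudget": "50ms"
  }
}
```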
BTW, unfortunately streaming support is not available in VS Code: microsoft/vscode#105870. We could perhaps add it for other clients, but it's harder to justify the engineering effort if it's not available in VS Code.
The enormous dataintegration package in oracle demonstrates a clear regression from [email protected] in CPU utilization during autocompletion. On my machine latency appears about the same, but on more resource-constrained machines this may not be the case.

For golang/go#61207

Change-Id: I59631f34fe0d8d5d3329c9444d4e485840ad85ed
Reviewed-on: https://go-review.googlesource.com/c/tools/+/516678
TryBot-Result: Gopher Robot <[email protected]>
gopls-CI: kokoro <[email protected]>
Run-TryBot: Robert Findley <[email protected]>
Reviewed-by: Hyang-Ah Hana Kim <[email protected]>
Following a keystroke, it is common to compute both diagnostics and completion results. For small packages, this sometimes results in redundant work, but not enough to significantly affect benchmarks. However, for very large packages where type checking takes >100ms, these two operations always run in parallel, recomputing the same shared state. This is made clear in the oracle completion benchmark.

Fix this by guarding type checking with a mutex, and slightly delaying initial diagnostics to yield to other operations (though because diagnostics will also recompute the shared state, it doesn't matter too much which operation acquires the mutex first).

For golang/go#61207

Change-Id: I761aef9c66ebdd54fab8c61605c42d82a8f412cc
Reviewed-on: https://go-review.googlesource.com/c/tools/+/511435
gopls-CI: kokoro <[email protected]>
Run-TryBot: Robert Findley <[email protected]>
TryBot-Result: Gopher Robot <[email protected]>
Reviewed-by: Hyang-Ah Hana Kim <[email protected]>
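The general shape of that fix, as a hedged sketch rather than the actual gopls code: both operations funnel through one guarded computation, so whichever request arrives second reuses the first request's result instead of re-type-checking in parallel.

```go
package snapshot

import "sync"

// Package stands in for the type-checked result that both diagnostics and
// completion need after a keystroke.
type Package struct{}

// guard serializes the expensive type-checking step so that two requests
// racing after the same keystroke compute it once and share the result.
type guard struct {
	mu     sync.Mutex
	cached map[string]*Package // keyed by package ID
}

func (g *guard) typeCheck(id string, compute func() *Package) *Package {
	g.mu.Lock()
	defer g.mu.Unlock()
	if pkg, ok := g.cached[id]; ok {
		return pkg // the other request already did the work
	}
	if g.cached == nil {
		g.cached = make(map[string]*Package)
	}
	pkg := compute()
	g.cached[id] = pkg
	return pkg
}
```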
@findleyr I was a little busy during the week, so I will test it over the weekend. I also realized that optimizing the performance requires a code repo that can reproduce the high completion time. Sharing code is forbidden by my company, so I am trying to create a sharable repo based on my company's repo.
No worries. I'm trying to decide whether to land this CL in [email protected]. If you can determine whether it helps your use case, it will help me decide. If you could test it this weekend, that would be great. I don't think it's necessary to create a sharable repo. We have other repos to test on, and this CL helps there. If it doesn't help you, we'll figure out why and find a public repo that has the same behavior. Thanks.
@findleyr I did some tests just now and reached some conclusions.
Here is the experiment record; more information is written in this document: https://bytedance.feishu.cn/docx/H8X3dQpfYovJdbxXkZLcQnOQnlf
What version of Go, VS Code & VS Code Go extension are you using?
Version Information
- Run `go version` to get the version of Go from the VS Code integrated terminal.
- Run `gopls -v version` to get the version of gopls from the VS Code integrated terminal:
  golang.org/x/tools/[email protected] h1:u0wCI9uvt7mnmri6bFBIaWw1XCN6PN8hKv55Zwd+GbE=
- Run `code -v` or `code-insiders -v` to get the version of VS Code or VS Code Insiders.
- Check your installed extensions to get the version of the VS Code Go extension.
- Run Ctrl+Shift+P (Cmd+Shift+P on Mac OS) > `Go: Locate Configured Go Tools` command:
gotests: /usr/local/gopkgs/bin/gotests (version: v1.6.0 built with go: go1.18.10)
gomodifytags: /usr/local/gopkgs/bin/gomodifytags (version: v1.16.0 built with go: go1.18.10)
impl: /usr/local/gopkgs/bin/impl (version: v1.1.0 built with go: go1.18.10)
goplay: /usr/local/gopkgs/bin/goplay (version: v1.0.0 built with go: go1.18.10)
dlv: /usr/local/gopkgs/bin/dlv (version: v1.20.2 built with go: go1.18.10)
staticcheck: /usr/local/gopkgs/bin/staticcheck (version: v0.3.3 built with go: go1.18.10)
gopls: /usr/local/gopkgs/bin/gopls (version: v0.12.3 built with go: go1.18.10)
Share the Go related settings you have added/edited
Run the `Preferences: Open Settings (JSON)` command to open your settings.json file. Share all the settings with the `go.` or `["go"]` or `gopls` prefixes.
Describe the bug
I manually installed gopls version v0.11.0, but after some time (less than an hour), the Go extension automatically updates it to the latest version. I'm having issues with the latest gopls (consuming too much CPU and being slow), so I need to stay with the older version.
Steps to reproduce the behavior:
Screenshots or recordings
If applicable, add screenshots or recordings to help explain your problem.