Problem:
An empty response affects the server start boundary computed earlier.
Solution:
Ignore empty responses. This is mostly a micro-optimization that avoids
extending existing results with empty responses.
Problem:
Diagnostic lifecycle invariants (clearing on empty publish and buffer
deletion) were previously implicit and not directly covered by functional
tests, allowing regressions to go unnoticed.
Solution:
Add functional regression tests asserting that diagnostics are cleared
when an LSP server publishes an empty diagnostic set and when the
associated buffer is deleted. Assertions are scoped to the client
diagnostic namespace and use public diagnostic APIs only.
Problem: Code lenses currently display as virtual text on the same line
and after the relevant item. While the spec does not say how lenses
should be rendered, placing them above the line is most typical. For longer
lines, lenses rendered as virtual text can run off the side of the screen.
Solution: Display lenses as virtual lines above the text.
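A minimal sketch of the rendering, assuming `bufnr`, `ns`, `lnum`, `text`, and
`hl` hold the buffer, the lens namespace, the lens's 0-based line, its label,
and a highlight group:
```lua
-- a minimal sketch: virt_lines + virt_lines_above draw the lens text on its own
-- virtual line above the annotated line (bufnr/ns/lnum/text/hl are assumptions)
vim.api.nvim_buf_set_extmark(bufnr, ns, lnum, 0, {
  virt_lines = { { { text, hl } } },
  virt_lines_above = true,
})
```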
Closes https://github.com/neovim/neovim/issues/33923
Co-authored-by: Yi Ming <ofseed@foxmail.com>
From the LSP Spec:
> There are two uses cases where it can be beneficial to only compute
> semantic tokens for a visible range:
>
> - for faster rendering of the tokens in the user interface when a user
> opens a file. In this use case, servers should also implement the
> textDocument/semanticTokens/full request as well to allow for flicker
> free scrolling and semantic coloring of a minimap.
> - if computing semantic tokens for a full document is too expensive,
> servers can only provide a range call. In this case, the client might
> not render a minimap correctly or might even decide to not show any
> semantic tokens at all.
This commit unifies the usage of range and full/delta requests as
recommended by the LSP spec and aligns neovim with the way other LSP
clients use these request types for semantic tokens.
When a server supports range requests, neovim will simultaneously send a
range request and a full/delta request when first opening a file, and
will continue to issue range requests until a full response is
processed. At that point, range requests cease and full (or delta)
requests are used going forward. The range request should allow servers
to return a result faster for quicker highlighting of the file while it
works on the potentially more expensive full result. If a server decides
the full result is too expensive, it can just error out that request,
and neovim will continue to use range requests.
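A minimal sketch of that request strategy; the function and field names below
are hypothetical, not the actual implementation:
```lua
-- a minimal sketch of the strategy described above (names are hypothetical)
local function on_open(state)
  request_range(state)           -- fast first paint of the visible window
  request_full_or_delta(state)   -- server may error out if full is too expensive
end

local function on_change(state)
  if state.full_result_processed then
    request_full_or_delta(state) -- full (or delta) once a full result has landed
  else
    request_range(state)         -- keep using range until a full response arrives
  end
end
```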
This commit also fixes and cleans up some other things:
- gen_lsp: registrationMethod or registrationOptions imply dynamic
registration support
- move autocmd creation/deletion to on_attach/on_detach
- debounce requests due to server refresh notifications
- fix off-by-one issue in tokens_to_ranges() iteration
Previously, adjust_start_col returned nil when completion items had
differing start positions in their LSP textEdit ranges.
This caused completion to fall back to \k*$, which ignores
non-keyword characters.
Changes:
- adjust_start_col: now returns the minimum start position among all
items instead of nil
- _lsp_to_complete_items: normalizes the items by prepending the gap between
the item's own start and the minimum start
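A minimal sketch of that normalization, with a hypothetical `start_col` field
standing in for the 0-based start column of each item's textEdit range and
`line_text` holding the current line:
```lua
-- a minimal sketch; `start_col` (hypothetical) is the 0-based start column of
-- each item's textEdit range, `line_text` is the text of the current line
local function normalize(items, line_text)
  local min_start = math.huge
  for _, item in ipairs(items) do
    min_start = math.min(min_start, item.start_col)
  end
  for _, item in ipairs(items) do
    -- prepend the gap between the common (minimum) start and the item's own start
    item.word = line_text:sub(min_start + 1, item.start_col) .. item.word
  end
  return min_start -- completion now starts at the minimum start for every item
end
```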
Fixes: https://github.com/neovim/neovim/issues/37441
* cache all tokens from various range requests for a given document
version
- all new token highlights are merged with previous highlights to
maintain order and the "marked" property
- this allows the tokens to stop flickering once they've loaded once
per document version
* abandon the processing coroutine if the request_id has changed instead
of relying only on the document version
- this will improve efficiency if a new range request is made while a
previous one was processing its result
* apply new highlights from processing coroutine directly to the current
result when the version hasn't changed
- this allows new highlights to be immediately drawable once they've
processed instead of waiting for the whole response to be processed
at once
* rpc layer was changed to provide the request ID back in success
callbacks, which is then provided as a request_id field on the handler
context to lsp handlers
Problem: When fuzzy is enabled and the prefix is not empty,
items are not sorted by fuzzy score before calling fn.complete.
Solution: Use matchfuzzypos to get the scores and sort the items
by fuzzy score before calling fn.complete.
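A minimal sketch of that sort step, assuming `items` are :h complete-items,
`prefix` is the already-typed text, and `start_col` is the completion start
column:
```lua
-- a minimal sketch; matchfuzzypos() returns {matches, positions, scores} with
-- the matches already ordered by descending fuzzy score
local res = vim.fn.matchfuzzypos(items, prefix, { key = 'word' })
vim.fn.complete(start_col, res[1])
```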
Problem:
If the last visible line in a window is not fully displayed, this line
may not get injection highlighting. This happens because line('w$')
actually means the last *completely displayed* line.
Solution:
Use line('w$') + 1 for the botline.
This reverts 4244a96774
"test: fix failing lsp/utils_spec #36609",
which changed the test based on the wrong behavior.
Problem:
Nvim supports `textDocument/semanticTokens/full` and `…/full/delta`
already, but most servers don't support `…/full/delta`, so Nvim will try
to request and process the full semantic tokens response on every buffer
change. Even though the request is debounced, there is noticeable lag if
the token response is large (in a big file).
Solution:
Support `textDocument/semanticTokens/range`, which requests semantic
tokens for visible screen only.
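A minimal sketch of such a request, assuming `client` is an attached
vim.lsp.Client and `win` is the window showing `bufnr`:
```lua
-- a minimal sketch: request semantic tokens for the visible lines only
-- (`client`, `bufnr`, and `win` are assumptions)
local first = vim.fn.line('w0', win) - 1 -- 0-based first visible line
local last = vim.fn.line('w$', win)      -- exclusive end line
client:request('textDocument/semanticTokens/range', {
  textDocument = vim.lsp.util.make_text_document_params(bufnr),
  range = {
    ['start'] = { line = first, character = 0 },
    ['end'] = { line = last, character = 0 },
  },
}, nil, bufnr)
```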
Problem:
With the TypeScript language servers typescript-language-server and vtsls,
omnicompletion on partial tokens for certain types, such as array
methods, and functions that are attached as attributes to other
functions, either results in no entries populated in the completion menu
(typescript-language-server), or an unfiltered completion menu with all
array methods included, even if they don't share the same prefix as the
partial token being completed (vtsls).
Solution:
Enable insertReplaceSupport and use the insert portion of the LSP
completion response in adjust_start_col when it is included in the
response.
Completion results are still filtered client side.
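A minimal sketch of the preference for the insert range, assuming `item` is an
lsp.CompletionItem (an InsertReplaceEdit carries `insert`/`replace` ranges, a
plain TextEdit only `range`):
```lua
-- a minimal sketch: prefer the InsertReplaceEdit `insert` range when the server
-- provides one, otherwise fall back to the plain TextEdit range
local function completion_edit_range(item)
  local edit = item.textEdit
  if not edit then
    return nil
  end
  return edit.insert or edit.range
end
```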
Problem: No way to customize completion order across multiple servers.
Solution: Add `cmp` function to `vim.lsp.completion.enable()` options
for custom sorting logic.
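A minimal usage sketch, under the assumption that `cmp` receives two completion
items and returns true when the first should sort earlier (the exact item shape
is an assumption here):
```lua
-- a minimal sketch; assumes cmp(a, b) gets two completion items and returns
-- true when `a` should sort before `b` (the item fields are assumptions)
vim.lsp.completion.enable(true, client.id, bufnr, {
  autotrigger = true,
  cmp = function(a, b)
    return (a.sortText or a.label or '') < (b.sortText or b.label or '')
  end,
})
```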
Problem:
Users often jump and navigate through LSP windows to yank text.
Concealed markdown can make navigation through hyperlinks and code
blocks more difficult.
Solution:
Change 'concealcursor' from 'n' to '' to preserve clean display
while improving navigation and selection of the LSP response.
Closes #36537
Problem:
Some servers write logs to stdout and there's no way to avoid it.
See https://github.com/neovim/neovim/pull/35743#pullrequestreview-3379705828
Solution:
We can extract the `Content-Length` field byte by byte and skip invalid
lines via a simple state machine (name/colon/value/invalid), with minimal
performance impact.
I chose byte parsing here instead of pattern matching. Although it's a bit
more complex, it provides more stable performance and allows for more
accurate error info when needed.
Here is a bench result and script:
parse header1 by pattern: 59.52377ms 45
parse header1 by byte: 7.531128ms 45
parse header2 by pattern: 26.06936ms 45
parse header2 by byte: 5.235724ms 45
parse header3 by pattern: 9.348495ms 45
parse header3 by byte: 3.452389ms 45
parse header4 by pattern: 9.73156ms 45
parse header4 by byte: 3.638386ms 45
Script:
```lua
local strbuffer = require('string.buffer')

--- @param header string
local function get_content_length(header)
  for line in header:gmatch('(.-)\r?\n') do
    if line == '' then
      break
    end
    local key, value = line:match('^%s*(%S+)%s*:%s*(%d+)%s*$')
    if key and key:lower() == 'content-length' then
      return assert(tonumber(value))
    end
  end
  error('Content-Length not found in header: ' .. header)
end

--- @param header string
local function get_content_length_by_byte(header)
  local state = 'name'
  local i, len = 1, #header
  local j, name = 1, 'content-length'
  local buf = strbuffer.new()
  local digit = true
  while i <= len do
    local c = header:byte(i)
    if state == 'name' then
      if c >= 65 and c <= 90 then -- uppercase ASCII: convert to lowercase
        c = c + 32
      end
      if (c == 32 or c == 9) and j == 1 then
        -- skip OWS for compatibility only
      elseif c == name:byte(j) then
        j = j + 1
      elseif c == 58 and j == 15 then
        state = 'colon'
      else
        state = 'invalid'
      end
    elseif state == 'colon' then
      if c ~= 32 and c ~= 9 then -- skip OWS normally
        state = 'value'
        i = i - 1
      end
    elseif state == 'value' then
      if c == 13 and header:byte(i + 1) == 10 then -- must end with \r\n
        local value = buf:get()
        return assert(digit and tonumber(value), 'value of Content-Length is not number: ' .. value)
      else
        buf:put(string.char(c))
      end
      if c < 48 and c ~= 32 and c ~= 9 or c > 57 then
        digit = false
      end
    elseif state == 'invalid' then
      if c == 10 then -- reset for next line
        state, j = 'name', 1
      end
    end
    i = i + 1
  end
  error('Content-Length not found in header: ' .. header)
end

--- @param label string
--- @param header string
--- @param fn fun(header: string): number
--- @param count integer
local function bench(label, header, fn, count)
  local start = vim.uv.hrtime()
  local value --- @type number
  for _ = 1, count do
    value = fn(header)
  end
  local elapsed = (vim.uv.hrtime() - start) / 1e6
  print(label .. ':', elapsed .. 'ms', value)
end

-- header starting with log lines
local header1 =
  'WARN: no common words file defined for Khmer - this language might not be correctly auto-detected\nWARN: no common words file defined for Japanese - this language might not be correctly auto-detected\nContent-Length: 45 \r\n\r\n'
-- header starting with content-type
local header2 = 'Content-Type: application/json-rpc; charset=utf-8\r\nContent-Length: 45 \r\n'
-- regular header
local header3 = ' Content-Length: 45\r\n'
-- regular header ending with content-type
local header4 = ' Content-Length: 45 \r\nContent-Type: application/json-rpc; charset=utf-8\r\n'

local count = 10000
collectgarbage('collect')
bench('parse header1 by pattern', header1, get_content_length, count)
collectgarbage('collect')
bench('parse header1 by byte', header1, get_content_length_by_byte, count)
collectgarbage('collect')
bench('parse header2 by pattern', header2, get_content_length, count)
collectgarbage('collect')
bench('parse header2 by byte', header2, get_content_length_by_byte, count)
collectgarbage('collect')
bench('parse header3 by pattern', header3, get_content_length, count)
collectgarbage('collect')
bench('parse header3 by byte', header3, get_content_length_by_byte, count)
collectgarbage('collect')
bench('parse header4 by pattern', header4, get_content_length, count)
collectgarbage('collect')
bench('parse header4 by byte', header4, get_content_length_by_byte, count)
```
Also, I removed an outdated test
accd392f4d/test/functional/plugin/lsp_spec.lua (L1950)
and tweaked the boilerplate in two other tests for reusability while keeping the final assertions the same.
accd392f4d/test/functional/plugin/lsp_spec.lua (L5704)
accd392f4d/test/functional/plugin/lsp_spec.lua (L5721)
* feat(lua): `Range:is_empty()` to check vim.range emptiness
* fix(lsp): don't overlay insertion-style inline completions
**Problem:** Some servers commonly respond with an empty inline
completion range which acts as a position where text should be inserted.
However, the inline completion module assumes that all responses with a
range are deletions + insertions that thus require an `overlay` display
style. This causes an incorrect preview, because the virtual text should
have the `inline` display style (to reflect that this is purely an
insertion).
**Solution:** Only use `overlay` for non-empty replacement ranges.
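A minimal sketch of that decision, assuming `range` is the lsp.Range from the
server response:
```lua
-- a minimal sketch: an empty range means "insert here", so the preview extmark
-- should use the `inline` virt_text position rather than `overlay`
local is_empty = range.start.line == range['end'].line
  and range.start.character == range['end'].character
local virt_text_pos = is_empty and 'inline' or 'overlay'
```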
The current implementation has a race condition where items are appended
to the completion list twice when a second completion runs while the
first is still going. This hotfix just deduplicates the entire list.
Co-authored-by: Tomasz N <przepompownia@users.noreply.github.com>
Problem:
If there are 2 language servers with different trigger chars (`-` and
`>`), and a keymap inputs both simultaneously (`->`), then `>` doesn't
trigger. We get completion items from server1 only.
This happens because the `completion_timer` for the `-` trigger is still
pending.
Solution:
If the next character arrives quickly enough (< 25 ms), replace the
existing deferred autotrigger with a new one that matches this later
character.
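A minimal sketch of that replacement, with hypothetical `completion_timer` and
`trigger_char` names:
```lua
-- a minimal sketch (hypothetical names): when another trigger character arrives
-- within 25 ms, cancel the pending deferred autotrigger and schedule a new one
-- so the later character is the one that fires
if completion_timer then
  completion_timer:stop()
  completion_timer:close()
end
completion_timer = vim.defer_fn(function()
  completion_timer = nil
  trigger_char()
end, 25)
```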
Overriding vim.lsp.handlers['textDocument/formatting'] doesn't work here
because fake_lsp_server_setup() uses a table with __index to specify
client handlers, which takes priority over vim.lsp.handlers[]. As a
result, the overridden handler is never called and the test ends before
the vim.wait() even finishes.
Instead, set a global variable from the handler that is actually reached
(by vim.rpcrequest() from client handler), and avoid stopping the event
loop too early.
This also fixes the following warning in tests with ASAN or TSAN:
-------- Running tests from test/functional/plugin/lsp/inline_completion_spec.lua
RUN T4604 vim.lsp.inline_completion enable() requests or abort when entered/left insert mode: 225.00 ms OK
RUN T4605 vim.lsp.inline_completion get() applies the current candidate: 212.00 ms OK
nvim took 2013 milliseconds to exit after last test
This indicates a likely problem with the test even if it passed!
RUN T4606 vim.lsp.inline_completion get() accepts on_accept callback: 212.00 ms OK
RUN T4607 vim.lsp.inline_completion select() selects the next candidate: 220.00 ms OK
-------- 4 tests from test/functional/plugin/lsp/inline_completion_spec.lua (3437.00 ms total)
-------- Running tests from test/functional/plugin/lsp/linked_editing_range_spec.lua
nvim took 2011 milliseconds to exit after last test
This indicates a likely problem with the test even if it passed!
The flakiness happens because get() uses vim.schedule(), and a following
key may be processed before the scheduled event. Use poke_eventloop() to
ensure that the scheduled event is processed.
Problem: make_floating_popup_options only shows the title when opts.border is explicitly set, ignoring the global 'winborder' setting
Solution: check both opts.border and vim.o.winborder when determining whether to show the title
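A minimal sketch of the check, assuming `opts` are the options passed to
make_floating_popup_options:
```lua
-- a minimal sketch: a title needs some border to attach to, whether it comes
-- from opts.border or from the global 'winborder' option
local border = opts.border or vim.o.winborder
local has_border = border ~= nil and border ~= '' and border ~= 'none'
```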
The cursor movement autocommand cannot detect when the final tabstop $0
is directly adjacent to another tabstop, which prevents ending the
snippet session. The fix is an early return when jumping.
Problem:
Previously, 'null' values in LSP responses were decoded as 'nil'.
This caused ambiguity for fields typed as '? | null' and led to
loss of explicit 'null' values, particularly in 'data' parameters.
Solution:
Decode all JSON 'null' values as 'vim.NIL' and adjust handling
where needed. This better aligns with the LSP specification,
where 'null' and absent fields are distinct, and 'null' should
not be used to represent missing values.
This also enables proper validation of response messages to
ensure that exactly one of 'result' or 'error' is present, as
required by the JSON-RPC specification.
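A minimal sketch of the distinction this enables (with `luanil` left disabled,
vim.json.decode() keeps JSON null as vim.NIL):
```lua
-- a minimal sketch: `"result": null` stays distinguishable from an absent result
local msg = vim.json.decode('{"jsonrpc":"2.0","id":1,"result":null}')
assert(msg.result == vim.NIL)
-- JSON-RPC requires exactly one of `result` or `error` in a response
assert((msg.result ~= nil) ~= (msg.error ~= nil))
```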
**Problem:** For unchanged document diagnostic reports, the `resultId`
is ignored completely, even though it should still be saved for subsequent
requests (in fact, the spec marks it as mandatory for unchanged reports,
so it is especially important).
**Solution:** Always store the `resultId`.
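A minimal sketch of the fix, assuming `report` is an lsp.DocumentDiagnosticReport
and `state` is hypothetical per-(client, buffer) bookkeeping:
```lua
-- a minimal sketch: store the resultId for both 'full' and 'unchanged' reports
-- so it can be sent back as previousResultId in the next pull request
if report.resultId then
  state.result_id = report.resultId
end
if report.kind == 'full' then
  -- process report.items ...
end
```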