
Code completion not working #239

Open
andremald opened this issue May 6, 2024 · 29 comments · Fixed by #269
Labels
help wanted (Extra attention is needed) · question (Further information is requested)

Comments

@andremald

Hi! I am trying to use the tool, but somehow code completion is not working. The chat functionality works just fine, so I am quite sure I configured the connectors properly. Unfortunately, I couldn't find any logging, so I am not even sure whether the completion request is being sent.

I tried the following models: codegemma, codellama, and starcoder (always the FIM version).

The path is /api/generate

Though it is also not working in a host VS Code instance, I normally work inside a dev container, so I updated the hostname to host.docker.internal.

As I said, the chat functionality works just fine. I am wondering what could be the issue with code completion?

@rjmacarthy
Collaborator

rjmacarthy commented May 6, 2024

Hello,

Please confirm all settings used for the FIM completion provider. Also, please enable the debugging information in the extension settings and tick "enable logging", then go to Help -> Toggle Developer Tools inside Visual Studio Code and look for any errors.

Many thanks,

rjmacarthy added the help wanted and question labels on May 6, 2024
@andremald
Author

andremald commented May 7, 2024

Hi! While debugging I could see that a request is sent when I use the chat functionality. However, nothing shows up in the console for code completion, even when I trigger it with Option + \ (I am a Mac user).

I also found this message:
"Problem creating default templates "/root/.twinny/templates""

Could that be the issue? No template -> no FIM?

@rjmacarthy
Collaborator

Hey, that shouldn't be an issue for FIM, as the FIM templates are built in. Please provide all the provider configuration settings as previously requested.

@cold-eye

cold-eye commented May 7, 2024

[screenshot: a FIM request appearing in the developer console]

@andremald
Author

andremald commented May 7, 2024

Type: FIM
Fim template: codegemma
Provider: ollama
Protocol: http
Model name: codegemma:2b
Hostname: host.docker.internal
Port: 11434
Path: /api/generate
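
A quick way to rule out the endpoint itself is to hit Ollama's /api/generate directly with those same settings, bypassing the extension. A minimal sketch (assuming Node 18+ for the global fetch, and codegemma's published FIM tokens; this is a standalone probe, not twinny code):

    // Standalone probe of the Ollama endpoint, using the same host/port/model
    // as the settings above and codegemma's <|fim_prefix|>/<|fim_suffix|>/
    // <|fim_middle|> tokens. If this prints a completion, the endpoint works
    // and the problem is on the extension side.
    async function probeOllamaFim(): Promise<void> {
      const res = await fetch("http://host.docker.internal:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "codegemma:2b",
          prompt: "<|fim_prefix|>def add(a, b):\n    return <|fim_suffix|>\n<|fim_middle|>",
          stream: false,
        }),
      });
      const data = await res.json();
      console.log(data.response);
    }

    probeOllamaFim().catch(console.error);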

As I mentioned in the previous message, I don't get a request in the console like you do (based on your screenshot).

EDIT: out of curiosity I ran ls on /root/.twinny/templates and cat on the FIM *.hbs files (there were two: fim.hbs and fim-system.hbs).

fim-system.hbs is empty.

fim.hbs contains the following

<PRE>{{{prefix}}} <SUF>{{{suffix}}} <MID>

Hope that rings a bell. I would expect to have either more templates in a file or more template files.
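
(For context, the triple braces are Handlebars' no-escape syntax: prefix and suffix are the document text before and after the cursor, substituted into the template verbatim. A minimal sketch of how such a template gets filled, illustrative only and not twinny's actual code:)

    import Handlebars from "handlebars";

    // Illustration only: fill a codellama-style FIM template. {{{prefix}}} is
    // the text before the cursor, {{{suffix}}} the text after it; triple
    // braces disable HTML escaping so the code passes through unchanged.
    const fimTemplate = Handlebars.compile("<PRE>{{{prefix}}} <SUF>{{{suffix}}} <MID>");

    const prompt = fimTemplate({
      prefix: "def add(a, b):\n    return ",
      suffix: "\n\nprint(add(1, 2))",
    });
    console.log(prompt);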

EDIT 2: After being stuck on the train, I had the chance to 1) check your repo more carefully and 2) debug a bit further.
Regarding 1: just ignore the message about the *.hbs files. I now understand what you meant by "built-in".

Regarding 2: although I don't get any logs about the request being sent (like I do when using the chat functionality), I do get the following.

2024-05-07 23:36:04.369 [info] [KeybindingService]: / Soft dispatching keyboard event
2024-05-07 23:36:04.369 [info] [KeybindingService]: \ Keyboard event cannot be dispatched
2024-05-07 23:36:04.369 [info] [KeybindingService]: / Received  keydown event - modifiers: [alt], code: AltRight, keyCode: 18, key: Alt
2024-05-07 23:36:04.370 [info] [KeybindingService]: | Converted keydown event - modifiers: [alt], code: AltRight, keyCode: 6 ('Alt')
2024-05-07 23:36:04.370 [info] [KeybindingService]: \ Keyboard event cannot be dispatched in keydown phase.
2024-05-07 23:36:04.408 [info] [KeybindingService]: / Soft dispatching keyboard event
2024-05-07 23:36:04.408 [info] [KeybindingService]: | Resolving alt+[Backslash]
2024-05-07 23:36:04.408 [info] [KeybindingService]: \ From 1 keybinding entries, matched editor.action.inlineSuggest.trigger, when: editorTextFocus && !editorReadonly, source: user extension rjmacarthy.twinny.
2024-05-07 23:36:04.408 [info] [KeybindingService]: / Received  keydown event - modifiers: [alt], code: Backslash, keyCode: 220, key: «
2024-05-07 23:36:04.408 [info] [KeybindingService]: | Converted keydown event - modifiers: [alt], code: Backslash, keyCode: 93 ('\')
2024-05-07 23:36:04.408 [info] [KeybindingService]: | Resolving alt+[Backslash]
2024-05-07 23:36:04.409 [info] [KeybindingService]: \ From 1 keybinding entries, matched editor.action.inlineSuggest.trigger, when: editorTextFocus && !editorReadonly, source: user extension rjmacarthy.twinny.
2024-05-07 23:36:04.409 [info] [KeybindingService]: + Invoking command editor.action.inlineSuggest.trigger.
2024-05-07 23:36:04.586 [info] [KeybindingService]: + Ignoring single modifier alt due to it being pressed together with other keys.

Note the line: 2024-05-07 23:36:04.409 [info] [KeybindingService]: + Invoking command editor.action.inlineSuggest.trigger.

Hope it rings a bell now. I went through your code, and though I am not a TypeScript programmer, I could follow most of it and it looks all right. I am somewhat clueless.

@rjmacarthy
Collaborator

I would recommend trying codellama:7b-code to see if it works.

@andremald
Author

andremald commented May 8, 2024

Just gave it a try, still nothing:

Settings:
[screenshot: provider settings]
Edit: obviously with the hostname replaced by localhost

Console after a successful call to the chat API and several "Option + \" presses in a Python file:
[screenshot: developer console]

@dishbrains

dishbrains commented May 25, 2024

I have the same problem as you, andremald: chat is working fine but FIM does nothing.

Anyway, any input on how to fix this would be appreciated. This otherwise great extension is not usable for me like this.

@oregonpillow

Same problem here. All settings correct. 13b or 7b, doesn't matter. Only chat seems to work. I see the robot icon loading when I start coding, but no autocomplete prompts ever show.

@localbarrage

Failing for me too. I can see the message being received by the provider, but no response and no error. I am using Aphrodite's OpenAI API server. I have tried different providers, yet none give a response.

@localbarrage

My issue might be partly related to there not being an actual supported OpenAI provider. I set up a litellm proxy to forward to my model, and I am still not getting any completions.

@hitzhangjie

+1

@jleivo

jleivo commented Jun 14, 2024

Hi.

I have the same issue: chat works, FIM doesn't, no matter what I do in the configuration.
Setup: Ollama on a separate server, coding done within WSL => twinny is in WSL.

I was looking at the developer tools, as suggested, and when I was writing in VS Code I saw this in the developer console:

ERR memory access out of bounds: RuntimeError: memory access out of bounds
at wasm://wasm/000bc226:wasm-function[254]:0x2b979
at Parser.parse (/home/juleivo/.vscode-server/extensions/rjmacarthy.twinny-3.11.39/out/index.js:2:218649)
at t.CompletionProvider.provideInlineCompletionItems (/home/juleivo/.vscode-server/extensions/rjmacarthy.twinny-3.11.39/out/index.js:2:123675)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async B.provideInlineCompletions (/home/juleivo/.vscode-server/bin/dc96b837cf6bb4af9cd736aa3af08cf8279f7685/out/vs/workbench/api/node/extensionHostProcess.js:155:108949)

After receiving this error I went into WSL, to ~/.vscode, and deleted all twinny-related folders. I started VS Code and installed a fresh copy of twinny. Now it works. I had twinny 3.11.10 and 3.11.31 on the host; now I have 3.11.39 and all is good again. I'll repeat this on my work computer later on, to see if this fixes the issue there too...

@NeoMatrixJR

NeoMatrixJR commented Jun 14, 2024

Same issue here. Ollama is running in Docker on an external server. Chat works, no FIM. I have tried other extensions (Continue.dev at least) and get FIM there... not good... but at least it does something... so I know it's not an issue with Ollama.
EDIT:
I dropped back to a 3.10.* version, kept everything as stock as possible, installed the default codellama models, and set it to the IP of my server... now it seems to work. I'll try to tweak it later to see whether it's a plugin version issue or a model issue.

@KizzyCode

KizzyCode commented Jun 17, 2024

Same issue. OS is macOS, provider is ollama with starcoder:3b – ollama gets the request and does the computation, but whatever is computed does not show up in VS Code...

The weird thing is that some time ago the extension worked flawlessly, and I made no manual change except installing auto-updates.

@yuhanj2

yuhanj2 commented Jun 26, 2024

I found that setting the Folding Range Provider in the VS Code settings to twinny solved this issue for me; not sure if it will work for everyone.
[screenshot: Folding Range Provider setting]
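
For anyone looking for the exact key: the setting referred to appears to be editor.defaultFoldingRangeProvider, so the settings.json entry (assuming that key) would be:

    {
      "editor.defaultFoldingRangeProvider": "rjmacarthy.twinny"
    }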

@KizzyCode

@yuhanj2 Thx, that workaround works for me too!

@GreatApo

GreatApo commented Jul 1, 2024

Same issue here. Could it be bugging out due to other extensions?

@rjmacarthy
Collaborator

Hello all, do we have any indication of what is broken? I have been using the extension myself recently and it's working with the latest version of VS Code. Many thanks.

❯ code -v
1.90.2
5437499feb04f7a586f677b155b039bc2b3669eb
x64

@swedenSnow

First of all, thank you for all your work @rjmacarthy 🙌🏽

I'm having the same issue. I have a setup with ollama and just updated VS Code to 1.90.2 and ollama to 0.1.48:
llama3 for chat (working)
dolphincoder for FIM (I've tried with codellama, same issue)
[screenshot: provider settings]

@rjmacarthy
Collaborator

Hello, I think starcoder2 is not good for FIM completion; I tried it. Please try one of the recommended models in the docs. Many thanks.

@swedenSnow

swedenSnow commented Jul 3, 2024

Thanks for the heads-up. It was working OK, but now, looking at the logs, it seems to be requesting a model I don't have locally anymore: "model": "deepseek-coder:1.3b-base-fp16", even when I changed the provider to codellama.
[screenshot: logs]
[screenshot: provider settings]

@rjmacarthy
Collaborator

Maybe try restarting the IDE.

@swedenSnow

swedenSnow commented Jul 3, 2024

I have. It's a bit confusing now: I installed the deepseek model mentioned above, and then my FIM started working, although it is still not requesting the model I entered as the provider (codellama) but deepseek again 🤔
Twinny Stream Debug
Streaming response from 0.0.0.0:11434.
Request body:
"model": "deepseek-coder:1.3b-base-fp16",
"prompt": "<|fim▁begin|>/**/\n\n/* Language: Typescript React (typescriptreact) /\n/ File uri

[screenshot]

EDIT: Resetting the providers, or deleting and adding them again, seems to work. Funny how I must have been using deepseek all this time until I deleted it, while thinking I was using starcoder2 :) It's really not good for FIM, as you pointed out.
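
(For reference, the <|fim▁begin|> prompt in the request body above is deepseek-coder's FIM layout: the model fills in the text at the hole marker between prefix and suffix. A sketch, assuming the standard deepseek-coder base tokens; buildDeepseekFimPrompt is a hypothetical helper, not twinny code:)

    // Hypothetical helper showing deepseek-coder's FIM prompt layout.
    const buildDeepseekFimPrompt = (prefix: string, suffix: string): string =>
      `<|fim▁begin|>${prefix}<|fim▁hole|>${suffix}<|fim▁end|>`;

    console.log(buildDeepseekFimPrompt("def add(a, b):\n    return ", "\n"));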

@GreatApo

GreatApo commented Jul 3, 2024

> Hello all, do we have any indication of what is broken? I have been using the extension myself recently and it's working with the latest version of VS Code. Many thanks.

To start with, thank you for your work!

I was getting the following error upon writing code:
[screenshot: error]

I disabled and re-enabled the extension, and now it prints the request, but nothing happens:
[screenshots: request logs]

Then, after a while, the following errors appear:
[screenshot: errors]

The chat works...
[screenshot: chat]

VS Code info:
[screenshot]

Edit:
Upon resetting the providers, something looks like it's going on, but file errors start popping up again:
[screenshot]
(mind that my code & ollama are on a local network server)
Edit 2:
Running ollama locally (same machine) seems to work OK-ish.

@fredrikburmester

I also have the issue of FIM not working. Running Ollama locally on an M3 Mac.
VS Code version: 1.91.0
[screenshot: provider settings]

@fredrikburmester

fredrikburmester commented Jul 4, 2024

This is an error in the console:

ERR Cannot read properties of undefined (reading 'apply'): TypeError: Cannot read properties of undefined (reading 'apply')
    at wasmImports.<computed>.stub.e.<computed> (/Users/username/.vscode/extensions/rjmacarthy.twinny-3.11.43/out/index.js:2:177464)
    at null.<anonymous> (wasm://wasm/0003dbda:1:39736)
    at null.<anonymous> (wasm://wasm/0003dbda:1:35226)
    at postInstantiation (/Users/username/.vscode/extensions/rjmacarthy.twinny-3.11.43/out/index.js:2:178474)
    at /Users/username/.vscode/extensions/rjmacarthy.twinny-3.11.43/out/index.js:2:178735
    at t.getParser (/Users/username/.vscode/extensions/rjmacarthy.twinny-3.11.43/out/index.js:2:114775)
    at t.CompletionProvider.provideInlineCompletionItems (/Users/username/.vscode/extensions/rjmacarthy.twinny-3.11.43/out/index.js:2:123581)
    at J.provideInlineCompletions (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:154:119139)

Also:

ERR Parsing failed: Error: Parsing failed
    at Parser.parse (/Users/username/.vscode/extensions/rjmacarthy.twinny-3.11.43/out/index.js:2:218987)
    at t.CompletionProvider.provideInlineCompletionItems (/Users/username/.vscode/extensions/rjmacarthy.twinny-3.11.43/out/index.js:2:123675)
    at J.provideInlineCompletions (/Applications/Visual Studio Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:154:119139)

@arungithub9

> This is an error in the console: ERR Cannot read properties of undefined (reading 'apply') ... ERR Parsing failed ... [quoted from the comment above]

I am facing the same error with VS Code Ver 1.90.2.

@rjmacarthy
Collaborator

I just released a new version which adds a try/catch around the parsing of the document to attempt to fix this problem. Please let me know if it helps; if not, it would be helpful to know which file/language/extension is in use when this error appears. Many thanks.
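
(The stack traces above point at the web-tree-sitter parse call inside provideInlineCompletionItems; a guard of the kind described might look roughly like the sketch below. This is an illustration, not the actual patch; safeParse is a hypothetical helper.)

    import Parser from "web-tree-sitter";

    // Sketch: degrade to "no completion" instead of crashing the completion
    // provider when the wasm parser throws (e.g. "memory access out of bounds"
    // or "Parsing failed").
    function safeParse(parser: Parser, source: string): Parser.Tree | undefined {
      try {
        return parser.parse(source);
      } catch (error) {
        console.error("Parsing failed, skipping completion:", error);
        return undefined;
      }
    }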
