Use the model's template (ollama show --template <model>)
#53
base: master
Conversation
This does NOT work with gemma3, which supports FIM through the tag. The method in my patch can be detected by
Hi, thank you for the contribution. I was going this route initially, but had the problem that the templates in ollama were not working for some models. I will need to test this, and if it works more reliably now than in the past, I can get rid of my own templates and token configurations. I can imagine keeping both: you could override bogus behavior with local configurations, but if a local configuration is missing it would default to what ollama provides. I'm also working on repo completions and on inserting complete files (or vim buffers) as context. For this I will still need to create my own prompts. However, if I can read the FIM tokens reliably from the model's template, I would be happy to do so.
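Reading the FIM tokens out of a model's template could be sketched roughly like this (a minimal illustration, not the plugin's actual code; the regex and the marker spellings it accepts are assumptions and would need extending per model family):

```python
import re
import subprocess

# Matches common FIM marker spellings such as <fim_prefix> and <|fim_suffix|>
# (assumption: real-world templates vary more than this).
FIM_PATTERN = re.compile(r"<\|?fim[_-](prefix|suffix|middle)\|?>")

def extract_fim_tokens(template: str) -> dict:
    """Map 'prefix'/'suffix'/'middle' to the literal marker found in a template."""
    tokens = {}
    for match in FIM_PATTERN.finditer(template):
        tokens[match.group(1)] = match.group(0)
    return tokens

def read_template(model: str) -> str:
    """Fetch the raw template via the ollama CLI (requires ollama installed)."""
    return subprocess.run(
        ["ollama", "show", "--template", model],
        capture_output=True, text=True, check=True,
    ).stdout

# Example with a StarCoder-style template string:
template = "<fim_prefix>{{ .Prompt }}<fim_suffix>{{ .Suffix }}<fim_middle>"
print(extract_fim_tokens(template))
```

If the extraction comes back empty, that would be the signal to fall back to local token configurations, as discussed above.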
Hi, I tested it with my default starcoder2:3b completion model. Technically it works, but the results are radically different and not really useful. I don't know why this happens with the ollama REST API. See for yourself: first I try it with your branch. The task is trivial: it should only complete a missing 'f' for a printf call, which tests the fill-in-the-middle problem pretty well. With your solution it generates some nonsense, and also garbage on the following lines, instead of the required completion. Then I switch back to the master branch with my manual FIM code and it works as expected. vim-pull-53-2025-03-15_11.06.09.mp4
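The manual FIM approach on master amounts to building the raw prompt on the client side and sending it with `raw` enabled; a rough sketch of the idea (the StarCoder-family marker spellings and the payload shape are assumptions based on this discussion, not the plugin's code):

```python
import json

# StarCoder-family FIM markers (assumption: verify against the model card).
FIM_PREFIX = "<fim_prefix>"
FIM_SUFFIX = "<fim_suffix>"
FIM_MIDDLE = "<fim_middle>"

def build_fim_prompt(before_cursor: str, after_cursor: str) -> str:
    """Assemble a prefix/suffix/middle prompt; the model fills in the middle."""
    return f"{FIM_PREFIX}{before_cursor}{FIM_SUFFIX}{after_cursor}{FIM_MIDDLE}"

def build_request(model: str, before: str, after: str) -> str:
    """JSON body for POST /api/generate; raw=True bypasses ollama's template."""
    return json.dumps({
        "model": model,
        "prompt": build_fim_prompt(before, after),
        "raw": True,
        "stream": False,
    })

# Cursor sits right after the second "print" of the printf example:
body = build_request("starcoder2:3b", 'print("hello")\nprint', '("world")\n')
```

With `raw` set, ollama should send the assembled string to the model verbatim, which is why the marker spellings have to match the model exactly.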
I should contribute this pile of tests I made for FIM functionality because
I was getting ridiculous results with basically all the models. Most of
them just pasted random, syntactically incorrect snippets from elsewhere in
the file. Have you seen Aider's polyglot test? I'm halfway to having
something like that for FIM. I've had the most luck with deepseek-coder-v2.
Phi4 and Gemma3 were released in the last few days and both support FIM but
not through the template. I'll have to play with them a bit more but as
newer models I have a feeling they're going to be superior.
They both seem to support <|fim_middle|> but not the other tags.
…On Sat, Mar 15, 2025, 6:33 AM Gerhard Gappmeier ***@***.***> wrote:
It does not look better with codellama either:
Ollama REST API result (your branch):
codellama1.png
<https://github.com/user-attachments/assets/d32e6c56-39ce-4013-9cd8-76fb7ee238ce>
Manual templates (master branch):
codellama2.png
<https://github.com/user-attachments/assets/d902ef3f-d43f-44ac-8b5c-91c3a890c884>
I also tried changing the "Raw" option to false on your branch, but this didn't help either.
It looks to me like ollama's template handling still does not work as it should.
I'm running ollama version 0.5.4.
Please let me know if you are running a newer version with better results
for this simple example task.
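For reference, the "Raw" option being discussed toggles between two request shapes against `/api/generate`; a sketch of the difference (the codellama-style markers and the separate `suffix` field for server-side templating are assumptions about the ollama API, not something verified in this thread):

```python
# raw=True: the client builds the full FIM prompt itself and ollama
# forwards it to the model verbatim.
raw_request = {
    "model": "codellama:7b-code",
    # codellama-style markers (assumption: check the model card).
    "prompt": "<PRE> before cursor <SUF> after cursor <MID>",
    "raw": True,
}

# raw=False: ollama applies the model's own template; for FIM-capable
# models the text after the cursor can go into a "suffix" field
# (assumption: consult the /api/generate docs for your ollama version).
templated_request = {
    "model": "codellama:7b-code",
    "prompt": "before cursor",
    "suffix": "after cursor",
    "raw": False,
}
```

If the templated path produces garbage where the raw path works, that would point at the server-side template rather than the model itself.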
Are you on Discord? Or do you belong to any Discord/Slack or other chat that discusses FIM usage? There are a bunch of things I want to do here, and it would be good to discuss them with someone. ;-)
Hi, at the moment I don't have time for discussions, but you might find the


This is to fix #52 .
I've tested it with:
Do you agree? I think it might be interesting to put some templates in the repo showing how to use FIM for larger models that don't natively support it, but may be good at following instructions.
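An instruction-style FIM prompt for such models could look something like this (purely a hypothetical sketch of the idea; the wording and the `<CURSOR>` marker are invented for illustration, not templates from the repo):

```python
# Hypothetical instruction-style prompt for models without native FIM
# support but with good instruction following.
INSTRUCT_FIM_TEMPLATE = (
    "Complete the code at the <CURSOR> marker. "
    "Reply with only the inserted text, no explanations.\n\n"
    "{prefix}<CURSOR>{suffix}"
)

def render(prefix: str, suffix: str) -> str:
    """Fill the code before and after the cursor into the instruction prompt."""
    return INSTRUCT_FIM_TEMPLATE.format(prefix=prefix, suffix=suffix)

prompt = render('print("hello")\nprint', '("world")\n')
```

The trade-off is that instruction-tuned models tend to wrap answers in explanations or code fences, so the response would likely need post-processing before insertion into the buffer.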