AI autocomplete with a Treesitter context engine and support for local LLMs and Gemini.
model-cmp uses Neovim's built-in features to deliver AI autocompletion by predicting the text of the current line. It leans heavily on Treesitter to build context around the cursor (the Treesitter implementation of the context engine is still under development) and uses few-shot prompting to get tailored suggestions from the LLM.
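The snippet below is a simplified, hypothetical sketch of that idea, not model-cmp's actual code (the function name `context_around_cursor` and the node-type matching are assumptions): walk up from the Treesitter node under the cursor to the nearest enclosing scope and use its source text as prompt context.

```lua
-- Sketch: collect the source text of the scope enclosing the cursor.
local function context_around_cursor(bufnr)
  bufnr = bufnr or 0
  local node = vim.treesitter.get_node({ bufnr = bufnr })
  while node do
    local t = node:type()
    -- node-type names differ between grammars, so this match is a heuristic
    if t:match("function") or t:match("method") or t:match("class") then
      return vim.treesitter.get_node_text(node, bufnr)
    end
    node = node:parent()
  end
  -- no enclosing scope found: fall back to the whole buffer
  return table.concat(vim.api.nvim_buf_get_lines(bufnr, 0, -1, false), "\n")
end
```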
You can use whichever LLM you want, as long as it is served on the host and port you specify in your config.
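For instance, assuming a llama.cpp server is listening locally on its default port 8080, you would point the plugin at it with the `custom_url` option from the full configuration shown later in this README:

```lua
require("model_cmp").setup({
  api = {
    custom_url = {
      url = "http://127.0.0.1", -- wherever your LLM server is listening
      port = "8080",
    },
  },
})
```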

Demo video: model-cmp.mp4
Before installing the plugin, please make sure you have the following prerequisites:
- llama.cpp or a Gemini API key
- system specs that can handle local LLM inference (check here for system requirements)
Install with lazy.nvim:

```lua
{
  "PyDevC/model-cmp.nvim",
  config = function()
    require("model_cmp").setup()
    -- map <C-s> in insert mode to run :ModelCmp capture first
    vim.keymap.set("i", "<C-s>", "<cmd>ModelCmp capture first<CR>")
  end,
}
```

There are two ways to set up your Gemini API key:
- Environment variable
- Inside config
- Environment variable:

```sh
export GEMINI_API_KEY="<your-key>"
```

or you can set the API key in your `~/.bashrc` or `~/.zshrc`, which I strongly recommend against.
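Either way, you can quickly check from inside Neovim that the key is actually visible to the editor (plain Lua, nothing plugin-specific):

```lua
-- run as :lua print(os.getenv("GEMINI_API_KEY"))
print(os.getenv("GEMINI_API_KEY"))
```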
- Inside Config:
{
api = {
apikeys = {
GEMINI_API_KEY = "<your-key>"
}
}
}return {
"PyDevC/model-cmp.nvim",
config = function()
require("model_cmp").setup({
requests = {
delay_ms = 1000,
max_retries = 5,
timeout_ms = 300000,
},
api = {
apikeys = {
GEMINI_API_KEY = "<your-key>"
}
custom_url = {
url = "http://127.0.0.1",
port = "8080"
}
},
virtualtext = {
enable = false,
type = "inline",
style = { -- This is just a highlight group
fg = "#b53a3a",
italic = false,
bold = false
}
},
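      -- Note (assumption): the `style` table above behaves like an ordinary
      -- highlight-group definition, i.e. roughly equivalent to:
      --   vim.api.nvim_set_hl(0, "ModelCmpVirtualText", -- hypothetical group name
      --     { fg = "#b53a3a", italic = false, bold = false })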
    })
    vim.keymap.set("i", "<C-s>", "<cmd>ModelCmp capture first<CR>")
  end,
}
```

Support me!!! It is difficult to develop open source while searching for jobs. Scan the QR code and support.
