The mapcoder-lite research paper showed that replacing one general-purpose model with specialized LoRA adapters for distinct steps (like 'planning,' 'coding,' 'debugging') significantly improved performance, particularly for smaller models.
I've tried the spec-kit framework with both Claude and Qwen Coder 30B and found that Qwen is completely overwhelmed by even fairly simple tasks.
Since spec-kit already breaks development into distinct phases (specification, implementation, testing, etc.), it seems like a natural fit for the per-step adapter approach. Each adapter could be optimized for its specific role in the workflow, letting developers get the full benefit of local models.
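To make the idea concrete, here's a minimal sketch of what per-phase adapter switching could look like, assuming the adapters were trained with Hugging Face PEFT against a shared base model. The phase names, adapter paths, and the base-model checkpoint are placeholders I made up for illustration, not anything spec-kit or mapcoder-lite actually ships.

```python
# Hypothetical sketch: one LoRA adapter per spec-kit phase on a shared base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"  # example base model; substitute your local one

# Hypothetical adapter checkpoints, one per workflow phase.
PHASE_ADAPTERS = {
    "specify":   "adapters/specify-lora",
    "plan":      "adapters/plan-lora",
    "implement": "adapters/implement-lora",
    "test":      "adapters/test-lora",
}

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# Wrap the base model with the first adapter, then register the rest by name.
phases = list(PHASE_ADAPTERS)
model = PeftModel.from_pretrained(base, PHASE_ADAPTERS[phases[0]], adapter_name=phases[0])
for phase in phases[1:]:
    model.load_adapter(PHASE_ADAPTERS[phase], adapter_name=phase)

def run_phase(phase: str, prompt: str) -> str:
    """Activate the adapter for one workflow phase and generate a response."""
    model.set_adapter(phase)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512)
    # Strip the prompt tokens and return only the newly generated text.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

Only one copy of the 30B base weights stays in memory; switching phases is just a cheap adapter swap, which is what makes this attractive for local setups.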
This isn't really meant as a call to arms, but more to spark a discussion on:

- Do you think this would be feasible?
- What improvements should we realistically expect from something like this?
I think if someone were to seriously consider this, the first step would be building a database of question-response pairs for each step in the spec-kit process. Ideally, this would include: a) the questions/tasks given to an LLM, b) the responses it gave, and c) quality ratings. With this data, I believe we could make great strides toward this goal.
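As a rough sketch of what one record in that database could look like, here's a hypothetical JSONL-based format; the field names, rating scale, and example values are all my own assumptions, chosen so the data could feed straight into a LoRA fine-tuning pipeline later.

```python
# Hypothetical record format for per-phase question/response/rating data.
import json
from dataclasses import dataclass, asdict

@dataclass
class SpecKitSample:
    phase: str      # spec-kit step, e.g. "specify", "plan", "implement", "test"
    prompt: str     # the question/task given to the LLM
    response: str   # the response it produced
    model: str      # which model produced the response
    rating: int     # quality rating, e.g. 1 (poor) to 5 (excellent)

def append_sample(path: str, sample: SpecKitSample) -> None:
    """Append one rated example to a JSONL file, one record per line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(sample), ensure_ascii=False) + "\n")

# Usage example with made-up content:
append_sample(
    "speckit_samples.jsonl",
    SpecKitSample(
        phase="plan",
        prompt="Break the 'user login' spec into implementation tasks.",
        response="1. Add a users table ...",
        model="qwen-coder-30b",
        rating=4,
    ),
)
```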