Draft: add local LiteRT-LM provider + executor #1342
maceip wants to merge 14 commits into JetBrains:develop from maceip:add-litert-lm-provider
Conversation
Add support for LiteRT-LM, Google's on-device inference engine that enables running LLMs locally on Android and JVM platforms.

Changes:
- Add LiteRTLM provider to the LLMProvider sealed class
- Create LiteRTLMModels with Gemma-3n-E4B model support
- Add prompt-executor-litertlm-client module with LiteRTLMClient
- Add litertlm-jvm and litertlm-android dependencies to the version catalog

The LiteRT-LM client supports:
- Synchronous and streaming response generation
- Temperature control via SamplerConfig
- System message and conversation context
- CPU and GPU backends for inference

Note: The LiteRT-LM library dependency is marked as compileOnly. Users must add the LiteRT-LM runtime dependency to their project when using this provider.
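A minimal usage sketch under stated assumptions: the LiteRTLMClient constructor shape, the LiteRTLMModels.GEMMA_3N_E4B constant name, and the execute() call are inferred from the names in this commit, not copied from the PR.

```kotlin
import kotlinx.coroutines.runBlocking
// Client/model imports omitted: package paths for LiteRTLMClient,
// LiteRTLMModels, and Koog's prompt DSL are assumptions here.

fun main() = runBlocking {
    // The LiteRT-LM runtime is compileOnly in this module, so the consuming
    // project must add it as a runtime dependency; the .litertlm model file
    // is downloaded separately.
    val client = LiteRTLMClient(modelPath = "/models/gemma-3n-e4b.litertlm")

    // Single synchronous request built with Koog's prompt DSL.
    val response = client.execute(
        prompt = prompt("demo") {
            system("You are a concise assistant.")
            user("Summarize LiteRT-LM in one sentence.")
        },
        model = LiteRTLMModels.GEMMA_3N_E4B, // constant name assumed
    )
    println(response)
}
```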
Add test coverage for the LiteRT-LM client:
- Unit tests for configuration, error handling, and provider validation
- Integration test template for local testing with actual models

The integration tests are disabled by default and require:
- The LiteRT-LM library dependency
- A valid model file (set via the MODEL_PATH env var)
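A sketch of how such a gated integration test might look; only the MODEL_PATH guard and the disabled-by-default behavior come from this commit, while the client API shape is assumed.

```kotlin
import kotlinx.coroutines.runBlocking
import kotlin.test.Test

class LiteRTLMClientIntegrationTest {

    @Test
    fun generatesResponseFromLocalModel() = runBlocking {
        // Returns early when no local model is configured, so the test is
        // effectively disabled by default (e.g. on CI without a model file).
        val modelPath = System.getenv("MODEL_PATH") ?: return@runBlocking

        val client = LiteRTLMClient(modelPath = modelPath) // constructor shape assumed
        val response = client.execute(
            prompt = prompt("integration") { user("Hello!") },
            model = LiteRTLMModels.GEMMA_3N_E4B,
        )
        check(response.toString().isNotBlank())
    }
}
```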
Add prebuilt native libraries from google-ai-edge/LiteRT-LM for the Android ARM64 platform:
- libGemmaModelConstraintProvider.so
- libLiteRtGpuAccelerator.so
- libLiteRtOpenClAccelerator.so
- libLiteRtTopKOpenClSampler.so
- libLiteRtTopKWebGpuSampler.so
- libLiteRtWebGpuAccelerator.so

These libraries enable GPU-accelerated inference on Android devices.
Source: https://github.com/google-ai-edge/LiteRT-LM/tree/main/prebuilt/android_arm64
Model files (.litertlm) are too large for git. Users should download models separately for testing.
…support

Addresses several implementation issues:

1. Conversation history support (issue 2):
   - Now processes all messages in the prompt, not just the last user message (sketched after this list)
   - Maintains context through multi-turn conversations
   - Handles System, User, Assistant, Tool.Call, Tool.Result, Reasoning

2. Multimodal content handling (issue 3):
   - Added support for Image content via Content.ImageBytes
   - Added support for Audio content via Content.AudioBytes
   - Validates model capabilities before processing
   - File attachments converted to a text representation

3. Configurable sampler (issue 4):
   - Added defaultTopK, defaultTopP, defaultTemperature constructor params
   - Temperature still overridable via prompt.params

4. Tool support (issue 6):
   - Tools parameter accepted in createConversationConfig
   - Added TODO noting LiteRT-LM uses annotation-based tool registration
   - Tool calls/results from history preserved as context strings
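To illustrate item 1, a condensed sketch of folding the full message history into the conversation. The Message subtypes are Koog's, as listed in the commit; the ManagedConversation methods (setSystem, sendText, addContext) are hypothetical stand-ins for whatever the client actually calls.

```kotlin
// Replays every message in the prompt rather than only the last user turn,
// so multi-turn context survives. The conversation method names below are
// assumed placeholders, not the PR's exact API.
fun replayHistory(messages: List<Message>, conversation: ManagedConversation) {
    for (message in messages) {
        when (message) {
            is Message.System -> conversation.setSystem(message.content)
            is Message.User -> conversation.sendText(message.content)
            is Message.Assistant -> conversation.addContext("model: ${message.content}")
            is Message.Tool.Call -> conversation.addContext("tool call: ${message.content}")
            is Message.Tool.Result -> conversation.addContext("tool result: ${message.content}")
            else -> conversation.addContext(message.content) // e.g. Reasoning turns
        }
    }
}
```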
These binaries are for the Android NDK and not usable by the JVM-only LiteRT-LM client module. Users targeting Android should use the litertlm-android dependency directly, which includes the native libs.
- executeStreaming now passes the tools parameter instead of emptyList()
- Added guard for empty content parts in buildUserMessage
Add expect/actual pattern for cross-platform LiteRT-LM client creation (sketched below):
- commonMain: LiteRTLMClientConfig and factory function declarations
- jvmMain: Full implementation using LiteRTLMClient
- androidMain: Stub with guidance for adding the litertlm-android dependency
- appleMain/jsMain/wasmJsMain: Stubs throwing UnsupportedOperationException

Also update build.gradle.kts to work with full KMP via the convention plugin.
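A condensed sketch of the expect/actual split, with one block standing in for several source-set files; the config fields and factory signature are assumptions.

```kotlin
// commonMain: shared config plus the factory declaration.
data class LiteRTLMClientConfig(
    val modelPath: String,
    val maxNumTokens: Int = 4096, // default is an assumption
)

expect fun createLiteRTLMClient(config: LiteRTLMClientConfig): LLMClient

// jvmMain (separate file in the jvmMain source set): real implementation
// backed by the JVM client.
actual fun createLiteRTLMClient(config: LiteRTLMClientConfig): LLMClient =
    LiteRTLMClient(config)

// appleMain / jsMain / wasmJsMain (one file per source set): stubs that
// fail fast, matching the commit's description.
actual fun createLiteRTLMClient(config: LiteRTLMClientConfig): LLMClient =
    throw UnsupportedOperationException(
        "LiteRT-LM is only supported on JVM; on Android add the litertlm-android dependency"
    )
```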
Refactored configuration and client to match official LiteRT-LM patterns:
- Add LiteRTLMEngineConfig with visionBackend, audioBackend, maxNumTokens
- Add LiteRTLMSamplerConfig with Double types and a seed parameter
- Add NPU backend option
- Add require() validation in config classes (matching the official style)
- Add ImageFile/AudioFile support for file:// URLs
- Add cancelProcess() for conversation cancellation
- Use @Volatile and a synchronized lock pattern for thread safety
- Update KDoc with example usage matching the official documentation style
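A sketch of what the validated configs might look like: the field names come from this commit, while the defaults and validation bounds are illustrative assumptions.

```kotlin
enum class LiteRTLMBackend { CPU, GPU, NPU }

// Engine-level configuration with fail-fast validation, mirroring the
// require() style described above. Defaults are assumptions.
data class LiteRTLMEngineConfig(
    val modelPath: String,
    val backend: LiteRTLMBackend = LiteRTLMBackend.CPU,
    val visionBackend: LiteRTLMBackend? = null,
    val audioBackend: LiteRTLMBackend? = null,
    val maxNumTokens: Int = 4096,
) {
    init {
        require(modelPath.isNotBlank()) { "modelPath must not be blank" }
        require(maxNumTokens > 0) { "maxNumTokens must be positive" }
    }
}

// Sampler configuration using Double types and an optional seed, per the commit.
data class LiteRTLMSamplerConfig(
    val topK: Int = 64,
    val topP: Double = 0.95,
    val temperature: Double = 1.0,
    val seed: Int? = null,
) {
    init {
        require(topK > 0) { "topK must be positive" }
        require(topP in 0.0..1.0) { "topP must be within [0, 1]" }
        require(temperature >= 0.0) { "temperature must be non-negative" }
    }
}
```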
… duplication

- Add sendMultimodal() private helper for sync multimodal sends
- Add sendMultimodalStreaming() private helper for async multimodal sends
- Public API unchanged, internal code quality improved
- Add back agent modules to settings.gradle.kts
- Add dokka entries for agent modules in build.gradle.kts
- Restore test-utils dependency in litertlm-client commonTest

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Force-pushed from 3f84fe4 to b4af5ca
Motivation and Context
This PR adds LiteRT-LM as a new LLMProvider to Koog, enabling on-device LLM inference using Google's LiteRT-LM engine. This allows users to run LLMs locally on Android and JVM platforms without requiring network connectivity, which is valuable for offline, privacy-sensitive, and latency-critical use cases.
The implementation follows the existing Ollama provider patterns for consistency with Koog's architecture.
Breaking Changes
None. This is a purely additive change introducing a new module and provider.
Type of the changes
Checklist
- develop as the base branch

Additional steps for pull requests adding a new feature
Summary of Changes
New module: prompt-executor-litertlm-client

Key components:
- LiteRTLMClient: implements the LLMClient interface with a conversation API
- ManagedConversation
- LiteRTLMToolBridge: bridges ToolDescriptor to LiteRT-LM's annotation-based tool system
- LiteRTLMClientFactory
- LiteRTLMModels

Features:
- execute() and executeStreaming() for single requests
- conversation() API for multi-turn interactions with context preservation
- Multimodal sends (sendImage, sendAudio, etc.)
- ToolExecutor callback pattern
- @MustUseReturnValue annotations
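Putting the listed features together, a hedged usage sketch: the block-based conversation() signature and the sendText method are assumptions based on the names above, not the PR's exact API.

```kotlin
suspend fun demo(client: LiteRTLMClient) {
    // Single-shot request via execute().
    val answer = client.execute(
        prompt = prompt("single") { user("What is LiteRT-LM?") },
        model = LiteRTLMModels.GEMMA_3N_E4B,
    )
    println(answer)

    // Multi-turn conversation with context preserved between sends;
    // the lambda-scoped shape of conversation() is an assumption.
    client.conversation(model = LiteRTLMModels.GEMMA_3N_E4B) { c ->
        c.sendText("My name is Ada.")
        val reply = c.sendText("What is my name?") // should recall "Ada"
        println(reply)
    }
}
```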