Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, and TTS models 2x faster with 70% less VRAM.
LlamaIndex is the leading framework for building LLM-powered agents over your data.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
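As a rough illustration of the idea behind PEFT-style LoRA fine-tuning, here is a toy NumPy sketch: the pretrained weight stays frozen and only a low-rank update is trained. All shapes and values are made-up assumptions for illustration, not the `peft` library's actual API.

```python
import numpy as np

# Toy LoRA sketch: instead of updating a full weight matrix W (d x k),
# train a low-rank update B @ A with rank r << min(d, k).
d, k, r = 64, 64, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init

def lora_forward(x, scale=2.0):
    # Effective weight is W + scale * B @ A; with B = 0 at init,
    # the adapted model matches the base model exactly.
    return x @ (W + scale * (B @ A)).T

x = rng.standard_normal((1, k))
assert np.allclose(lora_forward(x), x @ W.T)  # no drift before training

# Trainable parameters: r * (d + k) here, versus d * k for full fine-tuning.
print(r * (d + k), "trainable vs", d * k, "full")
```

Zero-initializing B is the standard LoRA trick: it guarantees the adapter starts as a no-op, so training begins from the pretrained model's behavior.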
Multi-lingual large voice generation model, providing full-stack inference, training, and deployment capabilities.
Run any open-source LLM, such as DeepSeek or Llama, as an OpenAI-compatible API endpoint in the cloud.
Low-code framework for building custom LLMs, neural networks, and other AI models
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/
Easily build AI systems with Evals, RAG, Agents, fine-tuning, synthetic data, and more.
RAG (Retrieval-Augmented Generation) framework for building modular, open-source applications for production, by TrueFoundry.
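The RAG pattern these frameworks implement can be sketched minimally: retrieve relevant documents, then build an augmented prompt for a generator. The document list and word-overlap retriever below are toy assumptions; real systems use vector search and an LLM for the generation step, which is omitted here.

```python
# Toy RAG sketch: retrieve, then augment the prompt with context.
docs = [
    "LoRA trains low-rank adapter matrices on top of frozen weights.",
    "RAG augments prompts with retrieved context documents.",
    "Quantization reduces VRAM usage during fine-tuning.",
]

def retrieve(query, k=1):
    # Toy retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Prepend the retrieved context; a real system would send this to an LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG use retrieved context?"))
```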
Large Language Models (《大语言模型》), by Zhao Xin, Li Junyi, Zhou Kun, Tang Tianyi, and Wen Jirong.
Open Source Data Security Platform for Developers to Monitor and Detect PII, Anonymize Production Data and Sync it across environments.
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT.
🔥🔥High-Performance Face Recognition Library on PaddlePaddle & PyTorch🔥🔥
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
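The multi-LoRA serving idea can be sketched as follows: one frozen base weight is shared across all tenants, and a cheap low-rank delta is applied per request. The adapter registry and shapes below are hypothetical, for illustration only.

```python
import numpy as np

# Toy multi-LoRA serving sketch: shared base weight, per-adapter deltas.
rng = np.random.default_rng(1)
d, k, r = 32, 32, 2
W = rng.standard_normal((d, k))  # shared frozen base weight

# Hypothetical adapter registry: name -> (B, A) low-rank pair.
adapters = {
    name: (rng.standard_normal((d, r)), rng.standard_normal((r, k)))
    for name in ("customer_a", "customer_b")
}

def forward(x, adapter=None):
    # Apply the requested adapter's delta on the fly; the base model
    # is served unchanged when no adapter is selected.
    if adapter is None:
        W_eff = W
    else:
        B, A = adapters[adapter]
        W_eff = W + B @ A
    return x @ W_eff.T

x = np.ones((1, k))
out_a = forward(x, "customer_a")
out_b = forward(x, "customer_b")
assert not np.allclose(out_a, out_b)  # different adapters, different outputs
```

Storing only the small (B, A) pairs per adapter is what lets a single server hold thousands of fine-tunes: each adds r * (d + k) parameters instead of a full d * k copy of the model.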
OML 1.0 via Fingerprinting: Open, Monetizable, and Loyal AI