
Awesome LTX-2

A curated list of models, text encoders, and tools for the LTX-2 video generation suite.

Intro

Models

LTX-2 models are available in several formats, including full checkpoints, transformer-only weights, and GGUF quantizations for memory-efficient inference.

Checkpoints

| Name | Precision | Size | Download |
|---|---|---|---|
| ltx-2-19b dev | bf16 | 43.3 GB | Lightricks |
| ltx-2-19b dev | fp8 | 27.1 GB | Lightricks |
| ltx-2-19b dev | fp4 | 20 GB | Lightricks |
| ltx-2-19b distilled | bf16 | 43.3 GB | Lightricks |
| ltx-2-19b distilled | fp8 | 27.1 GB | Lightricks |
| ltx-2-19b distilled | nvfp4 | 20 GB | szwagros |

Quantized to fp8_e5m2 to support older Triton and PyTorch builds on 30-series GPUs. Intended for WangGP in Pinokio.

| Name | Precision | Size | Download |
|---|---|---|---|
| ltx-2-19b dev | fp8_e5m2 | 27.1 GB | Lightricks |

Distilled LoRA

| Rank | Precision | Size | Download |
|---|---|---|---|
| 384 | bf16 | 7.67 GB | Lightricks |
| 242 | bf16 | 4.88 GB | Lightricks |
| 175 | bf16 | 3.58 GB | Lightricks |
| 175 | fp8 | 1.79 GB | Lightricks |

Spatial Upscaler

Required for the current two-stage pipeline implementations in this repository. Download it to the COMFYUI_ROOT_FOLDER/models/latent_upscale_models folder.

Temporal Upscaler

Required for the current two-stage pipeline implementations in this repository. Download it to the COMFYUI_ROOT_FOLDER/models/latent_upscale_models folder.
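
For reference, the snippet below is a minimal sketch of fetching the upscalers into that folder with `huggingface_hub`. The repository ID and filenames are placeholders, not the real ones; substitute the values from the download links above.

```python
# Minimal sketch: download the latent upscalers into ComfyUI's folder.
# NOTE: REPO_ID and FILENAMES are hypothetical placeholders; use the
# actual repository and filenames from the links above.
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFYUI_ROOT = Path("/path/to/ComfyUI")            # adjust to your install
TARGET_DIR = COMFYUI_ROOT / "models" / "latent_upscale_models"
TARGET_DIR.mkdir(parents=True, exist_ok=True)

REPO_ID = "Lightricks/LTX-2"                       # placeholder repo id
FILENAMES = [
    "ltx-2-spatial-upscaler.safetensors",          # placeholder filename
    "ltx-2-temporal-upscaler.safetensors",         # placeholder filename
]

for name in FILENAMES:
    # Downloads (or reuses the local cache) and places the file under TARGET_DIR.
    hf_hub_download(repo_id=REPO_ID, filename=name, local_dir=TARGET_DIR)
```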

══════════════════════════════════

GGUF Quantized Models

These models are quantized for lower memory usage. Note that in ComfyUI they are typically loaded as transformer-only models. A rough VRAM-sizing sketch follows the tables below.

QuantStack

| Model | Quant | Size | Download |
|---|---|---|---|
| LTX-2-dev | Q2_K | 8.03 GB | QuantStack |
| LTX-2-dev | Q3_K_M | 10.3 GB | QuantStack |
| LTX-2-dev | Q3_K_S | 9.57 GB | QuantStack |
| LTX-2-dev | Q4_K_M | 13.4 GB | QuantStack |
| LTX-2-dev | Q4_K_S | 12.3 GB | QuantStack |
| LTX-2-dev | Q5_K_M | 15 GB | QuantStack |
| LTX-2-dev | Q5_K_S | 14.2 GB | QuantStack |
| LTX-2-dev | Q6_K | 16.6 GB | QuantStack |
| LTX-2-dev | Q8_0 | 21.1 GB | QuantStack |

Unsloth

| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2-19b-dev | BF16 | 37.8 GB | Unsloth |
| ltx-2-19b-dev | F16 | 37.8 GB | Unsloth |
| ltx-2-19b-dev | UD-Q2_K_L | 10.1 GB | Unsloth |
| ltx-2-19b-dev | UD-Q2_K_XL | 11.6 GB | Unsloth |
| ltx-2-19b-dev | Q2_K | 8.1 GB | Unsloth |
| ltx-2-19b-dev | Q3_K_L | 10.7 GB | Unsloth |
| ltx-2-19b-dev | Q3_K_M | 10.1 GB | Unsloth |
| ltx-2-19b-dev | Q3_K_S | 9.47 GB | Unsloth |
| ltx-2-19b-dev | Q4_0 | 11.3 GB | Unsloth |
| ltx-2-19b-dev | Q4_1 | 12.3 GB | Unsloth |
| ltx-2-19b-dev | Q4_K_M | 12.8 GB | Unsloth |
| ltx-2-19b-dev | Q4_K_S | 11.9 GB | Unsloth |
| ltx-2-19b-dev | Q5_0 | 13.7 GB | Unsloth |
| ltx-2-19b-dev | Q5_1 | 14.6 GB | Unsloth |
| ltx-2-19b-dev | Q5_K_M | 14.3 GB | Unsloth |
| ltx-2-19b-dev | Q5_K_S | 13.6 GB | Unsloth |
| ltx-2-19b-dev | Q6_K | 16 GB | Unsloth |
| ltx-2-19b-dev | Q8_0 | 20.4 GB | Unsloth |

Vantage

| Model | Quant | Size | Download |
|---|---|---|---|
| ltx-2-19b-dev | Q3_K_M | 9.96 GB | Download |
| ltx-2-19b-dev | Q3_K_S | 9.28 GB | Download |
| ltx-2-19b-dev | Q4_0 | 11.6 GB | Download |
| ltx-2-19b-dev | Q4_1 | 12.4 GB | Download |
| ltx-2-19b-dev | Q4_K_M | 12.8 GB | Download |
| ltx-2-19b-dev | Q4_K_S | 11.8 GB | Download |
| ltx-2-19b-dev | Q5_0 | 13.6 GB | Download |
| ltx-2-19b-dev | Q5_1 | 14.5 GB | Download |
| ltx-2-19b-dev | Q5_K_M | 14.4 GB | Download |
| ltx-2-19b-dev | Q5_K_S | 13.5 GB | Download |
| ltx-2-19b-dev | Q6_K | 15.9 GB | Download |
| ltx-2-19b-dev | Q8_0 | 20.4 GB | Download |
| ltx-2-19b-distilled | Q3_K_M | 9.96 GB | Download |
| ltx-2-19b-distilled | Q3_K_S | 9.28 GB | Download |
| ltx-2-19b-distilled | Q4_0 | 11.6 GB | Download |
| ltx-2-19b-distilled | Q4_1 | 12.4 GB | Download |
| ltx-2-19b-distilled | Q4_K_M | 12.8 GB | Download |
| ltx-2-19b-distilled | Q4_K_S | 11.8 GB | Download |
| ltx-2-19b-distilled | Q5_0 | 13.6 GB | Download |
| ltx-2-19b-distilled | Q5_1 | 14.5 GB | Download |
| ltx-2-19b-distilled | Q5_K_M | 14.4 GB | Download |
| ltx-2-19b-distilled | Q5_K_S | 13.5 GB | Download |
| ltx-2-19b-distilled | Q6_K | 15.9 GB | Download |
| ltx-2-19b-distilled | Q8_0 | 20.4 GB | Download |
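
When picking a quant, the on-disk size above is a reasonable first proxy for the VRAM it will occupy, but activations, the VAE, and the text encoder need headroom on top. A minimal sketch, using the QuantStack sizes and an arbitrary 4 GB headroom assumption:

```python
# Minimal sketch: pick the largest GGUF quant whose file size fits a VRAM
# budget. On-disk size is only a rough proxy; leave headroom for activations,
# the VAE, and the text encoder (the 4 GB default here is an assumption).
# Sizes (GB) taken from the QuantStack table above.
QUANT_SIZES_GB = {
    "Q2_K": 8.03, "Q3_K_S": 9.57, "Q3_K_M": 10.3,
    "Q4_K_S": 12.3, "Q4_K_M": 13.4,
    "Q5_K_S": 14.2, "Q5_K_M": 15.0,
    "Q6_K": 16.6, "Q8_0": 21.1,
}

def pick_quant(vram_gb: float, headroom_gb: float = 4.0) -> str | None:
    """Return the largest listed quant whose file size fits the budget."""
    budget = vram_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(16.0))   # 16 GB card, 4 GB headroom -> "Q3_K_M"
print(pick_quant(24.0))   # 24 GB card, 4 GB headroom -> "Q6_K"
```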

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

Text Encoders

LTX-2 requires a Gemma-3-12B variant as its text encoder.

Comfy-Org Optimized Encoders

Official and optimized versions for ComfyUI.

| Model Name | Size | Download |
|---|---|---|
| gemma_3_12B_it.safetensors | 24.4 GB | ComfyUI |
| gemma_3_12B_it_fpmixed.safetensors | 13.7 GB | ComfyUI |
| gemma_3_12B_it_fp8_scaled.safetensors | 13.2 GB | ComfyUI |
| gemma_3_12B_it_fp4_mixed.safetensors | 9.5 GB | ComfyUI |

  • gemma_3_12B_it_fpmixed: experimental quant; expected to be better than the fp8-scaled version.
  • gemma_3_12B_it_fp4_mixed: 90% of the layers in fp4.

Gemma-3-12b Abliterated

Why Choose Abliterated Encoders?

Standard Gemma models often incorporate safety alignment that "sanitizes" or weakens specific concepts within prompt embeddings. Even when the model doesn't explicitly refuse a request, this internal filtering can dilute creative intent. For LTX-2 video generation, using a standard encoder often results in:

  • Reduced Prompt Adherence: Key stylistic or descriptive terms may be ignored or weakened.
  • Visual Softening: Visual intensity and fine details are often "muted" to fit generic safety profiles.
  • Concept Dilution: Complex or niche creative requests are subtly altered, leading to less faithful representations of your vision.

Abliteration bypasses these restrictive alignment layers, allowing the encoder to translate your prompts into embeddings with maximum fidelity. This ensures LTX-2 receives the most accurate and un-filtered instructions possible.

Gemma-3-12b-Abliterated

Fixed versions of the abliterated Gemma-3-12b-it model by FusionCow, modified specifically for compatibility with LTX-2. The original model:

| Model | Precision | Size | Download |
|---|---|---|---|
| Gemma ablit fixed | bf16 | 23.5 GB | FusionCow |
| Gemma ablit fixed | fp8 | 13.8 GB | FusionCow |

Gemma 3 12B IT Heretic

Models by DreamFast

Safetensors

| Model | Precision | Size | Download |
|---|---|---|---|
| Gemma_3_12B_it Heretic | bf16 | 23.5 GB | DreamFast |
| Gemma_3_12B_it Heretic | fp8 | 12.8 GB | DreamFast |

GGUF

| Quant | Size | Quality | Recommendation | Download |
|---|---|---|---|---|
| F16 | 22 GB | Lossless | Reference, same as original | DreamFast |
| Q8_0 | 12 GB | Excellent | Best quality quantization | DreamFast |
| Q6_K | 9.0 GB | Very Good | High quality, good compression | DreamFast |
| Q5_K_M | 7.9 GB | Good | Balanced quality/size | DreamFast |
| Q5_K_S | 7.7 GB | Good | Slightly smaller Q5 | DreamFast |
| Q4_K_M | 6.8 GB | Good | Still useful | DreamFast |
| Q4_K_S | 6.5 GB | Decent | Smaller Q4 variant | DreamFast |
| Q3_K_M | 5.6 GB | Acceptable | For very low VRAM only | DreamFast |

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

Separated Components

Separated LTX-2 checkpoints by Kijai, offering an alternative way to load the models in ComfyUI. A folder-layout sketch follows the tables below.

Diffusion Models (Transformer Only)

| Name | Precision | Size | Download |
|---|---|---|---|
| ltx-2-19b dev | bf16 | 37.8 GB | Kijai |
| ltx-2-19b dev | fp8 | 21.6 GB | Kijai |
| ltx-2-19b dev | fp4 | 14.5 GB | Kijai |
| ltx-2-19b distilled | bf16 | 37.8 GB | Kijai |
| ltx-2-19b distilled | fp8 | 21.6 GB | Kijai |

VAE (Video & Audio)

| Component | Precision | Size | Download |
|---|---|---|---|
| Video VAE | BF16 | 2.45 GB | Kijai |
| Video VAE (old) | BF16 | 2.49 GB | Kijai |
| Audio VAE | BF16 | 218 MB | Kijai |

Embedding Connectors

| Name | Precision | Size | Download |
|---|---|---|---|
| Connector dev | bf16 | 2.86 GB | Kijai |
| Connector distilled | bf16 | 2.86 GB | Kijai |
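
Below is a minimal sketch of where the separated components are commonly placed in a ComfyUI install. The folder names follow common ComfyUI conventions and are assumptions, not requirements; the embedding connectors are left out because their destination depends on the loader nodes you use, so check the node pack's documentation.

```python
# Minimal sketch of a ComfyUI folder layout for the separated components.
# Folder names follow common ComfyUI conventions and are ASSUMPTIONS; check
# the loader nodes you use (e.g. ComfyUI-GGUF, Kijai's wrappers) for the
# exact paths they expect.
from pathlib import Path

COMFYUI_ROOT = Path("/path/to/ComfyUI")  # adjust to your install

LAYOUT = {
    "models/diffusion_models":      "transformer-only weights (bf16/fp8/fp4) and GGUF quants",
    "models/vae":                   "video and audio VAE files",
    "models/text_encoders":         "Gemma-3-12B text encoder variants",
    "models/latent_upscale_models": "spatial and temporal latent upscalers",
    "models/loras":                 "distilled and style LoRAs",
}

for subdir, contents in LAYOUT.items():
    target = COMFYUI_ROOT / subdir
    target.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    print(f"{target}  <- {contents}")
```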

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

LoRA

Enhancer, special

Styles

◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆

Workflow & Technical Notes

Lightricks official workflow:

ComfyUI official workflow:

RuneXX:

A good workflow collection by RuneXX.
