Changes from all commits · 6335 commits
06d0f46
Added --ft-num-warmup-iters option. (#3052)
hexinw-nvidia Jan 26, 2026
642fdd9
Reapply "Various CUDA graph improvements on capture time, replay time…
jiemingz Jan 26, 2026
bb42a00
fix(fsdp): add CLI argument for outer_dp_sharding_strategy (#3053)
liuyun7345 Jan 26, 2026
94d8186
ci: Log node name (#3081)
ko3n1g Jan 26, 2026
528cb2e
add all_gather process-group for overlapping in fsdp disributed train…
jeffnvidia Jan 26, 2026
23a76d1
docs: Release docs (#3055)
ko3n1g Jan 26, 2026
b47c376
Support NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 FP8/NVFP4 PTQ in example …
ChenhanYu Jan 27, 2026
35e85a6
Reapply "Support multimodule communication (#2031)" (#3068)
ko3n1g Jan 27, 2026
db6b895
Add router replay for MoE models (#2101)
litianjian Jan 26, 2026
7031953
ci: Disable gpt_dynamic_inference_tp1_pp1_dp8_583m_throughputtest_zmq…
ko3n1g Jan 27, 2026
dea21a0
ci: Repeat func tests, save logs of unit tests and lessen debug outpu…
ko3n1g Jan 27, 2026
dd83fc6
ci: Update improvement of step-time (#3104)
ko3n1g Jan 27, 2026
2bdf7e1
ci: Add GPU health checks (#3100)
ko3n1g Jan 27, 2026
d68721b
Harden GRPO functional tests (#3065)
jon-barker Jan 27, 2026
4015ff1
Inference functional tests: Write outputs to INFERENCE_OUTPUT_PATH in…
mathemakitten Jan 27, 2026
0888a06
Update moe readme. (#2830)
Victarry Jan 27, 2026
2b02a28
build: Bump to TE2.12 (#3086)
ko3n1g Jan 27, 2026
6cf285b
Logging cleanup (only log on rank 0 if possible) (#3036)
deepakn94 Jan 27, 2026
65217aa
Move all bert and t5 tests to nightly (#3106)
Phlip79 Jan 27, 2026
4fb549f
Create greptile.json (#3087)
Phlip79 Jan 28, 2026
6273d74
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Jan 28, 2026
33224cc
Fix bug of reuse_grad_buf_for_mxfp8_param_ag (#2802)
kunlunl Jan 28, 2026
fb6a592
Fix for Hybrid CP (#3091)
parthmannan Jan 27, 2026
f6c8a61
Fix GRPO re-fit functional test (#3113)
jon-barker Jan 28, 2026
991138e
Minimize README contents (#3020)
megnvidia Jan 28, 2026
964c902
Add end-to-end tests for M-FSDP and ND-Parallel (#3031)
shjwudp Jan 28, 2026
38cd9fc
Fix Multimodal Dockerfile (#3006)
faradawn Jan 28, 2026
b6b49e7
[M-FSDP] Fix double buffering not working with activation recompute (…
shjwudp Jan 28, 2026
d4f9347
[training migration] Add CheckpointConfig dataclass (#2431)
maanug-nv Jan 28, 2026
d5cac80
chore: rotate oncall schedule
github-actions[bot] Jan 28, 2026
008926a
[training migration] Add StragglerDetectionConfig dataclass (#2435)
maanug-nv Jan 28, 2026
1453f94
Standardize RL unit tests (#3088)
tdene Jan 28, 2026
71c49b5
Fix for PR-2142 (#3096)
HaochenYuan Jan 28, 2026
93ddc24
Use the latest hybrid-ep (#3093)
Autumn1998 Jan 28, 2026
fc6969f
remove retro (#3001)
dimapihtar Jan 28, 2026
c22615e
ci: Mark test_compatible_with_nd_parallel as flaky (#3122)
ko3n1g Jan 28, 2026
a883e96
build: Use merge-commit-sha for container (#3123)
ko3n1g Jan 28, 2026
42986ac
Refactor `rl_offload_kv_cache_during_training` to offload KV cache to…
mathemakitten Jan 28, 2026
d41bf66
Disable Greptile status comments (#3127)
Phlip79 Jan 28, 2026
e2ff203
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Jan 29, 2026
50132f2
ci: Add unit tests to merge queue (#3125)
ko3n1g Jan 28, 2026
9f05aac
Create CodeRabbit config (#3131)
Phlip79 Jan 29, 2026
f0b1cb2
build: Explicitly set minimum torch version to >= 2.6.0 (#3085)
chtruong814 Jan 29, 2026
190f5b6
Move kitchen extension file to private kitchen repository (#2779)
kwyss-nvidia Jan 29, 2026
287d2f4
Fix RL optimizer offload (#3112)
jon-barker Jan 29, 2026
3955c49
Revert "Fix RL optimizer offload (#3112)" (#3141)
ko3n1g Jan 29, 2026
4913c46
Revise and move KD docs (#3108)
AAnoosheh Jan 29, 2026
558fdaf
build: Bump FLA (#3139)
ko3n1g Jan 29, 2026
409af92
ci: Add job timeouts (#3142)
ko3n1g Jan 29, 2026
f4af1bf
ci: Set NODE_RANK (#3143)
ko3n1g Jan 29, 2026
0b619c2
Multiturn rollout support prep (#2966)
yobibyte Jan 29, 2026
36411dd
Reapply 3955c49ed9af5e5b38dccdd30c1323c00b9bcd29 (#3146)
jon-barker Jan 29, 2026
dbd8dda
Revert "Multiturn rollout support prep (#2966)" (#3153)
ko3n1g Jan 29, 2026
f58b6d6
Fix coderabbit instructions error (#3150)
Phlip79 Jan 29, 2026
063624b
Force input ids generated by mock dataset are < vocab_size (#2945)
asolergi-nv Jan 29, 2026
4652e7b
Add a check to make sure we are distributing all the layers when usin…
asolergi-nv Jan 29, 2026
18deeff
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Jan 30, 2026
67f3515
Automatically choose available ports in ZMQ (#2278)
tdene Jan 29, 2026
639c08a
Generate arguments from TransformerConfig (#2896)
maanug-nv Jan 30, 2026
4cd9563
Fix for PR-2142 (#3165)
HaochenYuan Jan 30, 2026
6de6362
ci: Onboard more GB200 tests (#3145)
ko3n1g Jan 30, 2026
de15117
ci(hotfix): Alert for GB200 (#3168)
ko3n1g Jan 30, 2026
7952d7e
Fix SFTDataset truncation bug (#3158)
duncanriach Jan 30, 2026
b9ee19e
Vitalyk/multiturn v2 (#3167)
yobibyte Jan 30, 2026
b168849
ci: Disable the api check for now (#3157)
chtruong814 Jan 30, 2026
a205538
ci: Add DSv3 proxy (#3169)
ko3n1g Jan 30, 2026
14b70c7
Nvshmem refit (#2696)
wdykas Jan 30, 2026
fdc04f6
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Jan 30, 2026
9ad5906
[Community][Main] fix(moe): Fix theoretical memory calculation of lay…
1195343015 Jan 30, 2026
5415e1d
fix: Set --refit-method default to gloo (#3172)
wdykas Jan 30, 2026
a976754
[fix] Bug fix for offloading in evaluate() (#3043)
lhb8125 Jan 30, 2026
991c38f
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Jan 31, 2026
5d0a7fd
cp: `Fix: nccl-ub in ddp path (3181)` into `main` (#3182)
ko3n1g Jan 31, 2026
ffbc43f
Miscellaneous inference cleanup (#2955)
santhnm2 Jan 31, 2026
0fe3232
Revert "Miscellaneous inference cleanup (#2955)"
ko3n1g Jan 31, 2026
69a5c63
ci: Fix DSv3 (#3188)
ko3n1g Jan 31, 2026
2fadde8
Fix missing argument in MoELayer.forward() (#3133)
jiemingz Feb 1, 2026
ae67076
Fix H2D stream synchronization in optimizer offload (#3140)
tgkyrie Feb 1, 2026
300d1b6
Add MTP support for hybrid models (#2363)
rkarimimahab Feb 1, 2026
dceb1fb
docs: improve Megatron-LM and Megatron Core descriptions (#3115)
sbhavani Feb 2, 2026
f4502eb
Handle `step` key correctly in checkpoint save with `--optimizer-cpu-…
ahmadki Feb 2, 2026
70719cd
mRoPE for MTP (#3114)
BestJuly Feb 2, 2026
e836e62
Fix two minor bugs in MTP implementation for hybrid models (#3194)
deepakn94 Feb 2, 2026
1362e4a
Update README.md (#2111)
mvirts Feb 2, 2026
31d0c87
Revert "Fix two minor bugs in MTP implementation for hybrid models (#…
ko3n1g Feb 2, 2026
a0cc8ca
Revert "Add MTP support for hybrid models (#2363)"
ko3n1g Feb 2, 2026
50546da
Fix bug in SFTDataset (#3185)
duncanriach Feb 2, 2026
dff4189
Fix several syntax error (#3004)
HollowMan6 Feb 2, 2026
c4bea0a
Fix for RL Test (#3148)
wdykas Feb 3, 2026
a4008d0
Fix latent moe flops and backward_dw (#2977)
buptzyb Feb 3, 2026
afe443b
Use global user buffer when the bucket size does not fit FixedPoolAll…
shengf-nv Feb 3, 2026
78475fe
ci: Checkpoint retention (#3205)
ko3n1g Feb 3, 2026
7080697
Add unit test for LatentMoE (#2892)
venmugil Feb 3, 2026
0028273
ci: Enable unit tests on merge-queue (#3186)
ko3n1g Feb 3, 2026
94c9eae
Fix seq pack flag in `get_logprobs` (#3206)
mathemakitten Feb 3, 2026
b477d12
ci(fix): Parse unit tests in merge-queue (#3224)
ko3n1g Feb 3, 2026
1a61b77
Fix TE 2.12 AllGather CI failure (#3101)
BestJuly Feb 3, 2026
79e7bfe
ci(hotfix): Pin uv (#3233)
ko3n1g Feb 3, 2026
18d69f1
Add a unit test to check that RL `get_logprobs` will reuse training c…
mathemakitten Feb 3, 2026
27a5f83
Do not offload grad buffers when training graphs are enabled (#3231)
mathemakitten Feb 3, 2026
bc2eb9a
Fix missing PackedSeqParams import (#3214)
parthmannan Feb 3, 2026
1fdb29f
Synchronize the request counts for EP inference with strict matching …
santhnm2 Feb 3, 2026
e02344e
Do not let requests fail silently inside inference engine (#3228)
tdene Feb 3, 2026
4c48248
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Feb 4, 2026
9050d5b
Fix coordinator address collision check in flask (#3208)
tdene Feb 3, 2026
cd5ed74
torch saver inference model offload (#3170)
wdykas Feb 4, 2026
982ca5d
enable cuda graph ut (#3197)
Autumn1998 Feb 4, 2026
473e283
Support EP with HSDP (#2840)
wplf Feb 4, 2026
4a23972
[Main] Add the missing part to support 1F1B overlap for Qwen3-Next (#…
BestJuly Feb 4, 2026
c036e77
Missing import fix (#3241)
parthmannan Feb 4, 2026
43db8c1
Miscellaneous inference cleanup (Replay of !2955) (#3232)
santhnm2 Feb 4, 2026
adce147
Add DistributedInitConfig (#3173)
maanug-nv Feb 4, 2026
f3e6cc8
Fix checkpoint converter missing parallel group initialization (#3217)
yashaswikarnati Feb 4, 2026
d558b5f
Skip empty sequences and chunks in MTP tensor roll (#3035)
BestJuly Feb 4, 2026
f708b5d
Implement get_parameters for ChainedOptimizer (#3201)
nschank Feb 4, 2026
66c432a
ci(fix): Create main/dev image tags (#3252)
ko3n1g Feb 4, 2026
e24767f
ci(hotfix): Skopeo copy
ko3n1g Feb 4, 2026
d959620
ci(hotfix): Add skopeo
ko3n1g Feb 4, 2026
9d71cb1
Reapply "Add MTP support for hybrid models (#2363)" (#3207)
sancha Feb 4, 2026
b043863
Fix uv install for GH actions (#3259)
Phlip79 Feb 4, 2026
dd7d141
Update the project structure in README (#3251)
janEbert Feb 5, 2026
1f6d8c2
chore: rotate oncall schedule
github-actions[bot] Feb 5, 2026
1b11076
Cherry-pick: Fix mtp_num_layers and clip_qk issues (#2581, #2776) (#3…
BestJuly Feb 5, 2026
111a2a0
RL: training cudagraphs functional test (#3235)
mathemakitten Feb 5, 2026
1934391
[Main] fix cg missing wgrad hook (#3074)
Wohox Feb 5, 2026
801f12f
Avoid .cuda call on meta device in LanguageModel (#3202)
nschank Feb 5, 2026
347ad21
Nano QAT/D fix with sft tokenizer and datasets (#3254)
ChenhanYu Feb 5, 2026
3c0a4f3
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Feb 6, 2026
0434f87
fix checkpointing error message (#3203)
dimapihtar Feb 5, 2026
8379d43
Revert "fix checkpointing error message (#3203)" (#3283)
ko3n1g Feb 6, 2026
e2e5a6a
Reapply "fix checkpointing error message (#3203)" (#3283) (#3285)
ko3n1g Feb 6, 2026
a116ce3
docs: Add changelog for 0.15.3 (#3286)
ko3n1g Feb 6, 2026
4376cc5
ci: Set throughput tests as flaky (#3301)
chtruong814 Feb 6, 2026
f92460b
chore: Move GB200 tests to nightly (#3302)
ko3n1g Feb 6, 2026
cfbe9b5
Ensure type-checker understands use of Submodules in bert_model (#3256)
nschank Feb 6, 2026
a63d045
Override extra_repr instead of __repr__ (#3200)
nschank Feb 7, 2026
f68c7c1
Replace ModuleSpec with Protocols for LayerNorm submodules (#3090)
nschank Feb 7, 2026
2f99ee8
chore: Remove gpt_grpo_tp2tp1_pp4pp2_dp8_583m_throughputtest
ko3n1g Feb 7, 2026
e3ae6e4
Non colocated refit (#3213)
wdykas Feb 7, 2026
554ce49
Fuse permute+pad and unpermute+unpad ops for FP8/FP4 training (#2763)
xiaoxi-wangfj Feb 7, 2026
7cbbba2
Add check to prevent MFSDP from numeric issue in gradient accumulate …
shjwudp Feb 7, 2026
c99c962
update get_embedding_ranks and get_position_embedding_ranks docstring…
c1lovez1 Feb 7, 2026
6d81e3d
ci: Add secrets detector (#3180)
chtruong814 Feb 7, 2026
a3ec4b0
Param offset in _ParamAndGradBucket should be aligned (#3007)
skydoorkai Feb 7, 2026
916301a
updates to support modelopt EAGLE training with CP (#3147)
yeyu-nvidia Feb 9, 2026
6103cb5
Ensure type-checker understands use of Submodules in llava_model (#3257)
nschank Feb 9, 2026
4ff7686
M-FSDP: Remove redundant stream waits in HSDP to prevent CG fail (#2941)
shjwudp Feb 9, 2026
3257093
fully remove legacy tokenizer system (#2946)
dimapihtar Feb 9, 2026
3069591
General README and pyproject fixes (#2907)
ahmadki Feb 9, 2026
3bb539e
chore: More aggressive checkpointing (#3315)
ko3n1g Feb 9, 2026
c072f89
ci: Pin down setuptools to lt 82 (#3313)
ko3n1g Feb 9, 2026
9ddbce3
fix: T5 dataset (#3307)
ko3n1g Feb 9, 2026
f14d161
fix: numpy overflow (#3306)
ko3n1g Feb 9, 2026
8d79987
ci: Revert "ci: Add secrets detector (#3180)" (#3330)
chtruong814 Feb 10, 2026
55c3e63
ci: Add more tests, run on merge-queue (#3317)
ko3n1g Feb 10, 2026
ba76934
ci: Remove merge-gate environment check (#3331)
chtruong814 Feb 10, 2026
ab5e277
Use FP4 context for mamba (#2604)
kwyss-nvidia Feb 10, 2026
fc557ec
ci: Ensure we run all functional tests in merge group (#3332)
chtruong814 Feb 10, 2026
55198ba
Replace ModuleSpec with Protocols for inputs to MLP (#3084)
nschank Feb 10, 2026
5eb20b8
ci: Fix merge queue functional tests (#3337)
chtruong814 Feb 10, 2026
367f0b8
ci: skip queue in merge-gate (#3343)
ko3n1g Feb 10, 2026
3fb6006
ci: Timeout for functional tests (#3346)
ko3n1g Feb 10, 2026
76cf11e
update checkpointing documentation (#3347)
dimapihtar Feb 10, 2026
836d473
Update golden values to reflect improvements (#3350)
tdene Feb 10, 2026
2451508
BUGFIX: gpt vs hybrid model mtp naming mismatch (#3334)
sancha Feb 10, 2026
8da949e
Disable flaky test (#3354)
tdene Feb 10, 2026
bb97791
re-enable gpt grpo tests (#3348)
jon-barker Feb 10, 2026
4bce841
Fix SFT Pipeline when TP>1 (#3268)
asolergi-nv Feb 10, 2026
f5238ba
Fixes for KD mode (#3342)
AAnoosheh Feb 10, 2026
c1169ea
chore: rotate oncall schedule
github-actions[bot] Feb 11, 2026
4f025a1
chore: Update codeowners file (#3365)
ko3n1g Feb 11, 2026
66ec17e
Siddharth/fix inference functional tests (#3357)
sidsingh-nvidia Feb 11, 2026
6a9da99
Switch oncall (#3360)
janEbert Feb 11, 2026
b6e883b
Add missing RMSNorm to llama train script (#3314)
AAnoosheh Feb 11, 2026
cd14090
Fix inference for MTP models (#3297)
tdene Feb 11, 2026
6f5de16
Add a logprobs test with real gpt model. (#2870)
yobibyte Feb 11, 2026
b5d50cb
Add simple GRPO functional test (#3323)
tdene Feb 11, 2026
1c245c7
ci: Concurrency control for merge-queue (#3353)
ko3n1g Feb 11, 2026
d9f075c
ci: Update golden value download script to work with Github (#3335)
chtruong814 Feb 11, 2026
d0b768f
Removing etc from main index page, shifted name of discussions (#3271)
megnvidia Feb 11, 2026
7d1acf6
fix: correct typos 'seperated' and 'recieved' (#3305)
thecaptain789 Feb 11, 2026
2807a4e
Improved PyTorch profiler and added PyTorch execution trace (#3273)
shengf-nv Feb 11, 2026
1959739
build: Bump TE on 2.12 (#3371)
ko3n1g Feb 11, 2026
f06e669
ci(hotfix): job conditions (#3376)
ko3n1g Feb 11, 2026
6467cbc
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Feb 12, 2026
faced51
Record moe routing decisions during inference. (#3034)
sidsingh-nvidia Feb 11, 2026
dedb6dd
[Main] Fix EP Overlap Bugs for Full-Iter CG (#3164)
Wohox Feb 12, 2026
11a4659
Avoid direct pickle import (#3375)
maanug-nv Feb 12, 2026
fe9279e
Delete old pretrain_* files (#3359)
Phlip79 Feb 12, 2026
7df15bd
Add Qwen3-VL support with Megatron-FSDP (#2841)
xuwchen Feb 12, 2026
c65fb25
Refactor Mamba chunked prefill (#3265)
santhnm2 Feb 12, 2026
47938af
Improved parallel logging of learning rate (#3319)
jstjohn Feb 12, 2026
a51c1c8
Add enhanced event tracking with TTFT measurement and compact seriali…
lmcafee-nvidia Feb 12, 2026
dbc444f
Add assertion that max_requests is divisible by tp_size (#3304)
santhnm2 Feb 12, 2026
1123fc0
Move to using the Inference OpenAI API server (#3107)
ArEsKay3 Feb 12, 2026
4184bfa
Revert "Move to using the Inference OpenAI API server (#3107)"
ko3n1g Feb 13, 2026
9119fae
Update moe github test cases. (#3077)
Victarry Feb 13, 2026
e0aa16b
Revert "Update moe github test cases. (#3077)"
ko3n1g Feb 13, 2026
28ccdaa
Split layer_specs to return Submodules instead of ModuleSpecs (#3255)
nschank Feb 13, 2026
76a9f47
ci: Remove gpu sanity check (#3420)
chtruong814 Feb 13, 2026
d10eb6f
[Critical-Bug] Fix Uneven PP for Mamba models (Nemotron3-nano) (#3399)
kevalmorabia97 Feb 13, 2026
2611830
Fix for rl (#3390)
shanmugamr1992 Feb 13, 2026
4578ed8
Add check for full_iteration scope before instantiating CudaGraphMana…
vasunvidia Feb 13, 2026
698feec
Fix broken links throughout (#3230)
megnvidia Feb 13, 2026
d401490
Extract intermediate embeddings of transformer block (#3060)
sajadn Feb 13, 2026
545bff9
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Feb 14, 2026
b890099
Decouple topk and loss from DSA Indexer (#3248)
kunlunl Feb 13, 2026
7a8f305
Move to using the Inference OpenAI API server (bis) (#3395)
tdene Feb 14, 2026
4beb8ca
Make Mamba inference state memory ratio configurable (#3322)
santhnm2 Feb 16, 2026
cbb47c8
Fix configs for RL model environments (#3441)
tdene Feb 16, 2026
8f1c2f8
Replace pickle with json in rl_utils (#3351)
tdene Feb 16, 2026
057c804
fix: correct typo in demo training example (#3428)
dndnda Feb 17, 2026
b218e64
Clean up logging inside inference flask server (#3437)
tdene Feb 17, 2026
3c69780
ci: Update release-docs workflow to use FW-CI-templates v0.72.0 (#3438)
chtruong814 Feb 17, 2026
74ef64e
Fix --tokenizer-hf-include-special-tokens (#3422)
jon-barker Feb 17, 2026
267cf1f
Update num_tokens_to_generate default for Gym (#3453)
tdene Feb 17, 2026
0627623
Fix slowdown in inference flask server (#3445)
tdene Feb 17, 2026
a22c40e
Add a normalized scale for MTP per token loss (#3159)
BestJuly Feb 17, 2026
d7500d4
[Bugfix] Fix nan loss caused by zero token in MTP (#3396)
BestJuly Feb 17, 2026
ad5a627
ci: Add testing branches
ko3n1g Feb 18, 2026
cd71d4c
chore: rotate oncall schedule
github-actions[bot] Feb 18, 2026
f1908bc
Log RL metrics per environment (#3446)
yobibyte Feb 18, 2026
1106df4
Move tensor offload/onload out of RL code (#3029)
tdene Feb 18, 2026
0672477
Add Engine event to the follow up requests after checkpointing (#3473)
ArEsKay3 Feb 18, 2026
7b016be
Fix another inference flask / Gym interaction (#3467)
tdene Feb 18, 2026
acb7273
adding in copyright blurb at the top of md file (#3394)
megnvidia Feb 18, 2026
fdde15e
[Megatron-FSDP] Add fsdp_all_gather_in_start_param_sync option in DDP…
shjwudp Feb 18, 2026
77f22f2
ci: Update release workflow to include changelog and publish docs (#3…
chtruong814 Feb 18, 2026
1666b45
ci(fix): Weekly GPT tests (#3443)
ko3n1g Feb 18, 2026
0d0943c
ci: Remove environments (#3462)
ko3n1g Feb 19, 2026
d07f16c
update HF tokenizer defaults (#3440)
dimapihtar Feb 19, 2026
1d694c2
PTQ changes for upcoming QAD (#3124)
AAnoosheh Feb 19, 2026
655cc8e
ci: Bump preflight to detect our svc (#3494)
ko3n1g Feb 19, 2026
7f35af4
build: Drop Python 3.10 support and pip install one-logger (#3485)
ko3n1g Feb 19, 2026
2d06cc9
ci: Bump pre-flight for Bot SSO (#3497)
ko3n1g Feb 19, 2026
a781f3c
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Feb 19, 2026
50ebe8e
Revert "build: Drop Python 3.10 support and pip install one-logger (#…
ko3n1g Feb 19, 2026
c191ae8
Fix chunked prefill edge cases (#3404)
santhnm2 Feb 19, 2026
9f611b7
ci: Enable MBridge downstream testing via PR (#3483)
ko3n1g Feb 19, 2026
7b6e226
ci: Remove gitlab docs build job and set LTS integration and function…
chtruong814 Feb 19, 2026
31bd4a3
[OMNIML-3232] ModelOpt: add full TE spec option and wire Mamba stack …
yueshen2016 Feb 20, 2026
9ba248e
Track off-policyness across RL steps (#3030)
tdene Feb 20, 2026
9d72f63
chore(beep boop 🤖): Bump (main) (2026-02-20)
github-actions[bot] Feb 20, 2026
b7aa6a0
ci: MBridge testing branch name during merge-queues (#3513)
ko3n1g Feb 20, 2026
7a36263
ci: Enable Dependabot Automerge (#3487)
ko3n1g Feb 20, 2026
e8fd432
ci: Also sync direct teams (#3484)
ko3n1g Feb 20, 2026
01b361c
Multimodal: fix argument checking (#3449)
faradawn Feb 20, 2026
773c113
Fix Megatron-FSDP optimizer state DCP checkpointing, and fix DTensor …
cspades Feb 20, 2026
b555baf
Update copy-pr-bot.yaml [skip ci]
github-actions[bot] Feb 21, 2026
32efeff
Renable full_iteration cuda graphs for inference. Add them for the ma…
sidsingh-nvidia Feb 20, 2026
39 changes: 39 additions & 0 deletions .coderabbit.yaml
@@ -0,0 +1,39 @@
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
language: "en-US"

# Only comment on Critical/Major bugs. No Minor, Trivial, or style comments.
tone_instructions: "Only comment on Critical or Major bugs. Never comment on Minor issues, style, refactoring, or suggestions. When in doubt, stay silent."

reviews:
  # Use chill profile - filters out nitpicks automatically
  profile: "chill"

  # Disable all summary features
  high_level_summary: false
  high_level_summary_in_walkthrough: false

  # Disable walkthrough comment entirely
  collapse_walkthrough: true
  changed_files_summary: false
  sequence_diagrams: false

  # Disable status/effort estimates
  review_status: false
  commit_status: false
  estimate_code_review_effort: false

  # Disable auto-suggestions for labels/reviewers
  suggested_labels: false
  suggested_reviewers: false

  # Disable related issues/PRs lookup
  assess_linked_issues: false
  related_issues: false
  related_prs: false

  # Auto-review disabled - only review when explicitly requested via @coderabbitai review
  auto_review:
    enabled: false

chat:
  auto_reply: true
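As a gloss on the config above (a hypothetical sketch, not CodeRabbit's actual code): with `auto_review.enabled: false`, a review should run only when someone explicitly comments `@coderabbitai review` on the PR. The gating logic amounts to:

```python
# Hypothetical sketch of the review gate implied by .coderabbit.yaml above.
# The dict mirrors the relevant keys; this is not CodeRabbit's implementation.
config = {"auto_review": {"enabled": False}, "chat": {"auto_reply": True}}

def should_review(comment_body: str) -> bool:
    """Return True if a review should run for this event."""
    if config["auto_review"]["enabled"]:
        return True  # auto-review would trigger on every push
    # Otherwise, only an explicit request triggers a review.
    return "@coderabbitai review" in comment_body

assert not should_review("LGTM, merging")
assert should_review("@coderabbitai review please")
```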
5 changes: 0 additions & 5 deletions .coveragerc

This file was deleted.

4 changes: 4 additions & 0 deletions .flake8
@@ -0,0 +1,4 @@
[flake8]
max-line-length = 100
extend-ignore = E203,E501,F401,E402,E714
per-file-ignores = __init__.py:F401
61 changes: 61 additions & 0 deletions .github/CODEOWNERS
@@ -0,0 +1,61 @@
megatron/core/ @NVIDIA/core-adlr @NVIDIA/core-nemo

megatron/core/models/gpt/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/gpt

megatron/core/models/multimodal/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/multi-modal

megatron/core/models/mamba/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/hybrid-mamba
megatron/core/ssm/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/hybrid-mamba

megatron/core/datasets/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/datasets

megatron/core/tokenizers/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/tokenizers

megatron/core/distributed/fsdp/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/megatron-fsdp

megatron/core/transformer/fsdp_dtensor_checkpoint.py @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/megatron-fsdp

megatron/core/dist_checkpointing/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/dist-checkpointing

megatron/core/optimizer/distrib_optimizer/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/dist-optimizer

megatron/core/inference/modelopt_support @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/quantization-and-inference

megatron/core/datasets/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/datasets

megatron/core/pipeline_parallel/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/pipeline-parallelism

megatron/core/transformer/ @NVIDIA/core-adlr @NVIDIA/core-nemo

megatron/core/transformer/moe/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/mixture-of-experts-adlr @NVIDIA/mixture-of-experts-devtech

megatron/core/inference/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/inference

megatron/core/parallel_state.py @NVIDIA/core-adlr @NVIDIA/core-nemo

megatron/core/post_training/ @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/post-training

megatron/post_training/ @NVIDIA/post-training

megatron/core/transformer/cuda_graphs.py @NVIDIA/core-adlr @NVIDIA/core-nemo @NVIDIA/cuda-graphs

.gitlab/ @NVIDIA/ci
.github/ @NVIDIA/ci
.gitlab-ci.yml @NVIDIA/ci
docker/ @NVIDIA/ci
tests/functional_tests/python_test_utils/ @NVIDIA/ci
tests/functional_tests/shell_test_utils/ @NVIDIA/ci
tests/test_utils/recipes/ @NVIDIA/ci
tests/unit_tests/run_ci_test.sh @NVIDIA/ci

# API Backwards Compatibility Check
scripts/check_api_backwards_compatibility.py @NVIDIA/ci
scripts/README_API_COMPAT.md @NVIDIA/ci
.github/workflows/check_api_backwards_compatibility_workflow.yml @NVIDIA/ci
docs/api-backwards-compatibility-check.md @NVIDIA/ci
tests/unit_tests/test_api_backwards_compat_setup.py @NVIDIA/ci

megatron/rl/ @NVIDIA/reinforcement-learning
examples/rl/ @NVIDIA/reinforcement-learning
test/unit_tests/test_rl_utils.py @NVIDIA/reinforcement-learning
train_rl.py @NVIDIA/reinforcement-learning
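One detail worth knowing when reading the CODEOWNERS file above: when multiple patterns match a path, the last matching pattern in the file takes precedence. A minimal sketch (not GitHub's implementation) using a few of the rules above:

```python
# Simplified CODEOWNERS resolution: later matching rules override earlier ones.
# Rules are listed in file order, as in the CODEOWNERS diff above.
rules = [
    ("megatron/core/", ["@NVIDIA/core-adlr", "@NVIDIA/core-nemo"]),
    ("megatron/core/datasets/",
     ["@NVIDIA/core-adlr", "@NVIDIA/core-nemo", "@NVIDIA/datasets"]),
    ("megatron/core/transformer/", ["@NVIDIA/core-adlr", "@NVIDIA/core-nemo"]),
    ("megatron/core/transformer/moe/",
     ["@NVIDIA/core-adlr", "@NVIDIA/core-nemo",
      "@NVIDIA/mixture-of-experts-adlr", "@NVIDIA/mixture-of-experts-devtech"]),
]

def owners(path):
    """Return the owners from the LAST rule whose prefix matches `path`."""
    matched = None
    for prefix, teams in rules:  # later matches overwrite earlier ones
        if path.startswith(prefix):
            matched = teams
    return matched

assert owners("megatron/core/transformer/moe/router.py")[-1] \
    == "@NVIDIA/mixture-of-experts-devtech"
assert owners("megatron/core/datasets/gpt_dataset.py")[-1] == "@NVIDIA/datasets"
```

Real CODEOWNERS patterns also support globs (`*`, `**`); prefix matching here is only enough to show the precedence rule.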
29 changes: 29 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,29 @@
---
name: Bug report
about: Create a report to help us improve the repository or project
title: ""
labels: bug
assignees: ''

---

**Describe the bug**

A clear and concise description of what the bug is. Tag the [@mcore-oncall](https://github.com/orgs/NVIDIA/teams/mcore-oncall)
to get oncall's attention to this issue.

**Steps/Code to reproduce bug**

Please list the *minimal* steps or a code snippet that allows us to reproduce the bug.

A helpful guide on how to craft a minimal bug report: http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports.


**Expected behavior**

A clear and concise description of what you expected to happen.


**Additional context**

Add any other context about the problem here.
2 changes: 2 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,2 @@
blank_issues_enabled: false

23 changes: 23 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,23 @@
---
name: Feature request
about: Suggest an idea for this project
title: ""
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Tag the [@mcore-oncall](https://github.com/orgs/NVIDIA/teams/mcore-oncall)
to get oncall's attention to this issue.

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
13 changes: 13 additions & 0 deletions .github/ISSUE_TEMPLATE/question.md
@@ -0,0 +1,13 @@
---
name: QUESTION
about: Ask a question about Megatron-LM that is not a bug, regression, or enhancement request
title: "[QUESTION]"
labels: ''
assignees: ''

---

**Your question**
Ask a clear and concise question about Megatron-LM. Tag the [@mcore-oncall](https://github.com/orgs/NVIDIA/teams/mcore-oncall)
to get oncall's attention to this issue.
40 changes: 40 additions & 0 deletions .github/ISSUE_TEMPLATE/regression.md
@@ -0,0 +1,40 @@
---
name: REGRESSION
about: Report a regression in speed or accuracy due to a Megatron-LM update
title: "[REGRESSION]"
labels: ''
assignees: ''

---

**Describe the regression**
A clear and concise description of what the regression is. Tag the [@mcore-oncall](https://github.com/orgs/NVIDIA/teams/mcore-oncall)
to get oncall's attention to this issue.

**To Reproduce**
Steps to reproduce the behavior. The easier it is to reproduce, the faster it will get maintainer attention.

**Previous performance**
What speed or accuracy did you previously see?

**New performance**
What speed or accuracy do you see after the update?

**Stack trace/logs**
If applicable, add the stack trace or logs related to the regression.

**Environment (please complete the following information):**
- Previous Megatron-LM commit ID
- New Megatron-LM commit ID
- Previous PyTorch version
- New PyTorch version
- Previous CUDA version
- New CUDA version
- Previous NCCL version
- New NCCL version

**Proposed fix**
If you have a proposal for how to fix the issue state it here or link to a PR.

**Additional context**
Add any other context about the problem here.