Conversation

red-hat-konflux-kflux-prd-rh02 bot commented Jan 16, 2026

Note: This PR body was truncated due to platform limits.

This PR contains the following updates:

Package | Change
torch | >=2.5.0,<2.6.0 -> >=2.10.0,<2.11.0

Release Notes

pytorch/pytorch (torch)

v2.10.0: PyTorch 2.10.0 Release

Compare Source

PyTorch 2.10.0 Release Notes

Highlights

Python 3.14 support for torch.compile(). Python 3.14t (freethreaded build) is experimentally supported as well.
Reduced kernel launch overhead with combo-kernels horizontal fusion in torchinductor
A new varlen_attn() op providing support for ragged and packed sequences
Efficient eigenvalue decompositions with DnXgeev
torch.compile() now respects use_deterministic_mode
DebugMode for tracking dispatched calls and debugging numerical divergence, making it simpler to track down subtle numerical bugs.
Intel GPU support: PyTorch support is expanded to the latest Panther Lake GPUs on Windows and Linux, enabling FP8 (core ops and scaled matmul) and complex matmul support, and extending SYCL support in the C++ Extension API for Windows custom ops.

For more details about these highlighted features, see the release blog post. Below are the full release notes for this release.

Backwards Incompatible Changes

Dataloader Frontend

  • Removed unused data_source argument from Sampler (#163134). This is a no-op unless you have a custom sampler that uses this argument; please update your custom sampler accordingly (see the sketch after this list).
  • Removed deprecated imports for torch.utils.data.datapipes.iter.grouping (#​163438). from torch.utils.data.datapipes.iter.grouping import SHARDING_PRIORITIES, ShardingFilterIterDataPipe is no longer supported. Please import from torch.utils.data.datapipes.iter.sharding instead.
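
A minimal sketch of what these two changes look like in user code; the sampler class and dataset here are hypothetical:

import torch
from torch.utils.data import Sampler

# New import location for the sharding datapipes (the old
# torch.utils.data.datapipes.iter.grouping path no longer works):
from torch.utils.data.datapipes.iter.sharding import (
    SHARDING_PRIORITIES,
    ShardingFilterIterDataPipe,
)

class EveryOtherSampler(Sampler):
    """Hypothetical custom sampler: data_source is no longer passed to Sampler.__init__."""

    def __init__(self, dataset):
        super().__init__()  # previously: super().__init__(dataset)
        self.dataset = dataset

    def __iter__(self):
        return iter(range(0, len(self.dataset), 2))

    def __len__(self):
        return (len(self.dataset) + 1) // 2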

torch.nn

  • Remove Nested Jagged Tensor support from nn.attention.flex_attention (#​161734)

ONNX

  • fallback=False is now the default in torch.onnx.export (#​162726)
  • The exporter now uses the dynamo=True option without fallback. This is the recommended way to use the ONNX exporter. To preserve 2.9 behavior, manually set fallback=True in the torch.onnx.export call.
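
A small sketch of both modes; the model, inputs, and file names are placeholders, and the parameter names follow the note above:

import torch

model = torch.nn.Linear(4, 2)
example_inputs = (torch.randn(1, 4),)

# 2.10 default: the dynamo=True exporter with no fallback.
torch.onnx.export(model, example_inputs, "model.onnx", dynamo=True)

# To preserve the 2.9 behavior, opt back into the fallback explicitly:
torch.onnx.export(model, example_inputs, "model_fallback.onnx", dynamo=True, fallback=True)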

Release Engineering

  • Rename pytorch-triton package to triton (#​169888)

Deprecations

Distributed

  • DeviceMesh
    • Added a warning for slicing flattened dim from root mesh and types for _get_slice_mesh_layout (#​164993)

We decided to deprecate an existing behavior of device-mesh slicing of a flattened dim, since it goes against the PyTorch design principle of explicit over implicit.

Version <2.9
import torch
from torch.distributed.device_mesh import init_device_mesh

device_type = (
    acc.type
    if (acc := torch.accelerator.current_accelerator(check_available=True))
    else "cpu"
)
mesh_shape = (2, 2, 2)
mesh_3d = init_device_mesh(
    device_type, mesh_shape, mesh_dim_names=("dp", "cp", "tp")
)

mesh_3d["dp", "cp"]._flatten()
mesh_3d["dp_cp"]  # This comes with no warning
Version >=2.10
import torch
from torch.distributed.device_mesh import init_device_mesh

device_type = (
    acc.type
    if (acc := torch.accelerator.current_accelerator(check_available=True))
    else "cpu"
)
mesh_shape = (2, 2, 2)
mesh_3d = init_device_mesh(
    device_type, mesh_shape, mesh_dim_names=("dp", "cp", "tp")
)

mesh_3d["dp", "cp"]._flatten()
mesh_3d["dp_cp"]  # This now comes with a warning because it implicitly changes the state of the original mesh. This behavior will eventually be removed in a future release; users should do the bookkeeping of flattened meshes explicitly.

Ahead-Of-Time Inductor (AOTI)

  • Move from/to to torch::stable::detail (#​164956)

JIT

  • torch.jit is not guaranteed to work in Python 3.14. Deprecation warnings have been added to user-facing torch.jit API (#​167669).

torch.jit should be replaced with torch.compile or torch.export.
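
A minimal migration sketch (the module and inputs are placeholders):

import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())
example_inputs = (torch.randn(2, 4),)

# Before (deprecated): scripted = torch.jit.script(model)
compiled = torch.compile(model)                        # eager-style compilation
exported = torch.export.export(model, example_inputs)  # portable ExportedProgram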

ONNX

  • The dynamic_axes option in torch.onnx.export is deprecated (#​165769)

Users should supply the dynamic_shapes argument instead. See https://docs.pytorch.org/docs/stable/export.html#expressing-dynamism for more documentation.
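
A hedged sketch of the migration; the model, file name, and dimension name are illustrative:

import torch

model = torch.nn.Linear(8, 2)
x = torch.randn(4, 8)
batch = torch.export.Dim("batch")

# Deprecated style:
#   torch.onnx.export(model, (x,), "model.onnx", input_names=["x"], dynamic_axes={"x": {0: "batch"}})

# Preferred: express dynamism with dynamic_shapes (one entry per positional input):
torch.onnx.export(model, (x,), "model.onnx", dynamo=True, dynamic_shapes=({0: batch},))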

Profiler

  • Deprecate export_memory_timeline method (#​168036)

The export_memory_timeline method in torch.profiler is being deprecated in favor of the newer memory snapshot API (torch.cuda.memory._record_memory_history and torch.cuda.memory._export_memory_snapshot). This change adds the deprecated decorator from typing_extensions and updates the docstring to guide users to the recommended alternative.
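
A short sketch of the recommended flow using the APIs named above (they remain underscore-prefixed, i.e. semi-private; requires a CUDA device):

import torch

torch.cuda.memory._record_memory_history(max_entries=100_000)  # start recording allocations

x = torch.randn(1024, 1024, device="cuda")
y = x @ x  # some CUDA work whose allocations we want to capture

torch.cuda.memory._export_memory_snapshot("memory_snapshot.pickle")  # dump the snapshot
torch.cuda.memory._record_memory_history(enabled=None)  # stop recording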

New Features

Autograd

  • Allow setting grad_dtype on leaf tensors (#​164751)
  • Add Default Autograd Fallback for PrivateUse1 in PyTorch (#​165315)
  • Add API to annotate disjoint backward for use with torch.utils.checkpoint.checkpoint (#​166536)

Complex Frontend

Composability

cuDNN

  • BFloat16 support added to cuDNN RNN (#​164411)
  • [cuDNN][submodule] Upgrade to cuDNN frontend 1.16.1 (#​170591)

Distributed

  • LocalTensor:

    • LocalTensor is a powerful debugging and simulation tool in PyTorch's distributed tensor ecosystem. It allows you to simulate distributed tensor computations across multiple SPMD (Single Program, Multiple Data) ranks on a single process. This is incredibly valuable for: 1) debugging distributed code without spinning up multiple processes; 2) understanding DTensor behavior by inspecting per-rank tensor states; 3) testing DTensor operations with uneven sharding across ranks; 4) rapid prototyping of distributed algorithms. Note that LocalTensor is designed for debugging purposes only. It has significant overhead and is not suitable for production distributed training.
    • LocalTensor is a torch.Tensor subclass that internally holds a mapping from rank IDs to local tensor shards. When you perform a PyTorch operation on a LocalTensor, the operation is applied independently to each local shard, mimicking distributed computation (collective operations are simulated locally, without actual network communication). LocalTensorMode is the context manager that enables LocalTensor dispatch: it intercepts PyTorch operations and routes them appropriately. The @maybe_run_for_local_tensor decorator is essential for handling rank-specific logic when implementing distributed code.
    • To get started with LocalTensor, users import from torch.distributed._local_tensor, initialize a fake process group, and wrap their distributed code in a LocalTensorMode context. Within this context, DTensor operations automatically produce LocalTensors (see the sketch after this list).
    • PRs: (#164537, #166595, #168110, #168314, #169088, #169734)
  • c10d:

    • New shrink_group implementation to expose ncclCommShrink API (#​164518)
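
Following the LocalTensor description above, here is a hypothetical getting-started sketch. The torch.distributed._local_tensor path comes from the entry itself, but the fake process-group setup and the LocalTensorMode signature are assumptions; consult the LocalTensor tests and docs for the exact API:

import torch
import torch.distributed as dist
from torch.distributed._local_tensor import LocalTensorMode  # path taken from the entry above
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

# Assumption: a "fake" single-process backend and store stand in for real ranks.
from torch.testing._internal.distributed.fake_pg import FakeStore

WORLD_SIZE = 4
dist.init_process_group("fake", store=FakeStore(), rank=0, world_size=WORLD_SIZE)

# Assumption: LocalTensorMode takes the set of simulated ranks.
with LocalTensorMode(frozenset(range(WORLD_SIZE))):
    mesh = init_device_mesh("cpu", (WORLD_SIZE,))
    dt = distribute_tensor(torch.arange(16.0).reshape(4, 4), mesh, [Shard(0)])
    out = dt + 1  # dispatched per simulated rank; local shards can be inspected for debugging

dist.destroy_process_group()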

Dynamo

  • torch.compile now fully works in Python 3.14 (#​167384)
  • Add option to error or disable applying side effects (#​167239)
  • Config flag (skip_fwd_side_effects_in_bwd_under_checkpoint) to allow eager and compile activation-checkpointing divergence for side-effects (#​165775)
  • torch._higher_order_ops.print for enabling printing without graph breaks or reordering (#​167571)

FX

  • Added node metadata annotation API

  • Disable preservation of node metadata when enable=False (#​164772)

  • Annotation should be mapped across submod (#​165202)

  • Annotate bw nodes before eliminate dead code (#​165782)

  • Add logging for debugging annotation (#​165797)

  • Override metadata on regenerated node in functional mode (#​166200)

  • Skip copying custom meta for gradient accumulation nodes; tag with is_gradient_acc=True (#​167572)

  • Add metadata hook for all nodes created in runtime_assert pass (#​169497)

  • Update gm.print_readable to include Annotation (#​165397)

  • Add annotation to assertion nodes in export (#​167171)

  • Add debug mode to print meta in fx graphs (#​165874)

Inductor

Ahead-Of-Time Inductor (AOTI)

  • Integrate AOTI as a backend. (#​167338)
  • Add AOTI mingw cross compilation for Windows. (#​163188)

MPS

torch.nn

ONNX

  • A new testing module torch.onnx.testing with a testing utility assert_onnx_program (#​162495)

Profiler

Quantization

  • Add _scaled_mm_v2 API (#​164141)

  • Add scaled_grouped_mm_v2 and python API (#​165154)

  • Add embedding_bag_byte_prepack_with_rowwise_min_max and embedding_bag_{2/4}bit_prepack_with_rowwise_min_max (#​162924)

  • Add MXFP4 support for _scaled_grouped_mm_v2 via FBGEMM kernels (#166530)

Release Engineering

ROCm

  • Enable grouped GEMM via regular GEMM fallback (#​162419)
  • Enable grouped GEMM via CK (#​166334, #​167403)
  • Enable ATen GEMM overload for FP32 output from FP16/BF16 inputs (#​162600)
  • Support torch.cuda._compile_kernel (#​162510)
  • Enhanced Windows support
  • load_inline (#​162577)
  • Enable AOTriton runtime compile (#​165538)
  • AOTriton scaled_dot_product_attention (#​162330)
  • Add gfx1150 gfx1151 to hipblaslt-supported GEMM lists (#​164744)
  • Add scaled_mm v2 support. (#​165528)
  • Add torch.version.rocm, distinct from torch.version.hip (#​168097)

XPU

  • Support ATen operators scaled_mm and scaled_mm_v2 for Intel GPU (#​166056)
  • Support ATen operator _weight_int8pack_mm for Intel GPU (#​160938)
  • Extend SYCL support in PyTorch CPP Extension API to allow users to implement new custom operators on Windows (#​162579)
  • Add API torch.xpu.get_per_process_memory_fraction for Intel GPU (#​165511)
  • Add API torch.xpu.set_per_process_memory_fraction for Intel GPU (#​165510)
  • Add API torch.xpu.is_tf32_supported for Intel GPU (#​163141)
  • Add API torch.xpu.can_device_access_peer for Intel GPU (#​162705)
  • Add API torch.accelerator.get_memory_info for Intel GPU (#​162564)
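
A small, hedged sketch exercising a few of these new Intel GPU APIs (argument forms are my assumptions; requires an XPU-enabled build and device):

import torch

if torch.xpu.is_available():
    # Names come from the entries above; exact argument forms are assumed.
    torch.xpu.set_per_process_memory_fraction(0.5)
    print("memory fraction:", torch.xpu.get_per_process_memory_fraction())
    print("tf32 supported:", torch.xpu.is_tf32_supported())
    print("memory info:", torch.accelerator.get_memory_info())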

Improvements

Build Frontend

Composability

  • If you are using the torch.compile(backend="aot_eager") backend, it should now give results that are bitwise equivalent to eager. Previously it sometimes did not, due to extra compile-only decompositions running (#165910) (see the sketch after this list)
  • Some dynamic shape errors were changed to recommend using torch._check over torch._check_is_size (#164889)
  • Some unbacked (dynamic shape) improvements (#​162652, #​169612)
  • Some bugfixes for symbolic float handling in compile (#​166573, #​162788)
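
An illustrative check of the first point; the function and the exact-equality tolerances are my own example, not from the release notes:

import torch

def fn(x):
    return torch.nn.functional.gelu(x) * x.sum()

x = torch.randn(64, 64)
eager_out = fn(x)
compiled_out = torch.compile(fn, backend="aot_eager")(x)

# Per the note above, aot_eager should now match eager bitwise; rtol=atol=0 asserts exact equality.
torch.testing.assert_close(compiled_out, eager_out, rtol=0, atol=0)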

C++ Frontend

CUDA

  • Make torch.cuda.rng_set_state and torch.cuda.rng_get_state work in CUDA graph capture. (#​162505)
  • Enable templated kernels (#​162875)
  • Enable pre-compiled kernels (#​162972)
  • Add CUDA headers automatically (#​162634)
  • Remove outdated header_code argument (#​163165)
  • Prevent copies of std::vector in CUDA ForeachOps (#​163416)
  • Implement cuda-python CUDA stream protocol (#​163614)
  • Remove outdated checks and docs for cuBLAS determinism (#​161749)
  • Cleanup old workaround code in launch_logcumsumexp_cuda_kernel (#​164567)
  • Add a compile-time flag to trigger verbose logging for device-side asserts (#​166171)
  • Support SM 10.3 in custom CUTLASS matmuls (#​162956)
  • Enable CUTLASS matmuls on Thor (#​164836)
  • Add per_process_memory_fraction option to PYTORCH_CUDA_ALLOC_CONF (#​161035)
  • Support nested memory pools (#​168382)
  • Upgrade cuDNN to 9.15.1 for CUDA 13 builds (#​169412)

Distributed

Dynamo

  • Turn on capture_scalar_outputs and capture_dynamic_output_shape_ops when fullgraph=True (#​163121, #​163123)

  • Improved tracing for dict key hashing (#​169204)

  • Tracing support for torch.cuda.stream (#​166472)

  • Improved tracing of torch.autograd.Functions (#​166788)

  • Miscellaneous smaller tracing support additions:

  • Extend collections.defaultdict support with *args, **kwargs and custom default_factory (#​166793)

  • Support for bitwise xor (#​166065)

  • Support repr on user-defined objects (#​167372)

  • Support new typing union syntax X | Y (#​166599)

Export

  • Improved fake tensor leakage detection in export (#​163516)
  • Improved support for tensor subclasses (#​163770)

FX

  • Add tensor subclass printing support in fx/graph.py (#​164403)
  • Update Node.is_impure check if subgraph contains impure ops (#​166609, #​167443)
  • Explicitly remove call_mod_node_to_replace after inlining the submodule in const_fold._inline_module (#166871)
  • Add strict argument validation to Interpreter.boxed_run (#​166784)
  • Use stable topological sort in fuse_by_partitions (#​167397)

Inductor

  • Pruned failed compilations from Autotuning candidates (#​162673)
  • Extend triton_mm auto-tune options for HIM shapes (#​163273)
  • Various fixes for AOTI-FX backend
  • Solve for undefined symbols in dynamic input shapes (#​163044)
  • Support symbol and dynamic scalar graph inputs and outputs (#​163596)
  • Support unbacked symbol definitions (#​163729)
  • Generalize FloorDiv conversion to handle more complex launch grids. (#​163828)
  • Don't flatten constant args (#​166144)
  • Support SymInt placeholder (#167757)
  • Support torch.cond (#​163234)
  • Add tanh, exp, and sigmoid activations for Cutlass backend. (#​162535) (#​162536)
  • Hardened the experimental horizontal fusion torch._inductor.config.combo_kernels (#​162442) (#​166274) (#​162759) (#​167781) (#​168127) (#​168946) (#​168109) (#​164918)
  • Enable TMA store for TMA matmul templates on Triton. (#​160480)
  • Add Blackwell GPU templates (persistent matmul, FP8 scaled persistent + TMA GEMMs, CuTeDSL grouped GEMM, FlexFlash forward, FlexAttention configs). (#​162916) (#​163147) (#​167340) (#​167040) (#​165760)
  • Support qconv_pointwise.tensor and qconv2d_pointwise.binary_tensor quantized operations. (#​166608)
  • Support out_dtype argument for matmul operations. (#​163393)
  • Add support for bound methods in pattern matcher. (#​167795)
  • Add way to register custom rules for graph partitioning. (#​166458) (#​163310)
  • Add codegen support for fast_tanhf on ROCm. (#​162052)
  • Support deepseek-style FP8 scaling in Inductor. (#​164404)
  • Enable int64 indexing in convolution and matmul templates. (#​162506)
  • Add SDPA patterns for T5 variants when batch size is 1. (#​163252)
  • Add mechanism to get optimal autotune decision for FlexAttention. (#​165817)
  • Add fallback config fallback_embedding_bag_byte_unpack. (#​163803)
  • Expose config for FX bucket all_reduces. (#​167634)
  • Add in-kernel NaN check support. (#​166008)
  • Enable pad_mm and decompose_mm_pass pass on Intel GPU. (#​166618) (#​166613)
  • Improve CUDA support for int8pack_mm weight-only quantization pattern. (#​161680) (#​161848) (#​163461)
  • Improve heuristics for pointwise kernels on ROCm. (#​163197)
  • Enable mix-order reduction fusion earlier and allow fusing more nodes. (#​168209)
  • Make mix order reduction work with dynamic shapes (#​168117)
  • Better use of memory tracking (#​168121)
  • Turn on LOAF (for OSS) by default. (#​162030)
  • Log kernel autotuning results to CSV. (#​164191)
  • Add warning for CUDA graph re-recording from dynamic shapes. (#​162696)
  • Quiesce triton compile workers by default. (#​169485)
  • Support masked vectorization for tail loops with integer and bool datatypes. (#​165885)
  • Support tile-wise (1x128) FP8 scaling in Inductor. (#​165132)
  • Support fallback for all GEMM-like operations. (#​165755)
  • Enable Triton kernels with unbacked inputs. (#​164509)
  • Add AVX512-VNNI-based micro kernel for CPU GEMM template. (#​166846)
  • Support mixed dtype in native_layer_norm_backward meta function. (#​159830)
  • Add tech specs for MI350 GPU. (#​166576)
  • Add assume_32bit_indexing inductor config option. (#​167784)
  • Wire up mask_mod and blockmask to FlexFlash implementation. (#​166359)
  • More aggressive mix order reduction for better fusion. (#​166382)
  • Mix order reduction heuristics and tuning. (#​166585)
  • CuteDSL flat indexer needs to be colexicographic in coordinate space (#166657)

MPS

Nested Tensor (NJT)

  • Added NJT support for share_memory_ (#​162272)

torch.nn

  • Support batch size 0 for flash attention in scaled_dot_product_attention (#​166318)
  • Raise an error when using a sliced BlockMask in nn.functional.flex_attention (#​164702)

ONNX

  • Improved graph capture logic to preserve dynamic shapes and improve conversion success rate
  • Cover all FX passes into backed size oblivious (#​166151)
  • Set prefer_deferred_runtime_asserts_over_guards to True (#​165820)
  • Various warning and error messages improvements (#​162819, #​163074, #​166412, #​166558, #​166692)
  • Improved operator translation logic
  • Update weight tensor initialization in RMSNormalization (#​166550)
  • Support enable_gqa when dropout is non-zero (#​162771)
  • Implement tofile() in ONNX IR tensors for more efficient ONNX model serialization (#​165195)

Optimizer

  • Make Adam, AdamW work with nonzero-dim Tensor betas (#​149939)
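
A hedged one-liner illustrating the entry above (whether any extra flags are required is not stated here):

import torch

params = [torch.randn(4, 4, requires_grad=True)]
# Per the entry above, betas may now be nonzero-dim tensors rather than Python floats.
opt = torch.optim.AdamW(params, lr=1e-3, betas=(torch.tensor([0.9]), torch.tensor([0.999])))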

Profiler

  • Expose Kineto event metadata in PyTorch Profiler events (#​161624)
  • Add user_metadata display to memory visualizer (#​165939)
  • Add warning for clearing profiler events at the end of each cycle (#​168066)

Python Frontend

  • Improved torch.library and custom ops to support view functions (#​164520)
  • Rework PyObject preservation to make it thread safe, significantly simpler and better handle some edge cases (#​167564)
  • Remove reference cycle in torch.save to improve memory usage (#​165204)
  • Add generator arg to rand*_like APIs (#166160) (see the sketch after this list)
  • Support negative index arguments to torch.take_along_dim (#152161)
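
For example, the new generator argument on the *_like factories (a minimal sketch; the keyword name is assumed from the entry above):

import torch

g = torch.Generator().manual_seed(0)
x = torch.empty(3, 3)

a = torch.rand_like(x, generator=g)  # reproducible: same seed, same values
b = torch.randn_like(x, generator=torch.Generator().manual_seed(0))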

Quantization

  • half and bf16 support for fused_moving_avg_obs_fake_quant (#​162620, #​164175)
  • bf16 support for fake_quantize_learnable_per_channel_affine (#​165098)
  • bf16 support for backward of torch._fake_quantize_learnable_per_tensor_affine (#​165362)
  • Add NVFP4 two-level scaling to scaled_mm (#​165774)
  • Add support for fp8_input/fp8_weight/bf16_bias and bf16_output for fp8 qconv in CPU (#​167611)
  • Make the torch.float4_e2m1fn_x2 dtype support equality comparisons (#​169575)
  • add copy_ support for torch.float4_e2m1fn_x2 dtype (#​169595)

Release Engineering

ROCm

  • Allow custom OpenBLAS library name for CMake build (#​166333)
  • Add gfx1150 gfx1151 to binary build targets (#​164782, #​164854, #​164763)
  • hipSPARSELt support - Update cuda_to_hip_mappings.py (#​167335)
  • New implementation of upsample_bilinear2d_backward (#​164572)
  • Remove env var HIPBLASLT_ALLOW_TF32 from codebase, TF32 always allowed (#​162998)
  • Enable multi

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

To execute skipped test pipelines, write the comment /ok-to-test.


Documentation

Find out how to configure dependency updates in MintMaker documentation or see all available configuration options in Renovate documentation.


red-hat-konflux-kflux-prd-rh02 bot commented Jan 16, 2026

⚠️ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: uv.lock
Command failed: uv lock --upgrade-package torch
Using CPython 3.12.12 interpreter at: /usr/bin/python3
  × No solution found when resolving dependencies for split (markers:
  │ python_full_version == '3.12.*' and platform_machine == 'aarch64' and
  │ platform_python_implementation != 'CPython' and sys_platform == 'linux'):
  ╰─▶ Because only the following versions of torchvision are available:
          torchvision<=0.20.0
          torchvision==0.20.0+cpu
          torchvision==0.20.1
          torchvision==0.20.1+cpu
          torchvision>=0.21.0
      and torchvision>=0.20.0,<=0.20.0+cpu depends on torch==2.5.0, we can
      conclude that torchvision>=0.20.0,<0.20.1 depends on torch==2.5.0.
      And because torchvision>=0.20.1,<=0.20.1+cpu depends on torch==2.5.1, we
      can conclude that torchvision>=0.20.0,<0.21.0 depends on one of:
          torch==2.5.0
          torch==2.5.1

      And because your project depends on torch>=2.10.0 and
      torchvision>=0.20.0,<0.21.0, we can conclude that your project's
      requirements are unsatisfiable.

      hint: The resolution failed for an environment that is not the current
      one, consider limiting the environments with `tool.uv.environments`.


coderabbitai bot commented Jan 16, 2026

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

Signed-off-by: red-hat-konflux-kflux-prd-rh02 <190377777+red-hat-konflux-kflux-prd-rh02[bot]@users.noreply.github.com>
red-hat-konflux-kflux-prd-rh02 bot force-pushed the konflux/mintmaker/main/torch-2.x branch from 925e743 to 8ffb5a3 on January 22, 2026 at 00:09
red-hat-konflux-kflux-prd-rh02 bot changed the title from "Update dependency torch to >=2.9.1,<2.10.0" to "Update dependency torch to >=2.10.0,<2.11.0" on Jan 22, 2026
red-hat-konflux-kflux-prd-rh02 bot changed the title from "Update dependency torch to >=2.10.0,<2.11.0" to "fix(deps): update dependency torch to >=2.10.0,<2.11.0" on Jan 22, 2026