Improve robustness of cebra model loading #292
Merged
MMathisLab merged 9 commits into main on Feb 2, 2026
Conversation
stes (Member, Author) commented:
Just checked https://endoflife.date/numpy, it seems that numpy <2 support actually ended a few months ago.
So I guess we could also just ignore that particular test and force numpy > 2...
stes (Member, Author) commented:
Although e.g. DeepLabCut still has a numpy <2 requirement: https://github.com/DeepLabCut/DeepLabCut/blob/85911cb83d315398ead65c1198e4991a73001834/setup.py#L68
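For reference, such a pin is usually expressed as a version constraint in `install_requires` (or the equivalent pyproject dependency list). The snippet below is purely illustrative and is not CEBRA's actual setup configuration:

```python
# Illustrative sketch of a numpy<2 pin in a setup.py, similar in spirit to the
# DeepLabCut constraint linked above; package name and bounds are placeholders.
from setuptools import setup

setup(
    name="example-package",
    version="0.0.1",
    install_requires=[
        "numpy>=1.21,<2.0",  # stay on numpy 1.x until downstream packages support 2.x
    ],
)
```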
stes (Member, Author) commented on Feb 1, 2026:


This PR:

- Keeps the `numpy<2` requirement for ~1 extra year (discussed with @MMathisLab today), given its widespread use in other packages used together with CEBRA.
- Re-enables `test_save_and_load`, which was previously skipped entirely due to a syntax issue. This revealed several issues in the loading logic.
- Improves `CEBRA.load`: instead of using three different backend choices, we now default to the (future-proof) `sklearn` backend. The state dict of the model is saved, and the model is always constructed from the state dict. This follows the recent change in torch (from 2.6.0) to discontinue unsafe loads via pickle (a sketch of this pattern follows this list).
- Fixes the `cebra/registry.py` module: when the `@parameterize` decorator is used, the class attributes are now properly passed to the wrapped class. A test has been added for this functionality (see the decorator sketch below).
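The state-dict-based loading approach roughly follows the pattern below. This is a minimal, hedged sketch of the general idea, not the actual `CEBRA.load` implementation; the helper names (`save_model`, `load_model`, `model_factory`) are illustrative:

```python
# Minimal sketch of state-dict based saving/loading. Persisting only the state
# dict and rebuilding the model from it avoids unpickling arbitrary objects,
# which torch >= 2.6.0 discourages by defaulting torch.load to weights_only=True.
import torch
import torch.nn as nn


def save_model(model: nn.Module, path: str) -> None:
    # Persist only tensors (the state dict), not the full pickled object.
    torch.save(model.state_dict(), path)


def load_model(model_factory, path: str) -> nn.Module:
    # Reconstruct the architecture first, then restore the weights.
    model = model_factory()
    state_dict = torch.load(path, map_location="cpu", weights_only=True)
    model.load_state_dict(state_dict)
    return model


if __name__ == "__main__":
    factory = lambda: nn.Linear(10, 3)
    save_model(factory(), "model.pt")
    restored = load_model(factory, "model.pt")
```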
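For the registry fix, the behavior being tested is that attributes defined on the decorated class survive registration of the parametrized variants. The sketch below illustrates that idea with a standalone decorator; the names (`REGISTRY`, `parametrize`, `ToyModel`) are hypothetical and do not reflect the actual `cebra/registry.py` code:

```python
# Hedged sketch of a parametrizing decorator that registers one variant per
# parameter combination while inheriting the class attributes of the wrapped class.
import itertools
from typing import Dict, Type

REGISTRY: Dict[str, Type] = {}


def parametrize(name_template: str, **grid):
    """Register one subclass per combination of keyword arguments."""

    def decorator(cls):
        keys = list(grid)
        for values in itertools.product(*(grid[k] for k in keys)):
            params = dict(zip(keys, values))
            name = name_template.format(**params)
            # Subclassing keeps class attributes defined on `cls` (defaults,
            # helper methods) available on the registered variant, while the
            # grid parameters become additional class attributes.
            variant = type(cls.__name__, (cls,), dict(params))
            REGISTRY[name] = variant
        return cls

    return decorator


@parametrize("toy-model-{num_units}", num_units=[16, 32])
class ToyModel:
    default_dropout = 0.1  # class attribute that should survive registration


assert REGISTRY["toy-model-32"].num_units == 32
assert REGISTRY["toy-model-32"].default_dropout == 0.1
```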