Adds configurable storage quotas on individual datasets #11997
Conversation
…was "ManagePermissions" in the original collection-level command, which I initially copied into the new dataset equivalent. But I don't think it was the right, or even an intentional, choice. #11987
…may be inherited from a parent collection. #11987
src/main/java/edu/harvard/iq/dataverse/engine/command/impl/DeleteDatasetQuotaCommand.java
Outdated
src/main/java/edu/harvard/iq/dataverse/engine/command/impl/GetDatasetQuotaCommand.java
    StorageQuota storageQuota = target.getStorageQuota();

    if (storageQuota != null) {
        storageQuota.setAllocation(allocation);
    }
Is there any check to prevent negative numbers? I guess 0 or negative will just block the ability to store more data.
Yes, that is the answer - it is possible to set it to a negative number, but in practice that will be equivalent to zero.
Tbh, I am generally less concerned about preventing invalid or meaningless values from being entered when it comes to superuser-only APIs, under the assumption that superusers should know what they are doing and/or can be expected to own the consequences of their actions.
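(FWIW, a minimal sketch of what such a guard could look like if we did want one; the helper below is hypothetical and not part of this PR:)

    // Hypothetical validation helper, not part of this PR: reject negative
    // allocations before they ever reach StorageQuota.setAllocation().
    static void validateAllocation(Long allocation) {
        if (allocation == null || allocation < 0L) {
            throw new IllegalArgumentException(
                    "Storage quota allocation must be a non-negative number of bytes, got: " + allocation);
        }
    }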
I did notice that myself the other day. I may have even done that on purpose... but the behavior is inconsistent/counter-intuitive, I agree.
…into a dedicated API #11987
That was also part of my rationale for folding this limits info into an existing API (optionally, when present), vs. figuring out how to communicate that there are no limits in an API dedicated to showing the limits. Would this be positive enough?
I didn't implement those... but I am 99.9% positive that they are in fact inherited, from the nearest container with a limit defined. I will double-check.
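(Roughly, the lookup I have in mind walks up the ownership chain - a sketch only, assuming getOwner()/getStorageQuota() accessors, not the actual service code:)

    // Illustrative sketch, not the real implementation: find the nearest
    // container (the dataset itself, or an ancestor collection) with a quota defined.
    static StorageQuota findEffectiveQuota(DvObjectContainer start) {
        for (DvObject current = start; current != null; current = current.getOwner()) {
            if (current instanceof DvObjectContainer) {
                StorageQuota quota = ((DvObjectContainer) current).getStorageQuota();
                if (quota != null) {
                    return quota; // the nearest limit wins
                }
            }
        }
        return null; // no quota defined anywhere up the chain
    }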
Yeah, this seems somewhat arbitrary; in practice it is simply a result of these two limits having been implemented by different developers - although I should have thought of making them more consistent when reviewing the file counts PR. I personally feel that having a global setting is useful, for the sake of being able to flip the enforcement on and off without losing the configuration. I would probably vote for having two separate settings for enforcing these.
Overall, I would prefer to have a separate PR for following up on many of the suggestions above, rather than trying to address them here on short notice (again, this was supposed to be a "low effort" PR).
I don't know what I was thinking there, really... Did I copy-and-paste that solution from some other place in the API? I don't know. I'm somewhat on the fence about that one. This I find very troubling:
I'm quite positive that both mine and Steven's implementations did have such warning messages in the JSF UI. Did they get muted by later changes to the page? I don't know. I want this fixed, but in a separate PR.
The changes made per QA feedback:
I made an attempt to make -X DELETE remove the quota, instead of setting the allocation to null; for some unclear reason that same code was bombing for datasets (??), so I left it unchanged in both cases, for now.
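(To make the difference concrete, a sketch of the two behaviors - the entity-removal variant and its JPA calls are assumptions, not the code in this PR:)

    // Illustrative only; names and persistence calls are assumptions.
    // Current behavior: keep the StorageQuota entity, just clear its allocation.
    void clearAllocation(StorageQuota quota) {
        quota.setAllocation(null);
    }

    // The attempted alternative: delete the quota entity outright. This is the
    // variant that was "bombing" for datasets, so it is not what got merged.
    void removeQuota(jakarta.persistence.EntityManager em, StorageQuota quota) {
        em.remove(em.contains(quota) ? quota : em.merge(quota));
    }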
📦 Pushed preview images. 🚢 See on GHCR. Use by referencing with the full name as printed above, mind the registry name.
FWIW: I tested the size and file quota add/delete APIs and verified that the new /uploadlimits API correctly reports when a file limit, a size limit, or both exist, and returns {"status":"OK","data":{"uploadLimits":{}}} when they don't. I see a semantic API IT test failing in the latest build - I've seen that in other PRs - it looks like an intermittent issue with the order of two description fields that I'll investigate, but I'll go ahead and merge this since it shouldn't be related.
qqmyers left a comment:
Latest changes look good
What this PR does / why we need it:
This is a low-hanging-fruit/easily-achievable part of the parent issue #11529 that will make it possible to configure quotas on individual datasets. The primary use case is datasets in the root and/or another collection where defining a collection-level quota may be impractical. The functionality was already there - the implementation underneath operates on DvObjectContainers; it was in fact already possible to create a dataset-level quota by directly inserting an entry into the storagequota table. This PR exposes the functionality via the new APIs under /api/datasets/. The part of the parent issue that will require more design and development - configurable quotas for users - remains to be prioritized separately.
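For orientation, a rough sketch of what dataset-level quota endpoints of this general shape could look like as a JAX-RS resource; the paths, verbs, and wiring below are assumptions for illustration (the real endpoints are added to the existing Datasets API bean and backed by commands like the GetDatasetQuotaCommand/DeleteDatasetQuotaCommand reviewed above) - see the API guide section added in this PR for the actual calls.

    // Hypothetical JAX-RS sketch; not the code added by this PR.
    import jakarta.ws.rs.DELETE;
    import jakarta.ws.rs.POST;
    import jakarta.ws.rs.Path;
    import jakarta.ws.rs.PathParam;
    import jakarta.ws.rs.core.Response;

    @Path("datasets/{id}/storage/quota")
    public class DatasetQuotaResourceSketch {

        @POST
        @Path("{allocation}")
        public Response setQuota(@PathParam("id") String id,
                                 @PathParam("allocation") long allocation) {
            // would locate the dataset and run a set-quota command (superuser only)
            return Response.ok().build();
        }

        @DELETE
        public Response deleteQuota(@PathParam("id") String id) {
            // would run a delete-quota command for the dataset
            return Response.ok().build();
        }
    }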
Which issue(s) this PR closes:
Special notes for your reviewer:
Note that as of right now (11 am Nov. 21) the checklist below says that the continuous integration test has failed ("The build of this commit was aborted"). But this is because I killed a redundant extra Jenkins run that was triggered by a typo fix in the release note. The last Jenkins run did actually pass - see https://jenkins.dataverse.org/job/IQSS-Dataverse-Develop-PR/view/change-requests/job/PR-11997/.
Suggestions on how to test this:
Straightforward. Enable a quota on a dataset (see the section of the API guide and/or the FilesIT test that are being added here), and make sure uploads are blocked once the quota is reached.
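As a very rough illustration of that flow in a REST Assured-style test (endpoint paths, tokens, file, and the expected failure code below are placeholders/assumptions; the real coverage is in the FilesIT test this PR adds):

    // Illustrative only; see FilesIT for the actual test added in this PR.
    import static io.restassured.RestAssured.given;

    import java.io.File;
    import org.junit.jupiter.api.Test;

    public class DatasetQuotaSketchIT {

        static final String SERVER = "http://localhost:8080"; // assumed local instance
        static final String SUPERUSER_TOKEN = "...";          // placeholder
        static final String USER_TOKEN = "...";                // placeholder
        static final long DATASET_ID = 42;                     // placeholder

        @Test
        public void uploadIsBlockedOnceQuotaIsReached() {
            // Hypothetical call: set a tiny (1 KB) quota on the dataset via the new API.
            given().header("X-Dataverse-key", SUPERUSER_TOKEN)
                    .post(SERVER + "/api/datasets/" + DATASET_ID + "/storage/quota/1024")
                    .then().statusCode(200);

            // An upload larger than the remaining quota should now be rejected;
            // the exact status code/message depends on the implementation.
            given().header("X-Dataverse-key", USER_TOKEN)
                    .multiPart("file", new File("/tmp/bigger-than-quota.bin")) // placeholder file
                    .post(SERVER + "/api/datasets/" + DATASET_ID + "/add")
                    .then().statusCode(400);
        }
    }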
Does this PR introduce a user interface change? If mockups are available, please link/include them here:
Is there a release notes update needed for this change?:
Additional documentation: