Worked through checklists and evidences for TT-EXPECTATIONS #24
base: main
Conversation
…ased Signed-off-by: LucaFue <luca.fueger@d-fine.de>
- **Answer**: The nlohmann/json library does not have any external dependencies apart from the testing pipeline, so there are no dependencies that could affect the Expectations.
- Are input analysis findings from components, tools, and data considered in relation to Expectations?
- **Answer**: For components, there is no input analysis, as the nlohmann/json library has no external components (see JLS-34). For tools, a tool assessment is provided via JLS-50. In addition, the only data provided to the nlohmann/json library is the input data supplied when using the library's functionality, as well as the test data taken from [here](https://github.com/nlohmann/json_test_data).
You still haven't mentioned anything about input analysis findings for the json_test_data, or whether it was considered for the Expectations.
I adapted the answer.
Generally, I would advise against a full input analysis of json_test_data: it aggregates large, well-known, independently curated JSON test suites that are already specifically designed to cover malformed inputs, edge cases, and realistic usage. Re-classifying every file would therefore be a high-effort, low-benefit exercise, and one that would be hard to keep up to date as upstream data evolves. In our context, we already combine these corpora with nlohmann/json's own tests and fuzzing and achieve very high coverage.
- **Answer**: No downstream consumers exist yet to validate this. However, the AOUs are structured with the intent to guide downstream consumers in extending existing Statements.
- Do they provide clear guidance for upstreams on reusing components with well-defined claims?
- **Answer**:
why is this empty?
answered it.
Worked through and filled out checklists and evidence for all TAs associated with TT-Expectations: