From f597288e7d0ce123e7140251e1903b68174fdc23 Mon Sep 17 00:00:00 2001
From: htarver
Date: Tue, 2 Dec 2025 00:21:37 -0600
Subject: [PATCH 1/3] Update citations.rst

Change page title, rearrange text
---
 doc/citations.rst | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/doc/citations.rst b/doc/citations.rst
index c89aca5..62a91bf 100644
--- a/doc/citations.rst
+++ b/doc/citations.rst
@@ -1,6 +1,11 @@
-=========
+=======
+Sources
+=======
+This (non-comprehensive) list of references includes a wide array of literature and other resources that may be helpful for organizations that are thinking about benchmarking projects, such as papers and articles related to metadata quality work and benchmarking processes within and outside the library sphere. We have also tried to include links to resources that may support specific goals that organizations may have for metadata quality or user interactions more generally.
+
+---------
 Citations
-=========
+---------
 These sources were referenced directly to compile benchmarks and supplemental information about metadata quality frameworks.
 
 - Bruce & Hillmann (2004). The Continuum of Metadata Quality: Defining, Expressing, Exploiting. https://www.ecommons.cornell.edu/handle/1813/7895
@@ -15,8 +20,6 @@ These sources were referenced directly to compile benchmarks and supplemental in
 
 ***************
 Other Resources
 ***************
-This (non-comprehensive) list of references includes a wide array of literature and other resources that may be helpful for organizations that are thinking about benchmarking projects, such as papers and articles related to metadata quality work and benchmarking processes within and outside the library sphere. We have also tried to include links to resources that may support specific goals that organizations may have for metadata quality or user interactions more generally.
-
 Sources Related to Benchmarking
 ===============================

From 02386524d4212c44ff241fdf078b4f75d66e95f2 Mon Sep 17 00:00:00 2001
From: htarver
Date: Tue, 2 Dec 2025 00:25:12 -0600
Subject: [PATCH 2/3] Update benchmarks-summary.rst

Add point about cumulative benchmarks and normalize formatting
---
 doc/benchmarks-summary.rst | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/doc/benchmarks-summary.rst b/doc/benchmarks-summary.rst
index 9db9484..6566677 100644
--- a/doc/benchmarks-summary.rst
+++ b/doc/benchmarks-summary.rst
@@ -11,9 +11,11 @@ Usage:
   every situation (e.g., local field requirements)
 - Criteria are binary -- i.e., the set being evaluated must meet all points or
   it does not meet the benchmarking standard
+- Benchmarks are cumulative -- i.e., records must meet all the criteria at the chosen
+  level and the lower levels, if relevant
 - These benchmarks focus solely on the quality of metadata entry, not the quality
-  of information (i.e., available information is all entered correctly, although
-  we might wish that additional information is known about an item to improve the record)
+  of information -- i.e., available information is all entered correctly, although
+  we might wish that additional information is known about an item to improve the record
 - This framework is intended to be scalable (it is written in the context of 1 record,
   but could apply across a collection, resource type, or an entire system)
 - Minimal criteria apply in all cases; suggested criteria do not rise to the level

From d45700ea94e184ce2c5c7cf6b3436528b4060c2d Mon Sep 17 00:00:00 2001
From: htarver
Date: Tue, 2 Dec 2025 00:27:00 -0600
Subject: [PATCH 3/3] Update benchmarks-full.rst

Align with updated usage points from summary
---
 doc/benchmarks-full.rst | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/doc/benchmarks-full.rst b/doc/benchmarks-full.rst
index a3d5133..69bf729 100644
--- a/doc/benchmarks-full.rst
+++ b/doc/benchmarks-full.rst
@@ -11,10 +11,21 @@ In this model, the benchmark and metrics set the standard (i.e., the criteria th
 
 - General benchmarks usage:
 
-  - Each criterion is intended to be “system agnostic” but some may not apply to every situation (e.g., local field requirements)
-  - Criteria are binary -- i.e., the set being evaluated must meet all points or it does not meet the benchmarking standard
-  - These benchmarks focus solely on the quality of metadata entry, not the quality of information (i.e., available information is all entered correctly, although we might wish that additional information is known about an item to improve the record)
-  - This framework is intended to be scalable (it is written in the context of 1 record, but could apply across a collection, resource type, or an entire system)
+  - Each criterion is intended to be “system agnostic” but some may not apply to
+    every situation (e.g., local field requirements)
+  - Criteria are binary -- i.e., the set being evaluated must meet all points or
+    it does not meet the benchmarking standard
+  - Benchmarks are cumulative -- i.e., records must meet all the criteria at the chosen
+    level and the lower levels, if relevant
+  - These benchmarks focus solely on the quality of metadata entry, not the quality
+    of information -- i.e., available information is all entered correctly, although
+    we might wish that additional information is known about an item to improve the record
+  - This framework is intended to be scalable (it is written in the context of 1 record,
+    but could apply across a collection, resource type, or an entire system)
+  - Minimal criteria apply in all cases; suggested criteria do not rise to the level
+    of “absolute minimum” but are suggested as priorities for "better-than-minimal"
+    based on our research and experience; ideal criteria tend to be more subjective and may not apply in every situation
+