diff --git a/doc/benchmarks-full.rst b/doc/benchmarks-full.rst
index b08758d..289a01f 100644
--- a/doc/benchmarks-full.rst
+++ b/doc/benchmarks-full.rst
@@ -11,10 +11,21 @@ In this model, the benchmark and metrics set the standard (i.e., the criteria th
 - General benchmarks usage:
-  - Each criterion is intended to be “system agnostic” but some may not apply to every situation (e.g., local field requirements)
-  - Criteria are binary -- i.e., the set being evaluated must meet all points or it does not meet the benchmarking standard for that level
-  - These benchmarks focus solely on the quality of metadata entry, not the quality of information (i.e., available information is all entered correctly, although we might wish that additional information is known about an item to improve the record)
-  - This framework is intended to be scalable (it is written in the context of 1 record, but could apply across a collection, resource type, or an entire system)
+  - Each criterion is intended to be “system agnostic” but some may not apply to
+    every situation (e.g., local field requirements)
+  - Criteria are binary -- i.e., the set being evaluated must meet all points or
+    it does not meet the benchmarking standard
+  - Benchmarks are cumulative -- i.e., records must meet all the criteria at the chosen
+    level and the lower levels, if relevant
+  - These benchmarks focus solely on the quality of metadata entry, not the quality
+    of information -- i.e., available information is all entered correctly, although
+    we might wish that additional information is known about an item to improve the record
+  - This framework is intended to be scalable (it is written in the context of 1 record,
+    but could apply across a collection, resource type, or an entire system)
+  - Minimal criteria apply in all cases; suggested criteria do not rise to the level
+    of “absolute minimum” but are suggested as priorities for "better-than-minimal"
+    based on our research and experience; ideal criteria tend to be more subjective
+    and may not apply in every situation
+
diff --git a/doc/benchmarks-summary.rst b/doc/benchmarks-summary.rst
index f311cd6..87fc8da 100644
--- a/doc/benchmarks-summary.rst
+++ b/doc/benchmarks-summary.rst
@@ -10,10 +10,12 @@ Usage:
 - Each criterion is intended to be “system agnostic” but some may not apply to
   every situation (e.g., local field requirements)
 - Criteria are binary -- i.e., the set being evaluated must meet all points or
-  it does not meet the benchmarking standard for that level
+  it does not meet the benchmarking standard
+- Benchmarks are cumulative -- i.e., records must meet all the criteria at the chosen
+  level and the lower levels, if relevant
 - These benchmarks focus solely on the quality of metadata entry, not the quality
-  of information (i.e., available information is all entered correctly, although
-  we might wish that additional information is known about an item to improve the record)
+  of information -- i.e., available information is all entered correctly, although
+  we might wish that additional information is known about an item to improve the record
 - This framework is intended to be scalable (it is written in the context of 1 record,
   but could apply across a collection, resource type, or an entire system)
 - Minimal criteria apply in all cases; suggested criteria do not rise to the level
diff --git a/doc/citations.rst b/doc/citations.rst
index 74b605b..658b60c 100644
--- a/doc/citations.rst
+++ b/doc/citations.rst
@@ -1,6 +1,11 @@
-=========
+=======
+Sources
+=======
+This (non-comprehensive) list of references includes a wide array of literature and other resources that may be helpful for organizations that are thinking about benchmarking projects, such as papers and articles related to metadata quality work and benchmarking processes within and outside the library sphere. We have also tried to include links to resources that may support specific goals that organizations may have for metadata quality or user interactions more generally.
+
+---------
 Citations
-=========
+---------
 These sources were referenced directly to compile benchmarks and supplemental information about metadata quality frameworks.
 
 - Bruce & Hillmann (2004). The Continuum of Metadata Quality: Defining, Expressing, Exploiting. https://www.ecommons.cornell.edu/handle/1813/7895
@@ -15,8 +20,6 @@ These sources were referenced directly to compile benchmarks and supplemental in
 ***************
 Other Resources
 ***************
-This (non-comprehensive) list of references includes a wide array of literature and other resources that may be helpful for organizations that are thinking about benchmarking projects, such as papers and articles related to metadata quality work and benchmarking processes within and outside the library sphere. We have also tried to include links to resources that could support specific goals that organizations may have for metadata quality or user interactions more generally.
-
 Sources Related to Benchmarking
 ===============================