@due-benchmark

DUE Benchmark

A benchmark of both existing and reformulated datasets for measuring the end-to-end capabilities of document understanding systems in real-world scenarios.

Pinned repositories

  1. baselines

    Code for the baselines from the NeurIPS 2021 paper "DUE: End-to-End Document Understanding Benchmark."

    Python · 36 stars · 4 forks

  2. du-schema

    A JSON Schema format for storing dataset details, processed document contents, and document annotations in the document understanding domain.

    13 stars · 2 forks

  3. evaluator

    The evaluator covering all of the metrics required by the tasks within the DUE Benchmark.

    Python · 7 stars
