diff --git a/data/readmes/airflow-313.md b/data/readmes/airflow-313.md
new file mode 100644
index 0000000..12b8939
--- /dev/null
+++ b/data/readmes/airflow-313.md
@@ -0,0 +1,552 @@
+# Airflow - README (3.1.3)
+
+**Repository**: https://github.com/apache/airflow
+**Version**: 3.1.3
+
+---
+
+
+
+
+# Apache Airflow
+
+| Category | Badges |
+|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| License | [](https://www.apache.org/licenses/LICENSE-2.0.txt) |
+| PyPI | [](https://badge.fury.io/py/apache-airflow) [](https://pypi.org/project/apache-airflow/) [](https://pypi.org/project/apache-airflow/) |
+| Containers | [](https://hub.docker.com/r/apache/airflow) [](https://hub.docker.com/r/apache/airflow) [](https://artifacthub.io/packages/search?repo=apache-airflow) |
+| Community | [](https://github.com/apache/airflow/graphs/contributors) [](https://s.apache.org/airflow-slack)  [](https://insights.linuxfoundation.org/project/apache-airflow) |
+| Dev tools | [](https://github.com/j178/prek) |
+
+
+| Version | Build Status |
+|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Main | [](https://github.com/apache/airflow/actions) |
+| 3.x | [](https://github.com/apache/airflow/actions) |
+| 2.x | [](https://github.com/apache/airflow/actions) |
+
+
+
+
+
+
+
+[Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows.
+
+When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
+
+Use Airflow to author workflows (Dags) that orchestrate tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
+
+
+
+
+**Table of contents**
+
+- [Project Focus](#project-focus)
+- [Principles](#principles)
+- [Requirements](#requirements)
+- [Getting started](#getting-started)
+- [Installing from PyPI](#installing-from-pypi)
+- [Installation](#installation)
+- [Official source code](#official-source-code)
+- [Convenience packages](#convenience-packages)
+- [User Interface](#user-interface)
+- [Semantic versioning](#semantic-versioning)
+- [Version Life Cycle](#version-life-cycle)
+- [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions)
+- [Base OS support for reference Airflow images](#base-os-support-for-reference-airflow-images)
+- [Approach to dependencies of Airflow](#approach-to-dependencies-of-airflow)
+- [Contributing](#contributing)
+- [Voting Policy](#voting-policy)
+- [Who uses Apache Airflow?](#who-uses-apache-airflow)
+- [Who maintains Apache Airflow?](#who-maintains-apache-airflow)
+- [What goes into the next release?](#what-goes-into-the-next-release)
+- [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation)
+- [Links](#links)
+- [Sponsors](#sponsors)
+
+
+
+## Project Focus
+
+Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/).
+
+Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [XCom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts/xcoms.html)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work.
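Idempotency here means that a rerun produces the same end state. A minimal, Airflow-free Python sketch (the `destination` dict and function name are invented for illustration, not Airflow API):

```python
destination = {}  # stands in for a destination table keyed by run date

def load_daily_totals(run_date, totals):
    """An idempotent 'task': upserting on run_date means a retry or a
    backfill rerun overwrites the same row instead of appending a duplicate."""
    destination[run_date] = totals  # upsert, not append

load_daily_totals("2025-01-01", {"orders": 10})
load_daily_totals("2025-01-01", {"orders": 10})  # rerun changes nothing
assert destination == {"2025-01-01": {"orders": 10}}
```

An append-based implementation would instead produce duplicated rows on every retry, which is exactly what the guidance above warns against.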
+
+Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches.
+
+## Principles
+
+- **Dynamic**: Pipelines are defined in code, enabling dynamic dag generation and parameterization.
+- **Extensible**: The Airflow framework includes a wide range of built-in operators and can be extended to fit your needs.
+- **Flexible**: Airflow leverages the [**Jinja**](https://jinja.palletsprojects.com) templating engine, allowing rich customizations.
+
+
+## Requirements
+
+Apache Airflow is tested with:
+
+| | Main version (dev) | Stable version (3.1.3) |
+|------------|------------------------------|------------------------|
+| Python | 3.10, 3.11, 3.12, 3.13 | 3.10, 3.11, 3.12, 3.13 |
+| Platform | AMD64/ARM64(\*) | AMD64/ARM64(\*) |
+| Kubernetes | 1.30, 1.31, 1.32, 1.33, 1.34 | 1.30, 1.31, 1.32, 1.33 |
+| PostgreSQL | 14, 15, 16, 17, 18 | 13, 14, 15, 16, 17 |
+| MySQL | 8.0, 8.4, Innovation | 8.0, 8.4, Innovation |
+| SQLite | 3.15.0+ | 3.15.0+ |
+
+\* Experimental
+
+**Note**: MariaDB is not tested/recommended.
+
+**Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend
+using the latest stable version of SQLite for local development.
+
+**Note**: Airflow currently can be run on POSIX-compliant Operating Systems. For development, it is regularly
+tested on fairly modern Linux Distros and recent versions of macOS.
+On Windows you can run it via WSL2 (Windows Subsystem for Linux 2) or via Linux Containers.
+The work to add Windows support is tracked via [#10388](https://github.com/apache/airflow/issues/10388), but
+it is not a high priority. You should only use Linux-based distros as a "Production" execution environment,
+as this is the only environment that is supported. The only distro that is used in our CI tests and in
+the [Community managed DockerHub image](https://hub.docker.com/r/apache/airflow) is
+`Debian Bookworm`.
+
+
+
+## Getting started
+
+Visit the official Airflow website documentation (latest **stable** release) for help with
+[installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation/),
+[getting started](https://airflow.apache.org/docs/apache-airflow/stable/start.html), or walking
+through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial/).
+
+> Note: If you're looking for documentation for the main branch (latest development branch), you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/).
+
+For more information on Airflow Improvement Proposals (AIPs), visit
+the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvement+Proposals).
+
+Documentation for dependent projects like provider distributions, the Docker image, and the Helm Chart can be found in [the documentation index](https://airflow.apache.org/docs/).
+
+
+
+
+## Installing from PyPI
+
+We publish Apache Airflow as the `apache-airflow` package on PyPI. Installing it, however, can sometimes be tricky
+because Airflow is a bit of both a library and an application. Libraries usually keep their dependencies open, and
+applications usually pin them, but we should do neither and both simultaneously. We decided to keep
+our dependencies as open as possible (in `pyproject.toml`) so users can install different versions of libraries
+if needed. This means that `pip install apache-airflow` will occasionally not work or will
+produce an unusable Airflow installation.
+
+To have a repeatable installation, however, we keep a set of "known-to-be-working" constraint
+files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working"
+constraints files separately per major/minor Python version.
+You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify
+the correct Airflow tag/version/branch and Python version in the URL.
+
+1. Installing just Airflow:
+
+> Note: Only `pip` installation is currently officially supported.
+
+While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or
+[pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as
+`pip` - especially when it comes to constraint vs. requirements management.
+Installing via `Poetry` or `pip-tools` is not currently supported.
+
+If you wish to install Airflow using those tools, you should use the constraint files and convert
+them to the appropriate format and workflow that your tool requires.
+
+
+```bash
+pip install 'apache-airflow==3.1.3' \
+ --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-3.1.3/constraints-3.10.txt"
+```
+
+2. Installing with extras (i.e., postgres, google)
+
+```bash
+pip install 'apache-airflow[postgres,google]==3.1.3' \
+ --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-3.1.3/constraints-3.10.txt"
+```
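The constraint URL can be parameterized in the shell so the Airflow and Python versions are stated in one place (the variable names are just a convention, not anything `pip` requires):

```shell
AIRFLOW_VERSION="3.1.3"
PYTHON_VERSION="3.10"
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
echo "${CONSTRAINT_URL}"
```

The resulting URL is then passed to the `pip install` commands above via `--constraint "${CONSTRAINT_URL}"`.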
+
+For information on installing provider distributions, check
+[providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html).
+
+
+
+## Installation
+
+For comprehensive instructions on setting up your local development environment and installing Apache Airflow, please refer to the [INSTALLING.md](INSTALLING.md) file.
+
+
+## Official source code
+
+Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project,
+and our official source code releases:
+
+- Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html)
+- Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow)
+- Are cryptographically signed by the release manager
+- Are officially voted on by the PMC members during the
+ [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval)
+
+Following the ASF rules, the source packages released must be sufficient for a user to build and test the
+release provided they have access to the appropriate platform and tools.
+
+
+## Convenience packages
+
+There are other ways of installing and using Airflow. Those are "convenience" methods - they are
+not "official releases" as defined by the `ASF Release Policy`, but they can be used by users
+who do not want to build the software themselves.
+
+These are, in order of how commonly people install Airflow:
+
+- [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool
+- [Docker Images](https://hub.docker.com/r/apache/airflow) to install Airflow via the
+ `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can
+ read more about using, customizing, and extending the images in the
+ [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and
+ learn details on the internals in the [images](https://airflow.apache.org/docs/docker-stack/index.html) document.
+- [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that
+ were used to generate official source packages via git
+
+None of those artifacts are official releases, but they are prepared using officially released sources.
+Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such
+following the ASF Policy.
+
+## User Interface
+
+- **DAGs**: Overview of all DAGs in your environment.
+
+ 
+
+- **Assets**: Overview of Assets with dependencies.
+
+ 
+
+- **Grid**: Grid representation of a DAG that spans across time.
+
+ 
+
+- **Graph**: Visualization of a DAG's dependencies and their current status for a specific run.
+
+ 
+
+- **Home**: Summary statistics of your Airflow environment.
+
+ 
+
+- **Backfill**: Backfilling a DAG for a specific date range.
+
+ 
+
+- **Code**: Quick way to view source code of a DAG.
+
+ 
+
+## Semantic versioning
+
+As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released.
+
+There are a few specific rules that we agreed to that define the details of versioning for the different
+packages:
+
+* **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers).
+ Changing limits for versions of Airflow dependencies is not a breaking change on its own.
+* **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only.
+ SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version.
+ For example, `google 4.1.0` and `amazon 3.1.1` providers can happily be installed
+ with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages,
+ they are present in providers as `install_requires` limitations. We aim to keep backwards
+ compatibility of providers with all previously released Airflow 2 versions, but
+ there will sometimes be breaking changes that might make some, or all,
+ providers have a minimum Airflow version specified.
+* **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR
+ versions for the chart are independent of the Airflow version. We aim to keep backwards
+ compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might
+ only work starting from specific Airflow releases. We might however limit the Helm
+ Chart to depend on minimal Airflow version.
+* **Airflow API clients**: Their versioning is independent from Airflow versions. They follow their own
+ SemVer rules for breaking changes and new features - which, for example, allows us to change the way we generate
+ the clients.
+
+## Version Life Cycle
+
+Apache Airflow version life cycle:
+
+
+
+
+| Version | Current Patch/Minor | State | First Release | Limited Maintenance | EOL/Terminated |
+|-----------|-----------------------|-----------|-----------------|-----------------------|------------------|
+| 3 | 3.1.3 | Supported | Apr 22, 2025 | TBD | TBD |
+| 2 | 2.11.0 | Supported | Dec 17, 2020 | Oct 22, 2025 | Apr 22, 2026 |
+| 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 |
+| 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 |
+| 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 |
+| 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 |
+
+
+
+Limited support versions will be supported with security and critical bug fixes only.
+EOL versions will not get any fixes nor support.
+We always recommend that all users run the latest available minor release for whatever major version is in use.
+We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date.
+
+## Support for Python and Kubernetes versions
+
+As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support.
+They are based on the official release schedule of Python and Kubernetes, nicely summarized in the
+[Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and
+[Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/).
+
+1. We drop support for Python and Kubernetes versions when they reach EOL. Except for Kubernetes, a
+ version stays supported by Airflow if two major cloud providers still provide support for it. We drop
+ support for those EOL versions in main right after the EOL date, and it is effectively removed when we release
+ the first new MINOR (or MAJOR, if there is no new MINOR version) of Airflow. For example, for Python 3.10 this
+ means that we will drop support in main right after its EOL date, and the first MAJOR or MINOR version of
+ Airflow released after that will not support it.
+
+2. We support a new version of Python/Kubernetes in main after it is officially released. As soon as we
+ make it work in our CI pipeline (which might not be immediate, mostly because dependencies need to catch up
+ with new versions of Python), we release new images/support in Airflow based on the working CI setup.
+
+3. This policy is best-effort, which means there may be situations where we might terminate support earlier
+ if circumstances require it.
+
+## Base OS support for reference Airflow images
+
+The Airflow Community provides conveniently packaged container images that are published whenever
+we publish an Apache Airflow release. Those images contain:
+
+* Base OS with necessary packages to install Airflow (stable Debian OS)
+* Base Python installation in versions supported at the time of release for the MINOR version of
+ Airflow released (so there could be different versions for 2.3 and 2.2 line for example)
+* Libraries required to connect to supported Databases (again the set of databases supported depends
+ on the MINOR version of Airflow)
+* Predefined set of popular providers (for details see the [Dockerfile](https://raw.githubusercontent.com/apache/airflow/main/Dockerfile)).
+* Possibility of building your own, custom image where the user can choose their own set of providers
+ and libraries (see [Building the image](https://airflow.apache.org/docs/docker-stack/build.html))
+* In the future Airflow might also support a "slim" version without providers or database clients installed
+
+The version of the base OS image is the stable version of Debian. Airflow supports using all currently active
+stable versions - as soon as all Airflow dependencies support building, and we set up the CI pipeline for
+building and testing the OS version. Approximately 6 months before the end-of-regular support of a
+previous stable version of the OS, Airflow switches the images released to use the latest supported
+version of the OS.
+
+For example, the switch from ``Debian Bullseye`` to ``Debian Bookworm`` was implemented
+before the 2.8.0 release in October 2023, and ``Debian Bookworm`` became the only supported option as of
+Airflow 2.10.0.
+
+Users will continue to be able to build their images using stable Debian releases until the end of regular
+support. Building and verifying of the images happens in our CI, but no unit tests are executed using
+this image in the `main` branch.
+
+## Approach to dependencies of Airflow
+
+Airflow has a lot of dependencies - direct and transitive - and Airflow is both a library and an application,
+so our dependency policy has to cover both the stability of the application's installation and the
+ability to install newer versions of dependencies for those users who develop DAGs. We developed
+an approach where `constraints` are used to make sure Airflow can be installed in a repeatable way, while
+we do not limit our users in upgrading most of the dependencies. As a result, we decided not to upper-bound
+versions of Airflow dependencies by default, unless we have good reasons to believe upper-bounding them is
+needed because of the importance of the dependency and the risk involved in upgrading it.
+We also upper-bound the dependencies that we know cause problems.
+
+Our constraint mechanism takes care of finding and upgrading all the non-upper-bound dependencies
+automatically (provided that all the tests pass). Failures of our `main` build indicate when
+new versions of dependencies break our tests - a signal that we should either upper-bound them or
+fix our code/tests to account for the upstream changes from those dependencies.
+
+Whenever we upper-bound such a dependency, we should always comment on why we are doing it - i.e., we should have
+a good reason why the dependency is upper-bound. We should also mention the condition for removing the
+bound.
+
+### Approach for dependencies for Airflow Core
+
+Those dependencies are maintained in ``pyproject.toml``.
+
+There are a few dependencies that we decided are important enough to upper-bound by default, as they are
+known to follow a predictable versioning scheme, and we know that new versions of those are very likely to
+bring breaking changes. We commit to regularly reviewing and attempting to upgrade to newer versions of
+these dependencies as they are released, but this is a manual process.
+
+The important dependencies are:
+
+* `SQLAlchemy`: upper-bound to a specific MINOR version (SQLAlchemy is known to remove deprecations and
+ introduce breaking changes, especially as support for different databases varies and changes at
+ various speeds)
+* `Alembic`: it is important to handle our migrations in a predictable and performant way. It is developed
+ together with SQLAlchemy. Our experience with Alembic is that it is very stable within MINOR versions.
+* `Flask`: We are using Flask as the backbone of our web UI and API. We know major versions of Flask
+ are very likely to introduce breaking changes, so limiting it to the current MAJOR version makes sense.
+* `werkzeug`: the library is known to cause problems in new versions. It is tightly coupled with the Flask
+ libraries, and we should update them together.
+* `celery`: Celery is a crucial component of Airflow, as it is used for the CeleryExecutor (and similar). Celery
+ [follows SemVer](https://docs.celeryq.dev/en/stable/contributing.html?highlight=semver#versions), so
+ we should upper-bound it to the next MAJOR version. Also, when we bump the upper version of the library,
+ we should make sure the Celery Provider's minimum Airflow version is updated.
+* `kubernetes`: Kubernetes is a crucial component of Airflow, as it is used for the KubernetesExecutor
+ (and similar). The Kubernetes Python library [follows SemVer](https://github.com/kubernetes-client/python#compatibility),
+ so we should upper-bound it to the next MAJOR version. Also, when we bump the upper version of the library,
+ we should make sure the Kubernetes Provider's minimum Airflow version is updated.
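Following the "comment why" rule described earlier, an upper-bound entry in ``pyproject.toml`` might look like this (the package names and version numbers are illustrative, not Airflow's actual pins):

```toml
dependencies = [
    # Upper-bound: SQLAlchemy is known to remove deprecations in MINOR
    # releases; remove the bound once our code is verified against 2.1.
    "sqlalchemy>=1.4.49,<2.1",
    # Open-ended by default: no known breaking-change risk.
    "requests>=2.31.0",
]
```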
+
+### Approach for dependencies in Airflow Providers and extras
+
+The main part of Airflow is the Airflow Core, but the power of Airflow also comes from a number of
+providers that extend the core functionality and are released separately, even if we keep them (for now)
+in the same monorepo for convenience. You can read more about the providers in the
+[Providers documentation](https://airflow.apache.org/docs/apache-airflow-providers/index.html). We also
+have a set of policies for maintaining and releasing community-managed providers, as well
+as the approach for community vs. 3rd-party providers, in the [providers](https://github.com/apache/airflow/blob/main/PROVIDERS.rst) document.
+
+Those `extras` and `providers` dependencies are maintained in `provider.yaml` of each provider.
+
+By default, we should not upper-bound dependencies for providers; however, each provider's maintainer
+might decide to add additional limits (and justify them with a comment).
+
+
+
+## Contributing
+
+Want to help build Apache Airflow? Check out our [contributors' guide](https://github.com/apache/airflow/blob/main/contributing-docs/README.rst) for a comprehensive overview of how to contribute, including setup instructions, coding standards, and pull request guidelines.
+
+If you can't wait to contribute, and want to get started asap, check out the [contribution quickstart](https://github.com/apache/airflow/blob/main/contributing-docs/03a_contributors_quick_start_beginners.rst) here!
+
+Official Docker (container) images for Apache Airflow are described in [images](https://github.com/apache/airflow/blob/main/dev/breeze/doc/ci/02_images.md).
+
+
+
+
+## Voting Policy
+
+* Commits need a +1 vote from a committer who is not the author
+* When we do AIP voting, both PMC members' and committers' `+1s` are considered binding votes.
+
+## Who uses Apache Airflow?
+
+We know about around 500 organizations that are using Apache Airflow (but there are likely many more)
+[in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md).
+
+If you use Airflow - feel free to make a PR to add your organisation to the list.
+
+
+
+
+## Who maintains Apache Airflow?
+
+Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors),
+but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow)
+are responsible for reviewing and merging PRs as well as steering conversations around new feature requests.
+If you would like to become a maintainer, please review the Apache Airflow
+[committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer).
+
+
+
+## What goes into the next release?
+
+Often you will see an issue assigned to a specific milestone with an Airflow version, or a PR that gets merged
+to the main branch, and you might wonder which release the merged PR(s) or the fixed
+issues will land in. The answer, as usual, is: it depends on various scenarios, and it is different for PRs and issues.
+
+To add a bit of context, we are following the [Semver](https://semver.org/) versioning scheme as described in
+[Airflow release process](https://airflow.apache.org/docs/apache-airflow/stable/release-process.html). More
+details are explained in this README under the [Semantic versioning](#semantic-versioning) chapter, but
+in short, we have `MAJOR.MINOR.PATCH` versions of Airflow.
+
+* `MAJOR` version is incremented in case of breaking changes
+* `MINOR` version is incremented when there are new features added
+* `PATCH` version is incremented when there are only bug-fixes and doc-only changes
+
+Generally we release `MINOR` versions of Airflow from a branch that is named after the MINOR version. For example
+`2.7.*` releases are released from `v2-7-stable` branch, `2.8.*` releases are released from `v2-8-stable`
+branch, etc.
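The branch-naming convention is mechanical enough to sketch as a small helper (a hypothetical function for illustration, not part of Airflow's release tooling):

```python
def stable_branch(version: str) -> str:
    """Map a release such as "2.7.3" to the stable branch it is cut from,
    following the v<MAJOR>-<MINOR>-stable convention described above."""
    major, minor = version.split(".")[:2]
    return f"v{major}-{minor}-stable"

print(stable_branch("2.7.3"))  # v2-7-stable
print(stable_branch("2.8.1"))  # v2-8-stable
```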
+
+1. Most of the time in our release cycle, when the branch for the next `MINOR` release is not yet created, all
+PRs merged to `main` (unless they get reverted) will find their way into the next `MINOR` release. For example,
+if the last release is `2.7.3` and the `v2-8-stable` branch is not created yet, the next `MINOR` release
+is `2.8.0` and all PRs merged to main will be released in `2.8.0`. However, some PRs (bug-fixes and
+doc-only changes) when merged, can be cherry-picked to current `MINOR` branch and released in the
+next `PATCHLEVEL` release. For example, if `2.8.1` is already released and we are working on `2.9.0dev`, then
+marking a PR with `2.8.2` milestone means that it will be cherry-picked to `v2-8-test` branch and
+released in `2.8.2rc1`, and eventually in `2.8.2`.
+
+2. When we prepare for the next `MINOR` release, we cut the new `v2-*-test` and `v2-*-stable` branches
+and prepare `alpha` and `beta` releases for the next `MINOR` version. PRs merged to main will still be
+released in the next `MINOR` release until the `rc` version is cut. This happens because the `v2-*-test`
+and `v2-*-stable` branches are rebased on top of main when the next `beta` and `rc` releases are prepared.
+For example, when we cut the `2.10.0beta1` version, anything merged to main before `2.10.0rc1` is cut
+will find its way into `2.10.0rc1`.
+
+3. Then, once we prepare the first RC candidate for the `MINOR` release, we stop moving the `v2-*-test` and
+`v2-*-stable` branches, and the PRs merged to main will be released in the next `MINOR` release.
+However, some PRs (bug-fixes and doc-only changes), when merged, can be cherry-picked to the current `MINOR`
+branch and released in the next `PATCHLEVEL` release. For example, when the last released version from the `v2-10-stable`
+branch is `2.10.0rc1`, some of the PRs from main can be marked with the `2.10.0` milestone by committers, and
+the release manager will try to cherry-pick them into the release branch.
+If successful, they will be released in `2.10.0rc2` and subsequently in `2.10.0`. This also applies to
+subsequent `PATCHLEVEL` versions. When, for example, `2.10.1` is already released, marking a PR with the
+`2.10.2` milestone means that it will be cherry-picked to the `v2-10-stable` branch and released in `2.10.2rc1`
+and eventually in `2.10.2`.
+
+The final decision about cherry-picking is made by the release manager.
+
+Marking issues with a milestone is a bit different. Maintainers usually do not mark issues with a milestone;
+normally milestones are only set on PRs. If a PR linked to an issue (and "fixing" it) gets merged and released
+in a specific version following the process described above, the issue will be automatically closed. No
+milestone will be set for the issue; you need to check the PR that fixed the issue to see which version
+it was released in.
+
+However, sometimes maintainers mark issues with a specific milestone, which means that the
+issue is an important candidate to look at when the release is being prepared. Since this is an
+open-source project, where basically all contributors volunteer their time, there is no guarantee that a specific
+issue will be fixed in a specific version. We do not want to hold the release because some issue is not fixed,
+so in such cases the release manager will reassign unfixed issues to the next milestone if they are not
+fixed in time for the current release. Therefore, the milestone on an issue is more an intent that it should be
+looked at than a promise that it will be fixed in that version.
+
+More context and **FAQ** about the patchlevel release can be found in the
+[What goes into the next release](dev/WHAT_GOES_INTO_THE_NEXT_RELEASE.md) document in the `dev` folder of the
+repository.
+
+## Can I use the Apache Airflow logo in my presentation?
+
+Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up-to-date logos are found in [this repo](https://github.com/apache/airflow/tree/main/airflow-core/docs/img/logos/) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html).
+
+## Links
+
+- [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/)
+- [Chat](https://s.apache.org/airflow-slack)
+- [Community Information](https://airflow.apache.org/community/)
+
+## Sponsors
+
+The CI infrastructure for Apache Airflow has been sponsored by:
+
+
+
+
+
diff --git a/data/readmes/akri-v0138.md b/data/readmes/akri-v0138.md
new file mode 100644
index 0000000..453400d
--- /dev/null
+++ b/data/readmes/akri-v0138.md
@@ -0,0 +1,76 @@
+# Akri - README (v0.13.8)
+
+**Repository**: https://github.com/project-akri/akri
+**Version**: v0.13.8
+
+---
+
+
+
+[](https://kubernetes.slack.com/messages/akri)
+[](https://blog.rust-lang.org/2024/10/17/Rust-1.82.0/)
+[](https://kubernetes.io/)
+[](https://codecov.io/gh/project-akri/akri)
+[](https://bestpractices.coreinfrastructure.org/projects/5339)
+
+[](https://github.com/project-akri/akri/actions?query=workflow%3A%22Check+Rust%22)
+[](https://github.com/project-akri/akri/actions?query=workflow%3A%22Tarpaulin+Code+Coverage%22)
+[](https://github.com/project-akri/akri/actions?query=workflow%3A%22Build+Controller%22)
+[](https://github.com/project-akri/akri/actions?query=workflow%3A%22Build+Agents%22)
+[](https://github.com/project-akri/akri/actions?query=workflow%3A%22Test+K3s%2C+Kubernetes%2C+and+MicroK8s%22)
+
+---
+
+Akri is a [Cloud Native Computing Foundation (CNCF) Sandbox project](https://www.cncf.io/sandbox-projects/).
+
+Akri lets you easily expose heterogeneous leaf devices (such as IP cameras and USB devices) as resources in a Kubernetes cluster, while also supporting the exposure of embedded hardware resources such as GPUs and FPGAs. Akri continually detects nodes that have access to these devices and schedules workloads based on them.
+
+Simply put: you name it, Akri finds it, you use it.
+
+---
+
+## Why Akri
+
+At the edge, there are a variety of sensors, controllers, and MCU class devices that are producing data and performing actions. For Kubernetes to be a viable edge computing solution, these heterogeneous “leaf devices” need to be easily utilized by Kubernetes clusters. However, many of these leaf devices are too small to run Kubernetes themselves. Akri is an open source project that exposes these leaf devices as resources in a Kubernetes cluster. It leverages and extends the Kubernetes [device plugin framework](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/), which was created with the cloud in mind and focuses on advertising static resources such as GPUs and other system hardware. Akri took this framework and applied it to the edge, where there is a diverse set of leaf devices with unique communication protocols and intermittent availability.
+
+Akri is made for the edge, **handling the dynamic appearance and disappearance of leaf devices**. Akri provides an abstraction layer similar to [CNI](https://github.com/containernetworking/cni), but instead of abstracting the underlying network details, it removes the work of finding, utilizing, and monitoring the availability of the leaf device. An operator simply has to apply an Akri Configuration to a cluster, specifying the Discovery Handler (say ONVIF) that should be used to discover the devices and the Pod that should be deployed upon discovery (say a video frame server). Then, Akri does the rest. An operator can also allow multiple nodes to utilize a leaf device, thereby **providing high availability** in the case where a node goes offline. Furthermore, Akri will automatically create a Kubernetes service for each type of leaf device (or Akri Configuration), removing the need for an application to track the state of pods or nodes.
+
+Most importantly, Akri **was built to be extensible**. Akri currently supports ONVIF, udev, and OPC UA Discovery Handlers, but more can be easily added by community members like you. The more protocols Akri can support, the wider an array of leaf devices Akri can discover. We are excited to work with you to build a more connected edge.
+
+## How Akri Works
+
+Akri’s architecture is made up of five key components: two custom resources, Discovery Handlers, an Agent (device plugin implementation), and a custom Controller. The first custom resource, the Akri Configuration, is where **you name it**. This tells Akri what kind of device it should look for. At this point, **Akri finds it**! Akri's Discovery Handlers look for the device and inform the Agent of discovered devices. The Agent then creates Akri's second custom resource, the Akri Instance, to track the availability and usage of the device. Having found your device, the Akri Controller helps **you use it**. It sees each Akri Instance (which represents a leaf device) and deploys a ("broker") Pod that knows how to connect to the resource and utilize it.
+
+
+
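+As a hedged sketch of the first step (applying an Akri Configuration), the manifest below shows roughly what an ONVIF Configuration might look like. Field values are illustrative, and in practice Configurations are usually generated via Akri's Helm chart rather than written by hand:
+
+```yaml
+apiVersion: akri.sh/v0
+kind: Configuration
+metadata:
+  name: akri-onvif
+spec:
+  discoveryHandler:
+    name: onvif                # which Discovery Handler to use
+    discoveryDetails: ""
+  brokerSpec:
+    brokerPodSpec:             # broker Pod deployed for each discovered camera
+      containers:
+        - name: broker
+          image: "example.com/video-frame-server:latest"
+  capacity: 2                  # how many nodes may use a single device at once
+```
+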
+## Quick Start with a Demo
+
+Try the [end to end demo](https://docs.akri.sh/demos/usb-camera-demo) of Akri to see Akri discover mock video cameras and a streaming app display the footage from those cameras. It includes instructions on K8s cluster setup. If you would like to perform the demo on a cluster of Raspberry Pi 4's, see the [Raspberry Pi 4 demo](https://docs.akri.sh/demos/usb-camera-demo-rpi4).
+
+## Documentation
+
+See Akri's [documentation site](https://docs.akri.sh/), which includes:
+
+- [User guide for deploying Akri using Helm](https://docs.akri.sh/user-guide/getting-started)
+- [Akri architecture](https://docs.akri.sh/architecture/architecture-overview)
+- [How to build Akri](https://docs.akri.sh/development/building)
+- [How to extend Akri for protocols that haven't been supported yet](https://docs.akri.sh/development/handler-development)
+- [How to create a broker to leverage discovered devices](https://docs.akri.sh/development/broker-development)
+
+To contribute to Akri's documentation, visit Akri's [docs repository](https://github.com/project-akri/akri-docs).
+
+## Roadmap
+
+Akri is built to be extensible. We currently have ONVIF, udev, and OPC UA Discovery Handlers, but as a community, we hope to continuously support more protocols. We have created a [Discovery Handler implementation roadmap](https://docs.akri.sh/community/roadmap#implement-additional-discovery-handlers) in order to prioritize development of Discovery Handlers. If there is a protocol you feel we should prioritize, please [create an issue](https://github.com/project-akri/akri/issues/new/choose), or better yet, contribute the implementation!
+
+To see what else is in store for Akri, reference our [roadmap](https://docs.akri.sh/community/roadmap).
+
+## Community, Contributing, and Support
+
+You can reach the Akri community via the [#akri](https://kubernetes.slack.com/messages/akri) channel in [Kubernetes Slack](https://kubernetes.slack.com) or join our [community calls](https://hackmd.io/@akri/S1GKJidJd) on the first Tuesday of the month at 9:00 AM PT.
+
+Akri welcomes contributions, whether by [creating new issues](https://github.com/project-akri/akri/issues/new/choose) or pull requests. See our [contributing document](https://docs.akri.sh/community/contributing) on how to get started!
+
+## Licensing
+
+This project is released under the [Apache 2.0 license](./LICENSE).
diff --git a/data/readmes/ansible-v2201rc1.md b/data/readmes/ansible-v2201rc1.md
new file mode 100644
index 0000000..467701c
--- /dev/null
+++ b/data/readmes/ansible-v2201rc1.md
@@ -0,0 +1,108 @@
+# Ansible - README (v2.20.1rc1)
+
+**Repository**: https://github.com/ansible/ansible
+**Version**: v2.20.1rc1
+
+---
+
+[](https://pypi.org/project/ansible-core)
+[](https://docs.ansible.com/ansible/latest/)
+[](https://docs.ansible.com/ansible/devel/community/communication.html)
+[](https://dev.azure.com/ansible/ansible/_build/latest?definitionId=20&branchName=devel)
+[](https://docs.ansible.com/ansible/devel/community/code_of_conduct.html)
+[](https://docs.ansible.com/ansible/devel/community/communication.html#mailing-list-information)
+[](COPYING)
+[](https://bestpractices.coreinfrastructure.org/projects/2372)
+
+# Ansible
+
+Ansible is a radically simple IT automation system. It handles
+configuration management, application deployment, cloud provisioning,
+ad-hoc task execution, network automation, and multi-node orchestration. Ansible makes complex
+changes like zero-downtime rolling updates with load balancers easy. More information on the Ansible [website](https://ansible.com/).
+
+## Design Principles
+
+* Have an extremely simple setup process with a minimal learning curve.
+* Manage machines quickly and in parallel.
+* Avoid custom agents and additional open ports; be agentless by
+ leveraging the existing SSH daemon.
+* Describe infrastructure in a language that is both machine and human
+ friendly.
+* Focus on security and easy auditability/review/rewriting of content.
+* Manage new remote machines instantly, without bootstrapping any
+ software.
+* Allow module development in any dynamic language, not just Python.
+* Be usable as non-root.
+* Be the easiest IT automation system to use, ever.
+
+## Use Ansible
+
+You can install a released version of Ansible with `pip` or a package manager. See our
+[installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) for details on installing Ansible
+on a variety of platforms.
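+
+Once installed, a minimal playbook illustrates the design principles above. This is a hedged sketch: the `webservers` host group and the nginx package are example names, not part of this guide.
+
+```yaml
+# site.yml - a minimal example playbook (host group and package are illustrative)
+- name: Ensure web servers are configured
+  hosts: webservers
+  become: true
+  tasks:
+    - name: Install nginx
+      ansible.builtin.package:
+        name: nginx
+        state: present
+    - name: Ensure nginx is running and enabled
+      ansible.builtin.service:
+        name: nginx
+        state: started
+        enabled: true
+```
+
+Run it against an inventory with `ansible-playbook -i inventory site.yml`.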
+
+Power users and developers can run the `devel` branch, which has the latest
+features and fixes, directly. Although it is reasonably stable, you are more likely to encounter
+breaking changes when running the `devel` branch. We recommend getting involved
+in the Ansible community if you want to run the `devel` branch.
+
+## Communication
+
+Join the Ansible forum to ask questions, get help, and interact with the
+community.
+
+* [Get Help](https://forum.ansible.com/c/help/6): Find help or share your Ansible knowledge to help others.
+ Use tags to filter and subscribe to posts, such as the following:
+ * Posts tagged with [ansible](https://forum.ansible.com/tag/ansible)
+ * Posts tagged with [ansible-core](https://forum.ansible.com/tag/ansible-core)
+ * Posts tagged with [playbook](https://forum.ansible.com/tag/playbook)
+* [Social Spaces](https://forum.ansible.com/c/chat/4): Meet and interact with fellow enthusiasts.
+* [News & Announcements](https://forum.ansible.com/c/news/5): Track project-wide announcements including social events.
+* [Bullhorn newsletter](https://docs.ansible.com/ansible/devel/community/communication.html#the-bullhorn): Get release announcements and important changes.
+
+For more ways to get in touch, see [Communicating with the Ansible community](https://docs.ansible.com/ansible/devel/community/communication.html).
+
+## Contribute to Ansible
+
+* Check out the [Contributor's Guide](./.github/CONTRIBUTING.md).
+* Read [Community Information](https://docs.ansible.com/ansible/devel/community) for all
+ kinds of ways to contribute to and interact with the project,
+ including how to submit bug reports and code to Ansible.
+* Submit a proposed code update through a pull request to the `devel` branch.
+* Talk to us before making larger changes
+ to avoid duplicate efforts. This not only helps everyone
+ know what is going on, but it also helps save time and effort if we decide
+ some changes are needed.
+
+## Coding Guidelines
+
+We document our Coding Guidelines in the [Developer Guide](https://docs.ansible.com/ansible/devel/dev_guide/). We particularly suggest you review:
+
+* [Contributing your module to Ansible](https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_checklist.html)
+* [Conventions, tips, and pitfalls](https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_best_practices.html)
+
+## Branch Info
+
+* The `devel` branch corresponds to the release actively under development.
+* The `stable-2.X` branches correspond to stable releases.
+* Create a branch based on `devel` and set up a [dev environment](https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_general.html#common-environment-setup) if you want to open a PR.
+* See the [Ansible release and maintenance](https://docs.ansible.com/ansible/devel/reference_appendices/release_and_maintenance.html) page for information about active branches.
+
+## Roadmap
+
+Based on team and community feedback, an initial roadmap will be published for each major or minor version (for example, 2.7, 2.8).
+The [Ansible Roadmap page](https://docs.ansible.com/ansible/devel/roadmap/) details what is planned and how to influence the roadmap.
+
+## Authors
+
+Ansible was created by [Michael DeHaan](https://github.com/mpdehaan)
+and has contributions from over 5000 users (and growing). Thanks everyone!
+
+[Ansible](https://www.ansible.com) is sponsored by [Red Hat, Inc.](https://www.redhat.com)
+
+## License
+
+GNU General Public License v3.0 or later
+
+See [COPYING](COPYING) to see the full text.
diff --git a/data/readmes/antrea-v250.md b/data/readmes/antrea-v250.md
new file mode 100644
index 0000000..d94d302
--- /dev/null
+++ b/data/readmes/antrea-v250.md
@@ -0,0 +1,144 @@
+# Antrea - README (v2.5.0)
+
+**Repository**: https://github.com/antrea-io/antrea
+**Version**: v2.5.0
+
+---
+
+# Antrea
+
+
+
+
+[](https://goreportcard.com/report/antrea.io/antrea)
+[](https://bestpractices.coreinfrastructure.org/projects/4173)
+[](https://opensource.org/licenses/Apache-2.0)
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea?ref=badge_shield)
+
+## Overview
+
+Antrea is a [Kubernetes](https://kubernetes.io) networking solution intended
+to be Kubernetes native. It operates at Layer 3/4 to provide networking and
+security services for a Kubernetes cluster, leveraging
+[Open vSwitch](https://www.openvswitch.org/) as the networking data plane.
+
+
+
+
+
+Open vSwitch is a widely adopted high-performance programmable virtual
+switch; Antrea leverages it to implement Pod networking and security features.
+For instance, Open vSwitch enables Antrea to implement Kubernetes
+Network Policies in a very efficient manner.
+
+## Prerequisites
+
+Antrea has been tested with Kubernetes clusters running version 1.23 or later.
+
+* `NodeIPAMController` must be enabled in the Kubernetes cluster.\
+ When deploying a cluster with kubeadm, the `--pod-network-cidr <cidr>`
+ option must be specified.
+ Alternatively, the NodeIPAM feature of the Antrea Controller can be enabled and
+ configured.
+* Open vSwitch kernel module must be present on every Kubernetes node.
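+
+For a kubeadm-based cluster, the prerequisites above typically translate into something like the following sketch (the Pod CIDR is an example value, and the manifest URL follows the pattern of Antrea's release assets; see the Getting started document for the exact steps):
+
+```bash
+# Enable NodeIPAMController by giving kubeadm a Pod network CIDR (example value)
+kubeadm init --pod-network-cidr=10.244.0.0/16
+
+# Deploy Antrea from a released manifest
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v2.5.0/antrea.yml
+```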
+
+## Getting Started
+
+Getting started with Antrea is very simple, and takes only a few minutes.
+See how it's done in the [Getting started](docs/getting-started.md) document.
+
+## Contributing
+
+The Antrea community welcomes new contributors. We are waiting for your PRs!
+
+* Before contributing, please get familiar with our
+[Code of Conduct](CODE_OF_CONDUCT.md).
+* Check out the Antrea [Contributor Guide](CONTRIBUTING.md) for information
+about setting up your development environment and our contribution workflow.
+* Learn about Antrea's [Architecture and Design](docs/design/architecture.md).
+Your feedback is more than welcome!
+* Check out [Open Issues](https://github.com/antrea-io/antrea/issues).
+* Join the Antrea [community](#community) and ask us any question you may have.
+
+### Community
+
+* Join the [Kubernetes Slack](http://slack.k8s.io/) and look for our
+[#antrea](https://kubernetes.slack.com/messages/CR2J23M0X) channel.
+* Check the [Antrea Team Calendar](https://calendar.google.com/calendar/embed?src=uuillgmcb1cu3rmv7r7jrhcrco%40group.calendar.google.com)
+ and join the developer and user communities!
+ + The [Antrea community meeting](https://broadcom.zoom.us/j/823654111?pwd=MEV6blNtUUtqallVSkVFSGZtQ1kwUT09),
+every two weeks on Tuesday at 5AM GMT+1 (United Kingdom time). See Antrea team calendar for localized times.
+ - [Meeting minutes](https://github.com/antrea-io/antrea/wiki/Community-Meetings)
+ - [Meeting recordings](https://www.youtube.com/playlist?list=PLuzde2hYeDBdw0BuQCYbYqxzoJYY1hfwv)
+ + [Antrea live office hours](https://antrea.io/live) archives.
+* Join our mailing lists to always stay up-to-date with Antrea development:
+ + [projectantrea-announce](https://groups.google.com/forum/#!forum/projectantrea-announce)
+for important project announcements.
+ + [projectantrea](https://groups.google.com/forum/#!forum/projectantrea)
+for updates about Antrea or provide feedback.
+ + [projectantrea-dev](https://groups.google.com/forum/#!forum/projectantrea-dev)
+to participate in discussions on Antrea development.
+
+Also check out [@ProjectAntrea](https://twitter.com/ProjectAntrea) on Twitter!
+
+## Features
+
+* **Kubernetes-native**: Antrea follows best practices to extend the Kubernetes
+ APIs and provide familiar abstractions to users, while also leveraging
+ Kubernetes libraries in its own implementation.
+* **Powered by Open vSwitch**: Antrea relies on Open vSwitch to implement all
+ networking functions, including Kubernetes Service load-balancing, and to
+ enable hardware offloading in order to support the most demanding workloads.
+* **Run everywhere**: Run Antrea in private clouds, public clouds and on bare
+ metal, and select the appropriate traffic mode (with or without overlay) based
+ on your infrastructure and use case.
+* **Comprehensive policy model**: Antrea provides a comprehensive network policy
+ model, which builds upon Kubernetes Network Policies with new features such as
+ policy tiering, rule priorities, cluster-level policies, and Node policies.
+ Refer to the [Antrea Network Policy documentation](docs/antrea-network-policy.md)
+ for a full list of features.
+* **Windows Node support**: Thanks to the portability of Open vSwitch, Antrea
+ can use the same data plane implementation on both Linux and Windows
+ Kubernetes Nodes.
+* **Multi-cluster networking**: Federate multiple Kubernetes clusters and
+ benefit from a unified data plane (including multi-cluster Services) and a
+ unified security posture. Refer to the [Antrea Multi-cluster documentation](docs/multicluster/user-guide.md)
+ to get started.
+* **Troubleshooting and monitoring tools**: Antrea comes with CLI and UI tools
+ which provide visibility and diagnostics capabilities (packet tracing, policy
+ analysis, flow inspection). It exposes Prometheus metrics and supports
+ exporting network flow information to collectors and analyzers.
+* **Network observability and analytics**: Antrea + [Theia](https://github.com/antrea-io/theia)
+ enable fine-grained visibility into the communication among Kubernetes
+ workloads. Theia provides visualization for Antrea network flows in Grafana
+ dashboards, and recommends Network Policies to secure the workloads.
+* **Network Policies for virtual machines**: Antrea-native policies can be
+ enforced on non-Kubernetes Nodes including VMs and baremetal servers. Project
+ [Nephe](https://github.com/antrea-io/nephe) implements security policies for
+ VMs across clouds, leveraging Antrea-native policies.
+* **Encryption**: Encryption of inter-Node Pod traffic with IPsec or WireGuard
+ tunnels.
+* **Easy deployment**: Antrea is deployed by applying a single YAML manifest
+ file.
+
+To explore more Antrea features and their usage, check the [Getting started](docs/getting-started.md#features)
+document and user guides in the [Antrea documentation folder](docs/). Refer to
+the [Changelogs](CHANGELOG/README.md) for a detailed list of features
+introduced for each version release.
+
+## Adopters
+
+For a list of Antrea Adopters, please refer to [ADOPTERS.md](ADOPTERS.md).
+
+## Roadmap
+
+We are adding features very quickly to Antrea. Check out the list of features we
+are considering on our [Roadmap](ROADMAP.md) page. Feel free to throw your ideas
+in!
+
+## License
+
+Antrea is licensed under the [Apache License, version 2.0](LICENSE)
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea?ref=badge_large)
diff --git a/data/readmes/ape-framework-v0842.md b/data/readmes/ape-framework-v0842.md
new file mode 100644
index 0000000..ce257e7
--- /dev/null
+++ b/data/readmes/ape-framework-v0842.md
@@ -0,0 +1,158 @@
+# Ape Framework - README (v0.8.42)
+
+**Repository**: https://github.com/ApeWorX/ape
+**Version**: v0.8.42
+
+---
+
+[![Pypi.org][pypi-badge]][pypi-url]
+[![Apache licensed][license-badge]][license-url]
+[![Build Status][actions-badge]][actions-url]
+[![Contributing][contributing-badge]][contributing-url]
+[![Discord chat][discord-badge]][discord-url]
+[![Twitter][twitter-badge]][twitter-url]
+
+# Overview
+
+[Ape Framework](https://apeworx.io/framework/) is an easy-to-use Web3 development tool.
+Users can compile, test, and interact with smart contracts all in one command line session.
+With our [modular plugin system](#plugin-system), Ape supports multiple contract languages and chains.
+
+Ape is built by [ApeWorX LTD](https://www.apeworx.io/).
+
+Join our [ApeWorX Discord server][discord-url] to stay up to date on new releases, plugins, and tutorials.
+
+If you want to get started now, see the [Quickstart](#quickstart) section.
+
+## Documentation
+
+Read our [technical documentation](https://docs.apeworx.io/ape/stable/) to get a deeper understanding of our open source Framework.
+
+Visit our [academic platform](https://academy.apeworx.io/), which will help you master Ape Framework through tutorials and challenges.
+
+## Prerequisite
+
+In the latest release, Ape requires:
+
+- Linux or macOS
+- Python 3.9 up to 3.12
+- **Windows**: Install Windows Subsystem for Linux [(WSL)](https://docs.microsoft.com/en-us/windows/wsl/install)
+
+Check your Python version in a terminal with `python3 --version`.
+
+## Installation
+
+There are three ways to install ape: `pipx`, `pip`, or `Docker`.
+
+### Considerations for Installing
+
+- If using `pip`, we advise using the most up-to-date version of `pip` to increase the chance of a successful installation.
+
+ - See [issue #1558](https://github.com/ApeWorX/ape/issues/1558).
+ - To upgrade `pip` from the command line, run: `pip install --upgrade pip`.
+
+- We advise installing in a [virtualenv](https://pypi.org/project/virtualenv/) or [venv](https://docs.python.org/3/library/venv.html) to avoid interfering with *OS-level site packages*.
+
+- We advise installing **`ape`** with the recommended plugins: `pip install 'eth-ape[recommended-plugins]'`.
+
+- We advise **macOS** users to install `virtualenv` via [homebrew](https://formulae.brew.sh/formula/virtualenv).
+
+### Installing with `pipx` or `pip`
+
+1. Install `pipx` via their [installation instructions](https://pypa.github.io/pipx/) or `pip` via their [installation instructions](https://pip.pypa.io/en/stable/cli/pip_install/).
+
+2. Install **`ape`** via `pipx install eth-ape` or `pip install eth-ape`.
+
+### Installing with `docker`
+
+Ape can also run in a docker container.
+
+You can pull our images from [ghcr](https://ghcr.io/apeworx/ape).
+This image is built using our `recommended-plugins` extra, so it is a great starting point for running ape in a containerized environment.
+
+We also have a `slim` docker image that is built without any installed plugins.
+This image is meant for production support and must be further configured if any plugins are in use.
+
+You can pull the image:
+
+```bash
+$ docker pull ghcr.io/apeworx/ape:latest # installs with recommended-plugins
+```
+
+or pull the slim if you have specific needs that you'd like to build from:
+
+```bash
+$ docker pull ghcr.io/apeworx/ape:latest-slim # installs ape with required packages
+```
+
+or build the image locally from source:
+
+```bash
+$ docker build -t ape:latest-slim -f Dockerfile.slim .
+$ docker build -t ape:latest .
+```
+
+An example of running a command from the container would be:
+
+```bash
+docker run \
+ --volume $HOME/.ape:/home/harambe/.ape \
+ --volume $HOME/.vvm:/home/harambe/.vvm \
+ --volume $HOME/.solcx:/home/harambe/.solcx \
+ --volume $PWD:/home/harambe/project \
+ apeworx/ape compile
+```
+
+> **Note:**
+> The above command requires the full install which includes `recommended-plugins` installation extra.
+
+## Quickstart
+
+After you have installed Ape, run `ape --version` to verify the installation was successful.
+
+You can interact with Ape using the [command line](https://docs.apeworx.io/ape/stable/userguides/clis.html) or the [Ape console](https://docs.apeworx.io/ape/stable/userguides/console.html).
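+
+For example, a first command-line session might look like the following sketch (these subcommands exist in Ape, but compilers and test tooling come from plugins and your project's configuration):
+
+```bash
+ape --version      # verify the installation
+ape init           # scaffold a new project in the current directory
+ape compile        # compile contracts (requires a compiler plugin, e.g. ape-solidity)
+ape test           # run the project's pytest-based tests
+ape console        # open an interactive console
+```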
+
+See the following user-guides for more in-depth tutorials:
+
+- [Accounts][accounts-guide]
+- [Networks][networks-guide]
+- [Projects][projects-guide]
+- [Compiling][compile-guide]
+- [Testing][testing-guide]
+- [Console][console-guide]
+- [Scripting][scripting-guide]
+- [Logging][logging-guide]
+
+## Plugin System
+
+Ape's modular plugin system allows users to have an interoperable experience with Web3.
+
+- Learn about **installing** plugins by following this [installing user guide](https://docs.apeworx.io/ape/stable/userguides/installing_plugins.html).
+
+- Learn more about **developing** your own plugins from this [developing user guide](https://docs.apeworx.io/ape/stable/userguides/developing_plugins.html).
+
+```{note}
+If a plugin does not originate from the [ApeWorX GitHub Organization](https://github.com/ApeWorX?q=ape&type=all), you will get a warning about installing 3rd-party plugins.
+Install 3rd party plugins at your own risk.
+```
+
+[accounts-guide]: https://docs.apeworx.io/ape/stable/userguides/accounts.html
+[actions-badge]: https://github.com/ApeWorX/ape/actions/workflows/test.yaml/badge.svg
+[actions-url]: https://github.com/ApeWorX/ape/actions?query=branch%3Amain+event%3Apush
+[compile-guide]: https://docs.apeworx.io/ape/stable/userguides/compile.html
+[console-guide]: https://docs.apeworx.io/ape/stable/userguides/console.html
+[contributing-badge]: https://img.shields.io/badge/CONTRIBUTING-guidelines-brightgreen?style=flat-square
+[contributing-url]: https://github.com/ApeWorX/ape?tab=contributing-ov-file
+[discord-badge]: https://img.shields.io/discord/922917176040640612.svg?logo=discord&style=flat-square
+[discord-url]: https://discord.gg/apeworx
+[license-badge]: https://img.shields.io/github/license/ApeWorX/ape?color=blue
+[license-url]: https://github.com/ApeWorX/ape?tab=License-1-ov-file
+[logging-guide]: https://docs.apeworx.io/ape/stable/userguides/logging.html
+[networks-guide]: https://docs.apeworx.io/ape/stable/userguides/networks.html
+[projects-guide]: https://docs.apeworx.io/ape/stable/userguides/projects.html
+[pypi-badge]: https://img.shields.io/pypi/dm/eth-ape?label=pypi.org
+[pypi-url]: https://pypi.org/project/eth-ape/
+[scripting-guide]: https://docs.apeworx.io/ape/stable/userguides/scripts.html
+[testing-guide]: https://docs.apeworx.io/ape/stable/userguides/testing.html
+[twitter-badge]: https://img.shields.io/twitter/follow/ApeFramework
+[twitter-url]: https://twitter.com/ApeFramework
diff --git a/data/readmes/apisix-3141.md b/data/readmes/apisix-3141.md
new file mode 100644
index 0000000..35605ac
--- /dev/null
+++ b/data/readmes/apisix-3141.md
@@ -0,0 +1,248 @@
+# APISIX - README (3.14.1)
+
+**Repository**: https://github.com/apache/apisix
+**Version**: 3.14.1
+
+---
+
+
+
+# Apache APISIX API Gateway | AI Gateway
+
+
+
+[](https://github.com/apache/apisix/actions/workflows/build.yml)
+[](https://github.com/apache/apisix/blob/master/LICENSE)
+[](https://github.com/apache/apisix/graphs/commit-activity)
+[](http://isitmaintained.com/project/apache/apisix "Average time to resolve an issue")
+[](http://isitmaintained.com/project/apache/apisix "Percentage of issues still open")
+[](https://apisix.apache.org/slack)
+
+**Apache APISIX** is a dynamic, real-time, high-performance API Gateway.
+
+APISIX API Gateway provides rich traffic management features such as load balancing, dynamic upstream, canary release, circuit breaking, authentication, observability, and more.
+
+APISIX can serve as an **[AI Gateway](https://apisix.apache.org/ai-gateway/)** through its flexible plugin system, providing AI proxying, load balancing for LLMs, retries and fallbacks, token-based rate limiting, and robust security to ensure the efficiency and reliability of AI agents. APISIX also provides the [`mcp-bridge`](https://apisix.apache.org/blog/2025/04/21/host-mcp-server-with-api-gateway/) plugin to seamlessly convert stdio-based MCP servers to scalable HTTP SSE services.
+
+You can use APISIX API Gateway to handle traditional north-south traffic, as well as east-west traffic between services. It can also be used as a [k8s ingress controller](https://github.com/apache/apisix-ingress-controller).
+
+The technical architecture of Apache APISIX:
+
+
+
+## Community
+
+- [Kindly Write a Review](https://www.g2.com/products/apache-apisix/reviews) for APISIX in G2.
+- Mailing List: Mail to dev-subscribe@apisix.apache.org and follow the reply to subscribe to the mailing list.
+- Slack Workspace - [invitation link](https://apisix.apache.org/slack) (Please open an [issue](https://apisix.apache.org/docs/general/submit-issue) if this link is expired), and then join the #apisix channel (Channels -> Browse channels -> search for "apisix").
+-  - follow and interact with us using hashtag `#ApacheAPISIX`
+- [Documentation](https://apisix.apache.org/docs/)
+- [Discussions](https://github.com/apache/apisix/discussions)
+- [Blog](https://apisix.apache.org/blog)
+
+## Features
+
+You can use APISIX API Gateway as a traffic entrance to process all business data, including dynamic routing, dynamic upstream, dynamic certificates,
+A/B testing, canary release, blue-green deployment, limit rate, defense against malicious attacks, metrics, monitoring alarms, service observability, service governance, etc.
+
+- **All platforms**
+
+ - Cloud-Native: Platform agnostic, No vendor lock-in, APISIX API Gateway can run from bare-metal to Kubernetes.
+ - Supports ARM64: No lock-in to a particular infrastructure technology.
+
+- **Multi protocols**
+
+ - [TCP/UDP Proxy](docs/en/latest/stream-proxy.md): Dynamic TCP/UDP proxy.
+ - [Dubbo Proxy](docs/en/latest/plugins/dubbo-proxy.md): Dynamic HTTP to Dubbo proxy.
+ - [Dynamic MQTT Proxy](docs/en/latest/plugins/mqtt-proxy.md): Supports load balancing MQTT connections by `client_id`; supports both MQTT [3.1.\*](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html) and [5.0](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html).
+ - [gRPC proxy](docs/en/latest/grpc-proxy.md): Proxying gRPC traffic.
+ - [gRPC Web Proxy](docs/en/latest/plugins/grpc-web.md): Proxying gRPC Web traffic to gRPC Service.
+ - [gRPC transcoding](docs/en/latest/plugins/grpc-transcode.md): Supports protocol transcoding so that clients can access your gRPC API by using HTTP/JSON.
+ - Proxy Websocket
+ - Proxy Protocol
+ - HTTP(S) Forward Proxy
+ - [SSL](docs/en/latest/certificate.md): Dynamically load an SSL certificate
+ - [HTTP/3 with QUIC](docs/en/latest/http3.md)
+
+- **Full Dynamic**
+
+ - [Hot Updates And Hot Plugins](docs/en/latest/terminology/plugin.md): Continuously updates its configurations and plugins without restarts!
+ - [Proxy Rewrite](docs/en/latest/plugins/proxy-rewrite.md): Supports rewriting the `host`, `uri`, `schema`, `method`, and `headers` of the request before sending it upstream.
+ - [Response Rewrite](docs/en/latest/plugins/response-rewrite.md): Sets a customized response status code, body, and headers for the client.
+ - Dynamic Load Balancing: Round-robin load balancing with weight.
+ - Hash-based Load Balancing: Load balance with consistent hashing sessions.
+ - [Health Checks](docs/en/latest/tutorials/health-check.md): Enables health checks on upstream nodes and automatically filters out unhealthy nodes during load balancing to ensure system stability.
+ - Circuit-Breaker: Intelligent tracking of unhealthy upstream services.
+ - [Proxy Mirror](docs/en/latest/plugins/proxy-mirror.md): Provides the ability to mirror client requests.
+ - [Traffic Split](docs/en/latest/plugins/traffic-split.md): Allows users to incrementally direct percentages of traffic between various upstreams.
+
+- **Fine-grained routing**
+
+ - [Supports full path matching and prefix matching](docs/en/latest/router-radixtree.md#how-to-use-libradixtree-in-apisix)
+ - [Support all Nginx built-in variables as conditions for routing](docs/en/latest/router-radixtree.md#how-to-filter-route-by-nginx-builtin-variable), so you can use `cookie`, `args`, etc. as routing conditions to implement canary release, A/B testing, etc.
+ - Support [various operators as judgment conditions for routing](https://github.com/iresty/lua-resty-radixtree#operator-list), for example `{"arg_age", ">", 24}`
+ - Support [custom route matching function](https://github.com/iresty/lua-resty-radixtree/blob/master/t/filter-fun.t#L10)
+ - IPv6: Use IPv6 to match the route.
+ - Support [TTL](docs/en/latest/admin-api.md#route)
+ - [Support priority](docs/en/latest/router-radixtree.md#3-match-priority)
+ - [Support Batch Http Requests](docs/en/latest/plugins/batch-requests.md)
+ - [Support filtering route by GraphQL attributes](docs/en/latest/router-radixtree.md#how-to-filter-route-by-graphql-attributes)
+
+- **Security**
+
+ - Rich authentication & authorization support:
+ * [key-auth](docs/en/latest/plugins/key-auth.md)
+ * [JWT](docs/en/latest/plugins/jwt-auth.md)
+ * [basic-auth](docs/en/latest/plugins/basic-auth.md)
+ * [wolf-rbac](docs/en/latest/plugins/wolf-rbac.md)
+ * [casbin](docs/en/latest/plugins/authz-casbin.md)
+ * [keycloak](docs/en/latest/plugins/authz-keycloak.md)
+ * [casdoor](docs/en/latest/plugins/authz-casdoor.md)
+ - [IP Whitelist/Blacklist](docs/en/latest/plugins/ip-restriction.md)
+ - [Referer Whitelist/Blacklist](docs/en/latest/plugins/referer-restriction.md)
+ - [IdP](docs/en/latest/plugins/openid-connect.md): Supports external identity platforms, such as Auth0, Okta, etc.
+ - [Limit-req](docs/en/latest/plugins/limit-req.md)
+ - [Limit-count](docs/en/latest/plugins/limit-count.md)
+ - [Limit-concurrency](docs/en/latest/plugins/limit-conn.md)
+ - Anti-ReDoS (Regular expression Denial of Service): built-in policies protect against ReDoS without any configuration.
+ - [CORS](docs/en/latest/plugins/cors.md): enable CORS (Cross-Origin Resource Sharing) for your API.
+ - [URI Blocker](docs/en/latest/plugins/uri-blocker.md): block client requests by URI.
+ - [Request Validator](docs/en/latest/plugins/request-validation.md)
+ - [CSRF](docs/en/latest/plugins/csrf.md): protect your API from CSRF attacks using the [`Double Submit Cookie`](https://en.wikipedia.org/wiki/Cross-site_request_forgery#Double_Submit_Cookie) pattern.
+
+- **OPS friendly**
+
+ - Zipkin tracing: [Zipkin](docs/en/latest/plugins/zipkin.md)
+ - Open source APM: support [Apache SkyWalking](docs/en/latest/plugins/skywalking.md)
+ - Works with external service discovery: In addition to the built-in etcd, it also supports [Consul](docs/en/latest/discovery/consul.md), [Consul_kv](docs/en/latest/discovery/consul_kv.md), [Nacos](docs/en/latest/discovery/nacos.md), [Eureka](docs/en/latest/discovery/eureka.md) and [Zookeeper (CP)](https://github.com/api7/apisix-seed/blob/main/docs/en/latest/zookeeper.md).
+ - Monitoring And Metrics: [Prometheus](docs/en/latest/plugins/prometheus.md)
+ - Clustering: APISIX nodes are stateless; the configuration center (etcd) can itself be clustered, see the [etcd Clustering Guide](https://etcd.io/docs/v3.5/op-guide/clustering/).
+ - High availability: supports configuring multiple etcd addresses in the same cluster.
+ - [Dashboard](https://github.com/apache/apisix-dashboard)
+ - Version Control: Supports rollbacks of operations.
+ - CLI: start/stop/reload APISIX through the command line.
+ - [Standalone](docs/en/latest/deployment-modes.md#standalone): supports loading route rules from a local YAML file, which is convenient in environments such as Kubernetes (k8s).
+ - [Global Rule](docs/en/latest/terminology/global-rule.md): allows running any plugin for all requests, e.g. rate limiting, IP filtering, etc.
+ - High performance: single-core QPS reaches 18k with an average latency of less than 0.2 milliseconds.
+ - [Fault Injection](docs/en/latest/plugins/fault-injection.md)
+ - [REST Admin API](docs/en/latest/admin-api.md): use the REST Admin API to control Apache APISIX. By default it only accepts requests from 127.0.0.1; you can modify the `allow_admin` field in `conf/config.yaml` to specify a list of IPs that are allowed to call the Admin API. The Admin API also uses key auth to verify the identity of the caller.
+ - External Loggers: Export access logs to external log management tools. ([HTTP Logger](docs/en/latest/plugins/http-logger.md), [TCP Logger](docs/en/latest/plugins/tcp-logger.md), [Kafka Logger](docs/en/latest/plugins/kafka-logger.md), [UDP Logger](docs/en/latest/plugins/udp-logger.md), [RocketMQ Logger](docs/en/latest/plugins/rocketmq-logger.md), [SkyWalking Logger](docs/en/latest/plugins/skywalking-logger.md), [Alibaba Cloud Logging(SLS)](docs/en/latest/plugins/sls-logger.md), [Google Cloud Logging](docs/en/latest/plugins/google-cloud-logging.md), [Splunk HEC Logging](docs/en/latest/plugins/splunk-hec-logging.md), [File Logger](docs/en/latest/plugins/file-logger.md), [SolarWinds Loggly Logging](docs/en/latest/plugins/loggly.md), [TencentCloud CLS](docs/en/latest/plugins/tencent-cloud-cls.md)).
+ - [ClickHouse](docs/en/latest/plugins/clickhouse-logger.md): push logs to ClickHouse.
+ - [Elasticsearch](docs/en/latest/plugins/elasticsearch-logger.md): push logs to Elasticsearch.
+ - [Datadog](docs/en/latest/plugins/datadog.md): push custom metrics to the DogStatsD server (bundled with the [Datadog agent](https://docs.datadoghq.com/agent/)) over UDP. DogStatsD is an implementation of the StatsD protocol that collects custom metrics from Apache APISIX, aggregates them into a single data point, and sends them to the configured Datadog server.
+ - [Helm charts](https://github.com/apache/apisix-helm-chart)
+ - [HashiCorp Vault](https://www.vaultproject.io/): secret management support for accessing secrets from a Vault secure storage backend in a low-trust environment. Currently, RS256 keys (public-private key pairs) or secret keys can be linked from Vault in the jwt-auth authentication plugin using the [APISIX Secret](docs/en/latest/terminology/secret.md) resource.
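+
+In standalone mode, route rules can be kept in a local `apisix.yaml` along the following lines (a hedged sketch; the upstream address is a placeholder, and the trailing `#END` marker is how APISIX detects that the file is complete):
+
+```yaml
+routes:
+  - uri: /hello
+    upstream:
+      type: roundrobin
+      nodes:
+        "127.0.0.1:1980": 1
+#END
+```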
+
+- **Highly scalable**
+ - [Custom plugins](docs/en/latest/plugin-develop.md): allows hooking into common phases, such as `rewrite`, `access`, `header filter`, `body filter` and `log`, as well as the `balancer` stage.
+ - [Plugin can be written in Java/Go/Python](docs/en/latest/external-plugin.md)
+ - [Plugin can be written with Proxy Wasm SDK](docs/en/latest/wasm.md)
+ - Custom load balancing algorithms: You can use custom load balancing algorithms during the `balancer` phase.
+ - Custom routing: users can implement their own routing algorithms.
+
+- **Multi-Language support**
+ - Apache APISIX is a multi-language gateway for plugin development, with support provided via `RPC` and `Wasm`.
+
+ - RPC is the current approach: developers can choose a language according to their needs, start an independent plugin-runner process, and exchange data with APISIX through local RPC communication. So far, APISIX supports [Java](https://github.com/apache/apisix-java-plugin-runner), [Golang](https://github.com/apache/apisix-go-plugin-runner), [Python](https://github.com/apache/apisix-python-plugin-runner) and Node.js.
+ - Wasm (WebAssembly) is an experimental approach: APISIX can load and run Wasm bytecode via the APISIX [wasm plugin](https://github.com/apache/apisix/blob/master/docs/en/latest/wasm.md) written with the [Proxy Wasm SDK](https://github.com/proxy-wasm/spec#sdks). Developers only need to write code against the SDK, then compile it into Wasm bytecode that runs in the Wasm VM inside APISIX.
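+
+As a sketch, a Wasm plugin built with the Proxy Wasm SDK is registered in `conf/config.yaml`; a minimal example might look like this (the plugin name, priority, and file path are illustrative assumptions; see the wasm docs for the exact fields):
+
+```yaml
+wasm:
+  plugins:
+    - name: my-wasm-filter
+      priority: 7999
+      file: path/to/my-wasm-filter.wasm
+```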
+
+- **Serverless**
+ - [Lua functions](docs/en/latest/plugins/serverless.md): Invoke functions in each phase in APISIX.
+ - [AWS Lambda](docs/en/latest/plugins/aws-lambda.md): Integration with AWS Lambda function as a dynamic upstream to proxy all requests for a particular URI to the AWS API gateway endpoint. Supports authorization via api key and AWS IAM access secret.
+ - [Azure Functions](docs/en/latest/plugins/azure-functions.md): Seamless integration with Azure Serverless Function as a dynamic upstream to proxy all requests for a particular URI to the Microsoft Azure cloud.
+ - [Apache OpenWhisk](docs/en/latest/plugins/openwhisk.md): Seamless integration with Apache OpenWhisk as a dynamic upstream to proxy all requests for a particular URI to your own OpenWhisk cluster.
+
+## Get Started
+
+1. Installation
+
+ Please refer to [install documentation](https://apisix.apache.org/docs/apisix/installation-guide/).
+
+2. Getting started
+
+ The getting started guide is a great way to learn the basics of APISIX. Just follow the steps in [Getting Started](https://apisix.apache.org/docs/apisix/getting-started/).
+
+ Further, you can follow the documentation to try more [plugins](docs/en/latest/plugins).
+
+3. Admin API
+
+ Apache APISIX provides [REST Admin API](docs/en/latest/admin-api.md) to dynamically control the Apache APISIX cluster.
+
+4. Plugin development
+
+ You can refer to the [plugin development guide](docs/en/latest/plugin-develop.md) and the sample plugin `example-plugin`'s implementation.
+ Reading about the [plugin concept](docs/en/latest/terminology/plugin.md) will help you learn more about plugins.
+
+For more documents, please refer to the [Apache APISIX Documentation site](https://apisix.apache.org/docs/apisix/getting-started/).
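+
+As a sketch of driving the Admin API from the command line (the port `9180`, the route values, and the `ADMIN_KEY` variable below are placeholders; check `conf/config.yaml` for your deployment's actual admin key and listen address), a route can be created against a running instance like this:
+
+```shell
+# Create route 1, proxying /hello to a single upstream node.
+# X-API-KEY must match an admin key from conf/config.yaml.
+curl -X PUT http://127.0.0.1:9180/apisix/admin/routes/1 \
+  -H "X-API-KEY: ${ADMIN_KEY}" \
+  -d '{
+    "uri": "/hello",
+    "upstream": {
+      "type": "roundrobin",
+      "nodes": { "127.0.0.1:1980": 1 }
+    }
+  }'
+```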
+
+## Benchmark
+
+Using an eight-core AWS server, APISIX's QPS reaches 140,000 with a latency of only 0.2 ms.
+
+The [benchmark script](benchmark/run.sh) is open source; you are welcome to try it and contribute.
+
+[APISIX also works perfectly on AWS Graviton3 C7g instances.](https://apisix.apache.org/blog/2022/06/07/installation-performance-test-of-apigateway-apisix-on-aws-graviton3)
+
+## User Stories
+
+- [European eFactory Platform: API Security Gateway – Using APISIX in the eFactory Platform](https://www.efactory-project.eu/post/api-security-gateway-using-apisix-in-the-efactory-platform)
+- [Copernicus Reference System Software](https://github.com/COPRS/infrastructure/wiki/Networking-trade-off)
+- [More Stories](https://apisix.apache.org/blog/tags/case-studies/)
+
+## Who Uses APISIX API Gateway?
+
+A wide variety of companies and organizations use the APISIX API Gateway for research, production, and commercial products; below are some of them:
+
+- Airwallex
+- Bilibili
+- CVTE
+- European eFactory Platform
+- European Copernicus Reference System
+- Geely
+- HONOR
+- Horizon Robotics
+- iQIYI
+- Lenovo
+- NASA JPL
+- Nayuki
+- OPPO
+- QingCloud
+- Swisscom
+- Tencent Game
+- Travelsky
+- vivo
+- Sina Weibo
+- WeCity
+- WPS
+- XPENG
+- Zoom
+
+## Logos
+
+- [Apache APISIX logo(PNG)](https://github.com/apache/apisix/tree/master/logos/apache-apisix.png)
+- [Apache APISIX logo source](https://apache.org/logos/#apisix)
+
+## Acknowledgments
+
+Inspired by Kong and Orange.
+
+## License
+
+[Apache 2.0 License](https://github.com/apache/apisix/tree/master/LICENSE)
diff --git a/data/readmes/apisix-dashboard-notice.md b/data/readmes/apisix-dashboard-notice.md
new file mode 100644
index 0000000..f42786f
--- /dev/null
+++ b/data/readmes/apisix-dashboard-notice.md
@@ -0,0 +1,38 @@
+# APISIX-Dashboard - README (notice)
+
+**Repository**: https://github.com/apache/apisix-dashboard
+**Version**: notice
+
+---
+
+# Apache APISIX Dashboard
+
+[](https://github.com/apache/apisix-dashboard/blob/master/LICENSE)
+[](https://apisix.apache.org/slack)
+
+
+
+- The master version should be used with the Apache APISIX master version.
+- The project will not be released independently but will use a fixed git tag for each APISIX release.
+
+## What's Apache APISIX Dashboard
+
+The Apache APISIX Dashboard is designed to make it as easy as possible for users to operate [Apache APISIX](https://github.com/apache/apisix) through a frontend interface.
+
+## Development
+
+Pull requests are encouraged and always welcome. [Pick an issue](https://github.com/apache/apisix-dashboard/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) and help us out!
+
+Please refer to the [Development Guide](./docs/en/development.md).
+
+## Contributing
+
+Please refer to the [Contribution Guide](./CONTRIBUTING.md) for more detailed information.
+
+## License
+
+[Apache License 2.0](./LICENSE)
diff --git a/data/readmes/aptos-aptos-node-v1384-rc.md b/data/readmes/aptos-aptos-node-v1384-rc.md
new file mode 100644
index 0000000..ff20e38
--- /dev/null
+++ b/data/readmes/aptos-aptos-node-v1384-rc.md
@@ -0,0 +1,34 @@
+# Aptos - README (aptos-node-v1.38.4-rc)
+
+**Repository**: https://github.com/aptos-labs/aptos-core
+**Version**: aptos-node-v1.38.4-rc
+
+---
+
+
+
+
+
+---
+
+[](LICENSE)
+[](https://github.com/aptos-labs/aptos-core/actions/workflows/lint-test.yaml)
+[](https://codecov.io/gh/aptos-labs/aptos-core)
+[](https://discord.gg/aptosnetwork)
+
+Aptos is a layer 1 blockchain bringing a paradigm shift to Web3 through better technology and user experience. Built with Move to create a home for developers building next-gen applications.
+
+## Getting Started
+
+* [Aptos Foundation](https://aptosfoundation.org/)
+* [Aptos Developer Network](https://aptos.dev)
+* [Guide - Integrate with the Aptos Blockchain](https://aptos.dev/guides/system-integrators-guide)
+* [Tutorials](https://aptos.dev/tutorials)
+* Follow us on [Twitter](https://twitter.com/Aptos).
+* Join us on the [Aptos Discord](https://discord.gg/aptosnetwork).
+
+## Contributing
+
+You can learn more about contributing to the Aptos project by reading our [Contribution Guide](https://github.com/aptos-labs/aptos-core/blob/main/CONTRIBUTING.md) and by viewing our [Code of Conduct](https://github.com/aptos-labs/aptos-core/blob/main/CODE_OF_CONDUCT.md).
+
+Aptos Core is licensed under [Innovation-Enabling Source Code License](https://github.com/aptos-labs/aptos-core/blob/main/LICENSE).
diff --git a/data/readmes/arbitrum-v394-rc2.md b/data/readmes/arbitrum-v394-rc2.md
new file mode 100644
index 0000000..7e1d074
--- /dev/null
+++ b/data/readmes/arbitrum-v394-rc2.md
@@ -0,0 +1,63 @@
+# Arbitrum - README (v3.9.4-rc.2)
+
+**Repository**: https://github.com/OffchainLabs/nitro
+**Version**: v3.9.4-rc.2
+
+---
+
+
+
+
+## About Arbitrum Nitro
+
+
+
+Nitro is the latest iteration of the Arbitrum technology. It is a fully integrated, complete
+layer 2 optimistic rollup system, including fraud proofs, the sequencer, the token bridges,
+advanced calldata compression, and more.
+
+See the live docs-site [here](https://developer.arbitrum.io/) (or [here](https://github.com/OffchainLabs/arbitrum-docs) for markdown docs source.)
+
+See [here](https://docs.arbitrum.io/audit-reports) for security audit reports.
+
+The Nitro stack is built on several innovations. At its core is a new prover, which can do Arbitrum’s classic
+interactive fraud proofs over WASM code. That means the L2 Arbitrum engine can be written and compiled using
+standard languages and tools, replacing the custom-designed language and compiler used in previous Arbitrum
+versions. In normal execution,
+validators and nodes run the Nitro engine compiled to native code, switching to WASM if a fraud proof is needed.
+We compile the core of Geth, the EVM engine that practically defines the Ethereum standard, right into Arbitrum.
+So the previous custom-built EVM emulator is replaced by Geth, the most popular and well-supported Ethereum client.
+
+The last piece of the stack is a slimmed-down version of our ArbOS component, rewritten in Go, which provides the
+rest of what’s needed to run an L2 chain: things like cross-chain communication, and a new and improved batching
+and compression system to minimize L1 costs.
+
+Essentially, Nitro runs Geth at layer 2 on top of Ethereum, and can prove fraud over the core engine of Geth
+compiled to WASM.
+
+Arbitrum One successfully migrated from the Classic Arbitrum stack onto Nitro on 8/31/22. (See [state migration](https://developer.arbitrum.io/migration/state-migration) and [dapp migration](https://developer.arbitrum.io/migration/dapp_migration) for more info).
+
+## License
+
+Nitro is currently licensed under a [Business Source License](./LICENSE.md), similar to our friends at Uniswap and Aave, with an "Additional Use Grant" to ensure that everyone can have full comfort using and running nodes on all public Arbitrum chains.
+
+The Additional Use Grant also permits the deployment of the Nitro software, in a permissionless fashion and without cost, as a new blockchain provided that the chain settles to either Arbitrum One or Arbitrum Nova.
+
+For those that prefer to deploy the Nitro software either directly on Ethereum (i.e. an L2) or have it settle to another Layer-2 on top of Ethereum, the [Arbitrum Expansion Program (the "AEP")](https://docs.arbitrum.foundation/aep/ArbitrumExpansionProgramTerms.pdf) was recently established. The AEP allows for the permissionless deployment in the aforementioned fashion provided that 10% of net revenue (as more fully described in the AEP) is contributed back to the Arbitrum community in accordance with the requirements of the AEP.
+
+## Contact
+
+Discord - [Arbitrum](https://discord.com/invite/5KE54JwyTs)
+
+Twitter: [Arbitrum](https://twitter.com/arbitrum)
diff --git a/data/readmes/argo-v321.md b/data/readmes/argo-v321.md
new file mode 100644
index 0000000..4c23213
--- /dev/null
+++ b/data/readmes/argo-v321.md
@@ -0,0 +1,96 @@
+# Argo - README (v3.2.1)
+
+**Repository**: https://github.com/argoproj/argo-cd
+**Version**: v3.2.1
+
+---
+
+**Releases:**
+[](https://github.com/argoproj/argo-cd/releases/latest)
+[](https://artifacthub.io/packages/helm/argo/argo-cd)
+[](https://slsa.dev)
+
+**Code:**
+[](https://github.com/argoproj/argo-cd/actions?query=workflow%3A%22Integration+tests%22)
+[](https://codecov.io/gh/argoproj/argo-cd)
+[](https://bestpractices.coreinfrastructure.org/projects/4486)
+[](https://scorecard.dev/viewer/?uri=github.com/argoproj/argo-cd)
+
+**Social:**
+[](https://twitter.com/argoproj)
+[](https://argoproj.github.io/community/join-slack)
+[](https://www.linkedin.com/company/argoproj/)
+
+# Argo CD - Declarative Continuous Delivery for Kubernetes
+
+## What is Argo CD?
+
+Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
+
+
+
+[](https://youtu.be/0WAm0y2vLIo)
+
+## Why Argo CD?
+
+1. Application definitions, configurations, and environments should be declarative and version controlled.
+1. Application deployment and lifecycle management should be automated, auditable, and easy to understand.
+
+## Who uses Argo CD?
+
+[Official Argo CD user list](USERS.md)
+
+## Documentation
+
+To learn more about Argo CD [go to the complete documentation](https://argo-cd.readthedocs.io/).
+Check live demo at https://cd.apps.argoproj.io/.
+
+## Community
+
+### Contribution, Discussion and Support
+
+ You can reach the Argo CD community and developers via the following channels:
+
+* Q & A : [Github Discussions](https://github.com/argoproj/argo-cd/discussions)
+* Chat : [The #argo-cd Slack channel](https://argoproj.github.io/community/join-slack)
+* Contributors Office Hours: [Every Thursday](https://calendar.google.com/calendar/u/0/embed?src=argoproj@gmail.com) | [Agenda](https://docs.google.com/document/d/1xkoFkVviB70YBzSEa4bDnu-rUZ1sIFtwKKG1Uw8XsY8)
+* User Community meeting: [First Wednesday of the month](https://calendar.google.com/calendar/u/0/embed?src=argoproj@gmail.com) | [Agenda](https://docs.google.com/document/d/1ttgw98MO45Dq7ZUHpIiOIEfbyeitKHNfMjbY5dLLMKQ)
+
+
+Participation in the Argo CD project is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)
+
+
+### Blogs and Presentations
+
+1. [Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo](https://github.com/terrytangyuan/awesome-argo)
+1. [Unveil the Secret Ingredients of Continuous Delivery at Enterprise Scale with Argo CD](https://akuity.io/blog/secret-ingredients-of-continuous-delivery-at-enterprise-scale-with-argocd/)
+1. [GitOps Without Pipelines With ArgoCD Image Updater](https://youtu.be/avPUQin9kzU)
+1. [Combining Argo CD (GitOps), Crossplane (Control Plane), And KubeVela (OAM)](https://youtu.be/eEcgn_gU3SM)
+1. [How to Apply GitOps to Everything - Combining Argo CD and Crossplane](https://youtu.be/yrj4lmScKHQ)
+1. [Couchbase - How To Run a Database Cluster in Kubernetes Using Argo CD](https://youtu.be/nkPoPaVzExY)
+1. [Automation of Everything - How To Combine Argo Events, Workflows & Pipelines, CD, and Rollouts](https://youtu.be/XNXJtxkUKeY)
+1. [Environments Based On Pull Requests (PRs): Using Argo CD To Apply GitOps Principles On Previews](https://youtu.be/cpAaI8p4R60)
+1. [Argo CD: Applying GitOps Principles To Manage Production Environment In Kubernetes](https://youtu.be/vpWQeoaiRM4)
+1. [Creating Temporary Preview Environments Based On Pull Requests With Argo CD And Codefresh](https://codefresh.io/continuous-deployment/creating-temporary-preview-environments-based-pull-requests-argo-cd-codefresh/)
+1. [Tutorial: Everything You Need To Become A GitOps Ninja](https://www.youtube.com/watch?v=r50tRQjisxw) 90m tutorial on GitOps and Argo CD.
+1. [Comparison of Argo CD, Spinnaker, Jenkins X, and Tekton](https://www.inovex.de/blog/spinnaker-vs-argo-cd-vs-tekton-vs-jenkins-x/)
+1. [Simplify and Automate Deployments Using GitOps with IBM Multicloud Manager 3.1.2](https://www.ibm.com/cloud/blog/simplify-and-automate-deployments-using-gitops-with-ibm-multicloud-manager-3-1-2)
+1. [GitOps for Kubeflow using Argo CD](https://v0-6.kubeflow.org/docs/use-cases/gitops-for-kubeflow/)
+1. [GitOps Toolsets on Kubernetes with CircleCI and Argo CD](https://www.digitalocean.com/community/tutorials/webinar-series-gitops-tool-sets-on-kubernetes-with-circleci-and-argo-cd)
+1. [CI/CD in Light Speed with K8s and Argo CD](https://www.youtube.com/watch?v=OdzH82VpMwI&feature=youtu.be)
+1. [Machine Learning as Code](https://www.youtube.com/watch?v=VXrGp5er1ZE&t=0s&index=135&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU). Among other things, describes how Kubeflow uses Argo CD to implement GitOPs for ML
+1. [Argo CD - GitOps Continuous Delivery for Kubernetes](https://www.youtube.com/watch?v=aWDIQMbp1cc&feature=youtu.be&t=1m4s)
+1. [Introduction to Argo CD : Kubernetes DevOps CI/CD](https://www.youtube.com/watch?v=2WSJF7d8dUg&feature=youtu.be)
+1. [GitOps Deployment and Kubernetes - using Argo CD](https://medium.com/riskified-technology/gitops-deployment-and-kubernetes-f1ab289efa4b)
+1. [Deploy Argo CD with Ingress and TLS in Three Steps: No YAML Yak Shaving Required](https://itnext.io/deploy-argo-cd-with-ingress-and-tls-in-three-steps-no-yaml-yak-shaving-required-bc536d401491)
+1. [GitOps Continuous Delivery with Argo and Codefresh](https://codefresh.io/events/cncf-member-webinar-gitops-continuous-delivery-argo-codefresh/)
+1. [Stay up to date with Argo CD and Renovate](https://mjpitz.com/blog/2020/12/03/renovate-your-gitops/)
+1. [Setting up Argo CD with Helm](https://www.arthurkoziel.com/setting-up-argocd-with-helm/)
+1. [Applied GitOps with Argo CD](https://thenewstack.io/applied-gitops-with-argocd/)
+1. [Solving configuration drift using GitOps with Argo CD](https://www.cncf.io/blog/2020/12/17/solving-configuration-drift-using-gitops-with-argo-cd/)
+1. [Decentralized GitOps over environments](https://blogs.sap.com/2021/05/06/decentralized-gitops-over-environments/)
+1. [Getting Started with ArgoCD for GitOps Deployments](https://youtu.be/AvLuplh1skA)
+1. [Using Argo CD & Datree for Stable Kubernetes CI/CD Deployments](https://youtu.be/17894DTru2Y)
+1. [How to create Argo CD Applications Automatically using ApplicationSet? "Automation of GitOps"](https://amralaayassen.medium.com/how-to-create-argocd-applications-automatically-using-applicationset-automation-of-the-gitops-59455eaf4f72)
+1. [Progressive Delivery with Service Mesh – Argo Rollouts with Istio](https://www.cncf.io/blog/2022/12/16/progressive-delivery-with-service-mesh-argo-rollouts-with-istio/)
+
diff --git a/data/readmes/armada-v02021.md b/data/readmes/armada-v02021.md
new file mode 100644
index 0000000..674fe3a
--- /dev/null
+++ b/data/readmes/armada-v02021.md
@@ -0,0 +1,230 @@
+# Armada - README (v0.20.21)
+
+**Repository**: https://github.com/armadaproject/armada
+**Version**: v0.20.21
+
+---
+
+
+
+
+
+
+
+
+
+
+
+# Armada
+
+Armada is a system built on top of [Kubernetes](https://kubernetes.io/docs/concepts/overview/) for running batch workloads. With Armada as middleware for batch, Kubernetes can be a common substrate for batch and service workloads. Armada is used in production and can run millions of jobs per day across tens of thousands of nodes.
+
+Armada addresses the following limitations of Kubernetes:
+
+1. Scaling a single Kubernetes cluster beyond a certain size is [challenging](https://openai.com/blog/scaling-kubernetes-to-7500-nodes/). Hence, Armada is designed to effectively schedule jobs across many Kubernetes clusters. Many thousands of nodes can be managed by Armada in this way.
+2. Achieving very high throughput using the in-cluster storage backend, etcd, is [challenging](https://etcd.io/docs/v3.5/op-guide/performance/). Hence, Armada performs queueing and scheduling out-of-cluster using a specialized storage layer. This allows Armada to maintain queues composed of millions of jobs.
+3. The default [kube-scheduler](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/) is not suitable for batch. Instead, Armada includes a novel multi-Kubernetes cluster scheduler with support for important batch scheduling features, such as:
+ * Fair queuing and scheduling across multiple users. Based on dominant resource fairness.
+ * Resource and job scheduling rate limits.
+ * Gang-scheduling, i.e., atomically scheduling sets of related jobs.
+ * Job preemption, both to run urgent jobs in a timely fashion and to balance resource allocation between users.
+
+Armada also provides features to help manage large compute clusters effectively, including:
+
+* Detailed analytics exposed via [Prometheus](https://prometheus.io/) showing how the system behaves and how resources are allocated.
+* Automatically removing nodes exhibiting high failure rates from consideration for scheduling.
+* A mechanism to earmark nodes for a particular set of jobs, while still allowing other jobs to use them when they are not needed for their primary purpose.
+
+Armada is designed with the enterprise in mind; all components are secure and highly available.
+
+Armada is a [CNCF](https://www.cncf.io/) Sandbox project and is used in production at [G-Research](https://www.gresearch.co.uk/).
+
+For an overview of Armada, see the following videos:
+
+- [Armada - high-throughput batch scheduling](https://www.youtube.com/watch?v=FT8pXYciD9A)
+- [Building Armada - Running Batch Jobs at Massive Scale on Kubernetes](https://www.youtube.com/watch?v=B3WPxw3OUl4)
+
+The Armada project adheres to the CNCF [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
+
+## Installation
+
+### Armada Operator
+
+The easiest way to install Armada is with the Armada Operator.
+For more information, see the [Armada Operator repository](https://github.com/armadaproject/armada-operator).
+
+Alternatively, you can install Armada manually by using the Helm charts defined in the `deployment` directory.
+
+### armadactl
+
+Armada also provides a command-line interface, `armadactl`, which can be used to interact with the Armada system.
+
+To install `armadactl`, run the following script:
+```bash
+scripts/get-armadactl.sh
+```
+
+Or download it from the [GitHub Release](https://github.com/armadaproject/armada/releases/latest) page for your platform.
+
+## Local Development
+
+### Local Development with Goreman
+
+[Goreman](https://github.com/mattn/goreman) is a Go-based clone of [Foreman](https://github.com/ddollar/foreman) that manages Procfile-based applications,
+allowing you to run multiple processes with a single command.
+
+Goreman will build the components from source and run them locally, making it easy to test changes quickly.
+
+1. Install `goreman`:
+ ```shell
+ go install github.com/mattn/goreman@latest
+ ```
+2. Start dependencies:
+ ```shell
+ docker-compose -f _local/docker-compose-deps.yaml up -d
+ ```
+ - **Note**: Images can be overridden using environment variables:
+ `REDIS_IMAGE`, `POSTGRES_IMAGE`, `PULSAR_IMAGE`, `KEYCLOAK_IMAGE`
+3. Initialize databases and Kubernetes resources:
+ ```shell
+ scripts/localdev-init.sh
+ ```
+4. Start Armada components:
+ ```shell
+ goreman -f _local/procfiles/no-auth.Procfile start
+ ```
+
+### Local Development with Authentication
+
+To run Armada with OIDC authentication enabled using Keycloak:
+
+1. Start dependencies with the auth profile:
+ ```shell
+ docker-compose -f _local/docker-compose-deps.yaml --profile auth up -d
+ ```
+ This starts Redis, PostgreSQL, Pulsar, and Keycloak with a pre-configured realm.
+
+2. Initialize databases and Kubernetes resources:
+ ```shell
+ scripts/localdev-init.sh
+ ```
+
+3. Start Armada components with auth configuration:
+ ```shell
+ goreman -f _local/procfiles/auth.Procfile start
+ ```
+
+4. Use armadactl with OIDC authentication:
+ ```shell
+ armadactl --config _local/.armadactl.yaml --context auth-oidc get queues
+ ```
+
+#### Authentication Configuration
+
+The auth profile configures:
+- **Keycloak**: OIDC provider running on http://localhost:8180 with pre-configured realm, users, and clients
+- **Users**: `admin/admin` (admin group), `user/password` (users group) for both OIDC and basic auth
+- **Service accounts**: Executor and Scheduler use OIDC Client Credentials flow for service-to-service authentication
+- **APIs**: Server, Lookout, and Binoculars APIs are secured with OIDC and basic auth
+- **Web UIs**: Lookout UI uses OIDC for user authentication
+- **armadactl**: Supports multiple authentication flows - OIDC PKCE flow (`auth-oidc`), OIDC Device flow (`auth-oidc-device`), OIDC Password flow (`auth-oidc-password`), and basic auth (`auth-basic`)
+
+All components support both OIDC and basic auth for convenience.
+
+### Local Development with Fake Executor
+
+For testing Armada without a real Kubernetes cluster, you can use the fake executor that simulates a Kubernetes environment:
+
+```shell
+goreman -f _local/procfiles/fake-executor.Procfile start
+```
+
+The fake executor simulates:
+- 2 virtual nodes with 8 CPUs and 32Gi memory each
+- Pod lifecycle management without actual container execution
+- Resource allocation and job state transitions
+
+This is useful for:
+- Testing Armada's scheduling logic
+- Development when Kubernetes is not available
+- Integration testing of job flows
+
+### Available Procfiles
+
+All Procfiles are located in `_local/procfiles/`:
+
+| Procfile | Description |
+|--------------------------|---------------------------------------------------|
+| `no-auth.Procfile` | Standard setup without authentication |
+| `auth.Procfile` | Standard setup with OIDC authentication |
+| `fake-executor.Procfile` | Uses fake executor for testing without Kubernetes |
+
+Restart individual processes with `goreman restart <process>` (e.g., `goreman restart server`).
+
+### Service Ports
+
+Run `goreman run status` to check the status of the processes (running processes are prefixed with `*`):
+```shell
+$ goreman run status
+*server
+*scheduler
+*scheduleringester
+*eventingester
+*executor
+*lookout
+*lookoutingester
+*binoculars
+*lookoutui
+```
+
+Goreman exposes services on the following ports:
+
+| Service | Port | Description |
+|----------------------------|-------|---------------------|
+| Server gRPC | 50051 | Armada gRPC API |
+| Server HTTP | 8081 | REST API & Health |
+| Server Metrics | 9000 | Prometheus metrics |
+| Scheduler gRPC | 50052 | Scheduler API |
+| Scheduler Metrics | 9001 | Prometheus metrics |
+| Scheduler Ingester Metrics | 9006 | Prometheus metrics |
+| Lookout UI | 3000 | Frontend dev server |
+| Lookout Metrics | 9003 | Prometheus metrics |
+| Lookout Ingester Metrics | 9005 | Prometheus metrics |
+| Executor Metrics | 9002 | Prometheus metrics |
+| Event Ingester Metrics | 9004 | Prometheus metrics |
+| Executor HTTP | 8082 | Executor HTTP |
+| Binoculars HTTP | 8084 | Binoculars HTTP |
+| Binoculars gRPC | 50053 | Binoculars gRPC |
+| Binoculars Metrics | 9007 | Prometheus metrics |
+| Redis | 6379 | Cache & events |
+| PostgreSQL | 5432 | Database |
+| Pulsar | 6650 | Message broker |
+
+## Documentation
+
+For documentation, see the following:
+
+- [System overview](./docs/system_overview.md)
+- [Scheduler](./docs/scheduler.md)
+- [User guide](./docs/user.md)
+- [Development guide](./docs/developer.md)
+- [Release notes/Version history](https://github.com/armadaproject/armada/releases)
+- [API Documentation](./docs/developer/api.md)
+
+We expect readers of the documentation to have a basic understanding of Docker and Kubernetes; see, e.g., the following links:
+
+- [Docker overview](https://docs.docker.com/get-started/overview/)
+- [Kubernetes overview](https://kubernetes.io/docs/concepts/overview/)
+
+## Contributions
+
+Thank you for considering contributing to Armada!
+We want everyone to feel that they can contribute to the Armada Project.
+Your contributions are valuable, whether it's fixing a bug, implementing a new feature, improving documentation, or suggesting enhancements.
+We appreciate your time and effort in helping make this project better for everyone.
+For more information about contributing to Armada, see [CONTRIBUTING.md](https://github.com/armadaproject/armada/blob/master/CONTRIBUTING.md); before contributing, please also read [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md).
+
+## Discussion
+
+If you are interested in discussing Armada, you can find us on [Slack](https://cloud-native.slack.com/?redir=%2Farchives%2FC03T9CBCEMC)
+
diff --git a/data/readmes/artifact-hub-v1220.md b/data/readmes/artifact-hub-v1220.md
new file mode 100644
index 0000000..0852433
--- /dev/null
+++ b/data/readmes/artifact-hub-v1220.md
@@ -0,0 +1,119 @@
+# Artifact Hub - README (v1.22.0)
+
+**Repository**: https://github.com/artifacthub/hub
+**Version**: v1.22.0
+
+---
+
+# Artifact Hub
+
+[](https://goreportcard.com/report/github.com/artifacthub/hub)
+[](https://bestpractices.coreinfrastructure.org/projects/4106)
+[](https://artifacthub.io/packages/helm/artifact-hub/artifact-hub)
+[](https://clomonitor.io/projects/cncf/artifact-hub)
+[](https://securityscorecards.dev/viewer/?uri=github.com/artifacthub/hub)
+[](https://gitpod.io/#https://github.com/artifacthub/hub)
+[](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Fartifacthub%2Fhub?ref=badge_shield)
+
+[Artifact Hub](https://artifacthub.io) is a web-based application that enables finding, installing, and publishing Cloud Native packages and configurations.
+
+Discovering artifacts to use with CNCF projects can be difficult. If every CNCF project that needs to share artifacts creates its own hub, that results in a fair amount of repeated work for each project and a fragmented experience for those trying to find artifacts to consume. Artifact Hub attempts to solve this by providing a single experience for consumers that any CNCF project can leverage.
+
+At the moment, the following artifact kinds are supported *(with plans to support more)*:
+
+- [Argo templates](https://argoproj.github.io/argo-workflows/)
+- [Backstage plugins](https://backstage.io)
+- [Bootable containers](https://containers.github.io/bootc/)
+- [Container images](https://opencontainers.org)
+- [CoreDNS plugins](https://coredns.io/)
+- [Falco configurations](https://falco.org/)
+- [Gatekeeper policies](https://open-policy-agent.github.io/gatekeeper/website/docs/)
+- [Headlamp plugins](https://headlamp.dev)
+- [Helm charts](https://helm.sh/)
+- [Helm plugins](https://helm.sh/docs/topics/plugins/)
+- [Inspektor Gadgets](https://www.inspektor-gadget.io)
+- [Kagent agents](https://kagent.dev)
+- [KCL modules](https://kcl-lang.io)
+- [KEDA scalers](https://keda.sh/)
+- [Keptn integrations](https://keptn.sh)
+- [Knative client plugins](https://knative.dev)
+- [KubeArmor policies](https://kubearmor.io)
+- [Kubectl plugins (Krew)](https://krew.sigs.k8s.io/)
+- [Kubewarden policies](https://www.kubewarden.io)
+- [Kyverno policies](https://kyverno.io)
+- [Meshery designs](https://meshery.io)
+- [OLM operators](https://github.com/operator-framework)
+- [OpenCost plugins](https://www.opencost.io)
+- [Open Policy Agent (OPA) policies](https://www.openpolicyagent.org/)
+- [Radius Recipes](https://radapp.io)
+- [Tekton tasks, pipelines and stepactions](https://tekton.dev/)
+- [Tinkerbell actions](https://tinkerbell.org/)
+
+You can use Artifact Hub to:
+
+- [Discover](https://artifacthub.io/packages/search), [install](https://artifacthub.io/packages/helm/artifact-hub/artifact-hub?modal=install) and [publish](https://artifacthub.io/docs/topics/repositories/) packages and configurations
+- Explore content like Helm charts [schemas](https://artifacthub.io/packages/helm/artifact-hub/artifact-hub?modal=values-schema) and [templates](https://artifacthub.io/packages/helm/artifact-hub/artifact-hub/0.20.0?modal=template&template=db_migrator_install_job.yaml) in an interactive way
+- Subscribe to notifications for packages' new releases and security alerts, via email or webhooks
+- Visualize packages' [security reports](https://artifacthub.io/packages/helm/artifact-hub/artifact-hub/0.19.0?modal=security-report)
+- Inspect packages' [changelog](https://artifacthub.io/packages/helm/artifact-hub/artifact-hub?modal=changelog)
+
+Feel free to ask any questions on the #artifact-hub channel in the CNCF Slack. To get an invite please visit [http://slack.cncf.io/](http://slack.cncf.io/).
+
+Artifact Hub is a [CNCF Incubating Project](https://www.cncf.io/projects/).
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+## Getting started
+
+[Artifact Hub](https://artifacthub.io) allows publishers to list their content in an automated way. Please check out the [repositories guide](https://artifacthub.io/docs/topics/repositories/) for more details about how to add your repositories.
+
+If you want to run your own Artifact Hub instance in your Kubernetes cluster, the easiest way is by deploying the Helm chart provided. For more details, please see the [Helm chart documentation in Artifact Hub](https://artifacthub.io/packages/helm/artifact-hub/artifact-hub).
+
+## Contributing
+
+Please see [CONTRIBUTING.md](./CONTRIBUTING.md) for more details.
+
+## Community
+
+Artifact Hub is an open source project. Aside from contributing code and feature suggestions, you can also engage via:
+
+- Attending a meeting. Meetings are on the 2nd Tuesday of the month at 10:30am PT / 1:30pm ET. [Meeting minutes and agenda are in Google Docs](https://docs.google.com/document/d/1nkIgFh4dNPawoDD_9fV7vicVSeKk2Zcdd0C5yovSiKQ/edit).
+- Joining [CNCF Slack](https://cloud-native.slack.com) ([invite link](https://slack.cncf.io/)) and the #artifact-hub channel.
+
+## Changelog
+
+The *changelog* is [available on Artifact Hub](https://artifacthub.io/packages/helm/artifact-hub/artifact-hub?modal=changelog).
+
+## Code of Conduct
+
+This project follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
+
+## Roadmap
+
+Please see [ROADMAP.md](./ROADMAP.md) for more details.
+
+## Security
+
+To report a security problem in Artifact Hub, please contact the Maintainers Team; see [SECURITY.md](./SECURITY.md) for more details.
+
+## CLOMonitor Report
+
+[](https://clomonitor.io/projects/cncf/artifact-hub)
+
+## License
+
+Artifact Hub is an Open Source project licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+
+[](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Fartifacthub%2Fhub?ref=badge_large)
diff --git a/data/readmes/athenz-v11231.md b/data/readmes/athenz-v11231.md
new file mode 100644
index 0000000..577c49f
--- /dev/null
+++ b/data/readmes/athenz-v11231.md
@@ -0,0 +1,122 @@
+# Athenz - README (v1.12.31)
+
+**Repository**: https://github.com/AthenZ/athenz
+**Version**: v1.12.31
+
+---
+
+
+
+# Athenz
+
+[](https://github.com/AthenZ/athenz/actions)
+[](https://sourcespy.com/github/athenzathenz/)
+[](https://bestpractices.coreinfrastructure.org/projects/4681)
+[](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2FAthenZ%2Fathenz?ref=badge_shield)
+
+> Athenz is an open source platform for X.509 certificate based service authentication and fine-grained
+> access control in dynamic infrastructures. It supports provisioning and configuration (centralized
+> authorization) use cases as well as serving/runtime (decentralized authorization) use cases. The Athenz
+> authorization system utilizes X.509 certificates and industry-standard mutual-TLS-bound OAuth2 access
+> tokens. The name “Athenz” is derived from “AuthNZ” (N for authentication and Z for authorization).
+
+## Table of Contents
+
+* [Background](#background)
+* [Install](#install)
+* [Usage](#usage)
+* [Contribute](#contribute)
+* [License](#license)
+
+## Background
+
+Athenz is an open source platform for X.509 certificate based service authentication
+and fine-grained role based access control in dynamic infrastructures. It provides
+support for the following three major functional areas.
+
+### Service Authentication
+
+Athenz provides secure identity in the form of short-lived X.509 certificates
+for every workload or service deployed in a private (e.g. OpenStack, K8S, Screwdriver)
+or public cloud (e.g. AWS EC2, ECS, Fargate, Lambda). Using these X.509 certificates,
+clients and services establish secure connections and verify each other's identity
+through mutual TLS authentication. The service identity certificates are valid for 30 days only,
+and the service identity agents (SIA) that are part of those frameworks automatically refresh
+them daily. The term "service" within Athenz is more generic than a traditional service:
+a service identity could represent a command, job, daemon, or workflow, as well as an
+application client or an application service.
+
+Since Athenz service authentication is based on
+[X.509 certificates](https://en.wikipedia.org/wiki/X.509), it is
+important that you have a good understanding of what X.509 certificates are
+and how they're used to establish secure connections in Internet protocols
+such as [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security).
+
+### Role-Based Authorization (RBAC)
+
+Once the client is authenticated with its X.509 certificate, the service
+can then check if the given client is authorized to carry out the requested
+action. Athenz provides fine-grained role-based access control (RBAC) support
+for a centralized management system with support for control-plane access control
+decisions and a decentralized enforcement mechanism suitable for data-plane
+access control decisions. It also provides a delegated management model that
+supports multi-tenant and self-service concepts.
+
+### AWS Temporary Credentials Support
+
+When working with AWS, Athenz provides support for accessing AWS services
+from on-prem services using AWS temporary credentials rather than
+static credentials. The Athenz ZTS server can be used to request AWS temporary
+credentials for configured AWS IAM roles.
+
+## Install
+
+* [Development Environment](docs/dev_environment.md)
+* Local/Development/Production Environment Setup
+ * [ZMS Server](docs/setup_zms.md)
+ * [ZTS Server](docs/setup_zts.md)
+ * [UI Server](docs/setup_ui.md)
+* AWS Production Environment Setup
+ * [Introduction](docs/aws_athenz_setup.md)
+
+## Usage
+
+* Architecture
+ * [Data Model](docs/data_model.md)
+ * [System View](docs/system_view.md)
+ * [Authorization Flow](docs/auth_flow.md)
+* Features
+ * [Service Identity X.509 Certificates - Copper Argos](docs/copper_argos.md)
+* Developer Guide
+ * [Centralized Access Control](docs/cent_authz_flow.md)
+ * [Java Client/Servlet Example](docs/example_java_centralized_access.md)
+ * [Go Client/Server Example](docs/example_go_centralized_access.md)
+ * [Decentralized Access Control](docs/decent_authz_flow.md)
+ * [Java Client/Servlet Example](docs/example_java_decentralized_access.md)
+* Customizing Athenz
+ * [Principal Authentication](docs/principal_authentication.md)
+ * [Private Key Store](docs/private_key_store.md)
+ * [Certificate Signer](docs/cert_signer.md)
+ * [Service Identity X.509 Certificate Support Requirements - Copper Argos](docs/copper_argos_dev.md)
+ * [OIDC Authentication Provider Support for AWS EKS](docs/oidc_aws_eks.md)
+* User Guide
+ * [ZMS Client Utility](docs/zms_client.md)
+ * [ZPU Utility](docs/setup_zpu.md)
+ * [Registering ZMS Service Identity](docs/reg_service_guide.md)
+ * [ZMS API](docs/zms_api.md)
+ * [ZTS API](docs/zts_api.md)
+
+## Contribute
+
+Please refer to the [contributing file](CONTRIBUTING.md) for information about how to get involved. We welcome issues, questions, and pull requests.
+
+You can also contact us for any user and development discussions through our groups:
+
+* [Athenz-Dev](https://groups.google.com/d/forum/athenz-dev) for development discussions
+* [Athenz-Users](https://groups.google.com/d/forum/athenz-users) for user questions
+
+The [sourcespy dashboard](https://sourcespy.com/github/yahooathenz/) provides a high level overview of the repository including [module dependencies](https://sourcespy.com/github/yahooathenz/xx-omodulesc-.html), [module hierarchy](https://sourcespy.com/github/yahooathenz/xx-omodules-.html), [external libraries](https://sourcespy.com/github/yahooathenz/xx-ojavalibs-.html), [web services](https://sourcespy.com/github/yahooathenz/xx-owebservices-.html), and other components of the system.
+
+## License
+
+Licensed under the Apache License, Version 2.0: [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
diff --git a/data/readmes/atlantis-v0371.md b/data/readmes/atlantis-v0371.md
new file mode 100644
index 0000000..573b1af
--- /dev/null
+++ b/data/readmes/atlantis-v0371.md
@@ -0,0 +1,49 @@
+# Atlantis - README (v0.37.1)
+
+**Repository**: https://github.com/runatlantis/atlantis
+**Version**: v0.37.1
+
+---
+
+# Atlantis
+
+[](https://github.com/runatlantis/atlantis/releases/latest)
+[](https://twitter.com/kelseyhightower/status/893260922222813184)
+[](https://goreportcard.com/report/github.com/runatlantis/atlantis)
+[](https://pkg.go.dev/github.com/runatlantis/atlantis)
+[](https://slack.cncf.io/)
+[](https://scorecard.dev/viewer/?uri=github.com/runatlantis/atlantis)
+[](https://www.bestpractices.dev/projects/9428)
+
+
+
+ Terraform Pull Request Automation
+
+
+- [Resources](#resources)
+- [What is Atlantis?](#what-is-atlantis)
+- [What does it do?](#what-does-it-do)
+- [Why should you use it?](#why-should-you-use-it)
+- [Stargazers over time](#stargazers-over-time)
+
+## Resources
+* How to get started: [www.runatlantis.io/guide](https://www.runatlantis.io/guide)
+* Full documentation: [www.runatlantis.io/docs](https://www.runatlantis.io/docs)
+* Download the latest release: [github.com/runatlantis/atlantis/releases/latest](https://github.com/runatlantis/atlantis/releases/latest)
+* Get help in our [Slack channel](https://slack.cncf.io/) in channel #atlantis and development in #atlantis-contributors
+* Start Contributing: [CONTRIBUTING.md](CONTRIBUTING.md)
+
+## What is Atlantis?
+Atlantis is a self-hosted Go application that listens for Terraform pull request events via webhooks.
+
+## What does it do?
+It runs `terraform plan`, `import`, and `apply` remotely and comments back on the pull request with the output.
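+
+Per-repo behaviour can be customised with a repo-level `atlantis.yaml`; a minimal sketch (directory and file paths are illustrative) might look like:
+
+```yaml
+version: 3
+projects:
+  - dir: terraform/prod
+    autoplan:
+      when_modified: ["*.tf", "../modules/**/*.tf"]
+```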
+
+## Why should you use it?
+* Make Terraform changes visible to your whole team.
+* Enable non-operations engineers to collaborate on Terraform.
+* Standardize your Terraform workflows.
+
+## Stargazers over time
+
+[](https://starchart.cc/runatlantis/atlantis)
diff --git a/data/readmes/avalanche-v1140.md b/data/readmes/avalanche-v1140.md
new file mode 100644
index 0000000..a07818f
--- /dev/null
+++ b/data/readmes/avalanche-v1140.md
@@ -0,0 +1,257 @@
+# Avalanche - README (v1.14.0)
+
+**Repository**: https://github.com/ava-labs/avalanchego
+**Version**: v1.14.0
+
+---
+
+
+
+
+Bank-Vaults is now a [CNCF Sandbox](https://www.cncf.io/sandbox-projects/) project.
+
+[](https://gitpod.io/#https://github.com/bank-vaults/bank-vaults)
+
+[](https://github.com/bank-vaults/bank-vaults/actions/workflows/ci.yaml?query=workflow%3ACI)
+[](https://api.securityscorecards.dev/projects/github.com/bank-vaults/bank-vaults)
+[](https://www.bestpractices.dev/projects/7871)
+
+*Bank Vaults is a thick, tricky, shifty right with a fast and intense tube for experienced surfers only, located on Mentawai.
+Think heavy steel doors, secret unlocking combinations and burly guards with smack-down attitude. Watch out for clean-up sets.*
+
+---
+
+Bank-Vaults is an umbrella project that provides various tools for Cloud Native secret management, including:
+
+- Bank-Vaults CLI to make configuring Hashicorp Vault easier
+- [Vault Operator](https://github.com/bank-vaults/vault-operator/) to make operating Hashicorp Vault on top of Kubernetes easier
+- [Secrets Webhook](https://github.com/bank-vaults/secrets-webhook) to inject secrets directly into Kubernetes pods
+- [Vault SDK](https://github.com/bank-vaults/vault-sdk) to make working with Vault easier in Go
+- and others
+
+## Usage
+
+Some of the usage patterns are highlighted through these blog posts:
+
+- [Authentication and authorization of Pipeline users with OAuth2 and Vault](https://outshift.cisco.com/blog/oauth2-vault/)
+- [Dynamic credentials with Vault using Kubernetes Service Accounts](https://outshift.cisco.com/blog/vault-dynamic-secrets/)
+- [Dynamic SSH with Vault and Pipeline](https://outshift.cisco.com/blog/vault-dynamic-ssh/)
+- [Secure Kubernetes Deployments with Vault and Pipeline](https://outshift.cisco.com/blog/hashicorp-guest-post/)
+- [Vault Operator](https://outshift.cisco.com/blog/vault-operator/)
+- [Vault unseal flow with KMS](https://outshift.cisco.com/blog/vault-unsealing/)
+- [Monitoring Vault on Kubernetes using Cloud Native technologies](https://web.archive.org/web/20231014000501/https://banzaicloud.com/blog/monitoring-vault-grafana/)
+- [Inject secrets directly into pods from Vault](https://outshift.cisco.com/blog/inject-secrets-into-pods-vault-revisited/)
+- [Backing up Vault with Velero](https://outshift.cisco.com/blog/vault-backup-velero/)
+- [Vault replication across multiple datacenters on Kubernetes](https://outshift.cisco.com/blog/vault-multi-datacenter/)
+- [Bank Vaults Configuration Helm Chart](https://github.com/rljohnsn/bank-vault-config/tree/main)
+
+## Documentation
+
+The official documentation is available at [https://bank-vaults.dev](https://bank-vaults.dev/).
+
+## Development
+
+Install [Go](https://go.dev/dl/) on your computer, then run `make deps` to install the rest of the dependencies.
+
+Make sure Docker is installed with Compose and Buildx.
+
+Fetch required tools:
+
+```shell
+make deps
+```
+
+Run project dependencies:
+
+```shell
+make up
+```
+
+Run the test suite:
+
+```shell
+make test
+make test-integration
+```
+
+Run linters:
+
+```shell
+make lint # pass -j option to run them in parallel
+```
+
+Some linter violations can automatically be fixed:
+
+```shell
+make fmt
+```
+
+Build artifacts locally:
+
+```shell
+make artifacts
+```
+
+Once you are done either stop or tear down dependencies:
+
+```shell
+make stop
+
+# OR
+
+make down
+```
+
+## Credits
+
+Kudos to HashiCorp for open sourcing Vault and making secret management easier and more secure.
+
+## License
+
+The project is licensed under the [Apache 2.0 License](LICENSE).
diff --git a/data/readmes/base-v0143.md b/data/readmes/base-v0143.md
new file mode 100644
index 0000000..3c2a37c
--- /dev/null
+++ b/data/readmes/base-v0143.md
@@ -0,0 +1,152 @@
+# Base - README (v0.14.3)
+
+**Repository**: https://github.com/base-org/node
+**Version**: v0.14.3
+
+---
+
+
+
+# Base Node
+
+Base is a secure, low-cost, developer-friendly Ethereum L2 built on Optimism's [OP Stack](https://docs.optimism.io/). This repository contains Docker builds to run your own node on the Base network.
+
+[](https://base.org)
+[](https://docs.base.org/)
+[](https://base.org/discord)
+[](https://x.com/Base)
+[](https://farcaster.xyz/base)
+
+## Quick Start
+
+1. Ensure you have an Ethereum L1 full node RPC available
+2. Choose your network:
+ - For mainnet: Use `.env.mainnet`
+ - For testnet: Use `.env.sepolia`
+3. Configure your L1 endpoints in the appropriate `.env` file:
+ ```bash
+ OP_NODE_L1_ETH_RPC=
+ OP_NODE_L1_BEACON=
+ OP_NODE_L1_BEACON_ARCHIVER=
+ ```
+4. Start the node:
+
+ ```bash
+ # For mainnet (default):
+ docker compose up --build
+
+ # For testnet:
+ NETWORK_ENV=.env.sepolia docker compose up --build
+
+ # To use a specific client (optional):
+ CLIENT=reth docker compose up --build
+
+ # For testnet with a specific client:
+ NETWORK_ENV=.env.sepolia CLIENT=reth docker compose up --build
+ ```
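+
+For step 3, the endpoint values are plain URLs; a hypothetical `.env.mainnet` fragment (hosts and ports are placeholders, substitute your own provider endpoints) might look like:
+
+```bash
+OP_NODE_L1_ETH_RPC=https://l1-node.example.com:8545
+OP_NODE_L1_BEACON=https://l1-beacon.example.com:3500
+OP_NODE_L1_BEACON_ARCHIVER=https://l1-beacon-archiver.example.com:3500
+```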
+
+### Supported Clients
+
+- `reth` (default)
+- `geth`
+- `nethermind`
+
+## Requirements
+
+### Minimum Requirements
+
+- Modern Multicore CPU
+- 32GB RAM (64GB Recommended)
+- NVMe SSD drive
+- Storage: (2 \* [current chain size](https://base.org/stats) + [snapshot size](https://basechaindata.vercel.app) + 20% buffer) (to accommodate future growth)
+- Docker and Docker Compose
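+
+As a worked example of the storage rule of thumb above (the figures are illustrative placeholders, not current chain statistics, and the 20% buffer is interpreted as applying to the subtotal):
+
+```python
+# Storage sizing rule of thumb: 2 * chain size + snapshot size, plus a 20% buffer.
+chain_size_tb = 4.0    # hypothetical current chain size, in TB
+snapshot_tb = 2.5      # hypothetical snapshot size, in TB
+required_tb = (2 * chain_size_tb + snapshot_tb) * 1.2
+print(f"Provision at least {required_tb:.1f} TB")  # → Provision at least 12.6 TB
+```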
+
+### Production Hardware Specifications
+
+The following are the hardware specifications we use in production:
+
+#### Reth Archive Node (recommended)
+
+- **Instance**: AWS i7i.12xlarge
+- **Storage**: RAID 0 of all local NVMe drives (`/dev/nvme*`)
+- **Filesystem**: ext4
+
+#### Geth Full Node
+
+- **Instance**: AWS i7i.12xlarge
+- **Storage**: RAID 0 of all local NVMe drives (`/dev/nvme*`)
+- **Filesystem**: ext4
+
+> [!NOTE]
+> To run the node using a supported client, you can use the following command:
+> `CLIENT=supported_client docker compose up --build`
+>
+> Supported clients:
+>
+> - `reth` (runs a vanilla node by default; Flashblocks mode is enabled by providing `RETH_FB_WEBSOCKET_URL`, see the [Reth Node README](./reth/README.md))
+> - `geth`
+> - `nethermind`
+
+## Configuration
+
+### Required Settings
+
+- L1 Configuration:
+ - `OP_NODE_L1_ETH_RPC`: Your Ethereum L1 node RPC endpoint
+ - `OP_NODE_L1_BEACON`: Your L1 beacon node endpoint
+ - `OP_NODE_L1_BEACON_ARCHIVER`: Your L1 beacon archiver endpoint
+ - `OP_NODE_L1_RPC_KIND`: The type of RPC provider being used (default: "debug_geth"). Supported values:
+ - `alchemy`: Alchemy RPC provider
+ - `quicknode`: QuickNode RPC provider
+ - `infura`: Infura RPC provider
+ - `parity`: Parity RPC provider
+ - `nethermind`: Nethermind RPC provider
+ - `debug_geth`: Debug Geth RPC provider
+ - `erigon`: Erigon RPC provider
+ - `basic`: Basic RPC provider (standard receipt fetching only)
+ - `any`: Any available RPC method
+ - `standard`: Standard RPC methods including newer optimized methods
+
+### Network Settings
+
+- Mainnet:
+ - `RETH_CHAIN=base`
+ - `OP_NODE_NETWORK=base-mainnet`
+ - Sequencer: `https://mainnet-sequencer.base.org`
+
+### Performance Settings
+
+- Cache Settings:
+ - `GETH_CACHE="20480"` (20GB)
+ - `GETH_CACHE_DATABASE="20"` (4GB)
+ - `GETH_CACHE_GC="12"`
+ - `GETH_CACHE_SNAPSHOT="24"`
+ - `GETH_CACHE_TRIE="44"`
+
+### Optional Features
+
+- EthStats Monitoring (uncomment to enable)
+- Trusted RPC Mode (uncomment to enable)
+- Snap Sync (experimental)
+
+For full configuration options, see the `.env.mainnet` file.
+
+## Snapshots
+
+Snapshots are available to help you sync your node more quickly. See [docs.base.org](https://docs.base.org/chain/run-a-base-node#snapshots) for links and more details on how to restore from a snapshot.
+
+## Supported Networks
+
+| Network | Status |
+| ------- | ------ |
+| Mainnet | ✅ |
+| Testnet | ✅ |
+
+## Troubleshooting
+
+For support, please join our [Discord](https://discord.gg/buildonbase) and post in `🛠|node-operators`. Alternatively, you can open a new GitHub issue.
+
+## Disclaimer
+
+THE NODE SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. We make no guarantees about asset protection or security. Usage is subject to applicable laws and regulations.
+
+For more information, visit [docs.base.org](https://docs.base.org/).
diff --git a/data/readmes/benthos-v4720.md b/data/readmes/benthos-v4720.md
new file mode 100644
index 0000000..ce2a93e
--- /dev/null
+++ b/data/readmes/benthos-v4720.md
@@ -0,0 +1,222 @@
+# Benthos - README (v4.72.0)
+
+**Repository**: https://github.com/benthosdev/benthos
+**Version**: v4.72.0
+
+---
+
+Redpanda Connect
+================
+
+[![Build Status][actions-badge]][actions-url]
+
+API for Apache V2 builds: [![godoc for redpanda-data/connect ASL][godoc-badge]][godoc-url-apache]
+
+API for Enterprise builds: [![godoc for redpanda-data/connect RCL][godoc-badge]][godoc-url-enterprise]
+
+Redpanda Connect is a high performance and resilient stream processor, able to connect various [sources][inputs] and [sinks][outputs] in a range of brokering patterns and perform [hydration, enrichments, transformations and filters][processors] on payloads.
+
+It comes with a [powerful mapping language][bloblang-about], is easy to deploy and monitor, and ready to drop into your pipeline either as a static binary or docker image, making it cloud native as heck.
+
+Redpanda Connect is declarative, with stream pipelines defined in as few as a single config file, allowing you to specify connectors and a list of processing stages:
+
+```yaml
+input:
+ gcp_pubsub:
+ project: foo
+ subscription: bar
+
+pipeline:
+ processors:
+ - mapping: |
+ root.message = this
+ root.meta.link_count = this.links.length()
+ root.user.age = this.user.age.number()
+
+output:
+ redis_streams:
+ url: tcp://TODO:6379
+ stream: baz
+ max_in_flight: 20
+```
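+
+The `mapping` processor above reshapes each document with Bloblang; as a rough Python sketch of the same transformation (field names taken from the config, the sample event is invented):
+
+```python
+def remap(doc: dict) -> dict:
+    # Mirror the Bloblang mapping: wrap the document under "message",
+    # count the links, and coerce the user's age to a number.
+    return {
+        "message": doc,
+        "meta": {"link_count": len(doc["links"])},
+        "user": {"age": float(doc["user"]["age"])},
+    }
+
+event = {"links": ["a", "b", "c"], "user": {"age": "42"}}
+result = remap(event)
+print(result["meta"]["link_count"], result["user"]["age"])  # → 3 42.0
+```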
+
+### !NEW! Check Out the Latest AI Goodies
+
+MCP Demo:
+
+[](https://www.youtube.com/watch?v=JhF8HMpVmus)
+
+Agentic AI Demo:
+
+[](https://www.youtube.com/watch?v=oi8qgtTqQRU)
+
+### Delivery Guarantees
+
+Delivery guarantees [can be a dodgy subject](https://youtu.be/QmpBOCvY8mY). Redpanda Connect processes and acknowledges messages using an in-process transaction model with no need for any disk persisted state, so when connecting to at-least-once sources and sinks it's able to guarantee at-least-once delivery even in the event of crashes, disk corruption, or other unexpected server faults.
+
+This behaviour is the default and free of caveats, which also makes deploying and scaling Redpanda Connect much simpler.
+
+## Supported Sources & Sinks
+
+AWS (DynamoDB, Kinesis, S3, SQS, SNS), Azure (Blob storage, Queue storage, Table storage), GCP (Pub/Sub, Cloud storage, Big query), Kafka, NATS (JetStream, Streaming), NSQ, MQTT, AMQP 0.9.1 (RabbitMQ), AMQP 1.0, Redis (streams, list, pubsub, hashes), Cassandra, Elasticsearch, HDFS, HTTP (server and client, including websockets), MongoDB, SQL (MySQL, PostgreSQL, Clickhouse, MSSQL), and [you know what just click here to see them all, they don't fit in a README][about-categories].
+
+## Documentation
+
+If you want to dive fully into Redpanda Connect then don't waste your time in this dump, check out the [documentation site][general-docs].
+
+For guidance on building your own custom plugins in Go check out [the public APIs](https://pkg.go.dev/github.com/redpanda-data/benthos/v4/public/service).
+
+## Install
+
+Install on Linux:
+
+```shell
+curl -LO https://github.com/redpanda-data/redpanda/releases/latest/download/rpk-linux-amd64.zip
+unzip rpk-linux-amd64.zip -d ~/.local/bin/
+```
+
+Or use Homebrew:
+
+```shell
+brew install redpanda-data/tap/redpanda
+```
+
+Or pull the docker image:
+
+```shell
+docker pull docker.redpanda.com/redpandadata/connect
+```
+
+For more information check out the [getting started guide][getting-started].
+
+## Run
+
+```shell
+rpk connect run ./config.yaml
+```
+
+Or, with docker:
+
+```shell
+# Using a config file
+docker run --rm -v /path/to/your/config.yaml:/connect.yaml docker.redpanda.com/redpandadata/connect run
+
+# Using a series of -s flags
+docker run --rm -p 4195:4195 docker.redpanda.com/redpandadata/connect run \
+ -s "input.type=http_server" \
+ -s "output.type=kafka" \
+ -s "output.kafka.addresses=kafka-server:9092" \
+ -s "output.kafka.topic=redpanda_topic"
+```
+
+## Monitoring
+
+### Health Checks
+
+Redpanda Connect serves two HTTP endpoints for health checks:
+- `/ping` can be used as a liveness probe as it always returns a 200.
+- `/ready` can be used as a readiness probe as it serves a 200 only when both the input and output are connected, otherwise a 503 is returned.
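+
+In Kubernetes, these endpoints map naturally onto probes; a minimal sketch, assuming the default HTTP port of 4195 used in the docker examples above:
+
+```yaml
+livenessProbe:
+  httpGet:
+    path: /ping
+    port: 4195
+readinessProbe:
+  httpGet:
+    path: /ready
+    port: 4195
+```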
+
+### Metrics
+
+Redpanda Connect [exposes lots of metrics][metrics] to StatsD, Prometheus, a JSON HTTP endpoint, [and more][metrics].
+
+### Tracing
+
+Redpanda Connect also [emits OpenTelemetry tracing events][tracers], which can be used to visualise the processors within a pipeline.
+
+## Configuration
+
+Redpanda Connect provides lots of tools for making configuration discovery, debugging and organisation easy. You can [read about them here][config-doc].
+
+## Build
+
+Build with Go (any [currently supported version](https://go.dev/dl/)):
+
+```shell
+git clone git@github.com:redpanda-data/connect
+cd connect
+task build:all
+```
+
+## Formatting and Linting
+
+Redpanda Connect uses [golangci-lint][golangci-lint] for formatting and linting.
+
+- `task fmt` to format the codebase,
+- `task lint` to lint the codebase.
+
+Configure your editor to use `gofumpt` as a formatter, see the instructions for different editors [here](https://github.com/mvdan/gofumpt#installation).
+
+## Plugins
+
+It's pretty easy to write your own custom plugins for Redpanda Connect in Go, for information check out [the API docs][godoc-url], and for inspiration there's an [example repo][plugin-repo] demonstrating a variety of plugin implementations.
+
+## Extra Plugins
+
+By default Redpanda Connect does not build with components that require linking to external libraries, such as the `zmq4` input and outputs. If you wish to build Redpanda Connect locally with these dependencies then set the build tag `x_benthos_extra`:
+
+```shell
+# With go
+go install -tags "x_benthos_extra" github.com/redpanda-data/connect/v4/cmd/redpanda-connect@latest
+
+# Using task
+TAGS=x_benthos_extra task build:all
+```
+
+Note that this tag may change or be broken out into granular tags for individual components outside of major version releases. If you attempt a build and these dependencies are not present you'll see error messages such as `ld: library not found for -lzmq`.
+
+## Docker Builds
+
+There's a multi-stage `Dockerfile` for creating a Redpanda Connect docker image which results in a minimal image from scratch. You can build it with:
+
+```shell
+task docker:all
+```
+
+Then use the image:
+
+```shell
+docker run --rm \
+ -v /path/to/your/benthos.yaml:/config.yaml \
+ -v /tmp/data:/data \
+ -p 4195:4195 \
+ docker.redpanda.com/redpandadata/connect run /config.yaml
+```
+
+## Contributing
+
+Contributions are welcome! To prevent CI errors, please always make sure a pull request has been:
+
+- Unit tested with `task test`
+- Linted with `task lint`
+- Formatted with `task fmt`
+
+Note: most integration tests need to spin up Docker containers, so they are skipped by `task test`. You can trigger
+them individually via `go test -run "^Test.*Integration.*$" ./internal/impl//...`.
+
+[inputs]: https://docs.redpanda.com/redpanda-connect/components/inputs/about
+[about-categories]: https://docs.redpanda.com/redpanda-connect/about#components
+[processors]: https://docs.redpanda.com/redpanda-connect/components/processors/about
+[outputs]: https://docs.redpanda.com/redpanda-connect/components/outputs/about
+[metrics]: https://docs.redpanda.com/redpanda-connect/components/metrics/about
+[tracers]: https://docs.redpanda.com/redpanda-connect/components/tracers/about
+[config-interp]: https://docs.redpanda.com/redpanda-connect/configuration/interpolation
+[streams-api]: https://docs.redpanda.com/redpanda-connect/guides/streams_mode/streams_api
+[streams-mode]: https://docs.redpanda.com/redpanda-connect/guides/streams_mode/about
+[general-docs]: https://docs.redpanda.com/redpanda-connect/about
+[bloblang-about]: https://docs.redpanda.com/redpanda-connect/guides/bloblang/about
+[config-doc]: https://docs.redpanda.com/redpanda-connect/configuration/about
+[releases]: https://github.com/redpanda-data/connect/releases
+[plugin-repo]: https://github.com/redpanda-data/redpanda-connect-plugin-example
+[getting-started]: https://docs.redpanda.com/redpanda-connect/guides/getting_started
+
+[godoc-badge]: https://pkg.go.dev/badge/github.com/redpanda-data/benthos/v4/public
+[godoc-url]: https://pkg.go.dev/github.com/redpanda-data/benthos/v4/public
+[godoc-url-apache]: https://pkg.go.dev/github.com/redpanda-data/connect/public/bundle/free/v4
+[godoc-url-enterprise]: https://pkg.go.dev/github.com/redpanda-data/connect/public/bundle/enterprise/v4
+[actions-badge]: https://github.com/redpanda-data/connect/actions/workflows/test.yml/badge.svg
+[actions-url]: https://github.com/redpanda-data/connect/actions/workflows/test.yml
+
+[golangci-lint]: https://golangci-lint.run/
+[jaeger]: https://www.jaegertracing.io/
diff --git a/data/readmes/besu-25110.md b/data/readmes/besu-25110.md
new file mode 100644
index 0000000..e32146e
--- /dev/null
+++ b/data/readmes/besu-25110.md
@@ -0,0 +1,79 @@
+# Besu - README (25.11.0)
+
+**Repository**: https://github.com/hyperledger/besu
+**Version**: 25.11.0
+
+---
+
+# Besu Ethereum Client
+ [](https://circleci.com/gh/hyperledger/besu/tree/main)
+ [](https://github.com/hyperledger/besu/actions/workflows/codeql.yml)
+ [](https://bestpractices.coreinfrastructure.org/projects/3174)
+ [](https://github.com/hyperledger/besu/blob/main/LICENSE)
+ [](https://discord.com/invite/hyperledger)
+ [](https://twitter.com/HyperledgerBesu)
+
+[Download](https://github.com/hyperledger/besu/releases)
+
+Besu is an Apache 2.0 licensed, MainNet compatible, Ethereum client written in Java.
+
+## Useful Links
+
+* [Besu User Documentation]
+* [Besu Issues]
+* [Besu Wiki](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/)
+* [How to Contribute to Besu](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/pages/22156850/How+to+Contribute)
+* [Besu Roadmap & Planning](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/pages/22154278/Besu+Roadmap+Planning)
+
+
+## Issues
+
+Besu issues are tracked [in the GitHub issues tab][Besu Issues].
+See our [guidelines](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/pages/22154243/Issues) for more details on searching and creating issues.
+
+If you have any questions, queries or comments, the [Besu channel on Discord] is the place to find us.
+
+
+## Besu Users
+
+To install the Besu binary, follow [these instructions](https://besu.hyperledger.org/public-networks/get-started/install/binary-distribution).
+
+## Besu Developers
+
+* [Contributing Guidelines]
+* [Coding Conventions](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/pages/22154259/Coding+Conventions)
+* [Command Line Interface (CLI) Style Guide](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/pages/22154260/Besu+CLI+Style+Guide)
+* [Besu User Documentation] for running and using Besu
+
+
+### Development
+
+These instructions will help you get started developing on the Besu codebase. Please also read the
+[wiki](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/pages/22154251/Pull+Requests) for more details on how to submit a pull request (PR).
+
+* [Checking Out and Building](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/pages/22154264/Building+from+source)
+* [Code Coverage](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/pages/22154288/Code+coverage)
+* [Logging](https://lf-hyperledger.atlassian.net/wiki/spaces/BESU/pages/22154291/Logging) or the [Documentation's Logging section](https://besu.hyperledger.org/public-networks/how-to/monitor/logging)
+
+### Profiling Besu
+
+Besu supports performance profiling using [Async Profiler](https://github.com/async-profiler/async-profiler), a low-overhead sampling profiler.
+You can find setup and usage instructions in the [Profiling Guide](docs/PROFILING.md).
+
+Profiling can help identify performance bottlenecks in block processing, transaction validation, and EVM execution.
+Please ensure the profiler is run as the same user that started the Besu process.
+
+## Release Notes
+
+[Release Notes](CHANGELOG.md)
+
+## Reference Tests and JSON Tracing
+
+Besu includes support for running Ethereum reference tests and generating detailed EVM execution traces.
+
+To learn how to run the tests and enable opcode-level JSON tracing for debugging and correctness verification, see the [Reference Test Execution and Tracing Guide](REFERENCE_TESTS.md).
+
+[Besu Issues]: https://github.com/hyperledger/besu/issues
+[Besu User Documentation]: https://besu.hyperledger.org
+[Besu channel on Discord]: https://discord.com/invite/hyperledger
+[Contributing Guidelines]: CONTRIBUTING.md
diff --git a/data/readmes/bfe-v180.md b/data/readmes/bfe-v180.md
new file mode 100644
index 0000000..29cc70d
--- /dev/null
+++ b/data/readmes/bfe-v180.md
@@ -0,0 +1,95 @@
+# BFE - README (v1.8.0)
+
+**Repository**: https://github.com/bfenetworks/bfe
+**Version**: v1.8.0
+
+---
+
+# BFE
+
+[](https://github.com/bfenetworks/bfe/blob/develop/LICENSE)
+[](https://travis-ci.com/bfenetworks/bfe)
+[](https://goreportcard.com/report/github.com/bfenetworks/bfe)
+[](https://godoc.org/github.com/bfenetworks/bfe/bfe_module)
+[](https://snapcraft.io/bfe)
+[](https://bestpractices.coreinfrastructure.org/projects/3209)
+[](https://app.fossa.com/reports/1f05f9f0-ac3d-486e-8ba9-ad95dabd4768)
+[](https://slack.cncf.io)
+
+English | [中文](README-CN.md)
+
+BFE (Beyond Front End) is a modern layer-7 load balancer from Baidu.
+
+
+
+BFE is a [Cloud Native Computing Foundation](https://cncf.io/) (CNCF) sandbox project.
+
+
+
+## Introduction
+
+The BFE open source project includes several components that can be used together as an integrated layer-7 load balancer and traffic management solution.
+
+The BFE system consists of a data plane and a control plane:
+
+- Data plane: responsible for forwarding user traffic. It includes the following component:
+  - BFE Server: the BFE forwarding engine (this repository, bfenetworks/bfe). BFE Server performs content-based routing and load balancing, and forwards traffic to backend servers.
+- Control plane: responsible for management and configuration of the BFE system. It includes the following components:
+  - [API-Server](https://github.com/bfenetworks/api-server): provides the API and handles the update, storage and generation of BFE configuration
+  - [Conf-Agent](https://github.com/bfenetworks/conf-agent): fetches the latest configuration from API-Server and triggers BFE Server to reload it
+  - [Dashboard](https://github.com/bfenetworks/dashboard): provides a graphical interface for users to manage and view the major configuration of BFE
+
+Refer to the [Overview](docs/en_us/introduction/overview.md) in the BFE documentation for more information.
+
+We also provide the [BFE Ingress Controller](https://github.com/bfenetworks/ingress-bfe), based on BFE, to implement Ingress in Kubernetes.
+
+## Advantages
+
+- Multiple protocols supported, including HTTP, HTTPS, SPDY, HTTP2, WebSocket, TLS, FastCGI, etc.
+- Content-based routing, with support for user-defined routing rules written in an advanced domain-specific language.
+- Support for multiple load balancing policies.
+- Flexible plugin framework to extend functionality. Based on the framework, developers can add new features rapidly.
+- Efficient, easy and centralized management, with RESTful API and Dashboard support.
+- Detailed built-in metrics available for service status monitoring.
+
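+As an illustrative sketch of the routing DSL (drawn from BFE's condition-expression syntax; the product and cluster names here are made up), a forward-table rule dispatches a request to a cluster when its condition matches:
+
+```json
+{
+    "Version": "1.0.0",
+    "ProductRule": {
+        "example_product": [
+            {
+                "Cond": "req_host_in(\"www.example.org\") && req_path_prefix_in(\"/api\", false)",
+                "ClusterName": "cluster_api"
+            },
+            {
+                "Cond": "default_t",
+                "ClusterName": "cluster_default"
+            }
+        ]
+    }
+}
+```
+
+Rules are evaluated in order; `default_t` acts as the catch-all when no earlier condition matches.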
+## Getting Started
+
+- Data plane: BFE Server [build and run](docs/en_us/installation/install_from_source.md)
+- Control plane: English document coming soon. [Chinese version](https://github.com/bfenetworks/api-server/blob/develop/docs/zh_cn/deploy.md)
+
+## Running the tests
+
+- See [Build and run](docs/en_us/installation/install_from_source.md)
+
+## Documentation
+
+- [English version](https://www.bfe-networks.net/en_us/ABOUT/)
+- [Chinese version](https://www.bfe-networks.net/zh_cn/ABOUT/)
+
+## Book
+
+- [In-depth Understanding of BFE](https://github.com/baidu/bfe-book) (Released in Feb 2023)
+
+  This book focuses on the BFE open source project. It introduces the relevant technical principles of network access, explains the design ideas behind the BFE open source software, and shows how to build a network front-end platform based on it. Readers with development experience can also use the book as a guide to develop BFE extension modules for their own needs, or to contribute code to the BFE open source project.
+
+
+## Contributing
+
+- Please create an issue in the [issue list](https://github.com/bfenetworks/bfe/issues).
+- Contact committers/owners for further discussion if needed.
+- Follow the Golang coding standards.
+- See the [CONTRIBUTING](CONTRIBUTING.md) file for details.
+
+## Authors
+
+- Owners: [MAINTAINERS](MAINTAINERS.md)
+- Contributors: [CONTRIBUTORS](CONTRIBUTORS.md)
+
+## Communication
+
+- BFE community on Slack: [Sign up](https://slack.cncf.io/) CNCF Slack and join bfe channel.
+- BFE developer group on WeChat: [Send a request mail](mailto:iyangsj@gmail.com) with your WeChat ID and a contribution you've made to BFE (such as a PR or issue). We will invite you right away.
+
+## License
+
+BFE is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
diff --git a/data/readmes/bitcoin-core-v283.md b/data/readmes/bitcoin-core-v283.md
new file mode 100644
index 0000000..eb54fec
--- /dev/null
+++ b/data/readmes/bitcoin-core-v283.md
@@ -0,0 +1,86 @@
+# Bitcoin Core - README (v28.3)
+
+**Repository**: https://github.com/bitcoin/bitcoin
+**Version**: v28.3
+
+---
+
+Bitcoin Core integration/staging tree
+=====================================
+
+https://bitcoincore.org
+
+For an immediately usable, binary version of the Bitcoin Core software, see
+https://bitcoincore.org/en/download/.
+
+What is Bitcoin Core?
+---------------------
+
+Bitcoin Core connects to the Bitcoin peer-to-peer network to download and fully
+validate blocks and transactions. It also includes a wallet and graphical user
+interface, which can be optionally built.
+
+Further information about Bitcoin Core is available in the [doc folder](/doc).
+
+License
+-------
+
+Bitcoin Core is released under the terms of the MIT license. See [COPYING](COPYING) for more
+information or see https://opensource.org/license/MIT.
+
+Development Process
+-------------------
+
+The `master` branch is regularly built (see `doc/build-*.md` for instructions) and tested, but it is not guaranteed to be
+completely stable. [Tags](https://github.com/bitcoin/bitcoin/tags) are created
+regularly from release branches to indicate new official, stable release versions of Bitcoin Core.
+
+The https://github.com/bitcoin-core/gui repository is used exclusively for the
+development of the GUI. Its master branch is identical in all monotree
+repositories. Release branches and tags do not exist, so please do not fork
+that repository unless it is for development reasons.
+
+The contribution workflow is described in [CONTRIBUTING.md](CONTRIBUTING.md)
+and useful hints for developers can be found in [doc/developer-notes.md](doc/developer-notes.md).
+
+Testing
+-------
+
+Testing and code review is the bottleneck for development; we get more pull
+requests than we can review and test on short notice. Please be patient and help out by testing
+other people's pull requests, and remember this is a security-critical project where any mistake might cost people
+lots of money.
+
+### Automated Testing
+
+Developers are strongly encouraged to write [unit tests](src/test/README.md) for new code, and to
+submit new unit tests for old code. Unit tests can be compiled and run
+(assuming they weren't disabled during the generation of the build system) with: `ctest`. Further details on running
+and extending unit tests can be found in [/src/test/README.md](/src/test/README.md).
+
+There are also [regression and integration tests](/test), written
+in Python.
+These tests can be run (if the [test dependencies](/test) are installed) with: `build/test/functional/test_runner.py`
+(assuming `build` is your build directory).
+
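+Concretely, a typical sequence looks like the following sketch (it assumes a CMake-based checkout with build dependencies installed; `feature_rbf.py` is just one example test):
+
+```bash
+# Generate the build system and compile (see doc/build-*.md for prerequisites)
+cmake -B build
+cmake --build build -j"$(nproc)"
+
+# Run the compiled unit tests
+ctest --test-dir build
+
+# Run one functional test, or the whole Python suite
+build/test/functional/feature_rbf.py
+build/test/functional/test_runner.py
+```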
+The CI (Continuous Integration) systems make sure that every pull request is tested on Windows, Linux, and macOS.
+The CI must pass on all commits before merge to avoid unrelated CI failures on new pull requests.
+
+### Manual Quality Assurance (QA) Testing
+
+Changes should be tested by somebody other than the developer who wrote the
+code. This is especially important for large or high-risk changes. It is useful
+to add a test plan to the pull request description if testing the changes is
+not straightforward.
+
+Translations
+------------
+
+Changes to translations as well as new translations can be submitted to
+[Bitcoin Core's Transifex page](https://explore.transifex.com/bitcoin/bitcoin/).
+
+Translations are periodically pulled from Transifex and merged into the git repository. See the
+[translation process](doc/translation_process.md) for details on how this works.
+
+**Important**: We do not accept translation changes as GitHub pull requests because the next
+pull from Transifex would automatically overwrite them again.
diff --git a/data/readmes/bookkeeper-release-4172.md b/data/readmes/bookkeeper-release-4172.md
new file mode 100644
index 0000000..c72af16
--- /dev/null
+++ b/data/readmes/bookkeeper-release-4172.md
@@ -0,0 +1,60 @@
+# Bookkeeper - README (release-4.17.2)
+
+**Repository**: https://github.com/apache/bookkeeper
+**Version**: release-4.17.2
+
+---
+
+
+
+[](https://maven-badges.herokuapp.com/maven-central/org.apache.bookkeeper/bookkeeper)
+
+# Apache BookKeeper
+
+Apache BookKeeper is a scalable, fault-tolerant and low latency storage service optimized for append-only workloads.
+
+It is suitable for use in the following scenarios:
+
+- WAL (Write-Ahead-Logging), e.g. HDFS NameNode, Pravega.
+- Message Store, e.g. Apache Pulsar.
+- Offset/Cursor Store, e.g. Apache Pulsar.
+- Object/Blob Store, e.g. storing state machine snapshots.
+
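+As an illustrative sketch (not part of this README; the ZooKeeper address and password are placeholders, and a running cluster is assumed), appending to and reading back a ledger with the classic client API looks like:
+
+```java
+import java.util.Enumeration;
+
+import org.apache.bookkeeper.client.BookKeeper;
+import org.apache.bookkeeper.client.LedgerEntry;
+import org.apache.bookkeeper.client.LedgerHandle;
+
+public class LedgerExample {
+    public static void main(String[] args) throws Exception {
+        byte[] password = "secret".getBytes();
+        // Connect via the ZooKeeper ensemble the bookies are registered with
+        BookKeeper bk = new BookKeeper("zk-host:2181");
+
+        // Create a ledger and append a few entries
+        LedgerHandle writer = bk.createLedger(BookKeeper.DigestType.MAC, password);
+        long ledgerId = writer.getId();
+        for (int i = 0; i < 3; i++) {
+            writer.addEntry(("entry-" + i).getBytes());
+        }
+        writer.close();
+
+        // Re-open the ledger and read the entries back
+        LedgerHandle reader = bk.openLedger(ledgerId, BookKeeper.DigestType.MAC, password);
+        Enumeration<LedgerEntry> entries = reader.readEntries(0, reader.getLastAddConfirmed());
+        while (entries.hasMoreElements()) {
+            System.out.println(new String(entries.nextElement().getEntry()));
+        }
+        reader.close();
+        bk.close();
+    }
+}
+```
+
+Once written and closed, a ledger is immutable, which is what makes it a good fit for the WAL and message-store scenarios above.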
+## Get Started
+
+* Check out the project [website](https://bookkeeper.apache.org/).
+* *Concepts*: Start with the [basic concepts](https://bookkeeper.apache.org/docs/getting-started/concepts) of Apache BookKeeper.
+ This will help you to fully understand the other parts of the documentation.
+* Follow the [Installation](https://bookkeeper.apache.org/docs/getting-started/installation) guide to set up BookKeeper.
+
+## Documentation
+
+Please visit the [Documentation](https://bookkeeper.apache.org/docs/overview/) from the project website for more information.
+
+## Get In Touch
+
+### Report a Bug
+
+For filing bugs, suggesting improvements, or requesting new features, help us out by [opening a GitHub issue](https://github.com/apache/bookkeeper/issues).
+
+### Need Help?
+
+[Subscribe](mailto:user-subscribe@bookkeeper.apache.org) or [mail](mailto:user@bookkeeper.apache.org) the [user@bookkeeper.apache.org](mailto:user@bookkeeper.apache.org) list - Ask questions, find answers, and also help other users.
+
+[Subscribe](mailto:dev-subscribe@bookkeeper.apache.org) or [mail](mailto:dev@bookkeeper.apache.org) the [dev@bookkeeper.apache.org](mailto:dev@bookkeeper.apache.org) list - Join development discussions, propose new ideas and connect with contributors.
+
+[Join us on Slack](https://communityinviter.com/apps/apachebookkeeper/apache-bookkeeper) - This is the most immediate way to connect with Apache BookKeeper committers and contributors.
+
+## Contributing
+
+We believe that a welcoming, open community is important, and we welcome contributions.
+
+### Contributing Code
+
+1. See our [installation guide](https://bookkeeper.apache.org/docs/next/getting-started/installation/) to get your local environment setup.
+
+2. Take a look at our open issues: [GitHub Issues](https://github.com/apache/bookkeeper/issues).
+
+3. Review our [coding style](https://bookkeeper.apache.org/community/coding-guide/) and follow our [pull requests](https://github.com/apache/bookkeeper/pulls) to learn more about our conventions.
+
+4. Make your changes according to our [contributing guide](https://bookkeeper.apache.org/community/contributing/).
diff --git a/data/readmes/bootc-v1110.md b/data/readmes/bootc-v1110.md
new file mode 100644
index 0000000..86ff60c
--- /dev/null
+++ b/data/readmes/bootc-v1110.md
@@ -0,0 +1,81 @@
+# bootc - README (v1.11.0)
+
+**Repository**: https://github.com/bootc-dev/bootc
+**Version**: v1.11.0
+
+---
+
+
+# bootc
+
+Transactional, in-place operating system updates using OCI/Docker container images.
+
+## Motivation
+
+The original Docker container model of using "layers" to model
+applications has been extremely successful. This project
+aims to apply the same technique for bootable host systems - using
+standard OCI/Docker containers as a transport and delivery format
+for base operating system updates.
+
+The container image includes a Linux kernel (in e.g. `/usr/lib/modules`),
+which is used to boot. At runtime on a target system, the base userspace is
+*not* itself running in a "container" by default. For example, assuming
+systemd is in use, systemd acts as pid1 as usual - there's no "outer" process.
+More about this in the docs; see below.
+
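+For illustration (the base image name is an example, not prescribed by this README), deriving a customized bootc image is an ordinary container build whose base already carries the kernel:
+
+```Dockerfile
+# The bootc base image ships a kernel in /usr/lib/modules
+FROM quay.io/fedora/fedora-bootc:41
+
+# Layer in packages and configuration like any other container build
+RUN dnf -y install tmux && dnf clean all
+```
+
+The resulting image can be pushed to any OCI registry and applied in place on a target system.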
+## Status
+
+The CLI and API are considered stable. We will ensure that every existing system
+can be upgraded in place seamlessly across any future changes.
+
+## Documentation
+
+See the [project documentation](https://bootc-dev.github.io/bootc/).
+
+## Versioning
+
+Although bootc is not released to crates.io as a library, version
+numbers are expected to follow [semantic
+versioning](https://semver.org/) standards. This practice began with
+the release of version 1.2.0; versions prior may not adhere strictly
+to semver standards.
+
+## Adopters (base and end-user images)
+
+The bootc CLI is just a client system; it is not tied to any particular
+operating system or Linux distribution. You very likely want to actually
+start by looking at [ADOPTERS.md](ADOPTERS.md).
+
+## Community discussion
+
+- [GitHub discussion forum](https://github.com/containers/bootc/discussions) for async discussion
+- [#bootc-dev on CNCF Slack](https://cloud-native.slack.com/archives/C08SKSQKG1L) for live chat
+- Recurring live meeting hosted on [CNCF Zoom](https://zoom-lfx.platform.linuxfoundation.org/meeting/96540875093?password=7889708d-c520-4565-90d3-ce9e253a1f65) each Friday at 15:30 UTC.
+
+This project is also tightly related to the Fedora/CentOS bootc project, and
+many developers monitor the relevant discussion forums there; in particular,
+there is a Matrix channel and a weekly video call meeting.
+
+## Developing bootc
+
+Are you interested in working on bootc? Great! See our [CONTRIBUTING.md](CONTRIBUTING.md) guide.
+There is also a list of [MAINTAINERS.md](MAINTAINERS.md).
+
+## Governance
+See [GOVERNANCE.md](GOVERNANCE.md) for project governance details.
+
+## Badges
+
+[](https://www.bestpractices.dev/projects/10113)
+[](https://insights.linuxfoundation.org/project/bootc)
+[](https://insights.linuxfoundation.org/project/bootc)
+[](https://insights.linuxfoundation.org/project/bootc)
+
+### Code of Conduct
+
+The bootc project is a [Cloud Native Computing Foundation (CNCF) Sandbox project](https://www.cncf.io/sandbox-projects/)
+and adheres to the [CNCF Community Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
+
+---
+The Linux Foundation® (TLF) has registered trademarks and uses trademarks. For a list of TLF trademarks, see [Trademark Usage](https://www.linuxfoundation.org/trademark-usage/).
diff --git a/data/readmes/bpfman-v056.md b/data/readmes/bpfman-v056.md
new file mode 100644
index 0000000..3b04783
--- /dev/null
+++ b/data/readmes/bpfman-v056.md
@@ -0,0 +1,133 @@
+# bpfman - README (v0.5.6)
+
+**Repository**: https://github.com/bpfman/bpfman
+**Version**: v0.5.6
+
+---
+
+
+
+# bpfman: An eBPF Manager
+
+[![License][apache2-badge]][apache2-url]
+[![License][bsd2-badge]][bsd2-url]
+[![License][gpl-badge]][gpl-url]
+![Build status][build-badge]
+[![Book][book-badge]][book-url]
+[![Netlify Status][netlify-badge]][netlify-url]
+[![Copr build status][copr-badge]][copr-url]
+[![OpenSSF Scorecard][openssf-badge]][openssf-url]
+[![OpenSSF Best Practices][openssf-best-practices-badge]][openssf-best-practices-url]
+[![FOSSA Status][fossa-badge]][fossa-url]
+[](https://deepwiki.com/bpfman/bpfman)
+
+[apache2-badge]: https://img.shields.io/badge/License-Apache%202.0-blue.svg
+[apache2-url]: https://opensource.org/licenses/Apache-2.0
+[bsd2-badge]: https://img.shields.io/badge/License-BSD%202--Clause-orange.svg
+[bsd2-url]: https://opensource.org/licenses/BSD-2-Clause
+[gpl-badge]: https://img.shields.io/badge/License-GPL%20v2-blue.svg
+[gpl-url]: https://opensource.org/licenses/GPL-2.0
+[build-badge]: https://img.shields.io/github/actions/workflow/status/bpfman/bpfman/build.yml?branch=main
+[book-badge]: https://img.shields.io/badge/read%20the-book-9cf.svg
+[book-url]: https://bpfman.io/
+[copr-badge]: https://copr.fedorainfracloud.org/coprs/g/ebpf-sig/bpfman-next/package/bpfman/status_image/last_build.png
+[copr-url]: https://copr.fedorainfracloud.org/coprs/g/ebpf-sig/bpfman-next/package/bpfman/
+[netlify-badge]: https://api.netlify.com/api/v1/badges/557ca612-4b7f-480d-a1cc-43b453502992/deploy-status
+[netlify-url]: https://app.netlify.com/sites/bpfman/deploys
+[openssf-badge]: https://api.scorecard.dev/projects/github.com/bpfman/bpfman/badge
+[openssf-url]: https://scorecard.dev/viewer/?uri=github.com/bpfman/bpfman
+[openssf-best-practices-badge]: https://www.bestpractices.dev/projects/10169/badge
+[openssf-best-practices-url]: https://www.bestpractices.dev/projects/10169
+[fossa-badge]: https://app.fossa.com/api/projects/git%2Bgithub.com%2Fbpfman%2Fbpfman.svg?type=shield
+[fossa-url]: https://app.fossa.com/projects/git%2Bgithub.com%2Fbpfman%2Fbpfman?ref=badge_shield
+
+_Formerly known as `bpfd`_
+
+bpfman is a Cloud Native Computing Foundation Sandbox project.
+
+
+
+
+
+
+
+## Welcome to bpfman
+
+bpfman operates as an eBPF manager, focusing on simplifying the deployment and administration of eBPF programs. Its notable features include:
+
+- **System Overview**: Provides insights into how eBPF is utilized in your system.
+- **eBPF Program Loader**: Includes a built-in program loader that supports program cooperation for XDP and TC programs, as well as deployment of eBPF programs from OCI images.
+- **eBPF Filesystem Management**: Manages the eBPF filesystem, facilitating the deployment of eBPF applications without requiring additional privileges.
+
+Our program loader and eBPF filesystem manager ensure the secure deployment of eBPF applications.
+Furthermore, bpfman includes a Kubernetes operator, extending these capabilities to Kubernetes.
+This allows users to confidently deploy eBPF through custom resource definitions across nodes in a cluster.
+
+Here are some links to help in your bpfman journey (all links point to the bpfman website):
+
+- [Welcome to bpfman](https://bpfman.io/) for an overview of bpfman.
+- [Quick Start](https://bpfman.io/main/quick-start) for a quick installation of bpfman without having to download or
+ build the code from source.
+ Good for just getting familiar with bpfman and playing around with it.
+- [Deploying Example eBPF Programs On Local Host](https://bpfman.io/main/getting-started/example-bpf-local/)
+ for some examples of running `bpfman` on local host and using the CLI to install
+ eBPF programs on the host.
+- [Deploying Example eBPF Programs On Kubernetes](https://bpfman.io/main/getting-started/example-bpf-k8s/)
+ for some examples of deploying eBPF programs through `bpfman` in a Kubernetes deployment.
+- [Setup and Building bpfman](https://bpfman.io/main/getting-started/building-bpfman/) for instructions
+ on setting up your development environment and building bpfman.
+- [Example eBPF Programs](https://bpfman.io/main/getting-started/example-bpf/) for some
+ examples of eBPF programs written in Go, interacting with `bpfman`.
+- [Deploying the bpfman-operator](https://bpfman.io/main/getting-started/develop-operator/) for details on launching
+ bpfman in a Kubernetes cluster.
+- [Meet the Community](https://bpfman.io/main/governance/meetings/) for community meeting details.
+
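+As a taste of the CLI (a sketch based on the upstream quick-start; the bytecode image and interface name are examples, and an installed bpfman is assumed), loading an XDP program from an OCI image looks like:
+
+```bash
+# Load an XDP program from an OCI image onto interface eth0
+sudo bpfman load image \
+    --image-url quay.io/bpfman-bytecode/xdp_pass:latest \
+    xdp --iface eth0 --priority 100
+
+# Show what bpfman has loaded
+sudo bpfman list
+```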
+## Issues
+
+Would you like to report a bug? Feel free to [add an issue](https://github.com/bpfman/bpfman/issues).
+
+Would you like to start a conversation on a specific topic? Please, [open a discussion](https://github.com/bpfman/bpfman/discussions).
+
+## License
+
+With the exception of eBPF code, everything is distributed under the terms of
+the [Apache License] (version 2.0).
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fbpfman%2Fbpfman?ref=badge_large)
+
+### eBPF
+
+All eBPF code is distributed under either:
+
+- The terms of the [GNU General Public License, Version 2] or the
+ [BSD 2 Clause] license, at your option.
+- The terms of the [GNU General Public License, Version 2].
+
+The exact license text varies by file. Please see the SPDX-License-Identifier
+header in each file for details.
+
+Files that originate from the authors of bpfman use
+`(GPL-2.0-only OR BSD-2-Clause)` - for example the [TC dispatcher] or our
+own example programs.
+
+Files that were originally created in [libxdp] use `GPL-2.0-only`.
+
+Unless you explicitly state otherwise, any contribution intentionally submitted
+for inclusion in this project by you, as defined in the GPL-2 license, shall be
+dual licensed as above, without any additional terms or conditions.
+
+[Apache license]: LICENSE-APACHE
+[GNU General Public License, Version 2]: LICENSE-GPL2
+[BSD 2 Clause]: LICENSE-BSD2
+[libxdp]: https://github.com/xdp-project/xdp-tools
+[TC dispatcher]: https://github.com/bpfman/bpfman/blob/main/bpf/tc_dispatcher.bpf.c
+
+## Star History
+
+
+
+
+
+
+
+
diff --git a/data/readmes/brigade-v260.md b/data/readmes/brigade-v260.md
new file mode 100644
index 0000000..1783d00
--- /dev/null
+++ b/data/readmes/brigade-v260.md
@@ -0,0 +1,89 @@
+# Brigade - README (v2.6.0)
+
+**Repository**: https://github.com/brigadecore/brigade
+**Version**: v2.6.0
+
+---
+
+> # ⚠️ Brigade is an [_archived_ CNCF project](https://www.cncf.io/archived-projects/).
+
+
+
+# Brigade: Event-Driven Scripting for Kubernetes
+
+
+[](https://kubernetes.slack.com/messages/C87MF1RFD)
+[](https://app.netlify.com/sites/brigade-docs/deploys)
+
+
+
+Brigade is a full-featured, event-driven scripting platform built on top of
+Kubernetes. It integrates with many different event sources, with more always
+being added, and it's easy to create your own if you need something specific.
+Best of all, Kubernetes is well abstracted, so even team members without
+extensive Kubernetes experience or direct access to a cluster can be
+productive.
+
+
+
+> ⚠️ You are viewing docs and code for Brigade 2. If you are looking for legacy
+> Brigade 1.x documentation and code, visit
+> [the v1 branch](https://github.com/brigadecore/brigade/tree/v1)
+
+## Getting Started
+
+Ready to get started? Check out our
+[QuickStart](https://docs.brigade.sh/intro/quickstart/) for comprehensive
+instructions.
+
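+To give a flavor of Brigade scripting (a sketch along the lines of the QuickStart; the `brigade.sh/cli` source and `exec` type are the ones emitted by the `brig` CLI), a minimal `brigade.js` looks like:
+
+```javascript
+const { events, Job } = require("@brigadecore/brigadier")
+
+// Handle events created with `brig event create` from the CLI
+events.on("brigade.sh/cli", "exec", async event => {
+  let job = new Job("hello", "debian:latest", event)
+  job.primaryContainer.command = ["echo"]
+  job.primaryContainer.arguments = ["Hello, Brigade!"]
+  await job.run()
+})
+
+events.process()
+```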
+## The Brigade Ecosystem
+
+Brigade's API makes it easy to create all manner of peripherals: tooling, event
+gateways, and more.
+
+### Gateways
+
+Our event gateways receive events from upstream systems (the "outside world")
+and convert them to Brigade events that are emitted into Brigade's event bus.
+
+* [Bitbucket Gateway](https://github.com/brigadecore/brigade-bitbucket-gateway/tree/v2)
+* [CloudEvents Gateway](https://github.com/brigadecore/brigade-cloudevents-gateway)
+* [Docker Hub Gateway](https://github.com/brigadecore/brigade-dockerhub-gateway)
+* [GitHub Gateway](https://github.com/brigadecore/brigade-github-gateway)
+* [Slack Gateway](https://github.com/brigadecore/brigade-slack-gateway)
+
+### Other Event Sources
+
+* [Cron Event Source](https://github.com/brigadecore/brigade-cron-event-source)
+* [Brigade Noisy Neighbor](https://github.com/brigadecore/brigade-noisy-neighbor)
+
+### Monitoring
+
+[Brigade Metrics](https://github.com/brigadecore/brigade-metrics) is a great way
+to obtain operational insights into a Brigade installation.
+
+### SDKs
+
+Use any of these to develop your own integrations!
+
+* [Brigade SDK for Go](https://github.com/brigadecore/brigade/tree/main/sdk) (used by Brigade itself)
+* [Brigade SDK for JavaScript](https://github.com/krancour/brigade-sdk-for-js) (and TypeScript)
+
+## Contributing
+
+The Brigade project accepts contributions via GitHub pull requests. The
+[Contributing](CONTRIBUTING.md) document outlines the process to help get your
+contribution accepted.
+
+## Support & Feedback
+
+We have a Slack channel!
+Feel free to join [Kubernetes/#brigade](https://kubernetes.slack.com/messages/C87MF1RFD)
+with any support questions or feedback; we are happy to help. To report an
+issue or to request a feature, open an issue
+[here](https://github.com/brigadecore/brigade/issues).
+
+## Code of Conduct
+
+Participation in the Brigade project is governed by the
+[CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
diff --git a/data/readmes/buildah-v1422.md b/data/readmes/buildah-v1422.md
new file mode 100644
index 0000000..b3bd264
--- /dev/null
+++ b/data/readmes/buildah-v1422.md
@@ -0,0 +1,143 @@
+# Buildah - README (v1.42.2)
+
+**Repository**: https://github.com/containers/buildah
+**Version**: v1.42.2
+
+---
+
+
+
+
+# [Buildah](https://www.youtube.com/embed/YVk5NgSiUw8) - a tool that facilitates building [Open Container Initiative (OCI)](https://www.opencontainers.org/) container images
+
+[](https://goreportcard.com/report/github.com/containers/buildah)
+[](https://www.bestpractices.dev/projects/10579)
+
+
+The Buildah package provides a command line tool that can be used to
+* create a working container, either from scratch or using an image as a starting point
+* create an image, either from a working container or via the instructions in a Dockerfile
+* build images in either the OCI image format or the traditional upstream Docker image format
+* mount a working container's root filesystem for manipulation
+* unmount a working container's root filesystem
+* use the updated contents of a container's root filesystem as a filesystem layer to create a new image
+* delete a working container or an image
+* rename a local container
+
+## Buildah Information for Developers
+
+For blogs, release announcements and more, please check out the [buildah.io](https://buildah.io) website!
+
+**[Buildah Container Images](https://github.com/containers/image_build/blob/main/buildah/README.md)**
+
+**[Buildah Demos](demos)**
+
+**[Changelog](CHANGELOG.md)**
+
+**[Contributing](CONTRIBUTING.md)**
+
+**[Development Plan](developmentplan.md)**
+
+**[Installation notes](install.md)**
+
+**[Troubleshooting Guide](troubleshooting.md)**
+
+**[Tutorials](docs/tutorials)**
+
+## Buildah and Podman relationship
+
+Buildah and Podman are two complementary open-source projects that are
+available on most Linux platforms and both projects reside at
+[GitHub.com](https://github.com) with Buildah
+[here](https://github.com/containers/buildah) and Podman
+[here](https://github.com/containers/podman). Both Buildah and Podman are
+command line tools that work on Open Container Initiative (OCI) images and
+containers. The two projects differ in their specialization.
+
+Buildah specializes in building OCI images. Buildah's commands replicate all
+of the commands that are found in a Dockerfile. This allows building images
+with and without Dockerfiles while not requiring any root privileges.
+Buildah’s ultimate goal is to provide a lower-level coreutils interface to
+build images. The flexibility of building images without Dockerfiles allows
+for the integration of other scripting languages into the build process.
+Buildah follows a simple fork-exec model and does not run as a daemon
+but it is based on a comprehensive API in golang, which can be vendored
+into other tools.
+
+Podman specializes in all of the commands and functions that help you to maintain and modify
+OCI images, such as pulling and tagging. It also allows you to create, run, and maintain containers
+created from those images. For building container images via Dockerfiles, Podman uses Buildah's
+golang API and can be installed independently from Buildah.
+
+A major difference between Podman and Buildah is their concept of a container. Podman
+allows users to create "traditional containers" that are intended to be long-lived,
+while Buildah containers are created just so that content can be added back to the
+container image. An easy way to think of it is that the `buildah run` command emulates
+the RUN instruction in a Dockerfile, while the `podman run` command emulates the
+`docker run` command in functionality. Because of this and their underlying storage
+differences, you cannot see Podman containers from within Buildah, or vice versa.
+
+In short, Buildah is an efficient way to create OCI images while Podman allows
+you to manage and maintain those images and containers in a production environment using
+familiar container CLI commands. For more details, see the
+[Container Tools Guide](https://github.com/containers/buildah/tree/main/docs/containertools).
+
+## Example
+
+From [`./examples/lighttpd.sh`](examples/lighttpd.sh):
+
+```bash
+$ cat > lighttpd.sh <<"EOF"
+#!/usr/bin/env bash
+
+set -x
+
+ctr1=$(buildah from "${1:-fedora}")
+
+## Get all updates and install our minimal httpd server
+buildah run "$ctr1" -- dnf update -y
+buildah run "$ctr1" -- dnf install -y lighttpd
+
+## Include some buildtime annotations
+buildah config --annotation "com.example.build.host=$(uname -n)" "$ctr1"
+
+## Run our server and expose the port
+buildah config --cmd "/usr/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf" "$ctr1"
+buildah config --port 80 "$ctr1"
+
+## Commit this container to an image name
+buildah commit "$ctr1" "${2:-$USER/lighttpd}"
+EOF
+
+$ chmod +x lighttpd.sh
+$ ./lighttpd.sh
+```
+
+## Commands
+| Command | Description |
+| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
+| [buildah-add(1)](/docs/buildah-add.1.md) | Add the contents of a file, URL, or a directory to the container. |
+| [buildah-build(1)](/docs/buildah-build.1.md) | Build an image using instructions from Containerfiles or Dockerfiles. |
+| [buildah-commit(1)](/docs/buildah-commit.1.md) | Create an image from a working container. |
+| [buildah-config(1)](/docs/buildah-config.1.md) | Update image configuration settings. |
+| [buildah-containers(1)](/docs/buildah-containers.1.md) | List the working containers and their base images. |
+| [buildah-copy(1)](/docs/buildah-copy.1.md) | Copies the contents of a file, URL, or directory into a container's working directory. |
+| [buildah-from(1)](/docs/buildah-from.1.md) | Creates a new working container, either from scratch or using a specified image as a starting point. |
+| [buildah-images(1)](/docs/buildah-images.1.md) | List images in local storage. |
+| [buildah-info(1)](/docs/buildah-info.1.md) | Display Buildah system information. |
+| [buildah-inspect(1)](/docs/buildah-inspect.1.md) | Inspects the configuration of a container or image. |
+| [buildah-mount(1)](/docs/buildah-mount.1.md) | Mount the working container's root filesystem. |
+| [buildah-pull(1)](/docs/buildah-pull.1.md) | Pull an image from the specified location. |
+| [buildah-push(1)](/docs/buildah-push.1.md) | Push an image from local storage to elsewhere. |
+| [buildah-rename(1)](/docs/buildah-rename.1.md) | Rename a local container. |
+| [buildah-rm(1)](/docs/buildah-rm.1.md) | Removes one or more working containers. |
+| [buildah-rmi(1)](/docs/buildah-rmi.1.md) | Removes one or more images. |
+| [buildah-run(1)](/docs/buildah-run.1.md) | Run a command inside of the container. |
+| [buildah-tag(1)](/docs/buildah-tag.1.md) | Add an additional name to a local image. |
+| [buildah-umount(1)](/docs/buildah-umount.1.md) | Unmount a working container's root file system. |
+| [buildah-unshare(1)](/docs/buildah-unshare.1.md) | Launch a command in a user namespace with modified ID mappings. |
+| [buildah-version(1)](/docs/buildah-version.1.md) | Display the Buildah version information. |
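+
+Several of these commands combine naturally. For example, in rootless setups `buildah mount` must run inside a user namespace set up by `buildah unshare`. The session below is an illustrative sketch (the image and target names are placeholders):
+
+```bash
+$ buildah unshare sh -c '
+    ctr=$(buildah from alpine)
+    mnt=$(buildah mount "$ctr")              # root filesystem of the working container
+    echo "built on $(uname -n)" > "$mnt"/build-info.txt
+    buildah umount "$ctr"
+    buildah commit "$ctr" my/alpine-annotated
+  '
+```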
+
+**Future goals include:**
+* more CI tests
+* additional CLI commands (?)
diff --git a/data/readmes/buildpacks-v0390.md b/data/readmes/buildpacks-v0390.md
new file mode 100644
index 0000000..7d53524
--- /dev/null
+++ b/data/readmes/buildpacks-v0390.md
@@ -0,0 +1,52 @@
+# Buildpacks - README (v0.39.0)
+
+**Repository**: https://github.com/buildpacks/pack
+**Version**: v0.39.0
+
+---
+
+# pack - Buildpack CLI
+
+[](https://github.com/buildpacks/pack/actions)
+[](https://goreportcard.com/report/github.com/buildpacks/pack)
+[](https://codecov.io/gh/buildpacks/pack)
+[](https://godoc.org/github.com/buildpacks/pack)
+[](https://github.com/buildpacks/pack/blob/main/LICENSE)
+[](https://bestpractices.coreinfrastructure.org/projects/4748)
+[](https://slack.cncf.io/)
+[](https://gitpod.io/#https://github.com/buildpacks/pack)
+
+`pack` makes it easy for...
+- [**App Developers**][app-dev] to use buildpacks to convert code into runnable images.
+- [**Buildpack Authors**][bp-author] to develop and package buildpacks for distribution.
+- [**Operators**][operator] to package buildpacks for distribution and maintain applications.
+
+## Usage
+
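+The terminal recording that normally appears here shows `pack build` in action; a minimal invocation looks like the following sketch (the app directory, image name, builder, and port are placeholders):
+
+```bash
+$ pack build my-app --path ./my-app --builder cnbs/sample-builder:jammy
+$ docker run --rm -p 8080:8080 my-app   # run the resulting image (port depends on your app)
+```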
+
+
+## Getting Started
+Get started by running through our tutorial: [An App’s Brief Journey from Source to Image][getting-started]
+
+## Contributing
+- [CONTRIBUTING](CONTRIBUTING.md) - Information on how to contribute, including the pull request process.
+- [DEVELOPMENT](DEVELOPMENT.md) - Further detail to help you during the development process.
+- [RELEASE](RELEASE.md) - Further details about our release process.
+
+## Documentation
+Check out the command line documentation [here][pack-docs].
+
+## Specifications
+`pack` is a CLI implementation of the [Platform Interface Specification][platform-spec] for [Cloud Native Buildpacks][buildpacks.io].
+
+To learn more about the details, check out the [specs repository][specs].
+
+[app-dev]: https://buildpacks.io/docs/for-app-developers/
+[bp-author]: https://buildpacks.io/docs/for-buildpack-authors/
+[operator]: https://buildpacks.io/docs/for-platform-operators/
+[buildpacks.io]: https://buildpacks.io/
+[install-pack]: https://buildpacks.io/docs/install-pack/
+[getting-started]: https://buildpacks.io/docs/app-journey
+[specs]: https://github.com/buildpacks/spec/
+[platform-spec]: https://github.com/buildpacks/spec/blob/main/platform.md
+[pack-docs]: https://buildpacks.io/docs/tools/pack/cli/pack/
diff --git a/data/readmes/caddy-v2110-beta1.md b/data/readmes/caddy-v2110-beta1.md
new file mode 100644
index 0000000..06186ed
--- /dev/null
+++ b/data/readmes/caddy-v2110-beta1.md
@@ -0,0 +1,209 @@
+# Caddy - README (v2.11.0-beta.1)
+
+**Repository**: https://github.com/caddyserver/caddy
+**Version**: v2.11.0-beta.1
+
+---
+
+
+
+
+## [Features](https://caddyserver.com/features)
+
+- **Easy configuration** with the [Caddyfile](https://caddyserver.com/docs/caddyfile)
+- **Powerful configuration** with its [native JSON config](https://caddyserver.com/docs/json/)
+- **Dynamic configuration** with the [JSON API](https://caddyserver.com/docs/api)
+- [**Config adapters**](https://caddyserver.com/docs/config-adapters) if you don't like JSON
+- **Automatic HTTPS** by default
+ - [ZeroSSL](https://zerossl.com) and [Let's Encrypt](https://letsencrypt.org) for public names
+ - Fully-managed local CA for internal names & IPs
+ - Can coordinate with other Caddy instances in a cluster
+ - Multi-issuer fallback
+ - Encrypted ClientHello (ECH) support
+- **Stays up when other servers go down** due to TLS/OCSP/certificate-related issues
+- **Production-ready** after serving trillions of requests and managing millions of TLS certificates
+- **Scales to hundreds of thousands of sites** as proven in production
+- **HTTP/1.1, HTTP/2, and HTTP/3** all supported by default
+- **Highly extensible** [modular architecture](https://caddyserver.com/docs/architecture) lets Caddy do anything without bloat
+- **Runs anywhere** with **no external dependencies** (not even libc)
+- Written in Go, a language with higher **memory safety guarantees** than other servers
+- Actually **fun to use**
+- So much more to [discover](https://caddyserver.com/features)
+
+## Install
+
+The simplest, cross-platform way to get started is to download Caddy from [GitHub Releases](https://github.com/caddyserver/caddy/releases) and place the executable file in your PATH.
+
+See [our online documentation](https://caddyserver.com/docs/install) for other install instructions.
+
+## Build from source
+
+Requirements:
+
+- [Go 1.25.0 or newer](https://golang.org/dl/)
+
+### For development
+
+_**Note:** These steps [will not embed proper version information](https://github.com/golang/go/issues/29228). For that, please follow the instructions in the next section._
+
+```bash
+$ git clone "https://github.com/caddyserver/caddy.git"
+$ cd caddy/cmd/caddy/
+$ go build
+```
+
+When you run Caddy, it may try to bind to low ports unless otherwise specified in your config. If your OS requires elevated privileges for this, you will need to give your new binary permission to do so. On Linux, this can be done easily with: `sudo setcap cap_net_bind_service=+ep ./caddy`
+
+If you prefer to use `go run` which only creates temporary binaries, you can still do this with the included `setcap.sh` like so:
+
+```bash
+$ go run -exec ./setcap.sh main.go
+```
+
+If you don't want to type your password for `setcap`, use `sudo visudo` to edit your sudoers file and allow your user account to run that command without a password, for example:
+
+```
+username ALL=(ALL:ALL) NOPASSWD: /usr/sbin/setcap
+```
+
+replacing `username` with your actual username. Please be careful and only do this if you know what you are doing! We are only qualified to document how to use Caddy, not Go tooling or your computer, and we are providing these instructions for convenience only; please learn how to use your own computer at your own risk and make any needful adjustments.
+
+### With version information and/or plugins
+
+Using [our builder tool, `xcaddy`](https://github.com/caddyserver/xcaddy)...
+
+```
+$ xcaddy build
+```
+
+...the following steps are automated:
+
+1. Create a new folder: `mkdir caddy`
+2. Change into it: `cd caddy`
+3. Copy [Caddy's main.go](https://github.com/caddyserver/caddy/blob/master/cmd/caddy/main.go) into the empty folder. Add imports for any custom plugins you want to add.
+4. Initialize a Go module: `go mod init caddy`
+5. (Optional) Pin Caddy version: `go get github.com/caddyserver/caddy/v2@version` replacing `version` with a git tag, commit, or branch name.
+6. (Optional) Add plugins by adding their import: `_ "import/path/here"`
+7. Compile: `go build -tags=nobadger,nomysql,nopgx`
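+
+Spelled out as commands, the manual steps look roughly like this sketch (the pinned version and build tags mirror the list above; the plugin imports from step 6 would be added by editing `main.go`):
+
+```bash
+$ mkdir caddy && cd caddy
+$ curl -fsSLO https://raw.githubusercontent.com/caddyserver/caddy/master/cmd/caddy/main.go
+$ go mod init caddy
+$ go get github.com/caddyserver/caddy/v2@v2.11.0-beta.1   # optional: pin a version
+$ go build -tags=nobadger,nomysql,nopgx
+```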
+
+
+
+
+## Quick start
+
+The [Caddy website](https://caddyserver.com/docs/) has documentation that includes tutorials, quick-start guides, reference, and more.
+
+**We recommend that all users -- regardless of experience level -- do our [Getting Started](https://caddyserver.com/docs/getting-started) guide to become familiar with using Caddy.**
+
+If you've only got a minute, [the website has several quick-start tutorials](https://caddyserver.com/docs/quick-starts) to choose from! However, after finishing a quick-start tutorial, please read more documentation to understand how the software works. 🙂
+
+
+
+
+## Overview
+
+Caddy is most often used as an HTTPS server, but it is suitable for any long-running Go program. First and foremost, it is a platform to run Go applications. Caddy "apps" are just Go programs that are implemented as Caddy modules. Two apps -- `tls` and `http` -- ship standard with Caddy.
+
+Caddy apps instantly benefit from [automated documentation](https://caddyserver.com/docs/json/), graceful on-line [config changes via API](https://caddyserver.com/docs/api), and unification with other Caddy apps.
+
+Although [JSON](https://caddyserver.com/docs/json/) is Caddy's native config language, Caddy can accept input from [config adapters](https://caddyserver.com/docs/config-adapters) which can essentially convert any config format of your choice into JSON: Caddyfile, JSON 5, YAML, TOML, NGINX config, and more.
+
+The primary way to configure Caddy is through [its API](https://caddyserver.com/docs/api), but if you prefer config files, the [command-line interface](https://caddyserver.com/docs/command-line) supports those too.
+
+Caddy exposes an unprecedented level of control compared to any web server in existence. In Caddy, you are usually setting the actual values of the initialized types in memory that power everything from your HTTP handlers and TLS handshakes to your storage medium. Caddy is also ridiculously extensible, with a powerful plugin system that makes vast improvements over other web servers.
+
+To wield the power of this design, you need to know how the config document is structured. Please see [our documentation site](https://caddyserver.com/docs/) for details about [Caddy's config structure](https://caddyserver.com/docs/json/).
+
+Nearly all of Caddy's configuration is contained in a single config document, rather than being scattered across CLI flags and env variables and a configuration file as with other web servers. This makes managing your server config more straightforward and reduces hidden variables/factors.
+
+
+## Full documentation
+
+Our website has complete documentation:
+
+**https://caddyserver.com/docs/**
+
+The docs are also open source. You can contribute to them here: https://github.com/caddyserver/website
+
+
+
+## Getting help
+
+- We advise companies using Caddy to secure a support contract through [Ardan Labs](https://www.ardanlabs.com) before help is needed.
+
+- A [sponsorship](https://github.com/sponsors/mholt) goes a long way! We can offer private help to sponsors. If Caddy is benefitting your company, please consider a sponsorship. This not only helps fund full-time work to ensure the longevity of the project, it provides your company the resources, support, and discounts you need; along with being a great look for your company to your customers and potential customers!
+
+- Individuals can exchange help for free on our community forum at https://caddy.community. Remember that people give help out of their spare time and good will. The best way to get help is to give it first!
+
+Please use our [issue tracker](https://github.com/caddyserver/caddy/issues) only for bug reports and feature requests, i.e. actionable development items (support questions will usually be referred to the forums).
+
+
+
+## About
+
+Matthew Holt began developing Caddy in 2014 while studying computer science at Brigham Young University. (The name "Caddy" was chosen because this software helps with the tedious, mundane tasks of serving the Web, and is also a single place for multiple things to be organized together.) It soon became the first web server to use HTTPS automatically and by default, and now has hundreds of contributors and has served trillions of HTTPS requests.
+
+**The name "Caddy" is trademarked.** The name of the software is "Caddy", not "Caddy Server" or "CaddyServer". Please call it "Caddy" or, if you wish to clarify, "the Caddy web server". Caddy is a registered trademark of Stack Holdings GmbH.
+
+- _Project on X: [@caddyserver](https://x.com/caddyserver)_
+- _Author on X: [@mholt6](https://x.com/mholt6)_
+
+Caddy is a project of [ZeroSSL](https://zerossl.com), a Stack Holdings company.
+
+Debian package repository hosting is graciously provided by [Cloudsmith](https://cloudsmith.com). Cloudsmith is the only fully hosted, cloud-native, universal package management solution that enables your organization to create, store and share packages in any format, to any place, with total confidence.
diff --git a/data/readmes/cadence-workflow-v137-prerelease23.md b/data/readmes/cadence-workflow-v137-prerelease23.md
new file mode 100644
index 0000000..d40f4f8
--- /dev/null
+++ b/data/readmes/cadence-workflow-v137-prerelease23.md
@@ -0,0 +1,123 @@
+# Cadence Workflow - README (v1.3.7-prerelease23)
+
+**Repository**: https://github.com/cadence-workflow/cadence
+**Version**: v1.3.7-prerelease23
+
+---
+
+# Cadence
+[](https://github.com/cadence-workflow/cadence/actions/workflows/ci-checks.yml)
+[](https://codecov.io/gh/cadence-workflow/cadence)
+[](https://communityinviter.com/apps/cloud-native/cncf)
+[](https://github.com/cadence-workflow/cadence/releases)
+[](http://www.apache.org/licenses/LICENSE-2.0)
+
+Cadence Workflow is an open-source platform, first released in 2017, for building and running scalable, fault-tolerant, and long-running workflows. This repository contains the core orchestration engine and tools, including the CLI, schema management, benchmark, and canary.
+
+
+## Getting Started
+
+The Cadence backend consists of multiple services, a database (Cassandra/MySQL/PostgreSQL), and optionally Kafka+Elasticsearch.
+As a user, you need a worker that contains your workflow implementation.
+Once you have the Cadence backend and worker(s) running, you can trigger workflows by using the SDKs or the CLI.
+
+1. Start Cadence backend components locally
+
+```
+docker compose -f docker/docker-compose.yml up
+```
+
+2. Run the Samples
+
+Try out the sample recipes for [Go](https://github.com/cadence-workflow/cadence-samples) or [Java](https://github.com/cadence-workflow/cadence-java-samples).
+
+3. Visit UI
+
+Visit http://localhost:8088 to check workflow histories and detailed traces.
+
+
+### Client Libraries
+You can implement your workflows with one of our client libraries:
+- [Official Cadence Go SDK](https://github.com/cadence-workflow/cadence-go-client)
+- [Official Cadence Java SDK](https://github.com/cadence-workflow/cadence-java-client)
+
+There are also unofficial [Python](https://github.com/firdaus/cadence-python) and [Ruby](https://github.com/coinbase/cadence-ruby) SDKs developed by the community.
+
+You can also use [iWF](https://github.com/indeedeng/iwf) as a DSL framework on top of Cadence.
+
+### CLI
+
+The Cadence CLI can be used to operate workflows, task lists, domains, and even clusters.
+
+You can use the following ways to install Cadence CLI:
+* Use brew to install CLI: `brew install cadence-workflow`
+ * Follow the [instructions](https://github.com/cadence-workflow/cadence/discussions/4457) if you need to install older versions of the CLI via Homebrew. Usually this is only needed when you are running a much older server version.
+* Use docker image for CLI: `docker run --rm ubercadence/cli:` or `docker run --rm ubercadence/cli:master ` . Be sure to update your image when you want to try new features: `docker pull ubercadence/cli:master `
+* Build the CLI binary yourself: check out the repo and run `make cadence` to build all tools. See [CONTRIBUTING](CONTRIBUTING.md) for the prerequisites of the make command.
+* Build the CLI image yourself, see [instructions](docker/README.md#diy-building-an-image-for-any-tag-or-branch)
+
+Cadence CLI is a powerful tool. The commands are organized by tabs. E.g. `workflow`->`batch`->`start`, or `admin`->`workflow`->`describe`.
+
+Please read the [documentation](https://cadenceworkflow.io/docs/cli/#documentation) and always try out `--help` on any tab to learn & explore.
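+
+For a flavor of the CLI, registering a domain and starting a workflow looks roughly like the following sketch (an illustrative session against a locally running server; the domain, task list, and workflow type names follow the Go samples and are placeholders):
+
+```bash
+$ cadence --do samples-domain domain register -rd 1
+$ cadence --do samples-domain workflow start \
+    --tl helloWorldGroup \
+    --wt main.helloWorldWorkflow \
+    --et 60 \
+    -i '"World"'
+```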
+
+### UI
+
+Try out [Cadence Web UI](https://github.com/cadence-workflow/cadence-web) to view your workflows on Cadence.
+(This is already available at localhost:8088 if you run Cadence with docker compose)
+
+
+### Other binaries in this repo
+
+#### Bench/stress test workflow tools
+See [bench documentation](./bench/README.md).
+
+#### Periodic feature health check workflow tools (aka canary)
+See [canary documentation](./canary/README.md).
+
+#### Schema tools for SQL and Cassandra
+These tools support [manual setup or upgrade of the database schema](docs/persistence.md):
+
+ * If the server runs with Cassandra, use the [Cadence Cassandra tool](tools/cassandra/README.md)
+ * If the server runs with an SQL database, use the [Cadence SQL tool](tools/sql/README.md)
+
+The easiest way to get the schema tool is via homebrew.
+
+`brew install cadence-workflow` also includes `cadence-sql-tool` and `cadence-cassandra-tool`.
+ * The schema files are located at `/usr/local/etc/cadence/schema/`.
+ * To upgrade, make sure you remove the old Elasticsearch schema first: `mv /usr/local/etc/cadence/schema/elasticsearch /usr/local/etc/cadence/schema/elasticsearch.old && brew upgrade cadence-workflow`. Otherwise the Elasticsearch schemas may not update correctly.
+ * Follow the [instructions](https://github.com/cadence-workflow/cadence/discussions/4457) if you need to install older versions of the schema tools via Homebrew.
+ However, an easier way is to use new versions of the schema tools with old versions of the schemas.
+ All you need is to check out the older version of the schemas from this repo. For example, run `git checkout v0.21.3` to get the v0.21.3 schemas in [the schema folder](/schema).
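+
+As a sketch, initializing a fresh Cassandra schema with the bundled tool looks roughly like this (the endpoint, keyspace, replication factor, and schema path are illustrative; see the tool's own README for the authoritative flags):
+
+```bash
+$ cadence-cassandra-tool --ep 127.0.0.1 create -k cadence --rf 1
+$ cadence-cassandra-tool --ep 127.0.0.1 -k cadence setup-schema -v 0.0
+$ cadence-cassandra-tool --ep 127.0.0.1 -k cadence update-schema -d ./schema/cassandra/cadence/versioned
+```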
+
+
+## Contributing
+
+We'd love your help in making Cadence great. Please review our [contribution guide](CONTRIBUTING.md).
+
+If you'd like to propose a new feature, first join the [CNCF Slack workspace](https://communityinviter.com/apps/cloud-native/cncf) in the **#cadence-users** channel to start a discussion.
+
+Please visit our [documentation](https://cadenceworkflow.io/docs/operation-guide/) site for production/cluster setup.
+
+
+### Learning Resources
+See Maxim's talk at [Data@Scale Conference](https://atscaleconference.com/videos/cadence-microservice-architecture-beyond-requestreply) for an architectural overview of Cadence.
+
+Visit [cadenceworkflow.io](https://cadenceworkflow.io) to learn more about Cadence. Join us in [Cadence Documentation](https://github.com/cadence-workflow/Cadence-Docs) project. Feel free to raise an Issue or Pull Request there.
+
+### Community
+* [Github Discussion](https://github.com/cadence-workflow/cadence/discussions)
+ * Best for Q&A, support/help, general discussion, and announcements
+* [Github Issues](https://github.com/cadence-workflow/cadence/issues)
+ * Best for reporting bugs and feature requests
+* [StackOverflow](https://stackoverflow.com/questions/tagged/cadence-workflow)
+ * Best for Q&A and general discussion
+* [Slack](https://communityinviter.com/apps/cloud-native/cncf) - Join **#cadence-users** channel on CNCF Slack
+ * Best for contributing/development discussion
+
+
+## Stars over time
+[](https://starchart.cc/uber/cadence)
+
+
+## License
+
+Apache 2.0 License, please see [LICENSE](https://github.com/cadence-workflow/cadence/blob/master/LICENSE) for details.
diff --git a/data/readmes/camel-camel-4160.md b/data/readmes/camel-camel-4160.md
new file mode 100644
index 0000000..e7f60b0
--- /dev/null
+++ b/data/readmes/camel-camel-4160.md
@@ -0,0 +1,115 @@
+# Camel - README (camel-4.16.0)
+
+**Repository**: https://github.com/apache/camel
+**Version**: camel-4.16.0
+
+---
+
+# Apache Camel
+
+[](https://maven-badges.herokuapp.com/maven-central/org.apache.camel/apache-camel)
+[](https://www.javadoc.io/doc/org.apache.camel/camel-api)
+[](http://stackoverflow.com/questions/tagged/apache-camel)
+[](https://camel.zulipchat.com/)
+[](https://twitter.com/ApacheCamel)
+
+
+[Apache Camel](https://camel.apache.org/) is an Open Source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data.
+
+### Introduction
+
+Camel empowers you to define routing and mediation rules in a variety of domain-specific languages (DSLs such as Java, XML, Groovy, and YAML). This means you get smart completion of routing rules in your IDE, whether in a Java or XML editor.
+
+Apache Camel uses URIs to enable easier integration with all kinds of
+transport or messaging models, including HTTP, ActiveMQ, JMS, JBI, SCA, MINA,
+and CXF, together with pluggable Data Format options.
+Apache Camel is a small library with minimal dependencies for easy embedding
+in any Java application. Apache Camel lets you work with the same API regardless of the
+transport type, so once you understand the API you can interact with all the
+components provided out-of-the-box.
+
+Apache Camel has powerful Bean Binding and integrates seamlessly with
+popular frameworks such as Spring, Quarkus, and CDI.
+
+Apache Camel has extensive testing support allowing you to easily
+unit test your routes.
+
+## Components
+
+Apache Camel ships several artifacts providing components, data formats, languages, and miscellaneous extensions.
+The up-to-date list is available online at the Camel website:
+
+* Components:
+* Data Formats:
+* Languages:
+* Miscellaneous:
+
+## Examples
+
+Apache Camel comes with many examples.
+The up-to-date list is available online at GitHub:
+
+* Examples:
+
+## Getting Started
+
+To help you get started, try the following links:
+
+**Getting Started**
+
+
+
+The beginner examples are another good way to get started with Apache Camel.
+
+* Examples:
+
+**Building**
+
+
+
+**Contributions**
+
+We welcome all kinds of contributions, the details of which are specified here:
+
+
+
+
+Please refer to the website for details on finding the issue tracker,
+email lists, GitHub, and chat:
+
+Website:
+
+GitHub (source):
+
+Issue tracker:
+
+Mailing-list:
+
+Chat:
+
+StackOverflow:
+
+Twitter:
+
+
+**Support**
+
+For additional help and support, we recommend referencing this page first:
+
+
+
+**Getting Help**
+
+If you get stuck somewhere, please feel free to reach out to us on either StackOverflow, Chat, or the email mailing list.
+
+Please help us make Apache Camel better — we appreciate any feedback you may have.
+
+Enjoy!
+
+-----------------
+The Camel riders!
+
+# Licensing
+
+The terms for software licensing are detailed in the `LICENSE.txt` file,
+located in the working directory.
diff --git a/data/readmes/capsule-v0120.md b/data/readmes/capsule-v0120.md
new file mode 100644
index 0000000..3f6bf87
--- /dev/null
+++ b/data/readmes/capsule-v0120.md
@@ -0,0 +1,140 @@
+# Capsule - README (v0.12.0)
+
+**Repository**: https://github.com/projectcapsule/capsule
+**Version**: v0.12.0
+
+---
+
+
+
+
+---
+
+**Join the community** on the [#capsule](https://kubernetes.slack.com/archives/C03GETTJQRL) channel in the [Kubernetes Slack](https://slack.k8s.io/).
+
+# Kubernetes multi-tenancy made easy
+
+**Capsule** implements a multi-tenant and policy-based environment in your Kubernetes cluster. It is designed as a micro-services-based ecosystem with a minimalist approach, leveraging only upstream Kubernetes.
+
+# What's the problem with the current status?
+
+Kubernetes introduces the _Namespace_ object type to create logical partitions of the cluster as isolated *slices*. However, implementing advanced multi-tenancy scenarios soon becomes complicated because of the flat structure of Kubernetes namespaces and the impossibility of sharing resources among namespaces belonging to the same tenant. To overcome this, cluster admins tend to provision a dedicated cluster for each group of users, teams, or departments. As an organization grows, the number of clusters to manage and keep aligned becomes an operational nightmare, the well-known phenomenon of _cluster sprawl_.
+
+# Entering Capsule
+
+Capsule takes a different approach. In a single cluster, the Capsule Controller aggregates multiple namespaces in a lightweight abstraction called _Tenant_, basically a grouping of Kubernetes Namespaces. Within each tenant, users are free to create their namespaces and share all the assigned resources.
+
+On the other side, the Capsule Policy Engine keeps the different tenants isolated from each other. _Network and Security Policies_, _Resource Quota_, _Limit Ranges_, _RBAC_, and other policies defined at the tenant level are automatically inherited by all the namespaces in the tenant. Then users are free to operate their tenants in autonomy, without the intervention of the cluster administrator.
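+
+To make the abstraction concrete, a minimal Tenant might look like the following sketch (assuming the `capsule.clastix.io/v1beta2` API; the `oil` and `alice` values are illustrative, and the documentation has the authoritative schema):
+
+```yaml
+apiVersion: capsule.clastix.io/v1beta2
+kind: Tenant
+metadata:
+  name: oil
+spec:
+  owners:
+    # users bound here can self-provision namespaces inside the tenant
+    - name: alice
+      kind: User
+  namespaceOptions:
+    quota: 3   # cap on the number of namespaces (illustrative)
+```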
+
+# Features
+
+## Self-Service
+
+Leave developers the freedom to self-provision their cluster resources according to the assigned boundaries.
+
+## Preventing Clusters Sprawl
+
+Share a single cluster with multiple teams, groups of users, or departments by saving operational and management efforts.
+
+## Governance
+
+Leverage Kubernetes Admission Controllers to enforce the industry security best practices and meet policy requirements.
+
+## Resources Control
+
+Take control of the resources consumed by users while preventing overuse.
+
+## Native Experience
+
+Provide multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.
+
+## GitOps ready
+
+Capsule is completely declarative and GitOps ready.
+
+## Bring your own device (BYOD)
+
+Assign to tenants a dedicated set of compute, storage, and network resources and avoid the noisy-neighbor effect.
+
+# Documentation
+
+Please check the project [documentation](https://projectcapsule.dev) for the cool things you can do with Capsule.
+
+# Contributions
+
+Capsule is Open Source with Apache 2 license and any contribution is welcome.
+
+## Community meeting
+
+Join the community to share and learn from it. You can find all the resources on how to contribute code and docs, and how to connect with people, in the [community repository](https://github.com/projectcapsule/capsule-community).
+
+Please read the [code of conduct](CODE_OF_CONDUCT.md).
+
+## Adopters
+
+See the [ADOPTERS.md](ADOPTERS.md) file for a list of companies that are using Capsule.
+
+# Project Governance
+
+You can find how the Capsule project is governed [here](https://projectcapsule.dev/project/governance/).
+
+## Maintainers
+
+Please refer to the maintainers file available [here](.github/maintainers.yaml).
+
+## CLOMonitor
+
+CLOMonitor is a tool that periodically checks open source project repositories to verify they meet certain project health best practices.
+
+[](https://clomonitor.io/projects/cncf/capsule)
+
+### Changelog
+
+Read how we log changes [here](CHANGELOG.md)
+
+### Software Bill of Materials
+
+All OCI release artifacts include a Software Bill of Materials (SBOM) in CycloneDX JSON format. More information about this is available [here](SECURITY.md#software-bill-of-materials-sbom)
+
+# FAQ
+
+- Q. How do you pronounce Capsule?
+
+ A. It should be pronounced as `/ˈkæpsjuːl/`.
+
+- Q. Is it production grade?
+
+ A. Although under frequent development and improvement, Capsule is ready to be used in production environments; people are currently running it in both public and private deployments. Check out the [release](https://github.com/projectcapsule/capsule/releases) page for a detailed list of available versions.
+
+- Q. Does it work with my Kubernetes XYZ distribution?
+
+ A. We tested Capsule with vanilla Kubernetes 1.16+ on private environments and public clouds. We expect it to work smoothly on any other Kubernetes distribution. Please let us know if you find it doesn't.
+
+- Q. Do you provide commercial support?
+
+ A. Yes, we're available to help and provide commercial support. [Clastix](https://clastix.io) is the company behind Capsule. Please, contact us for a quote.
diff --git a/data/readmes/cardano-1061.md b/data/readmes/cardano-1061.md
new file mode 100644
index 0000000..3cdfc49
--- /dev/null
+++ b/data/readmes/cardano-1061.md
@@ -0,0 +1,74 @@
+# Cardano - README (10.6.1)
+
+**Repository**: https://github.com/IntersectMBO/cardano-node
+**Version**: 10.6.1
+
+---
+
+
+
+# `cardano-node`
+
+The `cardano-node` repository is the point of integration of the
+[ledger](https://github.com/IntersectMBO/cardano-ledger),
+[consensus](https://github.com/IntersectMBO/ouroboros-consensus),
+[networking](https://github.com/IntersectMBO/ouroboros-network)
+and [logging](https://github.com/IntersectMBO/cardano-node/tree/master/trace-dispatcher)
+layers. It provides the `cardano-node` executable which is used to participate in the Cardano network.
+
+This is an approximate diagram of the dependencies among the different components:
+
+```mermaid
+stateDiagram-v2
+ cn: cardano-node
+ tr: trace-dispatcher/iohk-monitoring-framework
+ ca: cardano-api
+ co: ouroboros-consensus
+ on: ouroboros-network
+ cl: cardano-ledger
+ p: plutus
+ cn --> ca
+ cn --> tr
+ ca --> co
+ ca --> on
+ co --> on
+ co --> cl
+ ca --> cl
+ cl --> p
+```
+
+
+# Instructions
+
+The process for getting a `cardano-node` executable can be found in the
+[Cardano Developer
+Portal](https://developers.cardano.org/docs/operate-a-stake-pool/node-operations/installing-cardano-node).
+
+The configuration and files required to run a `cardano-node` in one of the
+supported networks are described also in the [Cardano Developer
+Portal](https://developers.cardano.org/docs/operate-a-stake-pool/node-operations/running-cardano).
+
+# Using `cardano-node` and dependencies as a library
+
+The API documentation is published on [the
+webpage](https://cardano-node.cardano.intersectmbo.org/). If you want to use the
+`cardano-node` Haskell packages from another Haskell project, you should set up
+[CHaP](https://chap.intersectmbo.org) to get the packages defined in this
+repository.
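+If you are using `cabal`, consuming CHaP amounts to adding a repository stanza to your `cabal.project`, along these lines (the `root-keys` and `key-threshold` values are deliberately omitted here; copy the authoritative stanza from the CHaP homepage):
+
+```
+-- cabal.project (fragment)
+repository cardano-haskell-packages
+  url: https://chap.intersectmbo.org/
+  secure: True
+  -- root-keys and key-threshold omitted;
+  -- copy them verbatim from https://chap.intersectmbo.org
+```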
+
+# Troubleshooting
+
+For some troubleshooting help with building or running `cardano-node`,
+the wiki has a [troubleshooting
+page](https://github.com/input-output-hk/cardano-node-wiki/wiki/Troubleshooting)
+that documents some common gotchas.
diff --git a/data/readmes/carina-v0140.md b/data/readmes/carina-v0140.md
new file mode 100644
index 0000000..26f2a37
--- /dev/null
+++ b/data/readmes/carina-v0140.md
@@ -0,0 +1,184 @@
+# Carina - README (v0.14.0)
+
+**Repository**: https://github.com/carina-io/carina
+**Version**: v0.14.0
+
+---
+
+
+
+# Carina
+
+[](https://github.com/carina-io/carina/blob/main/LICENSE)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fcarina-io%2Fcarina?ref=badge_shield)
+
+[](https://bestpractices.coreinfrastructure.org/projects/6908)
+
+> English | [中文](README_zh.md)
+
+## Background
+
+Storage systems are complex! More and more Kubernetes-native storage systems are appearing, and stateful applications such as modern databases and middleware are shifting into the cloud-native world. However, modern databases and their storage providers each try to solve some common problems, such as data replication and consistency, in their own way. This introduces a giant waste of both capacity and performance and requires more maintenance effort. Beyond that, stateful applications strive to be ever more performant, eliminating every possible source of latency, which is unavoidable with modern distributed storage systems. Enter Carina.
+
+Carina is a standard Kubernetes CSI plugin. Users can use standard Kubernetes storage resources like StorageClass/PVC/PV to request storage media. The key considerations behind Carina include:
+
+* Different workloads need different storage systems. Carina focuses on cloud-native database scenarios only.
+* Completely Kubernetes-native and easy to install.
+* Uses local disks and groups them as needed; users can provision different types of disks using different storage classes.
+* Scans physical disks and builds RAID as required. If a disk fails, just plug in a new one and it's done.
+* Aware of node capacity and performance, so pods are scheduled more intelligently.
+* Extremely low overhead. Carina sits beside the core data path and provides raw-disk performance to applications.
+* Auto-tiering. Admins can configure Carina to combine large-capacity-but-low-performance disks and small-capacity-but-high-performance disks into one storage class, so users benefit from both capacity and performance.
+* If a node fails, Carina automatically detaches the local volumes from its pods so the pods can be rescheduled.
+* Middleware has run on bare metal for decades. Many valuable optimizations and enhancements from that era are definitely not outdated in the cloud-native world. Let Carina be the DBA expert of the storage domain for cloud-native databases!
+
+
+**In short, Carina strives to provide an extremely-low-latency, NoOps storage system for cloud-native databases and to be the DBA expert of the storage domain in the cloud-native era!**
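+In practice, requesting Carina storage looks like any other CSI driver: define a StorageClass backed by the Carina provisioner and claim from it. The following is an illustrative sketch only; the provisioner name and the disk-group parameter key can vary between Carina versions, so take the authoritative names from the [deployment guide](docs/manual/install.md):
+
+```yaml
+# Illustrative sketch: parameter keys may differ between Carina versions
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: csi-carina-ssd
+provisioner: carina.storage.io              # the Carina CSI driver
+parameters:
+  carina.storage.io/disk-group-name: ssd    # request volumes from the SSD pool
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: WaitForFirstConsumer     # let carina-scheduler pick the node
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: db-data
+spec:
+  accessModes: ["ReadWriteOnce"]
+  storageClassName: csi-carina-ssd
+  resources:
+    requests:
+      storage: 20Gi
+```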
+
+# Running Environments
+
+* Kubernetes: (CSI_VERSION=1.5.0)
+* Node OS: Linux
+* Filesystems: ext4, xfs
+
+* If kubelet runs in containerized mode, you need to mount the host `/dev` directory into the container (`/dev:/dev`)
+* Each node in the cluster has 1..N bare disks; both SSDs and HDDs are supported. (Run `lsblk --output NAME,ROTA` to view the disk type: ROTA=1 indicates HDD, ROTA=0 indicates SSD.)
+* The capacity of a raw disk must be greater than 10 GB
+* If the server does not support the bcache kernel module, see the [FAQ](docs/manual/FAQ.md) and modify the YAML deployment accordingly
+
+### Kubernetes compatibility
+| kubernetes | v0.9 | v0.9.1 | v0.10 | v0.11.0 | v1.0 |
+| ---------- | ---------- | ---------- | ---------- | ------------ | ----------- |
+| >=1.18 | support | support | support | support | not released |
+| >=1.25 | nonsupport | nonsupport | nonsupport | experimental | not released |
+
+# Carina architecture
+
+Carina is built for cloud-native stateful applications, offering raw-disk performance and ops-free maintenance. Carina scans local disks and classifies them by disk type; for example, one node may have 10 HDDs and 2 SSDs. Carina then groups them into different disk pools, and users can request a specific disk type by using the corresponding storage class. For data HA, Carina currently leverages StorCLI to build RAID groups.
+
+
+
+# Carina components
+
+Carina has three components: carina-scheduler, carina-controller and carina-node.
+
+* carina-scheduler is a Kubernetes scheduler plugin that sorts nodes based on the requested PV size, each node's free disk space, and node I/O performance statistics. By default, carina-scheduler supports binpack and spreadout policies.
+* carina-controller is the control plane of Carina. It watches PVC resources and maintains the internal logical volume objects.
+* carina-node is an agent that runs on each node and manages local disks using LVM.
+
+# Features
+
+* [disk management](docs/manual/disk-manager.md)
+* [device registration](docs/manual/device-register.md)
+* [volume mode: filesystem](docs/manual/pvc-xfs.md)
+* [volume mode: block](docs/manual/pvc-device.md)
+* [PVC resizing](docs/manual/pvc-expand.md)
+* [scheduling based on capacity](docs/manual/capacity-scheduler.md)
+* [volume topology](docs/manual/topology.md)
+* [PVC autotiering](docs/manual/pvc-bcache.md)
+* [RAID management](docs/manual/raid-manager.md)
+* [failover](docs/manual/failover.md)
+* [io throttling](docs/manual/disk-speed-limit.md)
+* [metrics](docs/manual/metrics.md)
+* [API](docs/manual/api.md)
+
+# Quickstart
+
+## Install by shell
+
+- In this deployment mode the image tag is `latest`. If you want to deploy a specific version of Carina, change the image address
+
+```shell
+$ cd deploy/kubernetes
+# install (installs into the kube-system namespace by default)
+$ ./deploy.sh
+
+# uninstall
+$ ./deploy.sh uninstall
+```
+
+## Install by helm3
+
+- Support installation of specified versions of Carina
+
+```bash
+helm repo add carina-csi-driver https://carina-io.github.io
+
+helm search repo -l carina-csi-driver
+
+helm install carina-csi-driver carina-csi-driver/carina-csi-driver --namespace kube-system --version v0.11.0
+```
+
+* [deployment guide](docs/manual/install.md)
+* [user guide](docs/user-guide.md)
+
+## Upgrading
+
+- Uninstall the old version with `./deploy.sh uninstall`, then install the new version with `./deploy.sh` (uninstalling Carina will not affect volumes in use)
+
+# Contribution Guide
+
+* [development guide](docs/manual/development.md)
+* [build local runtime](docs/manual/runtime-container.md)
+
+# Blogs
+
+* [blogs](http://www.opencarina.io/blog)
+
+# Roadmap
+
+* [roadmap](docs/roadmap/roadmap.md)
+
+# Typical storage providers
+
+| | NFS/NAS | SAN | Ceph | Carina |
+| ---------- | --------| ----| -----| -------|
+| typical usage | general storage | high-performance block device | extreme scalability | high-performance block device for cloud-native applications |
+| filesystem | yes | yes | yes | yes |
+| filesystem type | NFS | driver specific | ext4/xfs | ext4/xfs |
+| block | no | yes | yes | yes |
+| bandwidth | standard | standard | high | high |
+| IOPS | standard | high | standard | high |
+| latency | standard | low | standard | low |
+| CSI support| yes | yes | yes | yes |
+| snapshot | no | driver specific| yes | no|
+| clone | no | driver specific | yes | not yet, coming soon |
+| quota| no | yes | yes | yes |
+| resizing | yes | driver specific | yes | yes |
+| data HA | RAID or NAS appliance | yes | yes | RAID |
+| ease of maintenance | driver specific | multiple drivers for multiple SANs | high maintenance effort | ops-free |
+| budget | high for NAS | high | high | low, using the extra disks in an existing Kubernetes cluster |
+| others | data migrates with pods | data migrates with pods | data migrates with pods | binpack or spreadout scheduling policy; data doesn't migrate with pods; in-place rebuild if a pod fails |
+
+# FAQ
+
+- [FAQ](docs/manual/FAQ.md)
+
+# Similar projects
+
+* [openebs](https://openebs.io/)
+* [topolvm](https://github.com/topolvm/topolvm)
+* [csi-driver-host-path](https://github.com/kubernetes-csi/csi-driver-host-path)
+* [local-path-provisioner](https://github.com/rancher/local-path-provisioner)
+
+# Known Users
+You are welcome to add your company name to [ADOPTERS.md](ADOPTERS.md)
+
+
+
+# Community
+
+- For wechat users
+
+
+
+# License
+
+Carina is under the Apache 2.0 license. See the [LICENSE](https://github.com/carina-io/carina/blob/main/LICENSE) file for details.
+
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fcarina-io%2Fcarina?ref=badge_large)
+
+# Code of Conduct
+
+Please refer to our [Carina Community Code of Conduct](https://github.com/carina-io/community/blob/main/code-of-conduct.md)
\ No newline at end of file
diff --git a/data/readmes/cartography-01220.md b/data/readmes/cartography-01220.md
new file mode 100644
index 0000000..c4a4dbb
--- /dev/null
+++ b/data/readmes/cartography-01220.md
@@ -0,0 +1,161 @@
+# Cartography - README (0.122.0)
+
+**Repository**: https://github.com/cartography-cncf/cartography
+**Version**: 0.122.0
+
+---
+
+
+
+[](https://scorecard.dev/viewer/?uri=github.com/cartography-cncf/cartography)
+[](https://www.bestpractices.dev/projects/9637)
+
+
+
+
+Cartography is a Python tool that consolidates infrastructure assets and the relationships between them in an intuitive graph view powered by a [Neo4j](https://www.neo4j.com) database.
+
+
+
+## Why Cartography?
+Cartography aims to enable a broad set of exploration and automation scenarios. It is particularly good at exposing otherwise hidden dependency relationships between your service's assets so that you may validate assumptions about security risks.
+
+Service owners can generate asset reports, Red Teamers can discover attack paths, and Blue Teamers can identify areas for security improvement. All can benefit from using the graph for manual exploration through a web frontend interface, or in an automated fashion by calling the APIs.
+
+Cartography is not the only [security](https://github.com/dowjones/hammer) [graph](https://github.com/BloodHoundAD/BloodHound) [tool](https://github.com/Netflix/security_monkey) [out](https://github.com/vysecurity/ANGRYPUPPY) [there](https://github.com/duo-labs/cloudmapper), but it differentiates itself by being fully-featured yet generic and [extensible](https://cartography-cncf.github.io/cartography/dev/writing-analysis-jobs.html) enough to help make anyone better understand their risk exposure, regardless of what platforms they use. Rather than being focused on one core scenario or attack vector like the other linked tools, Cartography focuses on flexibility and exploration.
+
+You can learn more about the story behind Cartography in our [presentation at BSidesSF 2019](https://www.youtube.com/watch?v=ZukUmZSKSek).
+
+
+## Supported platforms
+- [Airbyte](https://cartography-cncf.github.io/cartography/modules/airbyte/index.html) - Organization, Workspace, User, Source, Destination, Connection, Tag, Stream
+- [Amazon Web Services](https://cartography-cncf.github.io/cartography/modules/aws/index.html) - ACM, API Gateway, CloudWatch, CodeBuild, Config, Cognito, EC2, ECS, ECR (including multi-arch images, image layers, and attestations), EFS, Elasticsearch, Elastic Kubernetes Service (EKS), DynamoDB, Glue, GuardDuty, IAM, Inspector, KMS, Lambda, RDS, Redshift, Route53, S3, Secrets Manager(Secret Versions), Security Hub, SNS, SQS, SSM, STS, Tags
+- [Anthropic](https://cartography-cncf.github.io/cartography/modules/anthropic/index.html) - Organization, ApiKey, User, Workspace
+- [BigFix](https://cartography-cncf.github.io/cartography/modules/bigfix/index.html) - Computers
+- [Cloudflare](https://cartography-cncf.github.io/cartography/modules/cloudflare/index.html) - Account, Role, Member, Zone, DNSRecord
+- [Crowdstrike Falcon](https://cartography-cncf.github.io/cartography/modules/crowdstrike/index.html) - Hosts, Spotlight vulnerabilities, CVEs
+- [DigitalOcean](https://cartography-cncf.github.io/cartography/modules/digitalocean/index.html)
+- [Duo](https://cartography-cncf.github.io/cartography/modules/duo/index.html) - Users, Groups, Endpoints
+- [GitHub](https://cartography-cncf.github.io/cartography/modules/github/index.html) - repos, branches, users, teams, dependency graph manifests, dependencies
+- [Google Cloud Platform](https://cartography-cncf.github.io/cartography/modules/gcp/index.html) - Bigtable, Cloud Resource Manager, Compute, DNS, Storage, Google Kubernetes Engine
+- [Google GSuite](https://cartography-cncf.github.io/cartography/modules/gsuite/index.html) - users, groups (deprecated - use Google Workspace instead)
+- [Google Workspace](https://cartography-cncf.github.io/cartography/modules/googleworkspace/index.html) - users, groups, devices, OAuth apps
+- [Kandji](https://cartography-cncf.github.io/cartography/modules/kandji/index.html) - Devices
+- [Keycloak](https://cartography-cncf.github.io/cartography/modules/keycloak/index.html) - Realms, Users, Groups, Roles, Scopes, Clients, IdentityProviders, Authentication Flows, Authentication Executions, Organizations, Organization Domains
+- [Kubernetes](https://cartography-cncf.github.io/cartography/modules/kubernetes/index.html) - Cluster, Namespace, Service, Pod, Container, ServiceAccount, Role, RoleBinding, ClusterRole, ClusterRoleBinding, OIDCProvider
+- [Lastpass](https://cartography-cncf.github.io/cartography/modules/lastpass/index.html) - users
+- [Microsoft Azure](https://cartography-cncf.github.io/cartography/modules/azure/index.html) - App Service, Container Instance, CosmosDB, Data Factory, Event Grid, Functions, Azure Kubernetes Service (AKS), Load Balancer, Logic Apps, Resource Group, SQL, Storage, Virtual Machine, Virtual Networks
+- [Microsoft Entra ID](https://cartography-cncf.github.io/cartography/modules/entra/index.html) - Users, Groups, Applications, OUs, App Roles, federation to AWS Identity Center
+- [NIST CVE](https://cartography-cncf.github.io/cartography/modules/cve/index.html) - Common Vulnerabilities and Exposures (CVE) data from NIST database
+- [Okta](https://cartography-cncf.github.io/cartography/modules/okta/index.html) - users, groups, organizations, roles, applications, factors, trusted origins, reply URIs, federation to AWS roles, federation to AWS Identity Center
+- [OpenAI](https://cartography-cncf.github.io/cartography/modules/openai/index.html) - Organization, AdminApiKey, User, Project, ServiceAccount, ApiKey
+- [Oracle Cloud Infrastructure](https://cartography-cncf.github.io/cartography/modules/oci/index.html) - IAM
+- [PagerDuty](https://cartography-cncf.github.io/cartography/modules/pagerduty/index.html) - Users, teams, services, schedules, escalation policies, integrations, vendors
+- [Scaleway](https://cartography-cncf.github.io/cartography/modules/scaleway/index.html) - Projects, IAM, Local Storage, Instances
+- [SentinelOne](https://cartography-cncf.github.io/cartography/modules/sentinelone/index.html) - Accounts, Agents
+- [Slack](https://cartography-cncf.github.io/cartography/modules/slack/index.html) - Teams, Users, UserGroups, Channels
+- [SnipeIT](https://cartography-cncf.github.io/cartography/modules/snipeit/index.html) - Users, Assets
+- [Tailscale](https://cartography-cncf.github.io/cartography/modules/tailscale/index.html) - Tailnet, Users, Devices, Groups, Tags, PostureIntegrations
+- [Trivy Scanner](https://cartography-cncf.github.io/cartography/modules/trivy/index.html) - AWS ECR Images
+
+
+## Philosophy
+Here are some points that can help you decide if adopting Cartography is a good fit for your problem.
+
+### What Cartography is
+- A simple Python script that pulls data from multiple providers and writes it to a Neo4j graph database in batches.
+- A powerful analysis tool that captures the current snapshot of the environment, building a uniquely useful inventory where you can ask complex questions such as:
+ - Which identities have access to which datastores?
+ - What are the cross-tenant permission relationships in the environment?
+ - What are the network paths in and out of the environment?
+ - What are the backup policies for my datastores?
+- Battle-tested in production by [many companies](#who-uses-cartography).
+- Straightforward to extend with your own custom plugins.
+- Provides a useful data-plane that you can build automation and CSPM (Cloud Security Posture Management) applications on top of.
+
+### What Cartography is not
+- A near-real time capability.
+ - Cartography is not designed for very fast updates. It writes to the database in batches (not streamed).
+ - Cartography is also limited by how most upstream sources only provide APIs to retrieve assets in a batched manner.
+- By itself, Cartography does not capture data changes over time.
+ - Although we do include a [drift detection](https://cartography-cncf.github.io/cartography/usage/drift-detect.html) feature.
+ - It's also possible to implement other processes in your Cartography installation to make this happen.
+
+
+## Install and configure
+
+### Trying out Cartography on a test machine
+Start [here](https://cartography-cncf.github.io/cartography/install.html) to set up a test graph and get data into it.
+
+### Setting up Cartography in production
+When you are ready to try it in production, read [here](https://cartography-cncf.github.io/cartography/ops.html) for recommendations on getting cartography spun up in your environment.
+
+## Usage
+
+### Running rules
+
+You can check your environment against common security frameworks using the `cartography-rules` command.
+
+```bash
+cartography-rules run all
+```
+
+See [the rules docs](https://cartography-cncf.github.io/cartography/usage/rules.html) for more detail.
+
+
+### Querying the database directly
+
+
+
+Now that data is in the graph, you can quickly start with our [querying tutorial](https://cartography-cncf.github.io/cartography/usage/tutorial.html). Our [data schema](https://cartography-cncf.github.io/cartography/usage/schema.html) is a helpful reference when you get stuck.
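+For instance, a query in the spirit of that tutorial might ask which EC2 instances are exposed to the internet and which account owns them (a sketch: the node labels and properties used here follow the published AWS schema, but verify them against the schema reference before relying on them):
+
+```cypher
+// Internet-exposed EC2 instances, grouped by owning AWS account
+MATCH (a:AWSAccount)-[:RESOURCE]->(i:EC2Instance)
+WHERE i.exposed_internet = true
+RETURN a.name, i.instanceid
+ORDER BY a.name;
+```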
+
+### Building applications around Cartography
+Directly querying Neo4j is already very useful as a sort of "swiss army knife" for security data problems, but you can also build applications and data pipelines around Cartography. View this doc on [applications](https://cartography-cncf.github.io/cartography/usage/applications.html).
+
+
+## Docs
+
+See [here](https://cartography-cncf.github.io/cartography/)
+
+## Community
+
+- Hang out with us on Slack: Join the CNCF Slack workspace [here](https://communityinviter.com/apps/cloud-native/cncf), and then join the `#cartography` channel.
+- Talk to us and see what we're working on at our [monthly community meeting](https://zoom-lfx.platform.linuxfoundation.org/meetings/cartography?view=week).
+ - Meeting minutes are [here](https://docs.google.com/document/d/1VyRKmB0dpX185I15BmNJZpfAJ_Ooobwz0U1WIhjDxvw).
+ - Recorded videos from before 2025 are posted [here](https://www.youtube.com/playlist?list=PLMga2YJvAGzidUWJB_fnG7EHI4wsDDsE1).
+
+## License
+
+This project is licensed under the [Apache 2.0 License](LICENSE).
+
+## Contributing
+Thank you for considering contributing to Cartography!
+
+### Code of conduct
+All contributors and participants of this project must follow the [CNCF code of conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
+
+### Bug reports and feature requests and discussions
+Submit a GitHub issue to report a bug or request a new feature. If we decide that the issue needs more discussion - usually because the scope is too large or a careful decision is needed - we will convert the issue to a [GitHub Discussion](https://github.com/cartography-cncf/cartography/discussions).
+
+### Developing Cartography
+
+Get started with our [developer documentation](https://cartography-cncf.github.io/cartography/dev/developer-guide.html). Please feel free to submit your own PRs to update documentation if you've found a better way to explain something.
+
+## Who uses Cartography?
+
+1. [Lyft](https://www.lyft.com)
+1. [Thought Machine](https://thoughtmachine.net/)
+1. [MessageBird](https://messagebird.com)
+1. [Cloudanix](https://www.cloudanix.com/)
+1. [Corelight](https://www.corelight.com/)
+1. [SubImage](https://subimage.io)
+1. {Your company here} :-)
+
+If your organization uses Cartography, please file a PR and update this list. Say hi on Slack too!
+
+---
+
+Cartography is a [Cloud Native Computing Foundation](https://www.cncf.io/) sandbox project.
+
+
+
diff --git a/data/readmes/carvel-v0521.md b/data/readmes/carvel-v0521.md
new file mode 100644
index 0000000..39d5771
--- /dev/null
+++ b/data/readmes/carvel-v0521.md
@@ -0,0 +1,49 @@
+# Carvel - README (v0.52.1)
+
+**Repository**: https://github.com/carvel-dev/ytt
+**Version**: v0.52.1
+
+---
+
+
+
+[](https://bestpractices.coreinfrastructure.org/projects/7746)
+
+# ytt
+
+* Play: Jump right in by trying out the [online playground](https://carvel.dev/ytt/#playground)
+* Discover `ytt` in [video](https://youtu.be/WJw1MDFMVuk)
+* For more information about annotations, data values, overlays and other features see [Docs](https://carvel.dev/ytt/docs/latest/) page
+* Slack: [#carvel in Kubernetes slack](https://slack.kubernetes.io/)
+* Install: Grab prebuilt binaries from the [Releases page](https://github.com/carvel-dev/ytt/releases) or [Homebrew Carvel tap](https://github.com/carvel-dev/homebrew)
+* Backlog: [See what we're up to](https://app.zenhub.com/workspaces/carvel-backlog-6013063a24147d0011410709/board?repos=173207060). (Note: we use ZenHub, which requires GitHub authorization.) Alternatively, see the [GitHub project board](https://github.com/orgs/carvel-dev/projects/1/views/1?filterQuery=repo%3A%22vmware-tanzu%2Fcarvel-ytt%22).
+
+## Overview
+
+`ytt` (pronounced spelled out) is a templating tool that understands YAML structure. It helps you easily configure complex software via reusable templates and user-provided values. `ytt` includes the following features:
+- Structural templating: understands YAML structure so users can focus on their configuration instead of issues associated with text templating, such as YAML value quoting or manual template indentation
+- Built-in programming language: includes the "fully featured" Python-like programming language Starlark which helps ease the burden of configuring complex software through a richer set of functionality.
+- Reusable configuration: You can reuse the same configuration in different environments by applying environment-specific values.
+- Custom validations: coupled with the fast and deterministic execution, allows you to take advantage of faster feedback loops when creating and testing templates
+- Overlays: this advanced configuration helps users manage the customization required for complex software. For more, see [this example](https://carvel.dev/ytt/#example:example-overlay-files) in the online playground.
+- Sandboxing: provides a secure, deterministic environment for execution of templates
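+As a minimal sketch of structural templating with data values (the file names and value names below are made up for this example):
+
+```yaml
+#! config.yml: a ytt template; lines starting with #@ are Starlark
+#@ load("@ytt:data", "data")
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: #@ data.values.name
+spec:
+  ports:
+  - port: #@ data.values.port
+```
+
+```yaml
+#! values.yml: user-provided data values
+#@data/values
+---
+name: my-svc
+port: 8080
+```
+
+Rendering with `ytt -f config.yml -f values.yml` produces a plain Kubernetes Service manifest with the values substituted in.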
+
+## Try it
+
+To get started with `ytt` and to see examples, you can use the online playground or download the binaries and run the playground locally.
+
+- Try out the [online playground](https://carvel.dev/ytt/#playground)
+- Download the latest binaries from the [releases page](https://github.com/carvel-dev/ytt/releases) and run the playground locally: `ytt website`
+- See the examples used in the playground on the [examples](https://github.com/carvel-dev/ytt/tree/develop/examples/playground) page
+- Editor Extensions: [vscode syntax highlighting](https://marketplace.visualstudio.com/items?itemName=ewrenn.vscode-ytt)
+
+### Join the Community and Make Carvel Better
+Carvel is better because of our contributors and maintainers. It is because of you that we can bring great software to the community. Please join us during our online community meetings. Details can be found on our [Carvel website](https://carvel.dev/community/).
+
+You can chat with us on Kubernetes Slack in the #carvel channel and follow us on Twitter at @carvel_dev.
+
+Check out which organizations are using and contributing to Carvel: [Adopter's list](https://github.com/carvel-dev/carvel/blob/master/ADOPTERS.md)
+
+### Integrating with ytt
+
+If you want to integrate `ytt` within your own tooling, review our [APIs](examples/integrating-with-ytt/apis.md).
diff --git a/data/readmes/cassandra-cassandra-506.md b/data/readmes/cassandra-cassandra-506.md
new file mode 100644
index 0000000..f78f911
--- /dev/null
+++ b/data/readmes/cassandra-cassandra-506.md
@@ -0,0 +1,96 @@
+# Cassandra - README (cassandra-5.0.6)
+
+**Repository**: https://github.com/apache/cassandra
+**Version**: cassandra-5.0.6
+**Branch**: cassandra-5.0.6
+
+---
+
+Apache Cassandra
+-----------------
+
+Apache Cassandra is a highly-scalable partitioned row store. Rows are organized into tables with a required primary key.
+
+https://cwiki.apache.org/confluence/display/CASSANDRA2/Partitioners[Partitioning] means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster.
+
+https://cwiki.apache.org/confluence/display/CASSANDRA2/DataModel[Row store] means that like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.
+
+For more information, see http://cassandra.apache.org/[the Apache Cassandra web site].
+
+Issues should be reported on https://issues.apache.org/jira/projects/CASSANDRA/issues/[The Cassandra Jira].
+
+Requirements
+------------
+- Java: see supported versions in build.xml (search for property "java.supported").
+- Python: for `cqlsh`, see `bin/cqlsh` (search for function "is_supported_version").
+
+
+Getting started
+---------------
+
+This short guide will walk you through getting a basic one node cluster up
+and running, and demonstrate some simple reads and writes. For a more-complete guide, please see the Apache Cassandra website's https://cassandra.apache.org/doc/latest/cassandra/getting_started/index.html[Getting Started Guide].
+
+First, we'll unpack our archive:
+
+ $ tar -zxvf apache-cassandra-$VERSION.tar.gz
+ $ cd apache-cassandra-$VERSION
+
+After that we start the server. Running the startup script with the -f argument will cause
+Cassandra to remain in the foreground and log to standard out; it can be stopped with ctrl-C.
+
+ $ bin/cassandra -f
+
+Now let's try to read and write some data using the Cassandra Query Language:
+
+ $ bin/cqlsh
+
+The command line client is interactive so if everything worked you should
+be sitting in front of a prompt:
+
+----
+Connected to Test Cluster at localhost:9042.
+[cqlsh 6.2.0 | Cassandra 5.0-SNAPSHOT | CQL spec 3.4.7 | Native protocol v5]
+Use HELP for help.
+cqlsh>
+----
+
+As the banner says, you can use 'help;' or '?' to see what CQL has to
+offer, and 'quit;' or 'exit;' when you've had enough fun. But let's try
+something slightly more interesting:
+
+----
+cqlsh> CREATE KEYSPACE schema1
+ WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
+cqlsh> USE schema1;
+cqlsh:Schema1> CREATE TABLE users (
+ user_id varchar PRIMARY KEY,
+ first varchar,
+ last varchar,
+ age int
+ );
+cqlsh:Schema1> INSERT INTO users (user_id, first, last, age)
+ VALUES ('jsmith', 'John', 'Smith', 42);
+cqlsh:Schema1> SELECT * FROM users;
+ user_id | age | first | last
+---------+-----+-------+-------
+ jsmith | 42 | john | smith
+cqlsh:Schema1>
+----
+
+If your session looks similar to what's above, congrats, your single node
+cluster is operational!
+
+For more on what commands are supported by CQL, see
+https://cassandra.apache.org/doc/5.0/cassandra/developing/cql/index.html[the CQL reference]. A
+reasonable way to think of it is as, "SQL minus joins and subqueries, plus collections."
+
+Wondering where to go from here?
+
+ * Join us in #cassandra on the https://s.apache.org/slack-invite[ASF Slack] and ask questions.
+ * Subscribe to the Users mailing list by sending a mail to
+ user-subscribe@cassandra.apache.org.
+ * Subscribe to the Developer mailing list by sending a mail to
+ dev-subscribe@cassandra.apache.org.
+ * Visit the http://cassandra.apache.org/community/[community section] of the Cassandra website for more information on getting involved.
+ * Visit the http://cassandra.apache.org/doc/latest/development/index.html[development section] of the Cassandra website for more information on how to contribute.
diff --git a/data/readmes/cdk-for-kubernetes-cdk8s-redirect.md b/data/readmes/cdk-for-kubernetes-cdk8s-redirect.md
new file mode 100644
index 0000000..02cfee1
--- /dev/null
+++ b/data/readmes/cdk-for-kubernetes-cdk8s-redirect.md
@@ -0,0 +1,120 @@
+# CDK for Kubernetes (CDK8s) - README (redirect)
+
+**Repository**: https://github.com/cdk8s-team/cdk8s
+**Version**: redirect
+
+---
+
+# Cloud Development Kit for Kubernetes
+
+
+
+[](https://github.com/cdk8s-team/cdk8s/actions/workflows/website.yml)
+
+[](https://constructs.dev/packages/cdk8s)
+
+**cdk8s** is an open-source software development framework for defining Kubernetes applications and reusable abstractions using familiar
+programming languages and rich object-oriented APIs. cdk8s apps synthesize into standard Kubernetes
+manifests which can be applied to any Kubernetes cluster.
+
+cdk8s is a [Cloud Native Computing Foundation](https://www.cncf.io) Sandbox Project, built with ❤️ at AWS. We encourage you to [try it out](#getting-started), [leave feedback](#help--feedback), and [jump in to help](#contributing)!
+
+Contents:
+
+- [Repositories](#repositories)
+- [Overview](#overview)
+- [Getting Started](#getting-started)
+- [Help \& Feedback](#help--feedback)
+- [Documentation](#documentation)
+- [Examples](#examples)
+- [Roadmap](#roadmap)
+- [Community](#community)
+- [Contributing](#contributing)
+- [CDK8s.io website](#cdk8sio-website)
+- [License](#license)
+
+## Repositories
+
+This project consists of multiple packages, maintained and released via the following repositories:
+
+- [cdk8s](https://github.com/cdk8s-team/cdk8s-core) - Core library. Note that, for historical reasons, the [`cdk8s`](https://www.npmjs.com/package/cdk8s) package is maintained in the `cdk8s-team/cdk8s-core` repository.
+- [cdk8s-cli](https://github.com/cdk8s-team/cdk8s-cli) - Command-Line interface.
+- [cdk8s-plus](https://github.com/cdk8s-team/cdk8s-plus) - High-Level constructs for Kubernetes core.
+
+The current repository acts as an umbrella repository for cross-module concerns, as well as the deployment of [`cdk8s.io`](https://cdk8s.io).
+
+## Overview
+
+**cdk8s** apps are programs written in one of the supported programming
+languages. They are structured as a tree of
+[constructs](https://github.com/aws/constructs).
+
+The root of the tree is an `App` construct. Within an app, users define any
+number of charts (classes that extend the `Chart` class). Each chart is
+synthesized into a separate Kubernetes manifest file. Charts are, in turn,
+composed of any number of constructs, and eventually from resources, which
+represent any Kubernetes resource, such as `Pod`, `Service`, `Deployment`,
+`ReplicaSet`, etc.
+
+cdk8s apps only ***define*** Kubernetes applications, they don't actually apply
+them to the cluster. When an app is executed, it *synthesizes* all the charts
+defined within the app into the `dist` directory, and then those charts can be
+applied to any Kubernetes cluster using `kubectl apply -f dist/chart.k8s.yaml` or a GitOps tool like [Flux](https://fluxcd.io/).
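+The structure described above can be sketched as a minimal TypeScript app (the `KubeDeployment` import assumes you have run `cdk8s import` to generate Kubernetes constructs locally; the names and paths here are illustrative, not authoritative):
+
+```typescript
+import { App, Chart } from 'cdk8s';
+import { Construct } from 'constructs';
+// Generated locally by `cdk8s import`; class and path are illustrative.
+import { KubeDeployment } from './imports/k8s';
+
+class MyChart extends Chart {
+  constructor(scope: Construct, id: string) {
+    super(scope, id);
+    // Each Kubernetes resource is a construct within the chart.
+    new KubeDeployment(this, 'web', {
+      spec: {
+        replicas: 2,
+        selector: { matchLabels: { app: 'web' } },
+        template: {
+          metadata: { labels: { app: 'web' } },
+          spec: { containers: [{ name: 'web', image: 'nginx' }] },
+        },
+      },
+    });
+  }
+}
+
+const app = new App();
+new MyChart(app, 'my-chart');
+app.synth(); // writes the chart's manifest into dist/
+```
+
+Running the app only synthesizes the manifest; applying it to a cluster is a separate `kubectl apply` (or GitOps) step, as noted above.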
+
+> **cdk8s** is based on the design concepts and technologies behind the [AWS
+Cloud Development Kit](https://aws.amazon.com/cdk), and can interoperate with
+AWS CDK constructs to define cloud-native applications that include both
+Kubernetes resources and other CDK constructs as first class citizens.
+
+Read [our blog](https://aws.amazon.com/blogs/containers/introducing-cdk-for-kubernetes/) or [watch our CNCF webinar](https://www.cncf.io/webinars/end-yaml-engineering-with-cdk8s/) to learn more and see a live demo of cdk8s in action.
+
+## Getting Started
+
+See the [Getting Started](https://cdk8s.io/docs/latest/get-started) guide in
+[cdk8s Documentation](https://cdk8s.io/docs/).
+
+## Help & Feedback
+
+Interacting with the community and the development team is a great way to
+contribute to the project. Please consider the following venues (in order):
+
+- Search [open issues](https://github.com/cdk8s-team/cdk8s/issues)
+- Stack Overflow: [cdk8s](https://stackoverflow.com/questions/tagged/cdk8s)
+- File a [new issue](https://github.com/cdk8s-team/cdk8s/issues/new/choose)
+- Slack: #cdk8s channel in [cdk.dev](https://cdk.dev)
+
+## Documentation
+
+See [cdk8s Documentation](https://cdk8s.io/docs).
+
+## Examples
+
+See our [Examples Directory](./examples).
+
+## Roadmap
+
+See our [roadmap](https://github.com/cdk8s-team/cdk8s/projects/1) for details about our plans for the project.
+
+## Community
+
+See [Awesome cdk8s](https://github.com/dungahk/awesome-cdk8s).
+
+If you're a cdk8s user please consider adding your name to the [ADOPTERS](./ADOPTERS.md) file.
+
+## Contributing
+
+The cdk8s project adheres to the [CNCF Code of
+Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
+
+We welcome community contributions and pull requests. See our [contribution
+guide](./CONTRIBUTING.md) for more information on how to report issues, set up a
+development environment and submit code.
+
+## CDK8s.io website
+
+See [Docs Directory](./docs/README.md).
+
+## License
+
+This project is distributed under the [Apache License, Version 2.0](./LICENSE).
+
diff --git a/data/readmes/cert-manager-v1200-alpha0.md b/data/readmes/cert-manager-v1200-alpha0.md
new file mode 100644
index 0000000..01e123c
--- /dev/null
+++ b/data/readmes/cert-manager-v1200-alpha0.md
@@ -0,0 +1,132 @@
+# cert-manager - README (v1.20.0-alpha.0)
+
+**Repository**: https://github.com/cert-manager/cert-manager
+**Version**: v1.20.0-alpha.0
+
+---
+
+
+
+
+# cert-manager
+
+cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates.
+
+It supports issuing certificates from a variety of sources, including Let's Encrypt (ACME), HashiCorp Vault, and Venafi TPP / TLS Protect Cloud, as well as local in-cluster issuance.
+
+cert-manager also ensures certificates remain valid and up to date, attempting to renew certificates at an appropriate time before expiry to reduce the risk of outages and remove toil.
+
+
+
+## Documentation
+
+Documentation for cert-manager can be found at [cert-manager.io](https://cert-manager.io/docs/).
+
+For the common use-case of automatically issuing TLS certificates for
+Ingress resources, see the [cert-manager nginx-ingress quick start guide](https://cert-manager.io/docs/tutorials/acme/nginx-ingress/).
+
+For a more comprehensive guide to issuing your first certificate, see our [getting started guide](https://cert-manager.io/docs/getting-started/).
+
+### Installation
+
+[Installation](https://cert-manager.io/docs/installation/) is documented on the website, with a variety of supported methods.
+
+## Developing cert-manager
+
+We actively welcome contributions and we support both Linux and macOS environments for development.
+
+Different platforms have different requirements; we document everything on our [Building cert-manager](https://cert-manager.io/docs/contributing/building/)
+website page.
+
+Note in particular that macOS has several extra requirements, to ensure that modern tools are installed and available. Read the page before
+getting started!
+
+## Troubleshooting
+
+If you encounter any issues whilst using cert-manager, we have a number of ways to get help:
+
+- A [troubleshooting guide](https://cert-manager.io/docs/faq/troubleshooting/) on our website.
+- Our official [Kubernetes Slack channel](https://cert-manager.io/docs/contributing/#slack) - the quickest way to ask! ([#cert-manager](https://kubernetes.slack.com/messages/cert-manager) and [#cert-manager-dev](https://kubernetes.slack.com/messages/cert-manager-dev))
+- [Searching for an existing issue](https://github.com/cert-manager/cert-manager/issues).
+
+If you believe you've found a bug and cannot find an existing issue, feel free to [open a new issue](https://github.com/cert-manager/cert-manager/issues)!
+Be sure to include as much information as you can about your environment.
+
+## Community
+
+The [`cert-manager-dev` Google Group](https://groups.google.com/forum/#!forum/cert-manager-dev)
+is used for project wide announcements and development coordination.
+Anybody with a Google account can join the group by visiting the group and clicking "Join Group".
+
+### Meetings
+
+We have several public meetings which any member of our Google Group is more than welcome to join!
+
+Check out the details on [our website](https://cert-manager.io/docs/contributing/#meetings). Feel
+free to drop in and ask questions, chat with us or just to say hi!
+
+## Contributing
+
+We welcome pull requests with open arms! There's a lot of work to do here, and
+we're especially concerned with ensuring the longevity and reliability of the
+project. The [contributing guide](https://cert-manager.io/docs/contributing/)
+will help you get started.
+
+## Coding Conventions
+
+Code style guidelines are documented on the [coding conventions](https://cert-manager.io/docs/contributing/coding-conventions/) page
+of the cert-manager website. Please try to follow those guidelines if you're submitting a pull request for cert-manager.
+
+## Importing cert-manager as a Module
+
+⚠️ Please note that cert-manager **does not** currently provide a Go module compatibility guarantee. That means that
+**most code under `pkg/` is subject to change in a breaking way**, even between minor or patch releases and even if
+the code is currently publicly exported.
+
+The lack of a Go module compatibility guarantee does not affect API version guarantees
+under the [Kubernetes Deprecation Policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/).
+
+For more details see [Importing cert-manager in Go](https://cert-manager.io/docs/contributing/importing/) on the
+cert-manager website.
+
+The import path for cert-manager versions 1.8 and later is `github.com/cert-manager/cert-manager`.
+
+For all versions of cert-manager before 1.8, including minor and patch releases, the import path is `github.com/jetstack/cert-manager`.
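+
+As a sketch, the difference shows up in a consumer's `go.mod`; the pre-1.8 version number below is illustrative:
+
+```
+require (
+    // cert-manager 1.8 and later:
+    github.com/cert-manager/cert-manager v1.20.0-alpha.0
+
+    // before 1.8 the module lived at (illustrative version):
+    // github.com/jetstack/cert-manager v1.7.0
+)
+```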
+
+## Security Reporting
+
+Security is the number one priority for cert-manager. If you think you've found a security vulnerability, we'd love to hear from you.
+
+Follow the instructions in [SECURITY.md](./SECURITY.md) to make a report.
+
+## Changelog
+
+[Every release](https://github.com/cert-manager/cert-manager/releases) on GitHub has a changelog,
+and we also publish release notes on [the website](https://cert-manager.io/docs/release-notes/).
+
+## History
+
+cert-manager is loosely based upon the work of [kube-lego](https://github.com/jetstack/kube-lego)
+and has borrowed some wisdom from other similar projects such as [kube-cert-manager](https://github.com/PalmStoneGames/kube-cert-manager).
+
+
+Logo design by [Zoe Paterson](https://zoepatersonmedia.com)
diff --git a/data/readmes/certbot-v521.md b/data/readmes/certbot-v521.md
new file mode 100644
index 0000000..7aae25f
--- /dev/null
+++ b/data/readmes/certbot-v521.md
@@ -0,0 +1,9 @@
+# Certbot - README (v5.2.1)
+
+**Repository**: https://github.com/certbot/certbot
+**Version**: v5.2.1
+**Branch**: v5.2.1
+
+---
+
+certbot/README.rst
\ No newline at end of file
diff --git a/data/readmes/chainlink-v2301.md b/data/readmes/chainlink-v2301.md
new file mode 100644
index 0000000..9ee1fc0
--- /dev/null
+++ b/data/readmes/chainlink-v2301.md
@@ -0,0 +1,365 @@
+# Chainlink - README (v2.30.1)
+
+**Repository**: https://github.com/smartcontractkit/chainlink
+**Version**: v2.30.1
+
+---
+
+
+
+
+
+[](https://hub.docker.com/r/smartcontract/chainlink/tags)
+[](https://github.com/smartcontractkit/chainlink/blob/master/LICENSE)
+[](https://github.com/smartcontractkit/chainlink/actions/workflows/changeset.yml?query=workflow%3AChangeset)
+[](https://github.com/smartcontractkit/chainlink/graphs/contributors)
+[](https://github.com/smartcontractkit/chainlink/commits/master)
+[](https://docs.chain.link/)
+
+[Chainlink](https://chain.link/) expands the capabilities of smart contracts by enabling access to real-world data and off-chain computation while maintaining the security and reliability guarantees inherent to blockchain technology.
+
+This repo contains the Chainlink core node and contracts. The core node is the bundled binary available to be run by node operators participating in a [decentralized oracle network](https://link.smartcontract.com/whitepaper).
+All major release versions have pre-built docker images available for download from the [Chainlink dockerhub](https://hub.docker.com/r/smartcontract/chainlink/tags).
+If you are interested in contributing please see our [contribution guidelines](./docs/CONTRIBUTING.md).
+If you are here to report a bug or request a feature, please [check currently open Issues](https://github.com/smartcontractkit/chainlink/issues).
+For more information about how to get started with Chainlink, check our [official documentation](https://docs.chain.link/).
+
+## Community
+
+Chainlink has an active and ever growing community. [Discord](https://discordapp.com/invite/aSK4zew)
+is the primary communication channel used for day to day communication,
+answering development questions, and aggregating Chainlink related content. Take
+a look at the [community docs](./docs/COMMUNITY.md) for more information
+regarding Chainlink social accounts, news, and networking.
+
+## Build Chainlink
+
+1. [Install Go 1.23](https://golang.org/doc/install), and add your GOPATH's [bin directory to your PATH](https://golang.org/doc/code.html#GOPATH)
+ - Example for macOS: `export GOPATH=/Users/$USER/go` & `export PATH=$GOPATH/bin:$PATH`
+2. Install [NodeJS v20](https://nodejs.org/en/download/package-manager/) & [pnpm v10 via npm](https://pnpm.io/installation#using-npm).
+ - It might be easier long term to use [nvm](https://nodejs.org/en/download/package-manager/#nvm) to switch between node versions for different projects. For example, assuming $NODE_VERSION was set to a valid version of NodeJS, you could run: `nvm install $NODE_VERSION && nvm use $NODE_VERSION`
+3. Install [Postgres (>= 12.x)](https://wiki.postgresql.org/wiki/Detailed_installation_guides). It is recommended to run the latest major version of postgres.
+ - Note: if you are running the official Chainlink docker image, the highest supported Postgres version is 16.x due to the bundled client.
+ - You should [configure Postgres](https://www.postgresql.org/docs/current/ssl-tcp.html) to use an SSL connection (or for testing you can set `?sslmode=disable` in your Postgres query string).
+4. Download Chainlink: `git clone https://github.com/smartcontractkit/chainlink && cd chainlink`
+5. Build and install Chainlink: `make install`
+6. Run the node: `chainlink help`
+
+For the latest information on setting up a development environment, see the [Development Setup Guide](https://github.com/smartcontractkit/chainlink/wiki/Development-Setup-Guide).
+
+### Build from PR
+
+To build an unofficial testing-only image from a feature branch or PR, you can do one of the following:
+
+1. Send a workflow dispatch event from our [`docker-build` workflow](https://github.com/smartcontractkit/chainlink/actions/workflows/docker-build.yml).
+2. Add the `build-publish` label to your PR and then either retry the `docker-build` workflow, or push a new commit.
+
+### Build Plugins
+
+Plugins are defined in YAML files within the `plugins/` directory; each file's name has a `plugins.` prefix. Plugins are installed with [loopinstall](https://github.com/smartcontractkit/chainlink-common/tree/main/pkg/loop/cmd/loopinstall).
+
+To install the plugins, run:
+
+```bash
+make install-plugins
+```
+
+Some plugins (such as those in `plugins/plugins.private.yaml`) reference private GitHub repositories. To build these plugins, you must have a `GITHUB_TOKEN` environment variable set, or (preferably) use the [gh](https://cli.github.com/manual/gh) GitHub CLI tool with the [GitHub CLI credential helper](https://cli.github.com/manual/gh_auth_setup-git):
+
+```shell
+# Sets up a credential helper.
+gh auth setup-git
+```
+
+Then you can build the plugins with:
+
+```shell
+make install-plugins-private
+```
+
+### Docker Builds
+
+To build the experimental "plugins" Chainlink docker image, you can run this from the root of the repository:
+
+```shell
+# The GITHUB_TOKEN is required to access private repos which are used by some plugins.
+export GITHUB_TOKEN=$(gh auth token) # requires the `gh` cli tool.
+make docker-plugins
+```
+
+### Ethereum Execution Client Requirements
+
+In order to run the Chainlink node you must have access to a running Ethereum node with an open websocket connection.
+Any Ethereum based network will work once you've [configured](https://github.com/smartcontractkit/chainlink#configure) the chain ID.
+Ethereum node versions currently tested and supported:
+
+[Officially supported]
+
+- [Parity/Openethereum](https://github.com/openethereum/openethereum) (NOTE: Parity is deprecated and support for this client may be removed in future)
+- [Geth](https://github.com/ethereum/go-ethereum/releases)
+- [Besu](https://github.com/hyperledger/besu)
+
+[Supported but broken]
+These clients are supported by Chainlink, but have bugs that prevent Chainlink from working reliably with them.
+
+- [Nethermind](https://github.com/NethermindEth/nethermind)
+ Blocking issues:
+ - ~https://github.com/NethermindEth/nethermind/issues/4384~
+- [Erigon](https://github.com/ledgerwatch/erigon)
+ Blocking issues:
+ - https://github.com/ledgerwatch/erigon/discussions/4946
+ - https://github.com/ledgerwatch/erigon/issues/4030#issuecomment-1113964017
+
+We cannot recommend specific version numbers for Ethereum nodes since the software is continually updated, but you should usually try to run the latest version available.
+
+## Running a local Chainlink node
+
+**NOTE**: By default, Chainlink runs in TLS mode. For local development you can disable this by making a dev build with `make chainlink-dev` and setting the following TOML fields:
+
+```toml
+[WebServer]
+SecureCookies = false
+TLS.HTTPSPort = 0
+
+[Insecure]
+DevWebServer = true
+```
+
+Alternatively, you can generate self-signed certificates using `tools/bin/self-signed-certs` or [manually](https://github.com/smartcontractkit/chainlink/wiki/Creating-Self-Signed-Certificates).
+
+To start your Chainlink node, simply run:
+
+```bash
+chainlink node start
+```
+
+By default this will start on port 6688. You should be able to access the UI at [http://localhost:6688/](http://localhost:6688/).
+
+Chainlink provides a remote CLI client as well as a UI. Once your node has started, you can open a new terminal window to use the CLI. You will need to log in to authorize the client first:
+
+```bash
+chainlink admin login
+```
+
+(You can also set `ADMIN_CREDENTIALS_FILE=/path/to/credentials/file` to avoid having to log in again.)
+
+Now you can view your current jobs with:
+
+```bash
+chainlink jobs list
+```
+
+To find out more about the Chainlink CLI, you can always run `chainlink help`.
+
+Check out the [doc](https://docs.chain.link/) pages on [Jobs](https://docs.chain.link/docs/jobs/) to learn more about how to create Jobs.
+
+### Configuration
+
+Node configuration is managed by a combination of environment variables and direct setting via API/UI/CLI.
+
+Check the [official documentation](https://docs.chain.link/docs/configuration-variables) for more information on how to configure your node.
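+
+For a flavor of the TOML format, here is a hedged sketch of an override file; the values are illustrative, so consult the configuration docs for the full schema:
+
+```toml
+# Illustrative overrides only - see the configuration documentation for the full schema.
+[Log]
+Level = 'info'
+
+[WebServer]
+HTTPPort = 6688
+```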
+
+### External Adapters
+
+External adapters are what make Chainlink easily extensible, providing simple integration of custom computations and specialized APIs. A Chainlink node communicates with external adapters via a simple REST API.
+
+For more information on creating and using external adapters, please see our [external adapters page](https://docs.chain.link/docs/external-adapters).
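+
+To make that concrete, here is a minimal, hedged sketch of an external adapter as a plain Go HTTP handler. The `id`/`data`/`result` field names follow common external adapter conventions but are assumptions here; the `main` function exercises the handler in-process, whereas a real adapter would listen on a port for the node to call.
+
+```go
+package main
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"net/http"
+	"net/http/httptest"
+)
+
+// Request/response shapes commonly used by external adapters
+// (illustrative; see the external adapters docs for the exact schema).
+type AdapterRequest struct {
+	ID   string                 `json:"id"`
+	Data map[string]interface{} `json:"data"`
+}
+
+type AdapterResponse struct {
+	JobRunID string                 `json:"jobRunID"`
+	Data     map[string]interface{} `json:"data"`
+	Result   interface{}            `json:"result"`
+}
+
+// adapter doubles the "value" field from the request - a stand-in
+// for calling out to a specialized API or custom computation.
+func adapter(w http.ResponseWriter, r *http.Request) {
+	var req AdapterRequest
+	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
+		http.Error(w, err.Error(), http.StatusBadRequest)
+		return
+	}
+	v, ok := req.Data["value"].(float64)
+	if !ok {
+		http.Error(w, "missing numeric 'value'", http.StatusBadRequest)
+		return
+	}
+	result := v * 2
+	resp := AdapterResponse{
+		JobRunID: req.ID,
+		Data:     map[string]interface{}{"result": result},
+		Result:   result,
+	}
+	w.Header().Set("Content-Type", "application/json")
+	json.NewEncoder(w).Encode(resp)
+}
+
+func main() {
+	// Exercise the handler in-process; a real adapter would call
+	// http.ListenAndServe and the node would POST job requests to it.
+	srv := httptest.NewServer(http.HandlerFunc(adapter))
+	defer srv.Close()
+
+	body, _ := json.Marshal(AdapterRequest{ID: "1234", Data: map[string]interface{}{"value": 21.0}})
+	resp, err := http.Post(srv.URL, "application/json", bytes.NewReader(body))
+	if err != nil {
+		panic(err)
+	}
+	var out AdapterResponse
+	json.NewDecoder(resp.Body).Decode(&out)
+	fmt.Printf("jobRunID=%s result=%v\n", out.JobRunID, out.Result)
+}
+```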
+
+## Verify Official Chainlink Releases
+
+We use `cosign` with OIDC keyless signing during the [Build, Sign and Publish Chainlink](https://github.com/smartcontractkit/chainlink/actions/workflows/build-publish.yml) workflow.
+
+It is encouraged for any node operator building from the official Chainlink docker image to verify that the tagged release version was indeed built from this workflow.
+
+You will need `cosign` in order to do this verification. [Follow the instructions here to install cosign](https://docs.sigstore.dev/system_config/installation/).
+
+```bash
+# tag is the tagged release version - ie. 2.16.0
+cosign verify index.docker.io/smartcontract/chainlink:${tag} \
+ --certificate-oidc-issuer https://token.actions.githubusercontent.com \
+ --certificate-identity "https://github.com/smartcontractkit/chainlink/.github/workflows/build-publish.yml@refs/tags/v${tag}"
+```
+
+## Development
+
+### Running tests
+
+1. [Install pnpm 10 via npm](https://pnpm.io/installation#using-npm)
+
+2. Install [gencodec](https://github.com/fjl/gencodec) and [jq](https://stedolan.github.io/jq/download/) to be able to run `go generate ./...` and `make abigen`
+
+3. Install mockery with `make mockery`. Using the `make` target will install the correct version.
+
+4. Generate and compile static assets:
+
+```bash
+make generate
+```
+
+5. Prepare your development environment:
+
+The tests require a Postgres database. The environment variable `CL_DATABASE_URL` must be set to a value that
+can connect to a `_test` database, and the user must be able to create and drop that `_test` database.
+
+Note: other environment variables should not be set, or some tests may fail.
+
+There is a helper script for initial setup that creates an appropriate test user. It requires Postgres to be
+running on localhost at port 5432. You will be prompted for the `postgres` user password.
+
+```bash
+make setup-testdb
+```
+
+This script will save the `CL_DATABASE_URL` in `.dbenv`.
+
+Changes to the database require migrations to be run. Similarly, pulling the repo may require running migrations.
+After the one-time setup above:
+
+```
+source .dbenv
+make testdb
+```
+
+If you encounter the error `database accessed by other users (SQLSTATE 55006) exit status 1`
+and you want to force the database creation, then use
+
+```
+source .dbenv
+make testdb-force
+```
+
+6. Run tests:
+
+```bash
+go test ./...
+```
+
+#### Notes
+
+- The `-parallel` flag can be used to limit CPU usage when running tests in the background (e.g. `-parallel=4`); the default is `GOMAXPROCS`
+- The `-p` flag can be used to limit the number of _packages_ tested concurrently, if they are interfering with one another (e.g. `-p=1`)
+- The `-short` flag skips tests which depend on the database, for quickly spot-checking simpler tests in around one minute
+
+#### Race Detector
+
+As of Go 1.1, the runtime includes a data race detector, enabled with the `-race` flag. This is used in CI via the
+`tools/bin/go_core_race_tests` script. If the action detects a race, the artifact on the summary page will include
+`race.*` files with detailed stack traces.
+
+> _**It will not issue false positives, so take its warnings seriously.**_
+
+For local, targeted race detection, you can run:
+
+```bash
+GORACE="log_path=$PWD/race" go test -race ./core/path/to/pkg -count 10
+GORACE="log_path=$PWD/race" go test -race ./core/path/to/pkg -count 100 -run TestFooBar/sub_test
+```
+
+https://go.dev/doc/articles/race_detector
+
+#### Fuzz tests
+
+As of Go 1.18, fuzz tests `func FuzzXXX(*testing.F)` are included as part of the normal test suite, so existing cases are executed with `go test`.
+
+Additionally, you can run active fuzzing to search for new cases:
+
+```bash
+go test ./pkg/path -run=XXX -fuzz=FuzzTestName
+```
+
+https://go.dev/doc/fuzz/
+
+### Go Modules
+
+This repository contains three Go modules:
+
+```mermaid
+flowchart RL
+ github.com/smartcontractkit/chainlink/v2
+ github.com/smartcontractkit/chainlink/integration-tests --> github.com/smartcontractkit/chainlink/v2
+ github.com/smartcontractkit/chainlink/core/scripts --> github.com/smartcontractkit/chainlink/v2
+
+```
+
+The `integration-tests` and `core/scripts` modules import the root module using a relative replace in their `go.mod` files,
+so dependency changes in the root `go.mod` often require changes in those modules as well. After making a change, `go mod tidy`
+can be run on all three modules using:
+
+```
+make gomodtidy
+```
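+
+The relative replace looks roughly like this in, e.g., `integration-tests/go.mod` (a sketch; the pseudo-version string is the usual placeholder for replaced modules):
+
+```
+require github.com/smartcontractkit/chainlink/v2 v2.0.0-00010101000000-000000000000
+
+replace github.com/smartcontractkit/chainlink/v2 => ../
+```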
+
+### Code Generation
+
+Go generate is used to generate mocks in this project. Mocks are generated with [mockery](https://github.com/vektra/mockery) and live in core/internal/mocks.
+
+### Nix
+
+A [shell.nix](https://nixos.wiki/wiki/Development_environment_with_nix-shell) is provided for use with the [Nix package manager](https://nixos.org/). By default, we use the shell through [Nix Flakes](https://nixos.wiki/wiki/Flakes).
+
+Nix defines a declarative, reproducible development environment. The Flakes version uses deterministic, frozen (`flake.lock`) dependencies to
+make the built artifacts more consistent and reproducible.
+
+To use it:
+
+1. Install [nix package manager](https://nixos.org/download.html) in your system.
+
+- Enable [flakes support](https://nixos.wiki/wiki/Flakes#Enable_flakes)
+
+2. Run `nix develop`. You will be put in a shell containing all the dependencies.
+
+- Optionally, `nix develop --command $SHELL` will make use of your current shell instead of the default (bash).
+- You can use `direnv` to enable it automatically when `cd`-ing into the folder; for that, enable [nix-direnv](https://github.com/nix-community/nix-direnv) and `use flake` on it.
+
+3. Create a local postgres database:
+
+```sh
+mkdir -p $PGDATA && cd $PGDATA/
+initdb
+pg_ctl -l postgres.log -o "--unix_socket_directories='$PWD'" start
+createdb chainlink_test -h localhost
+createuser --superuser --password chainlink -h localhost
+# then type a test password, e.g.: chainlink, and set it in shell.nix CL_DATABASE_URL
+```
+
+4. When re-entering the project, you can restart Postgres: `cd $PGDATA; pg_ctl -l postgres.log -o "--unix_socket_directories='$PWD'" start`
+ Now you can run tests or compile code as usual.
+5. When you're done, stop it: `cd $PGDATA; pg_ctl -o "--unix_socket_directories='$PWD'" stop`
+
+### Changesets
+
+We use [changesets](https://github.com/changesets/changesets) to manage versioning for the libraries and services.
+
+Every PR that modifies any configuration or code should most likely be accompanied by a changeset file.
+
+To install `changesets`:
+
+1. Install `pnpm` if it is not already installed - [docs](https://pnpm.io/installation).
+2. Run `pnpm install`.
+
+Before or after you create a commit, run the `pnpm changeset` command to create an accompanying changeset entry, which will be reflected in the CHANGELOG for the next release.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
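+
+A changeset entry is a small markdown file under `.changeset/` with YAML front matter naming the affected package and bump type; the package name and wording below are illustrative:
+
+```md
+---
+"chainlink": patch
+---
+
+Fixed a typo in the node start help text.
+```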
+
+### Tips
+
+For more tips on how to build and test Chainlink, see our [development tips page](https://github.com/smartcontractkit/chainlink/wiki/Development-Tips).
+
+### Contributing
+
+Contributions are welcome to Chainlink's source code.
+
+Please check out our [contributing guidelines](./docs/CONTRIBUTING.md) for more details.
+
+Thank you!
diff --git a/data/readmes/chaos-mesh-v280.md b/data/readmes/chaos-mesh-v280.md
new file mode 100644
index 0000000..6031215
--- /dev/null
+++ b/data/readmes/chaos-mesh-v280.md
@@ -0,0 +1,153 @@
+# Chaos Mesh - README (v2.8.0)
+
+**Repository**: https://github.com/chaos-mesh/chaos-mesh
+**Version**: v2.8.0
+
+---
+
+
+
+
+---
+
+
+
+[](https://github.com/chaos-mesh/chaos-mesh/blob/master/LICENSE)
+[](https://codecov.io/gh/chaos-mesh/chaos-mesh)
+[](https://goreportcard.com/report/github.com/chaos-mesh/chaos-mesh)
+[](https://godoc.org/github.com/chaos-mesh/chaos-mesh)
+[](https://github.com/chaos-mesh/chaos-mesh/actions/workflows/upload_image.yml)
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fchaos-mesh%2Fchaos-mesh?ref=badge_shield)
+[](https://bestpractices.coreinfrastructure.org/projects/3680)
+[](https://artifacthub.io/packages/helm/chaos-mesh/chaos-mesh)
+
+
+
+Chaos Mesh is an open source cloud-native Chaos Engineering platform. It offers various types of fault simulation and powerful capabilities for orchestrating fault scenarios.
+
+Using Chaos Mesh, you can conveniently simulate faults that might occur in reality in development, testing, and production environments, and find potential problems in the system. To lower the barrier to entry for Chaos Engineering, Chaos Mesh provides visual operation: you can easily design your Chaos scenarios on the Web UI and monitor the status of Chaos experiments.
+
+
+
+
+Chaos Mesh is a [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/) incubating project. If you are an organization that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how Chaos Mesh plays a role, read the CNCF [announcement](https://www.cncf.io/announcements/2020/09/02/cloud-native-computing-foundation-announces-tikv-graduation/).
+
+---
+
+At the current stage, Chaos Mesh has the following components:
+
+- **Chaos Operator**: the core component for chaos orchestration. Fully open sourced.
+- **Chaos Dashboard**: a Web UI for managing, designing, and monitoring Chaos Experiments.
+
+See the following demo video for a quick view of Chaos Mesh:
+
+[](https://www.youtube.com/watch?v=ifZEwdJO868)
+
+## Chaos Operator
+
+Chaos Operator injects chaos into the applications and Kubernetes infrastructure in a manageable way, which provides easy, custom definitions for chaos experiments and automatic orchestration. There are two components at play:
+
+**Chaos Controller Manager**: is primarily responsible for the scheduling and management of Chaos experiments. This component contains several CRD Controllers, such as Workflow Controller, Scheduler Controller, and Controllers of various fault types.
+
+**Chaos Daemon**: runs as a DaemonSet and has Privileged permission by default (which can be disabled). This component mainly interferes with specific network devices, file systems, and kernels by entering the target Pod's namespaces.
+
+
+
+Chaos Operator uses [CustomResourceDefinition (CRD)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) to define chaos objects.
+
+The current implementation supports a few types of CRD objects for fault injection, namely `PodChaos`, `NetworkChaos`, `IOChaos`, `TimeChaos`, `StressChaos`, and so on.
+You can get the full list of CRD objects and their specifications in the [Chaos Mesh Docs](https://chaos-mesh.org/docs/).
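+
+For a flavor of what these objects look like, here is a minimal `PodChaos` sketch that kills a single pod matching a label (the namespace and label values are placeholders; see the docs for the full spec):
+
+```yaml
+apiVersion: chaos-mesh.org/v1alpha1
+kind: PodChaos
+metadata:
+  name: pod-kill-example
+spec:
+  action: pod-kill
+  mode: one                 # affect one randomly chosen matching pod
+  selector:
+    namespaces:
+      - my-app              # placeholder target namespace
+    labelSelectors:
+      app: my-app           # placeholder label
+```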
+
+## Quick start
+
+See [Quick Start](https://chaos-mesh.org/docs/quick-start) and [Install Chaos Mesh using Helm](https://chaos-mesh.org/docs/production-installation-using-helm/).
+
+## Contributing
+
+See the [contributing guide](./CONTRIBUTING.md) and [development guide](https://chaos-mesh.org/docs/developer-guide-overview).
+
+## Adopters
+
+See [ADOPTERS](ADOPTERS.md).
+
+## Blogs
+
+Blogs on Chaos Mesh design & implementation, features, chaos engineering, community updates, etc. See [Chaos Mesh Blogs](https://chaos-mesh.org/blog). Here are some recommended ones for you to start with:
+
+- [Chaos Mesh 2.0: To a Chaos Engineering Ecology](https://chaos-mesh.org/blog/chaos-mesh-2.0-to-a-chaos-engineering-ecology/)
+- [Chaos Mesh - Your Chaos Engineering Solution for System Resiliency on Kubernetes](https://chaos-mesh.org/blog/chaos_mesh_your_chaos_engineering_solution/)
+- [Run Your First Chaos Experiment in 10 Minutes](https://chaos-mesh.org/blog/run_your_first_chaos_experiment/)
+- [How to Simulate I/O Faults at Runtime](https://chaos-mesh.org/blog/how-to-simulate-io-faults-at-runtime/)
+- [Simulating Clock Skew in K8s Without Affecting Other Containers on the Node](https://chaos-mesh.org/blog/simulating-clock-skew-in-k8s-without-affecting-other-containers-on-node/)
+- [Building an Automated Testing Framework Based on Chaos Mesh and Argo](https://chaos-mesh.org/blog/building_automated_testing_framework)
+
+## Community
+
+Please reach out for bugs, feature requests, and other issues via:
+
+- Following us on Twitter [@chaos_mesh](https://twitter.com/chaos_mesh).
+
+- Joining the `#project-chaos-mesh` channel in the [CNCF Slack](https://slack.cncf.io/) workspace.
+
+- Filing an issue or opening a PR against this repository.
+
+### Community meetings
+
+- Chaos Mesh Community Monthly (Community and project-level updates, community sharing/demo, office hours)
+ - Time: on the fourth Thursday of every month (unless otherwise specified)
+ - [RSVP here](https://community.cncf.io/chaos-mesh-community/)
+ - [Meeting minutes](https://docs.google.com/document/d/1H8IfmhIJiJ1ltg-XLjqR_P_RaMHUGrl1CzvHnKM_9Sc/edit?usp=sharing)
+
+- Chaos Mesh Development Meeting (Releases, roadmap/features/RFC planning and discussion, issue triage/discussion, etc)
+ - Time: Every other Tuesday (unless otherwise specified)
+ - [RSVP here](https://community.cncf.io/chaos-mesh-community/)
+ - [Meeting minutes](https://docs.google.com/document/d/1s9X6tTOy3OGZaLDZQesGw1BNOrxQfWExjBFIn5irpPE/edit)
+
+### Community blogs
+
+- Grant Tarrant-Fisher: [Integrate your Reliability Toolkit with Your World](https://medium.com/search?q=Integrate+your+Reliability+Toolkit+with+Your+World)
+- Yoshinori Teraoka: [Sreake: Chaos Engineering with Chaos Mesh (in Japanese)](https://medium.com/sreake-jp/chaos-mesh-%E3%81%AB%E3%82%88%E3%82%8B%E3%82%AB%E3%82%AA%E3%82%B9%E3%82%A8%E3%83%B3%E3%82%B8%E3%83%8B%E3%82%A2%E3%83%AA%E3%83%B3%E3%82%B0-46fa2897c742)
+- Sébastien Prud'homme: [Chaos Mesh: a chaos generator for Kubernetes (in French)](https://www.cowboysysop.com/post/chaos-mesh-un-generateur-de-chaos-pour-kubernetes/)
+- Craig Morten
+ - [K8s Chaos Dive: Chaos-Mesh Part 1](https://dev.to/craigmorten/k8s-chaos-dive-2-chaos-mesh-part-1-2i96)
+ - [K8s Chaos Dive: Chaos-Mesh Part 2](https://dev.to/craigmorten/k8s-chaos-dive-chaos-mesh-part-2-536m)
+- Ronak Banka: [Getting Started with Chaos Mesh and Kubernetes](https://itnext.io/getting-started-with-chaos-mesh-and-kubernetes-bfd98d25d481)
+- kondoumh: [Trying out Chaos Mesh, a Kubernetes-native chaos engineering tool (in Japanese)](https://blog.kondoumh.com/entry/2020/10/23/123431)
+- Vadim Tkachenko: [ChaosMesh to Create Chaos in Kubernetes](https://www.percona.com/blog/2020/11/05/chaosmesh-to-create-chaos-in-kubernetes/)
+- Hui Zhang: [How a Top Game Company Uses Chaos Engineering to Improve Testing](https://chaos-mesh.org/blog/how-a-top-game-company-uses-chaos-engineering-to-improve-testing)
+- Anurag Paliwal
+ - [Securing tenant services while using chaos mesh using OPA](https://anuragpaliwal-93749.medium.com/securing-tenant-services-while-using-chaos-mesh-using-opa-3ae80c7f4b85)
+ - [Securing namespaces using restrict authorization feature in chaos mesh](https://anuragpaliwal-93749.medium.com/securing-namespaces-using-restrict-authorization-feature-in-chaos-mesh-2e110c3e0fb7)
+- Pavan Kumar: [Chaos Engineering in Kubernetes using Chaos Mesh](https://link.medium.com/1V90dEknugb)
+- Jessica Cherry: [Test your Kubernetes experiments with an open source web interface](https://opensource.com/article/21/6/chaos-mesh-kubernetes)
+- λ.eranga: [Chaos Engineering with Chaos Mesh](https://medium.com/rahasak/chaos-engineering-with-chaos-mesh-b040169b51bd)
+- Tomáš Kubica: [Kubernetes in practice: mischief with Chaos Mesh and Azure Chaos Studio](https://www.tomaskubica.cz/post/2021/kubernetes-prakticky-zlounstvi-s-chaos-mesh-a-azure-chaos-studio2/)
+- mend: [A look at what Chaos Mesh can do](https://qiita.com/mend/items/dcdfab5e980467bf58e9)
+
+### Community talks
+
+- Twain Taylor: [Chaos Mesh Simplifies & Organizes Chaos Engineering For Kubernetes](https://youtu.be/shbrjAY86ZQ)
+- Saiyam Pathak
+ - [Let's explore chaos mesh](https://youtu.be/kMbTYItsTTI)
+ - [Chaos Mesh - Chaos Engineering for Kubernetes](https://youtu.be/HAU_cjW1bMw)
+ - [Chaos Mesh 2.0](https://youtu.be/HmQ9cFwxF7g)
+
+## Media coverage
+
+- CodeZine: [Open source chaos testing tool Chaos Mesh 1.0 reaches general availability](https://codezine.jp/article/detail/12996)
+- @IT atmarkit: [Chaos Mesh 1.0, a chaos engineering platform for Kubernetes, released](https://www.atmarkit.co.jp/ait/articles/2010/09/news108.html)
+- Publickey: [Chaos Mesh, which lets you run chaos engineering tests by deliberately taking down Kubernetes Pods and networks, reaches version 1.0](https://www.publickey1.jp/blog/20/kubernetespodchaos_mesh10.html)
+- InfoQ: [Chaos Engineering on Kubernetes: Chaos Mesh Generally Available with v1.0](https://www.infoq.com/news/2020/10/kubernetes-chaos-mesh-ga/)
+- TechGenix: [Chaos Mesh Promises to Bring Order to Chaos Engineering](http://techgenix.com/chaos-mesh-chaos-engineering/)
+
+## License
+
+Chaos Mesh is licensed under the Apache License, Version 2.0. See [LICENSE](./LICENSE) for the full content.
+
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fchaos-mesh%2Fchaos-mesh?ref=badge_large)
+
+## Trademark
+
+Chaos Mesh is a trademark of The Linux Foundation. All rights reserved.
diff --git a/data/readmes/chaosblade-v180.md b/data/readmes/chaosblade-v180.md
new file mode 100644
index 0000000..78ec8fa
--- /dev/null
+++ b/data/readmes/chaosblade-v180.md
@@ -0,0 +1,100 @@
+# Chaosblade - README (v1.8.0)
+
+**Repository**: https://github.com/chaosblade-io/chaosblade
+**Version**: v1.8.0
+
+---
+
+
+
+# Chaosblade: An Easy to Use and Powerful Chaos Engineering Toolkit
+[](https://travis-ci.org/chaosblade-io/chaosblade)
+[](https://opencollective.com/chaosblade) [](https://codecov.io/gh/chaosblade-io/chaosblade)
+
+[](https://bestpractices.coreinfrastructure.org/projects/5032)
+
+Chinese version: [README](README_CN.md)
+Wiki: [DeepWiki](https://deepwiki.com/chaosblade-io/chaosblade-for-deepwiki)
+
+## Introduction
+
+ChaosBlade is an open source chaos experiment injection tool from Alibaba. It follows the principles of chaos engineering and the chaos experiment model, helping enterprises improve the fault tolerance of distributed systems and ensure business continuity as they move to the cloud or to cloud-native architectures.
+
+ChaosBlade is the open source counterpart of Alibaba's internal MonkeyKing project. It builds on Alibaba's nearly ten years of failure testing and drill practice, and combines the best ideas and practices from across the group's businesses.
+
+ChaosBlade is not only easy to use, but also supports a rich set of experiment scenarios, including:
+* Basic resources: CPU, memory, network, disk, process, and other experiment scenarios;
+* Java applications: databases, caches, messaging, the JVM itself, microservices, and more; you can also target any class method to inject a wide variety of complex experiment scenarios;
+* C++ applications: inject delays into arbitrary methods or lines of code, and tamper with variables and return values;
+* Containers: kill containers, plus CPU, memory, network, disk, process, and other experiment scenarios inside a container;
+* Cloud-native platforms: CPU, memory, network, disk, and process experiment scenarios on Kubernetes nodes; Pod network and Pod-level scenarios such as killing Pods; and the Docker container scenarios mentioned above.
+
+Encapsulating scenarios by domain into separate projects not only standardizes the scenarios within each domain, but also makes it easy to extend them horizontally and vertically. Because they all follow the chaos experiment model, the scenarios can be invoked uniformly through the chaosblade CLI. The projects currently included are:
+* [chaosblade](https://github.com/chaosblade-io/chaosblade): the chaos experiment management tool, with commands for creating, destroying, and querying experiments, and for preparing and revoking experiment environments. It is the execution tool for chaos experiments and can be invoked via CLI or HTTP. It provides complete commands, experiment scenarios, and parameter descriptions, and is simple and clear to operate.
+* [chaosblade-spec-go](https://github.com/chaosblade-io/chaosblade-spec-go): the Golang definition of the chaos experiment model; scenarios implemented in Golang can easily follow this specification.
+* [chaosblade-exec-os](https://github.com/chaosblade-io/chaosblade-exec-os): implementation of the basic resource experiment scenarios.
+* [chaosblade-exec-docker](https://github.com/chaosblade-io/chaosblade-exec-docker): Docker container experiment scenarios, standardized by calling the Docker API.
+* [chaosblade-exec-cri](https://github.com/chaosblade-io/chaosblade-exec-cri): container experiment scenarios, standardized by calling the CRI.
+* [chaosblade-operator](https://github.com/chaosblade-io/chaosblade-operator): experiment scenarios for the Kubernetes platform. Chaos experiments are defined with standard Kubernetes CRDs, so it is very convenient to create, update, and delete experiments using standard Kubernetes resource operations, including kubectl and client-go, as well as the chaosblade CLI described above.
+* [chaosblade-exec-jvm](https://github.com/chaosblade-io/chaosblade-exec-jvm): Java application experiment scenarios, using Java agent technology to attach dynamically with no code changes and zero-cost adoption. It also supports detaching the agent and fully reclaiming the resources the agent created.
+* [chaosblade-exec-cplus](https://github.com/chaosblade-io/chaosblade-exec-cplus): C++ application experiment scenarios, using GDB to inject experiments at the method and line-of-code level.
+* [chaosblade-box](https://github.com/chaosblade-io/chaosblade-box): a chaos engineering platform with resilience testing capabilities. For more information on the resilience testing capabilities, see the [main2](https://github.com/chaosblade-io/chaosblade-box/tree/main2) branch.
+
+## CLI Command
+You can download the latest chaosblade toolkit from [Releases](https://github.com/chaosblade-io/chaosblade/releases) and extract it to use it. If you want to inject Kubernetes-related fault scenarios, you also need to install the [chaosblade-operator](https://github.com/chaosblade-io/chaosblade-operator/releases). For detailed usage documentation in Chinese, see [chaosblade-help-zh-cn](https://chaosblade-io.gitbook.io/chaosblade-help-zh-cn/).
+
+chaosblade supports CLI and HTTP invocation methods. The supported commands are as follows:
+* **prepare**: alias `p`; prepares the environment before a chaos experiment, for example attaching the Java agent before drilling a Java application. To drill an application named `business`, run `blade p jvm --process business` on the target host. If the attach succeeds, a UID is returned for status queries or for revoking the agent.
+* **revoke**: alias `r`; undoes the experiment preparation, for example detaching the Java agent. The command is `blade revoke UID`.
+* **create**: alias `c`; creates a chaos experiment. The command is `blade create [TARGET] [ACTION] [FLAGS]`. For example, to delay calls from a Dubbo consumer to the xxx.xxx.Service interface by 3 seconds, run `blade create dubbo delay --consumer --time 3000 --service xxx.xxx.Service`. If the injection succeeds, the experiment UID is returned for status queries and for destroying the experiment.
+* **destroy**: alias `d`; destroys a chaos experiment, for example the Dubbo delay experiment above: `blade destroy UID`.
+* **status**: alias `s`; queries the status of the preparation stage or of an experiment: `blade status UID` or `blade status --type create`.
+* **server**: starts a web server exposing an HTTP service so chaosblade can be invoked via HTTP requests. For example, run `blade server start -p 9526` on target machine xxxx, then trigger a CPU full-load experiment with `curl "http://xxxx:9526/chaosblade?cmd=create%20cpu%20fullload"`.
+
+Use `blade help [COMMAND]` or `blade [COMMAND] -h` to view the help.
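The `server` mode's HTTP endpoint simply takes a URL-encoded CLI command in the `cmd` query parameter. As a minimal sketch, the request URL can be built with Python's standard library (the host `xxxx` and port 9526 are the placeholders from the example above):

```python
from urllib.parse import quote

host, port = "xxxx", 9526          # placeholders for the target machine
cmd = "create cpu fullload"        # same arguments as the blade CLI

# Percent-encode the CLI command for the `cmd` query parameter
url = f"http://{host}:{port}/chaosblade?cmd={quote(cmd)}"
print(url)  # http://xxxx:9526/chaosblade?cmd=create%20cpu%20fullload
```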
+
+## Experience Demo
+Download the chaosblade demo image to try out the blade toolkit.
+
+
+
+Download image command:
+```shell script
+docker pull chaosbladeio/chaosblade-demo
+```
+Run the demo container:
+```shell script
+docker run -it --privileged chaosbladeio/chaosblade-demo
+```
+After entering the container, read the README.txt file to run chaos experiments. Enjoy!
+
+## Cloud Native
+The [chaosblade-operator](https://github.com/chaosblade-io/chaosblade-operator) project is a chaos experiment injection tool for cloud-native platforms. It follows the chaos experiment model to standardize experiment scenarios and defines experiments as Kubernetes CRD resources, mapping the experiment model onto Kubernetes resource attributes. This combines the chaos experiment model with Kubernetes' declarative design in a very natural way: scenarios can be developed conveniently against the chaos experiment model while still embracing Kubernetes design concepts. Chaos experiments can be created, updated, and deleted through kubectl or by calling the Kubernetes API directly from code, the resource status clearly reflects the execution state of the experiment, and fault injection on Kubernetes is standardized. Besides these methods, you can also use the chaosblade CLI to execute Kubernetes experiment scenarios and query experiment status very conveniently. For details, please read the Chinese document [Chaos Engineering Practice under Cloud Native](CLOUDNATIVE.md).
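As a sketch of this CRD style (the field names below follow the examples in the chaosblade-operator repository and may differ between versions), a Pod network delay experiment could be declared like this:

```yaml
apiVersion: chaosblade.io/v1alpha1
kind: ChaosBlade
metadata:
  name: delay-pod-network
spec:
  experiments:
  - scope: pod            # inject at the Pod level
    target: network       # network experiment target
    action: delay         # delay action
    desc: "delay pod network by 3s"
    matchers:
    - name: labels        # select Pods by label
      value: ["app=guestbook"]
    - name: namespace
      value: ["default"]
    - name: interface
      value: ["eth0"]
    - name: time          # delay in milliseconds
      value: ["3000"]
```

Applying the manifest with `kubectl apply -f` creates the experiment, and deleting the resource destroys it.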
+
+## Compile
+See [BUILD.md](BUILD.md) for the details.
+
+## Bugs and Feedback
+For bug reports, questions, and discussions, please submit [GitHub Issues](https://github.com/chaosblade-io/chaosblade/issues).
+
+You can also contact us via:
+* DingTalk group (recommended for Chinese speakers): 23177705
+* Slack group: [chaosblade-io](https://join.slack.com/t/chaosblade-io/shared_invite/zt-f0d3r3f4-TDK13Wr3QRUrAhems28p1w)
+* Gitter room: [chaosblade community](https://gitter.im/chaosblade-io/community)
+* Email: chaosblade.io.01@gmail.com
+* Twitter: [chaosblade.io](https://twitter.com/ChaosbladeI)
+
+## Contributing
+We welcome every contribution, even if it is just punctuation. See [CONTRIBUTING](CONTRIBUTING.md) for details. For the community contributor promotion ladder, see the [Contributor Ladder](https://github.com/chaosblade-io/community/blob/main/Contributor_Ladder.md).
+
+## Business Registration
+The original intention of our open source project is to lower the barrier for enterprises to adopt chaos engineering, so we highly value how the project is used in enterprises. You are welcome to register in this [ISSUE](https://github.com/chaosblade-io/chaosblade/issues/32). After registering, you will be invited to join the enterprise mail group to discuss the problems encountered when adopting chaos engineering and to share adoption experience.
+
+## Contributors
+
+### Code Contributors
+
+This project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].
+
+
+## License
+Chaosblade is licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE) for the full license text.
diff --git a/data/readmes/checkov-32495.md b/data/readmes/checkov-32495.md
new file mode 100644
index 0000000..6956d9f
--- /dev/null
+++ b/data/readmes/checkov-32495.md
@@ -0,0 +1,511 @@
+# Checkov - README (3.2.495)
+
+**Repository**: https://github.com/bridgecrewio/checkov
+**Version**: 3.2.495
+
+---
+
+[](#)
+
+[](https://prismacloud.io/?utm_source=github&utm_medium=organic_oss&utm_campaign=checkov)
+[](https://github.com/bridgecrewio/checkov/actions?query=workflow%3Abuild)
+[](https://github.com/bridgecrewio/checkov/actions?query=event%3Apush+branch%3Amaster+workflow%3Asecurity)
+[](https://github.com/bridgecrewio/checkov/actions?query=workflow%3Acoverage)
+[](https://www.checkov.io/1.Welcome/What%20is%20Checkov.html?utm_source=github&utm_medium=organic_oss&utm_campaign=checkov)
+[](https://pypi.org/project/checkov/)
+[](#)
+[](#)
+[](https://pepy.tech/project/checkov)
+[](https://hub.docker.com/r/bridgecrew/checkov)
+[](https://codifiedsecurity.slack.com/)
+
+
+**Checkov** is a static code analysis tool for infrastructure as code (IaC) and also a software composition analysis (SCA) tool for images and open source packages.
+
+It scans cloud infrastructure provisioned using [Terraform](https://terraform.io/), [Terraform plan](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Terraform%20Plan%20Scanning.md), [Cloudformation](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Cloudformation.md), [AWS SAM](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/AWS%20SAM.md), [Kubernetes](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Kubernetes.md), [Helm charts](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Helm.md), [Kustomize](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Kustomize.md), [Dockerfile](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Dockerfile.md), [Serverless](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Serverless%20Framework.md), [Bicep](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Bicep.md), [OpenAPI](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/OpenAPI.md), [ARM Templates](https://github.com/bridgecrewio/checkov/blob/main/docs/7.Scan%20Examples/Azure%20ARM%20templates.md), or [OpenTofu](https://opentofu.org/) and detects security and compliance misconfigurations using graph-based scanning.
+
+It performs [Software Composition Analysis (SCA) scanning](docs/7.Scan%20Examples/Sca.md) which is a scan of open source packages and images for Common Vulnerabilities and Exposures (CVEs).
+
+Checkov also powers [**Prisma Cloud Application Security**](https://www.prismacloud.io/prisma/cloud/cloud-code-security/?utm_source=github&utm_medium=organic_oss&utm_campaign=checkov), the developer-first platform that codifies and streamlines cloud security throughout the development lifecycle. Prisma Cloud identifies, fixes, and prevents misconfigurations in cloud resources and infrastructure-as-code files.
+
+
+
+
+
+
+
+
+
+
+## **Table of contents**
+
+- [Features](#features)
+- [Screenshots](#screenshots)
+- [Getting Started](#getting-started)
+- [Disclaimer](#disclaimer)
+- [Support](#support)
+- [Migration - v2 to v3](https://github.com/bridgecrewio/checkov/blob/main/docs/1.Welcome/Migration.md)
+
+ ## Features
+
+ * [Over 1000 built-in policies](https://github.com/bridgecrewio/checkov/blob/main/docs/5.Policy%20Index/all.md) cover security and compliance best practices for AWS, Azure and Google Cloud.
+ * Scans Terraform, Terraform Plan, Terraform JSON, CloudFormation, AWS SAM, Kubernetes, Helm, Kustomize, Dockerfile, Serverless framework, Ansible, Bicep, ARM, and OpenTofu template files.
+ * Scans Argo Workflows, Azure Pipelines, BitBucket Pipelines, Circle CI Pipelines, GitHub Actions and GitLab CI workflow files
+ * Supports context-aware policies based on in-memory graph-based scanning.
+ * Supports Python format for attribute policies and YAML format for both attribute and composite policies.
+ * Detects [AWS credentials](https://github.com/bridgecrewio/checkov/blob/main/docs/2.Basics/Scanning%20Credentials%20and%20Secrets.md) in EC2 Userdata, Lambda environment variables and Terraform providers.
+ * [Identifies secrets](https://www.prismacloud.io/prisma/cloud/secrets-security) using regular expressions, keywords, and entropy based detection.
+ * Evaluates [Terraform Provider](https://registry.terraform.io/browse/providers) settings to regulate the creation, management, and updates of IaaS, PaaS or SaaS managed through Terraform.
+ * Policies support evaluation of [variables](https://github.com/bridgecrewio/checkov/blob/main/docs/2.Basics/Handling%20Variables.md) to their optional default value.
+ * Supports in-line [suppression](https://github.com/bridgecrewio/checkov/blob/main/docs/2.Basics/Suppressing%20and%20Skipping%20Policies.md) of accepted risks or false-positives to reduce recurring scan failures. Also supports global skip from using CLI.
+ * [Output](https://github.com/bridgecrewio/checkov/blob/main/docs/2.Basics/Reviewing%20Scan%20Results.md) is currently available as CLI, [CycloneDX](https://cyclonedx.org), JSON, JUnit XML, CSV, SARIF, and GitHub Markdown, with links to remediation [guides](https://docs.prismacloud.io/en/enterprise-edition/policy-reference/).
+
+## Screenshots
+
+Scan results in CLI
+
+
+
+Scheduled scan result in Jenkins
+
+
+
+## Getting started
+
+### Requirements
+ * Python >= 3.9, <=3.12
+ * Terraform >= 0.12
+
+### Installation
+
+To install pip follow the official [docs](https://pip.pypa.io/en/stable/cli/pip_install/)
+
+```sh
+pip3 install checkov
+```
+
+Certain environments (e.g., Debian 12) may require you to install Checkov in a virtual environment
+
+```sh
+# Create and activate a virtual environment
+python3 -m venv /path/to/venv/checkov
+cd /path/to/venv/checkov
+source ./bin/activate
+
+# Install Checkov with pip
+pip install checkov
+
+# Optional: Create a symlink for easy access
+sudo ln -s /path/to/venv/checkov/bin/checkov /usr/local/bin/checkov
+```
+
+or with [Homebrew](https://formulae.brew.sh/formula/checkov) (macOS or Linux)
+
+```sh
+brew install checkov
+```
+
+### Enabling bash autocomplete
+```sh
+source <(register-python-argcomplete checkov)
+```
+### Upgrade
+
+If you installed Checkov with pip3:
+```sh
+pip3 install -U checkov
+```
+
+or with Homebrew
+
+```sh
+brew upgrade checkov
+```
+
+### Configure an input folder or file
+
+```sh
+checkov --directory /user/path/to/iac/code
+```
+
+Or a specific file or files
+
+```sh
+checkov --file /user/tf/example.tf
+```
+Or
+```sh
+checkov -f /user/cloudformation/example1.yml -f /user/cloudformation/example2.yml
+```
+
+Or a Terraform plan file in JSON format:
+```sh
+terraform init
+terraform plan -out tf.plan
+terraform show -json tf.plan > tf.json
+checkov -f tf.json
+```
+
+Note: the `terraform show` output file `tf.json` will be a single line,
+so Checkov will report all findings as line number 0:
+
+
+```sh
+Check: CKV_AWS_21: "Ensure all data stored in the S3 bucket have versioning enabled"
+ FAILED for resource: aws_s3_bucket.customer
+ File: /tf/tf.json:0-0
+ Guide: https://docs.prismacloud.io/en/enterprise-edition/policy-reference/aws-policies/s3-policies/s3-16-enable-versioning
+ ```
+
+If you have `jq` installed, you can pretty-print the JSON file across multiple lines with the following command:
+```sh
+terraform show -json tf.plan | jq '.' > tf.json
+```
+The scan result will then be much more user friendly:
+```sh
+checkov -f tf.json
+Check: CKV_AWS_21: "Ensure all data stored in the S3 bucket have versioning enabled"
+ FAILED for resource: aws_s3_bucket.customer
+ File: /tf/tf1.json:224-268
+ Guide: https://docs.prismacloud.io/en/enterprise-edition/policy-reference/aws-policies/s3-policies/s3-16-enable-versioning
+
+ 225 | "values": {
+ 226 | "acceleration_status": "",
+ 227 | "acl": "private",
+ 228 | "arn": "arn:aws:s3:::mybucket",
+
+```
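If `jq` is not installed, Python's built-in `json.tool` module can pretty-print the plan JSON the same way (a sketch; any formatter that expands the file onto multiple lines restores meaningful line numbers):

```shell
# Pretty-print the plan JSON so Checkov can report real line numbers
terraform show -json tf.plan | python3 -m json.tool > tf.json
checkov -f tf.json
```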
+
+Alternatively, specify the repo root of the HCL files used to generate the plan file with the `--repo-root-for-plan-enrichment` flag to enrich the output with the appropriate file path, line numbers, and code block of the resource(s). An added benefit is that check suppressions will be handled accordingly.
+```sh
+checkov -f tf.json --repo-root-for-plan-enrichment /user/path/to/iac/code
+```
+
+
+### Scan result sample (CLI)
+
+```sh
+Passed Checks: 1, Failed Checks: 1, Suppressed Checks: 0
+Check: "Ensure all data stored in the S3 bucket is securely encrypted at rest"
+/main.tf:
+ Passed for resource: aws_s3_bucket.template_bucket
+Check: "Ensure all data stored in the S3 bucket is securely encrypted at rest"
+/../regionStack/main.tf:
+ Failed for resource: aws_s3_bucket.sls_deployment_bucket_name
+```
+
+Start using Checkov by reading the [Getting Started](https://github.com/bridgecrewio/checkov/blob/main/docs/1.Welcome/Quick%20Start.md) page.
+
+### Using Docker
+
+
+```sh
+docker pull bridgecrew/checkov
+docker run --tty --rm --volume /user/tf:/tf --workdir /tf bridgecrew/checkov --directory /tf
+```
+Note: if you are using Python 3.6 (the default version in Ubuntu 18.04), Checkov will not work; it fails with a `ModuleNotFoundError: No module named 'dataclasses'` error. In this case, use the Docker version instead.
+
+Note that there are certain cases where redirecting `docker run --tty` output to a file - for example, if you want to save the Checkov JUnit output to a file - will cause extra control characters to be printed. This can break file parsing. If you encounter this, remove the `--tty` flag.
+
+The `--workdir /tf` flag is optional to change the working directory to the mounted volume. If you are using the SARIF output `-o sarif` this will output the results.sarif file to the mounted volume (`/user/tf` in the example above). If you do not include that flag, the working directory will be "/".
+
+### Running or skipping checks
+
+By using command line flags, you can specify to run only named checks (allow list) or run all checks except
+those listed (deny list). If you are using the platform integration via API key, you can also specify a severity threshold to skip and / or include.
+Moreover, since JSON files cannot contain comments, you can pass a regex pattern to skip secret scanning for specific JSON files.
+
+See the docs for more detailed information about how these flags work together.
+
+
+### Examples
+
+Allow only the two specified checks to run:
+```sh
+checkov --directory . --check CKV_AWS_20,CKV_AWS_57
+```
+
+Run all checks except the one specified:
+```sh
+checkov -d . --skip-check CKV_AWS_20
+```
+
+Run all checks except checks with specified patterns:
+```sh
+checkov -d . --skip-check CKV_AWS*
+```
+
+Run all checks that are MEDIUM severity or higher (requires API key):
+```sh
+checkov -d . --check MEDIUM --bc-api-key ...
+```
+
+Run all checks that are MEDIUM severity or higher, as well as check CKV_123 (assume this is a LOW severity check):
+```sh
+checkov -d . --check MEDIUM,CKV_123 --bc-api-key ...
+```
+
+Skip all checks that are MEDIUM severity or lower:
+```sh
+checkov -d . --skip-check MEDIUM --bc-api-key ...
+```
+
+Skip all checks that are MEDIUM severity or lower, as well as check CKV_789 (assume this is a high severity check):
+```sh
+checkov -d . --skip-check MEDIUM,CKV_789 --bc-api-key ...
+```
+
+Run all checks that are MEDIUM severity or higher, but skip check CKV_123 (assume this is a medium or higher severity check):
+```sh
+checkov -d . --check MEDIUM --skip-check CKV_123 --bc-api-key ...
+```
+
+Run check CKV_789, but skip it if it is a medium severity (the --check logic is always applied before --skip-check)
+```sh
+checkov -d . --skip-check MEDIUM --check CKV_789 --bc-api-key ...
+```
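As an illustration of how the two flags combine, the filtering can be sketched as follows (a hypothetical model of the documented behaviour, not Checkov's actual implementation):

```python
SEVERITIES = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def should_run(check_id, severity, checks=None, skip_checks=None):
    """Model of --check/--skip-check: the allow list is applied first."""
    rank = SEVERITIES.index(severity)
    if checks:
        # --check MEDIUM means "run checks of MEDIUM severity or higher"
        allowed = any(
            entry == check_id
            or (entry in SEVERITIES and rank >= SEVERITIES.index(entry))
            for entry in checks
        )
        if not allowed:
            return False
    if skip_checks:
        # --skip-check MEDIUM means "skip checks of MEDIUM severity or lower"
        skipped = any(
            entry == check_id
            or (entry in SEVERITIES and rank <= SEVERITIES.index(entry))
            for entry in skip_checks
        )
        if skipped:
            return False
    return True

# --check MEDIUM,CKV_123 where CKV_123 is LOW severity: still runs
print(should_run("CKV_123", "LOW", checks=["MEDIUM", "CKV_123"]))                   # True
# --skip-check MEDIUM --check CKV_789 where CKV_789 is MEDIUM: allowed, then skipped
print(should_run("CKV_789", "MEDIUM", checks=["CKV_789"], skip_checks=["MEDIUM"]))  # False
```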
+
+For Kubernetes workloads, you can also use allow/deny namespaces. For example, do not report any results for the
+kube-system namespace:
+```sh
+checkov -d . --skip-check kube-system
+```
+
+Run a scan of a container image. First pull or build the image then refer to it by the hash, ID, or name:tag:
+```sh
+checkov --framework sca_image --docker-image sha256:1234example --dockerfile-path /Users/path/to/Dockerfile --repo-id ... --bc-api-key ...
+
+checkov --docker-image :tag --dockerfile-path /User/path/to/Dockerfile --repo-id ... --bc-api-key ...
+```
+
+You can also use the shorter `--image` flag instead of `--docker-image` to scan a container image:
+```sh
+checkov --image :tag --dockerfile-path /User/path/to/Dockerfile --repo-id ... --bc-api-key ...
+```
+
+Run an SCA scan of packages in a repo:
+```sh
+checkov -d . --framework sca_package --bc-api-key ... --repo-id
+```
+
+Run a scan of a directory with environment variables removing buffering, adding debug level logs:
+```sh
+PYTHONUNBUFFERED=1 LOG_LEVEL=DEBUG checkov -d .
+```
+OR enable the environment variables for multiple runs
+```sh
+export PYTHONUNBUFFERED=1 LOG_LEVEL=DEBUG
+checkov -d .
+```
+
+Run secrets scanning on all files in MyDirectory, skipping the CKV_SECRET_6 check for JSON files whose names end with DontScan:
+```sh
+checkov -d /MyDirectory --framework secrets --repo-id ... --bc-api-key ... --skip-check CKV_SECRET_6:.*DontScan.json$
+```
+
+Run secrets scanning on all files in MyDirectory, skipping the CKV_SECRET_6 check for JSON files whose paths contain "skip_test":
+```sh
+checkov -d /MyDirectory --framework secrets --repo-id ... --bc-api-key ... --skip-check CKV_SECRET_6:.*skip_test.*json$
+```
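These skip patterns are ordinary regular expressions matched against file paths. As a quick sanity check of the two patterns above (the file names are made-up examples):

```python
import re

suffix = re.compile(r".*DontScan.json$")     # files whose names end with DontScan.json
in_path = re.compile(r".*skip_test.*json$")  # files with "skip_test" in the path

print(bool(suffix.match("config/secretsDontScan.json")))  # True
print(bool(suffix.match("config/credentials.json")))      # False
print(bool(in_path.match("tests/skip_test/data.json")))   # True
```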
+
+You can mask values in scan results by supplying a configuration file (using the `--config-file` flag) with a `mask` entry.
+Masking applies to a resource and a value (or multiple values, separated by commas).
+Example:
+```yaml
+mask:
+- aws_instance:user_data
+- azurerm_key_vault_secret:admin_password,user_passwords
+```
+In the example above, the following values will be masked:
+- `user_data` for the `aws_instance` resource
+- both `admin_password` and `user_passwords` for `azurerm_key_vault_secret`
+
+
+### Suppressing/Ignoring a check
+
+Like any static analysis tool, Checkov is limited by its analysis scope.
+For example, if a resource is managed manually, or by subsequent configuration management tooling,
+a suppression can be inserted as a simple code annotation.
+
+#### Suppression comment format
+
+To skip a check on a given Terraform definition block or CloudFormation resource, apply the following comment pattern inside its scope:
+
+`checkov:skip=<check_id>:<suppression_comment>`
+
+* `<check_id>` is one of the [available check scanners](docs/5.Policy Index/all.md)
+* `<suppression_comment>` is an optional suppression reason to be included in the output
+
+#### Example
+
+The following comment skips the `CKV_AWS_20` check on the resource identified by `foo-bucket`, where the check verifies that an AWS S3 bucket is private.
+In the example, the bucket is configured with public read access; adding the suppression comment causes the check to be skipped rather than reported as failed.
+
+```hcl-terraform
+resource "aws_s3_bucket" "foo-bucket" {
+ region = var.region
+ #checkov:skip=CKV_AWS_20:The bucket is a public static content host
+ bucket = local.bucket_name
+ force_destroy = true
+ acl = "public-read"
+}
+```
+
+The output would now contain a ``SKIPPED`` check result entry:
+
+```bash
+...
+...
+Check: "S3 Bucket has an ACL defined which allows public access."
+ SKIPPED for resource: aws_s3_bucket.foo-bucket
+ Suppress comment: The bucket is a public static content host
+ File: /example_skip_acl.tf:1-25
+
+...
+```
+To skip multiple checks, add each as a new line.
+
+```
+ #checkov:skip=CKV2_AWS_6
+ #checkov:skip=CKV_AWS_20:The bucket is a public static content host
+```
+
+To suppress checks in Kubernetes manifests, annotations are used with the following format:
+`checkov.io/skip<n>: <check_id>=<suppression_comment>`
+
+For example:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+ annotations:
+ checkov.io/skip1: CKV_K8S_20=I don't care about Privilege Escalation :-O
+ checkov.io/skip2: CKV_K8S_14
+ checkov.io/skip3: CKV_K8S_11=I have not set CPU limits as I want BestEffort QoS
+spec:
+ containers:
+...
+```
+
+#### Logging
+
+For detailed logging to stdout set up the environment variable `LOG_LEVEL` to `DEBUG`.
+
+Default is `LOG_LEVEL=WARNING`.
+
+#### Skipping directories
+To skip files or directories, use the argument `--skip-path`, which can be specified multiple times. This argument accepts regular expressions for paths relative to the current working directory. You can use it to skip entire directories and / or specific files.
+
+By default, all directories named `node_modules`, `.terraform`, and `.serverless` will be skipped, in addition to any files or directories beginning with `.`.
+To stop skipping directories beginning with `.`, override the `CKV_IGNORE_HIDDEN_DIRECTORIES` environment variable: `export CKV_IGNORE_HIDDEN_DIRECTORIES=false`.
+
+You can override the default set of directories to skip by setting the environment variable `CKV_IGNORED_DIRECTORIES`.
+Note that if you want to preserve this list and add to it, you must include these values. For example, `CKV_IGNORED_DIRECTORIES=mynewdir` will skip only that directory, but not the others mentioned above. This variable is legacy functionality; we recommend using the `--skip-path` flag.
+
+#### Console Output
+
+The console output is in colour by default. To switch to monochrome output, set the environment variable
+`ANSI_COLORS_DISABLED`.
+
+#### VS Code Extension
+
+If you want to use Checkov within VS Code, give the [Prisma Cloud extension](https://marketplace.visualstudio.com/items?itemName=PrismaCloud.prisma-cloud) a try.
+
+### Configuration using a config file
+
+Checkov can be configured using a YAML configuration file. By default, checkov looks for a `.checkov.yaml` or `.checkov.yml` file in the following places in order of precedence:
+* Directory against which checkov is run. (`--directory`)
+* Current working directory where checkov is called.
+* User's home directory.
+
+**Attention**: as a best practice, the Checkov configuration file should be loaded from a trusted source authored by a verified identity, so that the scanned files, check IDs, and loaded custom checks are as intended.
+
+Users can also pass in the path to a config file via the command line. In this case, the other config files will be ignored. For example:
+```sh
+checkov --config-file path/to/config.yaml
+```
+Users can also create a config file using the `--create-config` command, which takes the current command line args and writes them out to a given path. For example:
+```sh
+checkov --compact --directory test-dir --docker-image sample-image --dockerfile-path Dockerfile --download-external-modules True --external-checks-dir sample-dir --quiet --repo-id prisma-cloud/sample-repo --skip-check CKV_DOCKER_3,CKV_DOCKER_2 --skip-framework dockerfile secrets --soft-fail --branch develop --check CKV_DOCKER_1 --create-config /Users/sample/config.yml
+```
+This will create a config file that looks like this:
+```yaml
+branch: develop
+check:
+ - CKV_DOCKER_1
+compact: true
+directory:
+ - test-dir
+docker-image: sample-image
+dockerfile-path: Dockerfile
+download-external-modules: true
+evaluate-variables: true
+external-checks-dir:
+ - sample-dir
+external-modules-download-path: .external_modules
+framework:
+ - all
+output: cli
+quiet: true
+repo-id: prisma-cloud/sample-repo
+skip-check:
+ - CKV_DOCKER_3
+ - CKV_DOCKER_2
+skip-framework:
+ - dockerfile
+ - secrets
+soft-fail: true
+```
+
+Users can also use the `--show-config` flag to view all the args and settings and where they came from, i.e. command line, config file, environment variable, or default. For example:
+```sh
+checkov --show-config
+```
+This will display:
+```sh
+Command Line Args:   --show-config
+Environment Variables:
+  BC_API_KEY: your-api-key
+Config File (/Users/sample/.checkov.yml):
+  soft-fail: False
+  branch: master
+  skip-check: ['CKV_DOCKER_3', 'CKV_DOCKER_2']
+Defaults:
+  --output: cli
+  --framework: ['all']
+  --download-external-modules: False
+  --external-modules-download-path: .external_modules
+  --evaluate-variables: True
+```
+
+## Contributing
+
+Contributions are welcome!
+
+Start by reviewing the [contribution guidelines](https://github.com/bridgecrewio/checkov/blob/main/CONTRIBUTING.md). After that, take a look at a [good first issue](https://github.com/bridgecrewio/checkov/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22).
+
+You can even start this with one-click dev in your browser through Gitpod at the following link:
+
+[](https://gitpod.io/#https://github.com/bridgecrewio/checkov)
+
+Looking to contribute new checks? Learn how to write a new check (AKA policy) [here](https://github.com/bridgecrewio/checkov/blob/main/docs/6.Contribution/Contribution%20Overview.md).
+
+## Disclaimer
+`checkov` does not save, publish or share with anyone any identifiable customer information.
+No identifiable customer information is used to query Prisma Cloud's publicly accessible guides.
+`checkov` uses Prisma Cloud's API to enrich the results with links to remediation guides.
+To skip this API call use the flag `--skip-download`.
+
+## Support
+
+[Prisma Cloud](https://www.prismacloud.io/?utm_source=github&utm_medium=organic_oss&utm_campaign=checkov) builds and maintains Checkov to make policy-as-code simple and accessible.
+
+Start with our [Documentation](https://www.checkov.io/1.Welcome/Quick%20Start.html) for quick tutorials and examples.
+
+## Python Version Support
+We follow the official support cycle of Python, and we use automated tests for supported versions of Python.
+This means we currently support Python 3.9 - 3.13, inclusive.
+Note that Python 3.8 reached EOL in October 2024 and Python 3.9 will reach EOL in October 2025.
+If you run into any issues with any non-EOL Python version, please open an Issue.
diff --git a/data/readmes/cilium-v1190-pre3.md b/data/readmes/cilium-v1190-pre3.md
new file mode 100644
index 0000000..1a16b9a
--- /dev/null
+++ b/data/readmes/cilium-v1190-pre3.md
@@ -0,0 +1,370 @@
+# Cilium - README (v1.19.0-pre.3)
+
+**Repository**: https://github.com/cilium/cilium
+**Version**: v1.19.0-pre.3
+**Branch**: v1.19.0-pre.3
+
+---
+
+.. raw:: html
+
+
+
+
+
+
+|cii| |go-report| |clomonitor| |artifacthub| |slack| |go-doc| |rtd| |apache| |bsd| |gpl| |fossa| |gateway-api| |codespaces|
+
+Cilium is a networking, observability, and security solution with an eBPF-based
+dataplane. It provides a simple flat Layer 3 network with the ability to span
+multiple clusters in either a native routing or overlay mode. It is L7-protocol
+aware and can enforce network policies on L3-L7 using an identity based security
+model that is decoupled from network addressing.
+
+Cilium implements distributed load balancing for traffic between pods and to
+external services, and is able to fully replace kube-proxy, using efficient
+hash tables in eBPF allowing for almost unlimited scale. It also supports
+advanced functionality like integrated ingress and egress gateway, bandwidth
+management and service mesh, and provides deep network and security visibility and monitoring.
+
+A new Linux kernel technology called eBPF_ is at the foundation of Cilium. It
+supports dynamic insertion of eBPF bytecode into the Linux kernel at various
+integration points such as: network IO, application sockets, and tracepoints to
+implement security, networking and visibility logic. eBPF is highly efficient
+and flexible. To learn more about eBPF, visit `eBPF.io`_.
+
+.. image:: Documentation/images/cilium-overview.png
+ :alt: Overview of Cilium features for networking, observability, service mesh, and runtime security
+
+.. raw:: html
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Stable Releases
+===============
+
+The Cilium community maintains minor stable releases for the last three minor
+Cilium versions. Older Cilium stable versions from minor releases prior to that
+are considered EOL.
+
+For upgrades to new minor releases please consult the `Cilium Upgrade Guide`_.
+
+Listed below are the actively maintained release branches along with their latest
+patch release, corresponding image pull tags and their release notes:
+
++---------------------------------------------------------+------------+------------------------------------+----------------------------------------------------------------------------+
+| `v1.18 `__ | 2025-11-12 | ``quay.io/cilium/cilium:v1.18.4`` | `Release Notes `__ |
++---------------------------------------------------------+------------+------------------------------------+----------------------------------------------------------------------------+
+| `v1.17 `__ | 2025-11-12 | ``quay.io/cilium/cilium:v1.17.10`` | `Release Notes `__ |
++---------------------------------------------------------+------------+------------------------------------+----------------------------------------------------------------------------+
+| `v1.16 `__ | 2025-11-12 | ``quay.io/cilium/cilium:v1.16.17`` | `Release Notes `__ |
++---------------------------------------------------------+------------+------------------------------------+----------------------------------------------------------------------------+
+
+Architectures
+-------------
+
+Cilium images are distributed for AMD64 and AArch64 architectures.
+
+Software Bill of Materials
+--------------------------
+
+Starting with Cilium version 1.13.0, all images include a Software Bill of
+Materials (SBOM). The SBOM is generated in `SPDX`_ format. More information
+on this is available on `Cilium SBOM`_.
+
+.. _`SPDX`: https://spdx.dev/
+.. _`Cilium SBOM`: https://docs.cilium.io/en/latest/configuration/sbom/
+
+Development
+===========
+
+For development and testing purposes, the Cilium community publishes snapshots,
+early release candidates (RC) and CI container images built from the `main
+branch `_. These images are
+not for use in production.
+
+For testing upgrades to new development releases please consult the latest
+development build of the `Cilium Upgrade Guide`_.
+
+Listed below are branches for testing along with their snapshots or RC releases,
+corresponding image pull tags and their release notes where applicable:
+
++----------------------------------------------------------------------------+------------+-----------------------------------------+---------------------------------------------------------------------------------+
+| `main `__ | daily | ``quay.io/cilium/cilium-ci:latest`` | N/A |
++----------------------------------------------------------------------------+------------+-----------------------------------------+---------------------------------------------------------------------------------+
+| `v1.19.0-pre.2 `__ | 2025-11-03 | ``quay.io/cilium/cilium:v1.19.0-pre.2`` | `Release Notes `__ |
++----------------------------------------------------------------------------+------------+-----------------------------------------+---------------------------------------------------------------------------------+
+
+Functionality Overview
+======================
+
+.. begin-functionality-overview
+
+CNI (Container Network Interface)
+---------------------------------
+
+`Cilium as a CNI plugin `_ provides a
+fast, scalable, and secure networking layer for Kubernetes clusters. Built
+on eBPF, it offers several deployment options:
+
+* **Overlay networking:** encapsulation-based virtual network spanning all
+ hosts with support for VXLAN and Geneve. It works on almost any network
+ infrastructure, as the only requirement is IP connectivity between hosts,
+ which is typically already available.
+
+* **Native routing mode:** Use of the regular routing table of the Linux
+ host. The network is required to be capable of routing the IP addresses
+ of the application containers. It integrates with cloud routers, routing
+ daemons, and IPv6-native infrastructure.
+
+* **Flexible routing options:** Cilium can automate route learning and
+ advertisement in common topologies such as using L2 neighbor discovery
+ when nodes share a layer 2 domain, or BGP when routing across layer 3
+ boundaries.
+
+Each mode is designed for maximum interoperability with existing
+infrastructure while minimizing operational burden.
+
+Load Balancing
+--------------
+
+Cilium implements distributed load balancing for traffic between application
+containers and to/from external services. The load balancing is implemented
+in eBPF using efficient hashtables enabling high service density and low
+latency at scale.
+
+* **East-west load balancing** rewrites service connections at the socket
+ level (``connect()``), avoiding the overhead of per-packet NAT and fully
+ `replacing kube-proxy `_.
+
+* **North-south load balancing** supports XDP for high-throughput scenarios
+ and `layer 4 load balancing `_
+ including Direct Server Return (DSR), and Maglev consistent hashing.
+
+Cluster Mesh
+------------
+
+Cilium `Cluster Mesh `_ enables
+secure, seamless connectivity across multiple Kubernetes clusters. For
+operators running hybrid or multi-cloud environments, Cluster Mesh ensures
+a consistent security and connectivity experience.
+
+* **Global service discovery**: Workloads across clusters can discover and
+ connect to services as if they were local. This enables fault tolerance,
+ like automatically failing over to backends in another cluster, and
+ exposes shared services like logging, auth, or databases across
+ environments.
+
+* **Unified identity model:** Security policies are enforced based on
+ identity, not IP address, across all clusters.
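+
+As an illustration, a Kubernetes Service can be shared across the clusters of a
+Cluster Mesh by annotating it. Treat the annotation name below as an
+assumption: it has changed across Cilium releases, so check the documentation
+for your version.
+
+.. code-block:: yaml
+
+   apiVersion: v1
+   kind: Service
+   metadata:
+     name: rebel-base        # placeholder name
+     annotations:
+       # Mark the service as global across all connected clusters (assumed
+       # annotation; older releases used "io.cilium/global-service").
+       service.cilium.io/global: "true"
+   spec:
+     ports:
+       - port: 80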
+
+Network Policy
+--------------
+
+Cilium `Network Policy `_
+provides identity-aware enforcement across L3-L7. Typical container
+firewalls secure workloads by filtering on source IP addresses and
+destination ports. This concept requires the firewalls on all servers to be
+manipulated whenever a container is started anywhere in the cluster.
+
+To avoid this situation, which limits scale, Cilium assigns a
+security identity to groups of application containers which share identical
+security policies. The identity is then associated with all network packets
+emitted by the application containers, allowing the receiving node to
+validate the identity.
+
+* **Identity-based security** removes reliance on brittle IP addresses.
+
+* **L3/L4 policies** restrict traffic based on labels, protocols, and ports.
+
+* **DNS-based policies:** Allow or deny traffic to FQDNs or wildcard domains
+ (e.g., ``api.example.com``, ``*.trusted.com``). This is especially useful
+ for securing egress traffic to third-party services.
+
+* **L7-aware policies** allow filtering by HTTP method, URL path, gRPC call,
+ and more:
+
+ * Example: Allow only GET requests to ``/public/.*``.
+
+ * Enforce the presence of headers like ``X-Token: [0-9]+``.
+
+CIDR-based egress and ingress policies are also supported for controlling
+access to external IPs, ideal for integrating with legacy systems or
+regulatory boundaries.
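+
+To make the L7 example above concrete, a minimal ``CiliumNetworkPolicy``
+along these lines allows only GET requests to ``/public/.*`` (the selector
+labels and policy name are placeholders):
+
+.. code-block:: yaml
+
+   apiVersion: "cilium.io/v2"
+   kind: CiliumNetworkPolicy
+   metadata:
+     name: allow-public-get   # placeholder name
+   spec:
+     endpointSelector:
+       matchLabels:
+         app: my-service      # placeholder label
+     ingress:
+       - toPorts:
+           - ports:
+               - port: "80"
+                 protocol: TCP
+             rules:
+               http:
+                 - method: "GET"
+                   path: "/public/.*"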
+
+Service Mesh
+------------
+
+With Cilium `Service Mesh `_,
+operators gain the benefits of fine-grained traffic control, encryption,
+observability, and access control, without the cost and complexity of
+traditional proxy-based designs. Key features include:
+
+* **Mutual authentication** with automatic identity-based encryption between
+ workloads using IPSec or WireGuard.
+
+* **L7-aware policy enforcement** for security and compliance.
+
+* **Deep integration with the Kubernetes Gateway API:** Acts as a
+ `Gateway API `_ compliant data
+ plane, allowing you to declaratively manage ingress, traffic splitting, and
+ routing behavior using Kubernetes-native CRDs.
+
+Observability and Troubleshooting
+---------------------------------
+
+Observability is built into Cilium from the ground up, providing rich
+visibility that helps operators diagnose and understand system behavior
+including:
+
+* **Hubble**: A fully integrated observability platform that offers
+ real-time service maps, flow visibility with identity and label metadata,
+ and DNS-aware filtering and protocol-specific insights.
+
+* **Metrics and alerting**: Integration with Prometheus, Grafana, and other
+ monitoring systems.
+
+* **Drop reasons and audit trails**: Get actionable insights into why traffic
+ was dropped, including policy or port violations and issues like failed
+ DNS lookups.
+
+.. end-functionality-overview
+
+Getting Started
+===============
+
+* `Why Cilium?`_
+* `Getting Started`_
+* `Architecture and Concepts`_
+* `Installing Cilium`_
+* `Frequently Asked Questions`_
+* Contributing_
+
+Community
+=========
+
+Slack
+-----
+
+Join the Cilium `Slack channel `_ to chat with
+Cilium developers and other Cilium users. This is a good place to learn about
+Cilium, ask questions, and share your experiences.
+
+Special Interest Groups (SIG)
+-----------------------------
+
+See `Special Interest groups
+`_ for a list of all SIGs and their meeting times.
+
+Developer meetings
+------------------
+The Cilium developer community hangs out on Zoom to chat. Everybody is welcome.
+
+* Weekly, Wednesday,
+ 5:00 pm `Europe/Zurich time `__ (CET/CEST),
+ usually equivalent to 8:00 am PT, or 11:00 am ET. `Meeting Notes and Zoom Info`_
+* Third Wednesday of each month, 9:00 am `Japan time `__ (JST). `APAC Meeting Notes and Zoom Info`_
+
+eBPF & Cilium Office Hours livestream
+-------------------------------------
+We host a weekly community `YouTube livestream called eCHO `_ which (very loosely!) stands for eBPF & Cilium Office Hours. Join us live, catch up with past episodes, or head over to the `eCHO repo `_ and let us know your ideas for topics we should cover.
+
+Governance
+----------
+The Cilium project is governed by a group of `Maintainers and Committers `__.
+How they are selected and govern is outlined in our `governance document `__.
+
+Adopters
+--------
+A list of adopters of the Cilium project who are deploying it in production, and of their use cases,
+can be found in file `USERS.md `__.
+
+License
+=======
+
+.. _apache-license: LICENSE
+.. _bsd-license: bpf/LICENSE.BSD-2-Clause
+.. _gpl-license: bpf/LICENSE.GPL-2.0
+
+The Cilium user space components are licensed under the
+`Apache License, Version 2.0 `__.
+The BPF code templates are dual-licensed under the
+`General Public License, Version 2.0 (only) `__
+and the `2-Clause BSD License `__
+(you can use the terms of either license, at your option).
+
+.. _`Cilium Upgrade Guide`: https://docs.cilium.io/en/stable/operations/upgrade/
+.. _`Why Cilium?`: https://docs.cilium.io/en/stable/overview/intro
+.. _`Getting Started`: https://docs.cilium.io/en/stable/#getting-started
+.. _`Architecture and Concepts`: https://docs.cilium.io/en/stable/overview/component-overview/
+.. _`Installing Cilium`: https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/
+.. _`Frequently Asked Questions`: https://github.com/cilium/cilium/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3Akind%2Fquestion+
+.. _Contributing: https://docs.cilium.io/en/stable/contributing/development/
+.. _Prerequisites: https://docs.cilium.io/en/stable/operations/system_requirements/
+.. _`eBPF`: https://ebpf.io
+.. _`eBPF.io`: https://ebpf.io
+.. _`Meeting Notes and Zoom Info`: https://docs.google.com/document/d/1Y_4chDk4rznD6UgXPlPvn3Dc7l-ZutGajUv1eF0VDwQ/edit#
+.. _`APAC Meeting Notes and Zoom Info`: https://docs.google.com/document/d/1egv4qLydr0geP-GjQexYKm4tz3_tHy-LCBjVQcXcT5M/edit#
+
+.. |go-report| image:: https://goreportcard.com/badge/github.com/cilium/cilium
+ :alt: Go Report Card
+ :target: https://goreportcard.com/report/github.com/cilium/cilium
+
+.. |go-doc| image:: https://godoc.org/github.com/cilium/cilium?status.svg
+ :alt: GoDoc
+ :target: https://godoc.org/github.com/cilium/cilium
+
+.. |rtd| image:: https://readthedocs.org/projects/docs/badge/?version=latest
+ :alt: Read the Docs
+ :target: https://docs.cilium.io/
+
+.. |apache| image:: https://img.shields.io/badge/license-Apache-blue.svg
+ :alt: Apache licensed
+ :target: apache-license_
+
+.. |bsd| image:: https://img.shields.io/badge/license-BSD-blue.svg
+ :alt: BSD licensed
+ :target: bsd-license_
+
+.. |gpl| image:: https://img.shields.io/badge/license-GPL-blue.svg
+ :alt: GPL licensed
+ :target: gpl-license_
+
+.. |slack| image:: https://img.shields.io/badge/slack-cilium-brightgreen.svg?logo=slack
+ :alt: Join the Cilium slack channel
+ :target: https://slack.cilium.io
+
+.. |cii| image:: https://bestpractices.coreinfrastructure.org/projects/1269/badge
+ :alt: CII Best Practices
+ :target: https://bestpractices.coreinfrastructure.org/projects/1269
+
+.. |clomonitor| image:: https://img.shields.io/endpoint?url=https://clomonitor.io/api/projects/cncf/cilium/badge
+ :alt: CLOMonitor
+ :target: https://clomonitor.io/projects/cncf/cilium
+
+.. |artifacthub| image:: https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/cilium
+ :alt: Artifact Hub
+ :target: https://artifacthub.io/packages/helm/cilium/cilium
+
+.. |fossa| image:: https://app.fossa.com/api/projects/custom%2B162%2Fgit%40github.com%3Acilium%2Fcilium.git.svg?type=shield
+ :alt: FOSSA Status
+ :target: https://app.fossa.com/projects/custom%2B162%2Fgit%40github.com%3Acilium%2Fcilium.git?ref=badge_shield
+
+.. |gateway-api| image:: https://img.shields.io/badge/Gateway%20API%20Conformance%20v1.2.0-Cilium-green
+ :alt: Gateway API Status
+ :target: https://github.com/kubernetes-sigs/gateway-api/tree/main/conformance/reports/v1.2.0/cilium-cilium
+
+.. |codespaces| image:: https://img.shields.io/badge/Open_in_GitHub_Codespaces-gray?logo=github
+ :alt: Github Codespaces
+ :target: https://github.com/codespaces/new?hide_repo_select=true&ref=master&repo=48109239&machine=standardLinux32gb&location=WestEurope
diff --git a/data/readmes/cloud-custodian-09480.md b/data/readmes/cloud-custodian-09480.md
new file mode 100644
index 0000000..9340774
--- /dev/null
+++ b/data/readmes/cloud-custodian-09480.md
@@ -0,0 +1,305 @@
+# Cloud Custodian - README (0.9.48.0)
+
+**Repository**: https://github.com/cloud-custodian/cloud-custodian
+**Version**: 0.9.48.0
+
+---
+
+Cloud Custodian (c7n)
+=================
+
+
+
+---
+
+[](https://communityinviter.com/apps/cloud-custodian/c7n-chat)
+[](https://github.com/cloud-custodian/cloud-custodian/actions?query=workflow%3ACI+branch%3Amaster+event%3Apush)
+[](https://www.apache.org/licenses/LICENSE-2.0)
+[](https://codecov.io/gh/cloud-custodian/cloud-custodian)
+[](https://requires.io/github/cloud-custodian/cloud-custodian/requirements/?branch=master)
+[](https://bestpractices.coreinfrastructure.org/projects/3402)
+
+Cloud Custodian, also known as c7n, is a rules engine for managing
+public cloud accounts and resources. It allows users to define
+policies to enable a well-managed cloud infrastructure that's both
+secure and cost-optimized. It consolidates many of the ad hoc scripts
+organizations have into a lightweight and flexible tool, with unified
+metrics and reporting.
+
+Custodian can be used to manage AWS, Azure, and GCP environments by
+ensuring real time compliance to security policies (like encryption and
+access requirements), tag policies, and cost management via garbage
+collection of unused resources and off-hours resource management.
+
+Custodian also supports running policies on infrastructure as code assets
+to provide feedback directly on developer workstations or within CI pipelines.
+
+Custodian policies are written in simple YAML configuration files that
+enable users to specify policies on a resource type (EC2, ASG, Redshift,
+CosmosDB, PubSub Topic) and are constructed from a vocabulary of filters
+and actions.
+
+It integrates with the cloud native serverless capabilities of each
+provider to provide for real time enforcement of policies with builtin
+provisioning. Or it can be run as a simple cron job on a server to
+execute against large existing fleets.
+
+Cloud Custodian is a CNCF Incubating project, led by a community of hundreds
+of contributors.
+
+Features
+--------
+
+- Comprehensive support for public cloud services and resources with a
+ rich library of actions and filters to build policies with.
+- Run policies on infrastructure as code (terraform, etc) assets.
+- Supports arbitrary filtering on resources with nested boolean
+ conditions.
+- Dry run any policy to see what it would do.
+- Automatically provisions serverless functions and event sources (
+ AWS CloudWatchEvents, AWS Config Rules, Azure EventGrid, GCP
+ AuditLog & Pub/Sub, etc)
+- Cloud provider native metrics outputs on resources that matched a
+ policy
+- Structured outputs into cloud native object storage recording which
+ resources matched a policy.
+- Intelligent cache usage to minimize API calls.
+- Supports multi-account/subscription/project usage.
+- Battle-tested - in production on some very large cloud environments.
+
+Links
+-----
+
+- [Homepage](http://cloudcustodian.io)
+- [Docs](http://cloudcustodian.io/docs/index.html)
+- [Project Roadmap](https://github.com/orgs/cloud-custodian/projects/1)
+- [Developer Install](https://cloudcustodian.io/docs/developer/installing.html)
+- [Presentations](https://www.google.com/search?q=cloud+custodian&source=lnms&tbm=vid)
+- [YouTube Channel](https://www.youtube.com/channel/UCdeXCdFLluylWnFfS0-jbDA)
+
+Quick Install
+-------------
+
+Custodian is published on PyPI as a series of packages with the `c7n`
+prefix; it's also available as a Docker image.
+
+```shell
+$ python3 -m venv custodian
+$ source custodian/bin/activate
+(custodian) $ pip install c7n
+```
+
+
+Usage
+-----
+
+The first step to using Cloud Custodian (c7n) is writing a YAML file
+containing the policies that you want to run. Each policy specifies
+the resource type that the policy will run on, a set of filters which
+control which resources will be affected by the policy, actions which
+the policy will take on the matched resources, and a mode which
+controls how the policy will execute.
+
+The best getting started guides are the cloud provider specific tutorials.
+
+ - [AWS Getting Started](https://cloudcustodian.io/docs/aws/gettingstarted.html)
+ - [Azure Getting Started](https://cloudcustodian.io/docs/azure/gettingstarted.html)
+ - [GCP Getting Started](https://cloudcustodian.io/docs/gcp/gettingstarted.html)
+
+As a quick walk through, below are some sample policies for AWS resources.
+
+ 1. will enforce that no S3 buckets have cross-account access enabled.
+ 1. will terminate any newly launched EC2 instances that do not have an encrypted EBS volume.
+ 1. will tag any EC2 instance that does not have the following tags
+ "Environment", "AppId", and either "OwnerContact" or "DeptID" to
+ be stopped in four days.
+
+```yaml
+policies:
+  - name: s3-cross-account
+    description: |
+      Checks S3 for buckets with cross-account access and
+      removes the cross-account access.
+    resource: aws.s3
+    region: us-east-1
+    filters:
+      - type: cross-account
+    actions:
+      - type: remove-statements
+        statement_ids: matched
+
+  - name: ec2-require-non-public-and-encrypted-volumes
+    resource: aws.ec2
+    description: |
+      Provision a lambda and cloud watch event target
+      that looks at all new instances and terminates those with
+      unencrypted volumes.
+    mode:
+      type: cloudtrail
+      role: CloudCustodian-QuickStart
+      events:
+        - RunInstances
+    filters:
+      - type: ebs
+        key: Encrypted
+        value: false
+    actions:
+      - terminate
+
+  - name: tag-compliance
+    resource: aws.ec2
+    description: |
+      Schedule a resource that does not meet tag compliance policies to be
+      stopped in four days. Note a separate policy using the `marked-for-op`
+      filter is required to actually stop the instances after four days.
+    filters:
+      - State.Name: running
+      - "tag:Environment": absent
+      - "tag:AppId": absent
+      - or:
+          - "tag:OwnerContact": absent
+          - "tag:DeptID": absent
+    actions:
+      - type: mark-for-op
+        op: stop
+        days: 4
+```
+
+You can validate, test, and run Cloud Custodian against the example policy with these commands:
+
+```shell
+# Validate the configuration (note this happens by default on run)
+$ custodian validate policy.yml
+
+# Dryrun on the policies (no actions executed) to see what resources
+# match each policy.
+$ custodian run --dryrun -s out policy.yml
+
+# Run the policy
+$ custodian run -s out policy.yml
+```
+
+You can run Cloud Custodian via Docker as well:
+
+```shell
+# Download the image
+$ docker pull cloudcustodian/c7n
+$ mkdir output
+
+# Run the policy
+#
+# This will run the policy using only the environment variables for authentication
+$ docker run -it \
+ -v $(pwd)/output:/home/custodian/output \
+ -v $(pwd)/policy.yml:/home/custodian/policy.yml \
+ --env-file <(env | grep "^AWS\|^AZURE\|^GOOGLE") \
+ cloudcustodian/c7n run -v -s /home/custodian/output /home/custodian/policy.yml
+
+# Run the policy (using AWS's generated credentials from STS)
+#
+# NOTE: We mount the ``.aws/credentials`` and ``.aws/config`` files into
+# the docker container to support authentication to AWS using the same
+# credentials that are available to the local user if authenticating with STS.
+
+$ docker run -it \
+ -v $(pwd)/output:/home/custodian/output \
+ -v $(pwd)/policy.yml:/home/custodian/policy.yml \
+ -v $(cd ~ && pwd)/.aws/credentials:/home/custodian/.aws/credentials \
+ -v $(cd ~ && pwd)/.aws/config:/home/custodian/.aws/config \
+ --env-file <(env | grep "^AWS") \
+ cloudcustodian/c7n run -v -s /home/custodian/output /home/custodian/policy.yml
+```
+
+The [custodian cask
+tool](https://cloudcustodian.io/docs/tools/cask.html) is a Go binary
+that provides a transparent front end to Docker that mirrors the regular
+custodian CLI, but automatically takes care of mounting volumes.
+
+Consult the documentation for additional information, or reach out on gitter.
+
+Cloud Provider Specific Help
+----------------------------
+
+For specific instructions for AWS, Azure, and GCP, visit the relevant getting started page.
+
+- [AWS](https://cloudcustodian.io/docs/aws/gettingstarted.html)
+- [Azure](https://cloudcustodian.io/docs/azure/gettingstarted.html)
+- [GCP](https://cloudcustodian.io/docs/gcp/gettingstarted.html)
+
+Get Involved
+------------
+
+- [GitHub](https://github.com/cloud-custodian/cloud-custodian) - (This page)
+- [Slack](https://communityinviter.com/apps/cloud-custodian/c7n-chat) - Real time chat if you're looking for help or interested in contributing to Custodian!
+ - [Gitter](https://gitter.im/cloud-custodian/cloud-custodian) - (Older real time chat, we're likely migrating away from this)
+- [Linen.dev](https://www.linen.dev/s/cloud-custodian/c/general) - Follow our discussions on Linen
+- [Mailing List](https://groups.google.com/forum/#!forum/cloud-custodian) - Our project mailing list, subscribe here for important project announcements, feel free to ask questions
+- [Reddit](https://reddit.com/r/cloudcustodian) - Our subreddit
+- [StackOverflow](https://stackoverflow.com/questions/tagged/cloudcustodian) - Q&A site for developers, we keep an eye on the `cloudcustodian` tag
+- [YouTube Channel](https://www.youtube.com/channel/UCdeXCdFLluylWnFfS0-jbDA/) - We're working on adding tutorials and other useful information, as well as meeting videos
+
+Community Resources
+-------------------
+
+We have a regular community meeting that is open to all users and developers of every skill level.
+Joining the [mailing list](https://groups.google.com/forum/#!forum/cloud-custodian) will automatically send you a meeting invite.
+See the notes below for more technical information on joining the meeting.
+
+- [Community Meeting Videos](https://www.youtube.com/watch?v=qy250y0UT-4&list=PLJ2Un8H_N5uBeAAWK95SnWvm_AuNJ8q2x)
+- [Community Meeting Notes Archive](https://github.com/orgs/cloud-custodian/discussions/categories/announcements)
+- [Upcoming Community Events](https://cloudcustodian.io/events/)
+- [Cloud Custodian Annual Report 2021](https://github.com/cncf/toc/blob/main/reviews/2021-cloud-custodian-annual.md) - Annual health check provided to the CNCF outlining the health of the project
+- [Ada Logics Third Party Security Audit](https://ostif.org/cc-audit-complete/)
+
+
+Additional Tools
+----------------
+
+The Custodian project also develops and maintains a suite of additional
+tools here:
+
+- [**_Org_:**](https://cloudcustodian.io/docs/tools/c7n-org.html) Multi-account policy execution.
+
+- [**_ShiftLeft_:**](https://cloudcustodian.io/docs/tools/c7n-left.html) Shift Left ~ run policies against Infrastructure as Code assets like terraform.
+
+- [**_PolicyStream_:**](https://cloudcustodian.io/docs/tools/c7n-policystream.html) Git history as stream of logical policy changes.
+
+- [**_Salactus_:**](https://cloudcustodian.io/docs/tools/c7n-salactus.html) Scale out s3 scanning.
+
+- [**_Mailer_:**](https://cloudcustodian.io/docs/tools/c7n-mailer.html) A reference implementation of sending messages to users to notify them.
+
+- [**_Trail Creator_:**](https://cloudcustodian.io/docs/tools/c7n-trailcreator.html) Retroactive tagging of resources creators from CloudTrail
+
+- **_TrailDB_:** Cloudtrail indexing and time series generation for dashboarding.
+
+- [**_LogExporter_:**](https://cloudcustodian.io/docs/tools/c7n-logexporter.html) Cloud watch log exporting to s3
+
+- [**_Cask_:**](https://cloudcustodian.io/docs/tools/cask.html) Easy custodian exec via docker
+
+- [**_Guardian_:**](https://cloudcustodian.io/docs/tools/c7n-guardian.html) Automated multi-account Guard Duty setup
+
+- [**_Omni SSM_:**](https://cloudcustodian.io/docs/tools/omnissm.html) EC2 Systems Manager Automation
+
+- [**_Mugc_:**](https://github.com/cloud-custodian/cloud-custodian/tree/master/tools/ops#mugc) A utility used to clean up Cloud Custodian Lambda policies that are deployed in an AWS environment.
+
+Contributing
+------------
+
+See
+
+Security
+--------
+
+If you've found a security related issue, a vulnerability, or a
+potential vulnerability in Cloud Custodian please let the Cloud
+[Custodian Security Team](mailto:security@cloudcustodian.io) know with
+the details of the vulnerability. We'll send a confirmation email to
+acknowledge your report, and we'll send an additional email once we've
+confirmed or ruled out the issue.
+
+Code of Conduct
+---------------
+
+This project adheres to the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)
+
+By participating, you are expected to honor this code.
+
diff --git a/data/readmes/cloudevents-cev102.md b/data/readmes/cloudevents-cev102.md
new file mode 100644
index 0000000..cd89abb
--- /dev/null
+++ b/data/readmes/cloudevents-cev102.md
@@ -0,0 +1,153 @@
+# CloudEvents - README (ce@v1.0.2)
+
+**Repository**: https://github.com/cloudevents/spec
+**Version**: ce@v1.0.2
+
+---
+
+# CloudEvents
+
+
+
+
+
+Events are everywhere. However, event producers tend to describe events
+differently.
+
+The lack of a common way of describing events means developers must constantly
+re-learn how to consume events. This also limits the potential for libraries,
+tooling and infrastructure to aid the delivery of event data across
+environments, like SDKs, event routers or tracing systems. The portability and
+productivity we can achieve from event data is hindered overall.
+
+CloudEvents is a specification for describing event data in common formats to
+provide interoperability across services, platforms and systems.
+
+CloudEvents has received a large amount of industry interest, ranging from major
+cloud providers to popular SaaS companies. CloudEvents is hosted by the
+[Cloud Native Computing Foundation](https://cncf.io) (CNCF) and was approved as
+a Cloud Native sandbox level project on
+[May 15, 2018](https://docs.google.com/presentation/d/1KNSv70fyTfSqUerCnccV7eEC_ynhLsm9A_kjnlmU_t0/edit#slide=id.g37acf52904_1_41).
+
+## CloudEvents Documents
+
+The following documents are available ([Release Notes](misc/RELEASE_NOTES.md)):
+
+| | Latest Release | Working Draft |
+| :---------------------------- | :-----------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------: |
+| **Core Specification:** |
+| CloudEvents | [v1.0.2](cloudevents/spec.md) | [WIP](https://github.com/cloudevents/spec/tree/main/cloudevents/spec.md) |
+| |
+| **Optional Specifications:** |
+| AMQP Protocol Binding | [v1.0.2](cloudevents/bindings/amqp-protocol-binding.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/bindings/amqp-protocol-binding.md) |
+| AVRO Event Format | [v1.0.2](cloudevents/formats/avro-format.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/formats/avro-format.md) |
+| HTTP Protocol Binding | [v1.0.2](cloudevents/bindings/http-protocol-binding.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/bindings/http-protocol-binding.md) |
+| JSON Event Format | [v1.0.2](cloudevents/formats/json-format.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/formats/json-format.md) |
+| Kafka Protocol Binding | [v1.0.2](cloudevents/bindings/kafka-protocol-binding.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/bindings/kafka-protocol-binding.md) |
+| MQTT Protocol Binding | [v1.0.2](cloudevents/bindings/mqtt-protocol-binding.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/bindings/mqtt-protocol-binding.md) |
+| NATS Protocol Binding | [v1.0.2](cloudevents/bindings/nats-protocol-binding.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/bindings/nats-protocol-binding.md) |
+| WebSockets Protocol Binding | - | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/bindings/websockets-protocol-binding.md) |
+| Protobuf Event Format | [v1.0.2](cloudevents/formats/protobuf-format.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/formats/protobuf-format.md) |
+| Web hook | [v1.0.2](cloudevents/http-webhook.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/http-webhook.md) |
+| |
+| **Additional Documentation:** |
+| CloudEvents Adapters | - | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/adapters.md) |
+| CloudEvents SDK Requirements | - | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/SDK.md) |
+| Documented Extensions | - | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/documented-extensions.md) |
+| Primer | [v1.0.2](cloudevents/primer.md) | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/primer.md) |
+| Proprietary Specifications | - | [WIP](https://github.com/cloudevents/spec/blob/main/cloudevents/proprietary-specs.md) |
+
+There might be additional work-in-progress specifications being developed
+in the [`main`](https://github.com/cloudevents/spec/tree/main) branch.
+
+If you are new to CloudEvents, it is recommended that you start by reading the
+[Primer](cloudevents/primer.md) for an overview of the specification's goals
+and design decisions, and then move on to the
+[core specification](cloudevents/spec.md).
+
+Since not all event producers generate CloudEvents by default, there is
+documentation describing the recommended process for adapting some popular
+events into CloudEvents, see
+[CloudEvents Adapters](https://github.com/cloudevents/spec/blob/main/cloudevents/adapters.md).
+
+## SDKs
+
+In addition to the documentation mentioned above, there is also an
+[SDK proposal](cloudevents/SDK.md). A set of SDKs is also being developed:
+
+- [CSharp](https://github.com/cloudevents/sdk-csharp)
+- [Go](https://github.com/cloudevents/sdk-go)
+- [Java](https://github.com/cloudevents/sdk-java)
+- [Javascript](https://github.com/cloudevents/sdk-javascript)
+- [PHP](https://github.com/cloudevents/sdk-php)
+- [PowerShell](https://github.com/cloudevents/sdk-powershell)
+- [Python](https://github.com/cloudevents/sdk-python)
+- [Ruby](https://github.com/cloudevents/sdk-ruby)
+- [Rust](https://github.com/cloudevents/sdk-rust)
+
+## Community
+
+Learn more about the people and organizations who are creating a dynamic cloud
+native ecosystem by making our systems interoperable with CloudEvents.
+
+- Our [Governance](community/GOVERNANCE.md) documentation.
+- How to [contribute](community/CONTRIBUTING.md) via issues and pull requests.
+- [Contributors](community/contributors.md): people and organizations who helped
+ us get started or are actively working on the CloudEvents specification.
+- [Demos & open source](community/README.md) -- if you have something to share
+ about your use of CloudEvents, please submit a PR!
+
+## Process
+
+The CloudEvents project is working to formalize the
+[specification](cloudevents/spec.md) based on
+[design goals](cloudevents/primer.md#design-goals) which focus on
+interoperability between systems which generate and respond to events.
+
+In order to achieve these goals, the project must describe:
+
+- Common attributes of an _event_ that facilitate interoperability
+- One or more common architectures that are in active use today or planned to be
+ built by its members
+- How events are transported from producer to consumer via at least one protocol
+- Whatever else needs to be identified and resolved for interoperability
+
+## Communications
+
+The main mailing list for e-mail communications:
+
+- Send emails to: [cncf-cloudevents](mailto:cncf-cloudevents@lists.cncf.io)
+- To subscribe see: https://lists.cncf.io/g/cncf-cloudevents
+- Archives are at: https://lists.cncf.io/g/cncf-cloudevents/topics
+
+And a #cloudevents Slack channel under
+[CNCF's Slack workspace](http://slack.cncf.io/).
+
+For SDK related comments and questions:
+
+- Email to: [cncf-cloudevents-sdk](mailto:cncf-cloudevents-sdk@lists.cncf.io)
+- To subscribe see: https://lists.cncf.io/g/cncf-cloudevents-sdk
+- Archives are at: https://lists.cncf.io/g/cncf-cloudevents-sdk/topics
+- Slack: #cloudeventssdk on [CNCF's Slack workspace](http://slack.cncf.io/)
+
+## Meeting Time
+
+See the [CNCF public events calendar](https://www.cncf.io/community/calendar/).
+This specification is being developed by the
+[CNCF Serverless Working Group](https://github.com/cncf/wg-serverless). This
+working group meets every Thursday at 9AM PT (USA Pacific)
+([World Time Zone Converter](http://www.thetimezoneconverter.com/?t=9:00%20am&tz=San%20Francisco&)):
+
+Please see the
+[meeting minutes doc](https://docs.google.com/document/d/1OVF68rpuPK5shIHILK9JOqlZBbfe91RNzQ7u_P7YCDE/edit#)
+for the latest information on how to join the calls.
+
+Recordings of our calls are available
+[here](https://www.youtube.com/channel/UC70hQml92GsoNgnB-CKNEXg/videos), and
+older ones are
+[here](https://www.youtube.com/playlist?list=PLj6h78yzYM2Ph7YoBIgsZNW_RGJvNlFOt).
+
+Periodically, the group may have in-person meetings that coincide with a major
+conference. Please see the
+[meeting minutes doc](https://docs.google.com/document/d/1OVF68rpuPK5shIHILK9JOqlZBbfe91RNzQ7u_P7YCDE/edit#)
+for any future plans.
diff --git a/data/readmes/cloudnativepg-v1280-rc2.md b/data/readmes/cloudnativepg-v1280-rc2.md
new file mode 100644
index 0000000..22ad3b3
--- /dev/null
+++ b/data/readmes/cloudnativepg-v1280-rc2.md
@@ -0,0 +1,198 @@
+# CloudNativePG - README (v1.28.0-rc2)
+
+**Repository**: https://github.com/cloudnative-pg/cloudnative-pg
+**Version**: v1.28.0-rc2
+
+---
+
+[][cncf-landscape]
+[][latest-release]
+[][license]
+[][openssf]
+[![OpenSSF Scorecard Badge][openssf-scorecard-badge]][openssf-socrecard-view]
+[![Documentation][documentation-badge]][documentation]
+[][stackoverflow]
+[![FOSSA Status][fossa-badge]][fossa]
+[](https://clomonitor.io/projects/cncf/cloudnative-pg)
+[](https://artifacthub.io/packages/search?repo=cloudnative-pg)
+
+# Welcome to the CloudNativePG Project!
+
+**CloudNativePG (CNPG)** is an open-source platform designed to seamlessly
+manage [PostgreSQL](https://www.postgresql.org/) databases in Kubernetes
+environments. It covers the entire operational lifecycle—from deployment to
+ongoing maintenance—through its core component, the CloudNativePG operator.
+
+## Table of Contents
+
+- [Code of Conduct](CODE_OF_CONDUCT.md)
+- [Governance Policies](https://github.com/cloudnative-pg/governance/blob/main/GOVERNANCE.md)
+- [Contributing](CONTRIBUTING.md)
+- [Adopters](ADOPTERS.md)
+- [Commercial Support](https://cloudnative-pg.io/support/)
+- [License](LICENSE)
+
+## Getting Started
+
+The best way to get started is the [Quickstart Guide](https://cloudnative-pg.io/documentation/current/quickstart/).
+
+## Scope
+
+### Mission
+
+CloudNativePG aims to increase PostgreSQL adoption within Kubernetes by making
+it an integral part of the development process and GitOps-driven CI/CD
+automation.
+
+### Core Principles & Features
+
+Designed by PostgreSQL experts for Kubernetes administrators, CloudNativePG
+follows a Kubernetes-native approach to PostgreSQL primary/standby cluster
+management. Instead of relying on external high-availability tools (like
+Patroni, repmgr, or Stolon), it integrates directly with the Kubernetes API to
+automate database operations that a skilled DBA would perform manually.
+
+Key design decisions include:
+
+- Direct integration with Kubernetes API: The PostgreSQL cluster’s status is
+ available directly in the `Cluster` resource, allowing users to inspect it
+ via the Kubernetes API.
+- Operator pattern: The operator ensures that the desired PostgreSQL state is
+ reconciled automatically, following Kubernetes best practices.
+- Immutable application containers: Updates follow an immutable infrastructure
+ model, as explained in
+ ["Why EDB Chose Immutable Application Containers"](https://www.enterprisedb.com/blog/why-edb-chose-immutable-application-containers).
+
+### How CloudNativePG Works
+
+The operator continuously monitors and updates the PostgreSQL cluster state.
+Examples of automated actions include:
+
+- Failover management: If the primary instance fails, the operator elects a new
+ primary, updates the cluster status, and orchestrates the transition.
+- Scaling read replicas: When the number of desired replicas changes, the
+ operator provisions or removes resources such as persistent volumes, secrets,
+ and config maps while managing streaming replication.
+- Service updates: Kubernetes remains the single source of truth, ensuring
+ that PostgreSQL service endpoints are always up to date.
+- Rolling updates: When an image is updated, the operator follows a rolling
+ strategy—first updating replica pods before performing a controlled
+ switchover for the primary.
+
+CloudNativePG manages additional Kubernetes resources to enhance PostgreSQL
+management, including: `Backup`, `ClusterImageCatalog`, `Database`,
+`ImageCatalog`, `Pooler`, `Publication`, `ScheduledBackup`, and `Subscription`.
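+To get a feel for the declarative approach, here is a minimal `Cluster`
+manifest (a sketch along the lines of the quickstart; the name and storage
+size are arbitrary):
+
+```yaml
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3        # one primary plus two standby replicas
+  storage:
+    size: 1Gi
+```
+
+Applying it with `kubectl apply -f` is enough for the operator to bootstrap
+the primary, join the replicas via streaming replication, and keep the
+declared state reconciled.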
+
+## Out of Scope
+
+- **Kubernetes only:** CloudNativePG is dedicated to vanilla Kubernetes
+ maintained by the [Cloud Native Computing Foundation
+ (CNCF)](https://kubernetes.io/).
+- **PostgreSQL only:** CloudNativePG is dedicated to vanilla PostgreSQL
+ maintained by the [PostgreSQL Global Development Group
+ (PGDG)](https://www.postgresql.org/about/).
+- **No support for forks:** Features from PostgreSQL forks will only be
+ considered if they can be integrated as extensions or pluggable frameworks.
+- **Not a general-purpose database operator:** CloudNativePG does not support
+ other databases (e.g., MariaDB).
+
+CloudNativePG can be extended via the [CNPG-I plugin interface](https://github.com/cloudnative-pg/cnpg-i).
+
+## Communications
+
+- [GitHub Discussions](https://github.com/cloudnative-pg/cloudnative-pg/discussions)
+- [Slack](https://cloud-native.slack.com/archives/C08MAUJ7NPM)
+ (join the [CNCF Slack Workspace](https://communityinviter.com/apps/cloud-native/cncf)).
+- [Twitter](https://twitter.com/CloudNativePg)
+- [Mastodon](https://mastodon.social/@CloudNativePG)
+- [Bluesky](https://bsky.app/profile/cloudnativepg.bsky.social)
+
+## Resources
+
+- [Roadmap](ROADMAP.md)
+- [Website](https://cloudnative-pg.io)
+- [FAQ](docs/src/faq.md)
+- [Blog](https://cloudnative-pg.io/blog/)
+- [CloudNativePG plugin Interface (CNPG-I)](https://github.com/cloudnative-pg/cnpg-i).
+
+## Adopters
+
+A list of publicly known users of the CloudNativePG operator is in [ADOPTERS.md](ADOPTERS.md).
+Help us grow our community and CloudNativePG by adding yourself and your
+organization to this list!
+
+### CloudNativePG at KubeCon
+
+- November 10, 2025, KubeCon North America 2025 in Atlanta: ["Project Lightning Talk: CloudNativePG: Running Postgres The Kubernetes Way"](https://www.youtube.com/watch?v=pYwYwehQX3U&t=4s) - Gabriele Bartolini, EDB
+- November 11, 2025, KubeCon North America 2025 in Atlanta: ["Modern PostgreSQL Authorization With Keycloak: Cloud Native Identity Meets Database Security"](https://www.youtube.com/watch?v=TYgPemq06fg) - Yoshiyuki Tabata, Hitachi, Ltd. & Gabriele Bartolini, EDB
+- November 13, 2025, KubeCon North America 2025 in Atlanta: ["Quorum-Based Consistency for Cluster Changes With CloudNativePG Operator"](https://www.youtube.com/watch?v=iQUOO3-JRK4&list=PLj6h78yzYM2MLSW4tUDO2gs2pR5UpiD0C&index=67) - Jeremy Schneider, GEICO Tech & Gabriele Bartolini, EDB
+- April 4, 2025, KubeCon Europe in London: ["Consistent Volume Group Snapshots, Unraveling the Magic"](https://sched.co/1tx8g) - Leonardo Cecchi (EDB) and Xing Yang (VMware)
+- November 11, 2024, Cloud Native Rejekts NA 2024: ["Maximising Microservice Databases with Kubernetes, Postgres, and CloudNativePG"](https://www.youtube.com/watch?v=uBzl_stoxoc&ab_channel=CloudNativeRejekts) - Gabriele Bartolini (EDB) and Leonardo Cecchi (EDB)
+- March 21, 2024, KubeCon Europe 2024 in Paris: ["Scaling Heights: Mastering Postgres Database Vertical Scalability with Kubernetes Storage Magic"](https://kccnceu2024.sched.com/event/1YeM4/scaling-heights-mastering-postgres-database-vertical-scalability-with-kubernetes-storage-magic-gabriele-bartolini-edb-gari-singh-google) - Gari Singh, Google & Gabriele Bartolini, EDB
+- March 19, 2024, Data on Kubernetes Day at KubeCon Europe 2024 in Paris: ["From Zero to Hero: Scaling Postgres in Kubernetes Using the Power of CloudNativePG"](https://colocatedeventseu2024.sched.com/event/1YFha/from-zero-to-hero-scaling-postgres-in-kubernetes-using-the-power-of-cloudnativepg-gabriele-bartolini-edb) - Gabriele Bartolini, EDB
+- November 7, 2023, KubeCon North America 2023 in Chicago: ["Disaster Recovery with Very Large Postgres Databases (in Kubernetes)"](https://kccncna2023.sched.com/event/1R2ml/disaster-recovery-with-very-large-postgres-databases-gabriele-bartolini-edb-michelle-au-google) - Michelle Au, Google & Gabriele Bartolini, EDB
+- October 27, 2022, KubeCon North America 2022 in Detroit: ["Data On Kubernetes, Deploying And Running PostgreSQL And Patterns For Databases In a Kubernetes Cluster"](https://kccncna2022.sched.com/event/182GB/data-on-kubernetes-deploying-and-running-postgresql-and-patterns-for-databases-in-a-kubernetes-cluster-chris-milsted-ondat-gabriele-bartolini-edb) - Chris Milsted, Ondat & Gabriele Bartolini, EDB
+
+### Useful links
+
+- ["Quorum-Based Consistency for Cluster Changes With CloudNativePG Operator"](https://www.youtube.com/watch?v=sRF09UMAlsI) (webinar) - Jeremy Schneider, GEICO Tech & Leonardo Cecchi, EDB
+- [Data on Kubernetes (DoK) Community](https://dok.community/)
+- ["Cloud Neutral Postgres Databases with Kubernetes and CloudNativePG" by Gabriele Bartolini](https://www.cncf.io/blog/2024/11/20/cloud-neutral-postgres-databases-with-kubernetes-and-cloudnativepg/) (November 2024)
+- ["How to migrate your PostgreSQL database in Kubernetes with ~0 downtime from anywhere" by Gabriele Bartolini](https://gabrielebartolini.it/articles/2024/03/cloudnativepg-recipe-5-how-to-migrate-your-postgresql-database-in-kubernetes-with-~0-downtime-from-anywhere/) (March 2024)
+- ["Maximizing Microservice Databases with Kubernetes, Postgres, and CloudNativePG" by Gabriele Bartolini](https://gabrielebartolini.it/articles/2024/02/maximizing-microservice-databases-with-kubernetes-postgres-and-cloudnativepg/) (February 2024)
+- ["Recommended Architectures for PostgreSQL in Kubernetes" by Gabriele Bartolini](https://www.cncf.io/blog/2023/09/29/recommended-architectures-for-postgresql-in-kubernetes/) (September 2023)
+- ["The Current State of Major PostgreSQL Upgrades with CloudNativePG" by Gabriele Bartolini](https://www.enterprisedb.com/blog/current-state-major-postgresql-upgrades-cloudnativepg-kubernetes) (August 2023)
+- ["The Rise of the Kubernetes Native Database" by Jeff Carpenter](https://thenewstack.io/the-rise-of-the-kubernetes-native-database/) (December 2022)
+- ["Why Run Postgres in Kubernetes?" by Gabriele Bartolini](https://cloudnativenow.com/kubecon-cnc-eu-2022/why-run-postgres-in-kubernetes/) (May 2022)
+- ["Shift-Left Security: The Path To PostgreSQL On Kubernetes" by Gabriele Bartolini](https://www.tfir.io/shift-left-security-the-path-to-postgresql-on-kubernetes/) (April 2021)
+- ["Local Persistent Volumes and PostgreSQL usage in Kubernetes" by Gabriele Bartolini](https://www.2ndquadrant.com/en/blog/local-persistent-volumes-and-postgresql-usage-in-kubernetes/) (June 2020)
+
+---
+
+
+CloudNativePG was originally built and sponsored by EDB.
+
+
+
+
+
+
+
+
+
+
+---
+
+
+Postgres, PostgreSQL, and the Slonik Logo
+are trademarks or registered trademarks of the PostgreSQL Community Association
+of Canada, and used with their permission.
+
+
+[](https://pkg.go.dev/github.com/clusternet/clusternet)
+[](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+[](https://bestpractices.coreinfrastructure.org/projects/7185)
+[](https://goreportcard.com/report/github.com/clusternet/clusternet)
+
+[](https://github.com/clusternet/clusternet/releases)
+[](https://codecov.io/gh/clusternet/clusternet)
+[](https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Fclusternet%2Fclusternet?ref=badge_shield)
+
+----
+
+Managing Your Clusters (including public, private, hybrid, edge, etc.) as easily as Visiting the Internet.
+
+Out of the Box.
+
+A CNCF([Cloud Native Computing Foundation](https://cncf.io/)) Sandbox Project.
+
+----
+
+
+
+Clusternet (**Cluster** Inter**net**) is an open source ***add-on*** that helps you manage millions of
+Kubernetes clusters as easily as visiting the Internet. Whether the clusters run on public cloud, private
+cloud, hybrid cloud, or at the edge, Clusternet sets up network tunnels in a configurable way and lets you
+manage and visit them all as if they were running locally. This also eliminates the need to juggle a
+different management tool for each cluster.
+
+**Clusternet can also help deploy and coordinate applications to multiple clusters from a single set of APIs in a
+hosting cluster.**
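+As a sketch of that single-API model (the schema follows upstream
+`Subscription` examples; the names and the cluster-id label value here are
+hypothetical), a `Subscription` that fans a Deployment out to matching child
+clusters might look like:
+
+```yaml
+apiVersion: apps.clusternet.io/v1alpha1
+kind: Subscription
+metadata:
+  name: app-demo
+  namespace: default
+spec:
+  subscribers:          # child clusters selected by labels
+    - clusterAffinity:
+        matchLabels:
+          clusters.clusternet.io/cluster-id: dc91021d-2361-4f6d-a404-7c33b9e01118
+  feeds:                # resources to deploy to the selected clusters
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: my-nginx
+      namespace: default
+```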
+
+Clusternet also provides a Kubernetes-styled API, so you can keep working the Kubernetes way, such as
+using a KubeConfig, to visit a certain managed Kubernetes cluster or a Kubernetes service.
+
+Clusternet supports multiple platforms, including `linux/amd64`, `linux/arm64`, `linux/ppc64le`,
+`linux/s390x`, `linux/386`, and `linux/arm`.
+
+----
+
+## Core Features
+
+- Kubernetes Multi-Cluster Management and Governance
+ - managing Kubernetes clusters running in cloud providers, such as AWS, Google Cloud, Tencent Cloud, Alibaba Cloud,
+ etc.
+ - managing on-premise Kubernetes clusters
+ - managing any [Certified Kubernetes Distributions](https://www.cncf.io/certification/software-conformance/), such
+ as [k3s](https://github.com/k3s-io/k3s)
+ - managing Kubernetes clusters running at the edge
+ - automatically discovering and registering clusters created by [cluster-api](https://github.com/kubernetes-sigs/cluster-api)
+ - parent cluster can also register itself as a child cluster to run workloads
+ - managing Kubernetes above v1.17.x (Learn more
+ about [Kubernetes Version Skew](https://clusternet.io/docs/introduction/#kubernetes-version-skew))
+ - visiting any managed clusters with dynamic RBAC rules (Learn more
+ from [this tutorial](https://clusternet.io/docs/tutorials/cluster-management/visiting-child-clusters-with-rbac/))
+ - cluster auto-labelling based on [Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery)
+- Application Coordinations
+ - Scheduling **Framework** (`in-tree` plugins, `out-of-tree` plugins)
+ - Cross-Cluster Scheduling
+ - replication scheduling
+ - static dividing scheduling by weight
+ - dynamic dividing scheduling by capacity
+ - cluster resource predictor **framework** for `in-tree` and `out-of-tree` implementations
+ - various deployment topologies for cluster resource predictors
+ - subgroup cluster scheduling
+ - Various Resource Types
+ - Kubernetes native objects, such as `Deployment`, `StatefulSet`, etc.
+ - CRDs
+ - helm charts, including [OCI-based Helm charts](https://helm.sh/docs/topics/registries/)
+ - Resource interpretations with `in-tree` or `out-of-tree` controllers
+ - [Setting Overrides](https://clusternet.io/docs/tutorials/multi-cluster-apps/setting-overrides/)
+ - two-stage priority based override strategies
+ - enables easy rollback of overrides
+ - cross-cluster canary rollout
+ - Multi-Cluster Services
+ - multi-cluster services discovery with [mcs-api](https://github.com/kubernetes-sigs/mcs-api)
+- CLI
+ - providing a kubectl plugin, which can be installed with `kubectl krew install clusternet`
+ - consistent user experience with `kubectl`
+ - create/update/watch/delete multi-cluster resources
+ - interacting with any child clusters the same as local cluster
+- Client-go
+ - easy to integrate via
+ a [client-go wrapper](https://github.com/clusternet/clusternet/blob/main/examples/clientgo/READEME.md)
+
+## Architecture
+
+
+
+Clusternet is a lightweight addon that consists of four components, `clusternet-agent`, `clusternet-scheduler`,
+`clusternet-controller-manager` and `clusternet-hub`.
+
+Explore the architecture of Clusternet on [clusternet.io](https://clusternet.io/docs/introduction/#architecture).
+
+## To start using Clusternet
+
+See [Kubernetes Version Skew](https://clusternet.io/docs/introduction/#kubernetes-version-skew) for the supported Kubernetes version matrix.
+
+
+
+## Contact
+
+If you've got any questions, please feel free to contact us in the following ways:
+
+- [Open a GitHub issue](https://github.com/clusternet/clusternet/issues/new/choose)
+- [Mailing list](mailto:clusternet@googlegroups.com)
+- [Join the discussion group](https://groups.google.com/g/clusternet)
+
+## Contributing & Developing
+
+If you want to participate and become a contributor to Clusternet, please refer to our
+[CONTRIBUTING](CONTRIBUTING.md) document for details.
+
+A [developer guide](https://clusternet.io/docs/developer-guide/) is available to help you
+
+- build binaries for all platforms, such as `darwin/amd64`, `linux/amd64`, `linux/arm64`, etc.;
+- build docker images for multiple platforms, such as `linux/amd64`, `linux/arm64`, etc.;
+
+---
+
+
+
+# Clusterpedia
+
+[](/LICENSE)
+[](https://goreportcard.com/report/github.com/clusterpedia-io/clusterpedia)
+[](https://github.com/clusterpedia-io/clusterpedia/releases)
+[](https://artifacthub.io/packages/search?repo=clusterpedia)
+[](https://bestpractices.coreinfrastructure.org/projects/5539)
+[](https://cloud-native.slack.com/messages/clusterpedia)
+
+The name Clusterpedia is inspired by Wikipedia. Clusterpedia is an encyclopedia of multi-cluster resources: it synchronizes, searches for, and simply controls resources across clusters.
+
+Clusterpedia can synchronize resources from multiple clusters and, while staying compatible with the Kubernetes OpenAPI, provide more powerful search features to help you quickly and easily find any multi-cluster resource you are looking for.
+
+> Clusterpedia's capability is not limited to searching for and viewing resources; in the future it will also simply control them, just as Wikipedia supports editing entries.
+
+
+
+**Clusterpedia is a [Cloud Native Computing Foundation](https://cncf.io/) sandbox project.**
+> If you want to join the clusterpedia channel on CNCF slack, please **[get invite to CNCF slack](https://slack.cncf.io/)** and then join the [#clusterpedia](https://cloud-native.slack.com/messages/clusterpedia) channel.
+
+## Why Clusterpedia
+Clusterpedia can be deployed as a standalone platform or integrated with [Cluster API](https://github.com/kubernetes-sigs/cluster-api), [Karmada](https://github.com/karmada-io/karmada), [Clusternet](https://github.com/clusternet/clusternet), [vCluster](https://github.com/loft-sh/vcluster), [KubeVela](https://github.com/kubevela/kubevela) and other multi-cloud platforms
+
+### Automatic synchronization of clusters managed by multi-cloud platforms
+Clusterpedia can automatically synchronize the resources within clusters managed by a multi-cloud platform.
+
+Users do not need to maintain Clusterpedia manually; it works just like an internal component of the multi-cloud platform.
+
+Learn more about [Interfacing to Multi-Cloud Platforms](https://clusterpedia.io/docs/usage/interfacing-to-multi-cloud-platforms/)
+
+### More retrieval features and compatibility with **Kubernetes OpenAPI**
+* Support for retrieving resources using `kubectl`, `client-go` or `controller-runtime/client`, [client-go example](https://github.com/clusterpedia-io/client-go/blob/main/examples/list-clusterpedia-resources/main.go)
+* The resource metadata can be retrieved via API or [client-go/metadata](https://pkg.go.dev/k8s.io/client-go/metadata)
+* Rich retrieval conditions: [Filter by cluster/namespace/name/creation](https://clusterpedia.io/docs/usage/search/multi-cluster/#basic-features), [Search by parent or ancestor owner](https://clusterpedia.io/docs/usage/search/multi-cluster/#search-by-parent-or-ancestor-owner), [Multi-Cluster Label Selector](https://clusterpedia.io/docs/usage/search/#label-selector), [Enhanced Field Selector](https://clusterpedia.io/docs/usage/search/#field-selector), [Custom Search Conditions](https://clusterpedia.io/docs/usage/search/#advanced-searchcustom-conditional-search), etc.
+### Support for importing Kubernetes 1.10+
+### Automatic conversion between different versions of Kubernetes resources
+* Even if you import clusters running different versions of Kubernetes, you can still use the same resource version to retrieve resources
+> For example, we can use `v1`, `v1beta2`, `v1beta1` version to retrieve the Deployments resources in different clusters.
+>
+> Notes: The version of *deployments* is `v1beta1` in Kubernetes 1.10 and it is `v1` in Kubernetes 1.24.
+```bash
+$ kubectl get --raw "/apis/clusterpedia.io/v1beta1/resources/apis/apps" | jq
+{
+ "kind": "APIGroup",
+ "apiVersion": "v1",
+ "name": "apps",
+ "versions": [
+ {
+ "groupVersion": "apps/v1",
+ "version": "v1"
+ },
+ {
+ "groupVersion": "apps/v1beta2",
+ "version": "v1beta2"
+ },
+ {
+ "groupVersion": "apps/v1beta1",
+ "version": "v1beta1"
+ }
+ ],
+ "preferredVersion": {
+ "groupVersion": "apps/v1",
+ "version": "v1"
+ }
+}
+```
+### A single API can be used to retrieve different types of resources
+* Use [`Collection Resource`](https://clusterpedia.io/docs/concepts/collection-resource/) to retrieve different types of resources, such as `Deployment`, `DaemonSet`, `StatefulSet`.
+```bash
+$ kubectl get collectionresources
+NAME RESOURCES
+any *
+workloads deployments.apps,daemonsets.apps,statefulsets.apps
+kuberesources .*,*.admission.k8s.io,*.admissionregistration.k8s.io,*.apiextensions.k8s.io,*.apps,*.authentication.k8s.io,*.authorization.k8s.io,*.autoscaling,*.batch,*.certificates.k8s.io,*.coordination.k8s.io,*.discovery.k8s.io,*.events.k8s.io,*.extensions,*.flowcontrol.apiserver.k8s.io,*.imagepolicy.k8s.io,*.internal.apiserver.k8s.io,*.networking.k8s.io,*.node.k8s.io,*.policy,*.rbac.authorization.k8s.io,*.scheduling.k8s.io,*.storage.k8s.io
+```
+### Diverse policies and intelligent synchronization
+* [Wildcards](https://clusterpedia.io/docs/usage/sync-resources/#using-wildcards-to-sync-resources) can be used to sync all types of resources within a specified group or cluster.
+* [Support for synchronizing all custom resources](https://clusterpedia.io/docs/usage/sync-resources/#sync-all-custom-resources)
+* The types and versions of resources that Clusterpedia synchronizes adapt to your CRD and Aggregated API (AA) changes
+### Unify the search entry for master clusters and multi-cluster resources
+* Based on [Aggregated API](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/), the entry point for multi-cluster retrieval is the same as that of the master cluster (IP:PORT)
+### Very low memory usage and weak network optimization
+* Optimized informer caches keep memory usage very low during resource synchronization.
+* Automatic start/stop synchronization based on cluster health status
+### High availability
+### No dependency on specific storage components
+Clusterpedia is not tied to any specific storage component; it uses a storage layer to attach specific storage components,
+and storage layers for **graph databases** and **ES** will also be added in the future.
+
+## Architecture
+
+The architecture consists of four parts:
+
+* **Clusterpedia APIServer**: Register to `Kubernetes APIServer` by the means of [Aggregated API](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) and provide services through a unified entrance
+* **ClusterSynchro Manager**: Manage the cluster synchro that is used to synchronize cluster resources
+* **Storage Layer**: Connect with a specific storage component and then register to Clusterpedia APIServer and ClusterSynchro Manager via a storage layer interface
+* **Storage Component**: A specific storage facility such as **MySQL**, **PostgreSQL**, **Redis** or other **Graph Databases**
+
+In addition, Clusterpedia will use the Custom Resource - *PediaCluster* to implement cluster authentication and configure resources for synchronization.
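+As a sketch (the group/version and field names follow upstream examples; the
+server address, token, and resource list here are hypothetical), a
+*PediaCluster* might look like:
+
+```yaml
+apiVersion: cluster.clusterpedia.io/v1alpha2
+kind: PediaCluster
+metadata:
+  name: cluster-example
+spec:
+  apiserver: "https://10.30.43.43:6443"   # child cluster endpoint
+  tokenData: <base64-encoded-token>
+  syncResources:                          # resources to synchronize
+    - group: apps
+      resources:
+        - deployments
+    - group: ""
+      resources:
+        - pods
+```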
+
+Clusterpedia also provides a `Default Storage Layer` that can connect with **MySQL** and **PostgreSQL**.
+> Clusterpedia is agnostic to the specific storage components used;
+> you can choose or implement a storage layer according to your own needs,
+> and then register the storage layer in Clusterpedia as a plug-in
+
+---
+[Installation](https://clusterpedia.io/docs/installation/) | [Import Clusters](https://clusterpedia.io/docs/usage/import-clusters/) | [Sync Cluster Resources](https://clusterpedia.io/docs/usage/sync-resources/)
+---
+
+## Search Label and URL Query
+|Role| Search label key|URL query|
+| -- | --------------- | ------- |
+|Filter cluster names|`search.clusterpedia.io/clusters`|`clusters`|
+|Filter namespaces|`search.clusterpedia.io/namespaces`|`namespaces`|
+|Filter resource names|`search.clusterpedia.io/names`|`names`|
+|Fuzzy Search by resource name|`internalstorage.clusterpedia.io/fuzzy-name`|-|
+|Since creation time|`search.clusterpedia.io/since`|`since`|
+|Before creation time|`search.clusterpedia.io/before`|`before`|
+|Specified Owner UID|`search.clusterpedia.io/owner-uid`|`ownerUID`|
+|Specified Owner Seniority|`search.clusterpedia.io/owner-seniority`|`ownerSeniority`|
+|Specified Owner Name|`search.clusterpedia.io/owner-name`|`ownerName`|
+|Specified Owner Group Resource|`search.clusterpedia.io/owner-gr`|`ownerGR`|
+|Order by fields|`search.clusterpedia.io/orderby`|`orderby`|
+|Set page size|`search.clusterpedia.io/size`|`limit`|
+|Set page offset|`search.clusterpedia.io/offset`|`continue`|
+|Response include Continue|`search.clusterpedia.io/with-continue`|`withContinue`|
+|Response include remaining count|`search.clusterpedia.io/with-remaining-count`|`withRemainingCount`|
+|[Custom Where SQL](https://clusterpedia.io/docs/usage/search/#advanced-searchcustom-conditional-search)|-|`whereSQL`|
+|[Get only the metadata of the collection resource](https://clusterpedia.io/docs/usage/search/collection-resource#only-metadata) | - |`onlyMetadata` |
+|[Specify the groups of `any collectionresource`](https://clusterpedia.io/docs/usage/search/collection-resource#any-collectionresource) | - | `groups` |
+|[Specify the resources of `any collectionresource`](https://clusterpedia.io/docs/usage/search/collection-resource#any-collectionresource) | - | `resources` |
+
+**Both Search Labels and URL Query support the same operators as Label Selectors:**
+* `exist`, `not exist`
+* `=`, `==`, `!=`
+* `in`, `notin`
+
+For more information, see [Search Conditions](https://clusterpedia.io/docs/usage/search/),
+[Label Selector](https://clusterpedia.io/docs/usage/search/#label-selector) and [Field Selector](https://clusterpedia.io/docs/usage/search/#field-selector).
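For direct access to the Clusterpedia apiserver, the URL query parameters in the table above can be assembled with any HTTP tooling. A minimal sketch in Python; the resource path below is illustrative, and the real path depends on how the apiserver is exposed in your cluster:

```python
from urllib.parse import urlencode

# Illustrative resource path; adjust to how your Clusterpedia apiserver is exposed.
base = "/apis/clusterpedia.io/v1beta1/resources/apis/apps/v1/deployments"

# URL query equivalents of the search labels in the table above.
params = {
    "clusters": "cluster-1,cluster-2",
    "namespaces": "kube-system,default",
    "orderby": "name",
    "limit": 20,
}

url = f"{base}?{urlencode(params)}"
print(url)
```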
+
+## Usage Samples
+You can search for any resource configured in *PediaCluster*. Clusterpedia supports two types of resource search:
+* Resources that are compatible with **Kubernetes OpenAPI**
+* [`Collection Resource`](https://clusterpedia.io/docs/concepts/collection-resource/)
+```sh
+$ kubectl api-resources | grep clusterpedia.io
+collectionresources   clusterpedia.io/v1beta1   false   CollectionResource
+resources             clusterpedia.io/v1beta1   false   Resources
+```
+### Use a compatible way with Kubernetes OpenAPI
+It is possible to search resources via URL, but using `kubectl` may be more convenient if
+you [configured the cluster shortcuts for `kubectl`](https://clusterpedia.io/docs/usage/access-clusterpedia/#configure-the-cluster-shortcut-for-kubectl).
+
+We can use `kubectl --cluster <cluster name>` to specify the cluster. If `<cluster name>` is `clusterpedia`,
+it means this is a multi-cluster search operation.
+
+First check which resources are synchronized. We cannot find a resource until it is properly synchronized:
+```sh
+$ kubectl --cluster clusterpedia api-resources
+NAME                  SHORTNAMES   APIVERSION                     NAMESPACED   KIND
+configmaps            cm           v1                             true         ConfigMap
+events                ev           v1                             true         Event
+namespaces            ns           v1                             false        Namespace
+nodes                 no           v1                             false        Node
+pods                  po           v1                             true         Pod
+services              svc          v1                             true         Service
+daemonsets            ds           apps/v1                        true         DaemonSet
+deployments           deploy       apps/v1                        true         Deployment
+replicasets           rs           apps/v1                        true         ReplicaSet
+statefulsets          sts          apps/v1                        true         StatefulSet
+cronjobs              cj           batch/v1                       true         CronJob
+jobs                               batch/v1                       true         Job
+clusters                           cluster.kpanda.io/v1alpha1     false        Cluster
+ingressclasses                     networking.k8s.io/v1           false        IngressClass
+ingresses             ing          networking.k8s.io/v1           true         Ingress
+clusterrolebindings                rbac.authorization.k8s.io/v1   false        ClusterRoleBinding
+clusterroles                       rbac.authorization.k8s.io/v1   false        ClusterRole
+roles                              rbac.authorization.k8s.io/v1   true         Role
+
+$ kubectl --cluster cluster-1 api-resources
+...
+```
+
+#### Search in Multiple Clusters
+> See the documentation for [multi-cluster search](https://clusterpedia.io/docs/usage/search/multi-cluster/)
+
+**Get deployments in the `kube-system` namespace of all clusters:**
+```sh
+$ kubectl --cluster clusterpedia get deployments -n kube-system
+CLUSTER     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
+cluster-1   coredns                   2/2     2            2           68d
+cluster-2   calico-kube-controllers   1/1     1            1           64d
+cluster-2   coredns                   2/2     2            2           64d
+```
+
+**Get deployments in the two namespaces `kube-system` and `default` of all clusters:**
+```sh
+$ kubectl --cluster clusterpedia get deployments -A -l "search.clusterpedia.io/namespaces in (kube-system, default)"
+NAMESPACE     CLUSTER     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
+kube-system   cluster-1   coredns                   2/2     2            2           68d
+kube-system   cluster-2   calico-kube-controllers   1/1     1            1           64d
+kube-system   cluster-2   coredns                   2/2     2            2           64d
+default       cluster-2   dd-airflow-scheduler      0/1     1            0           54d
+default       cluster-2   dd-airflow-web            0/1     1            0           54d
+default       cluster-2   hello-world-server        1/1     1            1           27d
+default       cluster-2   openldap                  1/1     1            1           41d
+default       cluster-2   phpldapadmin              1/1     1            1           41d
+```
+
+**Get deployments in the `kube-system` and `default` namespaces in cluster-1 and cluster-2:**
+```sh
+$ kubectl --cluster clusterpedia get deployments -A -l "search.clusterpedia.io/clusters in (cluster-1, cluster-2),\
+ search.clusterpedia.io/namespaces in (kube-system,default)"
+NAMESPACE     CLUSTER     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
+kube-system   cluster-1   coredns                   2/2     2            2           68d
+kube-system   cluster-2   calico-kube-controllers   1/1     1            1           64d
+kube-system   cluster-2   coredns                   2/2     2            2           64d
+default       cluster-2   dd-airflow-scheduler      0/1     1            0           54d
+default       cluster-2   dd-airflow-web            0/1     1            0           54d
+default       cluster-2   hello-world-server        1/1     1            1           27d
+default       cluster-2   openldap                  1/1     1            1           41d
+default       cluster-2   phpldapadmin              1/1     1            1           41d
+```
+
+**Get deployments in the `kube-system` and `default` namespaces in cluster-1 and cluster-2, ordered by resource name:**
+```sh
+$ kubectl --cluster clusterpedia get deployments -A -l "search.clusterpedia.io/clusters in (cluster-1, cluster-2),\
+ search.clusterpedia.io/namespaces in (kube-system,default),\
+ search.clusterpedia.io/orderby=name"
+NAMESPACE     CLUSTER     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
+kube-system   cluster-2   calico-kube-controllers   1/1     1            1           64d
+kube-system   cluster-1   coredns                   2/2     2            2           68d
+kube-system   cluster-2   coredns                   2/2     2            2           64d
+default       cluster-2   dd-airflow-scheduler      0/1     1            0           54d
+default       cluster-2   dd-airflow-web            0/1     1            0           54d
+default       cluster-2   hello-world-server        1/1     1            1           27d
+default       cluster-2   openldap                  1/1     1            1           41d
+default       cluster-2   phpldapadmin              1/1     1            1           41d
+```
+
+#### Search a specific cluster
+> See the documentation for [specified cluster search](https://clusterpedia.io/docs/usage/search/specified-cluster/)
+
+**If you want to search a specific cluster for any resource therein, you can add --cluster to specify the cluster name:**
+```sh
+$ kubectl --cluster cluster-1 get deployments -A
+NAMESPACE                           CLUSTER     NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
+calico-apiserver                    cluster-1   calico-apiserver                                1/1     1            1           68d
+calico-system                       cluster-1   calico-kube-controllers                         1/1     1            1           68d
+calico-system                       cluster-1   calico-typha                                    1/1     1            1           68d
+capi-system                         cluster-1   capi-controller-manager                         1/1     1            1           42d
+capi-kubeadm-bootstrap-system       cluster-1   capi-kubeadm-bootstrap-controller-manager       1/1     1            1           42d
+capi-kubeadm-control-plane-system   cluster-1   capi-kubeadm-control-plane-controller-manager   1/1     1            1           42d
+capv-system                         cluster-1   capv-controller-manager                         1/1     1            1           42d
+cert-manager                        cluster-1   cert-manager                                    1/1     1            1           42d
+cert-manager                        cluster-1   cert-manager-cainjector                         1/1     1            1           42d
+cert-manager                        cluster-1   cert-manager-webhook                            1/1     1            1           42d
+clusterpedia-system                 cluster-1   clusterpedia-apiserver                          1/1     1            1           27m
+clusterpedia-system                 cluster-1   clusterpedia-clustersynchro-manager             1/1     1            1           27m
+clusterpedia-system                 cluster-1   clusterpedia-internalstorage-mysql              1/1     1            1           29m
+kube-system                         cluster-1   coredns                                         2/2     2            2           68d
+tigera-operator                     cluster-1   tigera-operator                                 1/1     1            1           68d
+```
+Except for `search.clusterpedia.io/clusters`, the support for other complex queries is the same as for multi-cluster search.
+
+If you want to learn about the details of a resource, you need to specify which cluster it is:
+```sh
+$ kubectl --cluster cluster-1 -n kube-system get deployments coredns -o wide
+CLUSTER     NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                   SELECTOR
+cluster-1   coredns   2/2     2            2           68d   coredns      registry.aliyuncs.com/google_containers/coredns:v1.8.4   k8s-app=kube-dns
+```
+
+**Find the related pods by the name of the deployment**
+
+First, view the deployments in the `default` namespace:
+```sh
+$ kubectl --cluster cluster-1 get deployments
+NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
+fake-pod                  3/3     3            3           104d
+test-controller-manager   0/0     0            0           7d21h
+```
+
+Use `owner-name` to specify the owner's name and `owner-seniority` to promote the owner's seniority. Here `owner-seniority=1` promotes the Deployment one level up, so the pods it owns indirectly through its ReplicaSet can be found by the Deployment name.
+```sh
+$ kubectl --cluster cluster-1 get pods -l "search.clusterpedia.io/owner-name=fake-pod,search.clusterpedia.io/owner-seniority=1"
+NAME                        READY   STATUS    RESTARTS   AGE
+fake-pod-698dfbbd5b-74cjx   1/1     Running   0          12d
+fake-pod-698dfbbd5b-tmcw7   1/1     Running   0          3s
+fake-pod-698dfbbd5b-wvtvw   1/1     Running   0          3s
+```
+
+Learn more about [Search by Parent or Ancestor Owner](https://clusterpedia.io/docs/usage/search/specified-cluster/#search-by-parent-or-ancestor-owner).
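The `-l` selector strings used above are plain comma-joined clauses, so they are easy to build programmatically. A small illustrative helper (not part of Clusterpedia) in Python:

```python
def build_selector(conditions: dict) -> str:
    """Join search labels into a kubectl -l selector string.

    List/tuple values become `key in (a,b)` clauses; everything else
    becomes a `key=value` clause.
    """
    clauses = []
    for key, value in conditions.items():
        if isinstance(value, (list, tuple)):
            clauses.append(f"{key} in ({','.join(map(str, value))})")
        else:
            clauses.append(f"{key}={value}")
    return ",".join(clauses)

# Reproduces the owner search above:
selector = build_selector({
    "search.clusterpedia.io/owner-name": "fake-pod",
    "search.clusterpedia.io/owner-seniority": 1,
})
print(selector)
# Then: kubectl --cluster cluster-1 get pods -l "<selector>"
```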
+
+### Search for [Collection Resource](https://clusterpedia.io/docs/concepts/collection-resource/)
+Clusterpedia can also perform more advanced aggregation of resources. For example, you can use `Collection Resource` to get a set of different resources at once.
+
+Let's first check which `Collection Resources` Clusterpedia currently supports:
+```sh
+$ kubectl get collectionresources
+NAME            RESOURCES
+any             *
+workloads       deployments.apps,daemonsets.apps,statefulsets.apps
+kuberesources   .*,*.admission.k8s.io,*.admissionregistration.k8s.io,*.apiextensions.k8s.io,*.apps,*.authentication.k8s.io,*.authorization.k8s.io,*.autoscaling,*.batch,*.certificates.k8s.io,*.coordination.k8s.io,*.discovery.k8s.io,*.events.k8s.io,*.extensions,*.flowcontrol.apiserver.k8s.io,*.imagepolicy.k8s.io,*.internal.apiserver.k8s.io,*.networking.k8s.io,*.node.k8s.io,*.policy,*.rbac.authorization.k8s.io,*.scheduling.k8s.io,*.storage.k8s.io
+```
+
+By getting `workloads`, you can retrieve a set of resources aggregating `deployments`, `daemonsets`, and `statefulsets`. `Collection Resource` also supports all the complex queries.
+
+**`kubectl get collectionresources workloads` will get the corresponding resources of all namespaces in all clusters by default:**
+```sh
+$ kubectl get collectionresources workloads
+CLUSTER     GROUP   VERSION   KIND         NAMESPACE     NAME                               AGE
+cluster-1   apps    v1        DaemonSet    kube-system   vsphere-cloud-controller-manager   63d
+cluster-2   apps    v1        Deployment   kube-system   calico-kube-controllers            109d
+cluster-2   apps    v1        Deployment   kube-system   coredns-coredns                    109d
+```
+> The DaemonSet collection in cluster-1 is included; some of the output above has been trimmed
+
+Due to a limitation of kubectl, complex queries on `Collection Resources` cannot be expressed with kubectl and can only be performed via `URL Query`.
+
+[Learn More](https://clusterpedia.io/docs/usage/search/collection-resource/)
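As a sketch of what such a URL query could look like, the `groups`, `resources`, and `onlyMetadata` parameters from the table above can be combined for the `any` collection resource; the path below is illustrative, so see the collection resource docs for the real endpoint:

```python
from urllib.parse import urlencode

# Illustrative path to the `any` collection resource.
base = "/apis/clusterpedia.io/v1beta1/collectionresources/any"
params = {
    "groups": "apps",
    "resources": "batch/jobs,batch/cronjobs",
    "onlyMetadata": "true",
}
url = f"{base}?{urlencode(params)}"
print(url)
```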
+
+## Proposals
+### Perform more complex control over resources
+In addition to resource search, and much like Wikipedia, Clusterpedia should also provide simple resource control capabilities, such as watch, create, delete, update, and more.
+
+A write action would in fact be implemented as a double write plus a warning response.
+
+**We will discuss this feature and decide whether we should implement it according to the community needs**
+
+## Notes
+### Multi-cluster network connectivity
+Clusterpedia does not actually solve the problem of network connectivity in a multi-cluster environment. You can use tools such as [tower](https://github.com/kubesphere/tower) to connect and access sub-clusters, or use [submariner](https://github.com/submariner-io/submariner) or [skupper](https://github.com/skupperproject/skupper) to solve cross-cluster network problems.
+
+## Contact
+If you have any questions, feel free to reach out to us in the following ways:
+* [@cncf/clusterpedia slack](https://cloud-native.slack.com/messages/clusterpedia)
+
+> If you want to join the clusterpedia channel on CNCF Slack, please **[get an invite to CNCF Slack](https://slack.cncf.io/)** and then join the [#clusterpedia](https://cloud-native.slack.com/messages/clusterpedia) channel.
+
+## Contributors
+
+
+
+
+
+Made with [contrib.rocks](https://contrib.rocks).
+
+## License
+Copyright 2023 the Clusterpedia Authors. All rights reserved.
+
+Licensed under the Apache License, Version 2.0.
diff --git a/data/readmes/cni-genie-v30.md b/data/readmes/cni-genie-v30.md
new file mode 100644
index 0000000..23ac574
--- /dev/null
+++ b/data/readmes/cni-genie-v30.md
@@ -0,0 +1,83 @@
+# CNI-Genie - README (v3.0)
+
+**Repository**: https://github.com/cni-genie/CNI-Genie
+**Version**: v3.0
+
+---
+
+# CNI-Genie
+
+CNI-Genie enables container orchestrators ([Kubernetes](https://github.com/kubernetes/kubernetes), [Mesos](https://mesosphere.com/)) to seamlessly connect to the choice of CNI plugins installed on a host, including
+1. ['reference' CNI plugins](https://github.com/containernetworking/plugins), e.g., bridge, macvlan, ipvlan, loopback
+2. '3rd-party' CNI plugins, e.g., ([Calico](https://github.com/projectcalico/calico), [Romana](https://github.com/romana/romana), [Weave-net](https://github.com/weaveworks/weave))
+3. 'specialized' CNI plugins, e.g., [SR-IOV](https://github.com/hustcat/sriov-cni), DPDK (work-in-progress)
+4. any generic CNI plugin of choice installed on the host
+
+Without CNI-Genie, the orchestrator is bound to only a single CNI plugin. For example, in the case of Kubernetes, kubelet is bound to the single CNI plugin passed to it on start. CNI-Genie allows multiple CNI plugins to co-exist at runtime.
+
+[](https://travis-ci.org/Huawei-PaaS/CNI-Genie)
+[](https://goreportcard.com/report/github.com/Huawei-PaaS/CNI-Genie)
+
+Please feel free to post your feedback and questions on the CNI-Genie [Slack channel](https://cni-genie.slack.com/)
+
+## Demo
+Here is a 6-minute demo video that demonstrates 3 scenarios:
+1. Assign an IP address to a pod from a particular network solution, e.g., 'Weave-net'
+2. Assign multi-IP addresses to a pod from multiple network solutions, e.g., 1st IP address from 'Weave-net', 2nd IP address from 'Canal'
+3. Assign an IP address to a pod from the "less congested" network solution, e.g., from 'Canal' that is less congested
+
+[](https://asciinema.org/a/118191)
+
+# Contributing
+[Contributing](CONTRIBUTING.md)
+
+[Code of Conduct](CODE_OF_CONDUCT.md)
+
+# Why we created CNI-Genie
+
+CNI-Genie is an add-on to the [Kubernetes](https://github.com/kubernetes/kubernetes) open-source project and is designed to provide the following features:
+
+1. [A wide range of network offerings (CNI plugins)](docs/multiple-cni-plugins/README.md) available to users at runtime. This figure shows the Kubernetes CNI plugin landscape before and after CNI-Genie
+ 
+ - User story: based on "performance", "application", and "workload placement" requirements, the user may want to use different CNI plugins for different application groups
+ - Different CNI plugins are different in terms of need for port-mapping, NAT, tunneling, interrupting host ports/interfaces
+
+[Watch multiple CNI plugins demo](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#demo)
+
+
+2. [Multiple NICs per container & per pod](docs/multiple-ips/README.md). The user can select multiple NICs to be added to a container upon creating them. Each NIC can get an IP address from an existing CNI plugin of choice. This makes the container reachable across multiple networks. Some use-cases from [SIG-Network](https://github.com/kubernetes/community/wiki/SIG-Network) are depicted in the figure below
+ 
+
+[Watch multi-NICs per 'container' demo](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#demo)
+
+[Watch multi-NICs per 'pod' demo](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod) (IP addresses assigned not only to the container, but also to the Pod)
+
+3. [Network Attachment Definition](docs/network-attachment-definitions/README.md). CNI-Genie supports [NPWG Multi-Network Specification v1](https://github.com/K8sNetworkPlumbingWG/multi-net-spec) style network attachment to pods, where pods can be assigned IP according to network-attachment-definition CRD objects created by user.
+
+4. The user can leave the CNI plugin selection to CNI-Genie. CNI-Genie watches the Key Performance Indicator (KPI) that is of interest to the user and [selects the CNI plugin](docs/smart-cni-genie/README.md) accordingly.
+ - CNI Genie watches KPI(s) of interest for existing CNI plugins, e.g., occupancy rate, number of subnets, latency, bandwidth
+
+[Watch Smart CNI Plugin Selection demo](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/smart-cni-genie/README.md#demo)
+
+
+5. [Default plugin support](docs/default-plugin/README.md). Another useful feature from Genie: it ensures that a pod gets its IP address(es) by falling back to a default set of plugins
+
+
+6. Network isolation, i.e.,
+ - Dedicated 'physical' network for a tenant
+ - Isolated 'logical' networks for different tenants on a shared 'physical' network
+
+ Use case: [Obtaining a Pod IP address from a customized subnet](docs/network-isolation/README.md)
+
+7. [CNI-Genie network policy engine](docs/network-policy/README.md) for network level ACLs
+
+8. Real-time switching between different (physical or logical) networks for a given workload. This allows for
+ - Price minimization: dynamically switching workload to a cheaper network as network prices change
+ - Maximizing network utilization: dynamically switching workload to the less congested network at a threshold
+
+ 
+
+Note: CNI-Genie itself is NOT a routing solution! It makes calls to CNI plugins that provide the routing service
+
+### More docs: [Getting started](docs/GettingStarted.md), [CNI-Genie Feature Set](docs/CNIGenieFeatureSet.md)
+
diff --git a/data/readmes/composefs-v108.md b/data/readmes/composefs-v108.md
new file mode 100644
index 0000000..3f287b3
--- /dev/null
+++ b/data/readmes/composefs-v108.md
@@ -0,0 +1,202 @@
+# composefs - README (v1.0.8)
+
+**Repository**: https://github.com/composefs/composefs
+**Version**: v1.0.8
+
+---
+
+# composefs: The reliability of disk images, the flexibility of files
+
+The composefs project combines several underlying Linux features
+to provide a very flexible mechanism to support read-only
+mountable filesystem trees, stacking on top of an underlying
+"lower" Linux filesystem.
+
+The key technologies composefs uses are:
+
+- [overlayfs](https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt) as the kernel interface
+- [EROFS](https://erofs.docs.kernel.org) for a mountable metadata tree
+- [fs-verity](https://www.kernel.org/doc/html/next/filesystems/fsverity.html) (optional) from the lower filesystem
+
+The manner in which these technologies are combined is important.
+First, to emphasize: composefs does not store any persistent data itself.
+The underlying metadata and data files must be stored in a valid
+"lower" Linux filesystem. Usually on most systems, this will be a
+traditional writable persistent Linux filesystem such as `ext4`, `xfs`, `btrfs` etc.
+
+The "tagline" for this project is "The reliability of disk images, the flexibility of files",
+and is worth explaining a bit more. Disk images have a lot of desirable
+properties in contrast to other formats such as tar and zip: they're
+efficiently kernel mountable and are very explicit about all details
+of their layout. There are well known tools such as [dm-verity](https://docs.kernel.org/admin-guide/device-mapper/verity.html)
+which can apply to disk images for robust security. However, disk
+images have well-known drawbacks: they commonly duplicate storage
+space on disk, can be difficult to update incrementally, and are
+generally inflexible.
+
+composefs aims to provide a similarly high level of reliability,
+security, and Linux kernel integration; but with the *flexibility* of files
+for content - avoiding doubling disk usage, worrying about partition
+tables, etc.
+
+## Separation between metadata and data
+
+A key aspect of the way composefs works is that it's designed to
+store "data" (i.e. non-empty regular files) distinct from "metadata"
+(i.e. everything else).
+
+composefs reads and writes a filesystem image that is really
+just an [EROFS](https://erofs.docs.kernel.org) image,
+which today is loopback mounted.
+
+However, this EROFS filesystem tree is just metadata; the underlying
+non-empty data files can be shared in a distinct "backing store"
+directory. The EROFS filesystem includes `trusted.overlay.redirect`
+extended attributes which tell the `overlayfs` mount
+how to find the real underlying files.
+
+## Mounting multiple composefs with a shared backing store
+
+The key targeted use case for composefs is versioned, immutable executable
+filesystem trees (i.e. container images and bootable host systems), where
+some of these filesystems may share *parts* of their storage (i.e. some
+files may be different, but not all).
+
+Composefs ships with a mount helper that allows you to easily mount
+images by passing the image filename and the base directory for
+the content files like this:
+
+```bash
+mount -t composefs /path/to/image -o basedir=/path/to/content /mnt
+```
+
+By storing the files content-addressed (e.g. using the hash of the content to name
+the file), shared files only need to be stored once, yet can appear in
+multiple mounts.
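Content-addressing simply means deriving a file's name from a digest of its bytes, so identical content always lands at the same path. A sketch of the idea in Python; the two-level `ab/cdef…` fan-out shown is an assumption for illustration, not a layout mandated by composefs:

```python
import hashlib
import os
import tempfile

def store_object(store_dir: str, data: bytes) -> str:
    """Store data under a path derived from its SHA-256 digest."""
    digest = hashlib.sha256(data).hexdigest()
    path = os.path.join(store_dir, digest[:2], digest[2:])
    os.makedirs(os.path.dirname(path), exist_ok=True)
    if not os.path.exists(path):  # deduplicate: identical content is stored once
        with open(path, "wb") as f:
            f.write(data)
    return path

store = tempfile.mkdtemp(prefix="objects-")
p1 = store_object(store, b"shared file content")
p2 = store_object(store, b"shared file content")
assert p1 == p2  # same content, one object on disk
```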
+
+## Backing store shared on disk *and* in page cache
+
+A crucial advantage of composefs in contrast to other approaches
+is that data files are shared in the [page cache](https://static.lwn.net/kerneldoc/admin-guide/mm/concepts.html#page-cache).
+
+This allows launching multiple container images that will
+reliably share memory.
+
+## Filesystem integrity
+
+Composefs also supports [fs-verity](https://www.kernel.org/doc/html/latest/filesystems/fsverity.html)
+validation of the content files. When using this, the digest of the
+content files is stored in the image in the `trusted.overlay.metacopy`
+extended attributes which tell overlayfs to validate that
+the content file it uses has a matching enabled fs-verity digest. This
+means that the backing content cannot be changed in any way (by
+mistake or by malice) without this being detected when the file is
+used.
+
+You can also use fs-verity on the image file itself, and pass the
+expected fs-verity digest as a mount option, which composefs will
+validate. In this case we have full trust of both data and metadata of
+the mounted filesystem. This solves a weakness that fs-verity has when used
+on its own, in that it can only verify file data, not metadata (e.g.
+inode bits like permissions and ownership, but also directory
+structures).
+
+## Usecase: container images
+
+There are multiple container image systems; for those using e.g.
+[OCI](https://github.com/opencontainers/image-spec/blob/main/spec.md)
+a common approach (implemented by both docker and podman for example)
+is to just untar each layer by itself, and then use `overlayfs`
+to stitch them together at runtime. This is a partial inspiration
+for composefs; notably this approach does ensure that *identical
+layers* are shared.
+
+However, if we instead store the file content in a content-addressed
+fashion, we can generate a composefs file for each layer and continue
+to mount them with a chain of `overlayfs`, *or* we can generate a
+single composefs for the final merged filesystem tree.
+
+This allows sharing of content files between images, even if the
+metadata (like the timestamps or file ownership) vary between images.
+
+Together with something like
+[zstd:chunked](https://github.com/containers/storage/pull/775) this
+will speed up pulling container images and making them available for
+use, without even needing to create files that are already present!
+
+## Usecase: Bootable host systems (e.g. OSTree)
+
+[OSTree](https://github.com/ostreedev/ostree) already uses a content-addressed
+object store. However, normally this has to be checked out into a regular directory (using hardlinks
+into the object store for regular files). This directory is then
+bind-mounted as the rootfs when the system boots.
+
+OSTree already supports enabling fs-verity on the files in the store,
+but nothing can protect against changes to the checkout directories. A
+malicious user can add, remove or replace files there. We want to use
+composefs to avoid this.
+
+Instead of checking out to a directory, we generate a composefs image
+pointing into the object store and mount that as the root fs. We can
+then enable fs-verity of the composefs image and embed the digest of
+that in the kernel commandline which specifies the rootfs. Since
+composefs generation is reproducible, we can even verify that the
+composefs image we generated is correct by comparing its digest to one
+in the ostree metadata that was generated when the ostree image was built.
+
+For more information on ostree and composefs, see [this tracking issue](https://github.com/ostreedev/ostree/issues/2867).
+
+## tools
+
+Composefs installs two main tools:
+
+- `mkcomposefs`: Creates a composefs image given a directory pathname. Can also compute digests and create a content store directory.
+- `mount.composefs`: A mount helper that supports mounting composefs images.
+
+## mounting a composefs image
+
+The mount.composefs helper allows you to mount composefs images (of both types).
+
+The basic use is:
+
+```bash
+mount -t composefs /path/to/image.cfs -o basedir=/path/to/datafiles /mnt
+```
+
+The default behaviour for fs-verity is that any image file that
+specifies an expected digest requires the backing file to match that
+fs-verity digest, at least if fs-verity is supported in the kernel. This
+can be modified with the `verity` and `noverity` options.
+
+Mount options:
+
+- `basedir`: is the directory to use as a base when resolving relative content paths.
+- `verity`: All image files must specify an fs-verity digest.
+- `noverity`: Don't verify fs-verity digests (useful for example if fs-verity is not supported on basedir).
+- `digest`: A fs-verity sha256 digest that the image file must match. If set, `verity_check` defaults to 2.
+- `upperdir`: Specify an upperdir for the overlayfs filesystem.
+- `workdir`: Specify a workdir for the overlayfs filesystem.
+- `idmap`: Specify a path to a user namespace that is used as an idmap.
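These options compose in the usual `mount -o` fashion: bare flags such as `verity` stand alone, while valued options render as `key=value`. A small illustrative helper (not shipped with composefs) that assembles such an invocation:

```python
def composefs_mount_cmd(image, mountpoint, **options):
    """Build a mount(8) argv for a composefs image.

    A True value renders a bare flag (e.g. verity); any other value
    renders key=value (e.g. basedir=/path).
    """
    opts = [k if v is True else f"{k}={v}" for k, v in options.items()]
    cmd = ["mount", "-t", "composefs", image]
    if opts:
        cmd += ["-o", ",".join(opts)]
    return cmd + [mountpoint]

cmd = composefs_mount_cmd(
    "/path/to/image.cfs", "/mnt",
    basedir="/path/to/datafiles", verity=True,
)
print(" ".join(cmd))
```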
+
+## Language bindings
+
+### Rust
+
+There is active work on a [composefs crate](https://github.com/containers/composefs-rs)
+which has both wrappers for invocations of the `mkcomposefs` and `composefs-info` dump tooling,
+as well as higher level repository functionality.
+
+### Go
+
+The containers/storage Go library has [code wrapping mkcomposefs](https://github.com/containers/storage/blob/5fe400b7aedc7385e07a938d393d50600ca06299/drivers/overlay/composefs.go#L41)
+that could in theory be extracted to a helper package.
+
+## Community forums
+
+- Live chat: [Matrix channel](https://matrix.to/#/#composefs:matrix.org)
+- Async forums: [Github discussions](https://github.com/containers/composefs/discussions)
+
+## Contributing
+
+We have a dedicated [CONTRIBUTING](CONTRIBUTING.md) document.
+
diff --git a/data/readmes/confidential-containers-v0170.md b/data/readmes/confidential-containers-v0170.md
new file mode 100644
index 0000000..cecd3f9
--- /dev/null
+++ b/data/readmes/confidential-containers-v0170.md
@@ -0,0 +1,54 @@
+# Confidential Containers - README (v0.17.0)
+
+**Repository**: https://github.com/confidential-containers/confidential-containers
+**Version**: v0.17.0
+
+---
+
+
+
+# Confidential Containers
+
+[](https://bestpractices.dev/projects/5719)
+
+## Welcome to confidential-containers
+
+Confidential Containers is an open source community working to leverage
+[Trusted Execution Environments](https://en.wikipedia.org/wiki/Trusted_execution_environment)
+to protect containers and data and to deliver cloud native
+confidential computing.
+
+**We have a new release every 6 weeks!**
+See [Release Notes](./releases/) or [Quickstart Guide](./quickstart.md)
+
+Our key considerations are:
+- Allow cloud native application owners to enforce application security requirements
+- Transparent deployment of unmodified containers
+- Support for multiple TEE and hardware platforms
+- A trust model which separates Cloud Service Providers (CSPs) from guest applications
+- Least privilege principles for the Kubernetes cluster administration capabilities which impact
+delivering Confidential Computing for guest applications or data inside the TEE
+
+### Get started quickly...
+- [Kubernetes Operator for Confidential
+ Computing](https://github.com/confidential-containers/operator) : An
+ operator to deploy confidential containers runtime (and required configs) on a Kubernetes cluster
+
+
+## Further Detail
+[](https://asciinema.org/a/eGHhZdQY3uYnDalFAfuB7VYqF)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fconfidential-containers%2Fcommunity?ref=badge_shield)
+
+- [Project Overview](./overview.md)
+- [Project Architecture](./architecture.md)
+- [Our Roadmap](./roadmap.md)
+- [Our Release Content Planning](https://github.com/orgs/confidential-containers/projects/6)
+- [Alignment with other Projects](alignment.md)
+
+## Contributing
+
+We welcome contributions from the community! Please see our [Contributing Guidelines](https://github.com/confidential-containers/confidential-containers/?tab=contributing-ov-file#readme) for details on how to get started.
+
+## License
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fconfidential-containers%2Fcommunity?ref=badge_large)
+
diff --git a/data/readmes/connect-rpc-v1191.md b/data/readmes/connect-rpc-v1191.md
new file mode 100644
index 0000000..71528d5
--- /dev/null
+++ b/data/readmes/connect-rpc-v1191.md
@@ -0,0 +1,191 @@
+# Connect RPC - README (v1.19.1)
+
+**Repository**: https://github.com/connectrpc/connect-go
+**Version**: v1.19.1
+
+---
+
+Connect
+=======
+
+[](https://github.com/connectrpc/connect-go/actions/workflows/ci.yaml)
+[](https://goreportcard.com/report/connectrpc.com/connect)
+[](https://pkg.go.dev/connectrpc.com/connect)
+[][slack]
+[](https://www.bestpractices.dev/projects/8972)
+
+Connect is a slim library for building browser and gRPC-compatible HTTP APIs.
+You write a short [Protocol Buffer][protobuf] schema and implement your
+application logic, and Connect generates code to handle marshaling, routing,
+compression, and content type negotiation. It also generates an idiomatic,
+type-safe client. Handlers and clients support three protocols: gRPC, gRPC-Web,
+and Connect's own protocol.
+
+The [Connect protocol][protocol] is a simple protocol that works over HTTP/1.1
+or HTTP/2. It takes the best portions of gRPC and gRPC-Web, including
+streaming, and packages them into a protocol that works equally well in
+browsers, monoliths, and microservices. Calling a Connect API is as easy as
+using `curl`. Try it with our live demo:
+
+```sh
+curl \
+ --header "Content-Type: application/json" \
+ --data '{"sentence": "I feel happy."}' \
+ https://demo.connectrpc.com/connectrpc.eliza.v1.ElizaService/Say
+```
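Because the Connect protocol is plain HTTP plus JSON, the same call works from any HTTP client, not just curl. For example, a sketch using only Python's standard library; the URL and payload are taken from the curl example above, and the request is built but not sent here:

```python
import json
import urllib.request

# Same call as the curl example above.
req = urllib.request.Request(
    "https://demo.connectrpc.com/connectrpc.eliza.v1.ElizaService/Say",
    data=json.dumps({"sentence": "I feel happy."}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```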
+
+Handlers and clients also support the gRPC and gRPC-Web protocols, including
+streaming, headers, trailers, and error details. gRPC-compatible [server
+reflection][grpcreflect] and [health checks][grpchealth] are available as
+standalone packages. Instead of cURL, we could call our API with a gRPC client:
+
+```sh
+go install github.com/bufbuild/buf/cmd/buf@latest
+buf curl --protocol grpc \
+ --data '{"sentence": "I feel happy."}' \
+ https://demo.connectrpc.com/connectrpc.eliza.v1.ElizaService/Say
+```
+
+Under the hood, Connect is just [Protocol Buffers][protobuf] and the standard
+library: no custom HTTP implementation, no new name resolution or load
+balancing APIs, and no surprises. Everything you already know about `net/http`
+still applies, and any package that works with an `http.Server`, `http.Client`,
+or `http.Handler` also works with Connect.
+
+For more on Connect, see the [announcement blog post][blog], the documentation
+on [connectrpc.com][docs] (especially the [Getting Started] guide for Go), the
+[demo service][examples-go], or the [protocol specification][protocol].
+
+## A small example
+
+Curious what all this looks like in practice? From a [Protobuf
+schema](internal/proto/connect/ping/v1/ping.proto), we generate [a small RPC
+package](internal/gen/simple/connect/ping/v1/pingv1connect/ping.connect.go). Using that
+package, we can build a server. This example is available at [internal/example](internal/example):
+
+```go
+package main
+
+import (
+	"context"
+	"log"
+	"net/http"
+
+	"connectrpc.com/connect"
+	pingv1 "connectrpc.com/connect/internal/gen/connect/ping/v1"
+	"connectrpc.com/connect/internal/gen/simple/connect/ping/v1/pingv1connect"
+	"connectrpc.com/validate"
+)
+
+type PingServer struct {
+	pingv1connect.UnimplementedPingServiceHandler // returns errors from all methods
+}
+
+func (ps *PingServer) Ping(ctx context.Context, req *pingv1.PingRequest) (*pingv1.PingResponse, error) {
+	return &pingv1.PingResponse{
+		Number: req.Number,
+	}, nil
+}
+
+func main() {
+	mux := http.NewServeMux()
+	// The generated constructors return a path and a plain net/http
+	// handler.
+	mux.Handle(
+		pingv1connect.NewPingServiceHandler(
+			&PingServer{},
+			// Validation via Protovalidate is almost always recommended.
+			connect.WithInterceptors(validate.NewInterceptor()),
+		),
+	)
+	p := new(http.Protocols)
+	p.SetHTTP1(true)
+	// For gRPC clients, it's convenient to support HTTP/2 without TLS.
+	p.SetUnencryptedHTTP2(true)
+	s := &http.Server{
+		Addr:      "localhost:8080",
+		Handler:   mux,
+		Protocols: p,
+	}
+	if err := s.ListenAndServe(); err != nil {
+		log.Fatalf("listen failed: %v", err)
+	}
+}
+```
+
+With that server running, you can make requests with any gRPC or Connect
+client. To write a client using Connect:
+
+```go
+package main
+
+import (
+	"context"
+	"log"
+	"net/http"
+
+	pingv1 "connectrpc.com/connect/internal/gen/connect/ping/v1"
+	"connectrpc.com/connect/internal/gen/simple/connect/ping/v1/pingv1connect"
+)
+
+func main() {
+	client := pingv1connect.NewPingServiceClient(
+		http.DefaultClient,
+		"http://localhost:8080/",
+	)
+	req := &pingv1.PingRequest{Number: 42}
+	res, err := client.Ping(context.Background(), req)
+	if err != nil {
+		log.Fatalln(err)
+	}
+	log.Println(res)
+}
+```
+
+Of course, `http.ListenAndServe` and `http.DefaultClient` aren't fit for
+production use! See Connect's [deployment docs][docs-deployment] for a guide to
+configuring timeouts, connection pools, observability, and h2c.
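+As a taste of that configuration, here is a minimal, self-contained sketch of an `http.Client` with an overall request timeout and connection-pool limits; the concrete values are illustrative assumptions, not recommendations from the Connect docs. A client built like this can be passed to any generated `New*ServiceClient` constructor in place of `http.DefaultClient`:
+
+```go
+package main
+
+import (
+	"fmt"
+	"net/http"
+	"time"
+)
+
+func main() {
+	// Illustrative values only; tune these for your workload.
+	httpClient := &http.Client{
+		Timeout: 30 * time.Second, // overall per-request deadline
+		Transport: &http.Transport{
+			MaxIdleConnsPerHost: 16,               // connection pool per backend
+			IdleConnTimeout:     90 * time.Second, // recycle idle connections
+		},
+	}
+	fmt.Println(httpClient.Timeout)
+}
+```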
+
+## Ecosystem
+
+* [grpchealth]: gRPC-compatible health checks for connect-go
+* [grpcreflect]: gRPC-compatible server reflection for connect-go
+* [validate]: [Protovalidate][protovalidate] interceptor for connect-go
+* [examples-go]: service powering [demo.connectrpc.com](https://demo.connectrpc.com), including bidi streaming
+* [connect-es]: Type-safe APIs with Protobuf and TypeScript
+* [Buf Studio]: web UI for ad-hoc RPCs
+* [conformance]: Connect, gRPC, and gRPC-Web interoperability tests
+
+## Status: Stable
+
+This module is stable. It supports:
+
+* The two most recent major releases of Go (the same versions of Go that continue
+ to [receive security patches][go-support-policy]).
+* [APIv2] of Protocol Buffers in Go (`google.golang.org/protobuf`).
+
+Within those parameters, `connect` follows semantic versioning. We will
+_not_ make breaking changes in the 1.x series of releases.
+
+## Legal
+
+Offered under the [Apache 2 license][license].
+
+[APIv2]: https://blog.golang.org/protobuf-apiv2
+[Buf Studio]: https://buf.build/studio
+[Getting Started]: https://connectrpc.com/docs/go/getting-started
+[blog]: https://buf.build/blog/connect-a-better-grpc
+[conformance]: https://github.com/connectrpc/conformance
+[grpchealth]: https://github.com/connectrpc/grpchealth-go
+[grpcreflect]: https://github.com/connectrpc/grpcreflect-go
+[connect-es]: https://github.com/connectrpc/connect-es
+[examples-go]: https://github.com/connectrpc/examples-go
+[docs-deployment]: https://connectrpc.com/docs/go/deployment
+[docs]: https://connectrpc.com
+[go-support-policy]: https://golang.org/doc/devel/release#policy
+[license]: https://github.com/connectrpc/connect-go/blob/main/LICENSE
+[protobuf]: https://developers.google.com/protocol-buffers
+[protocol]: https://connectrpc.com/docs/protocol
+[slack]: https://buf.build/links/slack
+[validate]: https://github.com/connectrpc/validate-go
+[protovalidate]: https://protovalidate.com
diff --git a/data/readmes/consul-ent-changelog-1220.md b/data/readmes/consul-ent-changelog-1220.md
new file mode 100644
index 0000000..83de4e2
--- /dev/null
+++ b/data/readmes/consul-ent-changelog-1220.md
@@ -0,0 +1,75 @@
+# Consul - README (ent-changelog-1.22.0)
+
+**Repository**: https://github.com/hashicorp/consul
+**Version**: ent-changelog-1.22.0
+
+---
+
+
+
+ Consul
+
+
+[](LICENSE)
+[](https://hub.docker.com/r/hashicorp/consul)
+[](https://goreportcard.com/report/github.com/hashicorp/consul)
+
+Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.
+
+* Documentation and Tutorials: https://developer.hashicorp.com/consul
+* Forum: [Discuss](https://discuss.hashicorp.com/c/consul)
+
+Consul provides several key features:
+
+* **Multi-Datacenter** - Consul is built to be datacenter aware, and can
+ support any number of regions without complex configuration.
+
+* **Service Mesh** - Consul Service Mesh enables secure service-to-service
+ communication with automatic TLS encryption and identity-based authorization. Applications
+ can use sidecar proxies in a service mesh configuration to establish TLS
+ connections for inbound and outbound connections with Transparent Proxy.
+
+* **API Gateway** - Consul API Gateway manages access to services within Consul Service Mesh,
+  allowing users to define traffic and authorization policies for services deployed within the mesh.
+
+* **Service Discovery** - Consul makes it simple for services to register
+ themselves and to discover other services via a DNS or HTTP interface.
+ External services such as SaaS providers can be registered as well.
+
+* **Health Checking** - Health Checking enables Consul to quickly alert
+ operators about any issues in a cluster. The integration with service
+ discovery prevents routing traffic to unhealthy hosts and enables service
+ level circuit breakers.
+
+* **Dynamic App Configuration** - An HTTP API that allows users to store indexed objects within Consul,
+ for storing configuration parameters and application metadata.
+
+Consul runs on Linux, macOS, FreeBSD, Solaris, and Windows and includes an
+optional [browser based UI](https://demo.consul.io). A commercial version
+called [Consul Enterprise](https://developer.hashicorp.com/consul/docs/enterprise) is also
+available.
+
+**Please note**: We take Consul's security and our users' trust very seriously. If you
+believe you have found a security issue in Consul, please [responsibly disclose](https://www.hashicorp.com/security#vulnerability-reporting)
+by contacting us at security@hashicorp.com.
+
+## Quick Start
+
+A few quick start guides are available on the Consul website:
+
+* **Standalone binary install:** https://learn.hashicorp.com/collections/consul/get-started-vms
+* **Minikube install:** https://learn.hashicorp.com/tutorials/consul/kubernetes-minikube
+* **Kind install:** https://learn.hashicorp.com/tutorials/consul/kubernetes-kind
+* **Kubernetes install:** https://learn.hashicorp.com/tutorials/consul/kubernetes-deployment-guide
+* **Deploy HCP Consul:** https://learn.hashicorp.com/tutorials/consul/hcp-gs-deploy
+
+## Documentation
+
+Full, comprehensive documentation is available on the Consul website: https://developer.hashicorp.com/consul/docs
+
+## Contributing
+
+Thank you for your interest in contributing! Please refer to [CONTRIBUTING.md](https://github.com/hashicorp/consul/blob/main/.github/CONTRIBUTING.md)
+for guidance. For contributions specifically to the browser based UI, please
+refer to the UI's [README.md](https://github.com/hashicorp/consul/blob/main/ui/packages/consul-ui/README.md)
+for guidance.
diff --git a/data/readmes/container-network-interface-cni-v130.md b/data/readmes/container-network-interface-cni-v130.md
new file mode 100644
index 0000000..d8fb404
--- /dev/null
+++ b/data/readmes/container-network-interface-cni-v130.md
@@ -0,0 +1,227 @@
+# Container Network Interface (CNI) - README (v1.3.0)
+
+**Repository**: https://github.com/containernetworking/cni
+**Version**: v1.3.0
+
+---
+
+
+
+---
+
+# CNI - the Container Network Interface
+
+[](https://bestpractices.coreinfrastructure.org/projects/2446)
+[](https://securityscorecards.dev/viewer/?uri=github.com/containernetworking/cni)
+
+## What is CNI?
+
+CNI (_Container Network Interface_), a [Cloud Native Computing Foundation](https://cncf.io) project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins.
+CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted.
+Because of this focus, CNI has a wide range of support and the specification is simple to implement.
+
+As well as the [specification](SPEC.md), this repository contains the Go source code of a [library for integrating CNI into applications](libcni) and an [example command-line tool](cnitool) for executing CNI plugins. A [separate repository contains reference plugins](https://github.com/containernetworking/plugins) and a template for making new plugins.
+
+The template code makes it straightforward to create a CNI plugin for an existing container networking project.
+CNI also makes a good framework for creating a new container networking project from scratch.
+
+Here are the recordings of two sessions that the CNI maintainers hosted at KubeCon/CloudNativeCon 2019:
+
+- [Introduction to CNI](https://youtu.be/YjjrQiJOyME)
+- [CNI deep dive](https://youtu.be/zChkx-AB5Xc)
+
+
+## Contributing to CNI
+
+We welcome contributions, including [bug reports](https://github.com/containernetworking/cni/issues), and code and documentation improvements.
+If you intend to contribute to code or documentation, please read [CONTRIBUTING.md](CONTRIBUTING.md). Also see the [contact section](#contact) in this README.
+
+The CNI project has a [biweekly meeting](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=Yzg1NDlibnA5Y2c0Nm5scDI4ZG5udWpmY2JfMjAyNTEwMTNUMTQwMDAwWiAyMmM0NjU1ZjFjMjkzZTg0NDRhNTU2OTVmNDIxODg3MDgwYzc1OWU0YTQ1MjVhYmQ2NTFmYmI2MGVlYTc2YzE5QGc&tmsrc=22c4655f1c293e8444a55695f421887080c759e4a4525abd651fbb60eea76c19%40group.calendar.google.com&scp=ALL) on [jitsi](https://meet.jit.si/CNIMaintainersMeeting) ([notes](https://github.com/containernetworking/meeting-notes)). It takes place Mondays at 10:00 US/Eastern. All are welcome to join.
+
+## Why develop CNI?
+
+Application containers on Linux are a rapidly evolving area, and within this area networking is not well addressed as it is highly environment-specific.
+We believe that many container runtimes and orchestrators will seek to solve the same problem of making the network layer pluggable.
+
+To avoid duplication, we think it is prudent to define a common interface between the network plugins and container execution: hence we put forward this specification, along with libraries for Go and a set of plugins.
+
+## Who is using CNI?
+### Container runtimes
+- [Kubernetes - a system to simplify container operations](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
+- [OpenShift - Kubernetes with additional enterprise features](https://github.com/openshift/origin/blob/master/docs/openshift_networking_requirements.md)
+- [Cloud Foundry - a platform for cloud applications](https://github.com/cloudfoundry-incubator/cf-networking-release)
+- [Apache Mesos - a distributed systems kernel](https://github.com/apache/mesos/blob/master/docs/cni.md)
+- [Amazon ECS - a highly scalable, high performance container management service](https://aws.amazon.com/ecs/)
+- [Singularity - container platform optimized for HPC, EPC, and AI](https://github.com/sylabs/singularity)
+- [OpenSVC - orchestrator for legacy and containerized application stacks](https://docs.opensvc.com/latest/fr/agent.configure.cni.html)
+
+### 3rd party plugins
+- [Project Calico - a layer 3 virtual network](https://github.com/projectcalico/calico)
+- [Contiv Networking - policy networking for various use cases](https://github.com/contiv/netplugin)
+- [SR-IOV](https://github.com/hustcat/sriov-cni)
+- [Cilium - eBPF & XDP for containers](https://github.com/cilium/cilium)
+- [Multus - a Multi plugin](https://github.com/k8snetworkplumbingwg/multus-cni)
+- [Romana - Layer 3 CNI plugin supporting network policy for Kubernetes](https://github.com/romana/kube)
+- [CNI-Genie - generic CNI network plugin](https://github.com/Huawei-PaaS/CNI-Genie)
+- [Nuage CNI - Nuage Networks SDN plugin for network policy kubernetes support ](https://github.com/nuagenetworks/nuage-cni)
+- [Linen - a CNI plugin designed for overlay networks with Open vSwitch and fit in SDN/OpenFlow network environment](https://github.com/John-Lin/linen-cni)
+- [Vhostuser - a Dataplane network plugin - Supports OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin)
+- [Amazon ECS CNI Plugins - a collection of CNI Plugins to configure containers with Amazon EC2 elastic network interfaces (ENIs)](https://github.com/aws/amazon-ecs-cni-plugins)
+- [Bonding CNI - a Link aggregating plugin to address failover and high availability network](https://github.com/Intel-Corp/bond-cni)
+- [ovn-kubernetes - a container network plugin built on Open vSwitch (OVS) and Open Virtual Networking (OVN) with support for both Linux and Windows](https://github.com/openvswitch/ovn-kubernetes)
+- [Juniper Contrail](https://www.juniper.net/cloud) / [TungstenFabric](https://tungstenfabric.io) - Provides an overlay SDN solution delivering multicloud networking, hybrid cloud networking, simultaneous overlay-underlay support, network policy enforcement, network isolation, service chaining, and flexible load balancing
+- [Knitter - a CNI plugin supporting multiple networking for Kubernetes](https://github.com/ZTE/Knitter)
+- [DANM - a CNI-compliant networking solution for TelCo workloads running on Kubernetes](https://github.com/nokia/danm)
+- [cni-route-override - a meta CNI plugin that override route information](https://github.com/redhat-nfvpe/cni-route-override)
+- [Terway - a collection of CNI Plugins based on alibaba cloud VPC/ECS network product](https://github.com/AliyunContainerService/terway)
+- [Cisco ACI CNI - for on-prem and cloud container networking with consistent policy and security model.](https://github.com/noironetworks/aci-containers)
+- [Kube-OVN - a CNI plugin that bases on OVN/OVS and provides advanced features like subnet, static ip, ACL, QoS, etc.](https://github.com/kubeovn/kube-ovn)
+- [Project Antrea - an Open vSwitch k8s CNI](https://github.com/vmware-tanzu/antrea)
+- [Azure CNI - a CNI plugin that natively extends Azure Virtual Networks to containers](https://github.com/Azure/azure-container-networking)
+- [Hybridnet - a CNI plugin designed for hybrid clouds which provides both overlay and underlay networking for containers in one or more clusters. Overlay and underlay containers can run on the same node and have cluster-wide bidirectional network connectivity.](https://github.com/alibaba/hybridnet)
+- [Spiderpool - An IP Address Management (IPAM) CNI plugin of Kubernetes for managing static ip for underlay network](https://github.com/spidernet-io/spiderpool)
+- [AWS VPC CNI - Networking plugin for pod networking in Kubernetes using Elastic Network Interfaces on AWS](https://github.com/aws/amazon-vpc-cni-k8s)
+
+The CNI team also maintains some [core plugins in a separate repository](https://github.com/containernetworking/plugins).
+
+
+## How do I use CNI?
+
+### Requirements
+
+The CNI spec is language agnostic. To use the Go language libraries in this repository, you'll need a recent version of Go. You can find the Go versions covered by our [automated tests](https://travis-ci.org/containernetworking/cni/builds) in [.travis.yml](.travis.yml).
+
+### Reference Plugins
+
+The CNI project maintains a set of [reference plugins](https://github.com/containernetworking/plugins) that implement the CNI specification.
+NOTE: the reference plugins used to live in this repository but have been split out into a [separate repository](https://github.com/containernetworking/plugins) as of May 2017.
+
+### Running the plugins
+
+After building and installing the [reference plugins](https://github.com/containernetworking/plugins), you can use the `priv-net-run.sh` and `docker-run.sh` scripts in the `scripts/` directory to exercise the plugins.
+
+**Note:** `priv-net-run.sh` depends on `jq`.
+
+Start out by creating a netconf file to describe a network:
+
+```bash
+$ mkdir -p /etc/cni/net.d
+$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
+{
+	"cniVersion": "1.0.0",
+	"name": "mynet",
+	"type": "bridge",
+	"bridge": "cni0",
+	"isGateway": true,
+	"ipMasq": true,
+	"ipam": {
+		"type": "host-local",
+		"subnet": "10.22.0.0/16",
+		"routes": [
+			{ "dst": "0.0.0.0/0" }
+		]
+	}
+}
+EOF
+$ cat >/etc/cni/net.d/99-loopback.conf <<EOF
+{
+	"cniVersion": "1.0.0",
+	"name": "lo",
+	"type": "loopback"
+}
+EOF
+```
+
+---
+
An SSH Server that Launches Containers in Kubernetes and Docker
+
+[](https://containerssh.io/)
+[](https://github.com/containerssh/containerssh/actions)
+[](https://github.com/containerssh/containerssh/releases)
+[](http://hub.docker.com/r/containerssh/containerssh)
+[](https://goreportcard.com/report/github.com/containerssh/containerssh)
+[](LICENSE.md)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2FContainerSSH%2FContainerSSH?ref=badge_shield&issueType=license)
+
+## ContainerSSH in One Minute
+
+In a hurry? This one-minute video explains everything you need to know about ContainerSSH.
+
+[](https://youtu.be/Cs9OrnPi2IM)
+
+## Need help?
+
+[Join the #containerssh Slack channel on the CNCF Slack »](https://communityinviter.com/apps/cloud-native/cncf)
+
+## Use cases
+
+### Build a lab
+
+Building a lab environment can be time-consuming. ContainerSSH solves this by providing dynamic SSH access with APIs, automatic cleanup on logout using ephemeral containers, and persistent volumes for storing data. **Perfect for vendor and student labs.**
+
+[Read more »](https://containerssh.io/usecases/lab/)
+
+### Debug a production system
+
+Provide **production access to your developers**, give them their usual tools while logging all changes. Authorize their access and create short-lived credentials for the database using simple webhooks. Clean up the environment on disconnect.
+
+[Read more »](https://containerssh.io/usecases/debugging/)
+
+### Run a honeypot
+
+Study SSH attack patterns up close. Drop attackers safely into network-isolated containers or even virtual machines, and **capture their every move** using the audit logging ContainerSSH provides. The built-in S3 upload ensures you don't lose your data.
+
+[Read more »](https://containerssh.io/usecases/honeypots/)
+
+## How does it work?
+
+
+
+1. The user opens an SSH connection to ContainerSSH.
+2. ContainerSSH calls the authentication server with the user's username and password/pubkey to check whether they are valid.
+3. ContainerSSH calls the config server to obtain the backend location and configuration (if configured).
+4. ContainerSSH calls the container backend to launch the container with the
+ specified configuration. All input from the user is sent directly to the backend, output from the container is sent
+ to the user.
+
+[▶️ Watch as video »](https://youtu.be/Cs9OrnPi2IM) | [🚀 Get started »](https://containerssh.io/quickstart/)
+
+## Demo
+
+
+
+[🚀 Get started »](https://containerssh.io/quickstart/)
+
+## Verify provenance
+
+Each of the releases come with a SLSA provenance data file `multiple.intoto.jsonl`. This file can be used to verify the source and provenance of the produced artifacts with [`slsa-verifier`](https://github.com/slsa-framework/slsa-verifier).
+
+
+This assures users that the artifacts were genuinely built from the ContainerSSH sources.
+
+An example of verification:
+```sh
+slsa-verifier verify-artifact <artifact> \
+  --provenance-path multiple.intoto.jsonl \
+  --source-uri github.com/containerssh/containerssh
+```
+
+If the verification is successful, the process should produce the following output:
+```
+Verifying artifact : PASSED
+PASSED: Verified SLSA provenance
+```
+
+
+## Contributing
+
+If you would like to contribute, please check out our [Code of Conduct](https://github.com/ContainerSSH/community/blob/main/CODE_OF_CONDUCT.md) as well as our [contribution documentation](https://containerssh.io/development/).
+
+## Embedding ContainerSSH
+
+You can fully embed ContainerSSH into your own application. First, you will need to create the configuration structure:
+
+```go
+cfg := config.AppConfig{}
+// Set the default configuration:
+cfg.Default()
+```
+
+You can then populate this config with your options and create a ContainerSSH instance like this:
+
+```go
+pool, lifecycle, err := containerssh.New(cfg, loggerFactory)
+if err != nil {
+	return err
+}
+```
+
+You will receive a service pool and a lifecycle as a response. You can use these to start the service pool of ContainerSSH. This will block execution until ContainerSSH stops.
+
+```go
+err := lifecycle.Run()
+```
+
+This will run ContainerSSH in the current Goroutine. You can also use the lifecycle to add hooks to lifecycle states of ContainerSSH. You must do this *before* you call `Run()`. For example:
+
+```go
+lifecycle.OnStarting(
+	func(s service.Service, l service.Lifecycle) {
+		print("ContainerSSH is starting...")
+	},
+)
+```
+
+You can also have ContainerSSH stop gracefully by using the `Stop()` function on the lifecycle. This takes a context as an argument, which is taken as a timeout for the graceful shutdown.
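+For example, a 20-second shutdown deadline can be expressed with the standard `context` package alone (a sketch; the `lifecycle.Stop(ctx)` call is shown only as a comment so the snippet stays self-contained):
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"time"
+)
+
+func main() {
+	// Deadline for the graceful shutdown; cancel releases the timer.
+	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
+	defer cancel()
+
+	// In real code: lifecycle.Stop(ctx)
+	deadline, ok := ctx.Deadline()
+	fmt.Println(ok, time.Until(deadline) <= 20*time.Second)
+}
+```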
+
+Finally, you can use the returned `pool` variable to rotate the logs. This will trigger all ContainerSSH services to close and reopen their log files.
+
+```go
+pool.RotateLogs()
+```
+
+## Building an authentication webhook server
+
+## Building a configuration webhook server
+
+The configuration webhook lets you dynamically configure ContainerSSH. This library contains the tools to create a tiny webserver to serve these webhook requests.
+
+First, you need to fetch this library as a dependency using [go modules](https://blog.golang.org/using-go-modules):
+
+```bash
+go get go.containerssh.io/containerssh
+```
+
+Next, you will have to write an implementation for the following interface:
+
+```go
+package main
+
+import (
+	"go.containerssh.io/containerssh/config"
+)
+
+type ConfigRequestHandler interface {
+	OnConfig(request config.Request) (config.AppConfig, error)
+}
+```
+
+The best way to do this is to create a struct and add a method with a receiver:
+
+```go
+type myConfigReqHandler struct {
+}
+
+func (m *myConfigReqHandler) OnConfig(
+	request config.Request,
+) (cfg config.AppConfig, err error) {
+	// We recommend using an IDE to discover the possible options here.
+	if request.Username == "foo" {
+		cfg.Docker.Config.ContainerConfig.Image = "yourcompany/yourimage"
+	}
+	return cfg, err
+}
+```
+
+**Warning!** Your `OnConfig` method should *only* return an error if it can genuinely not serve the request. This should not be used as a means to reject users. This should be done using the authentication server. If you return an error ContainerSSH will retry the request several times in an attempt to work around network failures.
+
+Once you have your handler implemented you must decide which method you want to use for integration.
+
+### The full server method
+
+This method is useful if you don't want to run anything else on the webserver, only the config endpoint. You can create a new server like this:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"os"
+	"os/signal"
+	"syscall"
+	"time"
+
+	"go.containerssh.io/containerssh/config"
+	"go.containerssh.io/containerssh/config/webhook"
+	"go.containerssh.io/containerssh/log"
+	"go.containerssh.io/containerssh/service"
+)
+
+func main() {
+	logger := log.NewLogger(&config.LogConfig{
+		// Add logging configuration here
+	})
+	// Create the webserver service
+	srv, err := webhook.NewServer(
+		config.HTTPServerConfiguration{
+			Listen: "0.0.0.0:8080",
+		},
+		&myConfigReqHandler{},
+		logger,
+	)
+	if err != nil {
+		panic(err)
+	}
+
+	// Set up the lifecycle handler
+	lifecycle := service.NewLifecycle(srv)
+
+	// Launch the webserver in the background
+	go func() {
+		// Ignore error, handled later.
+		_ = lifecycle.Run()
+	}()
+
+	// Handle signals and terminate the webserver gracefully when needed.
+	signals := make(chan os.Signal, 1)
+	signal.Notify(signals, syscall.SIGINT, syscall.SIGTERM)
+	go func() {
+		if _, ok := <-signals; ok {
+			// ok means the channel wasn't closed, so trigger a shutdown.
+			// The context deadline is the timeout for the shutdown.
+			ctx, cancel := context.WithTimeout(
+				context.Background(),
+				20*time.Second,
+			)
+			defer cancel()
+			lifecycle.Stop(ctx)
+		}
+	}()
+	// Wait for the service to terminate.
+	lastError := lifecycle.Wait()
+	// We are already shutting down; ignore further signals.
+	signal.Ignore(syscall.SIGINT, syscall.SIGTERM)
+	// Close the signals channel so the signal handler goroutine terminates.
+	close(signals)
+
+	if lastError != nil {
+		// Exit with a non-zero status code
+		fmt.Fprintf(
+			os.Stderr,
+			"an error happened while running the server (%v)",
+			lastError,
+		)
+		os.Exit(1)
+	}
+	os.Exit(0)
+}
+```
+
+**Note:** We recommend securing client-server communication with certificates. Please see the [Securing webhooks section below](#securing-webhooks).
+
+### Integrating with an existing HTTP server
+
+Use this method if you want to integrate your handler with an existing Go HTTP server. This is rather simple:
+
+```go
+handler, err := webhook.NewHandler(&myConfigReqHandler{}, logger)
+```
+
+You can now use the `handler` variable as a handler for the [`http` package](https://golang.org/pkg/net/http/) or a MUX like [gorilla/mux](https://github.com/gorilla/mux).
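+As a sketch of that integration (with a stub `http.HandlerFunc` standing in for the webhook handler, and `/config` being an arbitrary path choice), exercised in-process via `httptest`:
+
+```go
+package main
+
+import (
+	"fmt"
+	"net/http"
+	"net/http/httptest"
+)
+
+func main() {
+	// Stub standing in for the handler returned by NewHandler.
+	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		w.WriteHeader(http.StatusOK)
+	})
+
+	mux := http.NewServeMux()
+	// Mount the webhook handler wherever your configured URL points.
+	mux.Handle("/config", handler)
+
+	// Exercise the mux without opening a socket.
+	req := httptest.NewRequest(http.MethodPost, "/config", nil)
+	rec := httptest.NewRecorder()
+	mux.ServeHTTP(rec, req)
+	fmt.Println(rec.Code)
+}
+```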
+
+## Using the config client
+
+This library also contains the components to call the configuration server in a simplified fashion. To create a client simply call the following method:
+
+```go
+client, err := configuration.NewClient(
+	configuration.ClientConfig{
+		http.ClientConfiguration{
+			URL: "http://your-server/config-endpoint/",
+		},
+	},
+	logger,
+	metricsCollector,
+)
+```
+
+The `logger` is a logger from the [log library](https://github.com/ContainerSSH/ContainerSSH/tree/main/log), and the `metricsCollector` is supplied by the [metrics library](https://github.com/ContainerSSH/ContainerSSH/tree/main/metrics).
+
+You can now use the `client` variable to fetch the configuration specific to a connecting client:
+
+```go
+connectionID := "0123456789ABCDEF"
+appConfig, err := client.Get(
+	ctx,
+	"my-name-is-trinity",
+	net.TCPAddr{
+		IP:   net.ParseIP("127.0.0.1"),
+		Port: 2222,
+	},
+	connectionID,
+)
+```
+
+Now you have the client-specific configuration in `appConfig`.
+
+**Note:** We recommend securing client-server communication with certificates. The details about securing your HTTP requests are documented in the [HTTP library](https://github.com/containerssh/containerssh/tree/main/http).
+
+## Loading the configuration from a file
+
+This library also provides simplified methods for reading the configuration from an `io.Reader` and writing it to an `io.Writer`.
+
+```go
+file, err := os.Open("file.yaml")
+// ...
+loader, err := configuration.NewReaderLoader(
+	file,
+	logger,
+	configuration.FormatYAML,
+)
+// Read global config
+appConfig := &configuration.AppConfig{}
+err = loader.Load(ctx, appConfig)
+// Read connection-specific config:
+err = loader.LoadConnection(
+	ctx,
+	"my-name-is-trinity",
+	net.TCPAddr{
+		IP:   net.ParseIP("127.0.0.1"),
+		Port: 2222,
+	},
+	connectionID,
+	appConfig,
+)
+```
+
+As you can see these loaders are designed to be chained together. For example, you could add an HTTP loader after the file loader:
+
+```go
+httpLoader, err := configuration.NewHTTPLoader(clientConfig, logger)
+```
+
+This HTTP loader calls the HTTP client described above.
+
+Conversely, you can write the configuration to a YAML format:
+
+```go
+saver, err := configuration.NewWriterSaver(
+	os.Stdout,
+	logger,
+	configuration.FormatYAML,
+)
+err = saver.Save(appConfig)
+```
+
+
+## Building a combined configuration-authentication webhook server
+
+## Securing webhooks
+
+## Reading audit logs
diff --git a/data/readmes/contour-v1330.md b/data/readmes/contour-v1330.md
new file mode 100644
index 0000000..b17552c
--- /dev/null
+++ b/data/readmes/contour-v1330.md
@@ -0,0 +1,71 @@
+# Contour - README (v1.33.0)
+
+**Repository**: https://github.com/projectcontour/contour
+**Version**: v1.33.0
+
+---
+
+# Contour
+
+ [](https://opensource.org/licenses/Apache-2.0) [](https://kubernetes.slack.com/messages/contour)
+
+ [](https://goreportcard.com/report/github.com/projectcontour/contour) [](https://securityscorecards.dev/viewer/?uri=github.com/projectcontour/contour) [](https://bestpractices.coreinfrastructure.org/projects/4141)
+
+
+
+
+## Overview
+
+Contour is an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) for Kubernetes that works by deploying the [Envoy proxy](https://www.envoyproxy.io/) as a reverse proxy and load balancer.
+Contour supports dynamic configuration updates out of the box while maintaining a lightweight profile.
+
+Contour supports multiple configuration APIs in order to meet the needs of as many users as possible:
+
+- **[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)** - A stable upstream API that enables basic ingress use cases.
+- **[HTTPProxy](https://projectcontour.io/docs/main/config/fundamentals/)** - Contour's Custom Resource Definition (CRD) which expands upon the functionality of the Ingress API to allow for a richer user experience as well as solve shortcomings in the original design.
+- **[Gateway API](https://gateway-api.sigs.k8s.io/)** - A new CRD-based API managed by the [Kubernetes SIG-Network community](https://github.com/kubernetes/community/tree/master/sig-network) that aims to evolve Kubernetes service networking APIs in a vendor-neutral way.
+
+## Prerequisites
+
+See the [compatibility matrix](https://projectcontour.io/resources/compatibility-matrix/) for the Kubernetes versions Contour is supported with.
+
+RBAC must be enabled on your cluster.
+
+## Get started
+
+Getting started with Contour is as simple as one command.
+See the [Getting Started](https://projectcontour.io/getting-started) document.
+
+## Troubleshooting
+
+If you encounter issues, review the Troubleshooting section of [the docs](https://projectcontour.io/docs), [file an issue](https://github.com/projectcontour/contour/issues), or talk to us on the [#contour channel](https://kubernetes.slack.com/messages/contour) on the Kubernetes Slack server.
+
+## Contributing
+
+Thanks for taking the time to join our community and start contributing!
+
+- Please familiarize yourself with the [Code of Conduct](/CODE_OF_CONDUCT.md) before contributing.
+- See [CONTRIBUTING.md](/CONTRIBUTING.md) for information about setting up your environment, the workflow that we expect, and instructions on the developer certificate of origin that we require.
+- Check out the [open issues](https://github.com/projectcontour/contour/issues).
+- Join our Kubernetes Slack channel: [#contour](https://kubernetes.slack.com/messages/contour/)
+- Join the **Contour Community Meetings** - [schedule, notes, and recordings can be found here](https://projectcontour.io/community)
+- Find GOVERNANCE in our [Community repo](https://github.com/projectcontour/community)
+
+## Roadmap
+
+See [Contour's roadmap](https://github.com/projectcontour/community/blob/main/ROADMAP.md) to learn more about where we are headed.
+
+## Security
+
+### Security Audit
+
+A third party security audit was performed by Cure53 in December of 2020. You can see the full report [here](Contour_Security_Audit_Dec2020.pdf).
+
+### Reporting security vulnerabilities
+
+If you've found a security related issue, a vulnerability, or a potential vulnerability in Contour, please let the [Contour Security Team](mailto:cncf-contour-maintainers@lists.cncf.io) know with the details of the vulnerability. We'll send a confirmation email to acknowledge your report, and we'll send an additional email once we've confirmed or ruled out the issue.
+
+For further details please see our [security policy](SECURITY.md).
+
+## Changelog
+
+See [the list of releases](https://github.com/projectcontour/contour/releases) to find out about feature changes.
diff --git a/data/readmes/copa-v0120.md b/data/readmes/copa-v0120.md
new file mode 100644
index 0000000..74934b7
--- /dev/null
+++ b/data/readmes/copa-v0120.md
@@ -0,0 +1,69 @@
+# Copa - README (v0.12.0)
+
+**Repository**: https://github.com/project-copacetic/copacetic
+**Version**: v0.12.0
+
+---
+
+# Project Copacetic: Directly patch container image vulnerabilities
+
+
+[](https://codecov.io/gh/project-copacetic/copacetic)
+[](https://www.bestpractices.dev/projects/8031)
+[](https://api.securityscorecards.dev/projects/github.com/project-copacetic/copacetic)
+
+
+
+
+
+`copa` is a CLI tool written in [Go](https://golang.org) and based on [buildkit](https://github.com/moby/buildkit) that can be used to directly patch container images without full rebuilds. `copa` can also patch container images using the vulnerability scanning results from popular tools like [Trivy](https://github.com/aquasecurity/trivy).
+
+For more details and how to get started, please refer to [full documentation](https://project-copacetic.github.io/copacetic/).
+
+## Demo
+
+
+
+## Why?
+
+We needed the ability to patch containers quickly without going upstream for a full rebuild. As the window between [vulnerability disclosure and active exploitation continues to narrow](https://www.bleepingcomputer.com/news/security/hackers-scan-for-vulnerabilities-within-15-minutes-of-disclosure/), there is a growing operational need to patch critical security vulnerabilities in container images so they can be quickly redeployed into production. The need is especially acute when those vulnerabilities are:
+
+- inherited from base images several levels deep, where waiting for updated releases to percolate through the supply chain is not an option
+- found in 3rd-party app images you don't maintain, with update cadences that don't meet your security SLAs
+
+
+
+In addition to filling the operational gap not met by left-shift security practices and tools, the ability of `copa` to patch a container without requiring a rebuild of the container image provides other benefits:
+
+- Allows users other than the image publishers to also patch container images, such as DevSecOps engineers.
+- Reduces the storage and transmission costs of redistributing patched images by only creating an additional patch layer, instead of rebuilding the entire image which usually results in different layer hashes that break layer caching.
+- Reduces the turnaround time for patching a container image by not having to wait for base image updates and being a faster operation than a full image rebuild.
+- Reduces the complexity of patching the image from running a rebuild pipeline to running a single tool on the image.
+
+## How?
+
+The `copa` tool is an extensible engine that:
+
+1. Parses the needed update packages from the container image’s vulnerability report produced by a scanner like Trivy. New adapters can be written to accommodate more report formats.
+2. Obtains and processes the needed update packages using the appropriate package manager tools such as apt, apk, etc. New adapters can be written to support more package managers.
+3. Applies the resulting update binaries to the container image using buildkit.
+
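+
+As a sketch of that flow (the image name, tag, and report path are illustrative; the flags follow the Trivy and Copa documentation):
+
+```shell
+# 1. Scan the image for OS package vulnerabilities and save a JSON report.
+trivy image --vuln-type os --ignore-unfixed -f json -o nginx.report.json docker.io/library/nginx:1.21.6
+
+# 2. Patch the image using that report; the result is the original image plus one patch layer.
+copa patch -i docker.io/library/nginx:1.21.6 -r nginx.report.json -t 1.21.6-patched
+
+# 3. Rescan the patched tag to confirm the reported vulnerabilities are gone.
+trivy image docker.io/library/nginx:1.21.6-patched
+```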
+
+
+This approach is motivated by the core principles of making direct container patching broadly applicable and accessible:
+
+- **Copa supports patching _existing_ container images**.
+ - Devs don't need to build their images using specific tools or modify them in some way just to support container patching.
+- **Copa works with the existing vulnerability scanning and mitigation ecosystems**.
+ - Image publishers don't need to create new workflows for container patching since Copa supports patching container images using the security update packages already being published today.
+ - Consumers do not need to migrate to a new and potentially more limited support ecosystem for custom distros or change their container vulnerability scanning pipelines to include remediation, since Copa can be integrated seamlessly as an extra step to patch containers based on those scanning reports.
+- **Copa reduces the technical expertise and the waiting on dependencies needed to patch an image**.
+  - For OS package vulnerabilities, no specialized knowledge about a specific image is needed to patch it, as Copa relies on the vulnerability remediation knowledge already embedded in the reports produced by popular container scanning tools today.
+
+## Contributing
+
+There are several ways to get involved:
+* Join the [mailing list](https://groups.google.com/g/project-copa) to get notifications for releases, security announcements, etc.
+* Join the [biweekly community meetings](https://docs.google.com/document/d/1QdskbeCtgKcdWYHI6EXkLFxyzTCyVT6e8MgB3CaAhWI/edit#heading=h.294j02tlxam) to discuss development, issues, use cases, etc.
+* Join the [`#copacetic`](https://cloud-native.slack.com/archives/C071UU5QDKJ) channel on the [CNCF Slack](https://communityinviter.com/apps/cloud-native/cncf).
+
+The project welcomes contributions and suggestions that abide by the [CNCF Code of Conduct](./CODE_OF_CONDUCT.md).
diff --git a/data/readmes/coredns-v1131.md b/data/readmes/coredns-v1131.md
new file mode 100644
index 0000000..d9ebee6
--- /dev/null
+++ b/data/readmes/coredns-v1131.md
@@ -0,0 +1,319 @@
+# CoreDNS - README (v1.13.1)
+
+**Repository**: https://github.com/coredns/coredns
+**Version**: v1.13.1
+
+---
+
+[](https://coredns.io)
+
+[](https://godoc.org/github.com/coredns/coredns)
+
+
+[](https://circleci.com/gh/coredns/coredns)
+[](https://codecov.io/github/coredns/coredns?branch=master)
+[](https://hub.docker.com/r/coredns/coredns)
+[](https://goreportcard.com/report/coredns/coredns)
+[](https://bestpractices.coreinfrastructure.org/projects/1250)
+[](https://scorecard.dev/viewer/?uri=github.com/coredns/coredns)
+
+CoreDNS is a DNS server/forwarder, written in Go, that chains [plugins](https://coredns.io/plugins).
+Each plugin performs a (DNS) function.
+
+CoreDNS is a [Cloud Native Computing Foundation](https://cncf.io) graduated project.
+
+CoreDNS is a fast and flexible DNS server. The key word here is *flexible*: with CoreDNS you
+are able to do what you want with your DNS data by utilizing plugins. If some functionality is not
+provided out of the box you can add it by [writing a plugin](https://coredns.io/explugins).
+
+CoreDNS can listen for DNS requests coming in over:
+* UDP/TCP (go'old DNS).
+* TLS - DoT ([RFC 7858](https://tools.ietf.org/html/rfc7858)).
+* DNS over HTTP/2 - DoH ([RFC 8484](https://tools.ietf.org/html/rfc8484)).
+* DNS over QUIC - DoQ ([RFC 9250](https://tools.ietf.org/html/rfc9250)).
+* [gRPC](https://grpc.io) (not a standard).
+
+Currently CoreDNS is able to:
+
+* Serve zone data from a file; both DNSSEC (NSEC only) and DNS are supported (*file* and *auto*).
+* Retrieve zone data from primaries, i.e., act as a secondary server (AXFR only) (*secondary*).
+* Sign zone data on-the-fly (*dnssec*).
+* Load balancing of responses (*loadbalance*).
+* Allow for zone transfers, i.e., act as a primary server (*file* + *transfer*).
+* Automatically load zone files from disk (*auto*).
+* Caching of DNS responses (*cache*).
+* Use etcd as a backend (replacing [SkyDNS](https://github.com/skynetservices/skydns)) (*etcd*).
+* Use k8s (kubernetes) as a backend (*kubernetes*).
+* Serve as a proxy to forward queries to some other (recursive) nameserver (*forward*).
+* Provide metrics (by using Prometheus) (*prometheus*).
+* Provide query (*log*) and error (*errors*) logging.
+* Integrate with cloud providers (*route53*).
+* Support the CH class: `version.bind` and friends (*chaos*).
+* Support the RFC 5001 DNS name server identifier (NSID) option (*nsid*).
+* Profiling support (*pprof*).
+* Rewrite queries (qtype, qclass and qname) (*rewrite* and *template*).
+* Block ANY queries (*any*).
+* Provide DNS64 IPv6 Translation (*dns64*).
+
+And more. Each of the plugins is documented. See [coredns.io/plugins](https://coredns.io/plugins)
+for all in-tree plugins, and [coredns.io/explugins](https://coredns.io/explugins) for all
+out-of-tree plugins.
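+
+Several of the plugins above can be combined in a single `Corefile`; a minimal sketch (the upstream resolver, cache TTL, and metrics address are illustrative):
+
+~~~ corefile
+.:53 {
+    cache 30                  # cache responses for up to 30 seconds
+    prometheus localhost:9153 # expose Prometheus metrics
+    forward . 8.8.8.8:53      # forward everything to an upstream resolver
+    errors
+    log
+}
+~~~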
+
+## Compilation from Source
+
+To compile CoreDNS, we assume you have a working Go setup. See various tutorials if you don’t have
+that already configured.
+
+First, make sure your Go version is 1.24.0 or higher, as `go mod` support and other APIs are required.
+See [here](https://github.com/golang/go/wiki/Modules) for `go mod` details.
+Then, check out the project and run `make` to compile the binary:
+
+~~~
+$ git clone https://github.com/coredns/coredns
+$ cd coredns
+$ make
+~~~
+
+> **_NOTE:_** extra plugins may be enabled when building by setting the `COREDNS_PLUGINS` environment variable to a comma-separated list of plugins in the same format as `plugin.cfg`
+
+This should yield a `coredns` binary.
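+
+For example, to compile with an extra out-of-tree plugin, the variable takes `name:repo` entries as in `plugin.cfg` (the plugin entry below is illustrative):
+
+```shell
+COREDNS_PLUGINS="fanout:github.com/networkservicemesh/fanout" make
+```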
+
+## Compilation with Docker
+
+CoreDNS requires Go to compile. However, if you already have Docker installed and prefer not to
+set up a Go environment, you can build CoreDNS easily:
+
+```
+docker run --rm -i -t \
+ -v $PWD:/go/src/github.com/coredns/coredns -w /go/src/github.com/coredns/coredns \
+ golang:1.24 sh -c 'GOFLAGS="-buildvcs=false" make gen && GOFLAGS="-buildvcs=false" make'
+```
+
+The above command alone will generate the `coredns` binary.
+
+## Examples
+
+When starting CoreDNS without any configuration, it loads the
+[*whoami*](https://coredns.io/plugins/whoami) and [*log*](https://coredns.io/plugins/log) plugins
+and starts listening on port 53 (override with `-dns.port`). It should show the following:
+
+~~~ txt
+.:53
+CoreDNS-1.6.6
+linux/amd64, go1.16.10, aa8c32
+~~~
+
+You can query the running CoreDNS server with:
+
+~~~ txt
+dig @127.0.0.1 -p 53 www.example.com
+~~~
+
+Any query sent to port 53 should return some information: your sending address, port, and the protocol
+used. The query should also be logged to standard output.
+
+The configuration of CoreDNS is done through a file named `Corefile`. When CoreDNS starts, it looks
+for the `Corefile` in the current working directory. A `Corefile` for a CoreDNS server that listens
+on port `53` and enables the `whoami` plugin is:
+
+~~~ corefile
+.:53 {
+ whoami
+}
+~~~
+
+Sometimes port 53 is occupied by system processes. In that case you can modify the `Corefile` as
+given below so that the CoreDNS server starts on port 1053 instead.
+
+~~~ corefile
+.:1053 {
+ whoami
+}
+~~~
+
+If you have a `Corefile` without a port number specified it will, by default, use port 53, but you can
+override the port with the `-dns.port` flag: `coredns -dns.port 1053` runs the server on port 1053.
+
+You may import other text files into the `Corefile` using the _import_ directive. You can use globs to match multiple
+files with a single _import_ directive.
+
+~~~ txt
+.:53 {
+ import example1.txt
+}
+import example2.txt
+~~~
+
+You can use environment variables in the `Corefile` with `{$VARIABLE}`. Note that each environment variable is inserted
+into the `Corefile` as a single token. For example, an environment variable with a space in it will be treated as a single
+token, not as two separate tokens.
+
+~~~ txt
+.:53 {
+ {$ENV_VAR}
+}
+~~~
+
+A Corefile for a CoreDNS server that forwards any queries to an upstream DNS server (e.g., `8.8.8.8`) is as follows:
+
+~~~ corefile
+.:53 {
+ forward . 8.8.8.8:53
+ log
+}
+~~~
+
+Start CoreDNS and then query on that port (53). The query should be forwarded to 8.8.8.8 and the
+response will be returned. Each query should also show up in the log which is printed on standard
+output.
+
+To serve the DNSSEC-signed (NSEC) `example.org` on port 1053, with errors and logging sent to standard
+output, allow zone transfers to everybody, but specifically mention one IP address so that CoreDNS can
+send notifies to it:
+
+~~~ txt
+example.org:1053 {
+ file /var/lib/coredns/example.org.signed
+ transfer {
+ to * 2001:500:8f::53
+ }
+ errors
+ log
+}
+~~~
+
+Serve `example.org` on port 1053, but forward everything that does *not* match `example.org` to a
+recursive nameserver *and* rewrite ANY queries to HINFO.
+
+~~~ txt
+example.org:1053 {
+ file /var/lib/coredns/example.org.signed
+ transfer {
+ to * 2001:500:8f::53
+ }
+ errors
+ log
+}
+
+. {
+ any
+ forward . 8.8.8.8:53
+ errors
+ log
+}
+~~~
+
+IP addresses are also allowed. They are automatically converted to reverse zones:
+
+~~~ corefile
+10.0.0.0/24 {
+ whoami
+}
+~~~
+This means you are authoritative for `0.0.10.in-addr.arpa.`.
+
+This also works for IPv6 addresses. If for some reason you want to serve a zone named `10.0.0.0/24`,
+add the closing dot (`10.0.0.0/24.`), as this stops the conversion.
+
+This even works for CIDR (see RFC 1518 and 1519) addressing, e.g. `10.0.0.0/25`: CoreDNS will then
+check whether the `in-addr` request falls within the correct range.
+
+Listening on TLS (DoT) and for gRPC? Use:
+
+~~~ corefile
+tls://example.org grpc://example.org {
+ whoami
+}
+~~~
+
+Similarly, for QUIC (DoQ):
+
+~~~ corefile
+quic://example.org {
+ whoami
+ tls mycert mykey
+}
+~~~
+
+And for DNS over HTTP/2 (DoH) use:
+
+~~~ corefile
+https://example.org {
+ whoami
+ tls mycert mykey
+}
+~~~
+In this setup, CoreDNS is responsible for TLS termination.
+
+You can also start a DNS server serving DoH without TLS termination (plain HTTP), but beware that in such
+a scenario some kind of TLS-terminating proxy must sit in front of the CoreDNS instance and forward the
+DNS requests to it; otherwise clients will not be able to communicate with the server via DoH.
+
+~~~ corefile
+https://example.org {
+ whoami
+}
+~~~
+
+Specifying ports works in the same way:
+
+~~~ txt
+grpc://example.org:1443 https://example.org:1444 {
+ # ...
+}
+~~~
+
+When no transport protocol is specified the default `dns://` is assumed.
+
+## Community
+
+We're most active on GitHub (and Slack):
+
+- GitHub: <https://github.com/coredns/coredns>
+- Slack: #coredns on <https://slack.cncf.io>
+
+More resources can be found:
+
+- Website: <https://coredns.io>
+- Blog: <https://blog.coredns.io>
+- Twitter: [@corednsio](https://twitter.com/corednsio)
+- Mailing list/group: <coredns-discuss@googlegroups.com> (not very active)
+
+## Contribution guidelines
+
+If you want to contribute to CoreDNS, be sure to review the [contribution
+guidelines](./.github/CONTRIBUTING.md).
+
+## Deployment
+
+Examples for deployment via systemd and other use cases can be found in the [deployment
+repository](https://github.com/coredns/deployment).
+
+## Deprecation Policy
+
+When there is a backwards incompatible change in CoreDNS the following process is followed:
+
+* Release x.y.z: Announce that in the next release we will make backward incompatible changes.
+* Release x.y+1.0: Increase the minor version and set the patch version to 0. Make the changes,
+ but allow the old configuration to be parsed. I.e. CoreDNS will start from an unchanged
+ Corefile.
+* Release x.y+1.1: Increase the patch version to 1. Remove the lenient parsing, so CoreDNS will
+ not start if those features are still used.
+
+E.g., 1.3.1 announces a change; 1.4.0 is a new release with the change but backward-compatible config;
+and finally, 1.4.1 removes the config workarounds.
+
+## Security
+
+### Security Audits
+
+Third party security audits have been performed by:
+* [Cure53](https://cure53.de) in March 2018. [Full Report](https://coredns.io/assets/DNS-01-report.pdf)
+* [Trail of Bits](https://www.trailofbits.com) in March 2022. [Full Report](https://github.com/trailofbits/publications/blob/master/reviews/CoreDNS.pdf)
+
+### Reporting security vulnerabilities
+
+If you find a security vulnerability or any security related issues, please DO NOT file a public
+issue, instead send your report privately to `security@coredns.io`. Security reports are greatly
+appreciated and we will publicly thank you for it.
+
+Please consult the [security vulnerability disclosures and security fix and release process
+document](https://github.com/coredns/coredns/blob/master/.github/SECURITY.md).
diff --git a/data/readmes/cortex-v1201.md b/data/readmes/cortex-v1201.md
new file mode 100644
index 0000000..47cfc1b
--- /dev/null
+++ b/data/readmes/cortex-v1201.md
@@ -0,0 +1,181 @@
+# Cortex - README (v1.20.1)
+
+**Repository**: https://github.com/cortexproject/cortex
+**Version**: v1.20.1
+
+---
+
+
+
+[](https://scorecard.dev/viewer/?uri=github.com/cortexproject/cortex)
+[](https://github.com/cortexproject/cortex/actions)
+[](https://godoc.org/github.com/cortexproject/cortex)
+
+
+
+[](https://clomonitor.io/projects/cncf/cortex)
+
+
+# Cortex
+
+Cortex is a horizontally scalable, highly available, multi-tenant, long-term storage solution for [Prometheus](https://prometheus.io) and [OpenTelemetry Metrics](https://opentelemetry.io/docs/specs/otel/metrics/).
+
+## Features
+
+- **Horizontally scalable:** Cortex can run across multiple machines in a cluster, exceeding the throughput and storage of a single machine.
+- **Highly available:** When run in a cluster, Cortex can replicate data between machines.
+- **Multi-tenant:** Cortex can isolate data and queries from multiple different independent Prometheus sources in a single cluster.
+- **Long-term storage:** Cortex supports S3, GCS, Swift and Microsoft Azure for long-term storage of metric data.
+
+## Documentation
+
+- [Getting Started](https://cortexmetrics.io/docs/getting-started/)
+- [Architecture Overview](https://cortexmetrics.io/docs/architecture/)
+- [Configuration](https://cortexmetrics.io/docs/configuration/)
+- [Guides](https://cortexmetrics.io/docs/guides/)
+- [Security](https://cortexmetrics.io/docs/guides/security/)
+- [Contributing](https://cortexmetrics.io/docs/contributing/)
+
+## Community and Support
+
+If you have any questions about Cortex, you can:
+
+- Ask a question on the [Cortex Slack channel](https://cloud-native.slack.com/messages/cortex/). To invite yourself to
+ the CNCF Slack, visit http://slack.cncf.io/.
+- [File an issue](https://github.com/cortexproject/cortex/issues/new).
+- Email [cortex-users@lists.cncf.io](mailto:cortex-users@lists.cncf.io).
+
+Your feedback is always welcome.
+
+For security issues see https://github.com/cortexproject/cortex/security/policy
+
+## Engage with Our Community
+
+We invite you to participate in the Cortex Community Calls, an exciting opportunity to connect with fellow
+developers and enthusiasts. These meetings are held every 4 weeks on Thursdays at 1700 UTC,
+providing a platform for open discussion, collaboration, and knowledge sharing.
+
+Our meeting notes are meticulously documented and can be
+accessed [here](https://docs.google.com/document/d/1shtXSAqp3t7fiC-9uZcKkq3mgwsItAJlH6YW6x1joZo/edit), offering a
+comprehensive overview of the topics discussed and decisions made.
+
+To ensure you never miss a meeting, we've made it easy for you to keep track:
+
+- View the Cortex Community Call schedule in your
+ browser [here](https://zoom-lfx.platform.linuxfoundation.org/meetings/cortex?view=month).
+- Alternatively, download the .ics
+ file [here](https://webcal.prod.itx.linuxfoundation.org/lfx/a092M00001IfTjPQAV) for
+ use with any calendar application or service that supports the iCal format.
+
+Join us in shaping the future of Cortex, and let's build something amazing together!
+
+## Further reading
+
+### Talks
+
+- Apr 2025 KubeCon talk "Cortex: Insights, Updates and Roadmap" ([video](https://youtu.be/3aUg2qxfoZU), [slides](https://static.sched.com/hosted_files/kccnceu2025/6c/Cortex%20Talk%20KubeCon%20EU%202025.pdf))
+- Apr 2025 KubeCon talk "Taming 50 Billion Time Series: Operating Global-Scale Prometheus Deployments on Kubernetes" ([video](https://youtu.be/OqLpKJwKZlk), [slides](https://static.sched.com/hosted_files/kccnceu2025/b2/kubecon%20-%2050b%20-%20final.pdf))
+- Nov 2024 KubeCon talk "Cortex Intro: Multi-Tenant Scalable Prometheus" ([video](https://youtu.be/OGAEWCoM6Tw), [slides](https://static.sched.com/hosted_files/kccncna2024/0f/Cortex%20Talk%20KubeCon%20US%202024.pdf))
+- Mar 2024 KubeCon talk "Cortex Intro: Multi-Tenant Scalable Prometheus" ([video](https://youtu.be/by538PPSPQ0), [slides](https://static.sched.com/hosted_files/kccnceu2024/a1/Cortex%20Talk%20KubeConEU24.pptx.pdf))
+- Apr 2023 KubeCon talk "How to Run a Rock Solid Multi-Tenant Prometheus" ([video](https://youtu.be/Pl5hEoRPLJU), [slides](https://static.sched.com/hosted_files/kccnceu2023/49/Kubecon2023.pptx.pdf))
+- Oct 2022 KubeCon talk "Current State and the Future of Cortex" ([video](https://youtu.be/u1SfBAGWHgQ), [slides](https://static.sched.com/hosted_files/kccncna2022/93/KubeCon%20%2B%20CloudNativeCon%20NA%202022%20PowerPoint%20-%20Cortex.pdf))
+- Oct 2021 KubeCon talk "Cortex: Intro and Production Tips" ([video](https://youtu.be/zNE_kGcUGuI), [slides](https://static.sched.com/hosted_files/kccncna2021/8e/KubeCon%202021%20NA%20Cortex%20Maintainer.pdf))
+- Sep 2020 KubeCon talk "Scaling Prometheus: How We Got Some Thanos Into Cortex" ([video](https://www.youtube.com/watch?v=Z5OJzRogAS4), [slides](https://static.sched.com/hosted_files/kccnceu20/ec/2020-08%20-%20KubeCon%20EU%20-%20Cortex%20blocks%20storage.pdf))
+- Jul 2020 PromCon talk "Sharing is Caring: Leveraging Open Source to Improve Cortex & Thanos" ([video](https://www.youtube.com/watch?v=2oTLouUvsac), [slides](https://docs.google.com/presentation/d/1OuKYD7-k9Grb7unppYycdmVGWN0Bo0UwdJRySOoPdpg/edit))
+- Nov 2019 KubeCon talks "[Cortex 101: Horizontally Scalable Long Term Storage for Prometheus][kubecon-cortex-101]" ([video][kubecon-cortex-101-video], [slides][kubecon-cortex-101-slides]), "[Configuring Cortex for Max
+ Performance][kubecon-cortex-201]" ([video][kubecon-cortex-201-video], [slides][kubecon-cortex-201-slides], [write up][kubecon-cortex-201-writeup]) and "[Blazin' Fast PromQL][kubecon-blazin]" ([slides][kubecon-blazin-slides], [video][kubecon-blazin-video], [write up][kubecon-blazin-writeup])
+- Nov 2019 PromCon talk "[Two Households, Both Alike in Dignity: Cortex and Thanos][promcon-two-households]" ([video][promcon-two-households-video], [slides][promcon-two-households-slides], [write up][promcon-two-households-writeup])
+- May 2019 KubeCon talks; "[Cortex: Intro][kubecon-cortex-intro]" ([video][kubecon-cortex-intro-video], [slides][kubecon-cortex-intro-slides], [blog post][kubecon-cortex-intro-blog]) and "[Cortex: Deep Dive][kubecon-cortex-deepdive]" ([video][kubecon-cortex-deepdive-video], [slides][kubecon-cortex-deepdive-slides])
+- Nov 2018 CloudNative London meetup talk; "Cortex: Horizontally Scalable, Highly Available Prometheus" ([slides][cloudnative-london-2018-slides])
+- Aug 2018 PromCon panel; "[Prometheus Long-Term Storage Approaches][promcon-2018-panel]" ([video][promcon-2018-video])
+- Dec 2018 KubeCon talk; "[Cortex: Infinitely Scalable Prometheus][kubecon-2018-talk]" ([video][kubecon-2018-video], [slides][kubecon-2018-slides])
+- Aug 2017 PromCon talk; "[Cortex: Prometheus as a Service, One Year On][promcon-2017-talk]" ([video][promcon-2017-video], [slides][promcon-2017-slides], write up [part 1][promcon-2017-writeup-1], [part 2][promcon-2017-writeup-2], [part 3][promcon-2017-writeup-3])
+- Jun 2017 Prometheus London meetup talk; "Cortex: open-source, horizontally-scalable, distributed Prometheus" ([video][prometheus-london-2017-video])
+- Dec 2016 KubeCon talk; "Weave Cortex: Multi-tenant, horizontally scalable Prometheus as a Service" ([video][kubecon-2016-video], [slides][kubecon-2016-slides])
+- Aug 2016 PromCon talk; "Project Frankenstein: Multitenant, Scale-Out Prometheus": ([video][promcon-2016-video], [slides][promcon-2016-slides])
+
+### Blog Posts
+
+- Dec 2020 blog post "[How AWS and Grafana Labs are scaling Cortex for the cloud](https://aws.amazon.com/blogs/opensource/how-aws-and-grafana-labs-are-scaling-cortex-for-the-cloud/)"
+- Oct 2020 blog post "[How to switch Cortex from chunks to blocks storage (and why you won't look back)](https://grafana.com/blog/2020/10/19/how-to-switch-cortex-from-chunks-to-blocks-storage-and-why-you-wont-look-back/)"
+- Oct 2020 blog post "[Now GA: Cortex blocks storage for running Prometheus at scale with reduced operational complexity](https://grafana.com/blog/2020/10/06/now-ga-cortex-blocks-storage-for-running-prometheus-at-scale-with-reduced-operational-complexity/)"
+- Sep 2020 blog post "[A Tale of Tail Latencies](https://www.weave.works/blog/a-tale-of-tail-latencies)"
+- Aug 2020 blog post "[Scaling Prometheus: How we're pushing Cortex blocks storage to its limit and beyond](https://grafana.com/blog/2020/08/12/scaling-prometheus-how-were-pushing-cortex-blocks-storage-to-its-limit-and-beyond/)"
+- Jul 2020 blog post "[How blocks storage in Cortex reduces operational complexity for running Prometheus at massive scale](https://grafana.com/blog/2020/07/29/how-blocks-storage-in-cortex-reduces-operational-complexity-for-running-prometheus-at-massive-scale/)"
+- Mar 2020 blog post "[Cortex: Zone Aware Replication](https://kenhaines.net/cortex-zone-aware-replication/)"
+- Mar 2020 blog post "[How we're using gossip to improve Cortex and Loki availability](https://grafana.com/blog/2020/03/25/how-were-using-gossip-to-improve-cortex-and-loki-availability/)"
+- Jan 2020 blog post "[The Future of Cortex: Into the Next Decade](https://grafana.com/blog/2020/01/21/the-future-of-cortex-into-the-next-decade/)"
+- Feb 2019 blog post & podcast; "[Prometheus Scalability with Bryan Boreham][prometheus-scalability]" ([podcast][prometheus-scalability-podcast])
+- Feb 2019 blog post; "[How Aspen Mesh Runs Cortex in Production][aspen-mesh-2019]"
+- Dec 2018 CNCF blog post; "[Cortex: a multi-tenant, horizontally scalable Prometheus-as-a-Service][cncf-2018-blog]"
+- Nov 2018 CNCF TOC Presentation; "Horizontally Scalable, Multi-tenant Prometheus" ([slides][cncf-toc-presentation])
+- Sept 2018 blog post; "[What is Cortex?][what-is-cortex]"
+- Jul 2018 design doc; "[Cortex Query Optimisations][cortex-query-optimisation-2018]"
+- Jun 2016 design document; "[Project Frankenstein: A Multi Tenant, Scale Out Prometheus](http://goo.gl/prdUYV)"
+
+[kubecon-cortex-101]: https://kccncna19.sched.com/event/UaiH/cortex-101-horizontally-scalable-long-term-storage-for-prometheus-chris-marchbanks-splunk
+[kubecon-cortex-101-video]: https://www.youtube.com/watch?v=f8GmbH0U_kI
+[kubecon-cortex-101-slides]: https://static.sched.com/hosted_files/kccncna19/92/cortex_101.pdf
+[kubecon-cortex-201]: https://kccncna19.sched.com/event/UagC/performance-tuning-and-day-2-operations-goutham-veeramachaneni-grafana-labs
+[kubecon-cortex-201-slides]: https://static.sched.com/hosted_files/kccncna19/87/Taming%20Cortex_%20Configuring%20for%20maximum%20performance%281%29.pdf
+[kubecon-cortex-201-video]: https://www.youtube.com/watch?v=VuE5aDHDexU
+[kubecon-cortex-201-writeup]: https://grafana.com/blog/2019/12/02/kubecon-recap-configuring-cortex-for-maximum-performance-at-scale/
+[kubecon-blazin]: https://kccncna19.sched.com/event/UaWT/blazin-fast-promql-tom-wilkie-grafana-labs
+[kubecon-blazin-slides]: https://static.sched.com/hosted_files/kccncna19/0b/2019-11%20Blazin%27%20Fast%20PromQL.pdf
+[kubecon-blazin-video]: https://www.youtube.com/watch?v=yYgdZyeBOck
+[kubecon-blazin-writeup]: https://grafana.com/blog/2019/09/19/how-to-get-blazin-fast-promql/
+[promcon-two-households]: https://promcon.io/2019-munich/talks/two-households-both-alike-in-dignity-cortex-and-thanos/
+[promcon-two-households-video]: https://www.youtube.com/watch?v=KmJnmd3K3Ws&feature=youtu.be
+[promcon-two-households-slides]: https://promcon.io/2019-munich/slides/two-households-both-alike-in-dignity-cortex-and-thanos.pdf
+[promcon-two-households-writeup]: https://grafana.com/blog/2019/11/21/promcon-recap-two-households-both-alike-in-dignity-cortex-and-thanos/
+[kubecon-cortex-intro]: https://kccnceu19.sched.com/event/MPhX/intro-cortex-tom-wilkie-grafana-labs-bryan-boreham-weaveworks
+[kubecon-cortex-intro-video]: https://www.youtube.com/watch?v=_7Wnta-3-W0
+[kubecon-cortex-intro-slides]: https://static.sched.com/hosted_files/kccnceu19/af/Cortex%20Intro%20KubeCon%20EU%202019.pdf
+[kubecon-cortex-intro-blog]: https://grafana.com/blog/2019/05/21/grafana-labs-at-kubecon-the-latest-on-cortex/
+[kubecon-cortex-deepdive]: https://kccnceu19.sched.com/event/MPjK/deep-dive-cortex-tom-wilkie-grafana-labs-bryan-boreham-weaveworks
+[kubecon-cortex-deepdive-video]: https://www.youtube.com/watch?v=mYyFT4ChHio
+[kubecon-cortex-deepdive-slides]: https://static.sched.com/hosted_files/kccnceu19/52/Cortex%20Deep%20Dive%20KubeCon%20EU%202019.pdf
+[prometheus-scalability]: https://www.weave.works/blog/prometheus-scalability-with-bryan-boreham
+[prometheus-scalability-podcast]: https://softwareengineeringdaily.com/2019/01/21/prometheus-scalability-with-bryan-boreham/
+[aspen-mesh-2019]: https://www.weave.works/blog/how-aspen-mesh-runs-cortex-in-production
+[kubecon-2018-talk]: https://kccna18.sched.com/event/GrXL/cortex-infinitely-scalable-prometheus-bryan-boreham-weaveworks
+[kubecon-2018-video]: https://www.youtube.com/watch?v=iyN40FsRQEo
+[kubecon-2018-slides]: https://static.sched.com/hosted_files/kccna18/9b/Cortex%20CloudNativeCon%202018.pdf
+[cloudnative-london-2018-slides]: https://www.slideshare.net/grafana/cortex-horizontally-scalable-highly-available-prometheus
+[cncf-2018-blog]: https://www.cncf.io/blog/2018/12/18/cortex-a-multi-tenant-horizontally-scalable-prometheus-as-a-service/
+[cncf-toc-presentation]: https://docs.google.com/presentation/d/190oIFgujktVYxWZLhLYN4q8p9dtQYoe4sxHgn4deBSI/edit#slide=id.g3b8e2d6f7e_0_6
+[what-is-cortex]: https://medium.com/weaveworks/what-is-cortex-2c30bcbd247d
+[promcon-2018-panel]: https://promcon.io/2018-munich/talks/panel-discussion-prometheus-long-term-storage-approaches/
+[promcon-2018-video]: https://www.youtube.com/watch?v=3pTG_N8yGSU
+[prometheus-london-2017-video]: https://www.youtube.com/watch?v=Xi4jq2IUbLs
+[promcon-2017-talk]: https://promcon.io/2017-munich/talks/cortex-prometheus-as-a-service-one-year-on/
+[promcon-2017-video]: https://www.youtube.com/watch?v=_8DmPW4iQBQ
+[promcon-2017-slides]: https://promcon.io/2017-munich/slides/cortex-prometheus-as-a-service-one-year-on.pdf
+[promcon-2017-writeup-1]: https://kausal.co/blog/cortex-prometheus-aas-promcon-1/
+[promcon-2017-writeup-2]: https://kausal.co/blog/cortex-prometheus-aas-promcon-2/
+[promcon-2017-writeup-3]: https://kausal.co/blog/cortex-prometheus-aas-promcon-3/
+[cortex-query-optimisation-2018]: https://docs.google.com/document/d/1lsvSkv0tiAMPQv-V8vI2LZ8f4i9JuTRsuPI_i-XcAqY
+[kubecon-2016-video]: https://www.youtube.com/watch?v=9Uctgnazfwk
+[kubecon-2016-slides]: http://www.slideshare.net/weaveworks/weave-cortex-multitenant-horizontally-scalable-prometheus-as-a-service
+[promcon-2016-video]: https://youtu.be/3Tb4Wc0kfCM
+[promcon-2016-slides]: http://www.slideshare.net/weaveworks/project-frankenstein-a-multitenant-horizontally-scalable-prometheus-as-a-service
+
+## Hosted Cortex
+
+### Amazon Managed Service for Prometheus (AMP)
+
+[Amazon Managed Service for Prometheus (AMP)](https://aws.amazon.com/prometheus/) is a Prometheus-compatible monitoring service that makes it easy to monitor containerized applications at scale. It is a highly available, secure, and managed monitoring service for your containers. Get started [here](https://console.aws.amazon.com/prometheus/home). To learn more about AMP, reference our [documentation](https://docs.aws.amazon.com/prometheus/latest/userguide/what-is-Amazon-Managed-Service-Prometheus.html) and [Getting Started with AMP blog](https://aws.amazon.com/blogs/mt/getting-started-amazon-managed-service-for-prometheus/).
+
+## Emeritus Maintainers
+
+* Peter Štibraný @pstibrany
+* Marco Pracucci @pracucci
+* Bryan Boreham @bboreham
+* Goutham Veeramachaneni @gouthamve
+* Jacob Lisi @jtlisi
+* Tom Wilkie @tomwilkie
+* Alvin Lin @alvinlin123
+
+## History of Cortex
+
+The Cortex project was started by Tom Wilkie and Julius Volz (Prometheus' co-founder) in June 2016.
diff --git a/data/readmes/cosign-v302.md b/data/readmes/cosign-v302.md
new file mode 100644
index 0000000..230e9e6
--- /dev/null
+++ b/data/readmes/cosign-v302.md
@@ -0,0 +1,787 @@
+# Cosign - README (v3.0.2)
+
+**Repository**: https://github.com/sigstore/cosign
+**Version**: v3.0.2
+
+---
+
+
+
+
+
+# cosign
+
+Signing OCI containers (and other artifacts) using [Sigstore](https://sigstore.dev/)!
+
+[](https://goreportcard.com/report/github.com/sigstore/cosign)
+[](https://github.com/sigstore/cosign/actions/workflows/e2e-tests.yml)
+[](https://bestpractices.coreinfrastructure.org/projects/5715)
+[](https://securityscorecards.dev/viewer/?uri=github.com/sigstore/cosign)
+
+Cosign aims to make signatures **invisible infrastructure**.
+
+Cosign supports:
+
+* "Keyless signing" with the Sigstore public good Fulcio certificate authority and Rekor transparency log (default)
+* Hardware and KMS signing
+* Signing with a cosign generated encrypted private/public keypair
+* Container Signing, Verification and Storage in an OCI registry
+* Bring-your-own PKI
+
+## Info
+
+`Cosign` is developed as part of the [`sigstore`](https://sigstore.dev) project.
+We also use a [slack channel](https://sigstore.slack.com)!
+Click [here](https://join.slack.com/t/sigstore/shared_invite/zt-2ub0ztl5z-PkWb_Ldwef5d6nb~oryaTA) for the invite link.
+
+## Installation
+
+For Homebrew, Arch, Nix, GitHub Action, and Kubernetes installs see the [installation docs](https://docs.sigstore.dev/cosign/system_config/installation/).
+
+For Linux and macOS binaries see the [GitHub release assets](https://github.com/sigstore/cosign/releases/latest).
+
+:rotating_light: If you are downloading releases of cosign from our GCS bucket - please see more information on the July 31, 2023 [deprecation notice](https://blog.sigstore.dev/cosign-releases-bucket-deprecation/) :rotating_light:
+
+## Developer Installation
+
+If you have Go 1.22+, you can set up a development environment:
+
+```shell
+$ git clone https://github.com/sigstore/cosign
+$ cd cosign
+$ go install ./cmd/cosign
+$ $(go env GOPATH)/bin/cosign
+```
+
+## Contributing
+
+If you are interested in contributing to `cosign`, please read the [contributing documentation](./CONTRIBUTING.md).
+
+Future Cosign development will be focused on the next major release, which will be based on
+[sigstore-go](https://github.com/sigstore/sigstore-go). Maintainers will be focused on feature development within
+sigstore-go. Contributions to sigstore-go, particularly around bring-your-own keys and signing, are appreciated.
+Please see the [issue tracker](https://github.com/sigstore/sigstore-go/issues) for good first issues.
+
+Cosign 2.x is a stable release and will continue to receive periodic feature updates and bug fixes. PRs
+that are small in scope and size are most likely to be quickly reviewed.
+
+PRs which significantly modify or break the API will not be accepted. PRs which are significant in size but do not
+introduce breaking changes may be accepted, but will be considered lower priority than PRs in sigstore-go.
+
+## Dockerfile
+
+Here is how to install and use cosign inside a Dockerfile through the ghcr.io/sigstore/cosign/cosign image:
+
+```dockerfile
+FROM ghcr.io/sigstore/cosign/cosign:v2.4.1 AS cosign-bin
+
+# Source: https://github.com/chainguard-images/static
+FROM cgr.dev/chainguard/static:latest
+COPY --from=cosign-bin /ko-app/cosign /usr/local/bin/cosign
+ENTRYPOINT [ "cosign" ]
+```
+
+## Quick Start
+
+This shows how to:
+* sign a container image with the default identity-based "keyless signing" method (see [the documentation for more information](https://docs.sigstore.dev/cosign/signing/overview/))
+* verify the container image
+
+### Sign a container and store the signature in the registry
+
+Note that you should always sign images based on their digest (`@sha256:...`)
+rather than a tag (`:latest`) because otherwise you might sign something you
+didn't intend to!
+
+```shell
+$ cosign sign $IMAGE
+
+Generating ephemeral keys...
+Retrieving signed certificate...
+
+ Note that there may be personally identifiable information associated with this signed artifact.
+ This may include the email address associated with the account with which you authenticate.
+ This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later.
+
+By typing 'y', you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs.
+Are you sure you would like to continue? [y/N] y
+Your browser will now be opened to:
+https://oauth2.sigstore.dev/auth/auth?access_type=online&client_id=sigstore&code_challenge=OrXitVKUZm2lEWHVt1oQWR4HZvn0rSlKhLcltglYxCY&code_challenge_method=S256&nonce=2KvOWeTFxYfxyzHtssvlIXmY6Jk&redirect_uri=http%3A%2F%2Flocalhost%3A57102%2Fauth%2Fcallback&response_type=code&scope=openid+email&state=2KvOWfbQJ1caqScgjwibzK2qJmb
+Successfully verified SCT...
+tlog entry created with index: 12086900
+Pushing signature to: $IMAGE
+```
+
+Cosign will prompt you to authenticate via OIDC, where you'll sign in with your email address.
+Under the hood, cosign will request a code signing certificate from the Fulcio certificate authority.
+The subject of the certificate will match the email address you logged in with.
+Cosign will then store the signature and certificate in the Rekor transparency log, and upload the signature to the OCI registry alongside the image you're signing.
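A script can enforce the digest-pinning advice from the note above before ever calling `cosign sign`. A minimal sketch with a hypothetical helper function (the image name and digest are illustrative):

```shell
# Refuse to proceed with references that are not pinned to a digest.
require_digest() {
  case "$1" in
    *@sha256:*) return 0 ;;
    *) echo "refusing to sign tag reference: $1" >&2; return 1 ;;
  esac
}

require_digest "example.com/app@sha256:87ef60f558bad79beea6425a3b28989f01dd417164150ab3baab98dcbf04def8" && echo "ok"
```

A wrapper around `cosign sign` could call this first and abort on tag references like `example.com/app:latest`.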
+
+
+### Verify a container
+
+To verify the image, you'll need to pass in the expected certificate subject and certificate issuer via the `--certificate-identity` and `--certificate-oidc-issuer` flags:
+
+```
+cosign verify $IMAGE --certificate-identity=$IDENTITY --certificate-oidc-issuer=$OIDC_ISSUER
+```
+
+You can also pass in a regex for the certificate identity and issuer flags, `--certificate-identity-regexp` and `--certificate-oidc-issuer-regexp`.
+
+### Verify a container against a public key
+
+This command returns `0` if *at least one* `cosign` formatted signature for the image is found
+matching the public key.
+See the detailed usage below for information and caveats on other signature formats.
+
+Any valid payloads are printed to stdout in JSON format.
+Note that these signed payloads include the digest of the container image, which is how we can be
+sure these "detached" signatures cover the correct image.
+
+```shell
+$ cosign verify --key cosign.pub $IMAGE_URI:1h
+The following checks were performed on these signatures:
+ - The cosign claims were validated
+ - The signatures were verified against the specified public key
+{"Critical":{"Identity":{"docker-reference":""},"Image":{"Docker-manifest-digest":"sha256:87ef60f558bad79beea6425a3b28989f01dd417164150ab3baab98dcbf04def8"},"Type":"cosign container image signature"},"Optional":null}
+```
+
+### Verify a container in an air-gapped environment
+
+**Note:** This section is out of date.
+
+**Note:** Most verification workflows require periodically requesting service keys from a TUF repository.
+For airgapped verification of signatures using the public-good instance, you will need to retrieve the
+[trusted root](https://github.com/sigstore/root-signing/blob/main/targets/trusted_root.json) file from the production
+TUF repository. The contents of this file will change without notification. Because you are not using TUF, you will need
+to build your own mechanism to keep your air-gapped copy of this file up to date.
+
+Cosign can do completely offline verification by verifying a [bundle](./specs/SIGNATURE_SPEC.md#properties) which is typically distributed as an annotation on the image manifest.
+As long as this annotation is present, offline verification can be done.
+This bundle annotation is always included by default for keyless signing, so the default `cosign sign` functionality will include all materials needed for offline verification.
+
+To verify an image in an air-gapped environment, the image and signatures must be available locally on the filesystem.
+
+An image can be saved locally using `cosign save` (note, this step must be done with a network connection):
+
+```
+cosign initialize # This will pull in the latest TUF root
+cosign save $IMAGE_NAME --dir ./path/to/dir
+```
+
+Now, in an air-gapped environment, this local image can be verified:
+
+```shell
+# --new-bundle-format=false is needed for artifacts signed without the new protobuf bundle format.
+# The --trusted-root path below is the default location of the trusted root.
+cosign verify \
+    --certificate-identity $CERT_IDENTITY \
+    --certificate-oidc-issuer $CERT_OIDC_ISSUER \
+    --offline=true \
+    --new-bundle-format=false \
+    --trusted-root ~/.sigstore/root/tuf-repo-cdn.sigstore.dev/targets/trusted_root.json \
+    --local-image ./path/to/dir
+```
+
+You'll need to pass in expected values for `$CERT_IDENTITY` and `$CERT_OIDC_ISSUER` to correctly verify this image.
+If you signed with a keypair, the same command will work, assuming the public key material is present locally:
+
+```
+cosign verify --key cosign.pub --offline --local-image ./path/to/dir
+```
+
+### What is **not** production ready?
+
+While parts of `cosign` are stable, we are continuing to experiment and add new features.
+The following feature set is not considered stable yet, but we are committed to stabilizing it over time!
+
+#### Formats/Specifications
+
+While the `cosign` code for uploading, signing, retrieving, and verifying several artifact types is stable,
+the format specifications for some of those types may not be considered stable yet.
+Some of these are developed outside of the `cosign` project, so we are waiting for them to stabilize first.
+
+These include:
+
+* The SBOM specification for storing SBOMs in a container registry
+* The In-Toto attestation format
+
+## Working with Other Artifacts
+
+OCI registries are useful for storing more than just container images!
+`Cosign` also includes some utilities for publishing generic artifacts, including binaries, scripts, and configuration files using the OCI protocol.
+
+This section shows how to leverage these for an easy-to-use, backwards-compatible artifact distribution system that integrates well with the rest of Sigstore.
+
+See [the documentation](https://docs.sigstore.dev/cosign/signing/other_types/) for more information.
+
+### Blobs
+
+You can publish an artifact with `cosign upload blob`:
+
+```shell
+$ echo "my first artifact" > artifact
+$ BLOB_SUM=$(shasum -a 256 artifact | cut -d' ' -f 1) && echo "$BLOB_SUM"
+c69d72c98b55258f9026f984e4656f0e9fd3ef024ea3fac1d7e5c7e6249f1626
+$ BLOB_NAME=my-artifact-$(uuidgen | head -c 8 | tr 'A-Z' 'a-z')
+$ BLOB_URI=ttl.sh/$BLOB_NAME:1h
+
+$ BLOB_URI_DIGEST=$(cosign upload blob -f artifact $BLOB_URI) && echo "$BLOB_URI_DIGEST"
+Uploading file from [artifact] to [ttl.sh/my-artifact-f42c22e0:5m] with media type [text/plain]
+File [artifact] is available directly at [ttl.sh/v2/my-artifact-f42c22e0/blobs/sha256:c69d72c98b55258f9026f984e4656f0e9fd3ef024ea3fac1d7e5c7e6249f1626]
+Uploaded image to:
+ttl.sh/my-artifact-f42c22e0@sha256:790d47850411e902aabebc3a684eeb78fcae853d4dd6e1cc554d70db7f05f99f
+```
+
+Your users can download it from the "direct" url with standard tools like curl or wget:
+
+```shell
+$ curl -L ttl.sh/v2/$BLOB_NAME/blobs/sha256:$BLOB_SUM > artifact-fetched
+```
+
+The digest is baked right into the URL, so they can check that as well:
+
+```shell
+$ cat artifact-fetched | shasum -a 256
+c69d72c98b55258f9026f984e4656f0e9fd3ef024ea3fac1d7e5c7e6249f1626 -
+```
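That digest comparison can be made to fail closed in a script. A minimal sketch (the file contents and expected digest come from the example above; it prefers `sha256sum` where available and falls back to `shasum`):

```shell
# Recreate the example artifact and verify its digest before trusting it.
expected="c69d72c98b55258f9026f984e4656f0e9fd3ef024ea3fac1d7e5c7e6249f1626"
printf 'my first artifact\n' > artifact-fetched

if command -v sha256sum >/dev/null 2>&1; then
  actual=$(sha256sum artifact-fetched | cut -d' ' -f1)
else
  actual=$(shasum -a 256 artifact-fetched | cut -d' ' -f1)
fi

if [ "$actual" = "$expected" ]; then
  echo "digest ok"
else
  echo "digest mismatch: $actual" >&2
  exit 1
fi
```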
+
+You can sign it with the normal `cosign sign` command and flags:
+
+```shell
+$ cosign sign --key cosign.key $BLOB_URI_DIGEST
+Enter password for private key:
+Pushing signature to: ttl.sh/my-artifact-f42c22e0
+```
+
+As usual, make sure to reference any images you sign by their digest to make sure you don't sign the wrong thing!
+
+#### Tekton Bundles
+
+[Tekton](https://tekton.dev) bundles can be uploaded and managed within an OCI registry.
+The specification is [here](https://tekton.dev/docs/pipelines/tekton-bundle-contracts/).
+This means they can also be signed and verified with `cosign`.
+
+Tekton Bundles can currently be uploaded with the [tkn cli](https://github.com/tektoncd/cli), but we may add this support to
+`cosign` in the future.
+
+```shell
+$ tkn bundle push us.gcr.io/dlorenc-vmtest2/pipeline:latest -f task-output-image.yaml
+Creating Tekton Bundle:
+ - Added TaskRun: to image
+
+Pushed Tekton Bundle to us.gcr.io/dlorenc-vmtest2/pipeline@sha256:124e1fdee94fe5c5f902bc94da2d6e2fea243934c74e76c2368acdc8d3ac7155
+$ cosign sign --key cosign.key us.gcr.io/dlorenc-vmtest2/pipeline@sha256:124e1fdee94fe5c5f902bc94da2d6e2fea243934c74e76c2368acdc8d3ac7155
+Enter password for private key:
+tlog entry created with index: 5086
+Pushing signature to: us.gcr.io/dlorenc-vmtest2/demo:sha256-124e1fdee94fe5c5f902bc94da2d6e2fea243934c74e76c2368acdc8d3ac7155.sig
+```
+
+#### WASM
+
+WebAssembly modules can also be stored in an OCI registry, using this [specification](https://github.com/solo-io/wasm/tree/master/spec).
+
+Cosign can upload these using the `cosign upload wasm` command:
+
+```shell
+$ cosign upload wasm -f hello.wasm us.gcr.io/dlorenc-vmtest2/wasm
+$ cosign sign --key cosign.key us.gcr.io/dlorenc-vmtest2/wasm@sha256:9e7a511fb3130ee4641baf1adc0400bed674d4afc3f1b81bb581c3c8f613f812
+Enter password for private key:
+tlog entry created with index: 5198
+Pushing signature to: us.gcr.io/dlorenc-vmtest2/wasm:sha256-9e7a511fb3130ee4641baf1adc0400bed674d4afc3f1b81bb581c3c8f613f812.sig
+```
+
+#### eBPF
+
+[eBPF](https://ebpf.io) modules can also be stored in an OCI registry, using this [specification](https://github.com/solo-io/bumblebee/tree/main/spec).
+
+The image in the example below was built using the `bee` tool. More information can be found [here](https://github.com/solo-io/bumblebee/).
+
+Cosign can then sign these images as it can any other OCI image.
+
+```shell
+$ bee build ./examples/tcpconnect/tcpconnect.c localhost:5000/tcpconnect:test
+$ bee push localhost:5000/tcpconnect:test
+$ cosign sign --key cosign.key localhost:5000/tcpconnect@sha256:7a91c50d922925f152fec96ed1d84b7bc6b2079c169d68826f6cf307f22d40e6
+Enter password for private key:
+Pushing signature to: localhost:5000/tcpconnect
+$ cosign verify --key cosign.pub localhost:5000/tcpconnect:test
+
+Verification for localhost:5000/tcpconnect:test --
+The following checks were performed on each of these signatures:
+ - The cosign claims were validated
+ - The signatures were verified against the specified public key
+
+[{"critical":{"identity":{"docker-reference":"localhost:5000/tcpconnect"},"image":{"docker-manifest-digest":"sha256:7a91c50d922925f152fec96ed1d84b7bc6b2079c169d68826f6cf307f22d40e6"},"type":"cosign container image signature"},"optional":null}]
+
+```
+
+#### In-Toto Attestations
+
+Cosign also has built-in support for [in-toto](https://in-toto.io) attestations.
+The specification for these is defined [here](https://github.com/in-toto/attestation).
+
+You can create and sign one from a local predicate file using the following commands:
+
+```shell
+$ cosign attest --predicate <predicate-file> --key cosign.key $IMAGE_URI_DIGEST
+```
+
+All of the standard key management systems are supported.
+Payloads are signed using the DSSE signing spec, defined [here](https://github.com/secure-systems-lab/dsse).
+
+To verify:
+
+```shell
+$ cosign verify-attestation --key cosign.pub $IMAGE_URI
+```
+
+## Detailed Usage
+
+See the [Usage documentation](https://docs.sigstore.dev/cosign/signing/overview/) for more information.
+
+## Hardware-based Tokens
+
+See the [Hardware Tokens documentation](https://docs.sigstore.dev/cosign/key_management/hardware-based-tokens/) for information on how to use `cosign` with hardware.
+
+## Registry Support
+
+`cosign` uses [go-containerregistry](https://github.com/google/go-containerregistry) for registry
+interactions, which has generally excellent compatibility, but some registries may have quirks.
+
+Today, `cosign` has been tested and works against the following registries:
+
+* AWS Elastic Container Registry
+* GCP's Artifact Registry and Container Registry
+* Docker Hub
+* Azure Container Registry
+* JFrog Artifactory Container Registry
+* The CNCF distribution/distribution Registry
+* GitLab Container Registry
+* GitHub Container Registry
+* The CNCF Harbor Registry
+* Digital Ocean Container Registry
+* Sonatype Nexus Container Registry
+* Alibaba Cloud Container Registry
+* Red Hat Quay Container Registry 3.6+ / Red Hat quay.io
+* Elastic Container Registry
+* IBM Cloud Container Registry
+* Cloudsmith Container Registry
+* The CNCF zot Registry
+* OVHcloud Managed Private Registry
+
+We aim for wide registry support. To `sign` images in registries which do not yet fully support [OCI media types](https://github.com/sigstore/cosign/blob/main/specs/SIGNATURE_SPEC.md), one may need to use `COSIGN_DOCKER_MEDIA_TYPES` to fall back to legacy equivalents. For example:
+
+```shell
+COSIGN_DOCKER_MEDIA_TYPES=1 cosign sign --key cosign.key legacy-registry.example.com/my/image@$DIGEST
+```
+
+Please help test and file bugs if you see issues!
+Instructions can be found in the [tracking issue](https://github.com/sigstore/cosign/issues/40).
+
+## Caveats
+
+### Intentionally Missing Features
+
+`cosign` only generates ECDSA-P256 keys and uses SHA256 hashes, for both ephemeral keyless signing and managed key signing.
+Keys are stored in PEM-encoded PKCS8 format.
+However, you can use `cosign` to store and retrieve signatures in any format, from any algorithm.
+
+### Things That Should Probably Change
+
+#### Payload Formats
+
+`cosign` only supports Red Hat's [simple signing](https://www.redhat.com/en/blog/container-image-signing)
+format for payloads.
+That looks like:
+
+```json
+{
+ "critical": {
+ "identity": {
+ "docker-reference": "testing/manifest"
+ },
+ "image": {
+ "Docker-manifest-digest": "sha256:20be...fe55"
+ },
+ "type": "cosign container image signature"
+ },
+ "optional": {
+ "creator": "Bob the Builder",
+ "timestamp": 1458239713
+ }
+}
+```
+
+**Note:** This can be generated for an image reference using `cosign generate $IMAGE_URI_DIGEST`.
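When scripting against this payload (for example, the JSON that `cosign verify` prints), the manifest digest can be extracted without a full JSON parser. A rough sketch, assuming the compact single-line form shown in the verify examples (the digest value here is the truncated placeholder from the payload above):

```shell
# Extract the manifest digest from a simple signing payload (sketch; a real
# pipeline would feed in `cosign verify` output instead of a literal string).
payload='{"critical":{"identity":{"docker-reference":"testing/manifest"},"image":{"Docker-manifest-digest":"sha256:20be...fe55"},"type":"cosign container image signature"},"optional":null}'
digest=$(printf '%s' "$payload" | sed -n 's/.*manifest-digest": *"\([^"]*\)".*/\1/p')
echo "$digest"
```

A JSON-aware tool such as `jq` would be more robust if it is available.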
+
+I'm happy to switch this format to something else if it makes sense.
+See https://github.com/notaryproject/nv2/issues/40 for one option.
+
+#### Registry Details
+
+`cosign` signatures are stored as separate objects in the OCI registry, with only a weak
+reference back to the object they "sign".
+This means this relationship is opaque to the registry, and signatures *will not* be deleted
+or garbage-collected when the image is deleted.
+Similarly, they **can** easily be copied from one environment to another, but this is not
+automatic.
+
+Multiple signatures are stored in a list, which is unfortunately subject to a race condition today.
+To add a signature, clients orchestrate a "read-append-write" operation, so the last write
+will win in the case of contention.
+
+##### Specifying Registry
+
+`cosign` will default to storing signatures in the same repo as the image it is signing.
+To specify a different repo for signatures, you can set the `COSIGN_REPOSITORY` environment variable.
+
+This will replace the repo in the provided image like this:
+
+```shell
+$ export COSIGN_REPOSITORY=gcr.io/my-new-repo
+$ cosign sign --key cosign.key $IMAGE_URI_DIGEST
+```
+
+So the signature for `gcr.io/dlorenc-vmtest2/demo` will be stored in `gcr.io/my-new-repo/demo:sha256-DIGEST.sig`.
+
+Note: different registries might expect different formats for the "repository."
+
+* To use [GCR](https://cloud.google.com/container-registry), a registry name
+ like `gcr.io/$REPO` is sufficient, as in the example above.
+* To use [Artifact Registry](https://cloud.google.com/artifact-registry),
+ specify a full image name like
+ `$LOCATION-docker.pkg.dev/$PROJECT/$REPO/$STORAGE_IMAGE`, not just a
+ repository. For example,
+
+ ```shell
+ $ export COSIGN_REPOSITORY=us-docker.pkg.dev/my-new-repo/demo
+ $ cosign sign --key cosign.key $IMAGE_URI_DIGEST
+ ```
+
+ where the `sha256-DIGEST` will match the digest for
+ `gcr.io/dlorenc-vmtest2/demo`. Specifying just a repo like
+ `$LOCATION-docker.pkg.dev/$PROJECT/$REPO` will not work in Artifact Registry.
+
+
+## Signature Specification
+
+`cosign` is inspired by tools like [minisign](https://jedisct1.github.io/minisign/) and
+[signify](https://www.openbsd.org/papers/bsdcan-signify.html).
+
+Generated private keys are stored in PEM format.
+The keys are encrypted under a password, using scrypt as the KDF and nacl/secretbox for encryption.
+
+They have a PEM header of `ENCRYPTED SIGSTORE PRIVATE KEY`:
+
+```shell
+-----BEGIN ENCRYPTED SIGSTORE PRIVATE KEY-----
+...
+-----END ENCRYPTED SIGSTORE PRIVATE KEY-----
+```
+
+Public keys are stored on disk in PEM-encoded standard PKIX format with a header of `PUBLIC KEY`.
+```
+-----BEGIN PUBLIC KEY-----
+MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELigCnlLNKgOglRTx1D7JhI7eRw99
+QolE9Jo4QUxnbMy5nUuBL+UZF9qqfm/Dg1BNeHRThHzWh2ki9vAEgWEDOw==
+-----END PUBLIC KEY-----
+```
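Given those two fixed headers, a script can tell the key files apart with a simple `grep`. A small sketch (the file name is an example; the public key bytes are the ones shown above):

```shell
# Classify a cosign key file by its PEM header.
cat > example-key.pem <<'EOF'
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELigCnlLNKgOglRTx1D7JhI7eRw99
QolE9Jo4QUxnbMy5nUuBL+UZF9qqfm/Dg1BNeHRThHzWh2ki9vAEgWEDOw==
-----END PUBLIC KEY-----
EOF

if grep -q 'BEGIN ENCRYPTED SIGSTORE PRIVATE KEY' example-key.pem; then
  kind="encrypted private key"
elif grep -q 'BEGIN PUBLIC KEY' example-key.pem; then
  kind="public key"
else
  kind="unknown"
fi
echo "$kind"   # prints "public key" for the file above
```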
+
+## Storage Specification
+
+`cosign` stores signatures in an OCI registry, and uses a naming convention (tag based
+on the sha256 of what we're signing) for locating the signature index.
+
+
+
+
+
+`reg.example.com/ubuntu@sha256:703218c0465075f4425e58fac086e09e1de5c340b12976ab9eb8ad26615c3715` has signatures located at `reg.example.com/ubuntu:sha256-703218c0465075f4425e58fac086e09e1de5c340b12976ab9eb8ad26615c3715.sig`
+
+Roughly (ignoring ports in the hostname): `s/:/-/g` and `s/@/:/g` to find the signature index.
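That mangling can be sketched as a tiny helper (`sig_ref` is a hypothetical name for illustration; `cosign triangulate` is the supported way to resolve this):

```shell
# Map an image digest reference to the tag where cosign stores its signatures:
# swap the digest's ':' for '-', the '@' for ':', and append ".sig".
sig_ref() {
  repo=${1%%@*}      # e.g. reg.example.com/ubuntu
  digest=${1##*@}    # e.g. sha256:703218...
  echo "${repo}:$(printf '%s' "$digest" | tr ':' '-').sig"
}

sig_ref "reg.example.com/ubuntu@sha256:703218c0465075f4425e58fac086e09e1de5c340b12976ab9eb8ad26615c3715"
```

This reproduces the `ubuntu` example above; as noted, ports in the hostname are ignored by this sketch.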
+
+See [Race conditions](#registry-details) for some caveats around this strategy.
+
+Alternative implementations could use transparency logs, local filesystem, a separate repository
+registry, an explicit reference to a signature index, a new registry API, grafeas, etc.
+
+### Signing subjects
+
+`cosign` only works for artifacts stored as "manifests" in the registry today.
+The proposed mechanism is flexible enough to support signing arbitrary things.
+
+### KMS Support
+
+`cosign` supports using a KMS provider to generate and sign keys.
+Right now cosign supports Hashicorp Vault, AWS KMS, GCP KMS, and Azure Key Vault, and we hope to support more in the future!
+
+See the [KMS docs](https://docs.sigstore.dev/cosign/key_management/overview/) for more details.
+
+### OCI Artifacts
+
+Push an artifact to a registry using [oras](https://github.com/deislabs/oras) (in this case, `cosign` itself!):
+
+```shell
+$ oras push us-central1-docker.pkg.dev/dlorenc-vmtest2/test/artifact ./cosign
+Uploading f53604826795 cosign
+Pushed us-central1-docker.pkg.dev/dlorenc-vmtest2/test/artifact
+Digest: sha256:551e6cce7ed2e5c914998f931b277bc879e675b74843e6f29bc17f3b5f692bef
+```
+
+Now sign it! Using `cosign` of course:
+
+```shell
+$ cosign sign --key cosign.key us-central1-docker.pkg.dev/dlorenc-vmtest2/test/artifact@sha256:551e6cce7ed2e5c914998f931b277bc879e675b74843e6f29bc17f3b5f692bef
+Enter password for private key:
+Pushing signature to: us-central1-docker.pkg.dev/dlorenc-vmtest2/test/artifact:sha256-551e6cce7ed2e5c914998f931b277bc879e675b74843e6f29bc17f3b5f692bef.sig
+```
+
+Finally, verify `cosign` with `cosign` again:
+
+```shell
+$ cosign verify --key cosign.pub us-central1-docker.pkg.dev/dlorenc-vmtest2/test/artifact@sha256:551e6cce7ed2e5c914998f931b277bc879e675b74843e6f29bc17f3b5f692bef
+The following checks were performed on each of these signatures:
+ - The cosign claims were validated
+ - The claims were present in the transparency log
+ - The signatures were integrated into the transparency log when the certificate was valid
+ - The signatures were verified against the specified public key
+ - The code-signing certificate was verified using trusted certificate authority certificates
+
+{"Critical":{"Identity":{"docker-reference":""},"Image":{"Docker-manifest-digest":"sha256:551e6cce7ed2e5c914998f931b277bc879e675b74843e6f29bc17f3b5f692bef"},"Type":"cosign container image signature"},"Optional":null}
+```
+
+## FAQ
+
+### Why not use Notary v2?
+
+It's hard to answer this briefly.
+This post contains some comparisons:
+
+[Notary V2 and Cosign](https://medium.com/@dlorenc/notary-v2-and-cosign-b816658f044d)
+
+If you find other comparison posts, please send a PR here and we'll link them all.
+
+### Why not use containers/image signing?
+
+`containers/image` signing is close to `cosign`, and we reuse payload formats.
+`cosign` differs in that it signs with ECDSA-P256 keys instead of PGP, and stores
+signatures in the registry.
+
+### Why not use TUF?
+
+I believe this tool is complementary to TUF, and they can be used together.
+I haven't tried yet, but I think we can also reuse a registry for TUF storage.
+
+## Design Requirements
+
+* No external services for signature storage, querying, or retrieval
+* We aim for as much registry support as possible
+* Everything should work over the registry API
+* PGP should not be required at all.
+* Users must be able to find all signatures for an image
+* Signers can sign an image after push
+* Multiple entities can sign an image
+* Signing an image does not mutate the image
+* Pure-go implementation
+
+## Future Ideas
+
+### Registry API Changes
+
+The naming convention and read-modify-write update patterns we use to store things in
+a registry are a bit, well, "hacky".
+I think they're the best (only) real option available today, but if the registry API
+changes we can improve these.
+
+### Other Types
+
+`cosign` can sign anything in a registry.
+These examples show signing a single image, but you could also sign a multi-platform `Index`,
+or any other type of artifact.
+This includes Helm Charts, Tekton Pipelines, and anything else currently using OCI registries
+for distribution.
+
+This also means new artifact types can be uploaded to a registry and signed.
+One interesting type to store and sign would be TUF repositories.
+I haven't tried yet, but I'm fairly certain TUF could be implemented on top of this.
+
+### Tag Signing
+
+`cosign` signatures protect the digests of objects stored in a registry.
+The optional `annotations` support (via the `-a` flag to `cosign sign`) can be used to add extra
+data to the payload that is signed and protected by the signature.
+One use-case for this might be to sign a tag->digest mapping.
+
+If you would like to attest that a specific tag (or set of tags) should point at a digest, you can
+run something like:
+
+```shell
+$ docker push $IMAGE_URI
+The push refers to repository [dlorenc/demo]
+994393dc58e7: Pushed
+5m: digest: sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870 size: 528
+$ TAG=sign-me
+$ cosign sign --key cosign.key -a tag=$TAG $IMAGE_URI_DIGEST
+Enter password for private key:
+Pushing signature to: dlorenc/demo:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870.sig
+```
+
+Then you can verify that the tag->digest mapping is also covered in the signature, using the `-a` flag to `cosign verify`.
+This example verifies that the tag `$TAG` (which points to the digest `sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870`)
+has been signed, **and also** that the `tag` annotation has the value `sign-me`:
+
+```shell
+$ cosign verify --key cosign.pub -a tag=$TAG $IMAGE_URI | jq .
+{
+ "Critical": {
+ "Identity": {
+ "docker-reference": ""
+ },
+ "Image": {
+ "Docker-manifest-digest": "97fc222cee7991b5b061d4d4afdb5f3428fcb0c9054e1690313786befa1e4e36"
+ },
+ "Type": "cosign container image signature"
+ },
+ "Optional": {
+ "tag": "sign-me"
+ }
+}
+```
+
+Timestamps could also be added here, to implement TUF-style freeze-attack prevention.
+
+### Base Image/Layer Signing
+
+Again, `cosign` can sign anything in a registry.
+You could use `cosign` to sign an image that is intended to be used as a base image,
+and include that provenance metadata in resulting derived images.
+This could be used to enforce that an image was built from an authorized base image.
+
+Rough Idea:
+* OCI manifests have an ordered list of `layer` `Descriptors`, which can contain annotations.
+ See [here](https://github.com/opencontainers/image-spec/blob/master/manifest.md) for the
+ specification.
+* A base image is an ordered list of layers to which other layers are appended, as well as an
+ initial configuration object that is mutated.
+ * A derived image is free to completely delete/destroy/recreate the config from its base image,
+    so signing the config would provide limited value.
+* We can sign the full set of ordered base layers, and attach that signature as an annotation to
+ the **last** layer in the resulting child image.
+
+This example manifest represents an image that has been built from a base image with two
+layers.
+One additional layer is added, forming the final image.
+
+```json
+{
+ "schemaVersion": 2,
+ "config": {
+ "mediaType": "application/vnd.oci.image.config.v1+json",
+ "size": 7023,
+ "digest": "sha256:b5b2b2c507a0944348e0303114d8d93aaaa081732b86451d9bce1f432a537bc7"
+ },
+ "layers": [
+ {
+ "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
+ "size": 32654,
+ "digest": "sha256:9834876dcfb05cb167a5c24953eba58c4ac89b1adf57f28f2f9d09af107ee8f0"
+ },
+ {
+ "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
+ "size": 16724,
+ "digest": "sha256:3c3a4604a545cdc127456d94e421cd355bca5b528f4a9c1905b15da2eb4a4c6b",
+ "annotations": {
+ "dev.cosign.signature.baseimage": "Ejy6ipGJjUzMDoQFePWixqPBYF0iSnIvpMWps3mlcYNSEcRRZelL7GzimKXaMjxfhy5bshNGvDT5QoUJ0tqUAg=="
+ }
+ },
+ {
+ "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
+ "size": 73109,
+ "digest": "sha256:ec4b8955958665577945c89419d1af06b5f7636b4ac3da7f12184802ad867736"
+ }
+  ]
+}
+```
+
+Note that this could be applied recursively, for multiple intermediate base images.
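Tooling that wanted to recover such a base-image signature could scan the layer annotations. A rough sketch against a manifest shaped like the one above (the annotation key comes from this example, not a published standard, and the signature value here is a placeholder):

```shell
# Pull the base-image signature annotation out of a manifest (illustrative only;
# real tooling would fetch the manifest with crane or a registry client).
manifest='{"layers":[{"digest":"sha256:3c3a4604a545","annotations":{"dev.cosign.signature.baseimage":"PLACEHOLDERSIG=="}}]}'
sig=$(printf '%s' "$manifest" | sed -n 's/.*"dev\.cosign\.signature\.baseimage": *"\([^"]*\)".*/\1/p')
echo "$sig"
```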
+
+### Counter-Signing
+
+Cosign signatures (and their protected payloads) are stored as artifacts in a registry.
+These signature objects can also be signed, resulting in a new, "counter-signature" artifact.
+This "counter-signature" protects the signature (or set of signatures) **and** the referenced artifact, which allows
+it to act as an attestation to the **signature(s) themselves**.
+
+First, sign the artifact as usual, attaching an annotation so we can recognize this as the original signature later:
+
+```shell
+$ cosign sign --key cosign.key -a sig=original $IMAGE_URI_DIGEST
+Enter password for private key:
+Pushing signature to: dlorenc/demo:sha256-97fc222cee7991b5b061d4d4afdb5f3428fcb0c9054e1690313786befa1e4e36.sig
+$ cosign verify --key cosign.pub dlorenc/demo | jq .
+{
+ "Critical": {
+ "Identity": {
+ "docker-reference": ""
+ },
+ "Image": {
+ "Docker-manifest-digest": "97fc222cee7991b5b061d4d4afdb5f3428fcb0c9054e1690313786befa1e4e36"
+ },
+ "Type": "cosign container image signature"
+ },
+ "Optional": {
+ "sig": "original"
+ }
+}
+```
+
+
+
+Now give that signature a memorable name, then sign that:
+
+```shell
+$ crane tag $(cosign triangulate $IMAGE_URI) mysignature
+2021/02/15 20:22:55 dlorenc/demo:mysignature: digest: sha256:71f70e5d29bde87f988740665257c35b1c6f52dafa20fab4ba16b3b1f4c6ba0e size: 556
+$ cosign sign --key cosign.key -a sig=counter dlorenc/demo:mysignature
+Enter password for private key:
+Pushing signature to: dlorenc/demo:sha256-71f70e5d29bde87f988740665257c35b1c6f52dafa20fab4ba16b3b1f4c6ba0e.sig
+$ cosign verify --key cosign.pub dlorenc/demo:mysignature
+{"Critical":{"Identity":{"docker-reference":""},"Image":{"Docker-manifest-digest":"71f70e5d29bde87f988740665257c35b1c6f52dafa20fab4ba16b3b1f4c6ba0e"},"Type":"cosign container image signature"},"Optional":{"sig":"counter"}}
+```
+
+Finally, check the original signature:
+
+```shell
+$ crane manifest dlorenc/demo@sha256:71f70e5d29bde87f988740665257c35b1c6f52dafa20fab4ba16b3b1f4c6ba0e
+{
+ "schemaVersion": 2,
+ "config": {
+ "mediaType": "application/vnd.oci.image.config.v1+json",
+ "size": 233,
+ "digest": "sha256:3b25a088710d03f39be26629d22eb68cd277a01673b9cb461c4c24fbf8c81c89"
+ },
+ "layers": [
+ {
+ "mediaType": "application/vnd.oci.descriptor.v1+json",
+ "size": 217,
+ "digest": "sha256:0e79a356609f038089088ec46fd95f4649d04de989487220b1a0adbcc63fadae",
+ "annotations": {
+ "dev.sigstore.cosign/signature": "5uNZKEP9rm8zxAL0VVX7McMmyArzLqtxMTNPjPO2ns+5GJpBeXg+i9ILU+WjmGAKBCqiexTxzLC1/nkOzD4cDA=="
+ }
+ }
+ ]
+}
+```
+
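+The `sha256-<hex>.sig` tag that `cosign triangulate` resolves is derived purely from the image digest, as the `Pushing signature to:` lines in the transcripts above show. As an illustrative sketch (not cosign's actual implementation), the mapping can be reproduced in a few lines of Python:
+
+```python
+def signature_tag(repository: str, digest: str) -> str:
+    """Map an image digest to cosign's default signature tag.
+
+    Mirrors the `<algo>-<hex>.sig` scheme shown above; this is an
+    illustrative sketch, not cosign's actual implementation.
+    """
+    algo, _, hex_digest = digest.partition(":")
+    return f"{repository}:{algo}-{hex_digest}.sig"
+
+tag = signature_tag(
+    "dlorenc/demo",
+    "sha256:97fc222cee7991b5b061d4d4afdb5f3428fcb0c9054e1690313786befa1e4e36",
+)
+# tag matches the "Pushing signature to:" target in the first transcript
+```
+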
+## Release Cadence
+
+We cut releases as needed. Patch releases are cut to fix small bugs. Minor releases are
+cut periodically when multiple bugs have been fixed or features added. Major releases
+are cut when there are breaking changes.
+
+## Security
+
+Should you discover any security issues, please refer to sigstore's [security
+process](https://github.com/sigstore/.github/blob/main/SECURITY.md).
+
+## Bundle files in GitHub Release Assets
+
+The GitHub release assets for `cosign` contain Sigstore bundle files produced by [GoReleaser](https://github.com/sigstore/cosign/blob/ac999344eb381ae91455b0a9c5c267e747608d76/.goreleaser.yml#L166) while signing the cosign blob that is used to verify the integrity of the release binaries. This file is not used by cosign itself, but is provided for users who wish to [verify the integrity of the release binaries](https://docs.sigstore.dev/cosign/system_config/installation/#verifying-cosign-with-artifact-key).
diff --git a/data/readmes/couchdb-nouveau-01.md b/data/readmes/couchdb-nouveau-01.md
new file mode 100644
index 0000000..0776103
--- /dev/null
+++ b/data/readmes/couchdb-nouveau-01.md
@@ -0,0 +1,137 @@
+# Couchdb - README (nouveau-0.1)
+
+**Repository**: https://github.com/apache/couchdb
+**Version**: nouveau-0.1
+**Branch**: nouveau-0.1
+
+---
+
+Apache CouchDB README
+=====================
+
++---------+
+| |1| |2| |
++---------+
+
+.. |1| image:: https://ci-couchdb.apache.org/job/jenkins-cm1/job/FullPlatformMatrix/job/main/badge/icon?subject=main
+ :target: https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FFullPlatformMatrix/activity?branch=main
+.. |2| image:: https://readthedocs.org/projects/couchdb/badge/?version=main
+ :target: https://docs.couchdb.org/en/main/?badge=main
+
+Installation
+------------
+
+For a high-level guide to Unix-like systems, including Mac OS X and Ubuntu, see:
+
+ INSTALL.Unix
+
+For a high-level guide to Microsoft Windows, see:
+
+ INSTALL.Windows
+
+Follow the proper instructions to get CouchDB installed on your system.
+
+If you're having problems, skip to the next section.
+
+Documentation
+-------------
+
+We have documentation:
+
+ https://docs.couchdb.org/
+
+It includes a changelog:
+
+ https://docs.couchdb.org/en/latest/whatsnew/
+
+For troubleshooting or cryptic error messages, see:
+
+ https://docs.couchdb.org/en/latest/install/troubleshooting.html
+
+For general help, see:
+
+ https://couchdb.apache.org/#mailing-list
+
+We also have an IRC channel:
+
+ https://web.libera.chat/#couchdb
+
+The mailing lists provide a wealth of support and knowledge for you to tap into.
+Feel free to drop by with your questions or discussion. See the official CouchDB
+website for more information about our community resources.
+
+Verifying your Installation
+---------------------------
+
+Run a basic test suite for CouchDB by browsing here:
+
+ http://127.0.0.1:5984/_utils/#verifyinstall
+
+Getting started with developing
+-------------------------------
+
+**Quickstart:**
+
+
+.. image:: https://img.shields.io/static/v1?label=Remote%20-%20Containers&message=Open&color=blue&logo=visualstudiocode
+ :target: https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/apache/couchdb
+
+If you already have VS Code and Docker installed, you can click the badge above or
+`here <https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/apache/couchdb>`_
+to get started. Clicking these links will cause VS Code to automatically install the
+Remote - Containers extension if needed, clone the source code into a container volume,
+and spin up a dev container for use.
+
+This ``devcontainer`` will automatically run ``./configure && make`` the first time it is created.
+While this may take some extra time to spin up, this tradeoff means you will be able to
+run things like ``./dev/run``, ``./dev/run --admin=admin:admin``, ``./dev/run --with-admin-party-please``,
+and ``make check`` straight away. Subsequent startups should be quick.
+
+**Manual Dev Setup:**
+
+For more detail, read the README-DEV.rst file in this directory.
+
+Basically you just have to install the needed dependencies which are
+documented in the install docs and then run ``./configure && make``.
+
+You don't need to run ``make install`` after compiling, just use
+``./dev/run`` to spin up three nodes. You can add haproxy as a caching
+layer in front of this cluster by running ``./dev/run --with-haproxy
+--haproxy=/path/to/haproxy`` . You will now have a local cluster
+listening on port 5984.
+
+For Fauxton developers, fixing the admin party does not work via the button in
+Fauxton. To fix the admin party you have to run ``./dev/run`` with the ``--admin``
+flag, e.g. ``./dev/run --admin=username:password``. If you want to have an
+admin party, just omit the flag.
+
+Contributing to CouchDB
+-----------------------
+
+You can learn more about our contributing process here:
+
+ https://github.com/apache/couchdb/blob/main/CONTRIBUTING.md
+
+Cryptographic Software Notice
+-----------------------------
+
+This distribution includes cryptographic software. The country in which you
+currently reside may have restrictions on the import, possession, use, and/or
+re-export to another country, of encryption software. BEFORE using any
+encryption software, please check your country's laws, regulations and policies
+concerning the import, possession, or use, and re-export of encryption software,
+to see if this is permitted. See <https://www.wassenaar.org/> for more
+information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and Security
+(BIS), has classified this software as Export Commodity Control Number (ECCN)
+5D002.C.1, which includes information security software using or performing
+cryptographic functions with asymmetric algorithms. The form and manner of this
+Apache Software Foundation distribution makes it eligible for export under the
+License Exception ENC Technology Software Unrestricted (TSU) exception (see the
+BIS Export Administration Regulations, Section 740.13) for both object code and
+source code.
+
+The following provides more details on the included cryptographic software:
+
+CouchDB includes an HTTP client (ibrowse) with SSL functionality.
diff --git a/data/readmes/cozystack-v0383.md b/data/readmes/cozystack-v0383.md
new file mode 100644
index 0000000..faa63e7
--- /dev/null
+++ b/data/readmes/cozystack-v0383.md
@@ -0,0 +1,82 @@
+# Cozystack - README (v0.38.3)
+
+**Repository**: https://github.com/cozystack/cozystack
+**Version**: v0.38.3
+
+---
+
+
+
+
+[](https://opensource.org/)
+[](https://opensource.org/licenses/)
+[](https://cozystack.io/support/)
+[](https://github.com/cozystack/cozystack)
+[](https://github.com/cozystack/cozystack/releases/latest)
+[](https://github.com/cozystack/cozystack/graphs/contributors)
+
+# Cozystack
+
+**Cozystack** is a free PaaS platform and framework for building clouds.
+
+Cozystack is a [CNCF Sandbox Level Project](https://www.cncf.io/sandbox-projects/) that was originally built and sponsored by [Ænix](https://aenix.io/).
+
+With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters,
+Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
+
+Use Cozystack to build your own cloud or provide a cost-effective development environment.
+
+
+
+## Use-Cases
+
+* [**Using Cozystack to build a public cloud**](https://cozystack.io/docs/guides/use-cases/public-cloud/)
+You can use Cozystack as a backend for a public cloud
+
+* [**Using Cozystack to build a private cloud**](https://cozystack.io/docs/guides/use-cases/private-cloud/)
+You can use Cozystack as a platform to build a private cloud powered by the Infrastructure-as-Code approach
+
+* [**Using Cozystack as a Kubernetes distribution**](https://cozystack.io/docs/guides/use-cases/kubernetes-distribution/)
+You can use Cozystack as a Kubernetes distribution for Bare Metal
+
+
+## Documentation
+
+The documentation is located on the [cozystack.io](https://cozystack.io) website.
+
+Read the [Getting Started](https://cozystack.io/docs/getting-started/) section for a quick start.
+
+If you encounter any difficulties, start with the [troubleshooting guide](https://cozystack.io/docs/operations/troubleshooting/) and work your way through the process that we've outlined.
+
+## Versioning
+
+Versioning adheres to the [Semantic Versioning](http://semver.org/) principles.
+A full list of the available releases is available in the GitHub repository's [Release](https://github.com/cozystack/cozystack/releases) section.
+
+- [Roadmap](https://cozystack.io/docs/roadmap/)
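+
+Semantic version tags are machine-comparable. As a minimal, illustrative sketch (standard-library Python; not part of Cozystack), a release tag such as `v0.38.3` can be parsed and ordered like this:
+
+```python
+import re
+
+# v<MAJOR>.<MINOR>.<PATCH>, e.g. the v0.38.3 release this README documents
+SEMVER_TAG = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")
+
+def parse_release(tag: str) -> tuple[int, int, int]:
+    """Return (major, minor, patch) for a SemVer-style release tag."""
+    match = SEMVER_TAG.match(tag)
+    if match is None:
+        raise ValueError(f"not a semantic version tag: {tag!r}")
+    return tuple(int(part) for part in match.groups())
+```
+
+Because tuples compare element-wise, `parse_release("v0.38.3") > parse_release("v0.38.2")` holds, which is what makes ordering patch releases trivial.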
+
+## Contributions
+
+Contributions are highly appreciated and very welcome!
+
+In case of bugs, please check whether an issue has already been opened in the [GitHub Issues](https://github.com/cozystack/cozystack/issues) section.
+If it hasn't, you can open a new one. A detailed report will help us replicate the bug, assess it, and work on a fix.
+
+You can also express your intention to work on the fix on your own.
+Commits are used to generate the changelog, and their author will be referenced in it.
+
+If you have **Feature Requests**, please use the [Discussions' Feature Request section](https://github.com/cozystack/cozystack/discussions/categories/feature-requests).
+
+## Community
+
+You are welcome to join our [Telegram group](https://t.me/cozystack) and come to our weekly community meetings.
+Add them to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics) for convenience.
+
+## License
+
+Cozystack is licensed under Apache 2.0.
+The code is provided as-is with no warranties.
+
+## Commercial Support
+
+A list of companies providing commercial support for this project can be found on the [official site](https://cozystack.io/support/).
diff --git a/data/readmes/crane-v0207.md b/data/readmes/crane-v0207.md
new file mode 100644
index 0000000..91caa80
--- /dev/null
+++ b/data/readmes/crane-v0207.md
@@ -0,0 +1,157 @@
+# Crane - README (v0.20.7)
+
+**Repository**: https://github.com/google/go-containerregistry
+**Version**: v0.20.7
+
+---
+
+# go-containerregistry
+
+[](https://github.com/google/go-containerregistry/actions?query=workflow%3ABuild)
+[](https://godoc.org/github.com/google/go-containerregistry)
+[](https://codecov.io/gh/google/go-containerregistry)
+
+## Introduction
+
+This is a golang library for working with container registries.
+It's largely based on the [Python library of the same name](https://github.com/google/containerregistry).
+
+The following diagram shows the main types that this library handles.
+
+
+## Philosophy
+
+The overarching design philosophy of this library is to define interfaces that present an immutable
+view of resources (e.g. [`Image`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1#Image),
+[`Layer`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1#Layer),
+[`ImageIndex`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1#ImageIndex)),
+which can be backed by a variety of media (e.g. [registry](./pkg/v1/remote/README.md),
+[tarball](./pkg/v1/tarball/README.md), [daemon](./pkg/v1/daemon/README.md), ...).
+
+To complement these immutable views, we support functional mutations that produce new immutable views
+of the resulting resource (e.g. [mutate](./pkg/v1/mutate/README.md)). The end goal is to provide a
+set of versatile primitives that can compose to do extraordinarily powerful things efficiently and easily.
+
+Both the resource views and mutations may be lazy, eager, memoizing, etc., and most are optimized
+for common paths based on the tooling we have seen in the wild (e.g. writing new images from disk
+to the registry as a compressed tarball).
+
+
+### Experiments
+
+Over time, we will add new functionality under experimental environment variables listed here.
+
+| Env Var | Value(s) | What it does |
+|---------|----------|--------------|
+| `GGCR_EXPERIMENT_ESTARGZ` | `"1"` | ⚠️DEPRECATED⚠️: When enabled, this experiment directs `tarball.LayerFromOpener` to emit [estargz](https://github.com/opencontainers/image-spec/issues/815) compatible layers, which can be lazily loaded by an appropriately configured containerd. |
+
+
+### `v1.Image`
+
+#### Sources
+
+* [`remote.Image`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/remote#Image)
+* [`tarball.Image`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/tarball#Image)
+* [`daemon.Image`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/daemon#Image)
+* [`layout.Image`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/layout#Path.Image)
+* [`random.Image`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/random#Image)
+
+#### Sinks
+
+* [`remote.Write`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/remote#Write)
+* [`tarball.Write`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/tarball#Write)
+* [`daemon.Write`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/daemon#Write)
+* [`legacy/tarball.Write`](https://godoc.org/github.com/google/go-containerregistry/pkg/legacy/tarball#Write)
+* [`layout.AppendImage`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/layout#Path.AppendImage)
+
+### `v1.ImageIndex`
+
+#### Sources
+
+* [`remote.Index`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/remote#Index)
+* [`random.Index`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/random#Index)
+* [`layout.ImageIndexFromPath`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/layout#ImageIndexFromPath)
+
+#### Sinks
+
+* [`remote.WriteIndex`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/remote#WriteIndex)
+* [`layout.Write`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/layout#Write)
+
+### `v1.Layer`
+
+#### Sources
+
+* [`remote.Layer`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/remote#Layer)
+* [`tarball.LayerFromFile`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/tarball#LayerFromFile)
+* [`random.Layer`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/random#Layer)
+* [`stream.Layer`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/stream#Layer)
+
+#### Sinks
+
+* [`remote.WriteLayer`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/remote#WriteLayer)
+
+## Overview
+
+### `mutate`
+
+The simplest use for these libraries is to read from one source and write to another.
+
+For example,
+
+ * `crane pull` is `remote.Image -> tarball.Write`,
+ * `crane push` is `tarball.Image -> remote.Write`,
+ * `crane cp` is `remote.Image -> remote.Write`.
+
+However, often you actually want to _change something_ about an image.
+This is the purpose of the [`mutate`](pkg/v1/mutate) package, which exposes
+some commonly useful things to change about an image.
+
+### `partial`
+
+If you're trying to use this library with a different source or sink than it already supports,
+it can be somewhat cumbersome. The `Image` and `Layer` interfaces are pretty wide, with a lot
+of redundant information. This is somewhat by design, because we want to expose this information
+as efficiently as possible where we can, but again it is a pain to implement yourself.
+
+The purpose of the [`partial`](pkg/v1/partial) package is to make implementing a `v1.Image`
+much easier, by filling in all the derived accessors for you if you implement a minimal
+subset of `v1.Image`.
+
+### `transport`
+
+You might think our abstractions are bad and you just want to authenticate
+and send requests to a registry.
+
+This is the purpose of the [`transport`](pkg/v1/remote/transport) and [`authn`](pkg/authn) packages.
+
+## Tools
+
+This repo hosts some tools built on top of the library.
+
+### `crane`
+
+[`crane`](cmd/crane/README.md) is a tool for interacting with remote images
+and registries.
+
+### `gcrane`
+
+[`gcrane`](cmd/gcrane/README.md) is a GCR-specific variant of `crane` that has
+richer output for the `ls` subcommand and some basic garbage collection support.
+
+### `krane`
+
+[`krane`](cmd/krane/README.md) is a drop-in replacement for `crane` that supports
+common Kubernetes-based workload identity mechanisms using [`k8schain`](#k8schain)
+as a fallback to traditional authentication mechanisms.
+
+### `k8schain`
+
+[`k8schain`](pkg/authn/k8schain/README.md) implements the authentication
+semantics used by kubelets in a way that is easily consumable by this library.
+
+`k8schain` is not a standalone tool, but it is linked here for visibility.
+
+### Emeritus: [`ko`](https://github.com/google/ko)
+
+This tool was originally developed in this repo but has since been moved to its
+own repo.
diff --git a/data/readmes/cri-o-v1343.md b/data/readmes/cri-o-v1343.md
new file mode 100644
index 0000000..78fd386
--- /dev/null
+++ b/data/readmes/cri-o-v1343.md
@@ -0,0 +1,329 @@
+# CRI-O - README (v1.34.3)
+
+**Repository**: https://github.com/cri-o/cri-o
+**Version**: v1.34.3
+
+---
+
+
+
+
+
+# CRI-O - OCI-based implementation of Kubernetes Container Runtime Interface
+
+
+
+[](#)
+[](https://codecov.io/gh/cri-o/cri-o)
+[](https://github.com/cri-o/packaging)
+[](https://cri-o.github.io/cri-o)
+[](https://cri-o.github.io/cri-o/dependencies)
+[](https://godoc.org/github.com/cri-o/cri-o)
+[](https://scorecard.dev/viewer/?uri=github.com/cri-o/cri-o)
+[](https://bestpractices.coreinfrastructure.org/projects/2298)
+[](https://goreportcard.com/report/github.com/cri-o/cri-o)
+[](https://app.fossa.io/projects/git%2Bgithub.com%2Fcri-o%2Fcri-o?ref=badge_shield)
+[](awesome.md)
+[](https://gitpod.io/#https://github.com/cri-o/cri-o)
+
+
+
+- [Compatibility matrix: CRI-O ⬄ Kubernetes](#compatibility-matrix-cri-o--kubernetes)
+- [What is the scope of this project?](#what-is-the-scope-of-this-project)
+- [What is not in the scope of this project?](#what-is-not-in-the-scope-of-this-project)
+- [Roadmap](#roadmap)
+- [CI images and jobs](#ci-images-and-jobs)
+- [Commands](#commands)
+- [Configuration](#configuration)
+- [Security](#security)
+- [OCI Hooks Support](#oci-hooks-support)
+- [CRI-O Usage Transfer](#cri-o-usage-transfer)
+- [Communication](#communication)
+- [Awesome CRI-O](#awesome-cri-o)
+- [Getting started](#getting-started)
+ - [Installing CRI-O](#installing-cri-o)
+ - [Running Kubernetes with CRI-O](#running-kubernetes-with-cri-o)
+ - [The HTTP status API](#the-http-status-api)
+ - [Metrics](#metrics)
+ - [Tracing](#tracing)
+ - [Container Runtime Interface special cases](#container-runtime-interface-special-cases)
+ - [Debugging tips](#debugging-tips)
+- [Adopters](#adopters)
+- [Weekly Meeting](#weekly-meeting)
+- [Governance](#governance)
+- [AI Assistants](#ai-assistants)
+- [License Scan](#license-scan)
+
+
+## Compatibility matrix: CRI-O ⬄ Kubernetes
+
+CRI-O follows the Kubernetes release cycles with respect to its minor versions
+(`1.x.y`). Patch releases (`1.x.z`) for Kubernetes are not in sync with those from
+CRI-O, because they are scheduled for each month, whereas CRI-O provides
+them only if necessary. If a Kubernetes release goes [End of
+Life](https://kubernetes.io/releases/patch-releases/),
+then the corresponding CRI-O version can be considered End of Life as well.
+
+This means that CRI-O also follows the Kubernetes `n-2` release version skew
+policy when it comes to feature graduation, deprecation or removal. This also
+applies to features which are independent from Kubernetes. Nevertheless, feature
+backports to supported release branches, which are independent from Kubernetes
+or other tools like cri-tools, are still possible. This allows CRI-O to decouple
+from the Kubernetes release cycle and have enough flexibility when it comes to
+implementing new features. Every feature to be backported will be a case-by-case
+decision of the community, while the overall compatibility matrix should not be
+compromised.
+
+For more information visit the [Kubernetes Version Skew
+Policy](https://kubernetes.io/releases/version-skew-policy/).
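+
+As a rough illustration of the `n-2` skew described above (a simplified sketch, not an official compatibility check; it assumes a CRI-O minor version serves kubelets of the same minor or up to two minors older):
+
+```python
+def within_skew(crio_minor: int, kubelet_minor: int, max_skew: int = 2) -> bool:
+    """Simplified n-2 check: the kubelet may match CRI-O's minor version
+    or be at most `max_skew` minor versions older, but never newer."""
+    return 0 <= crio_minor - kubelet_minor <= max_skew
+
+assert within_skew(34, 34)      # same minor: supported
+assert within_skew(34, 32)      # two minors older: still within skew
+assert not within_skew(34, 31)  # three minors older: out of skew
+assert not within_skew(33, 34)  # kubelet newer than CRI-O: out of policy
+```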
+
+
+
+| CRI-O | Kubernetes | Maintenance status |
+| ------------------------------- | ------------------------------- | --------------------------------------------------------------------- |
+| `main` branch | `master` branch | Features from the main Kubernetes repository are actively implemented |
+| `release-1.x` branch (`v1.x.y`) | `release-1.x` branch (`v1.x.z`) | Maintenance is manual, only bugfixes will be backported. |
+
+
+
+The release notes for CRI-O are hand-crafted and can be continuously retrieved
+from [our GitHub pages website](https://cri-o.github.io/cri-o).
+
+## What is the scope of this project?
+
+CRI-O is meant to provide an integration path between OCI conformant runtimes and
+the Kubelet.
+Specifically, it implements the Kubelet [Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md)
+using OCI conformant runtimes.
+The scope of CRI-O is tied to the scope of the CRI.
+
+At a high level, we expect the scope of CRI-O to be restricted to the following functionalities:
+
+- Support multiple image formats including the existing Docker image format
+- Support for multiple means to download images including trust & image verification
+- Container image management (managing image layers, overlay filesystems, etc)
+- Container process lifecycle management
+- Monitoring and logging required to satisfy the CRI
+- Resource isolation as required by the CRI
+
+## What is not in the scope of this project?
+
+- Building, signing and pushing images to various image storages
+- A CLI utility for interacting with CRI-O. Any CLIs built as part of this project
+ are only meant for testing this project, and there are no guarantees of
+ backward compatibility for them.
+
+CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI)
+that will allow Kubernetes to directly launch and manage
+Open Container Initiative (OCI) containers.
+
+The plan is to use OCI projects and best of breed libraries for different aspects:
+
+- Runtime: [runc](https://github.com/opencontainers/runc)
+ (or any OCI runtime-spec implementation) and [oci runtime tools](https://github.com/opencontainers/runtime-tools)
+- Images: Image management using [container-libs/image](https://github.com/containers/container-libs/tree/main/image)
+- Storage: Storage and management of image layers using [container-libs/storage](https://github.com/containers/container-libs/tree/main/storage)
+- Networking: Networking support through the use of [CNI](https://github.com/containernetworking/cni)
+
+It is currently in active development in the Kubernetes community through the
+[design proposal](https://github.com/kubernetes/kubernetes/pull/26788).
+Questions and issues should be raised in the Kubernetes [sig-node Slack channel](https://kubernetes.slack.com/archives/sig-node).
+
+## Roadmap
+
+A roadmap that describes the direction of CRI-O can be found [here](/roadmap.md).
+The project is tracking all ongoing efforts as part of the [Feature Roadmap
+GitHub project](https://github.com/orgs/cri-o/projects/1).
+
+## CI images and jobs
+
+CRI-O's CI is split-up between GitHub actions and [OpenShift CI (Prow)](https://prow.ci.openshift.org).
+Relevant virtual machine images used for the prow jobs are built periodically in
+the jobs:
+
+- [periodic-ci-cri-o-cri-o-main-periodics-setup-periodic](https://prow.ci.openshift.org/?job=periodic-ci-cri-o-cri-o-main-periodics-setup-periodic)
+- [periodic-ci-cri-o-cri-o-main-periodics-setup-fedora-periodic](https://prow.ci.openshift.org/?job=periodic-ci-cri-o-cri-o-main-periodics-setup-fedora-periodic)
+- [periodic-ci-cri-o-cri-o-main-periodics-evented-pleg-periodic](https://prow.ci.openshift.org/?job=periodic-ci-cri-o-cri-o-main-periodics-evented-pleg-periodic)
+
+The jobs are maintained [from the openshift/release repository](https://github.com/openshift/release/blob/ecdeb0a/ci-operator/jobs/cri-o/cri-o/cri-o-cri-o-main-periodics.yaml)
+and define workflows used for the particular jobs. The actual job definitions
+can be found in the same repository under [ci-operator/jobs/cri-o/cri-o/cri-o-cri-o-main-presubmits.yaml](https://github.com/openshift/release/blob/ecdeb0a/ci-operator/jobs/cri-o/cri-o/cri-o-cri-o-main-presubmits.yaml)
+for the `main` branch as well as the corresponding files for the release
+branches. The base image configuration for those jobs is available in the same
+repository under [ci-operator/config/cri-o/cri-o](https://github.com/openshift/release/tree/ecdeb0a/ci-operator/config/cri-o/cri-o).
+
+## Commands
+
+| Command | Description |
+| -------------------------- | --------------------------------------- |
+| [crio(8)](/docs/crio.8.md) | OCI Kubernetes Container Runtime daemon |
+
+Examples of command-line tools to interact with CRI-O
+(or other CRI-compatible runtimes) are [crictl](https://github.com/kubernetes-sigs/cri-tools/releases)
+and [Podman](https://github.com/containers/podman).
+
+## Configuration
+
+
+
+| File | Description |
+| ----------------------------------------------------------------------------------------------------------------------- | ------------------------------------- |
+| [crio.conf(5)](/docs/crio.conf.5.md) | CRI-O Configuration file |
+| [policy.json(5)](https://github.com/containers/container-libs/blob/main/image/docs/containers-policy.json.5.md) | Signature Verification Policy File(s) |
+| [registries.conf(5)](https://github.com/containers/container-libs/blob/main/image/docs/containers-registries.conf.5.md) | Registries Configuration file |
+| [storage.conf(5)](https://github.com/containers/container-libs/blob/main/storage/docs/containers-storage.conf.5.md) | Storage Configuration file |
+
+
+
+For information about CRI-O annotations and their migration to Kubernetes-recommended naming conventions,
+see the [Annotation Migration Guide](ANNOTATION_MIGRATION.md).
+
+## Security
+
+The security process for reporting vulnerabilities is described in [SECURITY.md](./SECURITY.md).
+
+## OCI Hooks Support
+
+[You can configure CRI-O][podman-hooks] to inject
+[OCI Hooks][spec-hooks] when creating containers.
+
+## CRI-O Usage Transfer
+
+We provide [useful information for operations and development transfer](transfer.md)
+as it relates to infrastructure that utilizes CRI-O.
+
+## Communication
+
+For async communication and long-running discussions please use [issues](https://github.com/cri-o/cri-o/issues)
+and [pull requests](https://github.com/cri-o/cri-o/pulls) on the [GitHub repo](https://github.com/cri-o/cri-o).
+This will be the best place to discuss design and implementation.
+
+For chat communication, we have a [channel on the Kubernetes slack](https://kubernetes.slack.com/archives/crio)
+that everyone is welcome to join and chat about development.
+
+## Awesome CRI-O
+
+We maintain a curated [list of links related to CRI-O](awesome.md). Did you find
+something interesting on the web about the project? Awesome, feel free to open
+up a PR and add it to the list.
+
+## Getting started
+
+### Installing CRI-O
+
+To install `CRI-O`, you can follow our [installation guide](install.md).
+Alternatively, if you'd rather build `CRI-O` from source, check out our [setup
+guide](install.md#build-and-install-cri-o-from-source).
+
+### Running Kubernetes with CRI-O
+
+Before you begin, you'll need to [start CRI-O](install.md#starting-cri-o)
+
+You can run a local version of Kubernetes with `CRI-O` using `local-up-cluster.sh`:
+
+1. Clone the [Kubernetes repository](https://github.com/kubernetes/kubernetes)
+1. From the Kubernetes project directory, run:
+
+```console
+CGROUP_DRIVER=systemd \
+CONTAINER_RUNTIME=remote \
+CONTAINER_RUNTIME_ENDPOINT='unix:///var/run/crio/crio.sock' \
+./hack/local-up-cluster.sh
+```
+
+For more guidance in running `CRI-O`, visit our [tutorial page](tutorial.md)
+
+[podman-hooks]: https://github.com/containers/podman/blob/v3.0.1/pkg/hooks/README.md
+[spec-hooks]: https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
+
+#### The HTTP status API
+
+By default, CRI-O exposes the [gRPC](https://grpc.io/) API to fulfill the
+Container Runtime Interface (CRI) of Kubernetes. Besides this, there exists an
+additional HTTP API to retrieve further runtime status information about CRI-O.
+Please be aware that this API is not considered to be stable and production
+use-cases should not rely on it.
+
+On a running CRI-O instance, we can access the API via an HTTP transfer tool like
+[curl](https://curl.haxx.se):
+
+```console
+$ sudo curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info | jq
+{
+ "storage_driver": "btrfs",
+ "storage_root": "/var/lib/containers/storage",
+ "cgroup_driver": "systemd",
+ "default_id_mappings": { ... }
+}
+```
+
+The following API entry points are currently supported:
+
+
+
+| Path | Content-Type | Description |
+| ------------------- | ------------------ | ---------------------------------------------------------------------------------- |
+| `/info` | `application/json` | General information about the runtime, like `storage_driver` and `storage_root`. |
+| `/containers/:id` | `application/json` | Dedicated container information, like `name`, `pid` and `image`. |
+| `/config` | `application/toml` | The complete TOML configuration (defaults to `/etc/crio/crio.conf`) used by CRI-O. |
+| `/pause/:id` | `application/json` | Pause a running container. |
+| `/unpause/:id` | `application/json` | Unpause a paused container. |
+| `/debug/goroutines` | `text/plain` | Print the goroutine stacks. |
+| `/debug/heap` | `text/plain` | Write the heap dump. |
+
+
+
+The subcommand `crio status` can be used to access the API with a dedicated command
+line tool. It supports all API endpoints via the dedicated subcommands `config`,
+`info` and `containers`, for example:
+
+```console
+$ sudo crio status info
+cgroup driver: systemd
+storage driver: btrfs
+storage root: /var/lib/containers/storage
+default GID mappings (format <container>:<host>:<size>):
+ 0:0:4294967295
+default UID mappings (format <container>:<host>:<size>):
+ 0:0:4294967295
+```
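+
+The mapping lines in this output are `container:host:size` triples. A tiny parsing sketch (illustrative only, not part of CRI-O):
+
+```python
+def parse_id_mapping(line: str) -> dict:
+    """Split a container:host:size ID-mapping triple into named fields."""
+    container_id, host_id, size = (int(field) for field in line.strip().split(":"))
+    return {"container_id": container_id, "host_id": host_id, "size": size}
+
+# The default 0:0:4294967295 mapping covers the full 32-bit ID range
+mapping = parse_id_mapping("0:0:4294967295")
+```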
+
+#### Metrics
+
+Please refer to the [CRI-O Metrics guide](tutorials/metrics.md).
+
+#### Tracing
+
+Please refer to the [CRI-O Tracing guide](tutorials/tracing.md).
+
+#### Container Runtime Interface special cases
+
+Some aspects of the Container Runtime are worth some additional explanation.
+These details are summarized in a [dedicated guide](cri.md).
+
+#### Debugging tips
+
+Having an issue? There are some tips and tricks for debugging located in
+[our debugging guide](tutorials/debugging.md)
+
+## Adopters
+
+An incomplete list of adopters of CRI-O in production environments can be found [here](ADOPTERS.md).
+If you're a user, please help us complete it by submitting a pull request!
+
+## Weekly Meeting
+
+A weekly meeting is held to discuss CRI-O development. It is open to everyone.
+The details to join the meeting are on the [wiki](https://github.com/cri-o/cri-o/wiki/CRI-O-Weekly-Meeting).
+
+## Governance
+
+For more information on how CRI-O is governed, take a look at the [governance file](GOVERNANCE.md)
+
+## AI Assistants
+
+For AI coding assistants working with this codebase, see [AGENTS.md](AGENTS.md) for project context, workflow patterns, and development guidelines.
+
+## License Scan
+
+[](https://app.fossa.io/projects/git%2Bgithub.com%2Fcri-o%2Fcri-o?ref=badge_large)
diff --git a/data/readmes/crossplane-v213.md b/data/readmes/crossplane-v213.md
new file mode 100644
index 0000000..a93104d
--- /dev/null
+++ b/data/readmes/crossplane-v213.md
@@ -0,0 +1,172 @@
+# Crossplane - README (v2.1.3)
+
+**Repository**: https://github.com/crossplane/crossplane
+**Version**: v2.1.3
+
+---
+
+[](https://www.bestpractices.dev/projects/3260)  [](https://goreportcard.com/report/github.com/crossplane/crossplane)
+
+
+
+Crossplane is a framework for building cloud native control planes without
+needing to write code. It has a highly extensible backend that enables you to
+build a control plane that can orchestrate applications and infrastructure no
+matter where they run, and a highly configurable frontend that puts you in
+control of the schema of the declarative API it offers.
+
+Crossplane is a [Cloud Native Computing Foundation][cncf] project.
+
+## Get Started
+
+Crossplane's [Get Started Docs] covers install and resource quickstarts.
+
+## Releases
+
+[](https://github.com/crossplane/crossplane/releases) [](https://artifacthub.io/packages/helm/crossplane/crossplane)
+
+Currently maintained releases, as well as the next few upcoming releases, are
+listed below. For more information, take a look at the Crossplane [release cycle
+documentation].
+
+| Release | Release Date | EOL |
+|:-------:|:-------------:|:--------:|
+| v1.20 | May 21, 2025 | Feb 2026 |
+| v2.0 | Aug 8, 2025 | May 2026 |
+| v2.1 | Nov 5, 2025 | Aug 2026 |
+| v2.2 | Early Feb '26 | Nov 2026 |
+| v2.3 | Early May '26 | Feb 2027 |
+| v2.4 | Early Aug '26 | May 2027 |
+
+You can subscribe to the [community calendar] to track all release dates, and
+find the most recent releases on the [releases] page.
+
+The release process is fully documented in the [`crossplane/release`] repo.
+
+## Roadmap
+
+The public roadmap for Crossplane is published as a GitHub project board. Issues
+added to the roadmap have been triaged and identified as valuable to the
+community, and therefore a priority for the project that we expect to invest in.
+
+The maintainer team regularly triages requests from the community to identify
+features and issues of suitable scope and impact to include in this roadmap. The
+community is encouraged to show their support for potential roadmap issues by
+adding a :+1: reaction, leaving descriptive comments, and attending the
+[regular community meetings] to discuss their requirements and use cases.
+
+The maintainer team updates the roadmap as needed, in response to demand,
+priority, and available resources. The public roadmap can be updated at
+any time.
+
+Milestones assigned to any issues in the roadmap are intended to give a sense of
+overall priority and the expected order of delivery. They should be considered
+approximate estimations and are **not** a strict commitment to a specific
+delivery timeline.
+
+[Crossplane Roadmap]
+
+## Get Involved
+
+[](https://slack.crossplane.io) [](https://bsky.app/profile/crossplane.io) [](https://twitter.com/intent/follow?screen_name=crossplane_io&user_id=788180534543339520) [](https://www.youtube.com/@Crossplane)
+
+Crossplane is a community driven project; we welcome your contribution. To file
+a bug, suggest an improvement, or request a new feature please open an [issue
+against Crossplane] or the relevant provider. Refer to our [contributing guide]
+for more information on how you can help.
+
+* Discuss Crossplane on [Slack].
+* Follow us on [Bluesky], [Twitter], or [LinkedIn].
+* Contact us via [Email].
+* Join our regular community meetings.
+* Provide feedback on our [roadmap and releases board].
+
+The Crossplane community meeting takes place every 4 weeks on [Thursday at
+10:00am Pacific Time][community meeting time]. You can find the up to date
+meeting schedule on the [Community Calendar][community calendar].
+
+Anyone who wants to discuss the direction of the project, design and
+implementation reviews, or raise general questions with the broader community is
+encouraged to join.
+
+* Meeting link:
+* [Current agenda and past meeting notes]
+* [Past meeting recordings]
+* [Community Calendar][community calendar]
+
+### Special Interest Groups (SIG)
+
+The Crossplane project supports SIGs as discussion groups that bring together
+community members with shared interests. SIGs have no decision making authority
+or ownership responsibilities. They serve purely as collaborative forums for
+community discussion.
+
+If you're interested in any of the areas below, consider joining the discussion
+in their Slack channels. To propose a new SIG that isn't represented, reach out
+through any of the contact methods in the [get involved] section.
+
+Each SIG collaborates primarily in Slack, and some groups hold regular meetings
+that you can find in the [Community Calendar][community calendar].
+
+- [#sig-cli][sig-cli]
+- [#sig-composition-environments][sig-composition-environments-slack]
+- [#sig-composition-functions][sig-composition-functions-slack]
+- [#sig-deletion-ordering][sig-deletion-ordering-slack]
+- [#sig-devex][sig-devex-slack]
+- [#sig-docs][sig-docs-slack]
+- [#sig-e2e-testing][sig-e2e-testing-slack]
+- [#sig-observability][sig-observability-slack]
+- [#sig-observe-only][sig-observe-only-slack]
+- [#sig-provider-families][sig-provider-families-slack]
+- [#sig-secret-stores][sig-secret-stores-slack]
+- [#sig-upjet][sig-upjet-slack]
+
+## Adopters
+
+A list of publicly known users of the Crossplane project can be found in [ADOPTERS.md]. We
+encourage all users of Crossplane to add themselves to this list - we want to see the community's
+growing success!
+
+## License
+
+Crossplane is under the Apache 2.0 license.
+
+[](https://app.fossa.io/projects/git%2Bgithub.com%2Fcrossplane%2Fcrossplane?ref=badge_large)
+
+
+
+[Crossplane]: https://crossplane.io
+[release cycle documentation]: https://docs.crossplane.io/knowledge-base/guides/release-cycle
+[install]: https://crossplane.io/docs/latest
+[Slack]: https://slack.crossplane.io
+[Bluesky]: https://bsky.app/profile/crossplane.io
+[Twitter]: https://twitter.com/crossplane_io
+[LinkedIn]: https://www.linkedin.com/company/crossplane/
+[Email]: mailto:crossplane-info@lists.cncf.io
+[issue against Crossplane]: https://github.com/crossplane/crossplane/issues
+[contributing guide]: contributing/README.md
+[community meeting time]: https://www.thetimezoneconverter.com/?t=10:00&tz=PT%20%28Pacific%20Time%29
+[Current agenda and past meeting notes]: https://docs.google.com/document/d/1q_sp2jLQsDEOX7Yug6TPOv7Fwrys6EwcF5Itxjkno7Y/edit?usp=sharing
+[Past meeting recordings]: https://www.youtube.com/playlist?list=PL510POnNVaaYYYDSICFSNWFqNbx1EMr-M
+[roadmap and releases board]: https://github.com/orgs/crossplane/projects/20/views/9?pane=info
+[cncf]: https://www.cncf.io/
+[Get Started Docs]: https://docs.crossplane.io/latest/get-started/get-started-with-composition
+[community calendar]: https://zoom-lfx.platform.linuxfoundation.org/meetings/crossplane?view=month
+[releases]: https://github.com/crossplane/crossplane/releases
+[`crossplane/release`]: https://github.com/crossplane/release
+[ADOPTERS.md]: ADOPTERS.md
+[regular community meetings]: https://github.com/crossplane/crossplane/blob/main/README.md#get-involved
+[Crossplane Roadmap]: https://github.com/orgs/crossplane/projects/20/views/9?pane=info
+[get involved]: https://github.com/crossplane/crossplane/blob/main/README.md#get-involved
+[sig-cli]: https://crossplane.slack.com/archives/C08V9PMLRQA
+[sig-composition-environments-slack]: https://crossplane.slack.com/archives/C05BP6QFLUW
+[sig-composition-functions-slack]: https://crossplane.slack.com/archives/C031Y29CSAE
+[sig-deletion-ordering-slack]: https://crossplane.slack.com/archives/C05BP8W5ALW
+[sig-devex-slack]: https://crossplane.slack.com/archives/C05U1LLM3B2
+[sig-docs-slack]: https://crossplane.slack.com/archives/C02CAQ52DPU
+[sig-e2e-testing-slack]: https://crossplane.slack.com/archives/C05C8CCTVNV
+[sig-observability-slack]: https://crossplane.slack.com/archives/C061GNH3LA0
+[sig-observe-only-slack]: https://crossplane.slack.com/archives/C04D5988QEA
+[sig-provider-families-slack]: https://crossplane.slack.com/archives/C056YAQRV16
+[sig-secret-stores-slack]: https://crossplane.slack.com/archives/C05BY7DKFV2
+[sig-upjet-slack]: https://crossplane.slack.com/archives/C05T19TB729
diff --git a/data/readmes/cubefs-v352.md b/data/readmes/cubefs-v352.md
new file mode 100644
index 0000000..d691d52
--- /dev/null
+++ b/data/readmes/cubefs-v352.md
@@ -0,0 +1,106 @@
+# CubeFS - README (v3.5.2)
+
+**Repository**: https://github.com/cubefs/cubefs
+**Version**: v3.5.2
+
+---
+
+# CubeFS
+
+[](https://www.cncf.io/projects)
+[](https://github.com/cubefs/cubefs/actions/workflows/ci.yml)
+[](https://github.com/cubefs/cubefs/blob/master/LICENSE)
+[](https://golang.org/)
+[](https://goreportcard.com/report/github.com/cubefs/cubefs)
+[](https://cubefs.io/docs/master/overview/introduction.html)
+[](https://www.bestpractices.dev/projects/6232)
+[](https://securityscorecards.dev/viewer/?uri=github.com/cubefs/cubefs)
+[](https://codecov.io/gh/cubefs/cubefs)
+[](https://artifacthub.io/packages/helm/cubefs/cubefs)
+[](https://clomonitor.io/projects/cncf/chubao-fs)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fcubefs%2Fcubefs?ref=badge_shield)
+[](https://github.com/cubefs/cubefs/releases)
+[](https://github.com/cubefs/cubefs/tags)
+[](https://gurubase.io/g/cubefs)
+
+| Community Meeting|
+|------------------|
+| The CubeFS Project holds bi-weekly community online meeting. To join or watch previous meeting notes and recordings, please see [meeting schedule](https://github.com/cubefs/community/wiki/Meeting-Schedule) and [meeting minutes](https://github.com/cubefs/community/wiki/Meeting-Agenda-and-Notes). |
+
+
+
+
+
+## Overview
+
+CubeFS ("储宝" in Chinese) is an open-source cloud-native distributed file & object storage system, hosted by the [Cloud Native Computing Foundation](https://cncf.io) (CNCF) as a [graduated](https://www.cncf.io/projects/) project.
+
+## What can you build with CubeFS
+
+* As an open-source distributed storage system, CubeFS can serve as your datacenter filesystem, data lake storage infrastructure, and private or hybrid cloud storage.
+* Moreover, it can run in public cloud services, providing cache acceleration and file system semantics on top of public cloud storage such as S3.
+* In particular, CubeFS enables a separated storage/compute architecture for databases, search systems, and AI/ML applications.
+
+Some key features of CubeFS include:
+
+- Multiple access protocols such as POSIX, HDFS, S3, and its own REST API
+- Highly scalable metadata service with strong consistency
+- Performance optimization of large/small files and sequential/random writes
+- Multi-tenancy support with better resource utilization and tenant isolation
+- Hybrid cloud I/O acceleration through multi-level caching
+- Flexible storage policies, high-performance replication or low-cost erasure coding
+
+
+
+
+## Documents
+
+- English version: https://cubefs.io/docs/master/overview/introduction.html
+- Chinese version: https://cubefs.io/zh/docs/master/overview/introduction.html
+
+## Community
+
+- Homepage: [cubefs.io](https://cubefs.io/)
+- Mailing list: users@cubefs.groups.io.
+ - Please subscribe on the page https://groups.io/g/cubefs-users/ or send your email to cubefs-users+subscribe@groups.io to apply.
+- Slack: [cubefs.slack.com](https://cubefs.slack.com/)
+- WeChat: detail see [here](https://github.com/cubefs/cubefs/issues/604)
+- Twitter: [cubefs_storage](https://twitter.com/cubefs_storage)
+
+## Governance
+
+[Governance documentation](https://github.com/cubefs/cubefs/blob/master/GOVERNANCE.md) plays a crucial role in establishing clear guidelines, procedures, and structures within an organization or project.
+
+## Contribute
+[Contributing to CubeFS](https://github.com/cubefs/cubefs/blob/master/CONTRIBUTING.md)
+
+There is a clear definition of roles and their promotion paths.
+- [Becoming a Maintainer](https://github.com/cubefs/cubefs/blob/master/GOVERNANCE.md#becoming-a-maintainer)
+- [Becoming a committer](https://github.com/cubefs/cubefs/blob/master/GOVERNANCE.md#becoming-a-committer)
+- [Becoming a TSC Member](https://github.com/cubefs/cubefs/blob/master/GOVERNANCE.md#becoming-a-tsc-member)
+
+
+## Partners and Users
+
+A list of users and success stories can be found in [ADOPTERS.md](ADOPTERS.md).
+
+## Reference
+
+Haifeng Liu, et al., CFS: A Distributed File System for Large Scale Container Platforms. SIGMOD '19, June 30-July 5, 2019, Amsterdam, Netherlands.
+
+For more information, please refer to https://dl.acm.org/citation.cfm?doid=3299869.3314046 and https://arxiv.org/abs/1911.03001
+
+
+## License
+
+CubeFS is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
+For detail see [LICENSE](LICENSE) and [NOTICE](NOTICE).
+
+## Note
+
+The master branch may be in an unstable or even broken state during development. Please use releases instead of the master branch in order to get a stable set of binaries.
+
+## Star History
+
+[](https://star-history.com/#cubefs/cubefs&Date)
diff --git a/data/readmes/curl-curl-8_17_0.md b/data/readmes/curl-curl-8_17_0.md
new file mode 100644
index 0000000..5aeba65
--- /dev/null
+++ b/data/readmes/curl-curl-8_17_0.md
@@ -0,0 +1,73 @@
+# curl - README (curl-8_17_0)
+
+**Repository**: https://github.com/curl/curl
+**Version**: curl-8_17_0
+
+---
+
+
+
+# [](https://curl.se/)
+
+curl is a command-line tool for transferring data from or to a server using
+URLs. It supports these protocols: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS,
+HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP,
+SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS.
+
+Learn how to use curl by reading [the
+man page](https://curl.se/docs/manpage.html) or [everything
+curl](https://everything.curl.dev/).
+
+Find out how to install curl by reading [the INSTALL
+document](https://curl.se/docs/install.html).
+
+libcurl is the library curl is using to do its job. It is readily available to
+be used by your software. Read [the libcurl
+man page](https://curl.se/libcurl/c/libcurl.html) to learn how.
+
+## Open Source
+
+curl is Open Source and is distributed under an MIT-like
+[license](https://curl.se/docs/copyright.html).
+
+## Contact
+
+Contact us on a suitable [mailing list](https://curl.se/mail/) or
+use GitHub [issues](https://github.com/curl/curl/issues)/
+[pull requests](https://github.com/curl/curl/pulls)/
+[discussions](https://github.com/curl/curl/discussions).
+
+All contributors to the project are listed in [the THANKS
+document](https://curl.se/docs/thanks.html).
+
+## Commercial support
+
+For commercial support, such as private and dedicated help with your problems or
+applications using (lib)curl, visit [the support page](https://curl.se/support.html).
+
+## Website
+
+Visit the [curl website](https://curl.se/) for the latest news and downloads.
+
+## Source code
+
+Download the latest source from the Git server:
+
+ git clone https://github.com/curl/curl
+
+## Security problems
+
+Report suspected security problems via [our HackerOne
+page](https://hackerone.com/curl) and not in public.
+
+## Backers
+
+Thank you to all our backers :pray: [Become a backer](https://opencollective.com/curl#section-contribute).
+
+## Sponsors
+
+Support this project by becoming a [sponsor](https://curl.se/sponsors.html).
diff --git a/data/readmes/curve-v127-rc6.md b/data/readmes/curve-v127-rc6.md
new file mode 100644
index 0000000..67f2209
--- /dev/null
+++ b/data/readmes/curve-v127-rc6.md
@@ -0,0 +1,98 @@
+# Curve - README (v1.2.7-rc6)
+
+**Repository**: https://github.com/opencurve/curve
+**Version**: v1.2.7-rc6
+
+---
+
+[English version](README_en.md)
+
+
+
+
+# CURVE
+
+[](http://59.111.93.165:8080/job/curve_untest_job/HTML_20Report/)
+[](http://59.111.93.165:8080/job/curve_failover_testjob/)
+[](http://59.111.93.165:8080/job/curve_robot_job/)
+[](http://59.111.93.165:8080/job/opencurve_multijob/lastBuild)
+[](https://github.com/opencurve/curve/tree/master/docs)
+[](https://github.com/opencurve/curve/releases)
+[](https://github.com/opencurve/curve/blob/master/LICENSE)
+
+
+CURVE is a high-performance, highly available, and highly reliable distributed storage system designed and developed by NetEase, with excellent scalability. On top of this storage foundation, storage systems for different application scenarios can be built, such as block storage, object storage, and cloud-native databases. Based on CURVE we have already implemented a high-performance block storage system that supports snapshot, clone, and recovery, can be mounted either from QEMU virtual machines or as NBD devices on physical machines, and is used inside NetEase as a high-performance cloud disk.
+
+## Design Documentation
+
+- See the [CURVE overview](https://opencurve.github.io/) to learn about the CURVE architecture
+- CURVE documentation
+  - [NEBD](docs/cn/nebd.md)
+  - [MDS](docs/cn/mds.md)
+  - [Chunkserver](docs/cn/chunkserver_design.md)
+  - [Snapshotcloneserver](docs/cn/snapshotcloneserver.md)
+  - [Introduction to the CURVE quality system](docs/cn/quality.md)
+  - [Introduction to the CURVE monitoring system](docs/cn/monitor.md)
+  - [Client](docs/cn/curve-client.md)
+  - [Client Python API](docs/cn/curve-client-python-api.md)
+- Applications on top of CURVE
+  - [Kubernetes (CSI) integration guide](docs/cn/k8s_csi_interface.md)
+
+## Quick Start
+
+Before you start deploying, please carefully read the special notes first: [Special notes](docs/cn/deploy.md#%E7%89%B9%E5%88%AB%E8%AF%B4%E6%98%8E)
+
+### Deploy an all-in-one sandbox environment
+
+[Single-machine deployment](docs/cn/deploy.md#%E5%8D%95%E6%9C%BA%E9%83%A8%E7%BD%B2)
+
+### Deploy a multi-machine cluster
+
+[Multi-machine deployment](docs/cn/deploy.md#%E5%A4%9A%E6%9C%BA%E9%83%A8%E7%BD%B2)
+
+### Ops tool guide
+
+[Ops tool guide](docs/cn/curve_ops_tool.md)
+
+## Contributing
+
+### Set up a build and development environment
+
+[Setting up the build and development environment](docs/cn/build_and_run.md)
+
+### Building and running the tests
+
+[Building and running the tests](docs/cn/build_and_run.md#%E6%B5%8B%E8%AF%95%E7%94%A8%E4%BE%8B%E7%BC%96%E8%AF%91%E5%8F%8A%E6%89%A7%E8%A1%8C)
+
+### Coding style
+
+CURVE code strictly follows the [Google C++ Style Guide](https://zh-google-styleguide.readthedocs.io/en/latest/google-cpp-styleguide/contents/); please follow the same guide when submitting your code.
+
+### Test coverage requirements
+
+1. Unit tests: incremental line coverage above 80% and incremental branch coverage above 70%
+2. Integration tests: counted together with unit tests; meeting the coverage requirements above is sufficient
+3. Exception tests: no requirements for now
+
+### Other development workflow notes
+
+Once development is complete, open a [PR](https://github.com/opencurve/curve/compare) against curve's master branch, filling in the PR template. Submitting a PR automatically triggers CI; the code can be merged only after CI passes and the PR has been reviewed.
+See [CONTRIBUTING](https://github.com/opencurve/curve/blob/master/CONTRIBUTING.md) for the detailed rules.
+
+## Release Cycle
+- CURVE release cycle: a major release every six months and a minor release every 1–2 months
+- Version numbering: three-part version numbers of the form x.y.z{-suffix}, where x is the major version, y is the minor version, and z is the bugfix number. The suffix distinguishes beta releases (-beta), release candidates (-rc), and stable releases (no suffix). The semi-annual major release increments x by 1; the 1–2 monthly minor release increments y by 1. After an official release, bugfix releases increment z by 1.
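The version-numbering rule above can be sketched in Python (illustrative only; `parse_version` is not a Curve tool):

```python
import re

# x.y.z with an optional -beta / -rc suffix, e.g. "v1.2.7-rc6"
VERSION_RE = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)(?:-(beta|rc)(\d*))?$")

def parse_version(tag: str):
    """Split a CURVE-style version tag into (major, minor, bugfix, suffix)."""
    m = VERSION_RE.match(tag)
    if m is None:
        raise ValueError(f"not a valid version tag: {tag}")
    major, minor, bugfix = (int(m.group(i)) for i in (1, 2, 3))
    suffix = (m.group(4) + (m.group(5) or "")) if m.group(4) else "stable"
    return major, minor, bugfix, suffix

print(parse_version("v1.2.7-rc6"))  # (1, 2, 7, 'rc6')
print(parse_version("1.2.7"))       # (1, 2, 7, 'stable')
```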
+
+## Branching Rules
+All development happens on the master branch. When a release is needed, a new **release-x.y** branch is cut from master, and releases are published from that release-x.y branch.
+
+## Feedback and Communication
+
+- [GitHub Issues](https://github.com/openCURVE/CURVE/issues): bug reports and suggestions are welcome; if you run into problems, check the FAQ or join our user group for help
+- [FAQ](https://github.com/openCURVE/CURVE/wiki/CURVE-FAQ): compiled mainly from common questions in the user group, and still being improved
+- User group: currently a WeChat group; since the group is large, please first add the personal WeChat account below and you will then be invited into the group.
+
+
+
+
+
+
+
+
diff --git a/data/readmes/dapr-v1164.md b/data/readmes/dapr-v1164.md
new file mode 100644
index 0000000..35201b5
--- /dev/null
+++ b/data/readmes/dapr-v1164.md
@@ -0,0 +1,167 @@
+# Dapr - README (v1.16.4)
+
+**Repository**: https://github.com/dapr/dapr
+**Version**: v1.16.4
+
+---
+
+
+
+APIs for Building Secure and Reliable Microservices
+
+
+[![Go Report][go-report-badge]][go-report-url] [![OpenSSF][openssf-badge]][openssf-url] [![Docker Pulls][docker-badge]][docker-url] [![Build Status][actions-badge]][actions-url] [![Test Status][e2e-badge]][e2e-url] [![Code Coverage][codecov-badge]][codecov-url] [![License: Apache 2.0][apache-badge]][apache-url] [![FOSSA Status][fossa-badge]][fossa-url] [![TODOs][todo-badge]][todo-url] [![Good First Issues][gfi-badge]][gfi-url] [![discord][discord-badge]][discord-url] [![YouTube][youtube-badge]][youtube-link] [![Bluesky][bluesky-badge]][bluesky-link] [![X/Twitter][x-badge]][x-link]
+
+[go-report-badge]: https://goreportcard.com/badge/github.com/dapr/dapr
+[go-report-url]: https://goreportcard.com/report/github.com/dapr/dapr
+[openssf-badge]: https://www.bestpractices.dev/projects/5044/badge
+[openssf-url]: https://www.bestpractices.dev/projects/5044
+[docker-badge]: https://img.shields.io/docker/pulls/daprio/daprd?style=flat&logo=docker
+[docker-url]: https://hub.docker.com/r/daprio/dapr
+[apache-badge]: https://img.shields.io/github/license/dapr/dapr?style=flat&label=License&logo=github
+[apache-url]: https://github.com/dapr/dapr/blob/master/LICENSE
+[actions-badge]: https://github.com/dapr/dapr/workflows/dapr/badge.svg?event=push&branch=master
+[actions-url]: https://github.com/dapr/dapr/actions?workflow=dapr
+[e2e-badge]: https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/dapr-bot/14e974e8fd6c6eab03a2475beb1d547a/raw/dapr-test-badge.json
+[e2e-url]: https://github.com/dapr/dapr/actions?workflow=dapr-test&event=schedule
+[codecov-badge]: https://codecov.io/gh/dapr/dapr/branch/master/graph/badge.svg
+[codecov-url]: https://codecov.io/gh/dapr/dapr
+[fossa-badge]: https://app.fossa.com/api/projects/custom%2B162%2Fgithub.com%2Fdapr%2Fdapr.svg?type=shield
+[fossa-url]: https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Fdapr%2Fdapr?ref=badge_shield
+[todo-badge]: https://badgen.net/https/api.tickgit.com/badgen/github.com/dapr/dapr
+[todo-url]: https://www.tickgit.com/browse?repo=github.com/dapr/dapr
+[gfi-badge]:https://img.shields.io/github/issues-search/dapr/dapr?query=type%3Aissue%20is%3Aopen%20label%3A%22good%20first%20issue%22&label=Good%20first%20issues&style=flat&logo=github
+[gfi-url]:https://github.com/dapr/dapr/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22
+[discord-badge]: https://img.shields.io/discord/778680217417809931?label=Discord&style=flat&logo=discord
+[discord-url]: http://bit.ly/dapr-discord
+[youtube-badge]:https://img.shields.io/youtube/channel/views/UCtpSQ9BLB_3EXdWAUQYwnRA?style=flat&label=YouTube%20views&logo=youtube
+[youtube-link]:https://youtube.com/@daprdev
+[bluesky-badge]:https://img.shields.io/badge/Follow-%40daprdev.bsky.social-0056A1?logo=bluesky
+[bluesky-link]:https://bsky.app/profile/daprdev.bsky.social
+[x-badge]:https://img.shields.io/twitter/follow/daprdev?logo=x&style=flat
+[x-link]:https://twitter.com/daprdev
+
+Dapr is a set of integrated APIs with built-in best practices and patterns to build distributed applications. Dapr increases your developer productivity by 20-40% with out-of-the-box features such as workflow, pub/sub, state management, secret stores, external configuration, bindings, actors, distributed lock, and cryptography. You benefit from the built-in security, reliability, and observability capabilities, so you don't need to write boilerplate code to achieve production-ready applications.
+
+With Dapr, a graduated CNCF project, platform teams can configure complex setups while exposing simple interfaces to application development teams, making it easier for them to build highly scalable distributed applications. Many platform teams have adopted Dapr to provide governance and golden paths for API-based infrastructure interaction.
+
+
+
+We are a Cloud Native Computing Foundation (CNCF) graduated project.
+
+
+## Goals
+
+- Enable developers using *any* language or framework to write distributed applications
+- Solve the hard problems developers face building microservice applications by providing best practice building blocks
+- Be community driven, open and vendor neutral
+- Gain new contributors
+- Provide consistency and portability through open APIs
+- Be platform agnostic across cloud and edge
+- Embrace extensibility and provide pluggable components without vendor lock-in
+- Enable IoT and edge scenarios by being highly performant and lightweight
+- Be incrementally adoptable from existing code, with no runtime dependency
+
+## How it works
+
+Dapr injects a side-car (container or process) to each compute unit. The side-car interacts with event triggers and communicates with the compute unit via standard HTTP or gRPC protocols. This enables Dapr to support all existing and future programming languages without requiring you to import frameworks or libraries.
+
+Dapr offers built-in state management, reliable messaging (at least once delivery), triggers and bindings through standard HTTP verbs or gRPC interfaces. This allows you to write stateless, stateful and actor-like services following the same programming paradigm. You can freely choose consistency model, threading model and message delivery patterns.
+
+Dapr runs natively on Kubernetes, as a self hosted binary on your machine, on an IoT device, or as a container that can be injected into any system, in the cloud or on-premises.
+
+Dapr uses pluggable component state stores and message buses such as Redis as well as gRPC to offer a wide range of communication methods, including direct dapr-to-dapr using gRPC and async Pub-Sub with guaranteed delivery and at-least-once semantics.
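Because the sidecar speaks plain HTTP, an application can use it from any language with no SDK. A minimal Python sketch of the request shape for Dapr's v1.0 state API (`statestore` is a hypothetical component name, and 3500 is only the conventional default sidecar port):

```python
import json

def save_state_request(store: str, key: str, value, dapr_port: int = 3500):
    """Build the URL and JSON body for a Dapr sidecar state-save call.

    The sidecar listens on localhost; the actual HTTP port depends on
    how the sidecar was launched.
    """
    url = f"http://localhost:{dapr_port}/v1.0/state/{store}"
    body = json.dumps([{"key": key, "value": value}])
    return url, body

url, body = save_state_request("statestore", "order_1", {"qty": 2})
print(url)  # http://localhost:3500/v1.0/state/statestore
```

Sending `body` to `url` with an HTTP POST (e.g. via `requests` or `curl`) stores the value in whichever state component `statestore` is configured to use.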
+
+
+## Why Dapr?
+
+Writing highly performant, scalable and reliable distributed applications is hard. Dapr brings proven patterns and practices to you. It unifies event-driven and actors semantics into a simple, consistent programming model. It supports all programming languages without framework lock-in. You are not exposed to low-level primitives such as threading, concurrency control, partitioning and scaling. Instead, you can write your code by implementing a simple web server using familiar web frameworks of your choice.
+
+Dapr is flexible in threading and state consistency models. You can leverage multi-threading if you choose to, and you can choose among different consistency models. This flexibility enables you to implement advanced scenarios without artificial constraints. Dapr is unique because you can transition seamlessly between platforms and underlying implementations without rewriting your code.
+
+## Features
+
+* Event-driven Pub-Sub system with pluggable providers and at-least-once semantics
+* Input and output bindings with pluggable providers
+* State management with pluggable data stores
+* Consistent service-to-service discovery and invocation
+* Opt-in stateful models: Strong/Eventual consistency, First-write/Last-write wins
+* Cross platform virtual actors
+* Secret management to retrieve secrets from secure key vaults
+* Rate limiting
+* Built-in [Observability](https://docs.dapr.io/concepts/observability-concept/) support
+* Runs natively on Kubernetes using a dedicated Operator and CRDs
+* Supports all programming languages via HTTP and gRPC
+* Multi-Cloud, open components (bindings, pub-sub, state) from Azure, AWS, GCP
+* Runs anywhere, as a process or containerized
+* Lightweight (58MB binary, 4MB physical memory)
+* Runs as a sidecar - removes the need for special SDKs or libraries
+* Dedicated CLI - developer friendly experience with easy debugging
+* Clients for Java, .NET Core, Go, Javascript, Python, Rust and C++
+
+## Get Started using Dapr
+
+See our [Getting Started](https://docs.dapr.io/getting-started/) guide over in our docs.
+
+## Quickstarts and Samples
+
+* See the [quickstarts repository](https://github.com/dapr/quickstarts) for code examples that can help you get started with Dapr.
+* Explore additional samples in the Dapr [samples repository](https://github.com/dapr/samples).
+
+## Community
+We want your contributions and suggestions! One of the easiest ways to contribute is to participate in discussions on the mailing list, chat on IM or the bi-weekly community calls.
+For more information on the community engagement, developer and contributing guidelines and more, head over to the [Dapr community repo](https://github.com/dapr/community#dapr-community).
+
+### Contact Us
+
+Reach out with any questions you may have and we'll make sure to answer them as soon as possible!
+
+| Platform | Link |
+|:----------|:------------|
+| 💬 Discord (preferred) | [](https://aka.ms/dapr-discord)
+| 💭 LinkedIn | [@daprdev](https://www.linkedin.com/company/daprdev)
+| 🦋 BlueSky | [@daprdev.bsky.social](https://bsky.app/profile/daprdev.bsky.social)
+| 🐤 Twitter | [@daprdev](https://twitter.com/daprdev)
+
+
+### Community Call
+
+Every two weeks we host a community call to showcase new features, review upcoming milestones, and engage in a Q&A. All are welcome!
+
+📞 Visit [Upcoming Dapr Community Calls](https://github.com/dapr/community/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22community%20call%22) for upcoming dates and the meeting link.
+
+📺 Visit https://www.youtube.com/@DaprDev/streams for previous community call live streams.
+
+### Videos and Podcasts
+
+We have a variety of keynotes, podcasts, and presentations available to reference and learn from.
+
+📺 Visit https://docs.dapr.io/contributing/presentations/ for previous talks and slide decks or our YouTube channel https://www.youtube.com/@DaprDev/videos.
+
+### Contributing to Dapr
+
+See the [Development Guide](https://docs.dapr.io/contributing/) to get started with building and developing.
+
+## Repositories
+
+| Repo | Description |
+|:-----|:------------|
+| [Dapr](https://github.com/dapr/dapr) | The main repository that you are currently in. Contains the Dapr runtime code and overview documentation.
+| [CLI](https://github.com/dapr/cli) | The Dapr CLI allows you to setup Dapr on your local dev machine or on a Kubernetes cluster, provides debugging support, launches and manages Dapr instances.
+| [Docs](https://docs.dapr.io) | The documentation for Dapr.
+| [Quickstarts](https://github.com/dapr/quickstarts) | This repository contains a series of simple code samples that highlight the main Dapr capabilities.
+| [Samples](https://github.com/dapr/samples) | This repository holds community maintained samples for various Dapr use cases.
+| [Components-contrib ](https://github.com/dapr/components-contrib) | The purpose of components contrib is to provide open, community driven reusable components for building distributed applications.
+| [Dashboard ](https://github.com/dapr/dashboard) | General purpose dashboard for Dapr
+| [Go-sdk](https://github.com/dapr/go-sdk) | Dapr SDK for Go
+| [Java-sdk](https://github.com/dapr/java-sdk) | Dapr SDK for Java
+| [JS-sdk](https://github.com/dapr/js-sdk) | Dapr SDK for JavaScript
+| [Python-sdk](https://github.com/dapr/python-sdk) | Dapr SDK for Python
+| [Dotnet-sdk](https://github.com/dapr/dotnet-sdk) | Dapr SDK for .NET
+| [Rust-sdk](https://github.com/dapr/rust-sdk) | Dapr SDK for Rust
+| [Cpp-sdk](https://github.com/dapr/cpp-sdk) | Dapr SDK for C++
+| [PHP-sdk](https://github.com/dapr/php-sdk) | Dapr SDK for PHP
+
+
+## Code of Conduct
+
+Please refer to our [Dapr Community Code of Conduct](https://github.com/dapr/community/blob/master/CODE-OF-CONDUCT.md)
diff --git a/data/readmes/dasel-v281.md b/data/readmes/dasel-v281.md
new file mode 100644
index 0000000..5df7dd4
--- /dev/null
+++ b/data/readmes/dasel-v281.md
@@ -0,0 +1,168 @@
+# Dasel - README (v2.8.1)
+
+**Repository**: https://github.com/TomWright/dasel
+**Version**: v2.8.1
+
+---
+
+[](https://daseldocs.tomwright.me)
+[](https://goreportcard.com/report/github.com/tomwright/dasel/v3)
+[](https://pkg.go.dev/github.com/tomwright/dasel/v3)
+
+
+[](https://codecov.io/gh/TomWright/dasel)
+[](https://github.com/avelino/awesome-go)
+
+
+[](https://github.com/TomWright/dasel/releases/latest)
+[](https://formulae.brew.sh/formula/dasel)
+
+
+
+
+
+# Dasel
+
+Dasel (short for **Data-Select**) is a command-line tool and library for querying, modifying, and transforming data structures such as JSON, YAML, TOML, XML, and CSV.
+
+It provides a consistent, powerful syntax to traverse and update data — making it useful for developers, DevOps, and data wrangling tasks.
+
+---
+
+## Features
+
+* **Multi-format support**: JSON, YAML, TOML, XML, CSV, HCL (with more planned).
+* **Unified query syntax**: Access data in any format with the same selectors.
+* **Query & search**: Extract values, lists, or structures with intuitive syntax.
+* **Modify in place**: Update, insert, or delete values directly in structured files.
+* **Convert between formats**: Seamlessly transform data from JSON → YAML, TOML → JSON, etc.
+* **Script-friendly**: Simple CLI integration for shell scripts and pipelines.
+* **Library support**: Import and use in Go projects.
+
+---
+
+## Installation
+
+### Homebrew (macOS/Linux)
+
+```sh
+brew install dasel
+```
+
+### Go Install
+
+```sh
+go install github.com/tomwright/dasel/v3/cmd/dasel@master
+```
+
+### Prebuilt Binaries
+
+Prebuilt binaries are available on the [Releases](https://github.com/TomWright/dasel/releases) page for Linux, macOS, and Windows.
+
+### None of the above?
+
+See the [installation docs](https://daseldocs.tomwright.me/getting-started/installation) for more options.
+
+---
+
+## Basic Usage
+
+### Selecting Values
+
+By default, Dasel evaluates the final selector and prints the result.
+
+```sh
+echo '{"foo": {"bar": "baz"}}' | dasel -i json 'foo.bar'
+# Output: "baz"
+```
+
+### Modifying Values
+
+Update values inline:
+
+```sh
+echo '{"foo": {"bar": "baz"}}' | dasel -i json 'foo.bar = "bong"'
+# Output: "bong"
+```
+
+Use `--root` to output the full document after modification:
+
+```sh
+echo '{"foo": {"bar": "baz"}}' | dasel -i json --root 'foo.bar = "bong"'
+# Output:
+{
+ "foo": {
+ "bar": "bong"
+ }
+}
+```
+
+Update values based on previous value:
+
+```sh
+echo '[1,2,3,4,5]' | dasel -i json --root 'each($this = $this*2)'
+# Output:
+[
+ 2,
+ 4,
+ 6,
+ 8,
+ 10
+]
+```
+
+### Format Conversion
+
+```sh
+cat data.json | dasel -i json -o yaml
+```
+
+### Recursive Descent (`..`)
+
+Searches all nested objects and arrays for a matching key or index.
+
+```sh
+echo '{"foo": {"bar": "baz"}}' | dasel -i json '..bar'
+# Output:
+[
+ "baz"
+]
+```
+
+### Search (`search`)
+
+Finds all values matching a condition anywhere in the structure.
+
+```sh
+echo '{"foo": {"bar": "baz"}}' | dasel -i json 'search(bar == "baz")'
+# Output:
+[
+ {
+ "bar": "baz"
+ }
+]
+```
+
+---
+
+## Documentation
+
+Full documentation is available at [daseldocs.tomwright.me](https://daseldocs.tomwright.me).
+
+---
+
+## Contributing
+
+Contributions are welcome! Please see the [CONTRIBUTING.md](./CONTRIBUTING.md) for details.
+
+---
+
+## License
+
+MIT License. See [LICENSE](./LICENSE) for details.
+
+## Stargazers over time
+
+[](https://starchart.cc/TomWright/dasel)
diff --git a/data/readmes/devfile-v230.md b/data/readmes/devfile-v230.md
new file mode 100644
index 0000000..9806eed
--- /dev/null
+++ b/data/readmes/devfile-v230.md
@@ -0,0 +1,77 @@
+# Devfile - README (v2.3.0)
+
+**Repository**: https://github.com/devfile/api
+**Version**: v2.3.0
+
+---
+
+# Kube-native API for cloud development workspaces specification
+
+
+
+[](LICENSE)
+[](https://workspaces.openshift.com/f?url=https://github.com/devfile/api)
+[](https://www.bestpractices.dev/projects/8179)
+[](https://securityscorecards.dev/viewer/?uri=github.com/devfile/api)
+
+
+Sources for this API are defined in Go code, starting from the
+[devworkspace_types.go source file](pkg/apis/workspaces/v1alpha2/devworkspace_types.go).
+
+From these Go sources, several files are generated:
+
+- A Kubernetes Custom Resource Definition (CRD) with an embedded OpenAPI schema,
+- JSON schemas (in the [schemas](schemas) folder) generated from the above CRD, to specify the syntax of:
+  - the DevWorkspace CRD itself;
+  - the DevWorkspaceTemplate CRD (the content of a devworkspace, without runtime information);
+  - the Devfile 2.0.0 format, which is generated from the `DevWorkspace` API.
+
+Generated files are created by a build script (see section [How to build](#how-to-build)).
+
+## Devfile 2.0.0 file format
+
+A subset of this `DevWorkspace` API defines the workspace template content structure, which is also at the core of the **Devfile 2.0** format specification.
+For more information, see the [Devfile support README](https://github.com/devfile/registry-support/blob/main/README.md).
+
+You can read the generated [documentation of the Devfile 2.0 format](https://devfile.io/docs/2.3.0/devfile-schema), which is based on its JSON schema.
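+
+As a concrete illustration, a minimal devfile conforming to this schema might look like the following sketch (the component name, container image, and command are invented for illustration):
+
+```yaml
+schemaVersion: 2.3.0
+metadata:
+  name: sample-nodejs-app            # hypothetical project name
+components:
+  - name: runtime
+    container:
+      image: registry.access.redhat.com/ubi8/nodejs-16   # any runtime image works here
+commands:
+  - id: run
+    exec:
+      component: runtime
+      commandLine: npm start
+```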
+
+The TypeScript model is built on each commit to the main branch and is available as an [NPM package](https://www.npmjs.com/package/@devfile/api).
+
+## Release
+
+Release details and the release process can be found in [Devfile Release](RELEASE.md).
+
+## How to build
+
+For information about building this project, visit [CONTRIBUTING.md](./CONTRIBUTING.md#building).
+
+## Specification status
+
+This work is still in an early stage of specification; the related API and schemas are a draft proposal.
+
+## Quickly open and test
+
+To test existing or new Devfile 2.0 or DevWorkspace sample files in a self-service Che workspace (hosted on che.openshift.io), just click the button below:
+
+[](https://workspaces.openshift.com/f?url=https://github.com/devfile/api)
+
+As soon as the devworkspace is opened, you should be able to:
+
+- open the `yaml` files in the following folders:
+ - `samples/`
+ - `devfile-support/samples`
+- have `yaml` language support (completion and documentation) based on the current JSON schemas.
+
+## Contributing
+
+Please see our [contributing.md](./CONTRIBUTING.md).
+
+## License
+
+Apache License 2.0, see [LICENSE](./LICENSE) for details.
+
+### Adding License Headers
+
+[`license_header`](./license_header.txt) contains the license header that must be included in all source files. For Go sources, it can be added by running `bash add_licenses.sh`.
+
+Ensure `github.com/google/addlicense` is installed by running `go install github.com/google/addlicense@latest`.
diff --git a/data/readmes/devspace-v6318.md b/data/readmes/devspace-v6318.md
new file mode 100644
index 0000000..8d63efc
--- /dev/null
+++ b/data/readmes/devspace-v6318.md
@@ -0,0 +1,222 @@
+# DevSpace - README (v6.3.18)
+
+**Repository**: https://github.com/devspace-sh/devspace
+**Version**: v6.3.18
+
+---
+
+
+
+### **[Website](https://devspace.sh)** • **[Quickstart](#quickstart)** • **[Documentation](https://devspace.sh/cli/docs/introduction)** • **[Blog](https://loft.sh/blog)** • **[Twitter](https://twitter.com/devspace)**
+
+
+
+
+
+
+
+
+[](https://slack.loft.sh/)
+
+### Client-Only Developer Tool for Cloud-Native Development with Kubernetes
+- **Build, test and debug applications directly inside Kubernetes**
+- **Develop with hot reloading**: updates your running containers without rebuilding images or restarting containers
+- **Unify deployment workflows** within your team and across dev, staging and production
+- **Automate repetitive tasks** for image building and deployment
+
+
+
+
+
+
+
+
+⭐️ Do you like DevSpace? Support the project with a star ⭐️
+
+
+
+
+DevSpace was created by [Loft Labs](https://loft.sh) and is a [Cloud Native Computing Foundation (CNCF) sandbox project](https://www.cncf.io/sandbox-projects/).
+
+
+
+## Contents
+- [Why DevSpace?](#why-devspace)
+- [Quickstart Guide](#quickstart)
+- [Architecture & Workflow](#architecture--workflow)
+- [Contributing](#contributing)
+- [FAQ](#faq)
+
+
+
+## Why DevSpace?
+Building modern, distributed and highly scalable microservices with Kubernetes is hard - and it is even harder for large teams of developers. DevSpace is the next-generation tool for fast cloud-native software development.
+
+
+Standardize & Version Your Workflows
+
+
+DevSpace allows you to store all your workflows in one declarative config file: `devspace.yaml`
+- **Codify workflow knowledge** about building images, deploying your project and its dependencies etc.
+- **Version your workflows together with your code** (i.e. you can get any old version up and running with just a single command)
+- **Share your workflows** with your team mates
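+
+For illustration, a minimal `devspace.yaml` might look like the following sketch (the image name, chart path, and sync paths are invented; see the configuration docs for the authoritative schema):
+
+```yaml
+version: v2beta1
+images:
+  app:
+    image: registry.example.com/my-app        # hypothetical image name
+deployments:
+  app:
+    helm:
+      chart:
+        name: ./chart                         # hypothetical local chart path
+dev:
+  app:
+    imageSelector: registry.example.com/my-app
+    sync:
+      - path: ./:/app                         # sync local files into the container
+```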
+
+
+
+
+
+Let Everyone on Your Team Deploy to Kubernetes
+
+
+DevSpace helps your team to standardize deployment and development workflows without requiring everyone on your team to become a Kubernetes expert.
+- The DevOps or Kubernetes expert on your team can configure DevSpace in `devspace.yaml` and simply commit it via git
+- Other developers who check out the project only need to run `devspace deploy` to deploy it (including image building and deployment of related projects) and get a running instance of the project
+- The configuration of DevSpace is highly dynamic: you can configure everything using [config variables](https://devspace.sh/cli/docs/configuration/variables/basics), which make it easy to keep one base configuration while still allowing differences among developers (e.g. different sub-domains for testing)
+
+> Giving everyone on your team on-demand access to a Kubernetes cluster is a challenging problem for system administrators and infrastructure managers. If you want to efficiently share dev clusters for your engineering team, take a look at [www.loft.sh](https://loft.sh/).
+
+
+
+
+
+Speed Up Cloud-Native Development
+
+
+Instead of rebuilding images and redeploying containers, DevSpace allows you to **hot reload running containers while you are coding**:
+- Simply edit your files with your IDE and see how your application reloads within the running container.
+- The **high-performance, bi-directional file synchronization** detects code changes and immediately syncs files between your local dev environment and the containers running in Kubernetes.
+- Stream logs, connect debuggers or open a container terminal directly from your IDE with just a single command.
+
+
+
+
+
+Automate Repetitive Tasks
+
+
+Deploying and debugging services with Kubernetes requires a lot of knowledge and forces you to repeatedly run commands like `kubectl get pod` and copy pod ids back and forth. Stop wasting time and let DevSpace automate the tedious parts of working with Kubernetes:
+- DevSpace lets you build multiple images in parallel, tag them automatically, and deploy your entire application (including its dependencies) with just a single command.
+- Let DevSpace automatically start port-forwarding and log streaming, so you don't have to constantly copy and paste pod ids or run 10 commands to get everything started.
+
+
+
+
+
+Works with Any Kubernetes Cluster
+
+
+DevSpace is battle tested with many Kubernetes distributions including:
+- **local Kubernetes clusters** like minikube, k3s, MicroK8s, kind
+- **managed Kubernetes clusters** in GKE (Google Cloud), EKS (Amazon Web Services), AKS (Microsoft Azure), DigitalOcean
+- **self-managed Kubernetes clusters** created with Rancher
+
+> DevSpace also lets you switch seamlessly between clusters and namespaces. You can work with a local cluster as long as that is sufficient. If things get more advanced, you need cloud power like GPUs, or you simply want to share a complex system such as Kafka with your team, just tell DevSpace to use a remote cluster by switching your kube-context and continue working.
+
+
+
+
+
+
+## Quickstart
+
+Please take a look at our [getting started guide](https://devspace.sh/docs/getting-started/installation).
+
+
+
+## Architecture & Workflow
+
+
+DevSpace runs as a single-binary CLI tool directly on your computer and, ideally, you use it straight from the terminal within your IDE. DevSpace does not require a server-side component, as it communicates directly with your Kubernetes cluster using your kube-context, just like kubectl.
+
+
+
+## Contributing
+
+Participation in the DevSpace project is governed by the [CNCF code of conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
+
+Help us make DevSpace the best tool for developing, deploying and debugging Kubernetes apps.
+
+[](https://slack.loft.sh/)
+
+### Reporting Issues
+
+If you find a bug while working with DevSpace, please [open an issue on GitHub](https://github.com/loft-sh/devspace/issues/new?labels=kind%2Fbug&template=bug-report.md&title=Bug:) and let us know what went wrong. We will try to fix it as quickly as we can.
+
+### Feedback & Feature Requests
+
+You are more than welcome to open issues in this project to:
+
+- [Give feedback](https://github.com/loft-sh/devspace/issues/new?labels=kind%2Ffeedback&title=Feedback:)
+- [Suggest new features](https://github.com/loft-sh/devspace/issues/new?labels=kind%2Ffeature&template=feature-request.md&title=Feature%20Request:)
+- [Report Bugs](https://github.com/loft-sh/devspace/issues/new?labels=kind%2Fbug&template=bug-report.md&title=Bug%20Report:)
+
+### Contributing Code
+
+This project is mainly written in Golang. If you want to contribute code:
+
+1. Ensure you are running golang version 1.11.4 or greater for go module support
+2. Set the following environment variables:
+ ```
+ GO111MODULE=on
+ GOFLAGS=-mod=vendor
+ ```
+3. Check out the project: `git clone https://github.com/loft-sh/devspace && cd devspace`
+4. Make changes to the code
+5. Build the project, e.g. via `go build -o devspace[.exe]`
+6. Evaluate and test your changes `./devspace [SOME_COMMAND]`
+
+See [Contributing Guidelines](CONTRIBUTING.md) for more information.
+
+
+
+
+## FAQ
+
+
+What is DevSpace?
+
+DevSpace is an open-source command-line tool that provides everything you need to develop, deploy and debug applications with Docker and Kubernetes. It lets you streamline deployment workflows and share them with your colleagues through a declarative configuration file `devspace.yaml`.
+
+
+
+
+Is DevSpace free?
+
+**YES.** DevSpace is open-source and free to use, for both private and commercial projects.
+
+
+
+
+Do I need a Kubernetes cluster to use DevSpace?
+
+**Yes.** You can use a local cluster such as Docker Desktop Kubernetes, minikube, or kind, or a remote cluster such as GKE, EKS, AKS, RKE (Rancher), or DOKS.
+
+
+
+
+Can I use DevSpace with my existing Kubernetes clusters?
+
+**Yes.** DevSpace uses your regular kube-context. As long as you can run `kubectl` commands against a cluster, you can use that cluster with DevSpace as well.
+
+
+
+
+What is a Helm chart?
+
+[Helm](https://helm.sh/) is the package manager for Kubernetes. Packages in Helm are called Helm charts.
+
+
+
+
+
+
+## License
+
+DevSpace is released under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
+
+DevSpace is a [Cloud Native Computing Foundation (CNCF) sandbox project](https://www.cncf.io/sandbox-projects/) and was contributed by [Loft Labs](https://www.loft.sh).
+
+
+
+
diff --git a/data/readmes/devstream-v0133.md b/data/readmes/devstream-v0133.md
new file mode 100644
index 0000000..adb0e71
--- /dev/null
+++ b/data/readmes/devstream-v0133.md
@@ -0,0 +1,9 @@
+# DevStream - README (v0.13.3)
+
+**Repository**: https://github.com/devstream-io/devstream
+**Version**: v0.13.3
+
+---
+
+# devstream
+Intelligent Workflow Engine Driven by Natural Language.
diff --git a/data/readmes/dex-v2440.md b/data/readmes/dex-v2440.md
new file mode 100644
index 0000000..f4c9749
--- /dev/null
+++ b/data/readmes/dex-v2440.md
@@ -0,0 +1,153 @@
+# Dex - README (v2.44.0)
+
+**Repository**: https://github.com/dexidp/dex
+**Version**: v2.44.0
+
+---
+
+# dex - A federated OpenID Connect provider
+
+
+[](https://api.securityscorecards.dev/projects/github.com/dexidp/dex)
+[](https://goreportcard.com/report/github.com/dexidp/dex)
+[](https://gitpod.io/#https://github.com/dexidp/dex)
+
+
+
+Dex is an identity service that uses [OpenID Connect][openid-connect] to drive authentication for other apps.
+
+Dex acts as a portal to other identity providers through ["connectors."](#connectors) This lets dex defer authentication to LDAP servers, SAML providers, or established identity providers like GitHub, Google, and Active Directory. Clients write their authentication logic once to talk to dex, then dex handles the protocols for a given backend.
+
+## ID Tokens
+
+ID Tokens are an OAuth2 extension introduced by OpenID Connect and dex's primary feature. ID Tokens are [JSON Web Tokens][jwt-io] (JWTs) signed by dex and returned as part of the OAuth2 response; they attest to the end user's identity. An example JWT might look like:
+
+```
+eyJhbGciOiJSUzI1NiIsImtpZCI6IjlkNDQ3NDFmNzczYjkzOGNmNjVkZDMyNjY4NWI4NjE4MGMzMjRkOTkifQ.eyJpc3MiOiJodHRwOi8vMTI3LjAuMC4xOjU1NTYvZGV4Iiwic3ViIjoiQ2djeU16UXlOelE1RWdabmFYUm9kV0kiLCJhdWQiOiJleGFtcGxlLWFwcCIsImV4cCI6MTQ5Mjg4MjA0MiwiaWF0IjoxNDkyNzk1NjQyLCJhdF9oYXNoIjoiYmk5NmdPWFpTaHZsV1l0YWw5RXFpdyIsImVtYWlsIjoiZXJpYy5jaGlhbmdAY29yZW9zLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJncm91cHMiOlsiYWRtaW5zIiwiZGV2ZWxvcGVycyJdLCJuYW1lIjoiRXJpYyBDaGlhbmcifQ.OhROPq_0eP-zsQRjg87KZ4wGkjiQGnTi5QuG877AdJDb3R2ZCOk2Vkf5SdP8cPyb3VMqL32G4hLDayniiv8f1_ZXAde0sKrayfQ10XAXFgZl_P1yilkLdknxn6nbhDRVllpWcB12ki9vmAxklAr0B1C4kr5nI3-BZLrFcUR5sQbxwJj4oW1OuG6jJCNGHXGNTBTNEaM28eD-9nhfBeuBTzzO7BKwPsojjj4C9ogU4JQhGvm_l4yfVi0boSx8c0FX3JsiB0yLa1ZdJVWVl9m90XmbWRSD85pNDQHcWZP9hR6CMgbvGkZsgjG32qeRwUL_eNkNowSBNWLrGNPoON1gMg
+```
+
+ID Tokens contain standard claims that assert which client app logged the user in, when the token expires, and the identity of the user.
+
+```json
+{
+ "iss": "http://127.0.0.1:5556/dex",
+ "sub": "CgcyMzQyNzQ5EgZnaXRodWI",
+ "aud": "example-app",
+ "exp": 1492882042,
+ "iat": 1492795642,
+ "at_hash": "bi96gOXZShvlWYtal9Eqiw",
+ "email": "jane.doe@coreos.com",
+ "email_verified": true,
+ "groups": [
+ "admins",
+ "developers"
+ ],
+ "name": "Jane Doe"
+}
+```
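+
+Because a JWT's payload is just base64url-encoded JSON, its claims can be inspected from the shell without special tooling. The sketch below constructs a token-shaped string from invented claims and decodes it back; note that decoding does NOT verify the signature:
+
+```shell
+# Build a token-shaped string from example claims (normally dex creates and signs this).
+claims='{"iss":"http://127.0.0.1:5556/dex","aud":"example-app","email_verified":true}'
+payload=$(printf '%s' "$claims" | base64 | tr -d '=\n' | tr '/+' '_-')
+jwt="header.${payload}.signature"
+
+# Take the middle dot-separated segment, undo the base64url alphabet, restore '=' padding.
+seg=$(printf '%s' "$jwt" | cut -d. -f2 | tr '_-' '/+')
+while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
+printf '%s' "$seg" | base64 -d   # prints the claims JSON above
+```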
+
+Because these tokens are signed by dex and [contain standards-based claims][standard-claims], other services can consume them as service-to-service credentials. Systems that can already consume OpenID Connect ID Tokens issued by dex include:
+
+* [Kubernetes][kubernetes]
+* [AWS STS][aws-sts]
+
+For details on how to request or validate an ID Token, see [_"Writing apps that use dex"_][using-dex].
+
+## Kubernetes and Dex
+
+Dex runs natively on top of any Kubernetes cluster using Custom Resource Definitions and can drive API server authentication through the OpenID Connect plugin. Clients, such as the [`kubernetes-dashboard`](https://github.com/kubernetes/dashboard) and `kubectl`, can act on behalf of users who can login to the cluster through any identity provider dex supports.
+
+* More docs for running dex as a Kubernetes authenticator can be found [here](https://dexidp.io/docs/guides/kubernetes/).
+* You can find more about companies and projects which use dex, [here](./ADOPTERS.md).
+
+## Connectors
+
+When a user logs in through dex, the user's identity is usually stored in another user-management system: a LDAP directory, a GitHub org, etc. Dex acts as a shim between a client app and the upstream identity provider. The client only needs to understand OpenID Connect to query dex, while dex implements an array of protocols for querying other user-management systems.
+
+
+
+A "connector" is a strategy used by dex for authenticating a user against another identity provider. Dex implements connectors that target specific platforms such as GitHub, LinkedIn, and Microsoft as well as established protocols like LDAP and SAML.
+
+Depending on the connector, limitations of the upstream protocol can prevent dex from issuing [refresh tokens][scopes] or returning [group membership][scopes] claims. For example, because SAML doesn't provide a non-interactive way to refresh assertions, if a user logs in through the SAML connector, dex won't issue a refresh token to its client. Refresh token support is required for clients that need offline access, such as `kubectl`.
+
+Dex implements the following connectors:
+
+| Name | supports refresh tokens | supports groups claim | supports preferred_username claim | status | notes |
+| ---- | ----------------------- | --------------------- | --------------------------------- | ------ | ----- |
+| [LDAP](https://dexidp.io/docs/connectors/ldap/) | yes | yes | yes | stable | |
+| [GitHub](https://dexidp.io/docs/connectors/github/) | yes | yes | yes | stable | |
+| [SAML 2.0](https://dexidp.io/docs/connectors/saml/) | no | yes | no | stable | WARNING: Unmaintained and likely vulnerable to auth bypasses ([#1884](https://github.com/dexidp/dex/discussions/1884)) |
+| [GitLab](https://dexidp.io/docs/connectors/gitlab/) | yes | yes | yes | beta | |
+| [OpenID Connect](https://dexidp.io/docs/connectors/oidc/) | yes | yes | yes | beta | Includes Salesforce, Azure, etc. |
+| [OAuth 2.0](https://dexidp.io/docs/connectors/oauth/) | no | yes | yes | alpha | |
+| [Google](https://dexidp.io/docs/connectors/google/) | yes | yes | yes | alpha | |
+| [LinkedIn](https://dexidp.io/docs/connectors/linkedin/) | yes | no | no | beta | |
+| [Microsoft](https://dexidp.io/docs/connectors/microsoft/) | yes | yes | no | beta | |
+| [AuthProxy](https://dexidp.io/docs/connectors/authproxy/) | no | yes | no | alpha | Authentication proxies such as Apache2 mod_auth, etc. |
+| [Bitbucket Cloud](https://dexidp.io/docs/connectors/bitbucketcloud/) | yes | yes | no | alpha | |
+| [OpenShift](https://dexidp.io/docs/connectors/openshift/) | yes | yes | no | alpha | |
+| [Atlassian Crowd](https://dexidp.io/docs/connectors/atlassian-crowd/) | yes | yes | yes * | beta | preferred_username claim must be configured through config |
+| [Gitea](https://dexidp.io/docs/connectors/gitea/) | yes | no | yes | beta | |
+| [OpenStack Keystone](https://dexidp.io/docs/connectors/keystone/) | yes | yes | no | alpha | |
+
+Stable, beta, and alpha are defined as:
+
+* Stable: well tested, in active use, and will not change in backward incompatible ways.
+* Beta: tested and unlikely to change in backward incompatible ways.
+* Alpha: may be untested by core maintainers and is subject to change in backward incompatible ways.
+
+All changes or deprecations of connector features will be announced in the [release notes][release-notes].
+
+## Documentation
+
+* [Getting started](https://dexidp.io/docs/getting-started/)
+* [Intro to OpenID Connect](https://dexidp.io/docs/openid-connect/)
+* [Writing apps that use dex][using-dex]
+* [What's new in v2](https://dexidp.io/docs/archive/v2/)
+* [Custom scopes, claims, and client features](https://dexidp.io/docs/custom-scopes-claims-clients/)
+* [Storage options](https://dexidp.io/docs/storage/)
+* [gRPC API](https://dexidp.io/docs/api/)
+* [Using Kubernetes with dex](https://dexidp.io/docs/kubernetes/)
+* Client libraries
+ * [Go][go-oidc]
+
+## Reporting a vulnerability
+
+Please see our [security policy](.github/SECURITY.md) for details about reporting vulnerabilities.
+
+## Getting help
+
+- For feature requests and bugs, file an [issue](https://github.com/dexidp/dex/issues).
+- For general discussion about both using and developing Dex:
+ - join the [#dexidp](https://cloud-native.slack.com/messages/dexidp) on the CNCF Slack
+ - open a new [discussion](https://github.com/dexidp/dex/discussions)
+ - join the [dex-dev](https://groups.google.com/forum/#!forum/dex-dev) mailing list
+
+[openid-connect]: https://openid.net/connect/
+[standard-claims]: https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims
+[scopes]: https://dexidp.io/docs/custom-scopes-claims-clients/#scopes
+[using-dex]: https://dexidp.io/docs/using-dex/
+[jwt-io]: https://jwt.io/
+[kubernetes]: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
+[aws-sts]: https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html
+[go-oidc]: https://github.com/coreos/go-oidc
+[issue-1065]: https://github.com/dexidp/dex/issues/1065
+[release-notes]: https://github.com/dexidp/dex/releases
+
+## Development
+
+When all coding and testing is done, please run the test suite:
+
+```shell
+make testall
+```
+
+For the best developer experience, install [Nix](https://builtwithnix.org/) and [direnv](https://direnv.net/).
+
+Alternatively, install Go and Docker manually or using a package manager. Install the rest of the dependencies by running `make deps`.
+
+For release process, please read the [release documentation](https://dexidp.io/docs/development/releases/).
+
+## License
+
+The project is licensed under the [Apache License, Version 2.0](LICENSE).
diff --git a/data/readmes/digitalocean-cli-v11470.md b/data/readmes/digitalocean-cli-v11470.md
new file mode 100644
index 0000000..0cb46ee
--- /dev/null
+++ b/data/readmes/digitalocean-cli-v11470.md
@@ -0,0 +1,444 @@
+# DigitalOcean CLI - README (v1.147.0)
+
+**Repository**: https://github.com/digitalocean/doctl
+**Version**: v1.147.0
+
+---
+
+
+
+```
+doctl is a command-line interface (CLI) for the DigitalOcean API.
+
+Usage:
+ doctl [command]
+
+Available Commands:
+ 1-click Display commands that pertain to 1-click applications
+ account Display commands that retrieve account details
+ apps Display commands for working with apps
+ auth Display commands for authenticating doctl with an account
+ balance Display commands for retrieving your account balance
+ billing-history Display commands for retrieving your billing history
+ completion Modify your shell so doctl commands autocomplete with TAB
+ compute Display commands that manage infrastructure
+ databases Display commands that manage databases
+ help Help about any command
+ invoice Display commands for retrieving invoices for your account
+ kubernetes Displays commands to manage Kubernetes clusters and configurations
+ monitoring [Beta] Display commands to manage monitoring
+ projects Manage projects and assign resources to them
+ registry Display commands for working with container registries
+ version Show the current version
+ vpcs Display commands that manage VPCs
+
+Flags:
+ -t, --access-token string API V2 access token
+ -u, --api-url string Override default API endpoint
+ -c, --config string Specify a custom config file (default "$HOME/.config/doctl/config.yaml")
+ --context string Specify a custom authentication context name
+ -h, --help help for doctl
+ -o, --output string Desired output format [text|json] (default "text")
+ --trace Show a log of network activity while performing a command
+ -v, --verbose Enable verbose output
+
+Use "doctl [command] --help" for more information about a command.
+```
+
+See the [full reference documentation](https://www.digitalocean.com/docs/apis-clis/doctl/reference/) for information about each available command.
+
+- [Installing `doctl`](#installing-doctl)
+ - [Using a Package Manager (Preferred)](#using-a-package-manager-preferred)
+ - [MacOS](#macos)
+ - [Snap supported OS](#snap-supported-os)
+ - [Use with `kubectl`](#use-with-kubectl)
+ - [Using `doctl compute ssh`](#using-doctl-compute-ssh)
+ - [Use with Docker](#use-with-docker)
+ - [Arch Linux](#arch-linux)
+ - [Fedora](#fedora)
+ - [Nix supported OS](#nix-supported-os)
+ - [Docker Hub](#docker-hub)
+ - [Downloading a Release from GitHub](#downloading-a-release-from-github)
+ - [Building with Docker](#building-with-docker)
+ - [Building the Development Version from Source](#building-the-development-version-from-source)
+ - [Dependencies](#dependencies)
+- [Authenticating with DigitalOcean](#authenticating-with-digitalocean)
+ - [Logging into multiple DigitalOcean accounts](#logging-into-multiple-digitalocean-accounts)
+- [Configuring Default Values](#configuring-default-values)
+ - [Environment Variables](#environment-variables)
+- [Enabling Shell Auto-Completion](#enabling-shell-auto-completion)
+ - [Linux Auto Completion](#linux-auto-completion)
+ - [MacOS](#macos-1)
+- [Uninstalling `doctl`](#uninstalling-doctl)
+ - [Using a Package Manager](#using-a-package-manager)
+ - [MacOS Uninstall](#macos-uninstall)
+- [Examples](#examples)
+- [Tutorials](#tutorials)
+
+
+## Installing `doctl`
+
+### Using a Package Manager (Preferred)
+
+A package manager allows you to install and keep up with new `doctl` versions using only a few commands.
+Our community distributes `doctl` via a growing set of package managers in addition to the officially
+supported set listed below; chances are good a solution exists for your platform.
+
+#### MacOS
+
+Use [Homebrew](https://brew.sh/) to install `doctl` on macOS:
+
+```
+brew install doctl
+```
+
+`doctl` is also available via [MacPorts](https://www.macports.org/ports.php?by=name&substr=doctl). Note that
+the port is community maintained and may not be on the latest version.
+
+#### Snap supported OS
+
+Use [Snap](https://snapcraft.io/) on [Snap-supported](https://snapcraft.io/docs/core/install) systems to
+install `doctl`:
+
+```
+sudo snap install doctl
+```
+
+##### Use with `kubectl`
+
+Using `kubectl` requires the `kube-config` personal-files connection for `doctl`:
+
+ sudo snap connect doctl:kube-config
+
+##### Using `doctl compute ssh`
+
+Using `doctl compute ssh` requires the core [ssh-keys interface](https://docs.snapcraft.io/ssh-keys-interface):
+
+ sudo snap connect doctl:ssh-keys :ssh-keys
+
+##### Use with Docker
+
+Using `doctl registry login` requires the `dot-docker` personal-files connection for `doctl`:
+
+ sudo snap connect doctl:dot-docker
+
+This allows `doctl` to add DigitalOcean container registry credentials to your Docker configuration file.
+
+#### Arch Linux
+
+`doctl` is available in the official Arch Linux repository:
+
+ sudo pacman -S doctl
+
+#### Fedora
+
+`doctl` is available in the official Fedora repository:
+
+ sudo dnf install doctl
+
+#### Nix supported OS
+
+Users of NixOS or other [supported platforms](https://nixos.org/nixpkgs/) may install `doctl` from [Nixpkgs](https://nixos.org/nixos/packages.html#doctl).
+Please note this package is also community maintained and may not be on the latest version.
+
+### Docker Hub
+
+Containers for each release are available under the `digitalocean`
+organization on [Docker Hub](https://hub.docker.com/r/digitalocean/doctl).
+Links to the containers are available in the GitHub releases.
+
+### Downloading a Release from GitHub
+
+Visit the [Releases
+page](https://github.com/digitalocean/doctl/releases) for the
+[`doctl` GitHub project](https://github.com/digitalocean/doctl), and find the
+appropriate archive for your operating system and architecture.
+Download the archive from your browser or copy its URL and
+retrieve it to your home directory with `wget` or `curl`.
+
+For example, with `wget`:
+
+```
+cd ~
+wget https://github.com/digitalocean/doctl/releases/download/v<version>/doctl-<version>-linux-amd64.tar.gz
+```
+
+Or with `curl`:
+
+```
+cd ~
+curl -OL https://github.com/digitalocean/doctl/releases/download/v<version>/doctl-<version>-linux-amd64.tar.gz
+```
+
+Extract the binary:
+
+```
+tar xf ~/doctl-<version>-linux-amd64.tar.gz
+```
+
+Or download and extract with this oneliner:
+```
+curl -sL https://github.com/digitalocean/doctl/releases/download/v<version>/doctl-<version>-linux-amd64.tar.gz | tar -xzv
+```
+
+where `<version>` is the full semantic version, e.g., `1.17.0`.
+
+On Windows systems, you should be able to double-click the zip archive to extract the `doctl` executable.
+
+Move the `doctl` binary to somewhere in your path. For example, on GNU/Linux and OS X systems:
+
+```
+sudo mv ~/doctl /usr/local/bin
+```
+
+Windows users can follow [How to: Add Tool Locations to the PATH Environment Variable](https://msdn.microsoft.com/en-us/library/office/ee537574(v=office.14).aspx) in order to add `doctl` to their `PATH`.
+
+### Building with Docker
+
+If you have
+[Docker](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04)
+configured, you can build a local Docker image using `doctl`'s
+[Dockerfile](https://github.com/digitalocean/doctl/blob/main/Dockerfile)
+and run `doctl` within a container.
+
+```
+docker build --tag=doctl .
+```
+
+Then you can run it within a container.
+
+```
+docker run --rm --interactive --tty --env=DIGITALOCEAN_ACCESS_TOKEN="your_DO_token" doctl any_doctl_command
+```
+
+### Building the Development Version from Source
+
+If you have a [Go environment](https://www.digitalocean.com/community/tutorials/how-to-install-go-1-6-on-ubuntu-16-04)
+configured, you can install the development version of `doctl` from
+the command line.
+
+```
+go install github.com/digitalocean/doctl/cmd/doctl@latest
+```
+
+While the development version is a good way to take a peek at
+`doctl`'s latest features before they get released, be aware that it
+may have bugs. Officially released versions will generally be more
+stable.
+
+### Dependencies
+
+`doctl` uses Go modules with vendoring.
+
+## Authenticating with DigitalOcean
+
+To use `doctl`, you need to authenticate with DigitalOcean by providing an access token, which can be created from the [Applications & API](https://cloud.digitalocean.com/account/api/tokens) section of the Control Panel. You can learn how to generate a token by following the [DigitalOcean API guide](https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-api-v2).
+
+Docker users will have to use the `DIGITALOCEAN_ACCESS_TOKEN` environment variable to authenticate, as explained in the Installation section of this document.
+
+If you're not using Docker to run `doctl`, authenticate with the `auth init` command.
+
+```
+doctl auth init
+```
+
+You will be prompted to enter the DigitalOcean access token that you generated in the DigitalOcean control panel.
+
+```
+DigitalOcean access token: your_DO_token
+```
+
+After entering your token, you will receive confirmation that the credentials were accepted. If the token doesn't validate, make sure you copied and pasted it correctly.
+
+```
+Validating token: OK
+```
+
+This will create the necessary directory structure and configuration file to store your credentials.
+
+### Logging into multiple DigitalOcean accounts
+
+`doctl` allows you to log in to multiple DigitalOcean accounts at the same time and easily switch between them with the use of authentication contexts.
+
+By default, a context named `default` is used. To create a new context, run `doctl auth init --context <new-context-name>`. You may also pass the new context's name using the `DIGITALOCEAN_CONTEXT` [environment variable](#environment-variables). You will be prompted for your API access token, which will be associated with the new context.
+
+To use a non-default context, pass the context name to any `doctl` command. For example:
+
+```
+doctl compute droplet list --context <context-name>
+```
+
+To set a new default context, run `doctl auth switch --context <context-name>`. This command will save the current context to the config file and use it for all commands by default if a context is not specified.
+
+The `--access-token` flag or `DIGITALOCEAN_ACCESS_TOKEN` [environment variable](#environment-variables) is acknowledged only when the `default` context is used. Otherwise, it has no effect on which API access token is used. To temporarily override the access token when a different context is set as default, use `doctl --context default --access-token your_DO_token ...`.
+
+## Configuring Default Values
+
+The `doctl` configuration file is used to store your API Access Token as well as the defaults for command flags. If you find yourself using certain flags frequently, you can change their default values to avoid typing them every time. This can be useful when, for example, you want to change the username or port used for SSH.
+
+On OS X, `doctl` saves its configuration as `${HOME}/Library/Application Support/doctl/config.yaml`. The `${HOME}/Library/Application Support/doctl/` directory will be created once you run `doctl auth init`.
+
+On Linux, `doctl` saves its configuration as `${XDG_CONFIG_HOME}/doctl/config.yaml` if the `${XDG_CONFIG_HOME}` environment variable is set, or `~/.config/doctl/config.yaml` if it is not. On Windows, the config file location is `%APPDATA%\doctl\config.yaml`.
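+
+The Linux lookup order above can be sketched in shell; the home directory here is illustrative, not an assumption about your system:
+
+```shell
+# Resolve the config path the way described above: prefer $XDG_CONFIG_HOME,
+# fall back to ~/.config when it is unset (or, in this sketch, empty).
+HOME="/home/sammy"
+XDG_CONFIG_HOME=""
+config_path="${XDG_CONFIG_HOME:-$HOME/.config}/doctl/config.yaml"
+echo "$config_path"   # prints: /home/sammy/.config/doctl/config.yaml
+```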
+
+The configuration file is automatically created and populated with default properties when you authenticate with `doctl` for the first time. The typical format for a property is `category.command.sub-command.flag: value`. For example, the property for the `force` flag of the tag-deletion command is `tag.delete.force`.
+
+To change the default SSH user used when connecting to a Droplet with `doctl`, look for the `compute.ssh.ssh-user` property and change the value after the colon. In this example, we changed it to the username **sammy**.
+
+```
+. . .
+compute.ssh.ssh-user: sammy
+. . .
+```
+
+Save and close the file. The next time you use `doctl`, the new default values you set will be in effect. In this example, that means that it will SSH as the **sammy** user (instead of the default **root** user) next time you log into a Droplet.
+
+### Environment variables
+
+In addition to specifying configuration via the `config.yaml` file or program arguments, you can override values for the current session with environment variables:
+
+```
+# Use instead of --context argument
+DIGITALOCEAN_CONTEXT=my-context doctl auth list
+```
+
+```
+# Use instead of --access-token argument
+DIGITALOCEAN_ACCESS_TOKEN=my-do-token doctl
+```
+
+## Enabling Shell Auto-Completion
+
+`doctl` also has auto-completion support. It can be set up so that if you partially type a command and then press `TAB`, the rest of the command is automatically filled in. For example, if you type `doctl comp drop` with auto-completion enabled, you'll see `doctl compute droplet` appear on your command prompt.
+
+**Note:** Shell auto-completion is not available for Windows users.
+
+How you enable auto-completion depends on which operating system you're using. If you installed `doctl` via Homebrew, auto-completion is activated automatically, though you may need to configure your local environment to enable it.
+
+`doctl` can generate an auto-completion script with the `doctl completion your_shell_here` command. Valid arguments for the shell are Bash (`bash`), ZSH (`zsh`), and fish (`fish`). By default, the script will be printed to the command line output. For more usage examples for the `completion` command, use `doctl completion --help`.
+
+### Linux Auto Completion
+
+The most common way to use the `completion` command is by adding a line to your local profile configuration. At the end of your `~/.profile` file, add this line:
+
+```
+source <(doctl completion your_shell_here)
+```
+
+If you are using ZSH, add this line to your `~/.zshrc` file:
+
+```
+compdef _doctl doctl
+```
+
+Then refresh your profile.
+
+```
+source ~/.profile
+```
+
+### MacOS (bash)
+
+macOS users will have to install the `bash-completion` framework to use the auto-completion feature.
+
+```
+brew install bash-completion
+```
+
+After it's installed, load `bash_completion` by adding the following line to your `.profile` or `.bashrc` file.
+
+```
+source $(brew --prefix)/etc/bash_completion
+```
+
+Then refresh your profile using the appropriate command for your bash configuration file.
+
+```
+source ~/.profile
+source ~/.bashrc
+```
+
+### MacOS (zsh)
+
+Add the following line to your `~/.zshrc` file:
+
+```zsh
+autoload -U +X compinit; compinit
+```
+
+Then refresh your profile.
+
+## Uninstalling `doctl`
+
+### Using a Package Manager
+
+#### MacOS Uninstall
+
+Use [Homebrew](https://brew.sh/) to uninstall all current and previous versions of the `doctl` formula on macOS:
+
+```
+brew uninstall -f doctl
+```
+
+To completely remove the configuration, also remove the following directory:
+
+```
+rm -rf "$HOME/Library/Application Support/doctl"
+```
+
+
+## Examples
+
+`doctl` is able to interact with all of your DigitalOcean resources. Below are a few common usage examples. To learn more about the features available, see [the full tutorial on the DigitalOcean community site](https://www.digitalocean.com/community/tutorials/how-to-use-doctl-the-official-digitalocean-command-line-client).
+
+* List all Droplets on your account:
+```
+doctl compute droplet list
+```
+* Create a Droplet:
+```
+doctl compute droplet create <droplet-name> --region <region-slug> --image <image-slug> --size <size-slug>
+```
+* Assign a Floating IP to a Droplet:
+```
+doctl compute floating-ip-action assign <floating-ip> <droplet-id>
+```
+* Create a new A record for an existing domain:
+```
+doctl compute domain records create <domain-name> --record-type A --record-name www --record-data <ip-address>
+```
+
+`doctl` also simplifies actions without an API endpoint. For instance, it allows you to SSH to your Droplet by name:
+```
+doctl compute ssh <droplet-name>
+```
+
+By default, it assumes you are using the `root` user. If you want to SSH as a specific user, you can do that as well:
+```
+doctl compute ssh <user>@<droplet-name>
+```
+
+## Tutorials
+
+* [How To Use Doctl, the Official DigitalOcean Command-Line Client](https://www.digitalocean.com/community/tutorials/how-to-use-doctl-the-official-digitalocean-command-line-client)
+* [How To Work with DigitalOcean Load Balancers Using Doctl](https://www.digitalocean.com/community/tutorials/how-to-work-with-digitalocean-load-balancers-using-doctl)
+* [How To Secure Web Server Infrastructure With DigitalOcean Cloud Firewalls Using Doctl](https://www.digitalocean.com/community/tutorials/how-to-secure-web-server-infrastructure-with-digitalocean-cloud-firewalls-using-doctl)
+* [How To Work with DigitalOcean Block Storage Using Doctl](https://www.digitalocean.com/community/tutorials/how-to-work-with-digitalocean-block-storage-using-doctl)
diff --git a/data/readmes/distribution-v300.md b/data/readmes/distribution-v300.md
new file mode 100644
index 0000000..21890b4
--- /dev/null
+++ b/data/readmes/distribution-v300.md
@@ -0,0 +1,81 @@
+# Distribution - README (v3.0.0)
+
+**Repository**: https://github.com/distribution/distribution
+**Version**: v3.0.0
+
+---
+
+
+
+
+
+[](https://github.com/distribution/distribution/actions/workflows/build.yml?query=workflow%3Abuild)
+[](https://pkg.go.dev/github.com/distribution/distribution)
+[](LICENSE)
+[](https://codecov.io/gh/distribution/distribution)
+[](https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Fdistribution%2Fdistribution?ref=badge_shield)
+[](https://github.com/distribution/distribution/actions?query=workflow%3Aconformance)
+[](https://securityscorecards.dev/viewer/?uri=github.com/distribution/distribution)
+
+The toolset to pack, ship, store, and deliver content.
+
+This repository's main product is the Open Source Registry implementation
+for storing and distributing container images and other content using the
+[OCI Distribution Specification](https://github.com/opencontainers/distribution-spec).
+The goal of this project is to provide a simple, secure, and scalable base
+for building a large scale registry solution or running a simple private registry.
+It is a core library for many registry operators including Docker Hub, GitHub Container Registry,
+GitLab Container Registry and DigitalOcean Container Registry, as well as the CNCF Harbor
+Project, and VMware Harbor Registry.
+
+This repository contains the following components:
+
+|**Component** |Description |
+|--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **registry** | An implementation of the [OCI Distribution Specification](https://github.com/opencontainers/distribution-spec). |
+| **libraries** | A rich set of libraries for interacting with distribution components. Please see [godoc](https://pkg.go.dev/github.com/distribution/distribution) for details. **Note**: The interfaces for these libraries are **unstable**. |
+| **documentation** | Full documentation is available at [https://distribution.github.io/distribution](https://distribution.github.io/distribution/). |
+
+### How does this integrate with Docker, containerd, and other OCI clients?
+
+Clients implement against the OCI specification and communicate with the
+registry using HTTP. This project contains a client implementation that
+is currently in use by Docker; however, it is deprecated in favor of the
+[implementation in containerd](https://github.com/containerd/containerd/tree/master/remotes/docker)
+and will not support new features.
+
+### What are the long term goals of the Distribution project?
+
+The _Distribution_ project has the further long term goal of providing a
+secure tool chain for distributing content. The specifications, APIs and tools
+should be as useful with Docker as they are without.
+
+Our goal is to design a professional-grade and extensible content distribution
+system that allows users to:
+
+* Enjoy an efficient, secure, and reliable way to store, manage, package, and exchange content
+* Hack/roll their own on top of healthy open-source components
+* Implement their own home-made solutions through good specs and solid extension mechanisms.
+
+## Contribution
+
+Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to contribute
+issues, fixes, and patches to this project. If you are contributing code, see
+the instructions for [building a development environment](BUILDING.md).
+
+## Communication
+
+For async communication and long-running discussions, please use issues and pull requests on the GitHub repo.
+This is the best place to discuss design and implementation.
+
+For sync communication we have a #distribution channel in the [CNCF Slack](https://slack.cncf.io/)
+that everyone is welcome to join and chat about development.
+
+## Licenses
+
+The distribution codebase is released under the [Apache 2.0 license](LICENSE).
+The README.md file, and files in the "docs" folder are licensed under the
+Creative Commons Attribution 4.0 International License. You may obtain a
+copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.
diff --git a/data/readmes/dive-v0131.md b/data/readmes/dive-v0131.md
new file mode 100644
index 0000000..874f007
--- /dev/null
+++ b/data/readmes/dive-v0131.md
@@ -0,0 +1,357 @@
+# Dive - README (v0.13.1)
+
+**Repository**: https://github.com/wagoodman/dive
+**Version**: v0.13.1
+
+---
+
+# dive
+[](https://github.com/wagoodman/dive/releases/latest)
+[](https://github.com/wagoodman/dive/actions/workflows/validations.yaml)
+[](https://goreportcard.com/report/github.com/wagoodman/dive)
+[](https://github.com/wagoodman/dive/blob/main/LICENSE)
+[](https://www.paypal.me/wagoodman)
+
+**A tool for exploring a Docker image, layer contents, and discovering ways to shrink the size of your Docker/OCI image.**
+
+
+
+
+To analyze a Docker image simply run dive with an image tag/id/digest:
+```bash
+dive <your-image-tag>
+```
+
+or you can dive with Docker directly:
+```
+alias dive="docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock docker.io/wagoodman/dive"
+dive <your-image>
+
+# for example
+dive nginx:latest
+```
+
+or if you want to build your image then jump straight into analyzing it:
+```bash
+dive build -t <some-tag> .
+```
+
+Building on macOS (supporting only the Docker container engine):
+
+```bash
+docker run --rm -it \
+ -v /var/run/docker.sock:/var/run/docker.sock \
+ -v "$(pwd)":"$(pwd)" \
+ -w "$(pwd)" \
+ -v "$HOME/.dive.yaml":"$HOME/.dive.yaml" \
+ docker.io/wagoodman/dive:latest build -t <some-tag> .
+```
+
+Additionally, you can run this in your CI pipeline to ensure you're keeping wasted space to a minimum (this skips the UI):
+```
+CI=true dive <your-image>
+```
+
+
+
+**This is beta quality!** *Feel free to submit an issue if you want a new feature or find a bug :)*
+
+## Basic Features
+
+**Show Docker image contents broken down by layer**
+
+As you select a layer on the left, you are shown the contents of that layer combined with all previous layers on the right. Also, you can fully explore the file tree with the arrow keys.
+
+**Indicate what's changed in each layer**
+
+Files that have changed, been modified, added, or removed are indicated in the file tree. This can be adjusted to show changes for a specific layer, or aggregated changes up to this layer.
+
+**Estimate "image efficiency"**
+
+The lower left pane shows basic layer info and an experimental metric that estimates how much wasted space your image contains. This might be from duplicating files across layers, moving files across layers, or not fully removing files. Both a percentage "score" and the total wasted file space are provided.
+
+**Quick build/analysis cycles**
+
+You can build a Docker image and do an immediate analysis with one command:
+`dive build -t some-tag .`
+
+You only need to replace your `docker build` command with the equivalent
+`dive build` command.
+
+**CI Integration**
+
+Analyze an image and get a pass/fail result based on the image efficiency and wasted space. Simply set `CI=true` in the environment when invoking any valid dive command.
+
+**Multiple Image Sources and Container Engines Supported**
+
+With the `--source` option, you can select where to fetch the container image from:
+```bash
+dive <your-image> --source <source>
+```
+or
+```bash
+dive <source>://<image>
+```
+
+Valid `source` options are:
+- `docker`: Docker engine (the default)
+- `docker-archive`: a Docker tar archive on disk
+- `podman`: Podman engine (Linux only)
+
+## Installation
+
+**Ubuntu/Debian**
+
+Using debs:
+```bash
+DIVE_VERSION=$(curl -sL "https://api.github.com/repos/wagoodman/dive/releases/latest" | grep '"tag_name":' | sed -E 's/.*"v([^"]+)".*/\1/')
+curl -fOL "https://github.com/wagoodman/dive/releases/download/v${DIVE_VERSION}/dive_${DIVE_VERSION}_linux_amd64.deb"
+sudo apt install ./dive_${DIVE_VERSION}_linux_amd64.deb
+```
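+
+To see what the `grep`/`sed` pipeline above extracts, here is the same `sed` expression applied to a sample line of the GitHub releases API response (the version number is just an example):
+
+```shell
+sample='  "tag_name": "v0.13.1",'
+# Capture everything between "v and the next double quote, discard the rest.
+version=$(echo "$sample" | sed -E 's/.*"v([^"]+)".*/\1/')
+echo "$version"   # prints: 0.13.1
+```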
+
+Using snap:
+```bash
+sudo snap install docker
+sudo snap install dive
+sudo snap connect dive:docker-executables docker:docker-executables
+sudo snap connect dive:docker-daemon docker:docker-daemon
+```
+
+> [!CAUTION]
+> The Snap method is not recommended if you installed Docker via `apt-get`, since it might break your existing Docker daemon.
+>
+> See also: https://github.com/wagoodman/dive/issues/546
+
+
+**RHEL/Centos**
+```bash
+DIVE_VERSION=$(curl -sL "https://api.github.com/repos/wagoodman/dive/releases/latest" | grep '"tag_name":' | sed -E 's/.*"v([^"]+)".*/\1/')
+curl -fOL "https://github.com/wagoodman/dive/releases/download/v${DIVE_VERSION}/dive_${DIVE_VERSION}_linux_amd64.rpm"
+rpm -i dive_${DIVE_VERSION}_linux_amd64.rpm
+```
+
+**Arch Linux**
+
+Available in the [extra repository](https://archlinux.org/packages/extra/x86_64/dive/) and can be installed via [pacman](https://wiki.archlinux.org/title/Pacman):
+
+```bash
+pacman -S dive
+```
+
+**Mac**
+
+If you use [Homebrew](https://brew.sh):
+
+```bash
+brew install dive
+```
+
+If you use [MacPorts](https://www.macports.org):
+
+```bash
+sudo port install dive
+```
+
+Or download the latest Darwin build from the [releases page](https://github.com/wagoodman/dive/releases/latest).
+
+**Windows**
+
+If you use [Chocolatey](https://chocolatey.org)
+
+```powershell
+choco install dive
+```
+
+If you use [scoop](https://scoop.sh/)
+
+```powershell
+scoop install main/dive
+```
+
+If you use [winget](https://learn.microsoft.com/en-gb/windows/package-manager/):
+
+```powershell
+winget install --id wagoodman.dive
+```
+
+Or download the latest Windows build from the [releases page](https://github.com/wagoodman/dive/releases/latest).
+
+**Go tools**
+
+Requires Go version 1.10 or higher.
+
+```bash
+go install github.com/wagoodman/dive@latest
+```
+*Note*: when installed this way, `dive -v` will not report a proper version.
+
+**Nix/NixOS**
+
+On NixOS:
+```bash
+nix-env -iA nixos.dive
+```
+On non-NixOS (Linux, Mac)
+```bash
+nix-env -iA nixpkgs.dive
+```
+
+**X-CMD**
+
+[x-cmd](https://www.x-cmd.com/) is a **toolbox for Posix Shell**, offering a lightweight package manager built using shell and awk.
+```sh
+x env use dive
+```
+
+**Docker**
+```bash
+docker pull docker.io/wagoodman/dive
+# or alternatively
+docker pull ghcr.io/wagoodman/dive
+```
+
+When running you'll need to include the Docker socket file:
+```bash
+docker run --rm -it \
+ -v /var/run/docker.sock:/var/run/docker.sock \
+ docker.io/wagoodman/dive:latest
+```
+
+Docker for Windows (showing PowerShell compatible line breaks; collapse to a single line for Command Prompt compatibility)
+```powershell
+docker run --rm -it `
+ -v /var/run/docker.sock:/var/run/docker.sock `
+ docker.io/wagoodman/dive:latest
+```
+
+**Note:** depending on the version of Docker you are running locally, you may need to specify the Docker API version as an environment variable:
+```bash
+ DOCKER_API_VERSION=1.37 dive ...
+```
+or if you are running with a docker image:
+```bash
+docker run --rm -it \
+ -v /var/run/docker.sock:/var/run/docker.sock \
+ -e DOCKER_API_VERSION=1.37 \
+ docker.io/wagoodman/dive:latest
+```
+If you are using an alternative runtime (Colima, etc.), you may need to specify the Docker host as an environment variable in order to pull local images:
+```bash
+ export DOCKER_HOST=$(docker context inspect -f '{{ .Endpoints.docker.Host }}')
+```
+
+## CI Integration
+
+When dive is run with the environment variable `CI=true`, the UI is bypassed; instead, dive analyzes your Docker image and gives a pass/fail indication via its return code. Currently, three metrics are supported via a `.dive-ci` file that you can put at the root of your repo:
+```yaml
+rules:
+ # If the efficiency is measured below X%, mark as failed.
+ # Expressed as a ratio between 0-1.
+ lowestEfficiency: 0.95
+
+ # If the amount of wasted space is at least X or larger than X, mark as failed.
+ # Expressed in B, KB, MB, and GB.
+ highestWastedBytes: 20MB
+
+ # If the amount of wasted space makes up for X% or more of the image, mark as failed.
+ # Note: the base image layer is NOT included in the total image size.
+ # Expressed as a ratio between 0-1; fails if the threshold is met or crossed.
+ highestUserWastedPercent: 0.20
+```
+You can override the CI config path with the `--ci-config` option.
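+
+The two byte/ratio thresholds above can be checked by hand. Here is a small sketch with invented numbers; it only mimics the comparisons and is not how dive actually measures an image:
+
+```shell
+wasted_mb=25    # hypothetical wasted space in the image
+total_mb=150    # hypothetical total user-layer size
+
+# highestWastedBytes: fail when wasted space reaches 20MB.
+bytes_rule=$(awk -v w="$wasted_mb" 'BEGIN { if (w >= 20) print "FAIL"; else print "PASS" }')
+
+# highestUserWastedPercent: fail when wasted/total reaches 0.20.
+percent_rule=$(awk -v w="$wasted_mb" -v t="$total_mb" 'BEGIN { if (w / t >= 0.20) print "FAIL"; else print "PASS" }')
+
+echo "highestWastedBytes:       $bytes_rule"    # FAIL (25MB >= 20MB)
+echo "highestUserWastedPercent: $percent_rule"  # PASS (25/150 is about 0.17)
+```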
+
+## KeyBindings
+
+Key Binding | Description
+-------------------------------------------|---------------------------------------------------------
+Ctrl + C or Q | Exit
+Tab | Switch between the layer and filetree views
+Ctrl + F | Filter files
+ESC | Close filter files
+PageUp or U | Scroll up a page
+PageDown or D | Scroll down a page
+Up or K | Move up one line within a page
+Down or J | Move down one line within a page
+Ctrl + A | Layer view: see aggregated image modifications
+Ctrl + L | Layer view: see current layer modifications
+Space | Filetree view: collapse/uncollapse a directory
+Ctrl + Space | Filetree view: collapse/uncollapse all directories
+Ctrl + A | Filetree view: show/hide added files
+Ctrl + R | Filetree view: show/hide removed files
+Ctrl + M | Filetree view: show/hide modified files
+Ctrl + U | Filetree view: show/hide unmodified files
+Ctrl + B | Filetree view: show/hide file attributes
+PageUp or U | Filetree view: scroll up a page
+PageDown or D | Filetree view: scroll down a page
+
+## UI Configuration
+
+No configuration is necessary; however, you can create a config file and override values:
+```yaml
+# supported options are "docker" and "podman"
+container-engine: docker
+# continue with analysis even if there are errors parsing the image archive
+ignore-errors: false
+log:
+ enabled: true
+ path: ./dive.log
+ level: info
+
+# Note: you can specify multiple bindings by separating values with a comma.
+# Note: UI hinting is derived from the first binding
+keybinding:
+ # Global bindings
+ quit: ctrl+c
+ toggle-view: tab
+ filter-files: ctrl+f, ctrl+slash
+ close-filter-files: esc
+ up: up,k
+ down: down,j
+ left: left,h
+ right: right,l
+
+ # Layer view specific bindings
+ compare-all: ctrl+a
+ compare-layer: ctrl+l
+
+ # File view specific bindings
+ toggle-collapse-dir: space
+ toggle-collapse-all-dir: ctrl+space
+ toggle-added-files: ctrl+a
+ toggle-removed-files: ctrl+r
+ toggle-modified-files: ctrl+m
+ toggle-unmodified-files: ctrl+u
+ toggle-filetree-attributes: ctrl+b
+ page-up: pgup,u
+ page-down: pgdn,d
+
+diff:
+ # You can change the default files shown in the filetree (right pane). All diff types are shown by default.
+ hide:
+ - added
+ - removed
+ - modified
+ - unmodified
+
+filetree:
+ # The default directory-collapse state
+ collapse-dir: false
+
+ # The percentage of screen width the filetree should take on the screen (must be >0 and <1)
+ pane-width: 0.5
+
+ # Show the file attributes next to the filetree
+ show-attributes: true
+
+layer:
+ # Enable showing all changes from this layer and every previous layer
+ show-aggregated-changes: false
+
+```
+
+dive will search for configs in the following locations:
+- `$XDG_CONFIG_HOME/dive/*.yaml`
+- `$XDG_CONFIG_DIRS/dive/*.yaml`
+- `~/.config/dive/*.yaml`
+- `~/.dive.yaml`
+
+`.yml` can be used instead of `.yaml` if desired.
diff --git a/data/readmes/dolphinscheduler-332.md b/data/readmes/dolphinscheduler-332.md
new file mode 100644
index 0000000..0878a11
--- /dev/null
+++ b/data/readmes/dolphinscheduler-332.md
@@ -0,0 +1,85 @@
+# DolphinScheduler - README (3.3.2)
+
+**Repository**: https://github.com/apache/dolphinscheduler
+**Version**: 3.3.2
+
+---
+
+# Apache Dolphinscheduler
+
+[](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+[](https://sonarcloud.io/dashboard?id=apache-dolphinscheduler)
+[](https://twitter.com/dolphinschedule)
+[](https://s.apache.org/dolphinscheduler-slack)
+[](README_zh_CN.md)
+
+## About
+
+Apache DolphinScheduler is a modern data orchestration platform that empowers agile, low-code development of high-performance workflows.
+It is dedicated to handling complex task dependencies in data pipelines and provides a wide range of built-in job types **out of the box**.
+
+Key features of DolphinScheduler are as follows:
+
+- Easy to deploy: provides four deployment modes, including Standalone, Cluster, Docker, and Kubernetes.
+- Easy to use: workflows can be created and managed via the Web UI, [Python SDK](https://dolphinscheduler.apache.org/python/main/index.html), or Open API.
+- Highly reliable and highly available: a decentralized, multi-master and multi-worker architecture with native support for horizontal scaling.
+- High performance: several times faster than other orchestration platforms, capable of handling tens of millions of tasks per day.
+- Cloud native: supports orchestrating workflows across multiple clouds and data centers, and allows custom task types.
+- Workflow versioning: provides version control for both workflows and individual workflow instances, including tasks.
+- Flexible state control of workflows and tasks: supports pausing, stopping, and recovering them at any time.
+- Multi-tenancy support.
+- Additional features: backfill support (Web UI native), permission control including projects and data sources, etc.
+
+## QuickStart
+
+- For quick experience
+ - Want to [start with standalone](https://dolphinscheduler.apache.org/en-us/docs/3.3.0-alpha/guide/installation/standalone)
+ - Want to [start with Docker](https://dolphinscheduler.apache.org/en-us/docs/3.3.0-alpha/guide/start/docker)
+- For Kubernetes
+ - [Start with Kubernetes](https://dolphinscheduler.apache.org/en-us/docs/3.3.0-alpha/guide/installation/kubernetes)
+- For Terraform
+ - [Start with Terraform](deploy/terraform/README.md)
+
+## User Interface Screenshots
+
+* **Homepage:** Project and workflow overview, including the latest workflow instance and task instance status statistics.
+
+
+* **Workflow Definition:** Create and manage workflows via drag and drop; easy to build and maintain complex workflows, with support for [a wide range of tasks](https://dolphinscheduler.apache.org/en-us/docs/3.3.0-alpha/introduction-to-functions_menu/task_menu) out of the box.
+
+
+* **Workflow Tree View:** The abstracted tree structure provides a clearer understanding of task relationships.
+
+
+* **Data source:** Supports multiple external data sources, providing unified data access capabilities for MySQL, PostgreSQL, Hive, Trino, etc.
+
+
+* **Monitor:** View the status of the master, worker, and database in real time, including server resource usage and load, and perform quick health checks without logging in to the server.
+
+
+## Suggestions & Bug Reports
+
+Follow [this guide](https://github.com/apache/dolphinscheduler/issues/new/choose) to report your suggestions or bugs.
+
+## Contributing
+
+The community welcomes contributions from everyone. Please refer to this page to find out more details: [How to contribute](docs/docs/en/contribute/join/contribute.md).
+Check out good first issues [here](https://github.com/apache/dolphinscheduler/contribute) if you are new to DolphinScheduler.
+
+## Community
+
+Welcome to join the Apache DolphinScheduler community by:
+
+- Join the [DolphinScheduler Slack](https://s.apache.org/dolphinscheduler-slack) to keep in touch with the community
+- Follow the [DolphinScheduler Twitter](https://twitter.com/dolphinschedule) and get the latest news
+- Subscribe to the DolphinScheduler mailing lists: [users@dolphinscheduler.apache.org](mailto:users-subscribe@dolphinscheduler.apache.org) for users and [dev@dolphinscheduler.apache.org](mailto:dev-subscribe@dolphinscheduler.apache.org) for developers
+
+## Landscapes
+
+
+
+---
+
+
+
+
+Apache Doris is an easy-to-use, high-performance, real-time analytical database based on an MPP architecture, known for its speed and ease of use. It returns query results over massive data with sub-second response times, and supports not only high-concurrency point-query scenarios but also high-throughput complex analysis scenarios.
+
+All this makes Apache Doris an ideal tool for scenarios including report analysis, ad-hoc query, unified data warehouse, and data lake query acceleration. On Apache Doris, users can build various applications, such as user behavior analysis, AB test platform, log retrieval analysis, user portrait analysis, and order analysis.
+
+🎉 Check out the 🔗[All releases](https://doris.apache.org/docs/releasenotes/all-release), where you'll find a chronological summary of Apache Doris versions released over the past year.
+
+👀 Explore the 🔗[Official Website](https://doris.apache.org/) to discover Apache Doris's core features, blogs, and user cases in detail.
+
+## 📈 Usage Scenarios
+
+As shown in the figure below, after various stages of data integration and processing, source data is typically stored either in Apache Doris as a real-time data warehouse or in an offline data lake or warehouse (Apache Hive, Apache Iceberg, or Apache Hudi).
+
+
+
+
+
+
+
+
+Apache Doris is widely used in the following scenarios:
+
+- **Real-time Data Analysis**:
+
+ - **Real-time Reporting and Decision-making**: Doris provides real-time updated reports and dashboards for both internal and external enterprise use, supporting real-time decision-making in automated processes.
+
+ - **Ad Hoc Analysis**: Doris offers multidimensional data analysis capabilities, enabling rapid business intelligence analysis and ad hoc queries to help users quickly uncover insights from complex data.
+
+ - **User Profiling and Behavior Analysis**: Doris can analyze user behaviors such as participation, retention, and conversion, while also supporting scenarios like population insights and crowd selection for behavior analysis.
+
+- **Lakehouse Analytics**:
+
+ - **Lakehouse Query Acceleration**: Doris accelerates lakehouse data queries with its efficient query engine.
+
+ - **Federated Analytics**: Doris supports federated queries across multiple data sources, simplifying architecture and eliminating data silos.
+
+ - **Real-time Data Processing**: Doris combines real-time data streams and batch data processing capabilities to meet the needs of high concurrency and low-latency complex business requirements.
+
+- **SQL-based Observability**:
+
+ - **Log and Event Analysis**: Doris enables real-time or batch analysis of logs and events in distributed systems, helping to identify issues and optimize performance.
+
+
+## Overall Architecture
+
+Apache Doris uses the MySQL protocol, is highly compatible with MySQL syntax, and supports standard SQL. Users can access Apache Doris through various client tools, and it seamlessly integrates with BI tools.
+
+### Storage-Compute Integrated Architecture
+
+The storage-compute integrated architecture of Apache Doris is streamlined and easy to maintain. As shown in the figure below, it consists of only two types of processes:
+
+- **Frontend (FE):** Primarily responsible for handling user requests, query parsing and planning, metadata management, and node management tasks.
+
+- **Backend (BE):** Primarily responsible for data storage and query execution. Data is partitioned into shards and stored with multiple replicas across BE nodes.
+
+
+
+
+
+In a production environment, multiple FE nodes can be deployed for disaster recovery. Each FE node maintains a full copy of the metadata. The FE nodes are divided into three roles:
+
+| Role | Function |
+| --------- | ------------------------------------------------------------ |
+| Master | The FE Master node is responsible for metadata read and write operations. When metadata changes occur in the Master, they are synchronized to Follower or Observer nodes via the BDB JE protocol. |
+| Follower | The Follower node is responsible for reading metadata. If the Master node fails, a Follower node can be selected as the new Master. |
+| Observer | The Observer node is responsible for reading metadata and is mainly used to increase query concurrency. It does not participate in cluster leadership elections. |
+
+Both FE and BE processes are horizontally scalable, enabling a single cluster to support hundreds of machines and tens of petabytes of storage capacity. The FE and BE processes use a consistency protocol to ensure high availability of services and high reliability of data. The storage-compute integrated architecture is highly integrated, significantly reducing the operational complexity of distributed systems.
+
+
+## Core Features of Apache Doris
+
+- **High Availability**: In Apache Doris, both metadata and data are stored with multiple replicas, synchronizing data logs via the quorum protocol. Data write is considered successful once a majority of replicas have completed the write, ensuring that the cluster remains available even if a few nodes fail. Apache Doris supports both same-city and cross-region disaster recovery, enabling dual-cluster master-slave modes. When some nodes experience failures, the cluster can automatically isolate the faulty nodes, preventing the overall cluster availability from being affected.
+
+- **High Compatibility**: Apache Doris is highly compatible with the MySQL protocol and supports standard SQL syntax, covering most MySQL and Hive functions. This high compatibility allows users to seamlessly migrate and integrate existing applications and tools. Apache Doris supports the MySQL ecosystem, enabling users to connect Doris using MySQL Client tools for more convenient operations and maintenance. It also supports MySQL protocol compatibility for BI reporting tools and data transmission tools, ensuring efficiency and stability in data analysis and data transmission processes.
+
+- **Real-Time Data Warehouse**: Based on Apache Doris, a real-time data warehouse service can be built. Apache Doris offers second-level data ingestion capabilities, capturing incremental changes from upstream online transactional databases into Doris within seconds. Leveraging vectorized engines, MPP architecture, and Pipeline execution engines, Doris provides sub-second data query capabilities, thereby constructing a high-performance, low-latency real-time data warehouse platform.
+
+- **Unified Lakehouse**: Apache Doris can build a unified lakehouse architecture based on external data sources such as data lakes or relational databases. The Doris unified lakehouse solution enables seamless integration and free data flow between data lakes and data warehouses, helping users directly utilize data warehouse capabilities to solve data analysis problems in data lakes while fully leveraging data lake data management capabilities to enhance data value.
+
+- **Flexible Modeling**: Apache Doris offers various modeling approaches, such as wide table models, pre-aggregation models, star/snowflake schemas, etc. During data import, data can be flattened into wide tables and written into Doris through compute engines like Flink or Spark, or data can be directly imported into Doris, performing data modeling operations through views, materialized views, or real-time multi-table joins.
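
The majority-write rule described in the High Availability bullet above can be modeled in a few lines. This is only an illustrative sketch of quorum accounting under the stated rule (a write succeeds once a majority of replicas acknowledge it); the function name is hypothetical and this is not Doris code:

```python
# Hypothetical sketch of the majority-quorum write rule: a write is
# considered successful once a majority of replicas have completed it.

def write_succeeds(acks: int, replicas: int) -> bool:
    """Return True if enough replicas acknowledged the write (quorum)."""
    return acks >= replicas // 2 + 1

# With 3 replicas, one slow or failed node does not block writes:
assert write_succeeds(acks=2, replicas=3)      # majority reached
assert not write_succeeds(acks=1, replicas=3)  # minority: write fails
assert write_succeeds(acks=3, replicas=5)
```

This is why the cluster remains available when a minority of nodes fail: the quorum can still be formed from the surviving replicas.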
+
+## Technical overview
+
+Doris provides an efficient SQL interface and is fully compatible with the MySQL protocol. Its query engine is based on an MPP (Massively Parallel Processing) architecture, capable of efficiently executing complex analytical queries and achieving low-latency real-time queries. Through columnar storage technology for data encoding and compression, it significantly optimizes query performance and storage compression ratio.
+
+### Interface
+
+Apache Doris adopts the MySQL protocol, supports standard SQL, and is highly compatible with MySQL syntax. Users can access Apache Doris through various client tools and seamlessly integrate it with BI tools, including but not limited to Smartbi, DataEase, FineBI, Tableau, Power BI, and Apache Superset. Apache Doris can work as the data source for any BI tools that support the MySQL protocol.
+
+### Storage engine
+
+Apache Doris has a columnar storage engine, which encodes, compresses, and reads data by column. This enables a very high data compression ratio and largely reduces unnecessary data scanning, thus making more efficient use of IO and CPU resources.
+
+Apache Doris supports various index structures to minimize data scans:
+
+- **Sorted Compound Key Index**: Users can specify three columns at most to form a compound sort key. This can effectively prune data to better support highly concurrent reporting scenarios.
+
+- **Min/Max Index**: This enables effective data filtering in equivalence and range queries of numeric types.
+
+- **BloomFilter Index**: This is very effective in equivalence filtering and pruning of high-cardinality columns.
+
+- **Inverted Index**: This enables fast searching for any field.
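
The Min/Max index above can be pictured as a zone map: each data block records the minimum and maximum of a column, so blocks whose range cannot match a predicate are skipped without being read. A minimal sketch of the idea (an illustrative model with hypothetical names, not Doris internals):

```python
# Illustrative model of min/max (zone map) pruning: keep only the
# blocks whose [min, max] range overlaps the query range [lo, hi].

def prune_blocks(blocks, lo, hi):
    """Return the blocks that may contain values in [lo, hi]."""
    return [b for b in blocks if b["max"] >= lo and b["min"] <= hi]

blocks = [
    {"id": 0, "min": 1,   "max": 99},
    {"id": 1, "min": 100, "max": 199},
    {"id": 2, "min": 200, "max": 299},
]

# Range query: WHERE col BETWEEN 120 AND 180 -> only block 1 is scanned.
survivors = prune_blocks(blocks, 120, 180)
assert [b["id"] for b in survivors] == [1]
```

Skipping whole blocks this way is what "minimize data scans" means in practice: IO is spent only on blocks that can possibly match.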
+
+Apache Doris supports a variety of data models and has optimized them for different scenarios:
+
+- **Detail Model (Duplicate Key Model):** A detail data model designed to meet the detailed storage requirements of fact tables.
+
+- **Primary Key Model (Unique Key Model):** Ensures unique keys; data with the same key is overwritten, enabling row-level data updates.
+
+- **Aggregate Model (Aggregate Key Model):** Merges value columns with the same key, significantly improving performance through pre-aggregation.
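
The Aggregate Key Model above merges value columns that share the same key. A toy sketch of that merge step, assuming SUM as the declared aggregation function (an illustrative model, not Doris's storage code):

```python
# Illustrative sketch of Aggregate Key Model semantics: rows with the
# same key columns are merged; value columns are combined with their
# aggregation function (SUM in this example).

from collections import defaultdict

def aggregate_merge(rows):
    """Merge (key, value) rows by key, summing values on key collision."""
    merged = defaultdict(int)
    for key, value in rows:
        merged[key] += value
    return dict(merged)

rows = [(("2024-01-01", "shanghai"), 10),
        (("2024-01-01", "shanghai"), 5),
        (("2024-01-01", "beijing"), 7)]

assert aggregate_merge(rows) == {("2024-01-01", "shanghai"): 15,
                                 ("2024-01-01", "beijing"): 7}
```

Because this merge happens at write time, aggregation queries read far fewer rows, which is the source of the pre-aggregation speedup.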
+
+Apache Doris also supports strongly consistent single-table materialized views and asynchronously refreshed multi-table materialized views. Single-table materialized views are automatically refreshed and maintained by the system, requiring no manual intervention from users. Multi-table materialized views can be refreshed periodically using in-cluster scheduling or external scheduling tools, reducing the complexity of data modeling.
+
+### 🔍 Query Engine
+
+Apache Doris has an MPP-based query engine for parallel execution between and within nodes. It supports distributed shuffle join for large tables to better handle complicated queries.
+
+
+
+
+
+
+
+The query engine of Apache Doris is fully vectorized, with all memory structures laid out in a columnar format. This can largely reduce virtual function calls, increase cache hit rates, and make efficient use of SIMD instructions. Apache Doris delivers a 5~10 times higher performance in wide table aggregation scenarios than non-vectorized engines.
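
The row-wise versus columnar contrast behind vectorized execution can be sketched in plain Python. This is only a model of the layout idea (column batches processed in one pass map well to SIMD and CPU caches), not Doris's actual execution engine:

```python
# Toy contrast between row-wise and columnar (vectorized) aggregation
# of SUM(price * qty) over the same data.

rows = [{"price": 10, "qty": 2}, {"price": 20, "qty": 1}, {"price": 5, "qty": 4}]

# Row-wise: one "process this row" step per tuple.
row_total = sum(r["price"] * r["qty"] for r in rows)

# Columnar: transpose to column batches, then one batched multiply-add
# per column pair -- the pattern a vectorized engine runs with SIMD.
price = [r["price"] for r in rows]
qty = [r["qty"] for r in rows]
col_total = sum(p * q for p, q in zip(price, qty))

assert row_total == col_total == 60
```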
+
+
+
+
+
+
+
+Apache Doris uses adaptive query execution technology to dynamically adjust the execution plan based on runtime statistics. For example, it can generate a runtime filter and push it to the probe side. Specifically, it pushes the filters to the lowest-level scan node on the probe side, which largely reduces the data amount to be processed and increases join performance. The runtime filter of Apache Doris supports In/Min/Max/Bloom Filter.
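
The runtime-filter pushdown described above can be sketched as follows: the join build side produces the set of join keys, and that set is pushed to the probe side's lowest-level scan so non-matching rows are dropped before the join. An illustrative model of the In-filter variant, with hypothetical names (not Doris internals):

```python
# Illustrative model of a runtime IN-filter: build-side keys are
# collected and applied at the probe side's scan node.

def build_in_filter(build_rows, key):
    """Collect the join-key values produced by the (small) build side."""
    return {r[key] for r in build_rows}

def probe_scan(probe_rows, key, runtime_filter):
    """Scan the probe side, dropping rows that cannot join."""
    return [r for r in probe_rows if r[key] in runtime_filter]

dim = [{"id": 1}, {"id": 3}]                       # small build side
fact = [{"id": i, "v": i * 10} for i in range(5)]  # large probe side

f = build_in_filter(dim, "id")
assert probe_scan(fact, "id", f) == [{"id": 1, "v": 10}, {"id": 3, "v": 30}]
```

Only 2 of the 5 probe rows survive the scan, so the join itself processes far less data.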
+
+Apache Doris uses a Pipeline execution engine that breaks down queries into multiple sub-tasks for parallel execution, fully leveraging multi-core CPU capabilities. It simultaneously addresses the thread explosion problem by limiting the number of query threads. The Pipeline execution engine reduces data copying and sharing, optimizes sorting and aggregation operations, thereby significantly improving query efficiency and throughput.
+
+In terms of the optimizer, Apache Doris employs a combined optimization strategy of CBO (Cost-Based Optimizer), RBO (Rule-Based Optimizer), and HBO (History-Based Optimizer). RBO supports constant folding, subquery rewriting, predicate pushdown, and more. CBO supports join reordering and other optimizations. HBO recommends the optimal execution plan based on historical query information. These multiple optimization measures ensure that Doris can enumerate high-performance query plans across various types of queries.
+
+
+## 🎆 Why choose Apache Doris?
+
+- 🎯 **Easy to Use:** Two processes, no other dependencies; online cluster scaling, automatic replica recovery; compatible with MySQL protocol, and using standard SQL.
+
+- 🚀 **High Performance:** Extremely fast performance for low-latency and high-throughput queries with columnar storage engine, modern MPP architecture, vectorized query engine, pre-aggregated materialized view and data index.
+
+- 🖥️ **Single Unified:** A single system can support real-time data serving, interactive data analysis and offline data processing scenarios.
+
+- ⚛️ **Federated Querying:** Supports federated querying of data lakes such as Hive, Iceberg, Hudi, and databases such as MySQL and Elasticsearch.
+
+- ⏩ **Various Data Import Methods:** Supports batch import from HDFS/S3 and stream import from MySQL Binlog/Kafka; supports micro-batch writing through HTTP interface and real-time writing using Insert in JDBC.
+
+- 🚙 **Rich Ecology:** Spark uses Spark-Doris-Connector to read and write Doris; Flink-Doris-Connector enables Flink CDC to implement exactly-once data writing to Doris; DBT Doris Adapter is provided to transform data in Doris with DBT.
+
+## 🙌 Contributors
+
+**Apache Doris graduated from the Apache Incubator and became a Top-Level Project in June 2022**.
+
+We deeply appreciate 🔗[community contributors](https://github.com/apache/doris/graphs/contributors) for their contribution to Apache Doris.
+
+[](https://github.com/apache/doris/graphs/contributors)
+
+## 👨👩👧👦 Users
+
+Apache Doris has a wide user base in China and around the world: **Apache Doris runs in the production environments of thousands of companies worldwide.** More than 80% of the top 50 Chinese Internet companies by market capitalization or valuation have long used Apache Doris, including Baidu, Meituan, Xiaomi, Jingdong, Bytedance, Tencent, NetEase, Kwai, Sina, 360, Mihoyo, and Ke Holdings. It is also widely used in traditional industries such as finance, energy, manufacturing, and telecommunications.
+
+The users of Apache Doris: 🔗[Users](https://doris.apache.org/users)
+
+Add your company logo at Apache Doris Website: 🔗[Add Your Company](https://github.com/apache/doris/discussions/27683)
+
+## 👣 Get Started
+
+### 📚 Docs
+
+All Documentation 🔗[Docs](https://doris.apache.org/docs/gettingStarted/what-is-apache-doris)
+
+### ⬇️ Download
+
+All release and binary version 🔗[Download](https://doris.apache.org/download)
+
+### 🗄️ Compile
+
+See how to compile 🔗[Compilation](https://doris.apache.org/community/source-install/compilation-with-docker)
+
+### 📮 Install
+
+See how to install and deploy 🔗[Installation and deployment](https://doris.apache.org/docs/install/preparation/env-checking)
+
+## 🧩 Components
+
+### 📝 Doris Connector
+
+Doris provides connectors that allow Spark and Flink to read data stored in Doris and to write data into Doris.
+
+🔗[apache/doris-flink-connector](https://github.com/apache/doris-flink-connector)
+
+🔗[apache/doris-spark-connector](https://github.com/apache/doris-spark-connector)
+
+
+## 🌈 Community and Support
+
+### 📤 Subscribe Mailing Lists
+
+The mailing list is the most recognized form of communication in the Apache community. See how to 🔗[Subscribe Mailing Lists](https://doris.apache.org/community/subscribe-mail-list)
+
+### 🙋 Report Issues or Submit Pull Request
+
+If you run into any problems, feel free to file a 🔗[GitHub Issue](https://github.com/apache/doris/issues) or post in 🔗[GitHub Discussion](https://github.com/apache/doris/discussions), and help fix it by submitting a 🔗[Pull Request](https://github.com/apache/doris/pulls)
+
+### 🍻 How to Contribute
+
+We welcome your suggestions, comments (including criticisms), and contributions. See 🔗[How to Contribute](https://doris.apache.org/community/how-to-contribute/) and 🔗[Code Submission Guide](https://doris.apache.org/community/how-to-contribute/pull-request/)
+
+### ⌨️ Doris Improvement Proposals (DSIP)
+
+🔗[Doris Improvement Proposal (DSIP)](https://cwiki.apache.org/confluence/display/DORIS/Doris+Improvement+Proposals) can be thought of as **A Collection of Design Documents for all Major Feature Updates or Improvements**.
+
+### 🔑 Backend C++ Coding Specification
+🔗 [Backend C++ Coding Specification](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=240883637) should be strictly followed, which will help us achieve better code quality.
+
+## 💬 Contact Us
+
+Contact us through the following mailing list.
+
+| Name | Scope | Subscribe | Unsubscribe | Archives |
+|:------------------------------------------------------------------------------|:--------------------------------|:----------------------------------------------------------------|:--------------------------------------------------------------------|:-----------------------------------------------------------------------------|
+| [dev@doris.apache.org](mailto:dev@doris.apache.org) | Development-related discussions | [Subscribe](mailto:dev-subscribe@doris.apache.org) | [Unsubscribe](mailto:dev-unsubscribe@doris.apache.org) | [Archives](http://mail-archives.apache.org/mod_mbox/doris-dev/) |
+
+## 🧰 Links
+
+* Apache Doris Official Website - [Site](https://doris.apache.org)
+* Developer Mailing list - [dev@doris.apache.org](mailto:dev@doris.apache.org). Mail to [dev-subscribe@doris.apache.org](mailto:dev-subscribe@doris.apache.org) and reply to the confirmation email to subscribe.
+* Slack channel - [Join the Slack](https://join.slack.com/t/apachedoriscommunity/shared_invite/zt-35mzao67o-BrpU70FNKPyB6UlgpXf8_w)
+* Twitter - [Follow @doris_apache](https://twitter.com/doris_apache)
+
+
+## 📜 License
+
+[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
+> **Note**
+> Some licenses of the third-party dependencies are not compatible with the Apache 2.0 License, so you need to disable
+> some Doris features to comply with the Apache 2.0 License. For details, refer to `thirdparty/LICENSE.txt`.
+
+
+
diff --git a/data/readmes/dragonfly-v235-beta0.md b/data/readmes/dragonfly-v235-beta0.md
new file mode 100644
index 0000000..4d8112b
--- /dev/null
+++ b/data/readmes/dragonfly-v235-beta0.md
@@ -0,0 +1,87 @@
+# Dragonfly - README (v2.3.5-beta.0)
+
+**Repository**: https://github.com/dragonflyoss/dragonfly
+**Version**: v2.3.5-beta.0
+
+---
+
+# Dragonfly
+
+![alt][logo-linear]
+
+[](https://github.com/dragonflyoss/dragonfly/releases)
+[](https://artifacthub.io/packages/search?repo=dragonfly)
+[](https://scorecard.dev/viewer/?uri=github.com/dragonflyoss/dragonfly)
+[](https://github.com/dragonflyoss/dragonfly/actions/workflows/ci.yml)
+[](https://goreportcard.com/report/github.com/dragonflyoss/dragonfly)
+[](https://github.com/dragonflyoss/dragonfly/discussions)
+[](https://www.bestpractices.dev/projects/10432)
+[](https://twitter.com/dragonfly_oss)
+[](https://github.com/dragonflyoss/dragonfly/blob/main/LICENSE)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fdragonflyoss%2Fdragonfly?ref=badge_shield&issueType=license)
+[](https://insights.linuxfoundation.org/project/d7y)
+[](https://www.cncf.io/projects/dragonfly/)
+
+## Introduction
+
+Dragonfly delivers efficient, stable, and secure data distribution and acceleration powered by P2P technology,
+with an optional content‑addressable filesystem that accelerates OCI container launch.
+It aims to provide a best‑practice, standards‑based solution for cloud‑native architectures,
+improving large‑scale delivery of files, container images, OCI artifacts, AI/ML models, caches,
+logs, dependencies, etc.
+
+## Documentation
+
+You can find the full documentation on [d7y.io][d7y.io].
+
+## Community
+
+Join the conversation and help the community grow. Here are the ways to get involved:
+
+- **Slack Channel**: [#dragonfly](https://cloud-native.slack.com/messages/dragonfly/) on [CNCF Slack](https://slack.cncf.io/)
+- **Github Discussions**: [Dragonfly Discussion Forum][discussion]
+- **Developer Group**:
+- **Mailing Lists**:
+ - **Developers**:
+ - **Maintainers**:
+- **Twitter**: [@dragonfly_oss](https://twitter.com/dragonfly_oss)
+- **DingTalk Group**: `22880028764`
+
+You can also find community information in the [community repository](https://github.com/dragonflyoss/community).
+
+## Roadmap
+
+You can find the [roadmap](https://github.com/dragonflyoss/community/blob/master/ROADMAP.md)
+in the [community repository](https://github.com/dragonflyoss/community).
+
+## Security
+
+### Security Audit
+
+A third-party security audit of Dragonfly was performed by Trail of Bits, with the full report available at
+[Dragonfly Comprehensive Report](docs/security/dragonfly-comprehensive-report-2023.pdf).
+
+### Reporting security vulnerabilities
+
+If you discover a vulnerability, please report it per our Security Policy at [Security Policy](https://github.com/dragonflyoss/community/blob/master/SECURITY.md),
+and security insights are detailed in [SECURITY-INSIGHTS.yml](SECURITY-INSIGHTS.yml).
+
+## Software bill of materials
+
+We publish SBOMs with all of our releases. You can find them in Github release assets.
+
+[arch]: docs/images/arch.png
+[logo-linear]: docs/images/logo/dragonfly-linear.svg
+[discussion]: https://github.com/dragonflyoss/dragonfly/discussions
+[contributing]: https://github.com/dragonflyoss/community/blob/master/CONTRIBUTING.md
+[codeconduct]: https://github.com/dragonflyoss/community/blob/master/CODE_OF_CONDUCT.md
+[d7y.io]: https://d7y.io/
+
+## Contributing
+
+Check out our [CONTRIBUTING][contributing] guide and develop the project together.
+
+## Code of Conduct
+
+Please refer to our [Code of Conduct][codeconduct] which applies to all Dragonfly community members.
diff --git a/data/readmes/drill-drill-1220.md b/data/readmes/drill-drill-1220.md
new file mode 100644
index 0000000..d670256
--- /dev/null
+++ b/data/readmes/drill-drill-1220.md
@@ -0,0 +1,42 @@
+# Drill - README (drill-1.22.0)
+
+**Repository**: https://github.com/apache/drill
+**Version**: drill-1.22.0
+
+---
+
+# Apache Drill
+
+[](https://github.com/apache/drill/actions)
+[](https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.apache.drill%22%20AND%20a%3A%22distribution%22)
+[](http://www.apache.org/licenses/LICENSE-2.0)
+[](http://stackoverflow.com/questions/tagged/apache-drill)
+[](http://apache-drill.slack.com "Join our Slack community")
+
+
+Apache Drill is a distributed MPP query layer that supports SQL and alternative query languages against NoSQL and Hadoop data storage systems. It was inspired in part by [Google's Dremel](http://research.google.com/pubs/pub36632.html).
+
+## Developers
+
+Please read [Environment.md](docs/dev/Environment.md) for setting up and running Apache Drill. For complete developer documentation see [DevDocs.md](docs/dev/DevDocs.md).
+
+## More Information
+Please see the [Apache Drill Website](http://drill.apache.org/) or the [Apache Drill Documentation](http://drill.apache.org/docs/) for more information including:
+
+ * Remote Execution Installation Instructions
+ * [Running Drill on Docker instructions](https://drill.apache.org/docs/running-drill-on-docker/)
+ * Information about how to submit logical and distributed physical plans
+ * More example queries and sample data
+ * Find out ways to be involved or discuss Drill
+
+
+## Join the community!
+Apache Drill is an Apache Foundation project and is seeking all types of users and contributions.
+Please say hello on the [Apache Drill mailing list](http://drill.apache.org/mailinglists/). You can also join our [Google Hangouts](http://drill.apache.org/community-resources/)
+or [join](https://bit.ly/2VM0XS8) our [Slack Channel](https://join.slack.com/t/apache-drill/shared_invite/enQtNTQ4MjM1MDA3MzQ2LTJlYmUxMTRkMmUwYmQ2NTllYmFmMjU4MDk0NjYwZjBmYjg0MDZmOTE2ZDg0ZjBlYmI3Yjc4Y2I2NTQyNGVlZTc) if you need help with using or developing Apache Drill (more information can be found on [Apache Drill website](http://drill.apache.org/)).
+
+## Export Control
+This distribution includes cryptographic software. The country in which you currently reside may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted. See for more information.
+The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which includes information security software using or performing cryptographic functions with asymmetric algorithms. The form and manner of this Apache Software Foundation distribution makes it eligible for export under the License Exception ENC Technology Software Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.
+The following provides more details on the included cryptographic software:
+ Java SE Security packages are used to provide support for authentication, authorization and secure sockets communication. The Jetty Web Server is used to provide communication via HTTPS. The Cyrus SASL libraries, Kerberos Libraries and OpenSSL Libraries are used to provide SASL based authentication and SSL communication.
diff --git a/data/readmes/druid-druid-3500.md b/data/readmes/druid-druid-3500.md
new file mode 100644
index 0000000..42401fa
--- /dev/null
+++ b/data/readmes/druid-druid-3500.md
@@ -0,0 +1,128 @@
+# Druid - README (druid-35.0.0)
+
+**Repository**: https://github.com/apache/druid
+**Version**: druid-35.0.0
+
+---
+
+
+
+[](https://codecov.io/gh/apache/druid)
+[](https://hub.docker.com/r/apache/druid)
+[](https://github.com/asdf2014/druid-helm)
+
+
+
+| Workflow | Status |
+| :----------------------------------- | :----------------------------------------------------------- |
+| ⚙️ CodeQL Config | [](https://github.com/apache/druid/actions/workflows/codeql-config.yml) |
+| 🔍 CodeQL | [](https://github.com/apache/druid/actions/workflows/codeql.yml) |
+| 🕒 Cron Job ITS | [](https://github.com/apache/druid/actions/workflows/cron-job-its.yml) |
+| 🏷️ Labeler | [](https://github.com/apache/druid/actions/workflows/labeler.yml) |
+| ♻️ Reusable Revised ITS | [](https://github.com/apache/druid/actions/workflows/reusable-revised-its.yml) |
+| ♻️ Reusable Standard ITS | [](https://github.com/apache/druid/actions/workflows/reusable-standard-its.yml) |
+| 🔄 Revised ITS | [](https://github.com/apache/druid/actions/workflows/revised-its.yml) |
+| 🔧 Standard ITS | [](https://github.com/apache/druid/actions/workflows/standard-its.yml) |
+| 🛠️ Static Checks | [](https://github.com/apache/druid/actions/workflows/static-checks.yml) |
+| 🧪 Unit and Integration Tests Unified | [](https://github.com/apache/druid/actions/workflows/unit-and-integration-tests-unified.yml) |
+
+---
+
+[](https://druid.apache.org/)
+[](https://twitter.com/druidio)
+[](https://druid.apache.org/downloads.html)
+[](#getting-started)
+[](https://druid.apache.org/docs/latest/design/)
+[](#community)
+[](#building-from-source)
+[](#contributing)
+[](#license)
+
+---
+
+## Apache Druid
+
+Druid is a high performance real-time analytics database. Druid's main value add is to reduce time to insight and action.
+
+Druid is designed for workflows where fast queries and ingest really matter. Druid excels at powering UIs, running operational (ad-hoc) queries, or handling high concurrency. Consider Druid as an open source alternative to data warehouses for a variety of use cases. The [design documentation](https://druid.apache.org/docs/latest/design/architecture.html) explains the key concepts.
+
+### Getting started
+
+You can get started with Druid with our [local](https://druid.apache.org/docs/latest/tutorials/quickstart.html) or [Docker](http://druid.apache.org/docs/latest/tutorials/docker.html) quickstart.
+
+Druid provides a rich set of APIs (via HTTP and [JDBC](https://druid.apache.org/docs/latest/querying/sql.html#jdbc)) for loading, managing, and querying your data.
+You can also interact with Druid via the built-in [web console](https://druid.apache.org/docs/latest/operations/web-console.html) (shown below).
+
+#### Load data
+
+[](https://druid.apache.org/docs/latest/ingestion/index.html)
+
+Load [streaming](https://druid.apache.org/docs/latest/ingestion/index.html#streaming) and [batch](https://druid.apache.org/docs/latest/ingestion/index.html#batch) data using a point-and-click wizard that guides you through ingestion setup. Monitor one-off tasks and ingestion supervisors.
+
+#### Manage the cluster
+
+[](https://druid.apache.org/docs/latest/ingestion/data-management.html)
+
+Manage your cluster with ease. Get a view of your [datasources](https://druid.apache.org/docs/latest/design/architecture.html), [segments](https://druid.apache.org/docs/latest/design/segments.html), [ingestion tasks](https://druid.apache.org/docs/latest/ingestion/tasks.html), and [services](https://druid.apache.org/docs/latest/design/processes.html) from one convenient location. All powered by [SQL systems tables](https://druid.apache.org/docs/latest/querying/sql.html#metadata-tables), allowing you to see the underlying query for each view.
+
+#### Issue queries
+
+[](https://druid.apache.org/docs/latest/querying/sql.html)
+
+Use the built-in query workbench to prototype [DruidSQL](https://druid.apache.org/docs/latest/querying/sql.html) and [native](https://druid.apache.org/docs/latest/querying/querying.html) queries or connect one of the [many tools](https://druid.apache.org/libraries.html) that help you make the most out of Druid.
+
+### Documentation
+
+See the [latest documentation](https://druid.apache.org/docs/latest/) for the current official release. If you need information on a previous release, you can browse the [documentation for previous releases](https://druid.apache.org/docs/).
+
+Make documentation and tutorials updates in [`/docs`](https://github.com/apache/druid/tree/master/docs) using [Markdown](https://www.markdownguide.org/) or extended Markdown [(MDX)](https://mdxjs.com/). Then, open a pull request.
+
+To build the site locally, you need Node 18 or higher; install Docusaurus 3 by running `npm|yarn install` in the `website` directory. Then run `npm|yarn start` to launch a local build of the docs.
+
+If you're looking to update non-doc pages like Use Cases, those files are in the [`druid-website-src`](https://github.com/apache/druid-website-src/tree/master) repo.
+
+For more information, see the [README in the `./website` directory](./website/README.md).
+
+### Community
+
+Visit the official project [community](https://druid.apache.org/community/) page to read about getting involved in contributing to Apache Druid, and how we help one another use and operate Druid.
+
+* Druid users can find help in the [`druid-user`](https://groups.google.com/forum/#!forum/druid-user) mailing list on Google Groups, and have more technical conversations in `#troubleshooting` on Slack.
+* Druid development discussions take place in the [`druid-dev`](https://lists.apache.org/list.html?dev@druid.apache.org) mailing list ([dev@druid.apache.org](https://lists.apache.org/list.html?dev@druid.apache.org)). Subscribe by emailing [dev-subscribe@druid.apache.org](mailto:dev-subscribe@druid.apache.org). For live conversations, join the `#dev` channel on Slack.
+
+Check out the official [community](https://druid.apache.org/community/) page for details of how to join the community Slack channels.
+
+Find articles written by community members and a calendar of upcoming events on the [project site](https://druid.apache.org/) - contribute your own events and articles by submitting a PR in the [`apache/druid-website-src`](https://github.com/apache/druid-website-src/tree/master/_data) repository.
+
+### Building from source
+
+Please note that JDK 11 or JDK 17 is required to build Druid.
+
+See the latest [build guide](https://druid.apache.org/docs/latest/development/build.html) for instructions on building Apache Druid from source.
+
+### Contributing
+
+Please follow the [community guidelines](https://druid.apache.org/community/) for contributing.
+
+For instructions on setting up IntelliJ, see [dev/intellij-setup.md](dev/intellij-setup.md).
+
+### License
+
+[Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0)
diff --git a/data/readmes/dubbo-admin-060.md b/data/readmes/dubbo-admin-060.md
new file mode 100644
index 0000000..70723b1
--- /dev/null
+++ b/data/readmes/dubbo-admin-060.md
@@ -0,0 +1,195 @@
+# Dubbo-Admin - README (0.6.0)
+
+**Repository**: https://github.com/apache/dubbo-admin
+**Version**: 0.6.0
+
+---
+
+# Dubbo Admin
+
+
+[](https://codecov.io/gh/apache/dubbo-admin/branches/develop)
+
+[](http://isitmaintained.com/project/apache/dubbo-admin "Average time to resolve an issue")
+[](http://isitmaintained.com/project/apache/dubbo-admin "Percentage of issues still open")
+
+[中文说明](README_ZH.md)
+
+Dubbo Admin is the console designed for better visualization of Dubbo services. It provides support for Dubbo3 and is compatible with 2.7.x, 2.6.x, and 2.5.x.
+
+
+
+There are four ways to deploy Dubbo Admin to a production environment.
+
+1. [Linux with Admin](#11-linux-with-admin)
+2. [Docker with Admin](#12-docker-with-admin)
+3. [Kubernetes with Admin](#13-kubernetes-with-admin)
+4. [Helm with Admin](#14-helm-with-admin)
+
+Choose any of these methods based on your environment. Helm is the recommended installation method because it installs with a single command and automatically manages all of the production environment dependencies that Admin requires.
+
+## 1.1 Linux with Admin
+
+1. Download the code: `git clone https://github.com/apache/dubbo-admin.git`
+2. Specify the registry address in `dubbo-admin-server/src/main/resources/application.properties`
+3. Build
+ - `mvn clean package -Dmaven.test.skip=true`
+4. Start
+ * `mvn --projects dubbo-admin-server spring-boot:run`
+ or
+ * `cd dubbo-admin-distribution/target; java -jar dubbo-admin-${project.version}.jar`
+5. Visit `http://localhost:38080`
+
+## 1.2 Docker with Admin
+The Admin image is hosted at: https://hub.docker.com/repository/docker/apache/dubbo-admin
+
+ 1. The `172.17.0.2` registry address below is the address of the ZooKeeper registry running in Docker. Modify the default parameters in the `application.properties` file, such as the registry address.
+ 2. Get the ZooKeeper registry address with `docker inspect`.
+ 3. Change the `172.17.0.2` registry address to the address of the ZooKeeper registry currently running in your Docker environment.
+```
+ admin.registry.address: zookeeper://172.17.0.2:2181
+ admin.config-center: zookeeper://172.17.0.2:2181
+ admin.metadata-report.address: zookeeper://172.17.0.2:2181
+```
+Start the Admin container:
+```sh
+$ docker run -p 38080:38080 --name dubbo-admin -d apache/dubbo-admin
+```
+
+Visit `http://localhost:38080`
+
+## 1.3 Kubernetes with Admin
+
+**1. Download Kubernetes manifests**
+```sh
+$ git clone https://github.com/apache/dubbo-admin.git
+```
+
+Switch to the `deploy/kubernetes` directory to see the Admin Kubernetes resource files
+```sh
+$ cd /dubbo-admin/deploy/kubernetes
+```
+
+**2. Install Dubbo Admin**
+
+To override the default parameters in [application.properties](./dubbo-admin-server/src/main/resources/application.properties), modify `configmap.yaml`; only the parameters to be overridden need to be defined.
+
+Run the following command:
+
+```sh
+$ kubectl apply -f ./
+```
+
+**3. Visit Admin**
+```sh
+$ kubectl port-forward service/dubbo-admin 38080:38080
+```
+
+Visit `http://localhost:38080`
+
+
+## 1.4 Helm with Admin
+There are two ways to run Admin with Helm. They have the same effect, so you can choose either of the following.
+
+### 1.4.1 Run Admin based on Chart source file
+**1. Download chart source file**
+
+Clone the Dubbo Admin repository:
+
+```sh
+$ git clone https://github.com/apache/dubbo-admin.git
+```
+
+From the repository root, switch to the directory `deploy/charts/dubbo-admin`
+
+```sh
+$ cd dubbo-admin/deploy/charts/dubbo-admin
+```
+**2. Install helm chart**
+
+Set Admin's startup parameters so that it can connect to the registry or configuration center of your real production environment. You can specify a custom configuration file through the Helm `-f` flag:
+```yaml
+properties:
+ admin.registry.address: zookeeper://zookeeper:2181
+ admin.config-center: zookeeper://zookeeper:2181
+ admin.metadata-report.address: zookeeper://zookeeper:2181
+```
+
+Here `zookeeper://zookeeper:2181` is the address of the ZooKeeper registry inside the Kubernetes cluster.
+```sh
+$ helm install dubbo-admin -f values.yaml .
+```
+
+The content specified in the `properties` field overrides the default configuration in Admin's [application.properties](./dubbo-admin-server/src/main/resources/application.properties). Besides `properties`, you can customize other properties defined by the Admin Helm chart; see the [complete parameters](./deploy/helm/dubbo-admin/values.yaml).
+
+**3. Visit Admin**
+
+Visit http://127.0.0.1:38080
+
+### 1.4.2 Run Admin based on Chart repository
+
+**1. Add helm chart (Temporarily unavailable)**
+
+```sh
+$ helm repo add dubbo-charts https://dubbo.apache.org/dubbo-charts
+$ helm repo update
+```
+
+**2. Install helm chart**
+```sh
+$ helm install dubbo-admin dubbo-charts/dubbo-admin
+```
+
+See [1.4.1 Run Admin based on Chart source file](#141-run-admin-based-on-chart-source-file) to learn how to customize installation parameters.
+
+```sh
+$ helm install dubbo-admin -f properties.yaml dubbo-charts/dubbo-admin
+```
+
+**3. Visit Dubbo Admin**
+
+Dubbo Admin should now be installed successfully. Run the following command to obtain the access address:
+
+```sh
+$ kubectl --namespace default port-forward service/dubbo-admin 38080:38080
+```
+
+Visit http://127.0.0.1:38080
+
+# 2. Want To Contribute
+
+Below is a description of the project structure for developers who want to contribute to making Dubbo Admin better.
+
+## 2.1 Admin UI
+
+- [Vue.js](https://vuejs.org) and [Vue Cli](https://cli.vuejs.org/)
+- [dubbo-admin-ui/README.md](dubbo-admin-ui/README.md) for more detail
+- Set npm **proxy mirror**:
+
+ If you have network issue, you can set npm proxy mirror to speedup npm install:
+
+ add `registry=https://registry.npmmirror.com` to ~/.npmrc
+
+## 2.2 Admin Server
+
+* Standard spring boot project
+* [configurations in application.properties](https://github.com/apache/dubbo-admin/wiki/Dubbo-Admin-configuration)
+
+
+## 2.3 Setting up a local developing environment
+* Run admin server project
+
+ The backend is a standard Spring Boot project; you can run it in any Java IDE.
+
+* Run admin ui project
+
+ In the `dubbo-admin-ui` directory, run `npm run dev`.
+
+* Visit the web page
+
+ Visit `http://localhost:38082`; the frontend supports hot reloading.
+
+# 3 License
+
+Apache Dubbo Admin is licensed under the Apache License, Version 2.0.
+See [LICENSE](https://github.com/apache/dubbo-admin/blob/develop/LICENSE) for full license text.
diff --git a/data/readmes/easegress-v2101.md b/data/readmes/easegress-v2101.md
new file mode 100644
index 0000000..014786d
--- /dev/null
+++ b/data/readmes/easegress-v2101.md
@@ -0,0 +1,237 @@
+# Easegress - README (v2.10.1)
+
+**Repository**: https://github.com/easegress-io/easegress
+**Version**: v2.10.1
+
+---
+
+# Easegress
+
+[](https://goreportcard.com/report/github.com/megaease/easegress)
+[](https://github.com/megaease/easegress/actions/workflows/test.yml)
+[](https://codecov.io/gh/megaease/easegress)
+[](https://hub.docker.com/r/megaease/easegress)
+[](https://opensource.org/licenses/Apache-2.0)
+[](https://github.com/megaease/easegress/blob/main/go.mod)
+[](https://cloud-native.slack.com/messages/easegress)
+[](https://www.bestpractices.dev/projects/8265)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Feasegress-io%2Feasegress?ref=badge_shield)
+
+
+
+
+
+- [What is Easegress](#what-is-easegress)
+- [Features](#features)
+- [Getting Started](#getting-started)
+ - [Launch Easegress](#launch-easegress)
+ - [Reverse Proxy](#reverse-proxy)
+- [Use Cases](#use-cases)
+- [Documentation](#documentation)
+- [Easegress Portal](#easegress-portal)
+ - [Screenshots](#screenshots)
+- [Community](#community)
+- [Contributing](#contributing)
+- [License](#license)
+
+## What is Easegress
+
+`Easegress` is a Cloud Native traffic orchestration system designed for:
+
+- **High Availability:** Built-in Raft consensus & leader election provides 99.99% availability.
+- **Traffic Orchestration:** Simple orchestration of various filters for each traffic pipeline.
+- **High Performance:** Lightweight and essential features speed up the performance.
+- **Observability:** Periodically exposes meaningful statistics in a readable way.
+- **Extensibility:** It's easy to develop your own filter or controller with a high-level programming language.
+- **Integration:** The simple interfaces make it easy to integrate with other systems, such as Kubernetes Ingress, [EaseMesh](https://github.com/megaease/easemesh) sidecar, Workflow, etc.
+
+The architecture of Easegress:
+
+
+
+And you can check [Easegress DeepWiki Page](https://deepwiki.com/easegress-io/easegress) to dive into more details.
+
+## Features
+
+- **Service Management**
+ - **Multiple protocols:**
+ - HTTP/1.1
+ - HTTP/2
+ - HTTP/3(QUIC)
+ - MQTT
+ - **Rich Routing Rules:** exact path, path prefix, regular expression of the path, method, headers, clientIPs.
+ - **Resilience&Fault Tolerance**
+ - **CircuitBreaker:** temporarily blocks possible failures.
+ - **RateLimiter:** limits the rate of incoming requests.
+ - **Retry:** repeats failed executions.
+ - **TimeLimiter:** limits the duration of execution.
+ - **Deployment Management**
+ - **Blue-green Strategy:** switches traffic at one time.
+ - **Canary Strategy:** schedules traffic slightly.
+ - **API Management**
+ - **API Aggregation:** aggregates results of multiple APIs.
+ - **API Orchestration:** orchestrates the flow of APIs.
+ - **Security**
+ - **IP Filter:** Limits access to IP addresses.
+ - **Static HTTPS:** static certificate files.
+ - **API Signature:** supports [HMAC](https://en.wikipedia.org/wiki/HMAC) verification.
+ - **JWT Verification:** verifies [JWT Token](https://jwt.io/).
+ - **OAuth2:** validates [OAuth/2](https://datatracker.ietf.org/doc/html/rfc6749) requests.
+ - **Let's Encrypt:** automatically manage certificate files.
+ - **Pipeline-Filter Mechanism**
+ - **Filter Management:** makes it easy to develop new filters.
+ - **Service Mesh**
+ - **Mesh Master:** is the control plane to manage the lifecycle of mesh services.
+ - **Mesh Sidecar:** is the data plane as the endpoint to do traffic interception and routing.
+ - **Mesh Ingress Controller:** is the mesh-specific ingress controller to route external traffic to mesh services.
+ > Notes: This feature is leveraged by [EaseMesh](https://github.com/megaease/easemesh)
+ - **Third-Party Integration**
+ - **FaaS** integrates with the serverless platform Knative.
+ - **Service Discovery** integrates with Eureka, Consul, Etcd, and Zookeeper.
+ - **Ingress Controller** integrates with Kubernetes as an ingress controller.
+- **Extensibility**
+ - **WebAssembly** executes user developed [WebAssembly](https://webassembly.org/) code.
+- **High Performance and Availability**
+ - **Adaption**: adapts request, response in the handling chain.
+ - **Validation**: headers validation, OAuth2, JWT, and HMAC verification.
+ - **Load Balance:** round-robin, random, weighted random, IP hash, header hash, and sticky session support.
+ - **Cache:** for the backend servers.
+ - **Compression:** compresses body for the response.
+ - **Hot-Update:** updates both config and binary of Easegress in place without losing connections.
+- **Operation**
+ - **Easy to Integrate:** command line([egctl](docs/02.Tutorials/2.1.egctl-Usage.md)), Easegress Portal, HTTP clients such as curl, postman, etc.
+ - **Distributed Tracing**
+ - Built-in [OpenTelemetry](https://opentelemetry.io/), which provides a vendor-neutral API.
+ - **Observability**
+ - **Node:** role(primary, secondary), raft leader status, healthy or not, last heartbeat time, and so on
+ - **Traffic:** in multi-dimension: server and backend.
+ - **Throughput:** total and error statistics of request count, TPS/m1, m5, m15, and error percent, etc.
+ - **Latency:** p25, p50, p75, p95, p98, p99, p999.
+ - **Data Size:** request and response size.
+ - **Status Codes:** HTTP status codes.
+ - **TopN:** sorted by aggregated APIs(only in server dimension).
+- **AI Integration**
+ - **Proxy:** proxy requests to LLM providers like OpenAI, DeepSeek, Anthropic, etc.
+ - **Anthropic API Adaption:** adapts requests and responses in Anthropic API to OpenAI format.
+ - **Vector Database:** integrates with vector databases for caching.
+ - **Monitoring:** provides insights into the performance and usage of AI models.
+
+## Getting Started
+
+The basic usage of Easegress is to quickly set up a proxy for the backend servers.
+
+### Launch Easegress
+
+Easegress can be installed from pre-built binaries or from source. For details, see [Install](docs/01.Getting-Started/1.2.Install.md).
+
+Then we can execute the server:
+
+```bash
+$ easegress-server
+2023-09-06T15:12:49.256+08:00 INFO cluster/config.go:110 config: advertise-client-urls: ...
+...
+```
+
+By default, Easegress opens ports 2379, 2380, and 2381; however, you can modify these settings along with other arguments either in the configuration file or via command-line arguments. For a complete list of arguments, please refer to the `easegress-server --help` command.
+
+After launching successfully, we can check the status of the one-node cluster.
+
+```bash
+$ egctl get member
+...
+
+$ egctl describe member
+...
+```
+
+### Reverse Proxy
+
+Assuming you have two backend HTTP services running at `127.0.0.1:9095` and `127.0.0.1:9096`, you can initiate an HTTP proxy from port 10080 to these backends using the following command:
+
+```bash
+$ egctl create httpproxy demo --port 10080 \
+ --rule="/pipeline=http://127.0.0.1:9095,http://127.0.0.1:9096"
+```
+
+Then try it:
+
+```bash
+curl -v 127.0.0.1:10080/pipeline
+```
+
+The request will be forwarded to either `127.0.0.1:9095/pipeline` or `127.0.0.1:9096/pipeline`, utilizing a round-robin load-balancing policy.
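The round-robin policy mentioned above can be sketched in a few lines of Python (illustrative only; this is not Easegress's actual implementation):

```python
from itertools import cycle

# Backends matching the proxy rule above.
backends = cycle(["http://127.0.0.1:9095", "http://127.0.0.1:9096"])

def pick_backend():
    """Return the next backend in round-robin order."""
    return next(backends)

# Four consecutive requests alternate between the two backends.
print([pick_backend() for _ in range(4)])
# → ['http://127.0.0.1:9095', 'http://127.0.0.1:9096',
#    'http://127.0.0.1:9095', 'http://127.0.0.1:9096']
```

Easegress additionally supports weighted, hash-based, and sticky-session policies, which maintain extra per-backend state.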
+
+More about getting started with Easegress:
+
+- [Quick Start](docs/01.Getting-Started/1.1.Quick-Start.md)
+- [Install Easegress](docs/01.Getting-Started/1.2.Install.md)
+- [Main Concepts](docs/01.Getting-Started/1.3.Concepts.md)
+
+## Use Cases
+
+The following examples show how to use Easegress for different scenarios.
+
+- [API Aggregation](docs/02.Tutorials/2.3.Pipeline-Explained.md#api-aggregation) - Aggregating many APIs into a single API.
+- [Cluster Deployment](docs/05.Administration/5.1.Config-and-Cluster-Deployment.md) - How to deploy multiple Easegress cluster nodes.
+- [Canary Release](docs/03.Advanced-Cookbook/3.04.Canary-Release.md) - How to do canary release with Easegress.
+- [Distributed Tracing](docs/03.Advanced-Cookbook/3.05.Distributed-Tracing.md) - How to do APM tracing - Zipkin.
+- [FaaS](docs/03.Advanced-Cookbook/3.09.FaaS.md) - Supporting Knative FaaS integration
+- [Flash Sale](docs/03.Advanced-Cookbook/3.09.FaaS.md) - How to do high concurrent promotion sales with Easegress
+- [Kubernetes Ingress Controller](docs/04.Cloud-Native/4.1.Kubernetes-Ingress-Controller.md) - How to integrate with Kubernetes as ingress controller
+- [LoadBalancer](docs/02.Tutorials/2.3.Pipeline-Explained.md#load-balancer) - A number of the strategies of load balancing
+- [MQTTProxy](docs/03.Advanced-Cookbook/3.01.MQTT-Proxy.md) - An example MQTT proxy with a Kafka backend.
+- [Multiple API Orchestration](docs/03.Advanced-Cookbook/3.03.Multiple-API-Orchestration.md) - A Telegram translation bot.
+- [Performance](docs/03.Advanced-Cookbook/3.11.Performance.md) - Performance optimization - compression, caching etc.
+- [Pipeline](docs/02.Tutorials/2.3.Pipeline-Explained.md) - How to orchestrate HTTP filters for requests/responses handling
+- [Resilience and Fault Tolerance](docs/02.Tutorials/2.4.Resilience.md) - CircuitBreaker, RateLimiter, Retry, TimeLimiter, etc. (Porting from [Java resilience4j](https://github.com/resilience4j/resilience4j))
+- [Security](docs/02.Tutorials/2.5.Traffic-Verification.md) - How to do authentication by Header, JWT, HMAC, OAuth2, etc.
+- [Service Registry](docs/03.Advanced-Cookbook/3.06.Service-Registry.md) - Supporting the Microservice registries - Zookeeper, Eureka, Consul, Nacos, etc.
+- [WebAssembly](docs/03.Advanced-Cookbook/3.07.WasmHost.md) - Using AssemblyScript to extend Easegress
+- [WebSocket](docs/02.Tutorials/2.6.Websocket.md) - WebSocket proxy for Easegress
+- [Workflow](docs/03.Advanced-Cookbook/3.10.Workflow.md) - An Example to make a workflow for a number of APIs.
+
+For full list, see [Tutorials](docs/02.Tutorials/README.md) and [Cookbook](docs/03.Advanced-Cookbook/README.md).
+
+## Documentation
+
+- [Getting Started](docs/01.Getting-Started/README.md)
+- [Tutorials](docs/02.Tutorials/README.md)
+- [Advanced Cookbook](docs/03.Advanced-Cookbook/README.md)
+- [Cloud Native](docs/04.Cloud-Native/README.md)
+- [Administration](docs/05.Administration/README.md)
+- [Development](docs/06.Development-for-Easegress/README.md)
+- [Reference](docs/07.Reference/README.md)
+
+## Easegress Portal
+
+[Easegress Portal](https://github.com/megaease/easegress-portal) is an intuitive, open-source user interface for the Easegress traffic orchestration system. Developed with React.js, this portal provides config management, metrics, and visualizations, enhancing the overall Easegress experience.
+
+### Screenshots
+
+**1. Cluster Management**
+
+
+
+**2. Traffic Management**
+
+
+
+**3. Pipeline Management**
+
+
+
+## Community
+
+- [Join Slack Workspace](https://cloud-native.slack.com/messages/easegress) for requirement, issue and development.
+- [MegaEase on Twitter](https://twitter.com/megaease)
+
+## Contributing
+
+See [Contributing guide](./CONTRIBUTING.md#contributing). The project welcomes contributions and suggestions that abide by the [CNCF Code of Conduct](./CODE_OF_CONDUCT.md).
+
+## License
+
+Easegress is under the Apache 2.0 license. See the [LICENSE](./LICENSE) file for details.
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Feasegress-io%2Feasegress?ref=badge_large)
diff --git a/data/readmes/elasticsearch-v922.md b/data/readmes/elasticsearch-v922.md
new file mode 100644
index 0000000..65aa2b5
--- /dev/null
+++ b/data/readmes/elasticsearch-v922.md
@@ -0,0 +1,308 @@
+# Elasticsearch - README (v9.2.2)
+
+**Repository**: https://github.com/elastic/elasticsearch
+**Version**: v9.2.2
+**Branch**: v9.2.2
+
+---
+
+= Elasticsearch
+
+Elasticsearch is a distributed search and analytics engine, scalable data store and vector database optimized for speed and relevance on production-scale workloads. Elasticsearch is the foundation of Elastic's open Stack platform. Search in near real-time over massive datasets, perform vector searches, integrate with generative AI applications, and much more.
+
+Use cases enabled by Elasticsearch include:
+
+* https://www.elastic.co/search-labs/blog/articles/retrieval-augmented-generation-rag[Retrieval Augmented Generation (RAG)]
+* https://www.elastic.co/search-labs/blog/categories/vector-search[Vector search]
+* Full-text search
+* Logs
+* Metrics
+* Application performance monitoring (APM)
+* Security logs
+
+\... and more!
+
+To learn more about Elasticsearch's features and capabilities, see our
+https://www.elastic.co/products/elasticsearch[product page].
+
+For https://www.elastic.co/search-labs/blog/categories/ml-research[machine learning innovations] and the latest https://www.elastic.co/search-labs/blog/categories/lucene[Lucene contributions from Elastic], see https://www.elastic.co/search-labs[Search Labs].
+
+[[get-started]]
+== Get started
+
+The simplest way to set up Elasticsearch is to create a managed deployment with
+https://www.elastic.co/cloud/as-a-service[Elasticsearch Service on Elastic
+Cloud].
+
+If you prefer to install and manage Elasticsearch yourself, you can download
+the latest version from
+https://www.elastic.co/downloads/elasticsearch[elastic.co/downloads/elasticsearch].
+
+=== Run Elasticsearch locally
+
+////
+IMPORTANT: This content is replicated in the Elasticsearch repo. See `run-elasticsearch-locally.asciidoc`.
+Ensure both files are in sync.
+
+https://github.com/elastic/start-local is the source of truth.
+////
+
+[WARNING]
+====
+DO NOT USE THESE INSTRUCTIONS FOR PRODUCTION DEPLOYMENTS.
+
+This setup is intended for local development and testing only.
+====
+
+Quickly set up Elasticsearch and Kibana in Docker for local development or testing, using the https://github.com/elastic/start-local?tab=readme-ov-file#-try-elasticsearch-and-kibana-locally[`start-local` script].
+
+ℹ️ For more detailed information about the `start-local` setup, refer to the https://github.com/elastic/start-local[README on GitHub].
+
+==== Prerequisites
+
+- If you don't have Docker installed, https://www.docker.com/products/docker-desktop[download and install Docker Desktop] for your operating system.
+- If you're using Microsoft Windows, then install https://learn.microsoft.com/en-us/windows/wsl/install[Windows Subsystem for Linux (WSL)].
+
+==== Trial license
+This setup comes with a one-month trial license that includes all Elastic features.
+
+After the trial period, the license reverts to *Free and open - Basic*.
+Refer to https://www.elastic.co/subscriptions[Elastic subscriptions] for more information.
+
+==== Run `start-local`
+
+To set up Elasticsearch and Kibana locally, run the `start-local` script:
+
+[source,sh]
+----
+curl -fsSL https://elastic.co/start-local | sh
+----
+// NOTCONSOLE
+
+This script creates an `elastic-start-local` folder containing configuration files and starts both Elasticsearch and Kibana using Docker.
+
+After running the script, you can access Elastic services at the following endpoints:
+
+* *Elasticsearch*: http://localhost:9200
+* *Kibana*: http://localhost:5601
+
+The script generates a random password for the `elastic` user, which is displayed at the end of the installation and stored in the `.env` file.
+
+[CAUTION]
+====
+This setup is for local testing only. HTTPS is disabled, and Basic authentication is used for Elasticsearch. For security, Elasticsearch and Kibana are accessible only through `localhost`.
+====
+
+==== API access
+
+An API key for Elasticsearch is generated and stored in the `.env` file as `ES_LOCAL_API_KEY`.
+Use this key to connect to Elasticsearch with a https://www.elastic.co/guide/en/elasticsearch/client/index.html[programming language client] or the https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html[REST API].
+
+From the `elastic-start-local` folder, check the connection to Elasticsearch using `curl`:
+
+[source,sh]
+----
+source .env
+curl $ES_LOCAL_URL -H "Authorization: ApiKey ${ES_LOCAL_API_KEY}"
+----
+
+To use the password for the `elastic` user, set and export the `ES_LOCAL_PASSWORD` environment variable. For example:
+
+[source,sh]
+----
+source .env
+export ES_LOCAL_PASSWORD
+----
+
+// NOTCONSOLE
+
+=== Send requests to Elasticsearch
+
+You send data and other requests to Elasticsearch through REST APIs.
+You can interact with Elasticsearch using any client that sends HTTP requests,
+such as the https://www.elastic.co/guide/en/elasticsearch/client/index.html[Elasticsearch
+language clients] and https://curl.se[curl].
+
+==== Using curl
+
+Here's an example curl command to create a new Elasticsearch index, using basic auth:
+
+[source,sh]
+----
+curl -u elastic:$ES_LOCAL_PASSWORD \
+ -X PUT \
+ http://localhost:9200/my-new-index \
+ -H 'Content-Type: application/json'
+----
+
+// NOTCONSOLE
+
+==== Using a language client
+
+To connect to your local dev Elasticsearch cluster with a language client, you can use basic authentication with the `elastic` username and the password stored in the `ES_LOCAL_PASSWORD` environment variable.
+
+You'll use the following connection details:
+
+* **Elasticsearch endpoint**: `http://localhost:9200`
+* **Username**: `elastic`
+* **Password**: `$ES_LOCAL_PASSWORD` (Value you set in the environment variable)
+
+For example, to connect with the Python `elasticsearch` client:
+
+[source,python]
+----
+import os
+from elasticsearch import Elasticsearch
+
+username = 'elastic'
+password = os.getenv('ES_LOCAL_PASSWORD') # Value you set in the environment variable
+
+client = Elasticsearch(
+ "http://localhost:9200",
+ basic_auth=(username, password)
+)
+
+print(client.info())
+----
+
+==== Using the Dev Tools Console
+
+Kibana's developer console provides an easy way to experiment and test requests.
+To access the console, open Kibana, then go to **Management** > **Dev Tools**.
+
+**Add data**
+
+You index data into Elasticsearch by sending JSON objects (documents) through the REST APIs.
+Whether you have structured or unstructured text, numerical data, or geospatial data,
+Elasticsearch efficiently stores and indexes it in a way that supports fast searches.
+
+For timestamped data such as logs and metrics, you typically add documents to a
+data stream made up of multiple auto-generated backing indices.
+
+To add a single document to an index, submit an HTTP POST request that targets the index.
+
+----
+POST /customer/_doc/1
+{
+ "firstname": "Jennifer",
+ "lastname": "Walters"
+}
+----
+
+This request automatically creates the `customer` index if it doesn't exist,
+adds a new document that has an ID of 1, and
+stores and indexes the `firstname` and `lastname` fields.
+
+The new document is available immediately from any node in the cluster.
+You can retrieve it with a GET request that specifies its document ID:
+
+----
+GET /customer/_doc/1
+----
+
+To add multiple documents in one request, use the `_bulk` API.
+Bulk data must be newline-delimited JSON (NDJSON).
+Each line must end in a newline character (`\n`), including the last line.
+
+----
+PUT customer/_bulk
+{ "create": { } }
+{ "firstname": "Monica","lastname":"Rambeau"}
+{ "create": { } }
+{ "firstname": "Carol","lastname":"Danvers"}
+{ "create": { } }
+{ "firstname": "Wanda","lastname":"Maximoff"}
+{ "create": { } }
+{ "firstname": "Jennifer","lastname":"Takeda"}
+----
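The NDJSON body required by `_bulk` can also be built programmatically. A small Python sketch (the documents mirror the example above; `bulk_body` is an illustrative helper, not a client API):

```python
import json

def bulk_body(documents):
    """Serialize documents as an NDJSON bulk body: a `create` action line
    before each document, with the trailing newline `_bulk` requires."""
    lines = []
    for doc in documents:
        lines.append(json.dumps({"create": {}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

body = bulk_body([
    {"firstname": "Monica", "lastname": "Rambeau"},
    {"firstname": "Carol", "lastname": "Danvers"},
])
print(body)
```

The resulting string can be sent as the body of `PUT customer/_bulk` with the `Content-Type: application/x-ndjson` header.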
+
+**Search**
+
+Indexed documents are available for search in near real-time.
+The following search matches all customers with a first name of _Jennifer_
+in the `customer` index.
+
+----
+GET customer/_search
+{
+ "query" : {
+ "match" : { "firstname": "Jennifer" }
+ }
+}
+----
+
+**Explore**
+
+You can use Discover in Kibana to interactively search and filter your data.
+From there, you can start creating visualizations and building and sharing dashboards.
+
+To get started, create a _data view_ that connects to one or more Elasticsearch indices,
+data streams, or index aliases.
+
+. Go to **Management > Stack Management > Kibana > Data Views**.
+. Select **Create data view**.
+. Enter a name for the data view and a pattern that matches one or more indices,
+such as _customer_.
+. Select **Save data view to Kibana**.
+
+To start exploring, go to **Analytics > Discover**.
+
+[[upgrade]]
+== Upgrade
+
+To upgrade from an earlier version of Elasticsearch, see the
+https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html[Elasticsearch upgrade
+documentation].
+
+[[build-source]]
+== Build from source
+
+Elasticsearch uses https://gradle.org[Gradle] for its build system.
+
+To build a distribution for your local OS and print its output location upon
+completion, run:
+----
+./gradlew localDistro
+----
+
+To build a distribution for another platform, run the relevant command:
+----
+./gradlew :distribution:archives:linux-tar:assemble
+./gradlew :distribution:archives:darwin-tar:assemble
+./gradlew :distribution:archives:windows-zip:assemble
+----
+
+Distributions are output to `distribution/archives`.
+
+To run the test suite, see xref:TESTING.asciidoc[TESTING].
+
+[[docs]]
+== Documentation
+
+For the complete Elasticsearch documentation visit
+https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html[elastic.co].
+
+For information about our documentation processes, see the
+https://github.com/elastic/elasticsearch/blob/main/docs/README.md[docs README].
+
+[[examples]]
+== Examples and guides
+
+The https://github.com/elastic/elasticsearch-labs[`elasticsearch-labs`] repo contains executable Python notebooks, sample apps, and resources to test out Elasticsearch for vector search, hybrid search and generative AI use cases.
+
+
+[[contribute]]
+== Contribute
+
+For contribution guidelines, see xref:CONTRIBUTING.md[CONTRIBUTING].
+
+[[questions]]
+== Questions? Problems? Suggestions?
+
+* To report a bug or request a feature, create a
+https://github.com/elastic/elasticsearch/issues/new/choose[GitHub Issue]. Please
+ensure someone else hasn't created an issue for the same topic.
+
+* Need help using Elasticsearch? Reach out on the
+https://discuss.elastic.co[Elastic Forum] or https://ela.st/slack[Slack]. A
+fellow community member or Elastic engineer will be happy to help you out.
diff --git a/data/readmes/emissary-ingress-v400-rc1.md b/data/readmes/emissary-ingress-v400-rc1.md
new file mode 100644
index 0000000..307e0f8
--- /dev/null
+++ b/data/readmes/emissary-ingress-v400-rc1.md
@@ -0,0 +1,163 @@
+# Emissary-Ingress - README (v4.0.0-rc.1)
+
+**Repository**: https://github.com/emissary-ingress/emissary
+**Version**: v4.0.0-rc.1
+
+---
+
+Emissary-ingress
+================
+
+
+[![Version][badge-version-img]][badge-version-link]
+[![Docker Repository][badge-docker-img]][badge-docker-link]
+[![Join Slack][badge-slack-img]][badge-slack-link]
+[![Core Infrastructure Initiative: Best Practices][badge-cii-img]][badge-cii-link]
+[![Artifact HUB][badge-artifacthub-img]][badge-artifacthub-link]
+
+[badge-version-img]: https://img.shields.io/docker/v/emissaryingress/emissary?sort=semver
+[badge-version-link]: https://github.com/emissary-ingress/emissary/releases
+[badge-docker-img]: https://img.shields.io/docker/pulls/emissaryingress/emissary
+[badge-docker-link]: https://hub.docker.com/r/emissaryingress/emissary
+[badge-slack-img]: https://img.shields.io/badge/slack-join-orange.svg
+[badge-slack-link]: https://communityinviter.com/apps/cloud-native/cncf
+[badge-cii-img]: https://bestpractices.coreinfrastructure.org/projects/1852/badge
+[badge-cii-link]: https://bestpractices.coreinfrastructure.org/projects/1852
+[badge-artifacthub-img]: https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/emissary-ingress
+[badge-artifacthub-link]: https://artifacthub.io/packages/helm/datawire/emissary-ingress
+
+
+
+---
+
+## QUICKSTART
+
+Looking to get started as quickly as possible? Check out [the
+QUICKSTART](https://emissary-ingress.dev/docs/3.10/quick-start/)!
+
+### Latest Release
+
+The latest production version of Emissary is **3.10.0**.
+
+**Note well** that there is also an Ambassador Edge Stack 3.10.0, but
+**Emissary 3.10 and Edge Stack 3.10 are not equivalent**. Their codebases have
+diverged and will continue to do so.
+
+---
+
+Emissary-ingress
+================
+
+[Emissary-ingress](https://www.getambassador.io/docs/open-source) is an
+open-source, developer-centric, Kubernetes-native API gateway built on [Envoy
+Proxy]. Emissary-ingress is a CNCF incubating project (and was formerly known
+as Ambassador API Gateway).
+
+### Design Goals
+
+The first problem faced by any organization trying to develop cloud-native
+applications is the _ingress problem_: allowing users outside the cluster to
+access the application running inside the cluster. Emissary is built around
+the idea that the application developers should be able to solve the ingress
+problem themselves, without needing to become Kubernetes experts and without
+needing dedicated operations staff: a self-service, developer-centric workflow
+is necessary to develop at scale.
+
+Emissary is open-source, developer-centric, role-oriented, opinionated, and
+Kubernetes-native.
+
+- open-source: Emissary is licensed under the Apache 2 license, permitting use
+ or modification by anyone.
+- developer-centric: Emissary is designed taking the application developer
+ into account first.
+- role-oriented: Emissary's configuration deliberately tries to separate
+ elements to allow separation of concerns between developers and operations.
+- opinionated: Emissary deliberately tries to make easy things easy, even if
+ that comes at the cost of not allowing some uncommon features.
+
+### Features
+
+Emissary supports all the table-stakes features needed for a modern API
+gateway:
+
+* Per-request [load balancing]
+* Support for routing [gRPC], [HTTP/2], [TCP], and [web sockets]
+* Declarative configuration via Kubernetes [custom resources]
+* Fine-grained [authentication] and [authorization]
+* Advanced routing features like [canary releases], [A/B testing], [dynamic routing], and [sticky sessions]
+* Resilience features like [retries], [rate limiting], and [circuit breaking]
+* Observability features including comprehensive [metrics] support using the [Prometheus] stack
+* Easy service mesh integration with [Linkerd], [Istio], [Consul], etc.
+* [Knative serverless integration]
+
+See the full list of [features](https://www.getambassador.io/docs/emissary) here.
+
+### Branches
+
+(If you are looking at this list on a branch other than `master`, it
+may be out of date.)
+
+- [`main`](https://github.com/emissary-ingress/emissary/tree/main): Emissary 4 development work
+
+**No further development is planned on any branches listed below.**
+
+- [`master`](https://github.com/emissary-ingress/emissary/tree/master) - **Frozen** at Emissary 3.10.0
+- [`release/v3.10`](https://github.com/emissary-ingress/emissary/tree/release/v3.10) - Emissary-ingress 3.10.0 release branch
+- [`release/v3.9`](https://github.com/emissary-ingress/emissary/tree/release/v3.9)
+ - Emissary-ingress 3.9.1 release branch
+- [`release/v2.5`](https://github.com/emissary-ingress/emissary/tree/release/v2.5) - Emissary-ingress 2.5.1 release branch
+
+**Note well** that there is also an Ambassador Edge Stack 3.10.0, but
+**Emissary 3.10 and Edge Stack 3.10 are not equivalent**. Their codebases have
+diverged and will continue to do so.
+
+#### Community
+
+Emissary-ingress is a CNCF Incubating project and welcomes any and all
+contributors.
+
+Check out the [`Community/`](Community/) directory for information on
+the way the community is run, including:
+
+ - the [`CODE_OF_CONDUCT.md`](Community/CODE_OF_CONDUCT.md)
+ - the [`GOVERNANCE.md`](Community/GOVERNANCE.md) structure
+ - the list of [`MAINTAINERS.md`](Community/MAINTAINERS.md)
+ - the [`MEETING_SCHEDULE.md`](Community/MEETING_SCHEDULE.md) of
+ regular trouble-shooting meetings and contributor meetings
+ - how to get [`SUPPORT.md`](Community/SUPPORT.md).
+
+The best way to join the community is to join the `#emissary-ingress` channel
+in the [CNCF Slack]. This is also the best place for technical information
+about Emissary's architecture or development.
+
+If you're interested in contributing, here are some ways:
+* Write a blog post for [our blog](https://blog.getambassador.io)
+* Investigate an [open issue](https://github.com/emissary-ingress/emissary/issues)
+* Add [more tests](https://github.com/emissary-ingress/emissary/tree/main/ambassador/tests)
+
+
+[CNCF Slack]: https://communityinviter.com/apps/cloud-native/cncf
+[Envoy Proxy]: https://www.envoyproxy.io
+
+
+
+[authentication]: https://www.getambassador.io/docs/emissary/latest/topics/running/services/auth-service/
+[canary releases]: https://www.getambassador.io/docs/emissary/latest/topics/using/canary/
+[circuit breaking]: https://www.getambassador.io/docs/emissary/latest/topics/using/circuit-breakers/
+[Consul]: https://www.getambassador.io/docs/emissary/latest/howtos/consul/
+[CRDs]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
+[Datadog]: https://www.getambassador.io/docs/emissary/latest/topics/running/statistics/#datadog
+[Grafana]: https://www.getambassador.io/docs/emissary/latest/topics/running/statistics/#grafana
+[gRPC and HTTP/2]: https://www.getambassador.io/docs/emissary/latest/howtos/grpc/
+[Istio]: https://www.getambassador.io/docs/emissary/latest/howtos/istio/
+[Knative serverless integration]: https://www.getambassador.io/docs/emissary/latest/howtos/knative/
+[Linkerd]: https://www.getambassador.io/docs/emissary/latest/howtos/linkerd2/
+[load balancing]: https://www.getambassador.io/docs/emissary/latest/topics/running/load-balancer/
+[metrics]: https://www.getambassador.io/docs/emissary/latest/topics/running/statistics/
+[Prometheus]: https://www.getambassador.io/docs/emissary/latest/topics/running/statistics/#prometheus
+[rate limiting]: https://www.getambassador.io/docs/emissary/latest/topics/running/services/rate-limit-service/
+[self-service configuration]: https://www.getambassador.io/docs/emissary/latest/topics/using/mappings/
+[sticky sessions]: https://www.getambassador.io/docs/emissary/latest/topics/running/load-balancer/#sticky-sessions--session-affinity
+[TCP]: https://www.getambassador.io/docs/emissary/latest/topics/using/tcpmappings/
+[TLS]: https://www.getambassador.io/docs/emissary/latest/howtos/tls-termination/
+[web sockets]: https://www.getambassador.io/docs/emissary/latest/topics/using/tcpmappings/
diff --git a/data/readmes/envoy-v1363.md b/data/readmes/envoy-v1363.md
new file mode 100644
index 0000000..718740a
--- /dev/null
+++ b/data/readmes/envoy-v1363.md
@@ -0,0 +1,109 @@
+# Envoy - README (v1.36.3)
+
+**Repository**: https://github.com/envoyproxy/envoy
+**Version**: v1.36.3
+
+---
+
+
+
+[Cloud-native high-performance edge/middle/service proxy](https://www.envoyproxy.io/)
+
+Envoy is hosted by the [Cloud Native Computing Foundation](https://cncf.io) (CNCF). If you are a
+company that wants to help shape the evolution of technologies that are container-packaged,
+dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's
+involved and how Envoy plays a role, read the CNCF
+[announcement](https://www.cncf.io/blog/2017/09/13/cncf-hosts-envoy/).
+
+[](https://bestpractices.coreinfrastructure.org/projects/1266)
+[](https://securityscorecards.dev/viewer/?uri=github.com/envoyproxy/envoy)
+[](https://clomonitor.io/projects/cncf/envoy)
+[](https://dev.azure.com/cncf/envoy/_build/latest?definitionId=11&branchName=main)
+[](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:envoy)
+[](https://powerci.osuosl.org/job/build-envoy-static-master/)
+[](https://ibmz-ci.osuosl.org/job/Envoy_IBMZ_CI/)
+
+## Documentation
+
+* [Official documentation](https://www.envoyproxy.io/)
+* [FAQ](https://www.envoyproxy.io/docs/envoy/latest/faq/overview)
+* [Example documentation](https://github.com/envoyproxy/examples/)
+* [Blog](https://medium.com/@mattklein123/envoy-threading-model-a8d44b922310) about the threading model
+* [Blog](https://medium.com/@mattklein123/envoy-hot-restart-1d16b14555b5) about hot restart
+* [Blog](https://medium.com/@mattklein123/envoy-stats-b65c7f363342) about stats architecture
+* [Blog](https://medium.com/@mattklein123/the-universal-data-plane-api-d15cec7a) about universal data plane API
+* [Blog](https://medium.com/@mattklein123/lyfts-envoy-dashboards-5c91738816b1) on Lyft's Envoy dashboards
+
+## Related
+
+* [data-plane-api](https://github.com/envoyproxy/data-plane-api): v2 API definitions as a standalone
+ repository. This is a read-only mirror of [api](api/).
+* [envoy-perf](https://github.com/envoyproxy/envoy-perf): Performance testing framework.
+* [envoy-filter-example](https://github.com/envoyproxy/envoy-filter-example): Example of how to add new filters
+ and link to the main repository.
+
+## Contact
+
+* [envoy-announce](https://groups.google.com/forum/#!forum/envoy-announce): Low frequency mailing
+ list where we will email announcements only.
+* [envoy-security-announce](https://groups.google.com/forum/#!forum/envoy-security-announce): Low frequency mailing
+ list where we will email security related announcements only.
+* [envoy-users](https://groups.google.com/forum/#!forum/envoy-users): General user discussion.
+* [envoy-dev](https://groups.google.com/forum/#!forum/envoy-dev): Envoy developer discussion (APIs,
+ feature design, etc.).
+* [envoy-maintainers](https://groups.google.com/forum/#!forum/envoy-maintainers): Use this list
+ to reach all core Envoy maintainers.
+* [Twitter](https://twitter.com/EnvoyProxy/): Follow along on Twitter!
+* [Slack](https://envoyproxy.slack.com/): Slack, to get invited go [here](https://communityinviter.com/apps/envoyproxy/envoy).
+ * NOTE: Response to user questions is best effort on Slack. For a "guaranteed" response please email
+ envoy-users@ per the guidance in the following linked thread.
+
+Please see [this](https://groups.google.com/forum/#!topic/envoy-announce/l9zjYsnS3TY) email thread
+for information on email list usage.
+
+## Contributing
+
+Contributing to Envoy is fun and modern C++ is a lot less scary than you might think if you don't
+have prior experience. To get started:
+
+* [Contributing guide](CONTRIBUTING.md)
+* [Beginner issues](https://github.com/envoyproxy/envoy/issues?q=is%3Aopen+is%3Aissue+label%3Abeginner)
+* [Build/test quick start using docker](ci#building-and-running-tests-as-a-developer)
+* [Developer guide](DEVELOPER.md)
+* Consider installing the Envoy [development support toolchain](https://github.com/envoyproxy/envoy/blob/main/support/README.md), which helps automate parts of the development process, particularly those involving code review.
+* Please make sure that you let us know if you are working on an issue so we don't duplicate work!
+
+## Community Meeting
+
+The Envoy team has a scheduled meeting time twice per month on Tuesday at 9am PT. The public
+Google calendar is [here](https://goo.gl/PkDijT). The meeting will only be held
+if there are agenda items listed in the [meeting
+minutes](https://goo.gl/5Cergb). Any member of the community should be able to
+propose agenda items by adding to the minutes. The maintainers will either confirm
+the additions to the agenda, or will cancel the meeting within 24 hours of the scheduled
+date if there is no confirmed agenda.
+
+## Security
+
+### Security Audit
+
+There have been several third-party engagements focused on Envoy security:
+* In 2018 Cure53 performed a security audit, [full report](docs/security/audit_cure53_2018.pdf).
+* In 2021 Ada Logics performed an audit on our fuzzing infrastructure with recommendations for improvements, [full report](docs/security/audit_fuzzer_adalogics_2021.pdf).
+
+### Reporting security vulnerabilities
+
+If you've found a vulnerability or a potential vulnerability in Envoy please let us know at
+[envoy-security](mailto:envoy-security@googlegroups.com). We'll send a confirmation
+email to acknowledge your report, and we'll send an additional email when we've identified the issue
+positively or negatively.
+
+For further details please see our complete [security release process](SECURITY.md).
+
+### ppc64le builds
+
+Builds for the ppc64le architecture, or builds using aws-lc, are not covered by the Envoy security policy. The ppc64le architecture is currently best-effort and not maintained by the Envoy maintainers.
+
+## Releases
+
+For further details please see our [release process](https://github.com/envoyproxy/envoy/blob/main/RELEASES.md).
diff --git a/data/readmes/eraser-v141.md b/data/readmes/eraser-v141.md
new file mode 100644
index 0000000..0f996d8
--- /dev/null
+++ b/data/readmes/eraser-v141.md
@@ -0,0 +1,46 @@
+# Eraser - README (v1.4.1)
+
+**Repository**: https://github.com/eraser-dev/eraser
+**Version**: v1.4.1
+
+---
+
+# Eraser: Cleaning up Images from Kubernetes Nodes
+
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Feraser-dev%2Feraser?ref=badge_shield)
+[](https://www.bestpractices.dev/projects/7622)
+[](https://api.securityscorecards.dev/projects/github.com/eraser-dev/eraser)
+
+
+
+Eraser helps Kubernetes admins remove a list of non-running images from all Kubernetes nodes in a cluster.
+
+## Getting started
+
+You can find a quick start guide in the Eraser [documentation](https://eraser-dev.github.io/eraser/docs/quick-start).
+
+## Demo
+
+
+
+## Contributing
+
+There are several ways to get involved:
+
+- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.
+- Join the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to discuss development, issues, use cases, etc.
+- Join the `#eraser` channel on the [Kubernetes Slack](https://kubernetes.slack.com/archives/C03Q8KV8YQ4)
+- View the development setup instructions in the [documentation](https://eraser-dev.github.io/eraser/docs/setup)
+
+This project welcomes contributions and suggestions.
+
+This project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
+
+## Support
+
+### How to file issues and get help
+
+This project uses GitHub Issues to track bugs and feature requests. Please search the [existing issues](https://github.com/eraser-dev/eraser/issues) before filing new issues to avoid duplicates. For new issues, file your bug or feature request as a new Issue.
+
+The Eraser maintainers will respond to the best of their abilities.
\ No newline at end of file
diff --git a/data/readmes/erigon-v331.md b/data/readmes/erigon-v331.md
new file mode 100644
index 0000000..ad603e4
--- /dev/null
+++ b/data/readmes/erigon-v331.md
@@ -0,0 +1,739 @@
+# Erigon - README (v3.3.1)
+
+**Repository**: https://github.com/erigontech/erigon
+**Version**: v3.3.1
+
+---
+
+# Erigon
+
+[](https://docs.erigon.tech/)
+[](https://erigon.tech/blog/)
+[](https://x.com/ErigonEth)
+[](https://dsc.gg/erigon)
+[](https://github.com/erigontech/erigon/actions/workflows/ci.yml)
+[](https://sonarcloud.io/summary/new_code?id=erigontech_erigon)
+
+Erigon is an implementation of Ethereum (execution layer with embeddable consensus layer), on the efficiency
+frontier.
+
+- [Erigon](#erigon)
+- [System Requirements](#system-requirements)
+- [Sync Times](#sync-times)
+- [Usage](#usage)
+ - [Getting Started](#getting-started)
+ - [Datadir structure](#datadir-structure)
+ - [History on cheap disk](#history-on-cheap-disk)
+ - [Erigon3 datadir size](#erigon3-datadir-size)
+ - [Erigon3 changes from Erigon2](#erigon3-changes-from-erigon2)
+ - [Logging](#logging)
+ - [Modularity](#modularity)
+ - [Embedded Consensus Layer](#embedded-consensus-layer)
+ - [Testnets](#testnets)
+ - [Block Production (PoS Validator)](#block-production-pos-validator)
+ - [Config Files TOML](#config-files-toml)
+ - [Beacon Chain (Consensus Layer)](#beacon-chain-consensus-layer)
+ - [Caplin](#caplin)
+ - [Caplin's Usage](#caplins-usage)
+ - [Multiple Instances / One Machine](#multiple-instances--one-machine)
+ - [Dev Chain](#dev-chain)
+- [Key features](#key-features)
+ - [Faster Initial Sync](#faster-initial-sync)
+ - [More Efficient State Storage](#more-efficient-state-storage)
+ - [JSON-RPC daemon](#json-rpc-daemon)
+ - [Grafana dashboard](#grafana-dashboard)
+- [FAQ](#faq)
+ - [Use as library](#use-as-library)
+ - [Default Ports and Firewalls](#default-ports-and-firewalls)
+ - [`erigon` ports](#erigon-ports)
+ - [`caplin` ports](#caplin-ports)
+ - [`beaconAPI` ports](#beaconapi-ports)
+ - [`shared` ports](#shared-ports)
+ - [`other` ports](#other-ports)
+ - [Hetzner expecting strict firewall rules](#hetzner-expecting-strict-firewall-rules)
+ - [Run as a separate user - `systemd` example](#run-as-a-separate-user---systemd-example)
+ - [Grab diagnostic for bug report](#grab-diagnostic-for-bug-report)
+ - [Run local devnet](#run-local-devnet)
+ - [Docker permissions error](#docker-permissions-error)
+ - [Public RPC](#public-rpc)
+ - [RaspberryPI](#raspberrypi)
+ - [Run all components by docker-compose](#run-all-components-by-docker-compose)
+ - [Optional: Setup dedicated user](#optional-setup-dedicated-user)
+ - [Environment Variables](#environment-variables)
+ - [Run](#run)
+ - [How to change db pagesize](#how-to-change-db-pagesize)
+ - [Erigon3 perf tricks](#erigon3-perf-tricks)
+ - [Windows](#windows)
+- [Getting in touch](#getting-in-touch)
+ - [Reporting security issues/concerns](#reporting-security-issuesconcerns)
+
+
+
+**Important defaults**: Erigon 3 is a Full Node by default. (Erigon 2 was an [Archive Node](https://ethereum.org/en/developers/docs/nodes-and-clients/archive-nodes/#what-is-an-archive-node) by default.)
+Set `--prune.mode` to "archive" if you need an archive node, or to "minimal" if you run a validator on a small disk (this cannot be changed after the first start).
+
+In-depth links are marked by the microscope sign (🔬)
+
+System Requirements
+===================
+
+RAM: >=32GB, [Golang >= 1.24](https://golang.org/doc/install); GCC 10+ or Clang; On Linux: kernel > v4. 64-bit
+architecture.
+
+- ArchiveNode Ethereum Mainnet: 1.6TB (May 2025). FullNode: 1.1TB (May 2025)
+- ArchiveNode Gnosis: 640GB (May 2025). FullNode: 300GB (June 2024)
+- ArchiveNode Polygon Mainnet: 4.1TB (April 2024). FullNode: 2TB (April 2024)
+
+Use an SSD or NVMe drive. We do not recommend HDD: on HDD Erigon will always stay N blocks behind the chain tip, but
+will not fall further behind. Bear in mind that SSD performance deteriorates when close to capacity. On cloud network
+drives (like gp3), Blocks Execution is slow: see
+[cloud-network-drives](https://github.com/erigontech/erigon?tab=readme-ov-file#cloud-network-drives)
+
+🔬 More details on [Erigon3 datadir size](#erigon3-datadir-size)
+
+🔬 More details on what types of data are stored [here](https://ledgerwatch.github.io/turbo_geth_release.html#Disk-space)
+
+Sync Times
+==========
+
+These are the approximate sync times syncing from scratch to the tip of the chain (results may vary depending on hardware and bandwidth).
+
+
+| Chain | Archive | Full | Minimal |
+|------------|-----------------|----------------|----------------|
+| Ethereum | 7 Hours, 55 Minutes | 4 Hours, 23 Minutes | 1 Hour, 41 Minutes |
+| Gnosis | 2 Hours, 10 Minutes | 1 Hour, 5 Minutes | 33 Minutes |
+| Polygon | 1 Day, 21 Hours | 21 Hours, 41 Minutes | 11 Hours, 54 Minutes |
+
+Usage
+=====
+
+### Getting Started
+
+[Release Notes and Binaries](https://github.com/erigontech/erigon/releases)
+
+Build latest release (this will be suitable for most users just wanting to run a node):
+
+```sh
+git clone --branch release/ --single-branch https://github.com/erigontech/erigon.git
+cd erigon
+make erigon
+./build/bin/erigon
+```
+
+Use `--datadir` to choose where to store data.
+
+Use `--chain=gnosis` for [Gnosis Chain](https://www.gnosis.io/), `--chain=bor-mainnet` for Polygon Mainnet,
+and `--chain=amoy` for Polygon Amoy.
+For Gnosis Chain you need a [Consensus Layer](#beacon-chain-consensus-layer) client alongside
+Erigon (https://docs.gnosischain.com/category/step--3---run-consensus-client).
+
+Running `make help` will list and describe the convenience commands available in the [Makefile](./Makefile).
+
+### Upgrading from 3.0 to 3.1
+
+1. Backup your datadir.
+2. Upgrade your Erigon binary.
+3. OPTIONAL: Upgrade snapshot files.
+ 1. Update snapshot file names. To do this either run Erigon 3.1 until the sync stage completes, or run `erigon snapshots update-to-new-ver-format --datadir /your/datadir`.
+ 2. Reset your datadir so that Erigon will sync to a newer snapshot. `erigon snapshots reset --datadir /your/datadir`. See [Resetting snapshots](#Resetting-snapshots) for more details.
+4. Run Erigon 3.1. Your snapshot file names will be migrated automatically if you didn't do this manually. If you reset your datadir, Erigon will sync to the latest remote snapshots.
+
+### Datadir structure
+
+```sh
+datadir
+ chaindata # "Recently-updated Latest State", "Recent History", "Recent Blocks"
+ snapshots # contains `.seg` files - it's old blocks
+ domain # Latest State
+ history # Historical values
+ idx # InvertedIndices: can search/filtering/union/intersect them - to find historical data. like eth_getLogs or trace_transaction
+ accessor # Additional (generated) indices of history - have "random-touch" read-pattern. They can serve only `Get` requests (no search/filters).
+ txpool # pending transactions. safe to remove.
+ nodes # p2p peers. safe to remove.
+ temp # used to sort data bigger than RAM. can grow to ~100gb. cleaned at startup.
+
+# There are 4 domains: account, storage, code, commitment
+```
+
+See the [lib](db/downloader/README.md) and [cmd](cmd/downloader/README.md) READMEs for more information.
+
+### History on cheap disk
+
+If you can afford to store the datadir on an NVMe RAID, great. If not, it's possible to store history on a cheaper drive.
+
+```sh
+# place (or ln -s) `datadir` on the slow disk, then link some sub-folders to a fast (low-latency) disk.
+# Example: which folders to link to the fast disk to speed up execution
+datadir
+ chaindata # link to fast disk
+ snapshots
+ domain # link to fast disk
+ history
+ idx
+ accessor
+ temp # buffers to sort data >> RAM. sequential-buffered IO - is slow-disk-friendly
+
+# Example: how to speed up history access:
+# - go step by step - first try storing `accessor` on the fast disk
+# - if speed is not good enough: `idx`
+# - if still not enough: `history`
+```
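The linking above can be sketched as a small script. The paths here are placeholders (`/tmp/...` stands in for your real slow and fast mount points); adapt them to your own disks before use.

```shell
# Sketch only: SLOW/FAST stand in for your real slow and fast mount points.
SLOW=/tmp/erigon-slow-disk/datadir
FAST=/tmp/erigon-fast-disk

mkdir -p "$SLOW/snapshots" "$FAST/chaindata" "$FAST/domain"

# chaindata and snapshots/domain benefit most from low-latency storage
ln -sfn "$FAST/chaindata" "$SLOW/chaindata"
ln -sfn "$FAST/domain" "$SLOW/snapshots/domain"

ls -l "$SLOW"
```

Erigon follows the symlinks transparently, so `--datadir` still points at the slow-disk path.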
+
+### Erigon3 datadir size
+
+```sh
+# eth-mainnet - archive - Nov 2024
+
+du -hsc /erigon/chaindata
+15G /erigon/chaindata
+
+du -hsc /erigon/snapshots/*
+120G /erigon/snapshots/accessor
+300G /erigon/snapshots/domain
+280G /erigon/snapshots/history
+430G /erigon/snapshots/idx
+2.3T /erigon/snapshots
+```
+
+```sh
+# bor-mainnet - archive - Nov 2024
+
+du -hsc /erigon/chaindata
+20G /erigon/chaindata
+
+du -hsc /erigon/snapshots/*
+360G /erigon-data/snapshots/accessor
+1.1T /erigon-data/snapshots/domain
+750G /erigon-data/snapshots/history
+1.5T /erigon-data/snapshots/idx
+4.9T /erigon/snapshots
+```
+
+### Erigon3 changes from Erigon2
+
+- **Initial sync doesn't re-exec from 0:** it downloads 99% of LatestState and History
+- **Per-transaction granularity of history** (Erigon2 had per-block). This means:
+  - Can execute 1 historical transaction without executing its block
+  - If account X changes V1->V2->V1 within 1 block (in different transactions), `debug_getModifiedAccountsByNumber`
+    returns it
+  - Erigon3 doesn't store Logs (aka Receipts): it always re-executes the historical txn (but it's cheaper)
+- **Validator mode**: added. `--internalcl` is enabled by default. To disable it, use `--externalcl`.
+- **Store most of data in immutable files (segments/snapshots):**
+ - can symlink/mount latest state to fast drive and history to cheap drive
+ - `chaindata` is less than `15gb`. It's ok to `rm -rf chaindata`. (to prevent grow: recommend `--batchSize <= 1G`)
+- **`--prune` flags changed**: see `--prune.mode` (default: `full`, archive: `archive`, EIP-4444: `minimal`)
+- **Other changes:**
+ - ExecutionStage included many E2 stages: stage_hash_state, stage_trie, log_index, history_index, trace_index
+  - A restart doesn't lose much partial progress: `--sync.loop.block.limit=5_000` is enabled by default
+
+### Logging
+
+_Flags:_
+
+- `verbosity`
+- `log.console.verbosity` (overriding alias for `verbosity`)
+- `log.json`
+- `log.console.json` (alias for `log.json`)
+- `log.dir.path`
+- `log.dir.prefix`
+- `log.dir.verbosity`
+- `log.dir.json`
+- `torrent.verbosity`
+
+In order to log only to the stdout/stderr the `--verbosity` (or `log.console.verbosity`) flag can be used to supply an
+int value specifying the highest output log level:
+
+```
+ LvlCrit = 0
+ LvlError = 1
+ LvlWarn = 2
+ LvlInfo = 3
+ LvlDebug = 4
+ LvlTrace = 5
+```
+
+To collect logs on disk, set an output dir with `--log.dir.path`. If you want to change the filename
+produced by `erigon`, also set the `--log.dir.prefix` flag to an alternate name. The flag `--log.dir.verbosity` is
+also available to control the verbosity of this logging, taking either an int value as above or a string value such as
+'debug' or 'info'. The default verbosity for disk logging is 'debug' (4).
+
+Log format can be set to json by the use of the boolean flags `log.json` or `log.console.json`, or for the disk
+output `--log.dir.json`.
+
+#### Torrent client logging
+
+The torrent client in the Downloader logs to `logs/torrent.log` at the level specified by `torrent.verbosity` or WARN, whichever is lower. Logs at `torrent.verbosity` or higher are also passed through to the top level Erigon dir and console loggers (which must have their own levels set low enough to log the messages in their respective handlers).
+
+### Resetting snapshots
+
+Erigon 3.1 adds the command `erigon snapshots reset`. This modifies your datadir so that Erigon will sync to the latest remote snapshots on next run. You must pass `--datadir`. If the chain cannot be inferred from the chaindata, you must pass `--chain`. `--local=false` will prevent locally generated snapshots from also being removed. Pass `--dry-run` and/or `--verbosity=5` for more information.
+
+### Modularity
+
+Erigon is by default an "all in one binary" solution, but it's possible to start the TxPool as a separate process.
+The same is true for the JSON-RPC layer (RPCDaemon), the p2p layer (Sentry), the history download layer (Downloader),
+and consensus.
+Don't start services as separate processes unless you have a clear reason for it: resource limiting, scale, replacing
+them with your own implementation, or security.
+To see how to start Erigon's services as separate processes, look at [docker-compose.yml](./docker-compose.yml).
+Each service has its own `./cmd/*/README.md` file. See also the [Erigon Blog](https://erigon.tech/blog/).
+
+### Embedded Consensus Layer
+
+Built-in consensus for Ethereum Mainnet, Sepolia, Hoodi, Gnosis, Chiado.
+To use external Consensus Layer: `--externalcl`.
+
+### Testnets
+
+If you would like to give Erigon a try, a good option is to start syncing one of the public testnets, Hoodi (or Chiado).
+It syncs much quicker and does not take up as much disk space:
+
+```sh
+git clone https://github.com/erigontech/erigon.git
+cd erigon
+make erigon
+./build/bin/erigon --datadir= --chain=hoodi --prune.mode=full
+```
+
+Please note the `--datadir` option, which allows you to store Erigon files in a non-default location. The name of the
+directory passed to `--datadir` does not have to match the name of the chain in `--chain`.
+
+### Block Production (PoS Validator)
+
+Block production is fully supported for Ethereum & Gnosis Chain. It is still experimental for Polygon.
+
+### Config Files TOML
+
+You can set Erigon flags through a TOML configuration file with the flag `--config`. The flags set in the
+configuration file can be overridden by passing the flags directly on the Erigon command line:
+
+`./build/bin/erigon --config ./config.toml --chain=sepolia`
+
+Assuming we have `chain : "mainnet"` in our configuration file, adding `--chain=sepolia` overrides the
+flag inside the TOML configuration file and sets the chain to Sepolia:
+
+```toml
+datadir = 'your datadir'
+port = 1111
+chain = "mainnet"
+http = true
+"private.api.addr"="localhost:9090"
+
+"http.api" = ["eth","debug","net"]
+```
+
+### Beacon Chain (Consensus Layer)
+
+Erigon can be used as an Execution Layer (EL) for Consensus Layer clients (CL). Default configuration is OK.
+
+If your CL client is on a different device, add `--authrpc.addr 0.0.0.0` ([Engine API] listens on localhost by default)
+as well as `--authrpc.vhosts <host>`, where `<host>` is your source host or `any`.
+
+[Engine API]: https://github.com/ethereum/execution-apis/blob/main/src/engine
+
+In order to establish a secure connection between the Consensus Layer and the Execution Layer, a JWT secret key is
+automatically generated.
+
+The JWT secret key will be present in the datadir by default under the name of `jwt.hex` and its path can be specified
+with the flag `--authrpc.jwtsecret`.
+
+This piece of info needs to be specified in the Consensus Layer as well in order to establish connection successfully.
+More information can be found [here](https://github.com/ethereum/execution-apis/blob/main/src/engine/authentication.md).
+
+Once Erigon is running, you need to point your CL client to `<erigon-address>:8551`,
+where `<erigon-address>` is either `localhost` or the IP address of the device running Erigon, and also point to the
+JWT secret path created by Erigon.
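If you'd rather pre-generate the secret yourself, the Engine API defines it as 32 random bytes, hex-encoded. A minimal sketch (the `/tmp/jwt.hex` path is just an example; point `--authrpc.jwtsecret` and your CL client at wherever you put it):

```shell
# Generate a 32-byte (64 hex characters) JWT secret for the Engine API handshake.
openssl rand -hex 32 > /tmp/jwt.hex
# Erigon (--authrpc.jwtsecret) and the CL client must both read this same file.
cat /tmp/jwt.hex
```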
+
+### Caplin
+
+Caplin is a full-fledged validating Consensus Client like Prysm, Lighthouse, Teku, Nimbus and Lodestar. Its goals are to:
+
+* provide better stability
+* validate the chain
+* stay in sync
+* keep the execution of blocks at the chain tip
+* serve the Beacon API using a fast and compact data model, alongside low CPU and memory usage.
+
+The main reason we developed a new Consensus Layer is to experiment with the possible benefits that could come with it.
+For example, the Engine API does not work well with Erigon. The Engine API sends data one block at a time, which does
+not suit how Erigon works: Erigon is designed to handle many blocks simultaneously and needs to sort and process data
+efficiently. Therefore, it is better for Erigon to handle blocks independently instead of relying on the
+Engine API.
+
+#### Caplin's Usage
+
+Caplin is enabled by default. To disable it and enable the Engine API, use the `--externalcl` flag. From that point
+on, an external Consensus Layer is no longer needed.
+
+Caplin also has an archival mode for historical states and blocks. It can be enabled through the `--caplin.archive`
+flag.
+To enable Caplin's Beacon API, the flag `--beacon.api=<endpoints>` must be added,
+e.g.: `--beacon.api=beacon,builder,config,debug,node,validator,lighthouse` will enable all endpoints.
+Note: enabling the Beacon API will lead to a 6 GB higher RAM usage.
+
+### Multiple Instances / One Machine
+
+Define 6 flags to avoid conflicts: `--datadir --port --http.port --authrpc.port --torrent.port --private.api.addr`.
+Example of multiple chains on the same machine:
+
+```
+# mainnet
+./build/bin/erigon --datadir="" --chain=mainnet --port=30303 --http.port=8545 --authrpc.port=8551 --torrent.port=42069 --private.api.addr=127.0.0.1:9090 --http --ws --http.api=eth,debug,net,trace,web3,erigon
+
+
+# sepolia
+./build/bin/erigon --datadir="" --chain=sepolia --port=30304 --http.port=8546 --authrpc.port=8552 --torrent.port=42068 --private.api.addr=127.0.0.1:9091 --http --ws --http.api=eth,debug,net,trace,web3,erigon
+```
+
+Quote your path if it has spaces.
+
+### Dev Chain
+
+ 🔬 Detailed explanation is [DEV_CHAIN](/docs/DEV_CHAIN.md).
+
+Key features
+============
+
+### Faster Initial Sync
+
+With good network bandwidth, an Ethereum Mainnet FullNode syncs in 3
+hours thanks to [OtterSync](https://erigon.substack.com/p/erigon-3-alpha-2-introducing-blazingly).
+
+### More Efficient State Storage
+
+**Flat KV storage.** Erigon uses a key-value database and stores accounts and storage in a simple way.
+
+ 🔬 See our detailed DB walkthrough [here](./docs/programmers_guide/db_walkthrough.MD).
+
+**Preprocessing**. For some operations, Erigon uses temporary files to preprocess data before inserting it into the main
+DB. That reduces write amplification and DB inserts are orders of magnitude quicker.
+
+ 🔬 See our detailed ETL explanation [here](https://github.com/erigontech/erigon/blob/main/db/etl/README.md).
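The preprocess-then-insert idea can be illustrated with plain coreutils: sort bounded chunks independently, then do a single merge pass of sequential reads. This is a generic external-sort sketch, not Erigon's actual ETL code:

```shell
# External-sort sketch: split, sort each chunk "in RAM", then merge-sort chunks.
workdir=$(mktemp -d) && cd "$workdir"
seq 1000 | shuf > input.txt            # unsorted input
split -l 250 input.txt chunk_          # 4 bounded chunks
for f in chunk_*; do sort -n "$f" -o "$f"; done
sort -n -m chunk_* > sorted.txt        # merge pass: sequential I/O only
head -n 3 sorted.txt
```

Because the merge pass touches each chunk sequentially, the database receives keys in order, which is what keeps write amplification low.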
+
+**Plain state**
+
+**Single accounts/state trie**. Erigon uses a single Merkle trie for both accounts and the storage.
+
+ 🔬 [Staged Sync Readme](/docs/readthedocs/source/stagedsync.rst)
+
+### JSON-RPC daemon
+
+Most of Erigon's components (txpool, rpcdaemon, snapshots downloader, sentry, ...) can work inside Erigon or as an
+independent process on the same server (or another server). Example:
+
+```sh
+make erigon rpcdaemon
+./build/bin/erigon --datadir=/my --http=false
+# To run RPCDaemon as separated process: use same `--datadir` as Erigon
+./build/bin/rpcdaemon --datadir=/my --http.api=eth,erigon,web3,net,debug,trace,txpool --ws
+```
+
+- Supported JSON-RPC
+ calls: [eth](./rpc/jsonrpc/eth_api.go), [debug](./rpc/jsonrpc/debug_api.go), [net](./rpc/jsonrpc/net_api.go), [web3](./rpc/jsonrpc/web3_api.go)
+- increase throughput by: `--rpc.batch.concurrency`, `--rpc.batch.limit`, `--db.read.concurrency`
+- increase throughput by disabling: `--http.compression`, `--ws.compression`
+
+🔬 See [RPC-Daemon docs](./cmd/rpcdaemon/README.md)
+
+### Grafana dashboard
+
+`docker compose up prometheus grafana`, [detailed docs](./cmd/prometheus/Readme.md).
+
+FAQ
+================
+
+### Use as library
+
+```
+# please use git branch name (or commit hash). don't use git tags
+go get github.com/erigontech/erigon@main
+go mod tidy
+```
+
+### Default Ports and Firewalls
+
+#### `erigon` ports
+
+| Component | Port | Protocol | Purpose | Should Expose |
+|-----------|-------|-----------|-----------------------------|---------------|
+| engine | 9090 | TCP | gRPC Server | Private |
+| engine | 42069 | TCP & UDP | Snap sync (Bittorrent) | Public |
+| engine | 8551 | TCP | Engine API (JWT auth) | Private |
+| sentry | 30303 | TCP & UDP | eth/68 peering | Public |
+| sentry | 30304 | TCP & UDP | eth/69 peering | Public |
+| sentry | 9091 | TCP | incoming gRPC Connections | Private |
+| rpcdaemon | 8545 | TCP | HTTP & WebSockets & GraphQL | Private |
+| shutter | 23102 | TCP | Peering | Public |
+
+Typically, 30303 and 30304 are exposed to the internet to allow incoming peering connections. 9090 is exposed only
+internally for rpcdaemon or other connections (e.g. rpcdaemon -> erigon).
+Port 8551 (JWT authenticated) is exposed only internally for [Engine API] JSON-RPC queries from the Consensus Layer
+node.
+
+#### `caplin` ports
+
+| Component | Port | Protocol | Purpose | Should Expose |
+|-----------|------|----------|---------|---------------|
+| sentinel | 4000 | UDP | Peering | Public |
+| sentinel | 4001 | TCP | Peering | Public |
+
+In order to configure the ports, use:
+
+```
+ --caplin.discovery.addr value Address for Caplin DISCV5 protocol (default: "127.0.0.1")
+ --caplin.discovery.port value Port for Caplin DISCV5 protocol (default: 4000)
+ --caplin.discovery.tcpport value TCP Port for Caplin DISCV5 protocol (default: 4001)
+```
+
+#### `beaconAPI` ports
+
+| Component | Port | Protocol | Purpose | Should Expose |
+|-----------|------|----------|---------|---------------|
+| REST | 5555 | TCP | REST | Public |
+
+#### `shared` ports
+
+| Component | Port | Protocol | Purpose | Should Expose |
+|-----------|------|----------|---------|---------------|
+| all | 6060 | TCP | pprof | Private |
+| all | 6061 | TCP | metrics | Private |
+
+Optional flags enable pprof or metrics (or both). Use `--help` with the binary for more info.
+
+#### `other` ports
+
+Reserved for future use: **gRPC ports**: `9092` consensus engine, `9093` snapshot downloader, `9094` TxPool
+
+#### Hetzner expecting strict firewall rules
+
+```
+0.0.0.0/8 "This" Network RFC 1122, Section 3.2.1.3
+10.0.0.0/8 Private-Use Networks RFC 1918
+100.64.0.0/10 Carrier-Grade NAT (CGN) RFC 6598, Section 7
+127.0.0.0/8 Loopback RFC 1122, Section 3.2.1.3
+169.254.0.0/16 Link Local RFC 3927
+172.16.0.0/12 Private-Use Networks RFC 1918
+192.0.0.0/24 IETF Protocol Assignments RFC 5736
+192.0.2.0/24 TEST-NET-1 RFC 5737
+192.88.99.0/24 6to4 Relay Anycast RFC 3068
+192.168.0.0/16 Private-Use Networks RFC 1918
+198.18.0.0/15 Network Interconnect Device Benchmark Testing RFC 2544
+198.51.100.0/24 TEST-NET-2 RFC 5737
+203.0.113.0/24 TEST-NET-3 RFC 5737
+224.0.0.0/4 Multicast RFC 3171
+240.0.0.0/4 Reserved for Future Use RFC 1112, Section 4
+255.255.255.255/32 Limited Broadcast RFC 919, Section 7; RFC 922, Section 7
+```
+
+The same list
+in [IpTables syntax](https://ethereum.stackexchange.com/questions/6386/how-to-prevent-being-blacklisted-for-running-an-ethereum-client/13068#13068)
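As a sketch, the list above can be turned into firewall rules mechanically. The snippet below only *prints* `iptables` commands for a subset of the ranges (applying them needs root); the port and the exact range selection are illustrative assumptions, so review the output before running it:

```shell
# Print (don't apply) DROP rules for p2p traffic toward special-use ranges.
P2P_PORT=30303
for net in 0.0.0.0/8 10.0.0.0/8 100.64.0.0/10 169.254.0.0/16 \
           172.16.0.0/12 192.168.0.0/16 224.0.0.0/4 240.0.0.0/4; do
  printf 'iptables -A OUTPUT -p tcp --dport %s -d %s -j DROP\n' "$P2P_PORT" "$net"
done | tee /tmp/p2p_drop_rules.sh
```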
+
+### Run as a separate user - `systemd` example
+
+Running erigon from `build/bin` as a separate user might produce an error:
+
+```sh
+error while loading shared libraries: libsilkworm_capi.so: cannot open shared object file: No such file or directory
+```
+
+The library needs to be *installed* for the other user using `make DIST=<path> install`. You could use `$HOME/erigon`
+or `/opt/erigon` as the installation path, for example:
+
+```sh
+make DIST=/opt/erigon install
+```
+
+### Grab diagnostics for a bug report
+
+- Get stack trace: `kill -SIGUSR1 <pid>`; get trace and stop: `kill -6 <pid>`
+- Get CPU profiling: add `--pprof` flag and run
+ `go tool pprof -png http://127.0.0.1:6060/debug/pprof/profile\?seconds\=20 > cpu.png`
+- Get RAM profiling: add `--pprof` flag and run
+ `go tool pprof -inuse_space -png http://127.0.0.1:6060/debug/pprof/heap > mem.png`
+
+### Run local devnet
+
+ 🔬 Detailed explanation is [here](/docs/DEV_CHAIN.md).
+
+### Docker permissions error
+
+Docker runs Erigon as user `erigon` with UID/GID 1000 (for security reasons); you can see this user being created in
+the Dockerfile. You can fix the error by giving a host user ownership of the data folder, where that host user's
+UID/GID matches the Docker user's UID/GID (1000). More details in
+[this post](https://www.fullstaq.com/knowledge-hub/blogs/docker-and-the-host-filesystem-owner-matching-problem)
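+
+A minimal sketch of the fix (the data directory below is the default; point it at your actual datadir — the `chown` needs root, so it is shown commented out):
+
+```sh
+# Host-side directory that will be mounted into the container.
+DATADIR="${XDG_DATA_HOME:-$HOME/.local/share}/erigon"
+mkdir -p "$DATADIR"
+# Give the container's user (UID/GID 1000) ownership of the host folder:
+# sudo chown -R 1000:1000 "$DATADIR"
+# Inspect the ownership the container will see:
+stat -c '%u:%g' "$DATADIR"
+```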
+
+### Public RPC
+
+- `--txpool.nolocals=true`
+- don't add `admin` in `--http.api` list
+- `--http.corsdomain="*"` is bad practice: set an exact hostname or IP
+- protect against DoS by reducing `--rpc.batch.concurrency` and `--rpc.batch.limit`
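+
+Put together, a hardened public-RPC invocation might look like the sketch below (the API list, CORS origin, and batch limits are illustrative values, not recommendations):
+
+```sh
+erigon \
+  --txpool.nolocals=true \
+  --http.api=eth,net,web3 \
+  --http.corsdomain="https://app.example.com" \
+  --rpc.batch.concurrency=2 \
+  --rpc.batch.limit=50
+```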
+
+### Why doesn't my full node have earlier blocks data?
+
+- `prune.mode=full` no longer downloads pre-merge blocks (see [partial history expiry](https://blog.ethereum.org/2025/07/08/partial-history-exp)).
+  It now stores only post-merge block data (i.e. blocks and transactions)
+- To include pre-merge block data, use `--prune.mode=blocks` (all block data + only recent state data) or `--prune.mode=archive` (all data)
+
+### RaspberryPI
+
+https://github.com/mathMakesArt/Erigon-on-RPi-4
+
+### Run all components by docker-compose
+
+Docker allows for building and running Erigon via containers. This alleviates the need for installing build dependencies
+onto the host OS.
+
+#### Optional: Setup dedicated user
+
+The user UID/GID need to be synchronized between the host OS and the container so files are written with the correct
+permissions.
+
+You may wish to set up a dedicated user/group on the host OS, in which case the following `make` targets are available.
+
+```sh
+# create "erigon" user
+make user_linux
+# or
+make user_macos
+```
+
+#### Environment Variables
+
+There is a `.env.example` file in the root of the repo.
+
+* `DOCKER_UID` - The UID of the docker user
+* `DOCKER_GID` - The GID of the docker user
+* `XDG_DATA_HOME` - The data directory which will be mounted to the docker containers
+
+If not specified, the UID/GID will use the current user.
+
+A good choice for `XDG_DATA_HOME` is to use the `~erigon/.ethereum` directory created by helper
+targets `make user_linux` or `make user_macos`.
+
+#### Run
+
+Check permissions: In all cases, `XDG_DATA_HOME` (specified or default) must be writable by the user UID/GID in docker,
+which will be determined by the `DOCKER_UID` and `DOCKER_GID` at build time. If a build or service startup is failing
+due to permissions, check that all the directories, UID, and GID controlled by these environment variables are correct.
+
+The next command starts Erigon on port 30303, rpcdaemon on port 8545, prometheus on port 9090, and grafana on port 3000.
+
+```sh
+#
+# Will mount ~/.local/share/erigon to /home/erigon/.local/share/erigon inside container
+#
+make docker-compose
+
+#
+# or
+#
+# if you want to use a custom data directory
+# or, if you want to use different uid/gid for a dedicated user
+#
+# To solve this, pass in the uid/gid parameters into the container.
+#
+# DOCKER_UID: the user id
+# DOCKER_GID: the group id
+# XDG_DATA_HOME: the data directory (default: ~/.local/share)
+#
+# Note: /preferred/data/folder must be read/writeable on host OS by user with UID/GID given
+# if you followed above instructions
+#
+# Note: uid/gid syntax below will automatically use uid/gid of running user so this syntax
+# is intended to be run via the dedicated user setup earlier
+#
+DOCKER_UID=$(id -u) DOCKER_GID=$(id -g) XDG_DATA_HOME=/preferred/data/folder DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 make docker-compose
+
+#
+# if you want to run the docker, but you are not logged in as the $ERIGON_USER
+# then you'll need to adjust the syntax above to grab the correct uid/gid
+#
+# To run the command via another user, use
+#
+ERIGON_USER=erigon
+sudo -u ${ERIGON_USER} DOCKER_UID=$(id -u ${ERIGON_USER}) DOCKER_GID=$(id -g ${ERIGON_USER}) XDG_DATA_HOME=~${ERIGON_USER}/.ethereum DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 make docker-compose
+```
+
+The Makefile creates the initial directories for erigon, prometheus and grafana. The PID namespace is shared between
+erigon and rpcdaemon, which is required to open Erigon's DB from another process (RPCDaemon local mode).
+See: https://github.com/erigontech/erigon/pull/2392/files
+
+If your docker installation requires the docker daemon to run as root (which is the default), you will need to prefix
+the command above with `sudo`. However, for security reasons it is sometimes recommended to run docker (and therefore
+its containers) as a non-root user. For more information about how to do this, refer to
+[this article](https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user).
+
+### How to change db pagesize
+
+[post](https://github.com/erigontech/erigon/blob/main/cmd/integration/Readme.md#copy-data-to-another-db)
+
+### Erigon3 perf tricks
+
+- on BorMainnet this may help: `--sync.loop.block.limit=10_000`
+- on cloud drives (good throughput, bad latency) you can enable the OS read-ahead: `SNAPSHOT_MADV_RND=false`
+- you can lock the latest state in RAM to prevent it from being evicted (so the node can serve heavy historical RPC
+  traffic without impacting chain-tip performance):
+
+```
+vmtouch -vdlw /mnt/erigon/snapshots/domain/*bt
+ls /mnt/erigon/snapshots/domain/*.kv | parallel vmtouch -vdlw
+
+# if it fails with "can't allocate memory", try:
+sync && sudo sysctl vm.drop_caches=3
+echo 1 > /proc/sys/vm/compact_memory
+```
+
+### Windows
+
+Windows users may run erigon in 3 possible ways:
+
+* Build executable binaries natively for Windows using the provided `wmake.ps1` PowerShell script. Usage syntax is the
+  same as the `make` command, so you run `.\wmake.ps1 [-target] <target>`. Example: `.\wmake.ps1 erigon` builds the
+  erigon executable. All binaries are placed in the `.\build\bin\` subfolder. There are some requirements for a
+  successful native build on Windows:
+  * [Git](https://git-scm.com/downloads) for Windows must be installed. If you're cloning this repository, it's very
+    likely you already have it.
+  * [GO Programming Language](https://golang.org/dl/) must be installed. The minimum required version is 1.24.
+  * GNU CC Compiler at least version 13 (it is highly suggested that you install the `chocolatey` package manager - see
+    the following point).
+  * If you need to build MDBX tools (i.e. `.\wmake.ps1 db-tools`), then
+    the [Chocolatey package manager](https://chocolatey.org/) for Windows must be installed. Via Chocolatey, install
+    the following components: `cmake`, `make`, `mingw` with `choco install cmake make mingw`. Make sure the Windows
+    System "Path" variable contains:
+    `C:\ProgramData\chocolatey\lib\mingw\tools\install\mingw64\bin`
+
+  **Important note about Anti-Viruses**
+  During MinGW's compiler detection phase some temporary executables are generated to test compiler capabilities. It
+  has been reported that some anti-virus programs detect those files as possibly infected with the `Win64/Kryptic.CIS`
+  trojan horse (or a variant of it). Although these are false positives, we have no control over the 100+ vendors of
+  security products for Windows and their respective detection algorithms, and we understand this might make your
+  experience with Windows builds uncomfortable. To work around the issue, you can either set an antivirus exclusion
+  for the `build\bin\mdbx\CMakeFiles` sub-folder of the cloned repo, or run erigon using one of the following two
+  options:
+
+* Use Docker : see [docker-compose.yml](./docker-compose.yml)
+
+* Use WSL (Windows Subsystem for Linux), **strictly version 2**. Under this option you can build Erigon just as you
+  would on a regular Linux distribution. You can also point your data to any of the mounted Windows partitions
+  (e.g. `/mnt/c/[...]`, `/mnt/d/[...]`), but in that case be advised that performance is impacted: those mount points
+  use `DrvFS`, which is
+  a [network file system](https://github.com/erigontech/erigon?tab=readme-ov-file#cloud-network-drives)
+  and, additionally, MDBX locks the db for exclusive access, which implies only one process at a time can access the
+  data. This affects the running of `rpcdaemon`, which has to be configured as a [Remote DB](#json-rpc-daemon) even if
+  it is executed on the very same computer. If instead your data is hosted on the native Linux filesystem, no
+  limitations apply.
+  **Please also note the default WSL2 environment has its own IP address, which does not match the one of the network
+  interface of the Windows host: take this into account when configuring NAT for port 30303 on your router.**
+
+Getting in touch
+================
+
+### Reporting security issues/concerns
+
+Send an email to `security [at] torquem.ch`.
\ No newline at end of file
diff --git a/data/readmes/etcd-v366.md b/data/readmes/etcd-v366.md
new file mode 100644
index 0000000..1ef8abb
--- /dev/null
+++ b/data/readmes/etcd-v366.md
@@ -0,0 +1,212 @@
+# etcd - README (v3.6.6)
+
+**Repository**: https://github.com/etcd-io/etcd
+**Version**: v3.6.6
+
+---
+
+# etcd
+
+[](https://goreportcard.com/report/github.com/etcd-io/etcd)
+[](https://app.codecov.io/gh/etcd-io/etcd/tree/main)
+[](https://github.com/etcd-io/etcd/actions/workflows/tests.yaml)
+[](https://github.com/etcd-io/etcd/actions/workflows/codeql-analysis.yml)
+[](https://etcd.io/docs)
+[](https://godocs.io/go.etcd.io/etcd/v3)
+[](https://github.com/etcd-io/etcd/releases)
+[](https://github.com/etcd-io/etcd/blob/main/LICENSE)
+[](https://scorecard.dev/viewer/?uri=github.com/etcd-io/etcd)
+
+**Note**: The `main` branch may be in an *unstable or even broken state* during development. For stable versions, see [releases][github-release].
+
+
+
+
+
+
+
+etcd is a distributed reliable key-value store for the most critical data of a distributed system, with a focus on being:
+
+* *Simple*: well-defined, user-facing API (gRPC)
+* *Secure*: automatic TLS with optional client cert authentication
+* *Fast*: benchmarked 10,000 writes/sec
+* *Reliable*: properly distributed using Raft
+
+etcd is written in Go and uses the [Raft][] consensus algorithm to manage a highly-available replicated log.
+
+etcd is used [in production by many companies](./ADOPTERS.md), and the development team stands behind it in critical deployment scenarios, where etcd is frequently teamed with applications such as [Kubernetes][k8s], [locksmith][], [vulcand][], [Doorman][], and many others. Reliability is further ensured by rigorous [**robustness testing**](https://github.com/etcd-io/etcd/tree/main/tests/robustness).
+
+See [etcdctl][etcdctl] for a simple command line client.
+
+
+
+Original image credited to xkcd.com/2347, alterations by Josh Berkus.
+
+[raft]: https://raft.github.io/
+[k8s]: http://kubernetes.io/
+[doorman]: https://github.com/youtube/doorman
+[locksmith]: https://github.com/coreos/locksmith
+[vulcand]: https://github.com/vulcand/vulcand
+[etcdctl]: https://github.com/etcd-io/etcd/tree/main/etcdctl
+
+## Documentation
+
+The most common API documentation you'll need can be found here:
+
+* [go.etcd.io/etcd/api/v3](https://godocs.io/go.etcd.io/etcd/api/v3)
+* [go.etcd.io/etcd/client/pkg/v3](https://godocs.io/go.etcd.io/etcd/client/pkg/v3)
+* [go.etcd.io/etcd/client/v3](https://godocs.io/go.etcd.io/etcd/client/v3)
+* [go.etcd.io/etcd/etcdctl/v3](https://godocs.io/go.etcd.io/etcd/etcdctl/v3)
+* [go.etcd.io/etcd/pkg/v3](https://godocs.io/go.etcd.io/etcd/pkg/v3)
+* [go.etcd.io/etcd/raft/v3](https://godocs.io/go.etcd.io/etcd/raft/v3)
+* [go.etcd.io/etcd/server/v3](https://godocs.io/go.etcd.io/etcd/server/v3)
+
+## Maintainers
+
+[Maintainers](OWNERS) strive to shape an inclusive open source project culture where users are heard and contributors feel respected and empowered. Maintainers aim to build productive relationships across different companies and disciplines. Read more about [Maintainers role and responsibilities](Documentation/contributor-guide/community-membership.md#maintainers).
+
+## Getting started
+
+### Getting etcd
+
+The easiest way to get etcd is to use one of the pre-built release binaries which are available for OSX, Linux, Windows, and Docker on the [release page][github-release].
+
+For more installation guides, please check out [play.etcd.io](http://play.etcd.io) and [operating etcd](https://etcd.io/docs/latest/op-guide).
+
+[github-release]: https://github.com/etcd-io/etcd/releases
+
+### Running etcd
+
+First start a single-member cluster of etcd.
+
+If etcd is installed using the [pre-built release binaries][github-release], run it from the installation location as below:
+
+```bash
+/tmp/etcd-download-test/etcd
+```
+
+Alternatively, move the etcd binary onto the system path as below and run the command directly:
+
+```bash
+mv /tmp/etcd-download-test/etcd /usr/local/bin/
+etcd
+```
+
+This will bring up etcd listening on port 2379 for client communication and on port 2380 for server-to-server communication.
+
+Next, let's set a single key, and then retrieve it:
+
+```bash
+etcdctl put mykey "this is awesome"
+etcdctl get mykey
+```
+
+etcd is now running and serving client requests. For more, please check out:
+
+* [Interactive etcd playground](http://play.etcd.io)
+* [Animated quick demo](https://etcd.io/docs/latest/demo)
+
+### etcd TCP ports
+
+The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication.
+
+[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
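+
+A quick bash-only way to check whether a local member is listening on those ports (uses bash's `/dev/tcp`; it simply reports `closed` when etcd is not running):
+
+```bash
+status=""
+for port in 2379 2380; do
+  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
+    status="$status $port:open"   # something accepted the TCP connection
+  else
+    status="$status $port:closed" # nothing listening on this port
+  fi
+done
+echo "etcd ports:$status"
+```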
+
+### Running a local etcd cluster
+
+First install [goreman](https://github.com/mattn/goreman), which manages Procfile-based applications.
+
+Our [Procfile script](./Procfile) will set up a local example cluster. Start it with:
+
+```bash
+goreman start
+```
+
+This will bring up 3 etcd members (`infra1`, `infra2` and `infra3`) and, optionally, an etcd `grpc-proxy`, all running locally and composing a cluster.
+
+Every cluster member and proxy accepts key value reads and key value writes.
+
+Follow the comments in [Procfile script](./Procfile) to add a learner node to the cluster.
+
+### Install etcd client v3
+
+```bash
+go get go.etcd.io/etcd/client/v3
+```
+
+### Next steps
+
+Now it's time to dig into the full etcd API and other guides.
+
+* Read the full [documentation].
+* Review etcd [frequently asked questions].
+* Explore the full gRPC [API].
+* Set up a [multi-machine cluster][clustering].
+* Learn the [config format, env variables and flags][configuration].
+* Find [language bindings and tools][integrations].
+* Use TLS to [secure an etcd cluster][security].
+* [Tune etcd][tuning].
+
+[documentation]: https://etcd.io/docs/latest
+[api]: https://etcd.io/docs/latest/learning/api
+[clustering]: https://etcd.io/docs/latest/op-guide/clustering
+[configuration]: https://etcd.io/docs/latest/op-guide/configuration
+[integrations]: https://etcd.io/docs/latest/integrations
+[security]: https://etcd.io/docs/latest/op-guide/security
+[tuning]: https://etcd.io/docs/latest/tuning
+
+## Contact
+
+* Email: [etcd-dev](https://groups.google.com/g/etcd-dev)
+* Slack: [#sig-etcd](https://kubernetes.slack.com/archives/C3HD8ARJ5) channel on Kubernetes ([get an invite](http://slack.kubernetes.io/))
+* [Community meetings](#community-meetings)
+
+### Community meetings
+
+etcd contributors and maintainers meet every week at `11:00` AM (USA Pacific) on Thursday and meetings alternate between community meetings and issue triage meetings. Meeting agendas are recorded in a [shared Google doc][shared-meeting-notes] and everyone is welcome to suggest additional topics or other agendas.
+
+Issue triage meetings are aimed at getting through our backlog of PRs and Issues. Triage meetings are open to any contributor; you don't have to be a reviewer or approver to help out! They can also be a good way to get started contributing.
+
+The meeting lead role is rotated for each meeting between etcd maintainers or sig-etcd leads and is recorded in a [shared Google sheet][shared-rotation-sheet].
+
+Meeting recordings are uploaded to the official etcd [YouTube channel].
+
+Get calendar invitations by joining [etcd-dev](https://groups.google.com/g/etcd-dev) mailing group.
+
+Join the CNCF-funded Zoom channel: [zoom.us/my/cncfetcdproject](https://zoom.us/my/cncfetcdproject)
+
+[shared-meeting-notes]: https://docs.google.com/document/d/16XEGyPBisZvmmoIHSZzv__LoyOeluC5a4x353CX0SIM/edit
+[shared-rotation-sheet]: https://docs.google.com/spreadsheets/d/1jodHIO7Dk2VWTs1IRnfMFaRktS9IH8XRyifOnPdSY8I/edit
+[YouTube channel]: https://www.youtube.com/@etcdio
+
+## Contributing
+
+See [CONTRIBUTING](CONTRIBUTING.md) for details on setting up your development environment, submitting patches and the contribution workflow.
+
+Please refer to [community-membership.md](Documentation/contributor-guide/community-membership.md#member) for information on becoming an etcd project member. We welcome and look forward to your contributions to the project!
+
+Please also refer to [roadmap](Documentation/contributor-guide/roadmap.md) to get more details on the priorities for the next few major or minor releases.
+
+## Reporting bugs
+
+See [reporting bugs](https://github.com/etcd-io/etcd/blob/main/Documentation/contributor-guide/reporting_bugs.md) for details about reporting any issues. Before opening an issue please check it is not covered in our [frequently asked questions].
+
+[frequently asked questions]: https://etcd.io/docs/latest/faq
+
+## Reporting a security vulnerability
+
+See [security disclosure and release process](security/README.md) for details on how to report a security vulnerability and how the etcd team manages it.
+
+## Issue and PR management
+
+See [issue triage guidelines](https://github.com/etcd-io/etcd/blob/main/Documentation/contributor-guide/triage_issues.md) for details on how issues are managed.
+
+See [PR management](https://github.com/etcd-io/etcd/blob/main/Documentation/contributor-guide/triage_prs.md) for guidelines on how pull requests are managed.
+
+## etcd Emeritus Maintainers
+
+etcd [emeritus maintainers](OWNERS) dedicated a part of their career to etcd and reviewed code, triaged bugs and pushed the project forward over a substantial period of time. Their contribution is greatly appreciated.
+
+### License
+
+etcd is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
diff --git a/data/readmes/eventmesh-v1110.md b/data/readmes/eventmesh-v1110.md
new file mode 100644
index 0000000..14f054c
--- /dev/null
+++ b/data/readmes/eventmesh-v1110.md
@@ -0,0 +1,227 @@
+# EventMesh - README (v1.11.0)
+
+**Repository**: https://github.com/apache/eventmesh
+**Version**: v1.11.0
+
+---
+
+
+
+
+# Apache EventMesh
+
+**Apache EventMesh** is a new generation serverless event middleware for building distributed [event-driven](https://en.wikipedia.org/wiki/Event-driven_architecture) applications.
+
+### EventMesh Architecture
+
+
+
+### EventMesh K8S deployment
+
+
+
+## Features
+
+Apache EventMesh has a vast amount of features to help users achieve their goals. Let us share with you some of the key features EventMesh has to offer:
+
+- Built around the [CloudEvents](https://cloudevents.io) specification.
+- Rapidly extensible interconnector layer of [connectors](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors) using [openConnect](https://github.com/apache/eventmesh/tree/master/eventmesh-openconnect), such as sources or sinks for SaaS, cloud services, databases, etc.
+- Rapidly extensible storage layer, such as [Apache RocketMQ](https://rocketmq.apache.org), [Apache Kafka](https://kafka.apache.org), [Apache Pulsar](https://pulsar.apache.org), [RabbitMQ](https://rabbitmq.com), [Redis](https://redis.io).
+- Rapidly extensible meta service, such as [Consul](https://consulproject.org/en/), [Nacos](https://nacos.io), [ETCD](https://etcd.io) and [Zookeeper](https://zookeeper.apache.org/).
+- Guaranteed at-least-once delivery.
+- Deliver events between multiple EventMesh deployments.
+- Event schema management by catalog service.
+- Powerful event orchestration by [Serverless workflow](https://serverlessworkflow.io/) engine.
+- Powerful event filtering and transformation.
+- Rapid, seamless scalability.
+- Easy function development and framework integration.
+
+## Roadmap
+
+Please go to the [roadmap](https://eventmesh.apache.org/docs/roadmap) to get the release history and new features of Apache EventMesh.
+
+## Subprojects
+
+- [EventMesh-site](https://github.com/apache/eventmesh-site): Apache official website resources for EventMesh.
+- [EventMesh-workflow](https://github.com/apache/eventmesh-workflow): Serverless workflow runtime for event Orchestration on EventMesh.
+- [EventMesh-dashboard](https://github.com/apache/eventmesh-dashboard): Operation and maintenance console of EventMesh.
+- [EventMesh-catalog](https://github.com/apache/eventmesh-catalog): Catalog service for event schema management using AsyncAPI.
+- [EventMesh-go](https://github.com/apache/eventmesh-go): A go implementation for EventMesh runtime.
+
+## Quick start
+
+This section of the guide shows you how to deploy EventMesh [locally](#run-eventmesh-runtime-locally), in [Docker](#run-eventmesh-runtime-in-docker), or on [K8s](#run-eventmesh-runtime-in-kubernetes).
+
+It launches EventMesh with the default configuration; if you need more detailed EventMesh deployment steps, please visit the [EventMesh official document](https://eventmesh.apache.org/docs/introduction).
+
+### Deploy an Event Store
+
+> EventMesh supports [multiple Event Stores](https://eventmesh.apache.org/docs/roadmap#event-store-implementation-status); the default storage mode is `standalone`, which does not rely on an external event store.
+
+### Run EventMesh Runtime locally
+
+#### 1. Download EventMesh
+
+Download the latest version of the Binary Distribution from the [EventMesh Download](https://eventmesh.apache.org/download/) page and extract it:
+
+```shell
+wget https://dlcdn.apache.org/eventmesh/1.10.0/apache-eventmesh-1.10.0-bin.tar.gz
+tar -xvzf apache-eventmesh-1.10.0-bin.tar.gz
+cd apache-eventmesh-1.10.0
+```
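+
+You can additionally verify the downloaded archive: Apache mirrors publish a `.sha512` file next to each release (the URL below follows the standard Apache dist layout — adjust the version to match your download):
+
+```shell
+wget https://dlcdn.apache.org/eventmesh/1.10.0/apache-eventmesh-1.10.0-bin.tar.gz.sha512
+sha512sum -c apache-eventmesh-1.10.0-bin.tar.gz.sha512
+```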
+
+#### 2. Run EventMesh
+
+Execute the `start.sh` script to start the EventMesh Runtime server.
+
+```shell
+bash bin/start.sh
+```
+
+View the output log:
+
+```shell
+tail -n 50 -f logs/eventmesh.out
+```
+
+When the log output shows server `state:RUNNING`, it means EventMesh Runtime has started successfully.
+
+You can stop the run with the following command:
+
+```shell
+bash bin/stop.sh
+```
+
+When the script prints `shutdown server ok!`, it means EventMesh Runtime has stopped.
+
+### Run EventMesh Runtime in Docker
+
+#### 1. Pull EventMesh Image
+
+Use the following command line to download the latest version of [EventMesh](https://hub.docker.com/r/apache/eventmesh):
+
+```shell
+sudo docker pull apache/eventmesh:latest
+```
+
+#### 2. Run and Manage EventMesh Container
+
+Use the following command to start the EventMesh container:
+
+```shell
+sudo docker run -d --name eventmesh -p 10000:10000 -p 10105:10105 -p 10205:10205 -p 10106:10106 -t apache/eventmesh:latest
+```
+
+
+Enter the container:
+
+```shell
+sudo docker exec -it eventmesh /bin/bash
+```
+
+View the log:
+
+```shell
+cd logs
+tail -n 50 -f eventmesh.out
+```
+
+### Run EventMesh Runtime in Kubernetes
+
+#### 1. Deploy operator
+
+Run the following commands (to delete a deployment, simply replace `deploy` with `undeploy`):
+
+```shell
+$ cd eventmesh-operator && make deploy
+```
+
+Run `kubectl get pods` and `kubectl get crd | grep eventmesh-operator.eventmesh` to see the status of the deployed eventmesh-operator.
+
+```shell
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+eventmesh-operator-59c59f4f7b-nmmlm 1/1 Running 0 20s
+
+$ kubectl get crd | grep eventmesh-operator.eventmesh
+connectors.eventmesh-operator.eventmesh 2024-01-10T02:40:27Z
+runtimes.eventmesh-operator.eventmesh 2024-01-10T02:40:27Z
+```
+
+#### 2. Deploy EventMesh Runtime
+
+Execute the following command to deploy the runtime and connector-rocketmq (to delete, simply replace `create` with `delete`):
+
+```shell
+$ make create
+```
+
+Run `kubectl get pods` to see if the deployment was successful.
+
+```shell
+NAME READY STATUS RESTARTS AGE
+connector-rocketmq-0 1/1 Running 0 9s
+eventmesh-operator-59c59f4f7b-nmmlm 1/1 Running 0 3m12s
+eventmesh-runtime-0-a-0 1/1 Running 0 15s
+```
+
+## Contributing
+
+Each contributor has played an important role in promoting the robust development of Apache EventMesh. We sincerely appreciate all contributors who have contributed code and documents.
+
+- [Contributing Guideline](https://eventmesh.apache.org/community/contribute/contribute)
+- [Good First Issues](https://github.com/apache/eventmesh/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
+
+
+## CNCF Landscape
+
+
+
+## License
+
+Apache EventMesh is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.html).
+
+## Community
+
+| WeChat Assistant | WeChat Public Account | Slack |
+|---------------------------------------------------------|--------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
+| | | [Join Slack Chat](https://join.slack.com/t/the-asf/shared_invite/zt-1y375qcox-UW1898e4kZE_pqrNsrBM2g)(Please open an issue if this link is expired) |
+
+Bi-weekly meeting : [#Tencent meeting](https://meeting.tencent.com/dm/wes6Erb9ioVV) : 346-6926-0133
+
+Bi-weekly meeting record : [bilibili](https://space.bilibili.com/1057662180)
+
+### Mailing List
+
+| Name | Description | Subscribe | Unsubscribe | Archive |
+|-------------|---------------------------------------------------------|------------------------------------------------------------|----------------------------------------------------------------|----------------------------------------------------------------------------------|
+| Users | User discussion | [Subscribe](mailto:users-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:users-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?users@eventmesh.apache.org) |
+| Development | Development discussion (Design Documents, Issues, etc.) | [Subscribe](mailto:dev-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:dev-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?dev@eventmesh.apache.org) |
+| Commits | Commits to related repositories | [Subscribe](mailto:commits-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:commits-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?commits@eventmesh.apache.org) |
+| Issues | Issues or PRs comments and reviews | [Subscribe](mailto:issues-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:issues-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?issues@eventmesh.apache.org) |
diff --git a/data/readmes/external-secrets-v111.md b/data/readmes/external-secrets-v111.md
new file mode 100644
index 0000000..da35506
--- /dev/null
+++ b/data/readmes/external-secrets-v111.md
@@ -0,0 +1,83 @@
+# external-secrets - README (v1.1.1)
+
+**Repository**: https://github.com/external-secrets/external-secrets
+**Version**: v1.1.1
+
+---
+
+
+
+
+
+# External Secrets
+
+
+[](https://bestpractices.coreinfrastructure.org/projects/5947)
+[](https://securityscorecards.dev/viewer/?uri=github.com/external-secrets/external-secrets)
+[](https://goreportcard.com/report/github.com/external-secrets/external-secrets)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fexternal-secrets%2Fexternal-secrets?ref=badge_shield)
+
+
+
+**External Secrets Operator** is a Kubernetes operator that integrates external
+secret management systems like [AWS Secrets
+Manager](https://aws.amazon.com/secrets-manager/), [HashiCorp
+Vault](https://www.vaultproject.io/), [Google Secrets
+Manager](https://cloud.google.com/secret-manager), [Azure Key
+Vault](https://azure.microsoft.com/en-us/services/key-vault/), [IBM Cloud Secrets Manager](https://www.ibm.com/cloud/secrets-manager), [Akeyless](https://akeyless.io), [CyberArk Secrets Manager](https://www.cyberark.com/products/secrets-management/), [Pulumi ESC](https://www.pulumi.com/product/esc/) and many more. The
+operator reads information from external APIs and automatically injects the
+values into a [Kubernetes
+Secret](https://kubernetes.io/docs/concepts/configuration/secret/).
+
+Multiple people and organizations are joining efforts to create a single External Secrets solution based on existing projects. If you are curious about the origins of this project, check out [this issue](https://github.com/external-secrets/kubernetes-external-secrets/issues/47) and [this PR](https://github.com/external-secrets/kubernetes-external-secrets/pull/477).
+
+## Documentation
+
+External Secrets Operator guides and reference documentation is available at [external-secrets.io](https://external-secrets.io). Also see our [stability and support](https://external-secrets.io/main/introduction/stability-support/) policy.
+
+## Contributing
+
+We welcome and encourage contributions to this project! Please read the [Developer](https://www.external-secrets.io/main/contributing/devguide/) and [Contribution process](https://www.external-secrets.io/main/contributing/process/) guides. Also make sure to check the [Code of Conduct](https://www.external-secrets.io/main/contributing/coc/) and adhere to its guidelines.
+
+Also, please take a look at our [Contribution Ladder](CONTRIBUTOR_LADDER.md) for a _very_ detailed explanation of what roles and tracks are available for people who want to help this project.
+
+### Sponsoring
+
+Please consider sponsoring this project. There are many ways you can help us: engineering time, providing infrastructure, donating money, etc. We are open to cooperation; feel free to approach us and we can discuss what this could look like. We can keep your contribution anonymized if required (depending on the type of contribution), and anonymous donations are possible through [Opencollective](https://opencollective.com/external-secrets-org).
+
+## Bi-weekly Development Meeting
+
+We host our development meeting every odd Wednesday on [Zoom](https://zoom-lfx.platform.linuxfoundation.org/meeting/92843470602?password=b953d8fb-825b-48ae-8fd7-226e498cc316). We run the meeting at alternating times, [8:00 PM Berlin Time](https://dateful.com/time-zone-converter?t=20:00&tz=Europe/Berlin) and [1:00 PM Berlin Time](https://dateful.com/time-zone-converter?t=13:00&tz=Europe/Berlin). Be sure to check the [CNCF Calendar](https://zoom-lfx.platform.linuxfoundation.org/meetings/externalsecretsoperator?view=month) to see when the next meeting is scheduled; we'll also announce the time in our [Kubernetes Slack channel](https://kubernetes.slack.com/messages/external-secrets).
+Meeting notes are recorded on [this google document](https://docs.google.com/document/d/1etFaDlLd01PUWuMlAwCXnpUg85QiTkNjw0SHu-rQjDs/).
+
+Anyone is welcome to join. Feel free to ask questions, request feedback, raise awareness for an issue, or just say hi. ;)
+
+## Security
+
+Please report vulnerabilities by email to cncf-ExternalSecretsOp-maintainers@lists.cncf.io. Also see our [SECURITY.md file](SECURITY.md) for details.
+
+## Software bill of materials
+We attach SBOM and provenance files to our GitHub releases. They are also attached to our container images.
+
+## Adopters
+
+Please create a PR and add your company or project to our [ADOPTERS.md file](ADOPTERS.md) if you are using our project!
+
+## Roadmap
+
+You can find the roadmap in our documentation: https://external-secrets.io/main/contributing/roadmap/
+
+## Kicked off by
+
+
+
+## Sponsored by
+
+
+
+
+
+
+
+## License
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fexternal-secrets%2Fexternal-secrets?ref=badge_large)
diff --git a/data/readmes/fabedge-v081.md b/data/readmes/fabedge-v081.md
new file mode 100644
index 0000000..38b042e
--- /dev/null
+++ b/data/readmes/fabedge-v081.md
@@ -0,0 +1,102 @@
+# FabEdge - README (v0.8.1)
+
+**Repository**: https://github.com/FabEdge/fabedge
+**Version**: v0.8.1
+
+---
+
+# FabEdge
+
+[](https://github.com/FabEdge/fabedge/actions/workflows/main.yml)
+[](https://github.com/fabedge/fabedge/releases)
+[](https://github.com/FabEdge/fabedge/blob/main/LICENSE)
+
+
+
+English | [中文](README_zh.md)
+
+
+FabEdge is a secure container networking solution based on Kubernetes, focused on edge computing. It enables cloud-edge and edge-edge collaboration and solves problems such as complex configuration management, network isolation, and lack of awareness of the underlying topology. It supports weak networks, such as 4G/5G and WiFi. The main use cases are IoT, IoV, smart cities, etc.
+
+FabEdge supports the major edge computing frameworks, like KubeEdge, SuperEdge, and OpenYurt.
+
+FabEdge not only supports edge nodes (remote nodes joined to the cluster via an edge computing framework such as KubeEdge), but also edge clusters (standalone K8S clusters).
+
+FabEdge is a sandbox project of the Cloud Native Computing Foundation (CNCF).
+
+
+## Features
+* **Kubernetes Native**: Compatible with Kubernetes, transparent to applications.
+
+* **Automatic Configuration Management**: the addresses, certificates, endpoints, tunnels, etc. are automatically managed.
+
+* **Cloud-Edge/Edge-Edge Collaboration**: Secure tunnels between cloud-edge, edge-edge nodes for synergy.
+
+* **Topology-aware Service Discovery**: reduces service access latency, by using the nearest available service endpoint.
+
+
+## Advantages
+
+- **Standard**: suitable for any protocol, any application.
+- **Secure**: Uses mature and stable IPSec technology, and a secure certificate-based authentication system.
+- **Easy to use**: Adopts the `Operator` pattern to automatically manage addresses, nodes, certificates, etc., minimizing human intervention.
+
+
+## How it works
+
+
+* The cloud can be any Kubernetes cluster with a supported CNI network plug-in, such as Calico or Flannel.
+* FabEdge builds a layer-3 data plane with tunnels, in addition to the control plane managed by KubeEdge, SuperEdge, OpenYurt, etc.
+* FabEdge consists of **Operators, Connector, Agent, Cloud-Agent**.
+* Operator monitors k8s resources such as nodes, services, and endpoints in the cloud, and creates a configmap for each edge node containing configuration information such as the subnet, tunnel, and load-balancing rules. The operator is also responsible for managing the lifecycle of the agent pod on each edge node.
+* Connector is responsible for terminating the tunnels from edge nodes and forwarding traffic between the cloud and the edge. It relies on the cloud CNI plug-in to forward traffic to the other, non-connector nodes in the cloud.
+* Cloud-Agent runs on the non-connector nodes in the cluster and manages the routes to remote peers.
+* Each edge node runs an agent that consumes its own configmap and performs the following functions:
+ - Manage the configuration file of the CNI plug-in of this node
+ - Manage the tunnels of this node
+ - Manage the load balancing rules of this node
+
+* Fab-DNS runs in all the clusters, to provide the topology-aware service discovery capability by intercepting the DNS queries.
+
+
+## FabEdge vs. Calico/Flannel/etc
+
+FabEdge is not meant to replace traditional Kubernetes network plugins such as Calico or Flannel. As shown in the architecture diagram above, Calico/Flannel is used within the cloud for communication between cloud nodes, while FabEdge complements it for edge-cloud and edge-edge communication.
+
+## Documentation
+
+* [Getting started](docs/get-started.md)
+* [User Guide](docs/user-guide.md)
+* [FAQ](./docs/FAQ.md)
+* [Uninstall FabEdge](docs/uninstall.md)
+* [Troubleshooting](docs/troubleshooting-guide.md)
+
+
+## Meeting
+A regular community meeting is held on the 2nd and 4th Thursday of every month.
+
+Resources:
+- [Meeting notes and agenda](https://shimo.im/docs/Wwt9TdGqgVvpDHJt)
+- [Meeting recordings: bilibili channel](https://space.bilibili.com/524926244?spm_id_from=333.1007.0.0)
+
+## Contact
+For any questions, feel free to reach us in any of the following ways:
+
+- Email: fabedge@beyondcent.com
+- Slack: [#fabedge](https://cloud-native.slack.com/archives/C03AD0TFPFF)
+- Scan the QR code to join the WeChat group
+
+
+
+
+
+## Contributing
+
+If you're interested in being a contributor and want to get involved in developing the FabEdge code, please see [CONTRIBUTING](./CONTRIBUTING.md) for details on submitting patches and the contribution workflow.
+
+Please make sure to read and observe our [Code of Conduct](https://github.com/FabEdge/fabedge/blob/main/CODE_OF_CONDUCT.md).
+
+
+## License
+FabEdge is under the Apache 2.0 license. See the [LICENSE](https://github.com/FabEdge/fabedge/blob/main/LICENSE) file for details.
+
diff --git a/data/readmes/falco-0421.md b/data/readmes/falco-0421.md
new file mode 100644
index 0000000..e4fe8df
--- /dev/null
+++ b/data/readmes/falco-0421.md
@@ -0,0 +1,154 @@
+# Falco - README (0.42.1)
+
+**Repository**: https://github.com/falcosecurity/falco
+**Version**: 0.42.1
+
+---
+
+# Falco
+
+[](https://github.com/falcosecurity/falco/releases/latest) [](https://github.com/falcosecurity/falco/releases/latest) [](COPYING) [](https://falco.org/docs)
+
+[](https://github.com/falcosecurity/evolution/blob/main/REPOSITORIES.md#core-scope) [](https://github.com/falcosecurity/evolution/blob/main/REPOSITORIES.md#stable) [](https://scorecard.dev/viewer/?uri=github.com/falcosecurity/falco) [](https://bestpractices.coreinfrastructure.org/projects/2317)
+
+[](https://falco.org)
+
+[Falco](https://falco.org/) is a cloud native runtime security tool for Linux operating systems. It is designed to detect and alert on abnormal behavior and potential security threats in real-time.
+
+At its core, Falco is a kernel monitoring and detection agent that observes events, such as syscalls, based on custom rules. Falco can enhance these events by integrating metadata from the container runtime and Kubernetes. The collected events can be analyzed off-host in SIEM or data lake systems.
+
+Falco, originally created by [Sysdig](https://sysdig.com), is a **graduated project** under the [Cloud Native Computing Foundation](https://cncf.io) (CNCF) used in production by various [organisations](https://github.com/falcosecurity/falco/blob/master/ADOPTERS.md).
+
+For detailed technical information and insights into the cyber threats that Falco can detect, visit the official [Falco](https://falco.org/) website.
+
+For comprehensive information on the latest updates and changes to the project, please refer to the [Change Log](CHANGELOG.md).
+
+## The Falco Project
+
+The Falco Project codebase is maintained under the [falcosecurity GitHub organization](https://github.com/falcosecurity). The primary repository, [falcosecurity/falco](https://github.com/falcosecurity/falco), holds the source code for the Falco binary, while other sub-projects are hosted in dedicated repositories. This approach of isolating components into specialized repositories enhances modularity and focused development. Notable [core repositories](https://github.com/falcosecurity/evolution?tab=readme-ov-file#core) include:
+
+- [falcosecurity/libs](https://github.com/falcosecurity/libs): This repository hosts Falco's core libraries, which constitute the majority of the binary’s source code and provide essential features, such as kernel drivers.
+- [falcosecurity/rules](https://github.com/falcosecurity/rules): It contains the official ruleset for Falco, offering pre-defined detection rules for various security threats and abnormal behaviors.
+- [falcosecurity/plugins](https://github.com/falcosecurity/plugins): This repository supports integration with external services through plugins that extend Falco's capabilities beyond syscalls and container events, with plans for evolving specialized functionalities in future releases.
+- [falcosecurity/falcoctl](https://github.com/falcosecurity/falcoctl): A command-line utility designed for managing and interacting with Falco.
+- [falcosecurity/charts](https://github.com/falcosecurity/charts): This repository provides Helm charts for deploying Falco and its ecosystem, simplifying the installation and management process.
+
+For further insights into our repositories and additional details about our governance model, please visit the official hub of The Falco Project: [falcosecurity/evolution](https://github.com/falcosecurity/evolution).
+
+## Getting Started with Falco
+
+If you're new to Falco, begin your journey with our [Getting Started](https://falco.org/docs/getting-started/) guide. For production deployments, please refer to our comprehensive [Setup](https://falco.org/docs/setup/) documentation.
+
+As final recommendations before deploying Falco, verify environment compatibility, define your detection goals, optimize performance, choose the appropriate build, and plan for SIEM or data lake integration to ensure effective incident response.
+
+### Demo Environment
+
+A demo environment is provided via a docker-compose file that can be started on any Docker host. It includes Falco, Falcosidekick, Falcosidekick-UI, and the required Redis database. For more information, see the [docker-compose section](docker/docker-compose/).
+
+## Join the Community
+
+To get involved with the Falco Project please visit the [Community](https://github.com/falcosecurity/community) repository to find more information and ways to get involved.
+
+If you have any questions about Falco or contributing, do not hesitate to file an issue or contact the Falco maintainers and community members for assistance.
+
+How to reach out?
+
+ - Join the [#falco](https://kubernetes.slack.com/messages/falco) channel on the [Kubernetes Slack](https://slack.k8s.io).
+ - Join the [Falco mailing list](https://lists.cncf.io/g/cncf-falco-dev).
+ - File an [issue](https://github.com/falcosecurity/falco/issues) or make feature requests.
+
+## Commitment to Falco's Own Security
+
+Full reports of various security audits can be found [here](./audits/).
+
+In addition, you can refer to the [falco](https://github.com/falcosecurity/falco/security) and [libs](https://github.com/falcosecurity/libs/security) security sections for detailed updates on security advisories and policies.
+
+To report security vulnerabilities, please follow the community process outlined in the documentation found [here](https://github.com/falcosecurity/.github/blob/main/SECURITY.md).
+
+## Building
+
+For comprehensive, step-by-step instructions on building Falco from source, please refer to the [official documentation](https://falco.org/docs/developer-guide/source/).
+
+## Testing
+
+
+
+The [Build Falco from source](https://falco.org/docs/developer-guide/source/) guide is the go-to resource for building Falco from source. In addition, the [falcosecurity/libs](https://github.com/falcosecurity/libs) repository offers valuable information about testing and debugging Falco's underlying libraries and kernel drivers.
+
+Here's an example of a `cmake` command that will enable everything you need for all unit tests of this repository:
+
+```bash
+cmake \
+-DUSE_BUNDLED_DEPS=ON \
+-DBUILD_LIBSCAP_GVISOR=ON \
+-DBUILD_BPF=ON \
+-DBUILD_DRIVER=ON \
+-DBUILD_FALCO_MODERN_BPF=ON \
+-DCREATE_TEST_TARGETS=ON \
+-DBUILD_FALCO_UNIT_TESTS=ON ..;
+```
+
+Build and run the unit test suite:
+
+```bash
+# Build with all CPU cores but one
+make -j$(($(nproc)-1)) falco_unit_tests
+# Run the tests
+sudo ./unit_tests/falco_unit_tests
+```
+
+Optionally, build the driver of your choice and test run the Falco binary to perform manual tests.
+
+Lastly, the Falco Project has moved its regression tests to [falcosecurity/testing](https://github.com/falcosecurity/testing).
+
+
+
+
+
+
+## How to Contribute
+
+Please refer to the [Contributing](https://github.com/falcosecurity/.github/blob/main/CONTRIBUTING.md) guide and the [Code of Conduct](https://github.com/falcosecurity/evolution/blob/main/CODE_OF_CONDUCT.md) for more information on how to contribute.
+
+## FAQs
+
+### Why is Falco in C++ rather than Go or {language}?
+
+
+
+1. The first lines of code at the base of Falco were written some time ago, when Go didn't yet have the same level of maturity and adoption as it does today.
+2. The Falco execution model is sequential and single-threaded due to the statefulness requirements of the tool, so most of the concurrency-related selling points of the Go runtime would not be leveraged at all.
+3. The Falco code deals with very low-level programming in many places (e.g. some headers are shared with the eBPF probe and the Kernel module), and we all know that interfacing Go with C is possible but brings tons of complexity and tradeoffs to the table.
+4. As a security tool meant to consume an extremely high throughput of events per second, Falco needs to squeeze performance out of all hot paths at runtime and requires deep control over memory allocation, which the Go runtime can't provide (there's also garbage collection involved).
+5. Although Go didn't suit the engineering requirements of the core of Falco, we still thought that it could be a good candidate for writing Falco extensions through the plugin system. This is the main reason we gave special attention and high priority to the development of the plugin-sdk-go.
+6. Go is not a requirement for having statically-linked binaries. In fact, we have provided fully-static Falco builds for a few years now. The only issue with those is that the plugin system can't be supported with the dynamic library model we currently have.
+7. The plugin system has been envisioned to support multiple languages, so on our end maintaining a C-compatible codebase is the best strategy to ensure maximum cross-language compatibility.
+8. In general, plugins have GLIBC requirements/dependencies because they have low-level C bindings required for dynamic loading. A potential solution for the future could be to also support plugins statically linked at compilation time and thus released bundled into the Falco binary. Although no work has started yet in this direction, this would solve most of the issues reported and would provide a totally-static binary too. Of course, this would no longer be compatible with dynamic loading, but it may be a viable solution for our static-build flavor of Falco.
+9. Memory safety is definitely a concern, and we try our best to keep a high level of quality even though C++ is quite error-prone. For instance, we try to use smart pointers whenever possible, we build the libraries with an address sanitizer in our CI, we run Falco through Valgrind before each release, and we have ways to stress-test it to detect performance regressions or unusual memory usage (e.g. https://github.com/falcosecurity/event-generator). On top of that, we also have third parties audit the codebase from time to time. None of this makes for a perfect safety standpoint, of course, but we try to maximize our odds. Go would definitely make our lives easier from this perspective; however, the tradeoffs have never made it worth it so far, due to the points above.
+10. The C++ codebase of falcosecurity/libs, which is at the core of Falco, is quite large and complex. Porting all that code to another language would be a major effort requiring lots of development resources, with a high chance of failure and regression. As such, our approach so far has been to choose refactoring and code polishing instead, until we reach an optimal level of stability, quality, and modularity in that portion of the code. This will allow further development to be smoother and more feasible in the future.
+
+
+
+
+### What's next for Falco?
+
+Stay updated with Falco's evolving capabilities by exploring the [Falco Roadmap](https://github.com/orgs/falcosecurity/projects/5), which provides insights into the features currently under development and planned for future releases.
+
+## License
+
+Falco is licensed to you under the [Apache 2.0](./COPYING) open source license.
+
+## Resources
+
+ - [Governance](https://github.com/falcosecurity/evolution/blob/main/GOVERNANCE.md)
+ - [Code Of Conduct](https://github.com/falcosecurity/evolution/blob/main/CODE_OF_CONDUCT.md)
+ - [Maintainers Guidelines](https://github.com/falcosecurity/evolution/blob/main/MAINTAINERS_GUIDELINES.md)
+ - [Maintainers List](https://github.com/falcosecurity/evolution/blob/main/MAINTAINERS.md)
+ - [Repositories Guidelines](https://github.com/falcosecurity/evolution/blob/main/REPOSITORIES.md)
+ - [Repositories List](https://github.com/falcosecurity/evolution/blob/main/README.md#repositories)
+ - [Adopters List](https://github.com/falcosecurity/falco/blob/master/ADOPTERS.md)
+ - [Release Process](RELEASE.md)
+ - [Setup documentation](https://falco.org/docs/setup/)
+ - [Troubleshooting](https://falco.org/docs/troubleshooting/)
diff --git a/data/readmes/flink-v04-rc1.md b/data/readmes/flink-v04-rc1.md
new file mode 100644
index 0000000..37ceecc
--- /dev/null
+++ b/data/readmes/flink-v04-rc1.md
@@ -0,0 +1,205 @@
+# Flink - README (v0.4-rc1)
+
+**Repository**: https://github.com/apache/flink
+**Version**: v0.4-rc1
+
+---
+
+# Stratosphere
+
+_"Big Data looks tiny from Stratosphere."_
+
+Stratosphere is a next-generation Big Data Analytics Platform. It combines the strengths of MapReduce/Hadoop with powerful programming abstractions in Java and Scala, and a high performance runtime. Stratosphere has native support for iterations, delta iterations, and programs consisting of workflows of many operations.
+
+Learn more about Stratosphere at http://stratosphere.eu
+
+## Start writing a Stratosphere Job
+If you just want to get started with Stratosphere, use the following command to set up an empty Stratosphere Job
+
+```
+curl https://raw.github.com/stratosphere/stratosphere-quickstart/master/quickstart.sh | bash
+```
+The quickstart sample contains everything you need to develop a Stratosphere Job on your computer and run it in a local embedded runtime. No setup needed.
+Further quickstart guides are at http://stratosphere.eu/quickstart/
+
+
+## Build Stratosphere
+Below are three short tutorials that guide you through the first steps: Building, running and developing.
+
+### Build From Source
+
+This tutorial shows how to build Stratosphere on your own system. Please open a bug report if you run into any trouble!
+
+#### Requirements
+* Unix-like environment (We use Linux, Mac OS X, Cygwin)
+* git
+* Maven (at least version 3.0.4)
+* Java 6 or 7
+
+```
+git clone https://github.com/stratosphere/stratosphere.git
+cd stratosphere
+mvn -DskipTests clean package # this will take up to 5 minutes
+```
+
+Stratosphere is now installed in `stratosphere-dist/target`.
+If you're a Debian/Ubuntu user, you'll find a .deb package there. We will continue with the generic case.
+
+ cd stratosphere-dist/target/stratosphere-dist-0.4-SNAPSHOT-bin/stratosphere-0.4-SNAPSHOT/
+
+The directory structure here looks like the contents of the official release distribution.
+
+#### Build for different Hadoop Versions
+This section is for advanced users that want to build Stratosphere for a different Hadoop version, for example for Hadoop Yarn support.
+
+We use the profile activation via properties (-D).
+
+##### Build hadoop v1 (default)
+Build the default (currently hadoop 1.2.1):
+
+    mvn clean package
+
+Build for a specific hadoop v1 version:
+
+    mvn -Dhadoop-one.version=1.1.2 clean package
+
+##### Build hadoop v2 (yarn)
+
+Build for yarn using the default version defined in the pom:
+
+    mvn -Dhadoop.profile=2 clean package
+
+Build for a specific hadoop v2 version:
+
+    mvn -Dhadoop.profile=2 -Dhadoop-two.version=2.1.0-beta clean package
+
+It is necessary to generate separate POMs if you want to deploy to your local repository (`mvn install`) or somewhere else.
+We have a script in `/tools` that generates POMs for the profiles. Use
+
+    mvn -f pom.hadoop2.xml clean install -DskipTests
+
+to put a POM file with the right dependencies into your local repository.
+
+
+### Run your first program
+
+We will run a simple “Word Count” example.
+The easiest way to start Stratosphere on your local machine is the so-called “local mode”:
+
+ ./bin/start-local.sh
+
+Get some test data:
+
+ wget -O hamlet.txt http://www.gutenberg.org/cache/epub/1787/pg1787.txt
+
+Start the job:
+
+ ./bin/stratosphere run --jarfile ./examples/java-record-api-examples-0.4-SNAPSHOT-WordCount.jar --arguments 1 file://`pwd`/hamlet.txt file://`pwd`/wordcount-result.txt
+
+You will find a file called `wordcount-result.txt` in your current directory.
+
+#### Alternative Method: Use the webclient interface
+(And get a nice execution plan overview for free!)
+
+ ./bin/start-local.sh
+ ./bin/start-webclient.sh start
+
+Get some test data:
+
+    wget -O ~/hamlet.txt http://www.gutenberg.org/cache/epub/1787/pg1787.txt
+
+* Point your browser to http://localhost:8080/launch.html. Upload the WordCount jar using the upload form in the lower-right box. The jar is located at `./examples/java-record-api-examples-0.4-SNAPSHOT-WordCount.jar`.
+* Select the WordCount jar from the list of available jars (upper left).
+* Enter the argument line in the lower-left box: `1 file:///hamlet.txt file:///wordcount-result.txt`
+
+* Hit “Run Job”
+
+
+### Eclipse Setup and Debugging
+
+To contribute back to the project or develop your own jobs for Stratosphere, you need a working development environment. We use Eclipse and IntelliJ for development. Here we focus on Eclipse.
+
+If you want to work on the scala code you will need the following plugins:
+
+Eclipse 4.x:
+ * scala-ide: http://download.scala-ide.org/sdk/e38/scala210/stable/site
+ * m2eclipse-scala: http://alchim31.free.fr/m2e-scala/update-site
+ * build-helper-maven-plugin: https://repository.sonatype.org/content/repositories/forge-sites/m2e-extras/0.15.0/N/0.15.0.201206251206/
+
+Eclipse 3.7:
+ * scala-ide: http://download.scala-ide.org/sdk/e37/scala210/stable/site
+ * m2eclipse-scala: http://alchim31.free.fr/m2e-scala/update-site
+ * build-helper-maven-plugin: https://repository.sonatype.org/content/repositories/forge-sites/m2e-extras/0.14.0/N/0.14.0.201109282148/
+
+If you don't have the plugins, the Scala projects will have build errors; you can simply close those projects and ignore them.
+Import the Stratosphere source code using Maven's Import tool:
+ * Select "Import" from the "File"-menu.
+ * Expand "Maven" node, select "Existing Maven Projects", and click "next" button
+ * Select the root directory by clicking on the "Browse" button and navigate to the top folder of the cloned Stratosphere Git repository.
+ * Ensure that all projects are selected and click the "Finish" button.
+
+Create a new Eclipse Project that requires Stratosphere in its Build Path!
+
+Use this skeleton as an entry point for your own jobs; it lets you use Eclipse's “Run as” -> “Java Application” feature:
+
+```java
+public class Tutorial implements Program {
+
+    @Override
+    public Plan getPlan(String... args) {
+        // your parallel program goes here
+    }
+
+    public static void main(String[] args) throws Exception {
+        Tutorial tut = new Tutorial();
+        Plan toExecute = tut.getPlan(args);
+        long runtime = LocalExecutor.execute(toExecute);
+        System.out.println("Runtime: " + runtime);
+        System.exit(0);
+    }
+}
+```
+
+## Support
+Don’t hesitate to ask!
+
+[Open an issue](https://github.com/stratosphere/stratosphere/issues/new) on GitHub if you have found a bug or need any help.
+We also have a [mailing list](https://groups.google.com/d/forum/stratosphere-dev) for both users and developers.
+
+Some of our colleagues are also in the #dima irc channel on freenode.
+
+## Documentation
+
+There is our (old) [official Wiki](https://stratosphere.eu/wiki/doku).
+We are in the process of migrating it to the [GitHub Wiki](https://github.com/stratosphere/stratosphere/wiki/_pages).
+
+Please make edits to the Wiki if you find inconsistencies, or [open an issue](https://github.com/stratosphere/stratosphere/issues/new).
+
+
+## Fork and Contribute
+
+This is an active open-source project. We are always open to people who want to use the system or contribute to it.
+Contact us if you are looking for implementation tasks that fit your skills.
+
+We have a list of [starter jobs](https://github.com/stratosphere/stratosphere/wiki/Starter-Jobs) in our wiki.
+
+We use the GitHub Pull Request system for the development of Stratosphere. Just open a request if you want to contribute.
+
+### What to contribute
+* Bug reports
+* Bug fixes
+* Documentation
+* Tools that ease the use and development of Stratosphere
+* Well-written Stratosphere jobs
+
+
+Let us know if you have created a system that uses Stratosphere, so that we can link to you.
+
+## About
+
+[Stratosphere](http://www.stratosphere.eu) is a DFG-funded research project.
+We combine cutting-edge research outcomes with a stable and usable codebase.
+Decisions are not made behind closed doors: we discuss all changes and plans on our mailing lists and on GitHub.
+
+
+
+
+
+
+
diff --git a/data/readmes/fluentd-v1191.md b/data/readmes/fluentd-v1191.md
new file mode 100644
index 0000000..47d79f9
--- /dev/null
+++ b/data/readmes/fluentd-v1191.md
@@ -0,0 +1,79 @@
+# Fluentd - README (v1.19.1)
+
+**Repository**: https://github.com/fluent/fluentd
+**Version**: v1.19.1
+
+---
+
+Fluentd: Open-Source Log Collector
+===================================
+
+[](https://github.com/fluent/fluentd/actions/workflows/test.yml)
+[](https://github.com/fluent/fluentd/actions/workflows/test-ruby-head.yml)
+[](https://bestpractices.coreinfrastructure.org/projects/1189)
+[](https://scorecard.dev/viewer/?uri=github.com/fluent/fluentd)
+
+[Fluentd](https://www.fluentd.org/) collects events from various data sources and writes them to files, RDBMS, NoSQL, IaaS, SaaS, Hadoop and so on. Fluentd helps you unify your logging infrastructure (Learn more about the [Unified Logging Layer](https://www.fluentd.org/blog/unified-logging-layer)).
+
+
+
+
+
+## Quick Start
+
+ $ gem install fluentd
+ $ fluentd -s conf
+ $ fluentd -c conf/fluent.conf &
+ $ echo '{"json":"message"}' | fluent-cat debug.test
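+
+The `fluentd -s conf` step above generates a sample configuration directory. A minimal `conf/fluent.conf` that would make the `fluent-cat` example work might look like the following sketch (the port and the tag pattern are assumptions based on Fluentd's defaults, not the exact generated sample):
+
+```
+# Accept events over TCP; fluent-cat sends to port 24224 by default
+<source>
+  @type forward
+  port 24224
+</source>
+
+# Print events whose tag matches debug.* (e.g. debug.test) to stdout
+<match debug.**>
+  @type stdout
+</match>
+```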
+
+## Development
+
+### Branch
+
+- master: For v1 development.
+- v0.12: For v0.12. This is a deprecated version; support has already stopped (see https://www.fluentd.org/blog/drop-schedule-announcement-in-2019).
+
+### Prerequisites
+
+- Ruby 3.2 or later
+- git
+
+`git` should be in `PATH`. On Windows, you can use `Github for Windows` and `GitShell` for easy setup.
+
+### Install dependent gems
+
+Use bundler:
+
+ $ gem install bundler
+ $ bundle install --path vendor/bundle
+
+### Run test
+
+ $ bundle exec rake test
+
+You can run specific tests via the `TEST` environment variable:
+
+ $ bundle exec rake test TEST=test/test_specified_path.rb
+ $ bundle exec rake test TEST=test/test_*.rb
+
+## More Information
+
+- Website: https://www.fluentd.org/
+- Documentation: https://docs.fluentd.org/
+- Project repository: https://github.com/fluent
+- Discussion: https://github.com/fluent/fluentd/discussions
+- Slack / Community: https://slack.fluentd.org
+- Newsletters: https://www.fluentd.org/newsletter
+- Author: [Sadayuki Furuhashi](https://github.com/frsyuki)
+- Copyright: 2011-2021 Fluentd Authors
+- License: Apache License, Version 2.0
+
+## Security
+
+A third-party security audit was performed by Cure53; you can see the full report [here](docs/SECURITY_AUDIT.pdf).
+
+See [SECURITY](SECURITY.md) to contact us about vulnerabilities.
+
+## Contributors
+
+Patches contributed by [great developers](https://github.com/fluent/fluentd/contributors).
diff --git a/data/readmes/fluid-v108.md b/data/readmes/fluid-v108.md
new file mode 100644
index 0000000..18934e4
--- /dev/null
+++ b/data/readmes/fluid-v108.md
@@ -0,0 +1,182 @@
+# Fluid - README (v1.0.8)
+
+**Repository**: https://github.com/fluid-cloudnative/fluid
+**Version**: v1.0.8
+
+---
+
+
+
+
+
+
+[](https://opensource.org/licenses/Apache-2.0)
+[](https://circleci.com/gh/fluid-cloudnative/fluid)
+[](https://travis-ci.org/fluid-cloudnative/fluid)
+[](https://codecov.io/gh/fluid-cloudnative/fluid)
+[](https://goreportcard.com/report/github.com/fluid-cloudnative/fluid)
+[](https://artifacthub.io/packages/helm/fluid/fluid)
+[](https://scorecard.dev/viewer/?uri=github.com/fluid-cloudnative/fluid)
+[](https://bestpractices.coreinfrastructure.org/projects/4886)
+[](https://opensource.alibaba.com/contribution_leaderboard/details?projectValue=fluid)
+
+|:date: Community Meeting|
+|------------------|
+|The Fluid project holds a bi-weekly online community meeting. To join, or to watch previous meeting notes and recordings, please see the [meeting schedule](https://github.com/fluid-cloudnative/community/wiki/Meeting-Schedule) and [meeting minutes](https://github.com/fluid-cloudnative/community/wiki/Meeting-Agenda-and-Notes). |
+
+## What is Fluid?
+Fluid is an open source Kubernetes-native Distributed Dataset Orchestrator and Accelerator for data-intensive applications, such as big data and AI applications. It is hosted by the [Cloud Native Computing Foundation](https://cncf.io) (CNCF) as a sandbox project.
+
+For more information, please refer to our papers:
+
+1. **Rong Gu, Kai Zhang, Zhihao Xu, et al. [Fluid: Dataset Abstraction and Elastic Acceleration for Cloud-native Deep Learning Training Jobs](https://ieeexplore.ieee.org/abstract/document/9835158). IEEE ICDE, pp. 2183-2196, May, 2022. (Conference Version)**
+
+2. **Rong Gu, Zhihao Xu, Yang Che, et al. [High-level Data Abstraction and Elastic Data Caching for Data-intensive AI Applications on Cloud-native Platforms](https://ieeexplore.ieee.org/document/10249214). IEEE TPDS, pp. 2946-2964, Vol 34(11), 2023. (Journal Version)**
+
+# Fluid
+English | [简体中文](./README-zh_CN.md)
+
+|  What is NEW! |
+| ------------------------------------------------------------ |
+|**Latest Release**: Oct. 31st, 2025. Fluid v1.0.8. Please check the [CHANGELOG](CHANGELOG.md) for details. |
+|v1.0.7 Release: Sep. 21st, 2025. Fluid v1.0.7. Please check the [CHANGELOG](CHANGELOG.md) for details. |
+|v1.0.6 Release: Jul. 12th, 2025. Fluid v1.0.6. Please check the [CHANGELOG](CHANGELOG.md) for details. |
+| Apr. 27th, 2021. Fluid accepted by **CNCF**! The Fluid project was [accepted as an official CNCF Sandbox Project](https://lists.cncf.io/g/cncf-toc/message/5822) by the CNCF Technical Oversight Committee (TOC) with a majority vote after the review process. A new beginning for Fluid! |
+
+
+
+
+## Features
+
+- __Dataset Abstraction__
+
+ Implements the unified abstraction for datasets from multiple storage sources, with observability features to help users evaluate the need for scaling the cache system.
+
+- __Scalable Cache Runtime__
+
+ Offers a unified access interface for data operations with different runtimes, enabling access to third-party storage systems.
+
+- __Automated Data Operations__
+
+ Provides various automated data operation modes to facilitate integration with automated operations systems.
+
+- __Elasticity and Scheduling__
+
+ Enhances data access performance by combining data caching technology with elastic scaling, portability, observability, and data affinity-scheduling capabilities.
+
+- __Runtime Platform Agnostic__
+
+ Supports a variety of environments and can run different storage clients based on the environment, including native, edge, Serverless Kubernetes clusters, and Kubernetes multi-cluster environments.
+
+## Key Concepts
+
+**Dataset**: A Dataset is a logically related set of data that can be used by computing engines, such as Spark for big data analytics and TensorFlow for AI applications. Intelligently leveraging data often creates core industry value. Managing Datasets may require features in different dimensions, such as security, version management, and data acceleration. We hope to start with data acceleration to support the management of datasets.
+
+**Runtime**: The Runtime enforces dataset isolation and sharing, provides version management, and enables data acceleration. It does so by defining a set of interfaces to handle Datasets throughout their lifecycle, allowing management and acceleration functionality to be implemented behind these interfaces.
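+
+A Dataset and its cache Runtime are declared as ordinary Kubernetes custom resources. The following is a minimal sketch based on the upstream examples, assuming the Alluxio runtime and a WebUFS (HTTP) mount; the names, URL, and cache size are placeholders:
+
+```shell
+# Declare a Dataset backed by a remote HTTP source, plus an AlluxioRuntime
+# that caches it in memory (names and the mountPoint URL are placeholders).
+kubectl apply -f - <<EOF
+apiVersion: data.fluid.io/v1alpha1
+kind: Dataset
+metadata:
+  name: demo
+spec:
+  mounts:
+    - mountPoint: https://mirrors.example.com/data/
+      name: demo
+---
+apiVersion: data.fluid.io/v1alpha1
+kind: AlluxioRuntime
+metadata:
+  name: demo
+spec:
+  replicas: 1
+  tieredstore:
+    levels:
+      - mediumtype: MEM
+        path: /dev/shm
+        quota: 2Gi
+EOF
+```
+
+Once the runtime is ready, Fluid exposes the Dataset as a PVC of the same name that pods can mount directly.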
+
+## Prerequisites
+
+- Kubernetes version > 1.16, with CSI support
+- Golang 1.18+
+- Helm 3
+
+## Quick Start
+
+You can follow our [Get Started](docs/en/userguide/get_started.md) guide to quickly start a testing Kubernetes cluster.
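+
+For a cluster you already have, installation is typically done with Helm; a sketch (the chart repository URL follows the project's published charts, and the release name is a placeholder):
+
+```shell
+# Install Fluid's control plane with Helm 3, then confirm its pods are up.
+helm repo add fluid https://fluid-cloudnative.github.io/charts
+helm repo update
+helm install fluid fluid/fluid
+kubectl get pods -n fluid-system
+```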
+
+## Documentation
+
+See our documentation at [docs](docs/README.md) for more in-depth installation instructions and guidance for production:
+
+- [English](docs/en/TOC.md)
+- [简体中文](docs/zh/TOC.md)
+
+You can also visit [Fluid Homepage](https://fluid-cloudnative.github.io) to get relevant documents.
+
+## Quick Demo
+
+
+Demo 1: Accelerate Remote File Accessing with Fluid
+
+
+
+
+
+
+Demo 2: Machine Learning with Fluid
+
+
+
+
+
+
+Demo 3: Accelerate PVC with Fluid
+
+
+
+
+
+
+Demo 4: Preload dataset with Fluid
+
+
+
+
+
+
+Demo 5: On-the-fly dataset cache scaling
+
+
+
+
+
+## Roadmap
+
+See [ROADMAP.md](ROADMAP.md) for the roadmap details. It may be updated from time to time.
+
+## Community
+
+Feel free to reach out if you have any questions. The maintainers of this project are reachable via:
+
+DingTalk Group:
+
+
+
+
+
+WeChat Group:
+
+
+
+
+
+WeChat Official Account:
+
+
+
+
+
+Slack:
+- Join the [CNCF Slack](https://slack.cncf.io/) and navigate to the ``#fluid`` channel for discussion.
+
+
+## Contributing
+
+Contributions are highly welcomed and greatly appreciated. See [CONTRIBUTING.md](CONTRIBUTING.md) for details on submitting patches and the contribution workflow.
+
+## Adopters
+
+If you are interested in Fluid and would like to share your experiences with others, you are warmly welcome to add your information to the [ADOPTERS.md](ADOPTERS.md) page. We will continue to discuss new requirements and feature designs with you in advance.
+
+
+## Open Source License
+
+Fluid is under the Apache 2.0 license. See the [LICENSE](./LICENSE) file for details. It is vendor-neutral.
+
+## Report Vulnerability
+Security is a top priority for us at Fluid. If you come across a related issue, please send an email to fluid.opensource.project@gmail.com. Also see our [SECURITY.md](SECURITY.md) file for details.
+
+
+## Code of Conduct
+
+Fluid adopts [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
diff --git a/data/readmes/flux-v275.md b/data/readmes/flux-v275.md
new file mode 100644
index 0000000..2afb060
--- /dev/null
+++ b/data/readmes/flux-v275.md
@@ -0,0 +1,110 @@
+# Flux - README (v2.7.5)
+
+**Repository**: https://github.com/fluxcd/flux2
+**Version**: v2.7.5
+
+---
+
+# Flux version 2
+
+[](https://github.com/fluxcd/flux2/releases)
+[](https://bestpractices.coreinfrastructure.org/projects/4782)
+[](https://scorecard.dev/viewer/?uri=github.com/fluxcd/flux2)
+[](https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Ffluxcd%2Fflux2?ref=badge_shield)
+[](https://artifacthub.io/packages/helm/fluxcd-community/flux2)
+[](https://fluxcd.io/flux/security/slsa-assessment)
+
+Flux is a tool for keeping Kubernetes clusters in sync with sources of
+configuration (like Git repositories and OCI artifacts),
+and automating updates to configuration when there is new code to deploy.
+
+Flux version 2 ("v2") is built from the ground up to use Kubernetes'
+API extension system, and to integrate with Prometheus and other core
+components of the Kubernetes ecosystem. In version 2, Flux supports
+multi-tenancy and syncing an arbitrary number of Git repositories,
+among other long-requested features.
+
+Flux v2 is constructed with the [GitOps Toolkit](#gitops-toolkit), a
+set of composable APIs and specialized tools for building Continuous
+Delivery on top of Kubernetes.
+
+Flux is a Cloud Native Computing Foundation ([CNCF](https://www.cncf.io/)) graduated project, used in
+production by various [organisations](https://fluxcd.io/adopters) and [cloud providers](https://fluxcd.io/ecosystem).
+
+## Quickstart and documentation
+
+To get started check out this [guide](https://fluxcd.io/flux/get-started/)
+on how to bootstrap Flux on Kubernetes and deploy a sample application in a GitOps manner.
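+
+As a taste of what bootstrapping looks like, a hedged sketch of the GitHub flow is below; the owner, repository, and path values are placeholders for your own setup, and a `GITHUB_TOKEN` with repository access must be exported first:
+
+```shell
+# Verify cluster prerequisites, then install Flux and commit its manifests
+# to a Git repository it will reconcile from (placeholders throughout).
+export GITHUB_TOKEN=<personal-access-token>
+flux check --pre
+flux bootstrap github \
+  --owner=my-org \
+  --repository=my-fleet \
+  --branch=main \
+  --path=clusters/my-cluster
+```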
+
+For more comprehensive documentation, see the following guides:
+- [Ways of structuring your repositories](https://fluxcd.io/flux/guides/repository-structure/)
+- [Manage Helm Releases](https://fluxcd.io/flux/guides/helmreleases/)
+- [Automate image updates to Git](https://fluxcd.io/flux/guides/image-update/)
+- [Manage Kubernetes secrets with Flux and SOPS](https://fluxcd.io/flux/guides/mozilla-sops/)
+
+If you need help, please refer to our **[Support page](https://fluxcd.io/support/)**.
+
+## GitOps Toolkit
+
+The GitOps Toolkit is the set of APIs and controllers that make up the
+runtime for Flux v2. The APIs comprise Kubernetes custom resources,
+which can be created and updated by a cluster user, or by other
+automation tooling.
+
+
+
+You can use the toolkit to extend Flux, or to build your own systems
+for continuous delivery -- see [the developer
+guides](https://fluxcd.io/flux/gitops-toolkit/source-watcher/).
+
+### Components
+
+- [Source Controllers](https://fluxcd.io/flux/components/source/)
+ - [GitRepository CRD](https://fluxcd.io/flux/components/source/gitrepositories/)
+ - [OCIRepository CRD](https://fluxcd.io/flux/components/source/ocirepositories/)
+ - [HelmRepository CRD](https://fluxcd.io/flux/components/source/helmrepositories/)
+ - [HelmChart CRD](https://fluxcd.io/flux/components/source/helmcharts/)
+ - [Bucket CRD](https://fluxcd.io/flux/components/source/buckets/)
+ - [ExternalArtifact CRD](https://fluxcd.io/flux/components/source/externalartifacts/)
+ - [ArtifactGenerator CRD](https://fluxcd.io/flux/components/source/artifactgenerators/)
+- [Kustomize Controller](https://fluxcd.io/flux/components/kustomize/)
+ - [Kustomization CRD](https://fluxcd.io/flux/components/kustomize/kustomizations/)
+- [Helm Controller](https://fluxcd.io/flux/components/helm/)
+ - [HelmRelease CRD](https://fluxcd.io/flux/components/helm/helmreleases/)
+- [Notification Controller](https://fluxcd.io/flux/components/notification/)
+ - [Provider CRD](https://fluxcd.io/flux/components/notification/providers/)
+ - [Alert CRD](https://fluxcd.io/flux/components/notification/alerts/)
+ - [Receiver CRD](https://fluxcd.io/flux/components/notification/receivers/)
+- [Image Automation Controllers](https://fluxcd.io/flux/components/image/)
+ - [ImageRepository CRD](https://fluxcd.io/flux/components/image/imagerepositories/)
+ - [ImagePolicy CRD](https://fluxcd.io/flux/components/image/imagepolicies/)
+ - [ImageUpdateAutomation CRD](https://fluxcd.io/flux/components/image/imageupdateautomations/)
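+
+To illustrate how these CRDs compose, the sketch below pairs a `GitRepository` source with a `Kustomization` that applies manifests from it, using the `flux` CLI generators; the repository URL and path come from the public podinfo example:
+
+```shell
+# source-controller polls the Git repository; kustomize-controller applies
+# the manifests found under ./kustomize and prunes removed objects.
+flux create source git podinfo \
+  --url=https://github.com/stefanprodan/podinfo \
+  --branch=master \
+  --interval=1m
+flux create kustomization podinfo \
+  --source=GitRepository/podinfo \
+  --path="./kustomize" \
+  --prune=true \
+  --interval=10m
+```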
+
+## Community
+
+Need help or want to contribute? Please see the links below. The Flux project is always looking for
+new contributors and there are a multitude of ways to get involved.
+
+- Getting Started?
+ - Look at our [Get Started guide](https://fluxcd.io/flux/get-started/) and give us feedback
+- Need help?
+ - First: Ask questions on our [GH Discussions page](https://github.com/fluxcd/flux2/discussions).
+ - Second: Talk to us in the #flux channel on [CNCF Slack](https://slack.cncf.io/).
+ - Please follow our [Support Guidelines](https://fluxcd.io/support/)
+ (in short: be nice, be respectful of volunteers' time, understand that maintainers and
+ contributors cannot respond to all DMs, and keep discussions in the public #flux channel as much as possible).
+- Have feature proposals or want to contribute?
+ - Propose features on our [GitHub Discussions page](https://github.com/fluxcd/flux2/discussions).
+ - Join our upcoming dev meetings ([meeting access and agenda](https://docs.google.com/document/d/1l_M0om0qUEN_NNiGgpqJ2tvsF2iioHkaARDeh6b70B0/view)).
+ - [Join the flux-dev mailing list](https://lists.cncf.io/g/cncf-flux-dev).
+ - Check out [how to contribute](CONTRIBUTING.md) to the project.
+ - Check out the [project roadmap](https://fluxcd.io/roadmap/).
+
+### Events
+
+Check out our **[events calendar](https://fluxcd.io/#calendar)**
+for upcoming talks, events and meetings you can attend,
+or view the **[resources section](https://fluxcd.io/resources)**
+for videos of past events.
+
+We look forward to seeing you with us!
diff --git a/data/readmes/fonio-v11.md b/data/readmes/fonio-v11.md
new file mode 100644
index 0000000..ad48f3b
--- /dev/null
+++ b/data/readmes/fonio-v11.md
@@ -0,0 +1,120 @@
+# Fonio - README (v1.1)
+
+**Repository**: https://github.com/foniod/foniod
+**Version**: v1.1
+
+---
+
+
+
+
+
+ingraind
+
+
+ Data-first Monitoring
+
+
+
+
+
+
+
+ingraind is a security monitoring agent built around [RedBPF](https://github.com/redsift/redbpf)
+for complex containerized environments and endpoints. The ingraind agent uses eBPF
+probes to provide safe and performant instrumentation for any Linux-based environment.
+
+InGrain provides oversight of assets and risks:
+ * Your customer data - an employee copying your customer database to their
+ personal cloud store.
+ * Your infrastructure - an attacker executing a zero day attack to gain access
+ to your web servers.
+ * Your resources - malware using your users' machines' compute resources to mine
+ cryptocurrency.
+
+This is what `curl https://redsift.com` looks like if seen through ingraind:
+
+
+
+## Requirements
+
+ * LLVM/Clang version 9 or newer
+ * Rust toolchain [rustup.rs](https://rustup.rs)
+ * Linux 4.15 kernel or newer including kernel headers
+ * capnproto
+
+## Compile
+
+The usual Rust compilation ritual will produce a binary in `target/release`:
+
+ $ cargo build --release
+
+or for a kernel version other than the running one:
+
+ $ export KERNEL_VERSION=1.2.3
+ $ cargo build --release
+
+or with a custom kernel tree path (needs to include generated files):
+
+ $ export KERNEL_SOURCE=/build/linux
+ $ cargo build --release
+
+We keep `ingraind` compatible with the `musl` target on `x86_64`,
+which you can build like so:
+
+ $ cargo build --release --target=x86_64-unknown-linux-musl
+
+## Build a docker image
+
+To build a Docker image, use the instructions above to build an
+`ingraind` binary for the desired kernel. By default, the Dockerfile will
+assume you've built `ingraind` for the `musl` target.
+
+ $ docker build .
+
+You can specify an arbitrary `ingraind` binary by setting the
+`BINARY_PATH` environment variable:
+
+ $ docker build --build-arg BINARY_PATH=./target/x86_64-unknown-linux-musl/release/ingraind .
+
+## Configuration & Run
+
+To get an idea about the configuration [file
+structure](https://github.com/redsift/ingraind/wiki/Configuration), consult the
+wiki or take a look at the [example config](./config.toml.example) for a full reference.
+
+To start `ingraind`, run:
+
+ $ ./target/release/ingraind config.toml
+
+Depending on the backends used in the config file, some secrets may need to be
+passed as environment variables. These are documented in
+[config.toml.example](./config.toml.example), which should be a good starting point,
+and a sane default to get `ingraind` running, printing everything to the standard output.
+
+## Repo structure
+
+The `bpf` directory contains the BPF programs written in C. These are compiled
+by `build.rs`, embedded in the final binary, and managed by the grains.
+
+The `ingraind-probes` directory contains the BPF programs written in Rust.
+
+# Anything else?
+
+For more information, take a look at the [Wiki](https://github.com/redsift/ingraind/wiki)
+
+# Contribution
+
+This project is for everyone. We ask that our users and contributors
+take a few minutes to review our [code of conduct](https://github.com/ingraind/project/blob/main/CODE_OF_CONDUCT.md).
+
+Unless you explicitly state otherwise, any contribution intentionally submitted
+for inclusion in the work by you, as defined in the GPL-3.0 license, shall
+be licensed as GPL-3.0, without any additional terms or conditions.
+
+For further advice on getting started, please consult the
+[Contributor's
+Guide](https://github.com/ingraind/project/blob/main/CONTRIBUTING.md). Please
+note that all contributions MUST contain a [Developer Certificate of
+Origin](https://github.com/ingraind/project/blob/developer-certificate-of-origin/CONTRIBUTING.md#developer-certificate-of-origin)
+sign-off line.
diff --git a/data/readmes/foundry-nightly-97fba0a51e335a174442b19d92b64df9d2ab72ab.md b/data/readmes/foundry-nightly-97fba0a51e335a174442b19d92b64df9d2ab72ab.md
new file mode 100644
index 0000000..3595ad8
--- /dev/null
+++ b/data/readmes/foundry-nightly-97fba0a51e335a174442b19d92b64df9d2ab72ab.md
@@ -0,0 +1,370 @@
+# Foundry - README (nightly-97fba0a51e335a174442b19d92b64df9d2ab72ab)
+
+**Repository**: https://github.com/foundry-rs/foundry
+**Version**: nightly-97fba0a51e335a174442b19d92b64df9d2ab72ab
+
+---
+
+
+
+
+## 🐘 **Gradle Build Tool**
+
+**[Gradle](https://gradle.org/)** is a highly scalable build automation tool designed to handle everything from large, multi-project enterprise builds to quick development tasks across various languages. Gradle’s modular, performance-oriented architecture seamlessly integrates with development environments, making it a go-to solution for building, testing, and deploying applications on **Java**, **Kotlin**, **Scala**, **Android**, **Groovy**, **C++**, and **Swift**.
+
+> For a comprehensive overview, please visit the [official Gradle project homepage](https://gradle.org).
+
+---
+
+### 🚀 **Getting Started**
+
+Starting with Gradle is easy with these essential resources. Use them to install Gradle, set up initial projects, and explore supported platforms:
+
+- **[Installing Gradle](https://docs.gradle.org/current/userguide/installation.html)**
+- **Build Projects for Popular Languages and Frameworks**:
+ - [Java Applications](https://docs.gradle.org/current/samples/sample_building_java_applications.html)
+ - [Java Modules](https://docs.gradle.org/current/samples/sample_java_modules_multi_project.html)
+ - [Android Apps](https://developer.android.com/studio/build/index.html)
+ - [Groovy Applications](https://docs.gradle.org/current/samples/sample_building_groovy_applications.html)
+ - [Kotlin Libraries](https://docs.gradle.org/current/samples/sample_building_kotlin_libraries.html)
+ - [Scala Applications](https://docs.gradle.org/current/samples/sample_building_scala_applications.html)
+ - [Spring Boot Web Apps](https://docs.gradle.org/current/samples/sample_building_spring_boot_web_applications.html)
+ - [C++ Libraries](https://docs.gradle.org/current/samples/sample_building_cpp_libraries.html)
+ - [Swift Apps](https://docs.gradle.org/current/samples/sample_building_swift_applications.html)
+ - [Swift Libraries](https://docs.gradle.org/current/samples/sample_building_swift_libraries.html)
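+
+In practice, a first project is only a few commands; a sketch assuming a Unix shell, with the project type and name as placeholders:
+
+```shell
+# Scaffold a Java application, then build and run it with the generated wrapper.
+gradle init --type java-application --project-name demo-app
+./gradlew build
+./gradlew run
+```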
+
+> 📘 Explore Gradle’s full array of resources through the [Gradle Documentation](https://docs.gradle.org/).
+
+---
+
+### 🛠 **Seamless IDE & CI Integration**
+
+Gradle is built to work smoothly with a variety of Integrated Development Environments (IDEs) and Continuous Integration (CI) systems, providing extensive support for a streamlined workflow:
+
+- **Supported IDEs**: Quickly integrate Gradle with [Android Studio](https://docs.gradle.org/current/userguide/gradle_ides.html), [IntelliJ IDEA](https://docs.gradle.org/current/userguide/gradle_ides.html), [Eclipse](https://docs.gradle.org/current/userguide/gradle_ides.html), [NetBeans](https://docs.gradle.org/current/userguide/gradle_ides.html), and [Visual Studio Code](https://docs.gradle.org/current/userguide/gradle_ides.html).
+- **Continuous Integration**: Gradle easily connects with popular CI tools, including Jenkins, [GitHub Actions](https://docs.github.com/actions), [GitLab CI](https://docs.gitlab.com/ee/ci/), [CircleCI](https://circleci.com/), and more, to streamline build and deployment pipelines.
+
+---
+
+### 🎓 **Learning Resources for Gradle**
+
+Kickstart your Gradle knowledge with courses, guides, and community support tailored to various experience levels:
+
+- **[DPE University Free Courses](https://dpeuniversity.gradle.com/app/catalog)**: A collection of hands-on courses for learning Gradle, complete with project-based tasks to improve real-world skills.
+- **[Gradle Community Resources](https://community.gradle.org/resources/)**: Discover a range of resources, tutorials, and guides to support your Gradle journey, from foundational concepts to advanced practices.
+
+---
+
+### 💬 **Community Support & Resources**
+
+The Gradle community offers a range of forums, documentation, and direct help to guide you through every step of your Gradle journey:
+
+- **Documentation**: The [Gradle User Manual](https://docs.gradle.org/current/userguide/userguide.html) covers everything from basic to advanced configurations.
+- **Community Forum**: Engage with others on the [Gradle Forum](https://discuss.gradle.org/) for discussions, tips, and best practices.
+- **Community Slack**: [Join our Slack Channel](https://gradle.org/slack-invite) for real-time discussions, with specialized channels like `#github-integrations` for integration topics.
+- **Newsletter**: Subscribe to the [Gradle Newsletter](https://newsletter.gradle.org) for news, tutorials, and community highlights.
+
+> **Quick Tip**: New contributors to Gradle projects are encouraged to ask questions in the Slack `#community-support` channel.
+
+---
+
+### 🌱 **Contributing to Gradle**
+
+- **Contribution Guide**: [Contribute](https://github.com/gradle/gradle/blob/master/CONTRIBUTING.md) to Gradle by submitting patches or pull requests for code or documentation improvements.
+- **Code of Conduct**: Gradle enforces a [Code of Conduct](https://gradle.org/conduct/) to ensure a welcoming and supportive community for all contributors.
+
+---
+
+### 🔗 **Additional Resources**
+
+To make the most out of Gradle, take advantage of these additional resources:
+
+- **[Gradle Documentation](https://docs.gradle.org/)** - Your go-to guide for all Gradle-related documentation.
+- **[DPE University](https://dpeuniversity.gradle.com/app/catalog)** - Explore tutorials designed to get you started quickly.
+- **[Community Resources](https://gradle.org/resources/)** - Find more community-contributed materials to expand your knowledge.
+
+> 🌟 **Stay connected with the Gradle Community and access the latest news, training, and updates via [Slack](https://gradle.org/slack-invite), [Forum](https://discuss.gradle.org/), and our [Newsletter](https://newsletter.gradle.org)**.
+
+
diff --git a/data/readmes/grafana-v1230.md b/data/readmes/grafana-v1230.md
new file mode 100644
index 0000000..eb8d3d2
--- /dev/null
+++ b/data/readmes/grafana-v1230.md
@@ -0,0 +1,58 @@
+# Grafana - README (v12.3.0)
+
+**Repository**: https://github.com/grafana/grafana
+**Version**: v12.3.0
+
+---
+
+
+
+
+The open-source platform for monitoring and observability
+
+[](LICENSE)
+[](https://goreportcard.com/report/github.com/grafana/grafana)
+
+Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture:
+
+- **Visualizations:** Fast and flexible client side graphs with a multitude of options. Panel plugins offer many different ways to visualize metrics and logs.
+- **Dynamic Dashboards:** Create dynamic & reusable dashboards with template variables that appear as dropdowns at the top of the dashboard.
+- **Explore Metrics:** Explore your data through ad-hoc queries and dynamic drilldown. Split view and compare different time ranges, queries and data sources side by side.
+- **Explore Logs:** Experience the magic of switching from metrics to logs with preserved label filters. Quickly search through all your logs or stream them live.
+- **Alerting:** Visually define alert rules for your most important metrics. Grafana will continuously evaluate them and send notifications to systems like Slack, PagerDuty, VictorOps, and OpsGenie.
+- **Mixed Data Sources:** Mix different data sources in the same graph! You can specify a data source on a per-query basis. This even works for custom data sources.
+
+## Get started
+
+- [Get Grafana](https://grafana.com/get)
+- [Installation guides](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)
+
+Unsure if Grafana is for you? Watch Grafana in action on [play.grafana.org](https://play.grafana.org/)!
+
+## Documentation
+
+The Grafana documentation is available at [grafana.com/docs](https://grafana.com/docs/).
+
+## Contributing
+
+If you're interested in contributing to the Grafana project:
+
+- Start by reading the [Contributing guide](https://github.com/grafana/grafana/blob/HEAD/CONTRIBUTING.md).
+- Learn how to set up your local environment, in our [Developer guide](https://github.com/grafana/grafana/blob/HEAD/contribute/developer-guide.md).
+- Explore our [beginner-friendly issues](https://github.com/grafana/grafana/issues?q=is%3Aopen+is%3Aissue+label%3A%22beginner+friendly%22).
+- Look through our [style guide and Storybook](https://developers.grafana.com/ui/latest/index.html).
+
+> Share your contributor experience in our [feedback survey](https://gra.fan/ome) to help us improve.
+
+## Get involved
+
+- Follow [@grafana on X (formerly Twitter)](https://x.com/grafana/).
+- Read and subscribe to the [Grafana blog](https://grafana.com/blog/).
+- If you have a specific question, check out our [discussion forums](https://community.grafana.com/).
+- For general discussions, join us on the [official Slack](https://slack.grafana.com) team.
+
+This project is tested with [BrowserStack](https://www.browserstack.com/).
+
+## License
+
+Grafana is distributed under [AGPL-3.0-only](LICENSE). For Apache-2.0 exceptions, see [LICENSING.md](https://github.com/grafana/grafana/blob/HEAD/LICENSING.md).
diff --git a/data/readmes/grpc-v1760.md b/data/readmes/grpc-v1760.md
new file mode 100644
index 0000000..6c39d90
--- /dev/null
+++ b/data/readmes/grpc-v1760.md
@@ -0,0 +1,111 @@
+# gRPC - README (v1.76.0)
+
+**Repository**: https://github.com/grpc/grpc
+**Version**: v1.76.0
+
+---
+
+# gRPC – An RPC library and framework
+
+gRPC is a modern, open source, high-performance remote procedure call (RPC)
+framework that can run anywhere. gRPC enables client and server applications to
+communicate transparently, and simplifies the building of connected systems.
+
+
+
+[](https://gitter.im/grpc/grpc?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+
+## To start using gRPC
+
+To maximize usability, gRPC supports the standard method for adding dependencies
+to a user's chosen language (if there is one). In most languages, the gRPC
+runtime comes as a package available in a user's language package manager.
+
+For instructions on how to use the language-specific gRPC runtime for a project,
+please refer to these documents
+
+- [C++](src/cpp): follow the instructions under the `src/cpp` directory
+- [C#/.NET](https://github.com/grpc/grpc-dotnet): NuGet packages
+ `Grpc.Net.Client`, `Grpc.AspNetCore.Server`
+- [Dart](https://github.com/grpc/grpc-dart): pub package `grpc`
+- [Go](https://github.com/grpc/grpc-go): `go get google.golang.org/grpc`
+- [Java](https://github.com/grpc/grpc-java): Use JARs from Maven Central
+ Repository
+- [Kotlin](https://github.com/grpc/grpc-kotlin): Use JARs from Maven Central
+ Repository
+- [Node](https://github.com/grpc/grpc-node): `npm install @grpc/grpc-js`
+- [Objective-C](src/objective-c): Add `gRPC-ProtoRPC` dependency to podspec
+- [PHP](src/php): `pecl install grpc`
+- [Python](src/python/grpcio): `pip install grpcio`
+- [Ruby](src/ruby): `gem install grpc`
+- [WebJS](https://github.com/grpc/grpc-web): follow the grpc-web instructions
+
+Per-language quickstart guides and tutorials can be found in the
+[documentation section on the grpc.io website](https://grpc.io/docs/). Code
+examples are available in the [examples](examples) directory.
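+
+As one concrete example of that flow, the Python sketch below installs the runtime plus tooling and generates client/server stubs from a proto file (`helloworld.proto` is a placeholder for your own definition):
+
+```shell
+# Install the gRPC runtime and codegen tools, then emit *_pb2.py and
+# *_pb2_grpc.py stubs next to the proto file.
+pip install grpcio grpcio-tools
+python -m grpc_tools.protoc -I. \
+  --python_out=. --grpc_python_out=. \
+  helloworld.proto
+```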
+
+Precompiled bleeding-edge package builds of gRPC `master` branch's `HEAD` are
+uploaded daily to [packages.grpc.io](https://packages.grpc.io).
+
+## To start developing gRPC
+
+Contributions are welcome!
+
+Please read [How to contribute](CONTRIBUTING.md) which will guide you through
+the entire workflow of how to build the source code, how to run the tests, and
+how to contribute changes to the gRPC codebase. The "How to contribute" document
+also contains info on how the contribution process works and contains best
+practices for creating contributions.
+
+## Troubleshooting
+
+Sometimes things go wrong. Please check out the
+[Troubleshooting guide](TROUBLESHOOTING.md) if you are experiencing issues with
+gRPC.
+
+## Performance
+
+See the [Performance dashboard](https://grafana-dot-grpc-testing.appspot.com/)
+for performance numbers of master branch daily builds.
+
+## Concepts
+
+See [gRPC Concepts](CONCEPTS.md)
+
+## About This Repository
+
+This repository contains source code for gRPC libraries implemented in multiple
+languages written on top of a shared C++ core library [src/core](src/core).
+
+Libraries in different languages may be in various states of development. We are
+seeking contributions for all of these libraries:
+
+Language | Source
+------------------------- | ----------------------------------
+Shared C++ [core library] | [src/core](src/core)
+C++ | [src/cpp](src/cpp)
+Ruby | [src/ruby](src/ruby)
+Python | [src/python](src/python)
+PHP | [src/php](src/php)
+C# (core library based) | [src/csharp](src/csharp)
+Objective-C | [src/objective-c](src/objective-c)
+
+Language | Source repo
+-------------------- | --------------------------------------------------
+Java | [grpc-java](https://github.com/grpc/grpc-java)
+Kotlin | [grpc-kotlin](https://github.com/grpc/grpc-kotlin)
+Go | [grpc-go](https://github.com/grpc/grpc-go)
+NodeJS | [grpc-node](https://github.com/grpc/grpc-node)
+WebJS | [grpc-web](https://github.com/grpc/grpc-web)
+Dart | [grpc-dart](https://github.com/grpc/grpc-dart)
+.NET (pure C# impl.) | [grpc-dotnet](https://github.com/grpc/grpc-dotnet)
+Swift | [grpc-swift](https://github.com/grpc/grpc-swift)
diff --git a/data/readmes/grype-v01041.md b/data/readmes/grype-v01041.md
new file mode 100644
index 0000000..9894b9b
--- /dev/null
+++ b/data/readmes/grype-v01041.md
@@ -0,0 +1,1015 @@
+# Grype - README (v0.104.1)
+
+**Repository**: https://github.com/anchore/grype
+**Version**: v0.104.1
+
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+A vulnerability scanner for container images and filesystems. Easily [install the binary](#installation) to try it out. Works with [Syft](https://github.com/anchore/syft), the powerful SBOM (software bill of materials) tool for container images and filesystems.
+
+### Join our community meetings!
+
+- Calendar: https://calendar.google.com/calendar/u/0/r?cid=Y182OTM4dGt0MjRtajI0NnNzOThiaGtnM29qNEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t
+- Agenda: https://docs.google.com/document/d/1ZtSAa6fj2a6KRWviTn3WoJm09edvrNUp4Iz_dOjjyY8/edit?usp=sharing (join [this group](https://groups.google.com/g/anchore-oss-community) for write access)
+- All are welcome!
+
+For commercial support options with Syft or Grype, please [contact Anchore](https://get.anchore.com/contact/).
+
+
+
+## Features
+
+- Scan the contents of a container image or filesystem to find known vulnerabilities.
+- Find vulnerabilities for major operating system packages:
+ - Alpine
+ - Amazon Linux
+ - Azure Linux (previously CBL-Mariner)
+ - BusyBox
+ - CentOS
+ - Debian
+ - Echo
+ - Distroless
+ - MinimOS
+ - Oracle Linux
+ - Red Hat (RHEL)
+ - Ubuntu
+ - Wolfi
+- Find vulnerabilities for language-specific packages:
+ - Ruby (Gems)
+ - Java (JAR, WAR, EAR, JPI, HPI)
+ - JavaScript (NPM, Yarn)
+ - Python (Egg, Wheel, Poetry, requirements.txt/setup.py files)
+ - Dotnet (deps.json)
+ - Golang (go.mod)
+ - PHP (Composer)
+ - Rust (Cargo)
+- Supports Docker, OCI and [Singularity](https://github.com/sylabs/singularity) image formats.
+- [OpenVEX](https://github.com/openvex) support for filtering and augmenting scanning results.
+
+If you encounter an issue, please [let us know using the issue tracker](https://github.com/anchore/grype/issues).
+
+## Installation
+
+### Recommended
+
+```bash
+curl -sSfL https://get.anchore.io/grype | sudo sh -s -- -b /usr/local/bin
+```
+Install script options:
+- `-b`: Specify a custom installation directory (defaults to `./bin`)
+- `-d`: More verbose logging levels (`-d` for debug, `-dd` for trace)
+- `-v`: Verify the signature of the downloaded artifact before installation (requires [`cosign`](https://github.com/sigstore/cosign) to be installed)
+
+### Chocolatey
+
+The Chocolatey distribution of Grype is community-maintained and not distributed by the Anchore team.
+
+```bash
+choco install grype -y
+```
+
+### Homebrew
+
+```bash
+brew tap anchore/grype
+brew install grype
+```
+
+### MacPorts
+
+On macOS, Grype can additionally be installed from the [community-maintained port](https://ports.macports.org/port/grype/) via MacPorts:
+
+```bash
+sudo port install grype
+```
+
+**Note**: Currently, Grype is built only for macOS and Linux.
+
+### From source
+
+See [DEVELOPING.md](DEVELOPING.md#native-development) for instructions to build and run from source.
+
+### GitHub Actions
+
+If you're using GitHub Actions, you can use our [Grype-based action](https://github.com/marketplace/actions/anchore-container-scan) to run vulnerability scans on your code or container images during your CI workflows.
+
+## Verifying the artifacts
+
+Checksums are applied to all artifacts, and the resulting checksum file is signed using cosign.
+
+You need the following tool to verify the signature:
+
+- [Cosign](https://docs.sigstore.dev/cosign/system_config/installation/)
+
+Verification steps are as follows:
+
+1. Download the artifacts you want, along with the `checksums.txt`, `checksums.txt.pem` and `checksums.txt.sig` files, from the [releases](https://github.com/anchore/grype/releases) page.
+
+2. Verify the signature:
+
+```shell
+cosign verify-blob \
+--certificate checksums.txt.pem \
+--signature checksums.txt.sig \
+--certificate-identity-regexp 'https://github\.com/anchore/grype/\.github/workflows/.+' \
+--certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
+checksums.txt
+```
+
+3. Once the signature is confirmed as valid, you can proceed to validate that the SHA256 sums align with the downloaded artifact:
+
+```shell
+sha256sum --ignore-missing -c checksums.txt
+```
+
+## Getting started
+
+[Install the binary](#installation), and make sure that `grype` is available in your path. To scan for vulnerabilities in an image:
+
+```
+grype <image>
+```
+
+The above command scans for vulnerabilities visible in the container (i.e., the squashed representation of the image). To include software from all image layers in the vulnerability scan, regardless of its presence in the final image, provide `--scope all-layers`:
+
+```
+grype <image> --scope all-layers
+```
+
+To run grype from a Docker container so it can scan a running container, use the following command:
+
+```bash
+docker run --rm \
+--volume /var/run/docker.sock:/var/run/docker.sock \
+--name Grype anchore/grype:latest \
+$(ImageName):$(ImageTag)
+```
+
+## Supported sources
+
+Grype can scan a variety of sources beyond those found in Docker.
+
+```
+# scan a container image archive (from the result of `docker image save ...`, `podman save ...`, or `skopeo copy` commands)
+grype path/to/image.tar
+
+# scan a Singularity Image Format (SIF) container
+grype path/to/image.sif
+
+# scan a directory
+grype dir:path/to/dir
+```
+
+Sources can be explicitly provided with a scheme:
+
+```
+podman:yourrepo/yourimage:tag            use images from the Podman daemon
+docker:yourrepo/yourimage:tag            use images from the Docker daemon
+docker-archive:path/to/yourimage.tar     use a tarball from disk for archives created from "docker save"
+oci-archive:path/to/yourimage.tar        use a tarball from disk for OCI archives (from Skopeo or otherwise)
+oci-dir:path/to/yourimage                read directly from a path on disk for OCI layout directories (from Skopeo or otherwise)
+singularity:path/to/yourimage.sif        read directly from a Singularity Image Format (SIF) container on disk
+dir:path/to/yourproject                  read directly from a path on disk (any directory)
+file:path/to/yourfile                    read directly from a file on disk
+sbom:path/to/syft.json                   read Syft JSON from path on disk
+registry:yourrepo/yourimage:tag          pull image directly from a registry (no container runtime required)
+```
+
+If an image source is not provided and cannot be detected from the given reference, Grype assumes the image should be pulled from the Docker daemon.
+If Docker is not present, the Podman daemon is attempted next, followed by pulling directly from the image registry.
+
+
+This default behavior can be overridden with the `default-image-pull-source` configuration option (See [Configuration](https://github.com/anchore/grype#configuration) for more details).
+
+Use SBOMs for even faster vulnerability scanning in Grype:
+
+```
+# Generate an SBOM once with Syft
+syft <image> -o syft-json > ./sbom.json
+
+# Then scan for new vulnerabilities as frequently as needed
+grype sbom:./sbom.json
+
+# (You can also pipe the SBOM into Grype)
+cat ./sbom.json | grype
+```
+
+Grype supports input of [Syft](https://github.com/anchore/syft), [SPDX](https://spdx.dev/), and [CycloneDX](https://cyclonedx.org/)
+SBOM formats. If Syft has generated any of these file types, they should have the appropriate information to work properly with Grype.
+It is also possible to use SBOMs generated by other tools with varying degrees of success. Two things that make Grype matching
+more successful are the inclusion of CPE and Linux distribution information. If an SBOM does not include any CPE information, it
+is possible to generate these based on package information using the `--add-cpes-if-none` flag. To specify a distribution,
+use the `--distro <distro>:<version>` flag. A full example is:
+
+```
+grype --add-cpes-if-none --distro alpine:3.10 sbom:some-alpine-3.10.spdx.json
+```
+
+## Threat & Risk Prioritization
+
+This section explains the columns and UI cues that help prioritize remediation efforts:
+
+- **Severity**: A string-based severity derived from CVSS scores that expresses the significance of a vulnerability in discrete levels.
+ This balances concerns such as ease of exploitability and the potential to affect
+ confidentiality, integrity, and availability of software and services.
+
+- **EPSS**:
+ [Exploit Prediction Scoring System](https://www.first.org/epss/model) is a metric expressing the likelihood
+ that a vulnerability will be
+ exploited in the wild over the next 30 days (on a 0–1 scale); higher values signal a greater likelihood of
+ exploitation.
+ The table output shows the EPSS percentile, a one-way transform of the EPSS score showing the
+ proportion of all scored vulnerabilities with an equal or lower probability.
+ Percentiles linearize a heavily skewed distribution, making threshold choice (e.g. “only CVEs above the
+ 90th percentile”) straightforward.
+
+- **KEV Indicator**: Flags entries from CISA’s [Known Exploited Vulnerabilities Catalog](https://www.cisa.gov/known-exploited-vulnerabilities-catalog),
+ an authoritative list of flaws observed being exploited in the wild.
+
+- **Risk Score**: A composite 0–100 metric calculated as:
+ ```text
+ risk = min(1, threat * average(severity)) * 100
+ ```
+ Where:
+ - `severity` is the average of all CVSS scores and string severity for a vulnerability (scaled between 0–1).
+ - `threat` is the EPSS score (between 0–1). If the vulnerability is on the KEV list then `threat` is
+ `1.05`, or `1.1` if the vulnerability is associated with a ransomware campaign.
+ This metric is one way to combine EPSS and CVSS suggested in the [EPSS user guide](https://www.first.org/epss/user-guide).
+
+- **Suggested Fixes**: All possible fixes for a package are listed; however, when multiple fixes are available, we de-emphasize all
+ upgrade paths except for the minimal upgrade path (which highlights the smallest, safest version bump).
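+
+As an illustration, the risk formula above can be sketched directly (the severity input here is a simplified stand-in for Grype's internal CVSS averaging):
+
+```python
+def risk_score(severity: float, epss: float, kev: bool = False, ransomware: bool = False) -> float:
+    """Composite 0-100 risk: min(1, threat * severity) * 100.
+
+    severity: average normalized severity on a 0-1 scale
+    epss: EPSS probability on a 0-1 scale
+    KEV membership overrides threat to 1.05, or 1.1 for ransomware-linked entries.
+    """
+    threat = epss
+    if kev:
+        threat = 1.1 if ransomware else 1.05
+    return min(1.0, threat * severity) * 100
+```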
+
+Results default to sorting by Risk Score and can be overridden with `--sort-by <value>`:
+
+- `severity`: sort by severity
+- `epss`: sort by EPSS percentile (aka, "threat")
+- `risk`: sort by risk score
+- `kev`: just like risk, except that KEV entries are always above non-KEV entries
+- `package`: sort by package name, version, type
+- `vulnerability`: sort by vulnerability ID
+
+### Supported versions
+
+Software updates are always applied to the latest version of Grype; fixes are not backported to any previous versions of Grype.
+
+In terms of database updates, any version of Grype before v0.51.0 (Oct 2022, before schema v5) will not receive
+vulnerability database updates. You can still build vulnerability databases for unsupported Grype releases by using previous
+releases of [vunnel](https://github.com/anchore/vunnel) to gather the upstream data and [grype-db](https://github.com/anchore/grype-db)
+to build databases for unsupported schemas.
+
+Only the latest database schema is considered supported. When a new database schema is introduced, the one it replaces is
+marked as deprecated. Deprecated schemas continue to receive updates for at least one year after being marked
+as deprecated, at which point they are no longer supported.
+
+### Working with attestations
+Grype supports scanning SBOMs as input via stdin. Users can use [cosign](https://github.com/sigstore/cosign) to verify attestations
+with an SBOM as its content to scan an image for vulnerabilities:
+```shell
+COSIGN_EXPERIMENTAL=1 cosign verify-attestation caphill4/java-spdx-tools:latest \
+| jq -r .payload \
+| base64 --decode \
+| jq -r .predicate.Data \
+| grype
+```
+
+### Vulnerability Summary
+
+#### Basic Grype Vulnerability Data Shape
+
+```json
+{
+  "vulnerability": {
+    ...
+  },
+  "relatedVulnerabilities": [
+    ...
+  ],
+  "matchDetails": [
+    ...
+  ],
+  "artifact": {
+    ...
+  }
+}
+```
+
+- **Vulnerability**: All information on the specific vulnerability that was directly matched on (e.g. ID, severity, CVSS score, fix information, links for more information)
+- **RelatedVulnerabilities**: Information pertaining to vulnerabilities found to be related to the main reported vulnerability. Perhaps the vulnerability we matched on was a GitHub Security Advisory, which has an upstream CVE (in the authoritative national vulnerability database). In these cases we list the upstream vulnerabilities here.
+- **MatchDetails**: This section tries to explain what we searched for while looking for a match and exactly what details on the package and vulnerability that led to a match.
+- **Artifact**: This is a subset of the information that we know about the package (when compared to the [Syft](https://github.com/anchore/syft) json output, we summarize the metadata section).
+ This has information about where within the container image or directory we found the package, what kind of package it is, licensing info, pURLs, CPEs, etc.
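+
+As a sketch of consuming this shape, the snippet below tallies matches by severity from a parsed `grype -o json` report (the sample report is hypothetical and heavily abbreviated):
+
+```python
+from collections import Counter
+
+def severity_counts(report: dict) -> Counter:
+    """Count matches by vulnerability severity in a Grype JSON report."""
+    return Counter(
+        m["vulnerability"].get("severity", "Unknown")
+        for m in report.get("matches", [])
+    )
+
+# Hypothetical, minimal report for illustration:
+report = {
+    "matches": [
+        {"vulnerability": {"id": "CVE-2021-3711", "severity": "Critical"}},
+        {"vulnerability": {"id": "CVE-2021-3712", "severity": "High"}},
+        {"vulnerability": {"id": "CVE-2020-0001"}},
+    ]
+}
+```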
+
+### Excluding file paths
+
+Grype can exclude files and paths from being scanned within a source by using glob expressions
+with one or more `--exclude` parameters:
+
+```
+grype <source> --exclude './out/**/*.json' --exclude /etc
+```
+
+**Note:** in the case of _image scanning_, since the entire filesystem is scanned it is
+possible to use absolute paths like `/etc` or `/usr/**/*.txt` whereas _directory scans_
+exclude files _relative to the specified directory_. For example: scanning `/usr/foo` with
+`--exclude ./package.json` would exclude `/usr/foo/package.json` and `--exclude '**/package.json'`
+would exclude all `package.json` files under `/usr/foo`. For _directory scans_,
+it is required to begin path expressions with `./`, `*/`, or `**/`, all of which
+will be resolved _relative to the specified scan directory_. Keep in mind, your shell
+may attempt to expand wildcards, so put those parameters in single quotes, like:
+`'**/*.json'`.
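+
+The relative-pattern behavior for directory scans can be approximated with Python's `glob` module (an illustration only; Grype's own glob engine may differ in edge cases):
+
+```python
+import glob
+import os
+
+def excluded(scan_dir: str, pattern: str) -> set:
+    """Return paths under scan_dir matched by a relative exclude pattern."""
+    # Patterns like ./package.json or **/package.json resolve relative to scan_dir
+    cwd = os.getcwd()
+    os.chdir(scan_dir)
+    try:
+        return set(glob.glob(pattern, recursive=True))
+    finally:
+        os.chdir(cwd)
+```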
+
+### External Sources
+
+Grype can be configured to incorporate external data sources for added fidelity in vulnerability matching. This
+feature is currently disabled by default. To enable this feature, add the following to the grype config:
+
+```yaml
+external-sources:
+ enable: true
+ maven:
+ search-upstream-by-sha1: true
+ base-url: https://search.maven.org/solrsearch/select
+ rate-limit: 300ms # Time between Maven API requests
+```
+
+You can also configure the `base-url` if you're using another registry as your Maven endpoint.
+
+The rate at which Maven API requests are made can be configured to match your environment's requirements. The default is 300ms between requests.
+
+### Output formats
+
+The output format for Grype is configurable as well:
+
+```
+grype <image> -o <format>
+```
+
+Where the formats available are:
+
+- `table`: A columnar summary (default).
+- `cyclonedx`: An XML report conforming to the [CycloneDX 1.6 specification](https://cyclonedx.org/specification/overview/).
+- `cyclonedx-json`: A JSON report conforming to the [CycloneDX 1.6 specification](https://cyclonedx.org/specification/overview/).
+- `json`: Use this to get as much information out of Grype as possible!
+- `sarif`: Use this option to get a [SARIF](https://docs.oasis-open.org/sarif/sarif/v2.1.0/sarif-v2.1.0.html) report (Static Analysis Results Interchange Format)
+- `template`: Lets the user specify the output format. See ["Using templates"](#using-templates) below.
+
+### Using templates
+
+Grype lets you define custom output formats, using [Go templates](https://golang.org/pkg/text/template/). Here's how it works:
+
+- Define your format as a Go template, and save this template as a file.
+
+- Set the output format to "template" (`-o template`).
+
+- Specify the path to the template file (`-t ./path/to/custom.template`).
+
+- Grype's template processing uses the same data models as the `json` output format — so if you're wondering what data is available as you author a template, you can use the output from `grype -o json` as a reference.
+
+**Please note:** Templates can access information about the system they are running on, such as environment variables. You should never run untrusted templates.
+
+There are several example templates in the [templates](https://github.com/anchore/grype/tree/main/templates) directory in the Grype source which can serve as a starting point for a custom output format. For example, [csv.tmpl](https://github.com/anchore/grype/blob/main/templates/csv.tmpl) produces a vulnerability report in CSV (comma separated value) format:
+
+```text
+"Package","Version Installed","Vulnerability ID","Severity"
+"coreutils","8.30-3ubuntu2","CVE-2016-2781","Low"
+"libc-bin","2.31-0ubuntu9","CVE-2016-10228","Negligible"
+"libc-bin","2.31-0ubuntu9","CVE-2020-6096","Low"
+...
+```
+
+You can also find the template for the default "table" output format in the same place.
+
+Grype also includes a vast array of utility templating functions from [sprig](http://masterminds.github.io/sprig/) apart from the default golang [text/template](https://pkg.go.dev/text/template#hdr-Functions) to allow users to customize the output from Grype.
+
+### Gating on severity of vulnerabilities
+
+You can have Grype exit with an error if any vulnerabilities are reported at or above the specified severity level. This comes in handy when using Grype within a script or CI pipeline. To do this, use the `--fail-on <severity>` CLI flag.
+
+For example, here's how you could trigger a CI pipeline failure if any vulnerabilities are found in the `ubuntu:latest` image with a severity of "medium" or higher:
+
+```
+grype ubuntu:latest --fail-on medium
+```
+
+**Note:** Grype returns exit code `2` on vulnerability errors.
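+
+The gating logic amounts to a threshold over ordered severity levels; a minimal sketch (the ordering is assumed from Grype's documented severity options):
+
+```python
+SEVERITY_ORDER = ["negligible", "low", "medium", "high", "critical"]
+
+def should_fail(found_severities: list, fail_on: str) -> bool:
+    """Return True if any finding is at or above the fail-on threshold."""
+    threshold = SEVERITY_ORDER.index(fail_on.lower())
+    return any(SEVERITY_ORDER.index(s.lower()) >= threshold for s in found_severities)
+```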
+
+### Specifying matches to ignore
+
+If you're seeing Grype report **false positives** or any other vulnerability matches that you just don't want to see, you can tell Grype to **ignore** matches by specifying one or more _"ignore rules"_ in your Grype configuration file (e.g. `~/.grype.yaml`). This causes Grype not to report any vulnerability matches that meet the criteria specified by any of your ignore rules.
+
+Each rule can specify any combination of the following criteria:
+
+- vulnerability ID (e.g. `"CVE-2008-4318"`)
+- include vulnerability aliases when matching on a vulnerability ID (set `"include-aliases"` to `true`)
+- namespace (e.g. `"nvd"`)
+- fix state (allowed values: `"fixed"`, `"not-fixed"`, `"wont-fix"`, or `"unknown"`)
+- match type (allowed values: `"exact-direct-match"`, `"exact-indirect-match"`, or `"cpe-match"`)
+- package name (e.g. `"libcurl"`)
+- package version (e.g. `"1.5.1"`)
+- package language (e.g. `"python"`; these values are defined [here](https://github.com/anchore/syft/blob/main/syft/pkg/language.go#L14-L23))
+- package type (e.g. `"npm"`; these values are defined [here](https://github.com/anchore/syft/blob/main/syft/pkg/type.go#L10-L24))
+- package location (e.g. `"/usr/local/lib/node_modules/**"`; supports glob patterns)
+- package upstream name (e.g. `"curl"`)
+
+You can also document a rule with a free-form `reason`, and use VEX-specific fields (`vex-status` and `vex-justification`) when providing VEX data.
+
+Here's an example `~/.grype.yaml` that demonstrates the expected format for ignore rules:
+
+```yaml
+ignore:
+ # This is the full set of supported rule fields:
+ - vulnerability: CVE-2008-4318
+ include-aliases: true
+ reason: "False positive due to bundled curl version"
+ fix-state: unknown
+ namespace: nvd
+ # VEX fields apply when Grype reads vex data:
+ vex-status: not_affected
+ vex-justification: vulnerable_code_not_present
+ match-type: exact-direct-match
+ package:
+ name: libcurl
+ version: 1.5.1
+ language: python
+ type: npm
+ location: "/usr/local/lib/node_modules/**"
+ upstream-name: curl
+
+ # We can make rules to match just by vulnerability ID:
+ - vulnerability: CVE-2014-54321
+
+ # ...or just by a single package field:
+ - package:
+ type: gem
+```
+
+Vulnerability matches will be ignored if **any** rules apply to the match. A rule is considered to apply to a given vulnerability match only if **all** fields specified in the rule apply to the vulnerability match.
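+
+The any-rule/all-fields semantics can be sketched as follows (rules and matches are flattened to plain dicts for illustration):
+
+```python
+def rule_applies(rule: dict, match: dict) -> bool:
+    """All fields specified in the rule must match the vulnerability match."""
+    return all(match.get(field) == value for field, value in rule.items())
+
+def is_ignored(match: dict, rules: list) -> bool:
+    """A match is ignored if any rule applies to it."""
+    return any(rule_applies(rule, match) for rule in rules)
+```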
+
+When you run Grype while specifying ignore rules, the following happens to the vulnerability matches that are "ignored":
+
+- Ignored matches are **completely hidden** from Grype's output, except for when using the `json` or `template` output formats; however, in these two formats, the ignored matches are **removed** from the existing `matches` array field, and they are placed in a new `ignoredMatches` array field. Each listed ignored match also has an additional field, `appliedIgnoreRules`, which is an array of any rules that caused Grype to ignore this vulnerability match.
+
+- Ignored matches **do not** factor into Grype's exit status decision when using `--fail-on <severity>`. For instance, if a user specifies `--fail-on critical`, and all of the vulnerability matches found with a "critical" severity have been _ignored_, Grype will exit zero.
+
+**Note:** Please continue to **[report](https://github.com/anchore/grype/issues/new/choose)** any false positives you see! Even if you can reliably filter out false positives using ignore rules, it's very helpful to the Grype community if we have as much knowledge about Grype's false positives as possible. This helps us continuously improve Grype!
+
+### Showing only "fixed" vulnerabilities
+
+If you only want Grype to report vulnerabilities **that have a confirmed fix**, you can use the `--only-fixed` flag. (This automatically adds [ignore rules](#specifying-matches-to-ignore) into Grype's configuration, such that vulnerabilities that aren't fixed will be ignored.)
+
+For example, here's a scan of Alpine 3.10:
+
+```
+NAME          INSTALLED  FIXED-IN   VULNERABILITY   SEVERITY
+apk-tools     2.10.6-r0  2.10.7-r0  CVE-2021-36159  Critical
+libcrypto1.1  1.1.1k-r0             CVE-2021-3711   Critical
+libcrypto1.1  1.1.1k-r0             CVE-2021-3712   High
+libssl1.1     1.1.1k-r0             CVE-2021-3712   High
+libssl1.1     1.1.1k-r0             CVE-2021-3711   Critical
+```
+
+...and here's the same scan, but adding the flag `--only-fixed`:
+
+```
+NAME          INSTALLED  FIXED-IN   VULNERABILITY   SEVERITY
+apk-tools     2.10.6-r0  2.10.7-r0  CVE-2021-36159  Critical
+```
+
+If you want Grype to only report vulnerabilities **that do not have a confirmed fix**, you can use the `--only-notfixed` flag. Alternatively, you can use the `--ignore-states` flag to filter results for vulnerabilities with specific states such as `wont-fix` (see `--help` for a list of valid fix states). These flags automatically add [ignore rules](#specifying-matches-to-ignore) into Grype's configuration, such that vulnerabilities which are fixed, or will not be fixed, will be ignored.
+
+## VEX Support
+
+Grype can use VEX (Vulnerability Exploitability Exchange) data to filter false
+positives or provide additional context, augmenting matches. When scanning a
+container image, you can use the `--vex` flag to point to one or more
+[OpenVEX](https://github.com/openvex) documents.
+
+VEX statements relate a product (a container image), a vulnerability, and a VEX
+status to express an assertion of the vulnerability's impact. There are four
+[VEX statuses](https://github.com/openvex/spec/blob/main/OPENVEX-SPEC.md#status-labels):
+`not_affected`, `affected`, `fixed` and `under_investigation`.
+
+Here is an example of a simple OpenVEX document. (tip: use
+[`vexctl`](https://github.com/openvex/vexctl) to generate your own documents).
+
+```json
+{
+ "@context": "https://openvex.dev/ns/v0.2.0",
+ "@id": "https://openvex.dev/docs/public/vex-d4e9020b6d0d26f131d535e055902dd6ccf3e2088bce3079a8cd3588a4b14c78",
+ "author": "A Grype User ",
+ "timestamp": "2023-07-17T18:28:47.696004345-06:00",
+ "version": 1,
+ "statements": [
+ {
+ "vulnerability": {
+ "name": "CVE-2023-1255"
+ },
+ "products": [
+ {
+ "@id": "pkg:oci/alpine@sha256%3A124c7d2707904eea7431fffe91522a01e5a861a624ee31d03372cc1d138a3126",
+ "subcomponents": [
+ { "@id": "pkg:apk/alpine/libssl3@3.0.8-r3" },
+ { "@id": "pkg:apk/alpine/libcrypto3@3.0.8-r3" }
+ ]
+ }
+ ],
+ "status": "fixed"
+ }
+ ]
+}
+```
+
+By default, Grype will use any statements in specified VEX documents with a
+status of `not_affected` or `fixed` to move matches to the ignore set.
+
+Any matches ignored as a result of VEX statements are flagged when using
+`--show-suppressed`:
+
+```
+libcrypto3 3.0.8-r3 3.0.8-r4 apk CVE-2023-1255 Medium (suppressed by VEX)
+```
+
+Statements with an `affected` or `under_investigation` status will only be
+considered to augment the result set when specifically requested using the
+`GRYPE_VEX_ADD` environment variable or in a configuration file.
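+
+A minimal sketch of the default suppression behavior, assuming a hypothetical map of vulnerability IDs to VEX statuses:
+
+```python
+SUPPRESSING = {"not_affected", "fixed"}
+
+def apply_vex(matches: list, statements: dict) -> tuple:
+    """Partition matches into (reported, suppressed) using VEX statuses.
+
+    statements: hypothetical map of vulnerability ID -> VEX status string
+    """
+    reported, suppressed = [], []
+    for m in matches:
+        if statements.get(m["id"]) in SUPPRESSING:
+            suppressed.append(m)
+        else:
+            reported.append(m)
+    return reported, suppressed
+```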
+
+
+### VEX Ignore Rules
+
+Ignore rules can be written to control how Grype honors VEX statements. For
+example, to configure Grype to only act on VEX statements when the justification is `vulnerable_code_not_present`, you can write a rule like this:
+
+```yaml
+---
+ignore:
+ - vex-status: not_affected
+ vex-justification: vulnerable_code_not_present
+```
+
+See the [list of justifications](https://github.com/openvex/spec/blob/main/OPENVEX-SPEC.md#status-justifications) for details. You can mix `vex-status` and `vex-justification`
+with other ignore rule parameters.
+
+## Grype's database
+
+When Grype performs a scan for vulnerabilities, it does so using a vulnerability database that's stored on your local filesystem, which is constructed by pulling data from a variety of publicly available vulnerability data sources. These sources include:
+
+- Alpine Linux SecDB: https://secdb.alpinelinux.org/
+- Amazon Linux ALAS: https://alas.aws.amazon.com/AL2/alas.rss
+- Chainguard SecDB: https://packages.cgr.dev/chainguard/security.json
+- Debian Linux CVE Tracker: https://security-tracker.debian.org/tracker/data/json
+- Echo Security Advisories: https://advisory.echohq.com/data.json
+- GitHub Security Advisories (GHSAs): https://github.com/advisories
+- MinimOS SecDB: https://packages.mini.dev/advisories/secdb/security.json
+- National Vulnerability Database (NVD): https://nvd.nist.gov/vuln/data-feeds
+- Oracle Linux OVAL: https://linux.oracle.com/security/oval/
+- RedHat Linux Security Data: https://access.redhat.com/hydra/rest/securitydata/
+- RedHat RHSAs: https://www.redhat.com/security/data/oval/
+- SUSE Linux OVAL: https://ftp.suse.com/pub/projects/security/oval/
+- Ubuntu Linux Security: https://people.canonical.com/~ubuntu-security/
+- Wolfi SecDB: https://packages.wolfi.dev/os/security.json
+
+By default, Grype automatically manages this database for you. Grype checks for new updates to the vulnerability database to make sure that every scan uses up-to-date vulnerability information. This behavior is configurable. For more information, see the [Managing Grype's database](#managing-grypes-database) section.
+
+### How database updates work
+
+Grype's vulnerability database is a SQLite file, named `vulnerability.db`. Updates to the database are atomic: the entire database is replaced and then treated as "readonly" by Grype.
+
+Grype's first step in a database update is discovering databases that are available for retrieval. Grype does this by requesting a "latest database file" from a public endpoint:
+
+https://grype.anchore.io/databases/v6/latest.json
+
+The latest database file contains an entry for the most recent database available for download.
+
+Here's an example of an entry in the latest database file:
+
+```json
+{
+ "status": "active",
+ "schemaVersion": "6.0.0",
+ "built": "2025-02-11T04:06:41Z",
+ "path": "vulnerability-db_v6.0.0_2025-02-11T01:30:51Z_1739246801.tar.zst",
+ "checksum": "sha256:79bfa04265c5a32d21773ad0da1bda13c31e932fa1e1422db635c8d714038868"
+}
+```
+
+With this information, Grype can find the most recently built database with the current schema version, download the database, and verify the database's integrity using the `checksum` value.
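+
+The integrity check can be sketched as: split the `checksum` field into scheme and digest, hash the downloaded archive, and compare (an illustrative helper, not Grype's actual code):
+
+```python
+import hashlib
+
+def verify_archive(archive_bytes: bytes, checksum: str) -> bool:
+    """Verify archive bytes against a '<scheme>:<hex digest>' checksum field."""
+    scheme, _, expected = checksum.partition(":")
+    digest = hashlib.new(scheme, archive_bytes).hexdigest()
+    return digest == expected
+```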
+
+### Managing Grype's database
+
+> **Note:** During normal usage, _there is no need for users to manage Grype's database!_ Grype manages its database behind the scenes. However, for users that need more control, Grype provides options to manage the database more explicitly.
+
+#### Local database cache directory
+
+By default, the database is cached on the local filesystem in the directory `$XDG_CACHE_HOME/grype/db/<schema-version>/`. For example, on macOS, the database would be stored in `~/Library/Caches/grype/db/6/`. (For more information on XDG paths, refer to the [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html).)
+
+You can set the cache directory path using the environment variable `GRYPE_DB_CACHE_DIR`. If setting that variable alone does not work, then the `TMPDIR` environment variable might also need to be set.
+
+#### Data staleness
+
+Grype needs up-to-date vulnerability information to provide accurate matches. By default, it will fail execution if the local database was not built in the last 5 days. The data staleness check is configurable via the environment variables `GRYPE_DB_MAX_ALLOWED_BUILT_AGE` and `GRYPE_DB_VALIDATE_AGE`, or the fields `max-allowed-built-age` and `validate-age` under `db`. These use [golang's time duration syntax](https://pkg.go.dev/time#ParseDuration). Set `GRYPE_DB_VALIDATE_AGE` or `validate-age` to `false` to disable the staleness check.
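+
+A sketch of the staleness check, with a minimal parser for Go-style durations (it handles only the `h`, `m`, and `s` units; sub-second units like `ms` are out of scope here):
+
+```python
+import re
+from datetime import datetime, timedelta, timezone
+
+UNITS = {"h": 3600, "m": 60, "s": 1}
+
+def parse_go_duration(s: str) -> timedelta:
+    """Minimal parser for Go durations like '120h' or '1h30m'."""
+    seconds = sum(float(n) * UNITS[u] for n, u in re.findall(r"([\d.]+)([hms])", s))
+    return timedelta(seconds=seconds)
+
+def is_stale(built: datetime, max_age: str = "120h") -> bool:
+    """True if the DB build time exceeds the allowed age (default 5 days)."""
+    return datetime.now(timezone.utc) - built > parse_go_duration(max_age)
+```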
+
+#### Offline and air-gapped environments
+
+By default, Grype checks for a new database on every run, by making a network request over the internet.
+You can tell Grype not to perform this check by setting the environment variable `GRYPE_DB_AUTO_UPDATE` to `false`.
+
+As long as you place Grype's `vulnerability.db` and `import.json` files in the cache directory for the expected schema version, Grype has no need to access the network.
+Additionally, you can get a reference to the latest database archive for download from the `grype db list` command in an online environment, download the database archive, transfer it to your offline environment, and use `grype db import <archive-path>` to use the given database in an offline capacity.
+
+If you would like to distribute your own Grype databases internally without needing to use `db import` manually you can leverage Grype's DB update mechanism. To do this you can craft your own `latest.json` file similar to the public "latest database file" and change the download URL to point to an internal endpoint (e.g. a private S3 bucket, an internal file server, etc.). Any internal installation of Grype can receive database updates automatically by configuring the `db.update-url` (same as the `GRYPE_DB_UPDATE_URL` environment variable) to point to the hosted `latest.json` file you've crafted.
+
+#### CLI commands for database management
+
+Grype provides database-specific CLI commands for users that want to control the database from the command line. Here are some of the useful commands provided:
+
+`grype db status` — report the current status of Grype's database (such as its location, build date, and checksum)
+
+`grype db check` — see if updates are available for the database
+
+`grype db update` — ensure the latest database has been downloaded to the cache directory (Grype performs this operation at the beginning of every scan by default)
+
+`grype db list` — download the latest database file configured at `db.update-url` and show the database available for download
+
+`grype db import` — provide grype with a database archive to explicitly use (useful for offline DB updates)
+
+`grype db providers` — provides a detailed list of database providers
+
+Find complete information on Grype's database commands by running `grype db --help`.
+
+## Shell completion
+
+Grype supplies shell completion through its CLI implementation ([cobra](https://github.com/spf13/cobra/blob/master/shell_completions.md)). Generate the completion code for your shell by running one of the following commands:
+
+- `grype completion <shell>`
+- `go run ./cmd/grype completion <shell>`
+
+This will output a shell script to STDOUT, which can then be used as a completion script for Grype. Running one of the above commands with the
+`-h` or `--help` flags will provide instructions on how to do that for your chosen shell.
+
+## Private Registry Authentication
+
+### Local Docker Credentials
+
+When a container runtime is not present, grype can still utilize credentials configured in common credential sources (such as `~/.docker/config.json`).
+It will pull images from private registries using these credentials. The config file is where your credentials are stored when authenticating with private registries via some command like `docker login`.
+For more information see the `go-containerregistry` [documentation](https://github.com/google/go-containerregistry/tree/main/pkg/authn).
+
+An example `config.json` looks something like this:
+
+```
+// config.json
+{
+ "auths": {
+ "registry.example.com": {
+ "username": "AzureDiamond",
+ "password": "hunter2"
+ }
+ }
+}
+```
+
+You can run the following command as an example. It details the mount/environment configuration a container needs to access a private registry:
+
+`docker run -v ./config.json:/config/config.json -e "DOCKER_CONFIG=/config" anchore/grype:latest <image>`
+
+### Docker Credentials in Kubernetes
+
+The section below shows a simple workflow for mounting this config file as a secret into a container on Kubernetes.
+
+1. Create a secret. The value of `config.json` is important. It refers to the specification detailed [here](https://github.com/google/go-containerregistry/tree/main/pkg/authn#the-config-file).
+ Below this section is the `secret.yaml` file that the pod configuration will consume as a volume.
+ The key `config.json` is important. It will end up being the name of the file when mounted into the pod.
+   ```yaml
+   # secret.yaml
+   apiVersion: v1
+   kind: Secret
+   metadata:
+     name: registry-config
+     namespace: grype
+   data:
+     config.json: <base64-encoded config.json>
+   ```
+
+ `kubectl apply -f secret.yaml`
+
+2. Create your pod running grype. The env `DOCKER_CONFIG` is important because it advertises where to look for the credential file.
+ In the below example, setting `DOCKER_CONFIG=/config` informs grype that credentials can be found at `/config/config.json`.
+ This is why we used `config.json` as the key for our secret. When mounted into containers the secrets' key is used as the filename.
+ The `volumeMounts` section mounts our secret to `/config`. The `volumes` section names our volume and leverages the secret we created in step one.
+   ```yaml
+   # pod.yaml
+   apiVersion: v1
+   kind: Pod
+   metadata:
+     name: grype-private-registry-demo
+   spec:
+     containers:
+       - image: anchore/grype:latest
+         name: grype-private-registry-demo
+         env:
+           - name: DOCKER_CONFIG
+             value: /config
+         volumeMounts:
+           - mountPath: /config
+             name: registry-config
+             readOnly: true
+         args:
+           - <image>
+     volumes:
+       - name: registry-config
+         secret:
+           secretName: registry-config
+   ```
+
+ `kubectl apply -f pod.yaml`
+
+3. The user can now run `kubectl logs grype-private-registry-demo`. The logs should show the grype analysis for the `<image>` provided in the pod configuration.
+
+Using the above information, users should be able to configure private registry access without having to do so in the `grype` or `syft` configuration files.
+They will also not be dependent on a Docker daemon (or other runtime software) for registry configuration and access.
+
+## Configuration
+
+Default configuration search paths (see all with `grype config locations`):
+
+- `.grype.yaml`
+- `.grype/config.yaml`
+- `~/.grype.yaml`
+- `$XDG_CONFIG_HOME/grype/config.yaml`
+
+Use `grype config` to print a sample config file to stdout.
+Use `grype config --load` to print the current config after loading all values to stdout.
+
+You can specify files directly using the `--config` / `-c` flags (or environment variable `GRYPE_CONFIG`) to provide your own configuration files/paths:
+
+```shell
+# Using the flag
+grype -c /path/to/config.yaml
+# Or using the environment variable
+GRYPE_CONFIG=/path/to/config.yaml grype
+```
+
+Configuration options (example values are the default):
+
+```yaml
+# the output format of the vulnerability report (options: table, template, json, cyclonedx)
+# when using template as the output type, you must also provide a value for 'output-template-file' (env: GRYPE_OUTPUT)
+output: 'table'
+
+# if using template output, you must provide a path to a Go template file
+# see https://github.com/anchore/grype#using-templates for more information on template output
+# the default path to the template file is the current working directory
+# output-template-file: .grype/html.tmpl
+#
+# write output report to a file (default is to write to stdout) (env: GRYPE_FILE)
+file: ''
+
+# pretty-print JSON output (env: GRYPE_PRETTY)
+pretty: false
+
+# distro to match against in the format: <distro>:<version> (env: GRYPE_DISTRO)
+distro: ''
+
+# generate CPEs for packages with no CPE data (env: GRYPE_ADD_CPES_IF_NONE)
+add-cpes-if-none: false
+
+# specify the path to a Go template file (requires 'template' output to be selected) (env: GRYPE_OUTPUT_TEMPLATE_FILE)
+output-template-file: ''
+
+# enable/disable checking for application updates on startup (env: GRYPE_CHECK_FOR_APP_UPDATE)
+check-for-app-update: true
+
+# ignore matches for vulnerabilities that are not fixed (env: GRYPE_ONLY_FIXED)
+only-fixed: false
+
+# ignore matches for vulnerabilities that are fixed (env: GRYPE_ONLY_NOTFIXED)
+only-notfixed: false
+
+# ignore matches for vulnerabilities with specified comma separated fix states, options=[fixed not-fixed unknown wont-fix] (env: GRYPE_IGNORE_WONTFIX)
+ignore-wontfix: ''
+
+# an optional platform specifier for container image sources (e.g. 'linux/arm64', 'linux/arm64/v8', 'arm64', 'linux') (env: GRYPE_PLATFORM)
+platform: ''
+
+# upon scanning, if a severity is found at or above the given severity then the return code will be 1
+# default is unset which will skip this validation (options: negligible, low, medium, high, critical) (env: GRYPE_FAIL_ON_SEVERITY)
+fail-on-severity: ''
+
+# show suppressed/ignored vulnerabilities in the output (only supported with table output format) (env: GRYPE_SHOW_SUPPRESSED)
+show-suppressed: false
+
+# orient results by CVE instead of the original vulnerability ID when possible (env: GRYPE_BY_CVE)
+by-cve: false
+
+# sort the match results with the given strategy, options=[package severity epss risk kev vulnerability] (env: GRYPE_SORT_BY)
+sort-by: 'risk'
+
+# same as --name; set the name of the target being analyzed (env: GRYPE_NAME)
+name: ''
+
+# allows users to specify which image source should be used to generate the sbom
+# valid values are: registry, docker, podman (env: GRYPE_DEFAULT_IMAGE_PULL_SOURCE)
+default-image-pull-source: ''
+
+search:
+ # selection of layers to analyze, options=[squashed all-layers] (env: GRYPE_SEARCH_SCOPE)
+ scope: 'squashed'
+
+ # search within archives that do not contain a file index to search against (tar, tar.gz, tar.bz2, etc)
+ # note: enabling this may result in a performance impact since all discovered compressed tars will be decompressed
+ # note: for now this only applies to the java package cataloger (env: GRYPE_SEARCH_UNINDEXED_ARCHIVES)
+ unindexed-archives: false
+
+ # search within archives that do contain a file index to search against (zip)
+ # note: for now this only applies to the java package cataloger (env: GRYPE_SEARCH_INDEXED_ARCHIVES)
+ indexed-archives: true
+
+# A list of vulnerability ignore rules, one or more property may be specified and all matching vulnerabilities will be ignored.
+# This is the full set of supported rule fields:
+# - vulnerability: CVE-2008-4318
+# fix-state: unknown
+# package:
+# name: libcurl
+# version: 1.5.1
+# type: npm
+# location: "/usr/local/lib/node_modules/**"
+#
+# VEX fields apply when Grype reads vex data:
+# - vex-status: not_affected
+# vex-justification: vulnerable_code_not_present
+ignore: []
+
+# a list of globs to exclude from scanning, for example:
+# - '/etc/**'
+# - './out/**/*.json'
+# same as --exclude (env: GRYPE_EXCLUDE)
+exclude: []
+
+external-sources:
+ # enable Grype searching network source for additional information (env: GRYPE_EXTERNAL_SOURCES_ENABLE)
+ enable: false
+
+ maven:
+ # search for Maven artifacts by SHA1 (env: GRYPE_EXTERNAL_SOURCES_MAVEN_SEARCH_MAVEN_UPSTREAM)
+ search-maven-upstream: true
+
+ # base URL of the Maven repository to search (env: GRYPE_EXTERNAL_SOURCES_MAVEN_BASE_URL)
+ base-url: 'https://search.maven.org/solrsearch/select'
+
+ # (env: GRYPE_EXTERNAL_SOURCES_MAVEN_RATE_LIMIT)
+ rate-limit: 300ms
+
+match:
+ java:
+ # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_JAVA_USING_CPES)
+ using-cpes: false
+
+ jvm:
+ # (env: GRYPE_MATCH_JVM_USING_CPES)
+ using-cpes: true
+
+ dotnet:
+ # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_DOTNET_USING_CPES)
+ using-cpes: false
+
+ golang:
+ # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_GOLANG_USING_CPES)
+ using-cpes: false
+
+ # use CPE matching to find vulnerabilities for the Go standard library (env: GRYPE_MATCH_GOLANG_ALWAYS_USE_CPE_FOR_STDLIB)
+ always-use-cpe-for-stdlib: true
+
+ # allow comparison between main module pseudo-versions (e.g. v0.0.0-20240413-2b432cf643...) (env: GRYPE_MATCH_GOLANG_ALLOW_MAIN_MODULE_PSEUDO_VERSION_COMPARISON)
+ allow-main-module-pseudo-version-comparison: false
+
+ javascript:
+ # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_JAVASCRIPT_USING_CPES)
+ using-cpes: false
+
+ python:
+ # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_PYTHON_USING_CPES)
+ using-cpes: false
+
+ ruby:
+ # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_RUBY_USING_CPES)
+ using-cpes: false
+
+ rust:
+ # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_RUST_USING_CPES)
+ using-cpes: false
+
+ stock:
+ # use CPE matching to find vulnerabilities (env: GRYPE_MATCH_STOCK_USING_CPES)
+ using-cpes: true
+
+
+registry:
+ # skip TLS verification when communicating with the registry (env: GRYPE_REGISTRY_INSECURE_SKIP_TLS_VERIFY)
+ insecure-skip-tls-verify: false
+
+ # use http instead of https when connecting to the registry (env: GRYPE_REGISTRY_INSECURE_USE_HTTP)
+ insecure-use-http: false
+
+ # Authentication credentials for specific registries. Each entry describes authentication for a specific authority:
+ # - authority: the URL to the registry (e.g. "docker.io", "localhost:5000", etc.) (env: SYFT_REGISTRY_AUTH_AUTHORITY)
+ # username: a username if using basic credentials (env: SYFT_REGISTRY_AUTH_USERNAME)
+ # password: a corresponding password (env: SYFT_REGISTRY_AUTH_PASSWORD)
+ # token: a token if using token-based authentication, mutually exclusive with username/password (env: SYFT_REGISTRY_AUTH_TOKEN)
+ # tls-cert: filepath to the client certificate used for TLS authentication to the registry (env: SYFT_REGISTRY_AUTH_TLS_CERT)
+ # tls-key: filepath to the client key used for TLS authentication to the registry (env: SYFT_REGISTRY_AUTH_TLS_KEY)
+ auth: []
+
+ # filepath to a CA certificate (or directory containing *.crt, *.cert, *.pem) used to generate the client certificate (env: GRYPE_REGISTRY_CA_CERT)
+ ca-cert: ''
+
+# a list of VEX documents to consider when producing scanning results (env: GRYPE_VEX_DOCUMENTS)
+vex-documents: []
+
+# VEX statuses to consider as ignored rules (env: GRYPE_VEX_ADD)
+vex-add: []
+
+# match kernel-header packages with upstream kernel as kernel vulnerabilities (env: GRYPE_MATCH_UPSTREAM_KERNEL_HEADERS)
+match-upstream-kernel-headers: false
+
+db:
+ # location to write the vulnerability database cache (env: GRYPE_DB_CACHE_DIR)
+ cache-dir: '~/Library/Caches/grype/db'
+
+ # URL of the vulnerability database (env: GRYPE_DB_UPDATE_URL)
+ update-url: 'https://grype.anchore.io/databases'
+
+ # certificate to trust download the database and listing file (env: GRYPE_DB_CA_CERT)
+ ca-cert: ''
+
+ # check for database updates on execution (env: GRYPE_DB_AUTO_UPDATE)
+ auto-update: true
+
+ # validate the database matches the known hash each execution (env: GRYPE_DB_VALIDATE_BY_HASH_ON_START)
+ validate-by-hash-on-start: true
+
+ # ensure db build is no older than the max-allowed-built-age (env: GRYPE_DB_VALIDATE_AGE)
+ validate-age: true
+
+ # Max allowed age for vulnerability database,
+ # age being the time since it was built
+ # Default max age is 120h (or five days) (env: GRYPE_DB_MAX_ALLOWED_BUILT_AGE)
+ max-allowed-built-age: 120h0m0s
+
+ # fail the scan if unable to check for database updates (env: GRYPE_DB_REQUIRE_UPDATE_CHECK)
+ require-update-check: false
+
+ # Timeout for downloading GRYPE_DB_UPDATE_URL to see if the database needs to be downloaded
+ # This file is ~156KiB as of 2024-04-17 so the download should be quick; adjust as needed (env: GRYPE_DB_UPDATE_AVAILABLE_TIMEOUT)
+ update-available-timeout: 30s
+
+ # Timeout for downloading actual vulnerability DB
+ # The DB is ~156MB as of 2024-04-17 so slower connections may exceed the default timeout; adjust as needed (env: GRYPE_DB_UPDATE_DOWNLOAD_TIMEOUT)
+ update-download-timeout: 5m0s
+
+ # Maximum frequency to check for vulnerability database updates (env: GRYPE_DB_MAX_UPDATE_CHECK_FREQUENCY)
+ max-update-check-frequency: 2h0m0s
+
+log:
+ # suppress all logging output (env: GRYPE_LOG_QUIET)
+ quiet: false
+
+ # explicitly set the logging level (available: [error warn info debug trace]) (env: GRYPE_LOG_LEVEL)
+ level: 'warn'
+
+ # file path to write logs to (env: GRYPE_LOG_FILE)
+ file: ''
+
+dev:
+ # capture resource profiling data (available: [cpu, mem]) (env: GRYPE_DEV_PROFILE)
+ profile: ''
+
+ db:
+ # show sql queries in trace logging (requires -vv) (env: GRYPE_DEV_DB_DEBUG)
+ debug: false
+
+# include a timestamp (env: GRYPE_TIMESTAMP)
+timestamp: true
+```
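+
+As a concrete illustration, a small `.grype.yaml` that activates a few of the options above might look like this (the CVE ID, registry, and credentials are placeholder values):
+
+```yaml
+# illustrative .grype.yaml; CVE, registry, and credentials are placeholders
+output: json
+fail-on-severity: high
+ignore:
+  - vulnerability: CVE-2008-4318
+    package:
+      name: libcurl
+registry:
+  auth:
+    - authority: my-registry.example.com:5000
+      username: demo-user
+      password: demo-password
+```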
+
+## Future plans
+
+The following areas of potential development are currently being investigated:
+
+- Support for allowlist, package mapping
+
+
+## Grype Logo
+
+
diff --git a/data/readmes/hadolint-v2140.md b/data/readmes/hadolint-v2140.md
new file mode 100644
index 0000000..6aad2b1
--- /dev/null
+++ b/data/readmes/hadolint-v2140.md
@@ -0,0 +1,726 @@
+# Hadolint - README (v2.14.0)
+
+**Repository**: https://github.com/hadolint/hadolint
+**Version**: v2.14.0
+
+---
+
+# Haskell Dockerfile Linter
+
+[![Build Status][github-actions-img]][github-actions]
+[![GPL-3 licensed][license-img]][license]
+[![GitHub release][release-img]][release]
+![GitHub downloads][downloads-img]
+
+
+A smarter Dockerfile linter that helps you build [best practice][] Docker
+images. The linter parses the Dockerfile into an AST and performs rules on
+top of the AST. It stands on the shoulders of [ShellCheck][] to lint
+the Bash code inside `RUN` instructions.
+
+[:globe_with_meridians: **Check the online version on
+hadolint.github.io/hadolint**](https://hadolint.github.io/hadolint)
+[](https://hadolint.github.io/hadolint)
+
+## Table of Contents
+
+- [How to use](#how-to-use)
+- [Install](#install)
+- [CLI](#cli)
+- [Configure](#configure)
+- [Non-Posix Shells](#non-posix-shells)
+- [Ignoring Rules](#ignoring-rules)
+ - [Inline ignores](#inline-ignores)
+ - [Global ignores](#global-ignores)
+- [Linting Labels](#linting-labels)
+ - [Note on dealing with variables in labels](#note-on-dealing-with-variables-in-labels)
+- [Integrations](#integrations)
+- [Rules](#rules)
+- [Develop](#develop)
+ - [Setup](#setup)
+ - [REPL](#repl)
+ - [Tests](#tests)
+ - [AST](#ast)
+ - [Building against custom libraries](#building-against-custom-libraries)
+- [Alternatives](#alternatives)
+
+## How to use
+
+You can run `hadolint` locally to lint your Dockerfile.
+
+```bash
+hadolint <Dockerfile>
+hadolint --ignore DL3003 --ignore DL3006 <Dockerfile> # exclude specific rules
+hadolint --trusted-registry my-company.com:500 <Dockerfile> # Warn when using untrusted FROM images
+```
+
+Docker comes to the rescue, providing an easy way to run `hadolint` on most
+platforms.
+Just pipe your `Dockerfile` to `docker run`:
+
+```bash
+docker run --rm -i hadolint/hadolint < Dockerfile
+# OR
+docker run --rm -i ghcr.io/hadolint/hadolint < Dockerfile
+```
+
+or using [Podman](https://podman.io/):
+
+```bash
+podman run --rm -i docker.io/hadolint/hadolint < Dockerfile
+# OR
+podman run --rm -i ghcr.io/hadolint/hadolint < Dockerfile
+```
+
+or using Windows PowerShell:
+
+```powershell
+cat .\Dockerfile | docker run --rm -i hadolint/hadolint
+```
+
+## Install
+
+You can download prebuilt binaries for OSX, Windows and Linux from the latest
+[release page][]. However, if this does not work for you, please fall back to
+the container (Docker), `brew` or source installation.
+
+On OSX, you can use [brew](https://brew.sh/) to install `hadolint`.
+
+```bash
+brew install hadolint
+```
+
+On Windows, you can use [scoop](https://github.com/lukesampson/scoop) to
+install `hadolint`.
+
+```batch
+scoop install hadolint
+```
+
+On distributions that have `nix` installed, you can use the `hadolint`
+package to run ad-hoc shells or permanently install `hadolint` into
+your environment.
+
+As mentioned earlier, `hadolint` is available as a container image:
+
+```bash
+docker pull hadolint/hadolint
+# OR
+docker pull ghcr.io/hadolint/hadolint
+```
+
+If you need a container with shell access, use the Debian or Alpine
+variants:
+
+```bash
+docker pull hadolint/hadolint:latest-debian
+# OR
+docker pull hadolint/hadolint:latest-alpine
+# OR
+docker pull ghcr.io/hadolint/hadolint:latest-debian
+# OR
+docker pull ghcr.io/hadolint/hadolint:latest-alpine
+```
+
+You can also build `hadolint` locally. You need [Haskell][] and the [cabal][]
+build tool to build the binary.
+
+```bash
+git clone https://github.com/hadolint/hadolint \
+ && cd hadolint \
+ && cabal configure \
+ && cabal build \
+ && cabal install
+```
+
+If you want the
+[VS Code Hadolint](https://github.com/michaellzc/vscode-hadolint)
+extension to use Hadolint in a container, you can use the following
+[wrapper script](https://github.com/hadolint/hadolint/issues/691#issuecomment-932116329):
+
+```bash
+#!/bin/bash
+dockerfile="$1"
+shift
+docker run --rm -i hadolint/hadolint hadolint "$@" - < "$dockerfile"
+```
+
+## CLI
+
+```bash
+hadolint --help
+```
+
+```text
+hadolint - Dockerfile Linter written in Haskell
+
+Usage: hadolint [-v|--version] [-c|--config FILENAME] [DOCKERFILE...]
+ [--file-path-in-report FILEPATHINREPORT] [--no-fail]
+ [--no-color] [-V|--verbose] [-f|--format ARG] [--error RULECODE]
+ [--warning RULECODE] [--info RULECODE] [--style RULECODE]
+ [--ignore RULECODE]
+ [--trusted-registry REGISTRY (e.g. docker.io)]
+ [--require-label LABELSCHEMA (e.g. maintainer:text)]
+ [--strict-labels] [--disable-ignore-pragma]
+ [-t|--failure-threshold THRESHOLD]
+ Lint Dockerfile for errors and best practices
+
+Available options:
+ -h,--help Show this help text
+ -v,--version Show version
+ -c,--config FILENAME Path to the configuration file
+ --file-path-in-report FILEPATHINREPORT
+ The file path referenced in the generated report.
+ This only applies for the 'checkstyle' format and is
+ useful when running Hadolint with Docker to set the
+ correct file path.
+ --no-fail Don't exit with a failure status code when any rule
+ is violated
+ --no-color Don't colorize output
+ -V,--verbose Enables verbose logging of hadolint's output to
+ stderr
+ -f,--format ARG The output format for the results [tty | json |
+ checkstyle | codeclimate | gitlab_codeclimate | gnu |
+ codacy | sonarqube | sarif] (default: tty)
+ --error RULECODE Make the rule `RULECODE` have the level `error`
+ --warning RULECODE Make the rule `RULECODE` have the level `warning`
+ --info RULECODE Make the rule `RULECODE` have the level `info`
+ --style RULECODE Make the rule `RULECODE` have the level `style`
+ --ignore RULECODE A rule to ignore. If present, the ignore list in the
+ config file is ignored
+ --trusted-registry REGISTRY (e.g. docker.io)
+ A docker registry to allow to appear in FROM
+ instructions
+ --require-label LABELSCHEMA (e.g. maintainer:text)
+ The option --require-label=label:format makes
+ Hadolint check that the label `label` conforms to
+ format requirement `format`
+ --strict-labels Do not permit labels other than specified in
+ `label-schema`
+ --disable-ignore-pragma Disable inline ignore pragmas `# hadolint
+ ignore=DLxxxx`
+ -t,--failure-threshold THRESHOLD
+ Exit with failure code only when rules with a
+ severity equal to or above THRESHOLD are violated.
+ Accepted values: [error | warning | info | style |
+ ignore | none] (default: info)
+```
+
+## Configure
+
+Configuration files can be used globally or per project.
+Hadolint looks for configuration files in the following locations (or their
+platform-specific equivalents), in this order, and uses the first one found exclusively:
+
+- `$PWD/.hadolint.yaml`
+- `$XDG_CONFIG_HOME/hadolint.yaml`
+- `$HOME/.config/hadolint.yaml`
+- `$HOME/.hadolint/hadolint.yaml` or `$HOME/hadolint/config.yaml`
+- `$HOME/.hadolint.yaml`
+
+On Windows, the `%LOCALAPPDATA%` environment variable is used instead of
+`XDG_CONFIG_HOME`. Config files can have either the `yaml` or `yml` extension.
+
+The full `hadolint` `yaml` config file schema:
+
+```yaml
+failure-threshold: string # name of threshold level (error | warning | info | style | ignore | none)
+format: string # Output format (tty | json | checkstyle | codeclimate | gitlab_codeclimate | gnu | codacy)
+ignored: [string] # list of rules
+label-schema: # See Linting Labels below for specific label-schema details
+ author: string # Your name
+ contact: string # email address
+ created: timestamp # rfc3339 datetime
+ version: string # semver
+ documentation: string # url
+ git-revision: string # hash
+ license: string # spdx
+no-color: boolean # true | false
+no-fail: boolean # true | false
+override:
+ error: [string] # list of rules
+ warning: [string] # list of rules
+ info: [string] # list of rules
+ style: [string] # list of rules
+strict-labels: boolean # true | false
+disable-ignore-pragma: boolean # true | false
+trustedRegistries: string | [string] # registry or list of registries
+```
+
+`hadolint` supports specifying the ignored rules using a configuration
+file. The configuration file should be in `yaml` format. This is one
+valid configuration file as an example:
+
+```yaml
+ignored:
+ - DL3000
+ - SC1010
+```
+
+Additionally, `hadolint` can warn you when images from untrusted
+repositories are being used in Dockerfiles. To enable this, append the
+`trustedRegistries` key to the configuration file, as shown below:
+
+```yaml
+ignored:
+ - DL3000
+ - SC1010
+
+trustedRegistries:
+ - docker.io
+ - my-company.com:5000
+ - "*.gcr.io"
+```
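+
+With that configuration in place, a `FROM` instruction that pulls from a registry outside the trusted list is flagged by rule DL3026; an illustrative Dockerfile (image names are placeholders):
+
+```dockerfile
+# allowed: docker.io is listed in trustedRegistries
+FROM docker.io/library/alpine:3.19 AS base
+# flagged by DL3026: registry is not in trustedRegistries
+FROM untrusted.example.com/app:1.0 AS app
+```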
+
+If you want to override the severity of specific rules, you can do that too:
+
+```yaml
+override:
+ error:
+ - DL3001
+ - DL3002
+ warning:
+ - DL3042
+ - DL3033
+ info:
+ - DL3032
+ style:
+ - DL3015
+```
+
+`failure-threshold`: exit with a failure code only when rules with a severity
+equal to or above THRESHOLD are violated (available in v2.6.0+):
+
+```yaml
+failure-threshold: info
+override:
+ warning:
+ - DL3042
+ - DL3033
+ info:
+ - DL3032
+```
+
+Additionally, you can pass a custom configuration file in the command line with
+the `--config` option
+
+```bash
+hadolint --config /path/to/config.yaml Dockerfile
+```
+
+To pass a custom configuration file (using relative or absolute path) to
+a container, use the following command:
+
+```bash
+docker run --rm -i -v /your/path/to/hadolint.yaml:/.config/hadolint.yaml hadolint/hadolint < Dockerfile
+# OR
+docker run --rm -i -v /your/path/to/hadolint.yaml:/.config/hadolint.yaml ghcr.io/hadolint/hadolint < Dockerfile
+```
+
+In addition to config files, Hadolint can be configured with environment
+variables.
+
+```bash
+NO_COLOR=1 # Set or unset. See https://no-color.org
+HADOLINT_NOFAIL=1 # Truthy value e.g. 1, true or yes
+HADOLINT_VERBOSE=1 # Truthy value e.g. 1, true or yes
+HADOLINT_FORMAT=json # Output format (tty | json | checkstyle | codeclimate | gitlab_codeclimate | gnu | codacy | sarif )
+HADOLINT_FAILURE_THRESHOLD=info # threshold level (error | warning | info | style | ignore | none)
+HADOLINT_OVERRIDE_ERROR=DL3010,DL3020 # comma separated list of rule codes
+HADOLINT_OVERRIDE_WARNING=DL3010,DL3020 # comma separated list of rule codes
+HADOLINT_OVERRIDE_INFO=DL3010,DL3020 # comma separated list of rule codes
+HADOLINT_OVERRIDE_STYLE=DL3010,DL3020 # comma separated list of rule codes
+HADOLINT_IGNORE=DL3010,DL3020 # comma separated list of rule codes
+HADOLINT_STRICT_LABELS=1 # Truthy value e.g. 1, true or yes
+HADOLINT_DISABLE_IGNORE_PRAGMA=1 # Truthy value e.g. 1, true or yes
+HADOLINT_TRUSTED_REGISTRIES=docker.io # comma separated list of registry urls
+HADOLINT_REQUIRE_LABELS=maintainer:text # comma separated list of label schema items
+```
+
+## Non-Posix Shells
+
+When using base images that default to a non-POSIX shell (e.g. Windows-based
+images), a special pragma `hadolint shell` can specify which shell the base image
+uses, so that Hadolint can automatically ignore all shell-specific rules.
+
+```Dockerfile
+FROM mcr.microsoft.com/windows/servercore:ltsc2022
+# hadolint shell=powershell
+RUN Get-Process notepad | Stop-Process
+```
+
+## Ignoring Rules
+
+### Inline ignores
+
+It is also possible to ignore rules by adding a special comment directly
+above the Dockerfile statement for which you want to make an exception.
+Such comments look like
+`# hadolint ignore=DL3001,SC1081`. For example:
+
+```dockerfile
+# hadolint ignore=DL3006
+FROM ubuntu
+
+# hadolint ignore=DL3003,SC1035
+RUN cd /tmp && echo "hello!"
+```
+
+An inline ignore comment applies only to the statement directly following it.
+
+### Global ignores
+
+Rules can also be ignored on a per-file basis using the global ignore pragma.
+It works just like inline ignores, except that it applies to the whole file
+instead of just the next line.
+
+```dockerfile
+# hadolint global ignore=DL3003,DL3006,SC1035
+FROM ubuntu
+
+RUN cd /tmp && echo "foo"
+```
+
+## Linting Labels
+
+Hadolint is able to check if specific labels are present and conform
+to a predefined label schema.
+First, a label schema must be defined either via the command line:
+
+```bash
+hadolint --require-label author:text --require-label version:semver Dockerfile
+```
+
+or via the config file:
+
+```yaml
+label-schema:
+ author: text
+ contact: email
+ created: rfc3339
+ version: semver
+ documentation: url
+ git-revision: hash
+ license: spdx
+```
+
+The format of a label value can be one of `text`, `rfc3339`, `semver`, `url`,
+`hash`, `spdx` or `email`:
+
+| Schema | Description |
+|:--------|:---------------------------------------------------|
+| text | Anything |
+| rfc3339 | A time, formatted according to [RFC 3339][rfc3339] |
+| semver | A [semantic version][semver] |
+| url | A URI as described in [RFC 3986][rfc3986] |
+| hash | Either a short or a long [Git hash][githash] |
+| spdx | An [SPDX license identifier][spdxid] |
+| email | An email address conforming to [RFC 5322][rfc5322] |
+
+By default, Hadolint ignores any label that is not specified in the label schema. To
+warn against such additional labels, turn on strict labels, using the command line:
+
+```bash
+hadolint --strict-labels --require-label version:semver Dockerfile
+```
+
+or the config file:
+
+```yaml
+strict-labels: true
+```
+
+When strict labels are enabled but no label schema is specified, `hadolint`
+will warn if any label is present.
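+
+For illustration, with strict labels on and a schema containing only `version`, a Dockerfile like the following (image and label values are made up) would be flagged for the extra label:
+
+```dockerfile
+FROM ubuntu:22.04
+# "version" is in the schema, so this label is checked against its format
+LABEL version="1.0.0"
+# "vendor" is not in the schema, so strict labels will warn about it
+LABEL vendor="example-corp"
+```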
+
+### Note on dealing with variables in labels
+
+It is a common pattern to fill the value of a label not statically, but rather
+dynamically at build time by using a variable:
+
+```dockerfile
+FROM debian:buster
+ARG VERSION="du-jour"
+LABEL version="${VERSION}"
+```
+
+To allow this, the label schema must specify `text` as value for that label:
+
+```yaml
+label-schema:
+ version: text
+```
+
+## Integrations
+
+To get the most out of `hadolint`, it is useful to integrate it as a check in
+your CI, into your editor, or as a pre-commit hook, to lint your `Dockerfile`
+as you write it. See our [Integration][] docs.
+
+- [Code Review Platform Integrations][]
+- [Continuous Integrations][]
+- [Editor Integrations][]
+- [Version Control Integrations][]
+
+## Rules
+
+An incomplete list of implemented rules. Click on the error code to get more
+detailed information.
+
+- Rules with the prefix `DL` are from `hadolint`. Have a look at
+ `Rules.hs` to find the implementation of the rules.
+
+- Rules with the `SC` prefix are from **ShellCheck** (only the most
+ common rules are listed, there are dozens more).
+
+Please [create an issue][] if you have an idea for a good rule.
+
+
+
+| Rule | Default Severity | Description |
+|:-------------------------------------------------------------|:-----------------| :-------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [DL1001](https://github.com/hadolint/hadolint/wiki/DL1001) | Ignore | Please refrain from using inline ignore pragmas `# hadolint ignore=DLxxxx`. |
+| [DL3000](https://github.com/hadolint/hadolint/wiki/DL3000) | Error | Use absolute WORKDIR. |
+| [DL3001](https://github.com/hadolint/hadolint/wiki/DL3001) | Info | Some bash commands make no sense to run in a Docker container, e.g. ssh, vim, shutdown, service, ps, free, top, kill, mount, ifconfig. |
+| [DL3002](https://github.com/hadolint/hadolint/wiki/DL3002) | Warning | Last user should not be root. |
+| [DL3003](https://github.com/hadolint/hadolint/wiki/DL3003) | Warning | Use WORKDIR to switch to a directory. |
+| [DL3004](https://github.com/hadolint/hadolint/wiki/DL3004) | Error | Do not use sudo as it leads to unpredictable behavior. Use a tool like gosu to enforce root. |
+| [DL3006](https://github.com/hadolint/hadolint/wiki/DL3006) | Warning | Always tag the version of an image explicitly. |
+| [DL3007](https://github.com/hadolint/hadolint/wiki/DL3007) | Warning | Using latest is prone to errors if the image will ever update. Pin the version explicitly to a release tag. |
+| [DL3008](https://github.com/hadolint/hadolint/wiki/DL3008) | Warning | Pin versions in `apt-get install`. |
+| [DL3009](https://github.com/hadolint/hadolint/wiki/DL3009) | Info | Delete the apt-get lists after installing something. |
+| [DL3010](https://github.com/hadolint/hadolint/wiki/DL3010) | Info | Use ADD for extracting archives into an image. |
+| [DL3011](https://github.com/hadolint/hadolint/wiki/DL3011) | Error | Valid UNIX ports range from 0 to 65535. |
+| [DL3012](https://github.com/hadolint/hadolint/wiki/DL3012) | Error | Multiple `HEALTHCHECK` instructions. |
+| [DL3013](https://github.com/hadolint/hadolint/wiki/DL3013) | Warning | Pin versions in pip. |
+| [DL3014](https://github.com/hadolint/hadolint/wiki/DL3014) | Warning | Use the `-y` switch. |
+| [DL3015](https://github.com/hadolint/hadolint/wiki/DL3015) | Info | Avoid additional packages by specifying `--no-install-recommends`. |
+| [DL3016](https://github.com/hadolint/hadolint/wiki/DL3016) | Warning | Pin versions in `npm`. |
+| [DL3018](https://github.com/hadolint/hadolint/wiki/DL3018) | Warning | Pin versions in `apk add`. Instead of `apk add <package>` use `apk add <package>=<version>`. |
+| [DL3019](https://github.com/hadolint/hadolint/wiki/DL3019) | Info | Use the `--no-cache` switch to avoid the need to use `--update` and remove `/var/cache/apk/*` when done installing packages. |
+| [DL3020](https://github.com/hadolint/hadolint/wiki/DL3020) | Error | Use `COPY` instead of `ADD` for files and folders. |
+| [DL3021](https://github.com/hadolint/hadolint/wiki/DL3021) | Error | `COPY` with more than 2 arguments requires the last argument to end with `/` |
+| [DL3022](https://github.com/hadolint/hadolint/wiki/DL3022) | Warning | `COPY --from` should reference a previously defined `FROM` alias |
+| [DL3023](https://github.com/hadolint/hadolint/wiki/DL3023) | Error | `COPY --from` cannot reference its own `FROM` alias |
+| [DL3024](https://github.com/hadolint/hadolint/wiki/DL3024) | Error | `FROM` aliases (stage names) must be unique |
+| [DL3025](https://github.com/hadolint/hadolint/wiki/DL3025) | Warning | Use arguments JSON notation for CMD and ENTRYPOINT arguments |
+| [DL3026](https://github.com/hadolint/hadolint/wiki/DL3026) | Error | Use only an allowed registry in the `FROM image` |
+| [DL3027](https://github.com/hadolint/hadolint/wiki/DL3027) | Warning | Do not use `apt` as it is meant to be an end-user tool, use `apt-get` or `apt-cache` instead |
+| [DL3028](https://github.com/hadolint/hadolint/wiki/DL3028) | Warning | Pin versions in gem install. Instead of `gem install <gem>` use `gem install <gem>:<version>` |
+| [DL3029](https://github.com/hadolint/hadolint/wiki/DL3029) | Warning | Do not use --platform flag with FROM. |
+| [DL3030](https://github.com/hadolint/hadolint/wiki/DL3030) | Warning | Use the `-y` switch to avoid manual input `yum install -y <package>` |
+| [DL3032](https://github.com/hadolint/hadolint/wiki/DL3032) | Warning | `yum clean all` missing after yum command. |
+| [DL3033](https://github.com/hadolint/hadolint/wiki/DL3033) | Warning | Specify version with `yum install -y <package>-<version>` |
+| [DL3034](https://github.com/hadolint/hadolint/wiki/DL3034) | Warning | Non-interactive switch missing from `zypper` command: `zypper install -y` |
+| [DL3035](https://github.com/hadolint/hadolint/wiki/DL3035) | Warning | Do not use `zypper dist-upgrade`. |
+| [DL3036](https://github.com/hadolint/hadolint/wiki/DL3036) | Warning | `zypper clean` missing after zypper use. |
+| [DL3037](https://github.com/hadolint/hadolint/wiki/DL3037) | Warning | Specify version with `zypper install -y <package>[=]<version>`. |
+| [DL3038](https://github.com/hadolint/hadolint/wiki/DL3038) | Warning | Use the `-y` switch to avoid manual input `dnf install -y <package>` |
+| [DL3040](https://github.com/hadolint/hadolint/wiki/DL3040) | Warning | `dnf clean all` missing after dnf command. |
+| [DL3041](https://github.com/hadolint/hadolint/wiki/DL3041) | Warning | Specify version with `dnf install -y <package>-<version>` |
+| [DL3042](https://github.com/hadolint/hadolint/wiki/DL3042) | Warning | Avoid cache directory with `pip install --no-cache-dir <package>`. |
+| [DL3043](https://github.com/hadolint/hadolint/wiki/DL3043) | Error | `ONBUILD`, `FROM` or `MAINTAINER` triggered from within `ONBUILD` instruction. |
+| [DL3044](https://github.com/hadolint/hadolint/wiki/DL3044) | Error | Do not refer to an environment variable within the same `ENV` statement where it is defined. |
+| [DL3045](https://github.com/hadolint/hadolint/wiki/DL3045) | Warning | `COPY` to a relative destination without `WORKDIR` set. |
+| [DL3046](https://github.com/hadolint/hadolint/wiki/DL3046) | Warning | `useradd` without flag `-l` and high UID will result in excessively large Image. |
+| [DL3047](https://github.com/hadolint/hadolint/wiki/DL3047) | Info | `wget` without flag `--progress` will result in excessively bloated build logs when downloading larger files. |
+| [DL3048](https://github.com/hadolint/hadolint/wiki/DL3048) | Style | Invalid Label Key |
+| [DL3049](https://github.com/hadolint/hadolint/wiki/DL3049) | Info | Label `<label>` is missing. |
+
+
+
+
+
+
+
+## Get started
+
+Follow our [**quick start guide**](https://www.infracost.io/docs/#quick-start) to get started 🚀
+
+Infracost also has many CI/CD integrations so you can easily post cost estimates in pull requests. This provides your team with a safety net as people can discuss costs as part of the workflow.
+
+#### Post cost estimates in pull requests
+
+
+
+#### Output of `infracost breakdown`
+
+
+
+#### `infracost diff` shows diff of monthly costs between current and planned state
+
+
+
+#### Infracost Cloud
+
+[Infracost Cloud](https://www.infracost.io/docs/infracost_cloud/get_started/) is our SaaS product that builds on top of Infracost open source and works with CI/CD integrations. It enables you to check for best practices such as using latest-generation instance types or block storage, e.g. consider switching AWS gp2 volumes to gp3, as they are more performant and cheaper. Team leads, managers and FinOps practitioners can also set up [tagging policies](https://www.infracost.io/docs/infracost_cloud/tagging_policies/) and [guardrails](https://www.infracost.io/docs/infracost_cloud/guardrails/) to help guide the team.
+
+
+
+## Supported clouds and resources
+
+Infracost supports over **1,100** Terraform resources across [AWS](https://www.infracost.io/docs/supported_resources/aws), [Azure](https://www.infracost.io/docs/supported_resources/azure) and [Google](https://www.infracost.io/docs/supported_resources/google). Other IaC tools, such as [Pulumi](https://github.com/infracost/infracost/issues/187), [AWS CloudFormation/CDK](https://github.com/infracost/infracost/issues/190) and [Azure ARM/Bicep](https://github.com/infracost/infracost/issues/812) are on our roadmap.
+
+Infracost can also estimate [usage-based resources](https://www.infracost.io/docs/usage_based_resources) such as AWS S3 or Lambda!
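+
+Usage-based estimates are driven by a usage file passed via `infracost breakdown --usage-file infracost-usage.yml`. A hedged sketch of such a file (the resource addresses and quantities below are illustrative, not real defaults):
+
+```yaml
+# illustrative infracost-usage.yml; addresses and values are placeholders
+version: 0.1
+resource_usage:
+  aws_s3_bucket.logs:
+    standard:
+      storage_gb: 500
+      monthly_tier_1_requests: 100000
+  aws_lambda_function.api:
+    monthly_requests: 2000000
+    request_duration_ms: 250
+```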
+
+## Community and contributing
+
+Join our [community Slack channel](https://www.infracost.io/community-chat) to learn more about cost estimation and Infracost, and to connect with other users and contributors. Check out the [pinned issues](https://github.com/infracost/infracost/issues) for our next community call or [our YouTube](https://www.youtube.com/playlist?list=PLZHI9QofNPJQS9Hz0P5zfsl0AC03llbMY) for previous calls.
+
+We ❤️ contributions big or small. For development details, see the [contributing](CONTRIBUTING.md) guide. For major changes, including CLI interface changes, please open an issue first to discuss what you would like to change.
+
+Thanks to all the people who have contributed, including bug reports, code, feedback and suggestions!
+
+
+
+
+
+## License
+
+[Apache License 2.0](https://choosealicense.com/licenses/apache-2.0/)
diff --git a/data/readmes/inspektor-gadget-v0470.md b/data/readmes/inspektor-gadget-v0470.md
new file mode 100644
index 0000000..2ec2ccd
--- /dev/null
+++ b/data/readmes/inspektor-gadget-v0470.md
@@ -0,0 +1,225 @@
+# Inspektor Gadget - README (v0.47.0)
+
+**Repository**: https://github.com/inspektor-gadget/inspektor-gadget
+**Version**: v0.47.0
+
+---
+
+
+
+
+
+
+
+
+[](https://github.com/inspektor-gadget/inspektor-gadget/actions/workflows/inspektor-gadget.yml)
+[](https://pkg.go.dev/github.com/inspektor-gadget/inspektor-gadget)
+[](https://goreportcard.com/report/github.com/inspektor-gadget/inspektor-gadget)
+[](https://www.bestpractices.dev/projects/7962)
+[](https://scorecard.dev/viewer/?uri=github.com/inspektor-gadget/inspektor-gadget)
+[](https://inspektor-gadget.github.io/ig-test-reports)
+[](https://inspektor-gadget.github.io/ig-benchmarks/dev/bench)
+[](https://github.com/inspektor-gadget/inspektor-gadget/releases)
+[](https://artifacthub.io/packages/search?repo=gadgets)
+[](https://artifacthub.io/packages/helm/gadget/gadget)
+[](https://kubernetes.slack.com/messages/inspektor-gadget/)
+[](https://github.com/inspektor-gadget/inspektor-gadget/blob/main/LICENSE)
+[](https://github.com/inspektor-gadget/inspektor-gadget/blob/main/LICENSE-bpf.txt)
+
+Inspektor Gadget is a set of tools and framework for data collection and system inspection on Kubernetes clusters and Linux hosts using [eBPF](https://ebpf.io/). It manages the packaging, deployment and execution of Gadgets (eBPF programs encapsulated in [OCI images](https://opencontainers.org/)) and provides mechanisms to customize and extend Gadget functionality.
+
+**Note**: Major new functionality was released in v0.31.0. Please read the [blog post for a detailed overview](https://inspektor-gadget.io/blog/2024/08/empowering-observability_the_advent_of_image_based_gadgets).
+
+## Features
+
+* Build and package eBPF programs into OCI images called Gadgets
+* Targets Kubernetes clusters and Linux hosts
+* Collect and export data to observability tools with a simple command and via declarative configuration
+* Security mechanisms to restrict and lock-down which Gadgets can be run
+* Automatic [enrichment](#what-is-enrichment): map kernel data to high-level resources like Kubernetes and container runtimes
+* Supports [WebAssembly](https://webassembly.org/) modules to post-process data and customize IG [operators](#what-is-an-operator), using any language that compiles to WASM
+* Supports many modes of operation: CLI, client-server, API, and embedding via a Golang library
+
+## Quick start
+
+The following examples use the [trace_open](https://www.inspektor-gadget.io/docs/latest/gadgets/trace_open) Gadget, which triggers whenever a file is opened on the system.
+
+### Kubernetes
+
+#### Deployed to the Cluster
+
+[krew](https://sigs.k8s.io/krew) is the recommended way to install
+`kubectl gadget`. You can follow
+[krew's quickstart](https://krew.sigs.k8s.io/docs/user-guide/quickstart/)
+to install it, and then install `kubectl gadget` by executing the following
+commands.
+
+```bash
+kubectl krew install gadget
+kubectl gadget deploy
+kubectl gadget run trace_open:latest
+```
+
+Check [Installing on Kubernetes](https://www.inspektor-gadget.io/docs/latest/reference/install-kubernetes) to learn more about different options.
+
+`kubectl-gadget` is also packaged for the following distributions:
+
+[](https://repology.org/project/kubectl-gadget/versions)
+
+### Kubectl Node Debug
+
+We can use [kubectl node debug](https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/) to run `ig` on a Kubernetes node:
+
+```bash
+kubectl debug --profile=sysadmin node/minikube-docker -ti --image=ghcr.io/inspektor-gadget/ig:latest -- ig run trace_open:latest
+```
+
+For more information on how to use `ig` without installation on Kubernetes, please refer to the [ig documentation](https://www.inspektor-gadget.io/docs/latest/reference/ig#using-ig-with-kubectl-debug-node).
+
+### Linux
+
+#### Install Locally
+
+Install the `ig` binary locally on Linux and run a Gadget:
+
+```bash
+IG_ARCH=amd64
+IG_VERSION=$(curl -s https://api.github.com/repos/inspektor-gadget/inspektor-gadget/releases/latest | jq -r .tag_name)
+
+curl -sL https://github.com/inspektor-gadget/inspektor-gadget/releases/download/${IG_VERSION}/ig-linux-${IG_ARCH}-${IG_VERSION}.tar.gz | sudo tar -C /usr/local/bin -xzf - ig
+
+sudo ig run trace_open:latest
+```
+
+Check [Installing on Linux](https://www.inspektor-gadget.io/docs/latest/reference/install-linux) to learn more.
+
+`ig` is also packaged for the following distributions:
+
+[](https://repology.org/project/inspektor-gadget/versions)
+
+#### Run in a Container
+
+```bash
+docker run -ti --rm --privileged -v /:/host --pid=host ghcr.io/inspektor-gadget/ig:latest run trace_open:latest
+```
+
+For more information on how to use `ig` without installation on Linux, please check [Using ig in a container](https://www.inspektor-gadget.io/docs/latest/reference/ig#using-ig-in-a-container).
+
+### macOS or Windows
+
+You can control an `ig` instance running on Linux from other operating systems by using the `gadgetctl` binary.
+
+Run the following on the Linux machine to make `ig` available to clients.
+
+```bash
+sudo ig daemon --host=tcp://0.0.0.0:1234
+```
+
+Download the `gadgetctl` tools for macOS
+([amd64](https://github.com/inspektor-gadget/inspektor-gadget/releases/download/v0.30.0/gadgetctl-darwin-amd64-v0.30.0.tar.gz),
+[arm64](https://github.com/inspektor-gadget/inspektor-gadget/releases/download/v0.30.0/gadgetctl-darwin-arm64-v0.30.0.tar.gz)) or [Windows](https://github.com/inspektor-gadget/inspektor-gadget/releases/download/v0.30.0/gadgetctl-windows-amd64-v0.30.0.tar.gz) and run the Gadget, specifying the IP address of the Linux machine:
+
+
+```bash
+gadgetctl run trace_open:latest --remote-address=tcp://$IP:1234
+```
+
+***The above demonstrates the simplest command. To learn how to filter, export, etc., please consult the documentation for the [run](https://www.inspektor-gadget.io/docs/latest/reference/run) command.***
+
+## Core concepts
+
+### What is a Gadget?
+
+Gadgets are the central component in the Inspektor Gadget framework. A Gadget is an [OCI image](https://opencontainers.org/) that includes one or more eBPF programs, a metadata YAML file and, optionally, WASM modules for post-processing.
+As OCI images, they can be stored in a container registry (compliant with the OCI specifications), making them easy to distribute and share.
+Gadgets are built using the [`ig image build`](https://www.inspektor-gadget.io/docs/latest/gadget-devel/building) command.
+
+You can find a growing collection of Gadgets on [Artifact HUB](https://artifacthub.io/packages/search?kind=22). This includes both in-tree Gadgets (hosted in this git repository in the [/gadgets](./gadgets/README.md) directory) and third-party Gadgets.
+
+See the [Gadget documentation](https://www.inspektor-gadget.io/docs/latest/gadgets/) for more information.
+
+### What is enrichment?
+
+The data that eBPF collects from the kernel includes no knowledge about Kubernetes, container
+runtimes or any other high-level user-space concepts. In order to relate this data to these high-level
+concepts and make the eBPF data immediately more understandable, Inspektor Gadget automatically
+uses kernel primitives such as mount namespaces, pids or similar to infer which high-level
+concepts they relate to; Kubernetes pods, container names, DNS names, etc. The process of augmenting
+the eBPF data with these high-level concepts is called *enrichment*.
+
+Enrichment flows the other way, too. Inspektor Gadget enables users to do high-performance
+in-kernel filtering by only referencing high-level concepts such as Kubernetes pods, container
+names, etc.; automatically translating these to the corresponding low-level kernel resources.
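
As a rough illustration of the idea (not Inspektor Gadget's actual implementation, which is written in Go and driven by eBPF), the sketch below reads a process's mount-namespace ID from procfs, one of the kernel primitives mentioned above, and uses a hypothetical lookup table to attach a container name to a raw event:

```python
import os
import re

def mount_ns_id(pid: int) -> int:
    """Read a process's mount-namespace inode from procfs.

    This is the kind of low-level identifier raw kernel events carry;
    the symlink target looks like "mnt:[4026531841]".
    """
    link = os.readlink(f"/proc/{pid}/ns/mnt")
    return int(re.search(r"\[(\d+)\]", link).group(1))

# Hypothetical enrichment table: Inspektor Gadget builds this by watching
# the container runtime; here it is hard-coded for illustration.
ns_to_container = {mount_ns_id(os.getpid()): "demo-container"}

def enrich(event: dict) -> dict:
    """Attach a high-level container name to a raw kernel event."""
    event["container"] = ns_to_container.get(event["mntns_id"], "<unknown>")
    return event

raw = {"comm": "cat", "mntns_id": mount_ns_id(os.getpid())}
print(enrich(raw))
```

The reverse direction works the same way: resolving a container name back to its namespace ID before loading a filter into the kernel.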
+
+### What is an operator?
+
+In Inspektor Gadget, an operator is any part of the framework where an action is taken. Some operators work under the hood (e.g., fetching and loading Gadgets) while others are user-exposed (enrichment, filtering, export, etc.) and can be reordered and overridden.
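
As a hedged sketch of this idea (the names are invented for illustration, not IG's API), an operator pipeline can be modeled as an ordered list of functions applied to a stream of events, which is what makes reordering and overriding possible:

```python
from typing import Callable, List

Event = dict
Operator = Callable[[List[Event]], List[Event]]

def filter_comm(comm: str) -> Operator:
    """User-exposed filtering operator: keep events from one process name."""
    return lambda events: [e for e in events if e.get("comm") == comm]

def add_source(name: str) -> Operator:
    """Enrichment-style operator: annotate every event with its source."""
    def op(events: List[Event]) -> List[Event]:
        return [{**e, "source": name} for e in events]
    return op

def run_pipeline(operators: List[Operator], events: List[Event]) -> List[Event]:
    for op in operators:  # operators run in order; reordering them changes behavior
        events = op(events)
    return events

events = [{"comm": "cat"}, {"comm": "bash"}]
pipeline = [add_source("trace_open"), filter_comm("cat")]
print(run_pipeline(pipeline, events))
# [{'comm': 'cat', 'source': 'trace_open'}]
```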
+
+### Further learning
+
+Use the [project documentation](https://www.inspektor-gadget.io/docs/latest/) to learn more about:
+
+* [Reference](https://www.inspektor-gadget.io/docs/latest/reference)
+* [Gadgets](https://www.inspektor-gadget.io/docs/latest/gadgets)
+* [Contributing](https://www.inspektor-gadget.io/docs/latest/devel/contributing/)
+
+## Kernel requirements
+
+Kernel requirements are largely determined by the specific eBPF functionality a Gadget makes use of.
+The eBPF functionality available to Gadgets depends on the version and configuration of the kernel running
+on the node/machine where the Gadget is being loaded. Gadgets developed by the Inspektor
+Gadget project require at least Linux 5.10 with [BTF](https://www.kernel.org/doc/html/latest/bpf/btf.html) enabled.
+
+Refer to the [documentation for a specific Gadget](https://www.inspektor-gadget.io/docs/latest/gadgets) for any notes regarding requirements.
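
A quick local check of these two requirements (kernel >= 5.10 and BTF enabled) might look like the following sketch; checking for `/sys/kernel/btf/vmlinux` is the conventional way to detect that the kernel exports BTF type information:

```python
import os
import platform

def parse_kernel_release(release: str) -> tuple:
    """Extract (major, minor) from a release string like '6.5.0-44-generic'."""
    major, minor = release.split("-")[0].split(".")[:2]
    return int(major), int(minor)

def meets_gadget_requirements(min_version=(5, 10)) -> bool:
    """Check the running kernel version and the presence of BTF type info."""
    version_ok = parse_kernel_release(platform.release()) >= min_version
    btf_ok = os.path.exists("/sys/kernel/btf/vmlinux")  # kernel built with BTF
    return version_ok and btf_ok

print(parse_kernel_release(platform.release()))
print(meets_gadget_requirements())
```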
+
+## Code examples
+
+The [examples](./examples/) folder contains examples showing the usage
+of the Golang packages provided by Inspektor Gadget. These examples are
+designed for developers who want to use the Golang packages exposed by
+Inspektor Gadget directly. End users do not need this and can use
+`kubectl-gadget` or `ig` directly.
+
+## Security features
+
+Inspektor Gadget offers security features, which are described in the following documents:
+
+* [Verify assets](https://inspektor-gadget.io/docs/latest/reference/verify-assets): Covers everything with regard to signing and verifying of images and release assets. It also showcases the different SBOMs we generate like the [`ig`](https://github.com/inspektor-gadget/inspektor-gadget/releases/download/v0.38.0/ig-linux-amd64-v0.38.0.bom.json) one.
+* [Restricting Gadgets](https://inspektor-gadget.io/docs/latest/reference/restricting-gadgets): Details how users can restrict which gadgets can be run based on different filters.
+
+## Contributing
+
+Contributions are welcome, see [CONTRIBUTING](docs/devel/contributing.md).
+
+## Community Meeting
+
+We hold community meetings regularly. Please check our [calendar](https://zoom-lfx.platform.linuxfoundation.org/meetings/inspektorgadget?view=month)
+for the full schedule of upcoming meetings, and add any topic you want to discuss to our [meeting
+notes](https://docs.google.com/document/d/1cbPYvYTsdRXd41PEDcwC89IZbcA8WneNt34oiu5s9VA/edit)
+document.
+
+## Slack
+
+Join the discussions on the [`#inspektor-gadget`](https://kubernetes.slack.com/messages/inspektor-gadget/) channel in the Kubernetes Slack.
+
+## Recent Videos
+
+- Presentation: [From Kernel To Cloud: Building a Complete Open Source Observability Pipeline, OpenSearch Con NA - September 2025](https://opensearchconna2025.sched.com/event/25Gp0/from-kernel-to-cloud-building-a-complete-open-source-observability-pipeline-anirudha-jadhav-shenoy-pratik-gurudatt-amazon-web-services) ([video](https://www.youtube.com/watch?v=oWZnSS7mtK4), [slides](https://static.sched.com/hosted_files/opensearchconna2025/a8/From%20Kernel%20To%20Cloud_%20Building%20a%20Complete%20Open%20Source%20Observability%20Pipeline_final.pdf))
+- Talk: [Demystifying DNS: A Guide to Understanding and Debugging Request Flows in Kubernetes Clusters, Container Days Hamburg - September 2024](https://www.youtube.com/watch?v=KQpZg_NqbZw)
+- Talk: [eBPF Data Collection for Everyone – empowering the community to obtain Linux insights using Inspektor Gadget](https://www.youtube.com/watch?v=tkKXdaqji9c)
+- Talk: [Collecting Low-Level Metrics with eBPF, KubeCon + CloudNativeCon North America 2023](https://kccncna2023.sched.com/event/a70c0a016973beb5705f5f72fa58f622) ([video](https://www.youtube.com/watch?v=_ft3iTw5uv8), [slides](https://static.sched.com/hosted_files/kccncna2023/91/Collecting%20Low-Level%20Metrics%20with%20eBPF.pdf))
+- Interview: [Kubernetes Security and Troubleshooting Multitool: Inspektor Gadget](https://www.youtube.com/watch?v=0GDYDYhPPRw)
+- Interview: [Enlightning - Go Go Gadget! Inspektor Gadget and Observability](https://www.youtube.com/watch?v=Io6vqHitTzQ)
+- Presentation: [Misc - Feat. Kepler, Inspektor Gadget, k8sgpt, Perses, and Pixie (You Choose!, Ch. 04, Ep. 09)](https://youtu.be/OZE1hoT9-gs?t=1267)
+
+## Thanks
+
+* [BPF Compiler Collection (BCC)](https://github.com/iovisor/bcc): some of the gadgets are based on BCC tools.
+* [kubectl-trace](https://github.com/iovisor/kubectl-trace): the Inspektor Gadget architecture was inspired by kubectl-trace.
+* [cilium/ebpf](https://github.com/cilium/ebpf): the gadget tracer manager and some other gadgets use the cilium/ebpf library.
+
+## License
+
+The Inspektor Gadget user-space components are licensed under the
+[Apache License, Version 2.0](LICENSE). The BPF code templates are licensed
+under the [General Public License, Version 2.0, with the Linux-syscall-note](LICENSE-bpf.txt).
diff --git a/data/readmes/iotdb-v135.md b/data/readmes/iotdb-v135.md
new file mode 100644
index 0000000..0249074
--- /dev/null
+++ b/data/readmes/iotdb-v135.md
@@ -0,0 +1,529 @@
+# IoTDB - README (v1.3.5)
+
+**Repository**: https://github.com/apache/iotdb
+**Version**: v1.3.5
+
+---
+
+
+[English](./README.md) | [中文](./README_ZH.md)
+
+# IoTDB
+[](https://github.com/apache/iotdb/actions/workflows/unit-test.yml)
+[](https://codecov.io/github/apache/iotdb)
+[](https://github.com/apache/iotdb/releases)
+[](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+
+
+
+[](https://iotdb.apache.org/)
+[](http://search.maven.org/#search|gav|1|g:"org.apache.iotdb")
+[](https://gitpod.io/#https://github.com/apache/iotdb)
+[](https://join.slack.com/t/apacheiotdb/shared_invite/zt-qvso1nj8-7715TpySZtZqmyG5qXQwpg)
+
+# Overview
+
+IoTDB (Internet of Things Database) is a data management system for time series data, which provides users with specific services including data collection, storage and analysis. Due to its lightweight architecture, high performance and rich feature set, together with its seamless integration with the Hadoop and Spark ecosystems, IoTDB meets the requirements of massive dataset storage, high-throughput data ingestion and complex data analysis in the industrial IoT field.
+
+[Click for More Information](https://www.timecho.com/archives/shi-xu-shu-ju-ku-iotdb-gong-neng-xiang-jie-yu-xing-ye-ying-yong)
+
+IoTDB depends on [TsFile](https://github.com/apache/tsfile) which is a columnar storage file format designed for time series data. The branch `iotdb` of TsFile project is used to deploy SNAPSHOT version for IoTDB project.
+
+# Main Features
+
+The main features of IoTDB are as follows:
+
+1. Flexible deployment strategy. IoTDB provides users with a one-click installation tool on either the cloud platform or the terminal devices, and a data synchronization tool bridging the data on the cloud platform and on terminals.
+2. Low hardware cost. IoTDB can reach a high compression ratio for disk storage.
+3. Efficient directory structure. IoTDB supports efficient organization for complex time series data structures from intelligent networking devices, organization for time series data from devices of the same type, and a fuzzy searching strategy for massive and complex directories of time series data.
+4. High-throughput read and write. IoTDB supports data access from millions of low-power, strongly connected devices, and high-speed data read and write for the intelligent networking devices and mixed devices mentioned above.
+5. Rich query semantics. IoTDB supports time alignment for time series data across devices and measurements, computation in the time series field (frequency domain transformation), and rich aggregation functions in the time dimension.
+6. Easy to get started. IoTDB supports a SQL-like language, the JDBC standard API, and easy-to-use import/export tools.
+7. Seamless integration with the state-of-the-practice open source ecosystem. IoTDB supports analysis ecosystems such as Hadoop and Spark, as well as visualization tools such as Grafana.
+
+For the latest information about IoTDB, please visit [IoTDB official website](https://iotdb.apache.org/). If you encounter any problems or identify any bugs while using IoTDB, please report an issue in [Jira](https://issues.apache.org/jira/projects/IOTDB/issues).
+
+
+
+## Outline
+
+- [IoTDB](#iotdb)
+- [Overview](#overview)
+- [Main Features](#main-features)
+ - [Outline](#outline)
+- [Quick Start](#quick-start)
+ - [Prerequisites](#prerequisites)
+ - [Installation](#installation)
+ - [Build from source](#build-from-source)
+ - [Configurations](#configurations)
+ - [Start](#start)
+ - [Start IoTDB](#start-iotdb)
+ - [Use IoTDB](#use-iotdb)
+ - [Use Cli](#use-cli)
+ - [Basic commands for IoTDB](#basic-commands-for-iotdb)
+ - [Stop IoTDB](#stop-iotdb)
+ - [Only build server](#only-build-server)
+ - [Only build cli](#only-build-cli)
+ - [Usage of CSV Import and Export Tool](#usage-of-csv-import-and-export-tool)
+
+
+
+# Quick Start
+
+This short guide will walk you through the basic process of using IoTDB. For a more detailed introduction, please visit our website's [User Guide](https://iotdb.apache.org/UserGuide/latest/QuickStart/QuickStart.html).
+
+## Prerequisites
+
+To use IoTDB, you need to have:
+
+1. Java >= 1.8 (1.8, 11 to 17 are verified. Please make sure the environment path has been set accordingly).
+2. Maven >= 3.6 (If you want to compile and install IoTDB from source code).
+3. Set the maximum number of open files to 65535 to avoid the "too many open files" error.
+4. (Optional) Set somaxconn to 65535 to avoid the "connection reset" error when the system is under high load.
+ ```
+ # Linux
+ > sudo sysctl -w net.core.somaxconn=65535
+
+ # FreeBSD or Darwin
+ > sudo sysctl -w kern.ipc.somaxconn=65535
+ ```
+### Linux
+
+(This guide is based on an installation of Ubuntu 22.04.)
+
+#### Git
+
+Make sure `Git` is installed, if it's missing, simply install it via:
+
+ sudo apt install git
+
+#### Java
+
+Make sure `Java` is installed, if it's missing, simply install it via:
+
+ sudo apt install default-jdk
+
+#### Flex
+
+ sudo apt install flex
+
+#### Bison
+
+ sudo apt install bison
+
+#### Boost
+
+ sudo apt install libboost-all-dev
+
+#### OpenSSL header files
+
+Usually OpenSSL is already installed; however, the header files we need for compiling are missing.
+So ensure these are installed:
+
+ sudo apt install libssl-dev
+
+### Mac OS
+
+#### Git
+
+First ensure `git` works.
+
+Usually on a new Mac, as soon as you simply type `git` in a `Terminal` window, a popup will come up and ask if you want to finish installing the Mac developer tools.
+Just say yes.
+As soon as this is finished, you are free to use `git`.
+
+#### Homebrew
+
+Next, install `Homebrew` if it hasn't been installed yet, as we are going to install everything else using it.
+
+ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+
+#### Java
+
+As soon as that's done install `Java`, if this hasn't been installed yet:
+
+ brew install java
+
+Depending on your version of Homebrew and the type of processor in your device, it will tell you to do one of the following.
+
+Mainly on the Intel-based models:
+
+ sudo ln -sfn /usr/local/opt/openjdk/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk.jdk
+
+Mainly on the ARM-based models:
+
+ sudo ln -sfn /opt/homebrew/opt/openjdk/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk.jdk
+
+#### CPP Prerequisites
+
+Building `Thrift` requires us to add two more dependencies to the picture.
+
+This however is only needed when enabling the `with-cpp` profile:
+
+ brew install boost
+ brew install bison
+ brew install openssl
+
+### Windows
+
+#### Chocolatey
+
+First, install `Chocolatey` if it hasn't been installed yet, as we are going to install everything using it.
+
+https://chocolatey.org/install
+
+#### Git
+
+ choco install git.install
+
+#### Java
+
+ choco install openjdk
+
+#### Visual Studio 19 2022
+
+ choco install visualstudio2022community
+ choco install visualstudio2022buildtools
+ choco install visualstudio2022-workload-nativedesktop
+
+#### Flex / Bison
+
+ choco install winflexbison
+
+#### Boost
+
+ choco install boost-msvc-14.2
+
+#### OpenSSL
+
+ choco install openssl
+
+## Installation
+
+IoTDB provides three installation methods; you can refer to the following suggestions and choose the one that fits you best:
+
+* Installation from source code. If you need to modify the code yourself, you can use this method.
+* Installation from binary files. Download the binary files from the official website. This is the recommended method, in which you will get an out-of-the-box binary release package.
+* Using Docker: the path to the Dockerfile is [here](https://github.com/apache/iotdb/tree/master/docker/src/main).
+
+
+Here in the Quick Start, we give a brief introduction of using source code to install IoTDB. For further information, please refer to [User Guide](https://iotdb.apache.org/UserGuide/latest/QuickStart/QuickStart.html).
+
+## Build from source
+
+### Prepare Thrift compiler
+
+Skip this chapter if you are using Windows.
+
+As we use Thrift for our RPC module (communication and
+protocol definition), we involve Thrift during the compilation, so Thrift compiler 0.13.0 (or
+higher) is required to generate Thrift Java code. Thrift officially provides binary compiler for
+Windows, but unfortunately, they do not provide that for Unix OSs.
+
+If you have permission to install new software, use `apt install` or `yum install` or `brew install`
+to install the Thrift compiler. (If you already have installed the thrift compiler, skip this step.)
+Then, you may add the following parameter
+when running Maven: `-Dthrift.download-url=http://apache.org/licenses/LICENSE-2.0.txt -Dthrift.exec.absolute.path=`.
+
+If not, you have to compile the Thrift compiler yourself, which requires installing the Boost library first.
+Therefore, we compiled a Unix compiler ourselves and put it on GitHub; with the help of a
+Maven plugin, it is downloaded automatically during compilation.
+This compiler works fine with gcc 8 or later on Ubuntu, MacOS and CentOS, but earlier versions
+and other OSs are not guaranteed.
+
+If you can not download the thrift compiler automatically because of a network problem, you can download
+it by yourself, and then either:
+rename your thrift file to `{project_root}\thrift\target\tools\thrift_0.12.0_0.13.0_linux.exe`;
+or, add Maven commands:
+`-Dthrift.download-url=http://apache.org/licenses/LICENSE-2.0.txt -Dthrift.exec.absolute.path=`.
+
+### Compile IoTDB
+
+You can download the source code from:
+
+```
+git clone https://github.com/apache/iotdb.git
+```
+
+The default dev branch is the master branch. If you want to use a released version x.x.x:
+
+```
+git checkout vx.x.x
+```
+
+Or checkout to the branch of a big version, e.g., the branch of 1.0 is rel/1.0.
+
+```
+git checkout rel/x.x
+```
+
+### Build IoTDB from source
+
+Under the root path of iotdb:
+
+```
+> mvn clean package -pl distribution -am -DskipTests
+```
+
+After being built, the IoTDB distribution is located at the folder: "distribution/target".
+
+
+### Only build cli
+
+Under the iotdb/iotdb-client path:
+
+```
+> mvn clean package -pl cli -am -DskipTests
+```
+
+After being built, the IoTDB cli is located at the folder "cli/target".
+
+### Build Others
+
+Use `-P with-cpp` for compiling the cpp client. (For more details, read client-cpp's Readme file.)
+
+**NOTE: Directories "`thrift/target/generated-sources/thrift`", "`thrift-sync/target/generated-sources/thrift`",
+"`thrift-cluster/target/generated-sources/thrift`", "`thrift-influxdb/target/generated-sources/thrift`"
+and "`antlr/target/generated-sources/antlr4`" need to be added to sources roots to avoid compilation errors in the IDE.**
+
+**In IDEA, you just need to right click on the root project name and choose "`Maven->Reload Project`" after
+you run `mvn package` successfully.**
+
+### Configurations
+
+Configuration files are under the "conf" folder.
+
+ * environment config module (`datanode-env.bat`, `datanode-env.sh`)
+ * system config module (`iotdb-system.properties`)
+ * log config module (`logback.xml`)
+
+For more information, please see [Config Manual](https://iotdb.apache.org/UserGuide/latest/Reference/DataNode-Config-Manual.html).
+
+## Start
+
+You can go through the following steps to test the installation. If there is no error returned after execution, the installation is completed.
+
+### Start IoTDB
+
+Users can start a standalone IoTDB instance (1 ConfigNode and 1 DataNode, i.e. 1C1D) with the start-standalone script under the sbin folder.
+
+```
+# Unix/OS X
+> sbin/start-standalone.sh
+
+# Windows
+> sbin\start-standalone.bat
+```
+
+### Use IoTDB
+
+#### Use Cli
+
+IoTDB offers different ways to interact with the server; here we introduce the basic steps of using the Cli tool to insert and query data.
+
+After installing IoTDB, there is a default user 'root' with the default password 'root'. Users can use this
+default user to log in to the Cli and use IoTDB. The start-up script of the Cli is the start-cli script in the sbin folder. When executing the script, the user should specify
+the IP, PORT, USER_NAME and PASSWORD. The default parameters are "-h 127.0.0.1 -p 6667 -u root -pw root".
+
+Here is the command for starting the Cli:
+
+```
+# Unix/OS X
+> sbin/start-cli.sh -h 127.0.0.1 -p 6667 -u root -pw root
+
+# Windows
+> sbin\start-cli.bat -h 127.0.0.1 -p 6667 -u root -pw root
+```
+
+The command line cli is interactive, so you should see the welcome logo and statements if everything is ready:
+
+```
+ _____ _________ ______ ______
+|_ _| | _ _ ||_ _ `.|_ _ \
+ | | .--.|_/ | | \_| | | `. \ | |_) |
+ | | / .'`\ \ | | | | | | | __'.
+ _| |_| \__. | _| |_ _| |_.' /_| |__) |
+|_____|'.__.' |_____| |______.'|_______/ version x.x.x
+
+
+IoTDB> login successfully
+IoTDB>
+```
+
+#### Basic commands for IoTDB
+
+Now, let us introduce the way of creating timeseries, inserting data and querying data.
+
+The data in IoTDB is organized as timeseries. Each timeseries includes multiple data–time pairs, and is owned by a database. Before defining a timeseries, we should define a database using CREATE DATABASE first, and here is an example:
+
+```
+IoTDB> CREATE DATABASE root.ln
+```
+
+We can also use SHOW DATABASES to check the database being created:
+
+```
+IoTDB> SHOW DATABASES
++-------------+
+| Database|
++-------------+
+| root.ln|
++-------------+
+Total line number = 1
+```
+
+After the database is set, we can use CREATE TIMESERIES to create a new timeseries. When creating a timeseries, we should define its data type and the encoding scheme. Here we create two timeseries:
+
+```
+IoTDB> CREATE TIMESERIES root.ln.wf01.wt01.status WITH DATATYPE=BOOLEAN, ENCODING=PLAIN
+IoTDB> CREATE TIMESERIES root.ln.wf01.wt01.temperature WITH DATATYPE=FLOAT, ENCODING=RLE
+```
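
To make the path convention used above concrete, here is a small sketch (not IoTDB code; the database depth of two path nodes, as in `root.ln`, is an assumption taken from the examples in this guide) that splits a full timeseries path into database, device, and measurement:

```python
def split_timeseries(path: str, db_levels: int = 2):
    """Split a full IoTDB-style path into (database, device, measurement).

    db_levels=2 assumes the database is the first two nodes (e.g. root.ln),
    matching the examples in this guide.
    """
    parts = path.split(".")
    database = ".".join(parts[:db_levels])
    device = ".".join(parts[:-1])   # everything but the last node
    measurement = parts[-1]         # e.g. "status" or "temperature"
    return database, device, measurement

print(split_timeseries("root.ln.wf01.wt01.status"))
# ('root.ln', 'root.ln.wf01.wt01', 'status')
```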
+
+In order to query a specific timeseries, we can use `SHOW TIMESERIES <Path>`, where `<Path>` represents the location of the timeseries. The default value is null, which queries all the timeseries in the system (the same as using `SHOW TIMESERIES root`). Here are some examples:
+
+1. Querying all timeseries in the system:
+
+```
+IoTDB> SHOW TIMESERIES
++-----------------------------+-----+-------------+--------+--------+-----------+----+----------+
+| Timeseries|Alias|Database|DataType|Encoding|Compression|Tags|Attributes|
++-----------------------------+-----+-------------+--------+--------+-----------+----+----------+
+|root.ln.wf01.wt01.temperature| null| root.ln| FLOAT| RLE| SNAPPY|null| null|
+| root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY|null| null|
++-----------------------------+-----+-------------+--------+--------+-----------+----+----------+
+Total line number = 2
+```
+
+2. Querying a specific timeseries (root.ln.wf01.wt01.status):
+
+```
+IoTDB> SHOW TIMESERIES root.ln.wf01.wt01.status
++------------------------+-----+-------------+--------+--------+-----------+----+----------+
+| timeseries|alias|database|dataType|encoding|compression|tags|attributes|
++------------------------+-----+-------------+--------+--------+-----------+----+----------+
+|root.ln.wf01.wt01.status| null| root.ln| BOOLEAN| PLAIN| SNAPPY|null| null|
++------------------------+-----+-------------+--------+--------+-----------+----+----------+
+Total line number = 1
+```
+
+Inserting timeseries data is a basic operation of IoTDB; you can use the `INSERT` command to do this. Before insertion, you should specify the timestamp and the suffix path name:
+
+```
+IoTDB> INSERT INTO root.ln.wf01.wt01(timestamp,status) values(100,true);
+IoTDB> INSERT INTO root.ln.wf01.wt01(timestamp,status,temperature) values(200,false,20.71)
+```
+
+The data that you have just inserted will be displayed as follows:
+
+```
+IoTDB> SELECT status FROM root.ln.wf01.wt01
++------------------------+------------------------+
+| Time|root.ln.wf01.wt01.status|
++------------------------+------------------------+
+|1970-01-01T00:00:00.100Z| true|
+|1970-01-01T00:00:00.200Z| false|
++------------------------+------------------------+
+Total line number = 2
+```
+
+You can also query several timeseries data using one SQL statement:
+
+```
+IoTDB> SELECT * FROM root.ln.wf01.wt01
++------------------------+-----------------------------+------------------------+
+| Time|root.ln.wf01.wt01.temperature|root.ln.wf01.wt01.status|
++------------------------+-----------------------------+------------------------+
+|1970-01-01T00:00:00.100Z| null| true|
+|1970-01-01T00:00:00.200Z| 20.71| false|
++------------------------+-----------------------------+------------------------+
+Total line number = 2
+```
+
+To change the time zone in Cli, you can use the following SQL:
+
+```
+IoTDB> SET time_zone=+08:00
+Time zone has set to +08:00
+IoTDB> SHOW time_zone
+Current time zone: Asia/Shanghai
+```
+
+Then the query result will be displayed using the new time zone.
+
+```
+IoTDB> SELECT * FROM root.ln.wf01.wt01
++-----------------------------+-----------------------------+------------------------+
+| Time|root.ln.wf01.wt01.temperature|root.ln.wf01.wt01.status|
++-----------------------------+-----------------------------+------------------------+
+|1970-01-01T08:00:00.100+08:00| null| true|
+|1970-01-01T08:00:00.200+08:00| 20.71| false|
++-----------------------------+-----------------------------+------------------------+
+Total line number = 2
+```
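
The shift shown above is purely a matter of presentation: the stored epoch value is unchanged, and only its rendering follows the session time zone. A quick sketch with Python's standard library reproduces the two renderings of the same 100 ms epoch timestamp:

```python
from datetime import datetime, timedelta, timezone

epoch_ms = 100  # the timestamp inserted earlier in this guide

utc = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
cst = utc.astimezone(timezone(timedelta(hours=8)))  # session time zone +08:00

print(utc.isoformat(timespec="milliseconds"))  # 1970-01-01T00:00:00.100+00:00
print(cst.isoformat(timespec="milliseconds"))  # 1970-01-01T08:00:00.100+08:00
```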
+
+The commands to exit the Cli are:
+
+```
+IoTDB> quit
+or
+IoTDB> exit
+```
+
+For more information about the commands supported by IoTDB SQL, please see [User Guide](https://iotdb.apache.org/UserGuide/latest/QuickStart/QuickStart.html).
+
+### Stop IoTDB
+
+The server can be stopped with "ctrl-C" or the following script:
+
+```
+# Unix/OS X
+> sbin/stop-standalone.sh
+
+# Windows
+> sbin\stop-standalone.bat
+```
+
+# The use of Data Import and Export Tool
+
+See [The use of the Data Import Tool](https://iotdb.apache.org/UserGuide/latest/Tools-System/Data-Import-Tool.html).
+See [The use of the Data Export Tool](https://iotdb.apache.org/UserGuide/latest/Tools-System/Data-Export-Tool.html).
+
+
+# Frequent Questions for Compiling
+
+See [Frequent Questions when Compiling the Source Code](https://iotdb.apache.org/Community/Development-Guide.html#frequently-asked-questions).
+
+# Contact Us
+
+### QQ Group
+
+* Apache IoTDB User Group: 659990460
+
+### Wechat Group
+
+* Add `apache_iotdb` as a friend, and then we'll invite you to the group.
+
+### Slack
+
+* [Slack channel](https://join.slack.com/t/apacheiotdb/shared_invite/zt-qvso1nj8-7715TpySZtZqmyG5qXQwpg)
+
+See [Join the community](https://github.com/apache/iotdb/issues/1995) for more!
diff --git a/data/readmes/iotdb-web-workbench-v0133.md b/data/readmes/iotdb-web-workbench-v0133.md
new file mode 100644
index 0000000..7921c61
--- /dev/null
+++ b/data/readmes/iotdb-web-workbench-v0133.md
@@ -0,0 +1,67 @@
+# IoTDB-Web-Workbench - README (v0.13.3)
+
+**Repository**: https://github.com/apache/iotdb-web-workbench
+**Version**: v0.13.3
+
+---
+
+
+
+[](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+
+# IoTDB-Workbench
+
+IoTDB-Workbench is a visual management tool for IoTDB. It lets you create, read, update, and delete IoTDB data, manage permissions, and more, lowering the cost of using and learning IoTDB.
+We believe IoTDB is one of the best time-series databases. We will keep working to promote the adoption and development of IoTDB and to contribute to the growth of home-grown open-source capabilities and the open-source ecosystem. You are welcome to join the development and maintenance of IoTDB-Workbench; we look forward to having you:
+
+
+
+## Running the Backend Service
+
+[Backend service design and run instructions](backend/README.md)
+
+## Running the Frontend Service
+
+[Frontend service run instructions](frontend/README.md)
+
+## Running Casdoor
+
+[Casdoor single sign-on setup instructions](casdoor.md)
+
+
+## Docker
+
+1. Build the images: package the frontend and backend to produce the `dist` and `target` directories respectively, then run the following commands in the corresponding directories (this step can be skipped when the images below are already available on Docker Hub):
+
+```shell
+cd backend
+docker build -t apache/iotdb-web-workbench:0.13.0-backend .
+cd frontend
+docker build -t apache/iotdb-web-workbench:0.13.0-frontend .
+```
+
+2. Copy the `iotdb.db` file from the `backend/resources/sqlite` directory to the path you want to mount, e.g. `/data/iotdb.db`.
+
+3. From the project root, run:
+
+`docker-compose up -d`
+
+> Note: the `volumes` mount path in docker-compose.yml must be the path chosen in step 2, and `PORTS` must match your backend port.
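+
+The steps above can be sketched as a single shell session (the database filename and the `/data` host path are illustrative; adjust them to your environment):
+
+```shell
+# Step 2: copy the bundled SQLite database to the host path that will be mounted
+cp backend/resources/sqlite/iotdb.db /data/iotdb.db
+# Step 3: start both services from the project root
+docker-compose up -d
+```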
\ No newline at end of file
diff --git a/data/readmes/ipfs-v0390.md b/data/readmes/ipfs-v0390.md
new file mode 100644
index 0000000..0104af3
--- /dev/null
+++ b/data/readmes/ipfs-v0390.md
@@ -0,0 +1,521 @@
+# IPFS - README (v0.39.0)
+
+**Repository**: https://github.com/ipfs/kubo
+**Version**: v0.39.0
+
+---
+
+
+
+
+
+ Kubo: IPFS Implementation in GO
+
+
+
+
The first implementation of IPFS.
+
+
+
+
+
+
+
+
+
+
+
+## What is Kubo?
+
+Kubo was the first IPFS implementation and is the most widely used one today. It implements the *InterPlanetary File System*, the standard for content addressing on the Web, interoperable with HTTP, and is powered by future-proof data models and libp2p for network communication. Kubo is written in Go.
+
+Feature set:
+- Runs an IPFS node as a network service that participates in the LAN and WAN DHT
+- Native support for UnixFS (most popular way to represent files and directories on IPFS)
+- [HTTP Gateway](https://specs.ipfs.tech/http-gateways/) (`/ipfs` and `/ipns`) functionality for trusted and [trustless](https://docs.ipfs.tech/reference/http/gateway/#trustless-verifiable-retrieval) content retrieval
+- [HTTP Routing V1](https://specs.ipfs.tech/routing/http-routing-v1/) (`/routing/v1`) client and server implementation for [delegated routing](./docs/delegated-routing.md) lookups
+- [HTTP Kubo RPC API](https://docs.ipfs.tech/reference/kubo/rpc/) (`/api/v0`) to access and control the daemon
+- [Command Line Interface](https://docs.ipfs.tech/reference/kubo/cli/) based on (`/api/v0`) RPC API
+- [WebUI](https://github.com/ipfs/ipfs-webui/#readme) to manage the Kubo node
+- [Content blocking](/docs/content-blocking.md) support for operators of public nodes
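+
+As a quick illustration of the Kubo RPC API listed above, a running daemon answers `POST` requests on its local API port (`127.0.0.1:5001` by default; endpoint names per the Kubo RPC reference):
+
+```shell
+# Query the local daemon's identity over the /api/v0 RPC API
+# (requires a running `ipfs daemon`)
+curl -X POST http://127.0.0.1:5001/api/v0/id
+
+# The daemon version can be fetched the same way
+curl -X POST http://127.0.0.1:5001/api/v0/version
+```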
+
+### Other implementations
+
+See [List](https://docs.ipfs.tech/basics/ipfs-implementations/)
+
+## What is IPFS?
+
+IPFS is a global, versioned, peer-to-peer filesystem. It combines good ideas from previous systems such as Git, BitTorrent, Kademlia, SFS, and the Web. It is like a single BitTorrent swarm, exchanging git objects. IPFS provides an interface as simple as the HTTP web, but with permanence built-in. You can also mount the world at /ipfs.
+
+For more info see: https://docs.ipfs.tech/concepts/what-is-ipfs/
+
+Before opening an issue, consider using one of the following locations to ensure you are opening your thread in the right place:
+ - kubo (previously named go-ipfs) _implementation_ bugs in [this repo](https://github.com/ipfs/kubo/issues).
+ - Documentation issues in [ipfs/docs issues](https://github.com/ipfs/ipfs-docs/issues).
+ - IPFS _design_ in [ipfs/specs issues](https://github.com/ipfs/specs/issues).
+ - Exploration of new ideas in [ipfs/notes issues](https://github.com/ipfs/notes/issues).
+ - Ask questions and meet the rest of the community at the [IPFS Forum](https://discuss.ipfs.tech).
+ - Or [chat with us](https://docs.ipfs.tech/community/chat/).
+
+[](https://www.youtube.com/channel/UCdjsUXJ3QawK4O5L1kqqsew) [](https://twitter.com/IPFS)
+
+## Next milestones
+
+[Milestones on GitHub](https://github.com/ipfs/kubo/milestones)
+
+
+## Table of Contents
+
+- [What is Kubo?](#what-is-kubo)
+- [What is IPFS?](#what-is-ipfs)
+- [Next milestones](#next-milestones)
+- [Table of Contents](#table-of-contents)
+- [Security Issues](#security-issues)
+- [Install](#install)
+ - [Minimal System Requirements](#minimal-system-requirements)
+ - [Docker](#docker)
+ - [Official prebuilt binaries](#official-prebuilt-binaries)
+ - [Updating](#updating)
+ - [Downloading builds using IPFS](#downloading-builds-using-ipfs)
+ - [Unofficial Linux packages](#unofficial-linux-packages)
+ - [ArchLinux](#arch-linux)
+ - [Gentoo Linux](#gentoo-linux)
+ - [Nix](#nix)
+ - [Solus](#solus)
+ - [openSUSE](#opensuse)
+ - [Guix](#guix)
+ - [Snap](#snap)
+ - [Ubuntu PPA](#ubuntu-ppa)
+ - [Fedora](#fedora-copr)
+ - [Unofficial Windows packages](#unofficial-windows-packages)
+ - [Chocolatey](#chocolatey)
+ - [Scoop](#scoop)
+ - [Unofficial MacOS packages](#unofficial-macos-packages)
+ - [MacPorts](#macports)
+ - [Nix](#nix-macos)
+ - [Homebrew](#homebrew)
+ - [Build from Source](#build-from-source)
+ - [Install Go](#install-go)
+ - [Download and Compile IPFS](#download-and-compile-ipfs)
+ - [Cross Compiling](#cross-compiling)
+ - [Troubleshooting](#troubleshooting)
+- [Getting Started](#getting-started)
+ - [Usage](#usage)
+ - [Some things to try](#some-things-to-try)
+ - [Troubleshooting](#troubleshooting-1)
+- [Packages](#packages)
+- [Development](#development)
+ - [Map of Implemented Subsystems](#map-of-implemented-subsystems)
+ - [CLI, HTTP-API, Architecture Diagram](#cli-http-api-architecture-diagram)
+ - [Testing](#testing)
+ - [Development Dependencies](#development-dependencies)
+ - [Developer Notes](#developer-notes)
+- [Maintainer Info](#maintainer-info)
+- [Contributing](#contributing)
+- [License](#license)
+
+## Security Issues
+
+Please follow [`SECURITY.md`](SECURITY.md).
+
+## Install
+
+The canonical download instructions for IPFS are over at: https://docs.ipfs.tech/install/. It is **highly recommended** you follow those instructions if you are not interested in working on IPFS development.
+
+For production use, Release Docker images (below) are recommended.
+
+### Minimal System Requirements
+
+Kubo runs on most Linux, macOS, and Windows systems. For optimal performance, we recommend at least 6 GB of RAM and 2 CPU cores (more is ideal, as Kubo is highly parallel).
+
+> [!IMPORTANT]
+> Larger pinsets require additional memory, with an estimated ~1 GiB of RAM per 20 million items for reproviding to the Amino DHT.
+
+> [!CAUTION]
+> Systems with less than the recommended memory may experience instability, frequent OOM errors or restarts, and missing data announcement (reprovider window), which can make data fully or partially inaccessible to other peers. Running Kubo on underprovisioned hardware is at your own risk.
+
+### Docker
+
+Official images are published at https://hub.docker.com/r/ipfs/kubo/: [](https://hub.docker.com/r/ipfs/kubo/)
+
+#### 🟢 Release Images
+ - These are production-grade images. Use them.
+ - `latest` and [`release`](https://hub.docker.com/r/ipfs/kubo/tags?name=release) tags always point at [the latest stable release](https://github.com/ipfs/kubo/releases/latest). If you use this, remember to `docker pull` periodically to update.
+ - [`vN.N.N`](https://hub.docker.com/r/ipfs/kubo/tags?name=v) points at a specific [release tag](https://github.com/ipfs/kubo/releases)
+
+#### 🟠 Developer Preview Images
+ - These tags are used by developers for internal testing, not intended for end users or production use.
+ - [`master-latest`](https://hub.docker.com/r/ipfs/kubo/tags?name=master-latest) always points at the `HEAD` of the [`master`](https://github.com/ipfs/kubo/commits/master/) branch
+ - [`master-YYYY-DD-MM-GITSHA`](https://hub.docker.com/r/ipfs/kubo/tags?name=master-2) points at a specific commit from the `master` branch
+
+#### 🔴 Internal Staging Images
+ - We use `staging` for testing arbitrary commits and experimental patches.
+ - To build an image for the current HEAD, force-push to `staging` via `git push origin HEAD:staging --force`
+ - [`staging-latest`](https://hub.docker.com/r/ipfs/kubo/tags?name=staging-latest) always points at the `HEAD` of the [`staging`](https://github.com/ipfs/kubo/commits/staging/) branch
+ - [`staging-YYYY-DD-MM-GITSHA`](https://hub.docker.com/r/ipfs/kubo/tags?name=staging-2) points at a specific commit from the `staging` branch
+
+```console
+$ docker pull ipfs/kubo:latest
+$ docker run --rm -it --net=host ipfs/kubo:latest
+```
+
+To [customize your node](https://docs.ipfs.tech/install/run-ipfs-inside-docker/#customizing-your-node),
+pass necessary config via `-e` or by mounting scripts in the `/container-init.d`.
+
+Learn more at https://docs.ipfs.tech/install/run-ipfs-inside-docker/
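+
+A minimal sketch of such customization (the `server` profile is a standard Kubo config profile; the `001-config.sh` script name and its contents are purely illustrative):
+
+```shell
+# Run Kubo with a config profile set via -e and an init script
+# mounted into /container-init.d (runs before the daemon starts)
+docker run --rm -it \
+  -e IPFS_PROFILE=server \
+  -v "$PWD/001-config.sh:/container-init.d/001-config.sh" \
+  ipfs/kubo:latest
+```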
+
+### Official prebuilt binaries
+
+The official binaries are published at https://dist.ipfs.tech#kubo:
+
+[](https://dist.ipfs.tech#kubo)
+
+From there:
+- Click the blue "Download Kubo" on the right side of the page.
+- Open/extract the archive.
+- Move kubo (`ipfs`) to your path (`install.sh` can do it for you).
+
+If you are unable to access [dist.ipfs.tech](https://dist.ipfs.tech#kubo), you can also download kubo from:
+- this project's GitHub [releases](https://github.com/ipfs/kubo/releases/latest) page
+- `/ipns/dist.ipfs.tech` at [dweb.link](https://dweb.link/ipns/dist.ipfs.tech#kubo) gateway
+
+#### Updating
+
+##### Downloading builds using IPFS
+
+List the available versions of the Kubo implementation:
+
+```console
+$ ipfs cat /ipns/dist.ipfs.tech/kubo/versions
+```
+
+Then, to view available builds for a version from the previous command (`$VERSION`):
+
+```console
+$ ipfs ls /ipns/dist.ipfs.tech/kubo/$VERSION
+```
+
+To download a given build of a version:
+
+```console
+$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_darwin-amd64.tar.gz # darwin amd64 build
+$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_darwin-arm64.tar.gz # darwin arm64 build
+$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_freebsd-amd64.tar.gz # freebsd amd64 build
+$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_linux-amd64.tar.gz # linux amd64 build
+$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_linux-riscv64.tar.gz # linux riscv64 build
+$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_linux-arm64.tar.gz # linux arm64 build
+$ ipfs get /ipns/dist.ipfs.tech/kubo/$VERSION/kubo_$VERSION_windows-amd64.zip # windows amd64 build
+```
+
+### Unofficial Linux packages
+
+
+
+
+
+- [ArchLinux](#arch-linux)
+- [Gentoo Linux](#gentoo-linux)
+- [Nix](#nix-linux)
+- [Solus](#solus)
+- [openSUSE](#opensuse)
+- [Guix](#guix)
+- [Snap](#snap)
+- [Ubuntu PPA](#ubuntu-ppa)
+- [Fedora](#fedora-copr)
+
+#### Arch Linux
+
+[](https://wiki.archlinux.org/title/IPFS)
+
+```bash
+# pacman -S kubo
+```
+
+[](https://archlinux.org/packages/kubo/)
+
+#### Gentoo Linux
+
+https://wiki.gentoo.org/wiki/Kubo
+
+```bash
+# emerge -a net-p2p/kubo
+```
+
+https://packages.gentoo.org/packages/net-p2p/kubo
+
+#### Nix
+
+With the purely functional package manager [Nix](https://nixos.org/nix/) you can install kubo like this:
+
+```
+$ nix-env -i kubo
+```
+
+You can also install the package by using its attribute name, which is also `kubo`.
+
+#### Solus
+
+[Package for Solus](https://dev.getsol.us/source/kubo/repository/master/)
+
+```
+$ sudo eopkg install kubo
+```
+
+You can also install it through the Solus software center.
+
+#### openSUSE
+
+[Community Package for kubo](https://software.opensuse.org/package/kubo)
+
+#### Guix
+
+[Community Package for kubo](https://packages.guix.gnu.org/search/?query=kubo) is available.
+
+#### Snap
+
+No longer supported, see rationale in [kubo#8688](https://github.com/ipfs/kubo/issues/8688).
+
+#### Ubuntu PPA
+
+[PPA homepage](https://launchpad.net/~twdragon/+archive/ubuntu/ipfs) on Launchpad.
+
+##### Latest Ubuntu (>= 20.04 LTS)
+```sh
+sudo add-apt-repository ppa:twdragon/ipfs
+sudo apt update
+sudo apt install ipfs-kubo
+```
+
+##### Any Ubuntu version
+
+```sh
+sudo su
+echo 'deb https://ppa.launchpadcontent.net/twdragon/ipfs/ubuntu <> main' >> /etc/apt/sources.list.d/ipfs
+echo 'deb-src https://ppa.launchpadcontent.net/twdragon/ipfs/ubuntu <> main' >> /etc/apt/sources.list.d/ipfs
+exit
+sudo apt update
+sudo apt install ipfs-kubo
+```
+where `<>` is the codename of your Ubuntu distribution (for example, `jammy` for 22.04 LTS). During the first installation, the package maintenance script may ask which networking profile, CPU accounting model, and/or existing node configuration file you want to use.
+
+**NOTE**: this method may also work with any compatible Debian-based distribution that ships `libc6` and uses APT as its package manager.
+
+#### Fedora COPR
+
+[`taw00/ipfs-rpm`](https://github.com/taw00/ipfs-rpm)
+
+### Unofficial Windows packages
+
+- [Chocolatey](#chocolatey)
+- [Scoop](#scoop)
+
+#### Chocolatey
+
+No longer supported, see rationale in [kubo#9341](https://github.com/ipfs/kubo/issues/9341).
+
+#### Scoop
+
+Scoop provides kubo as `kubo` in its 'extras' bucket.
+
+```Powershell
+PS> scoop bucket add extras
+PS> scoop install kubo
+```
+
+### Unofficial macOS packages
+
+- [MacPorts](#macports)
+- [Nix](#nix-macos)
+- [Homebrew](#homebrew)
+
+#### MacPorts
+
+The package [ipfs](https://ports.macports.org/port/ipfs) currently points to kubo and is being maintained.
+
+```
+$ sudo port install ipfs
+```
+
+#### Nix
+
+In macOS you can use the purely functional package manager [Nix](https://nixos.org/nix/):
+
+```
+$ nix-env -i kubo
+```
+
+You can also install the package by using its attribute name, which is also `kubo`.
+
+#### Homebrew
+
+A Homebrew formula [ipfs](https://formulae.brew.sh/formula/ipfs) is maintained too.
+
+```
+$ brew install --formula ipfs
+```
+
+### Build from Source
+
+
+
+Kubo's build system requires Go and some standard POSIX build tools:
+
+* GNU make
+* Git
+* GCC (or some other go compatible C Compiler) (optional)
+
+To build without GCC, build with `CGO_ENABLED=0` (e.g., `make build CGO_ENABLED=0`).
+
+#### Install Go
+
+
+
+If you need to update: [Download the latest version of Go](https://golang.org/dl/).
+
+You'll need to add Go's bin directories to your `$PATH` environment variable e.g., by adding these lines to your `/etc/profile` (for a system-wide installation) or `$HOME/.profile`:
+
+```
+export PATH=$PATH:/usr/local/go/bin
+export PATH=$PATH:$GOPATH/bin
+```
+
+(If you run into trouble, see the [Go install instructions](https://golang.org/doc/install)).
+
+#### Download and Compile IPFS
+
+```
+$ git clone https://github.com/ipfs/kubo.git
+
+$ cd kubo
+$ make install
+```
+
+Alternatively, you can run `make build` to build the kubo binary (storing it in `cmd/ipfs/ipfs`) without installing it.
+
+**NOTE:** If you get an error along the lines of "fatal error: stdlib.h: No such file or directory", you're missing a C compiler. Either re-run `make` with `CGO_ENABLED=0` or install GCC.
+
+##### Cross Compiling
+
+Compiling for a different platform is as simple as running:
+
+```
+make build GOOS=myTargetOS GOARCH=myTargetArchitecture
+```
+
+#### Troubleshooting
+
+- Separate [instructions are available for building on Windows](docs/windows.md).
+- `git` is required in order for `go get` to fetch all dependencies.
+- Package managers often contain out-of-date `golang` packages.
+ Ensure that `go version` reports the minimum version required (see go.mod). See above for how to install go.
+- If you are interested in development, please install the development
+dependencies as well.
+- Shell command completions can be generated with one of the `ipfs commands completion` subcommands. Read [docs/command-completion.md](docs/command-completion.md) to learn more.
+- See the [misc folder](https://github.com/ipfs/kubo/tree/master/misc) for how to connect IPFS to systemd or whatever init system your distro uses.
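+
+As an illustration of the shell-completion subcommands mentioned above (the output path here is just an example; any location your shell profile can source works):
+
+```shell
+# Generate a Bash completion script and load it from your shell profile
+ipfs commands completion bash > ~/.ipfs-completion.bash
+echo 'source ~/.ipfs-completion.bash' >> ~/.bashrc
+```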
+
+## Getting Started
+
+### Usage
+
+[](https://docs.ipfs.tech/how-to/command-line-quick-start/)
+[](https://docs.ipfs.tech/reference/kubo/cli/)
+
+To start using IPFS, you must first initialize IPFS's config files on your
+system; this is done with `ipfs init`. See `ipfs init --help` for information on
+the optional arguments it takes. After initialization is complete, you can use
+`ipfs mount`, `ipfs add`, and any of the other commands to explore!
+
+For detailed configuration options, see [docs/config.md](https://github.com/ipfs/kubo/blob/master/docs/config.md).
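+
+A minimal first session, assuming a fresh install, typically looks like this (run the daemon in its own terminal and leave it running):
+
+```shell
+# One-time initialization of the local repo (~/.ipfs by default)
+ipfs init
+
+# Start the node as a long-running daemon (in a separate terminal)
+ipfs daemon
+```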
+
+### Some things to try
+
+Basic proof of 'ipfs working' locally:
+
+ echo "hello world" > hello
+ ipfs add hello
+ # This should output a hash string that looks something like:
+ # QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o
+    ipfs cat <that hash>
+
+### HTTP/RPC clients
+
+For programmatic interaction with Kubo, see our [list of HTTP/RPC clients](docs/http-rpc-clients.md).
+
+### Troubleshooting
+
+If you have installed IPFS before and you are running into problems getting a newer version to work, try deleting (or backing up somewhere else) your IPFS config directory (`~/.ipfs` by default) and rerunning `ipfs init`. This reinitializes the config file to its defaults and clears the local datastore of any bad entries.
+
+For more information about configuration options, see [docs/config.md](https://github.com/ipfs/kubo/blob/master/docs/config.md).
+
+Please direct general questions and help requests to our [forums](https://discuss.ipfs.tech).
+
+If you believe you've found a bug, check the [issues list](https://github.com/ipfs/kubo/issues) and, if you don't see your problem there, either come talk to us on [Matrix chat](https://docs.ipfs.tech/community/chat/), or file an issue of your own!
+
+## Packages
+
+See [IPFS in GO](https://docs.ipfs.tech/reference/go/api/) documentation.
+
+## Development
+
+Some places to get you started on the codebase:
+
+- Main file: [./cmd/ipfs/main.go](https://github.com/ipfs/kubo/blob/master/cmd/ipfs/main.go)
+- CLI Commands: [./core/commands/](https://github.com/ipfs/kubo/tree/master/core/commands)
+- Bitswap (the data trading engine): [go-bitswap](https://github.com/ipfs/go-bitswap)
+- libp2p
+ - libp2p: https://github.com/libp2p/go-libp2p
+ - DHT: https://github.com/libp2p/go-libp2p-kad-dht
+- [IPFS : The `Add` command demystified](https://github.com/ipfs/kubo/tree/master/docs/add-code-flow.md)
+
+### Map of Implemented Subsystems
+**WIP**: This is a high-level architecture diagram of the various sub-systems of this specific implementation. To be updated with how they interact. Anyone who has suggestions is welcome to comment [here](https://docs.google.com/drawings/d/1OVpBT2q-NtSJqlPX3buvjYhOnWfdzb85YEsM_njesME/edit) on how we can improve this!
+
+
+### CLI, HTTP-API, Architecture Diagram
+
+
+
+> [Origin](https://github.com/ipfs/pm/pull/678#discussion_r210410924)
+
+Description: Dotted means "likely going away". The "Legacy" parts are thin wrappers around some commands to translate between the new system and the old system. The grayed-out parts on the "daemon" diagram are there to show that the code is all the same, it's just that we turn some pieces on and some pieces off depending on whether we're running on the client or the server.
+
+### Testing
+
+```
+make test
+```
+
+### Development Dependencies
+
+If you make changes to the protocol buffers, you will need to install the [protoc compiler](https://github.com/google/protobuf).
+
+### Developer Notes
+
+Find more documentation for developers on [docs](./docs)
+
+## Maintainer Info
+
+Kubo is maintained by [Shipyard](https://ipshipyard.com/).
+
+* This repository is part of [Shipyard's Go Triage](https://ipshipyard.notion.site/IPFS-Go-Triage-Boxo-Kubo-Rainbow-0ddee6b7f28d412da7dabe4f9107c29a).
+* [Release Process](https://ipshipyard.notion.site/Kubo-Release-Process-6dba4f5755c9458ab5685eeb28173778)
+
+
+## Contributing
+
+[](https://github.com/ipfs/community/blob/master/CONTRIBUTING.md)
+
+We ❤️ all [our contributors](docs/AUTHORS); this project wouldn’t be what it is without you! If you want to help out, please see [CONTRIBUTING.md](CONTRIBUTING.md).
+
+This repository falls under the IPFS [Code of Conduct](https://github.com/ipfs/community/blob/master/code-of-conduct.md).
+
+Members of IPFS community provide Kubo support on [discussion forum category here](https://discuss.ipfs.tech/c/help/help-kubo/23).
+
+Need help with IPFS itself? Learn where to get help and support at https://ipfs.tech/help.
+
+## License
+
+This project is dual-licensed under Apache 2.0 and MIT terms:
+
+- Apache License, Version 2.0, ([LICENSE-APACHE](https://github.com/ipfs/kubo/blob/master/LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
+- MIT license ([LICENSE-MIT](https://github.com/ipfs/kubo/blob/master/LICENSE-MIT) or http://opensource.org/licenses/MIT)
diff --git a/data/readmes/istio-1281.md b/data/readmes/istio-1281.md
new file mode 100644
index 0000000..9bd9e97
--- /dev/null
+++ b/data/readmes/istio-1281.md
@@ -0,0 +1,131 @@
+# Istio - README (1.28.1)
+
+**Repository**: https://github.com/istio/istio
+**Version**: 1.28.1
+
+---
+
+# Istio
+
+[](https://bestpractices.coreinfrastructure.org/projects/1395)
+[](https://goreportcard.com/report/github.com/istio/istio)
+[](https://godoc.org/istio.io/istio)
+
+
+
+
+
+
+
+
+
+---
+
+Istio is an open source service mesh that layers transparently onto existing distributed applications. Istio’s powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio is the path to load balancing, service-to-service authentication, and monitoring – with few or no service code changes.
+
+- For in-depth information about how to use Istio, visit [istio.io](https://istio.io)
+- To ask questions and get assistance from our community, visit [GitHub Discussions](https://github.com/istio/istio/discussions)
+- To learn how to participate in our overall community, visit [our community page](https://istio.io/about/community)
+
+In this README:
+
+- [Introduction](#introduction)
+- [Repositories](#repositories)
+- [Issue management](#issue-management)
+
+In addition, here are some other documents you may wish to read:
+
+- [Istio Community](https://github.com/istio/community#istio-community) - describes how to get involved and contribute to the Istio project
+- [Istio Developer's Guide](https://github.com/istio/istio/wiki/Preparing-for-Development) - explains how to set up and use an Istio development environment
+- [Project Conventions](https://github.com/istio/istio/wiki/Development-Conventions) - describes the conventions we use within the code base
+- [Creating Fast and Lean Code](https://github.com/istio/istio/wiki/Writing-Fast-and-Lean-Code) - performance-oriented advice and guidelines for the code base
+
+You'll find many other useful documents on our [Wiki](https://github.com/istio/istio/wiki).
+
+## Introduction
+
+[Istio](https://istio.io/latest/docs/concepts/what-is-istio/) is an open platform for providing a uniform way to [integrate
+microservices](https://istio.io/latest/docs/examples/microservices-istio/), manage [traffic flow](https://istio.io/latest/docs/concepts/traffic-management/) across microservices, enforce policies
+and aggregate telemetry data. Istio's control plane provides an abstraction
+layer over the underlying cluster management platform, such as Kubernetes.
+
+Istio is composed of these components:
+
+- **Envoy** - Sidecar proxies per microservice to handle ingress/egress traffic
+ between services in the cluster and from a service to external
+ services. The proxies form a _secure microservice mesh_ providing a rich
+ set of functions like discovery, rich layer-7 routing, circuit breakers,
+ policy enforcement and telemetry recording/reporting
+ functions.
+
+ > Note: The service mesh is not an overlay network. It
+ > simplifies and enhances how microservices in an application talk to each
+ > other over the network provided by the underlying platform.
+
+- **Ztunnel** - A lightweight data plane proxy written in Rust,
+  used in Ambient mesh mode to provide secure connectivity and observability for workloads without sidecar proxies.
+
+- **Istiod** - The Istio control plane. It provides service discovery, configuration and certificate management.
+
+## Repositories
+
+The Istio project is divided across a few GitHub repositories:
+
+- [istio/api](https://github.com/istio/api). This repository defines
+component-level APIs and common configuration formats for the Istio platform.
+
+- [istio/community](https://github.com/istio/community). This repository contains
+information on the Istio community, including the various documents that govern
+the Istio open source project.
+
+- [istio/istio](README.md). This is the main code repository. It hosts Istio's
+core components, install artifacts, and sample programs. It includes:
+
+ - [istioctl](istioctl/). This directory contains code for the
+[_istioctl_](https://istio.io/latest/docs/reference/commands/istioctl/) command line utility.
+
+ - [pilot](pilot/). This directory
+contains platform-specific code to populate the
+[abstract service model](https://istio.io/docs/concepts/traffic-management/#pilot), dynamically reconfigure the proxies
+when the application topology changes, as well as translate
+[routing rules](https://istio.io/latest/docs/reference/config/networking/) into proxy specific configuration.
+
+ - [security](security/). This directory contains [security](https://istio.io/latest/docs/concepts/security/) related code.
+
+- [istio/proxy](https://github.com/istio/proxy). The Istio proxy contains
+extensions to the [Envoy proxy](https://github.com/envoyproxy/envoy) (in the form of
+Envoy filters) that support authentication, authorization, and telemetry collection.
+
+- [istio/ztunnel](https://github.com/istio/ztunnel). The repository contains the Rust implementation of the ztunnel
+component of Ambient mesh.
+
+- [istio/client-go](https://github.com/istio/client-go). This repository defines
+ auto-generated Kubernetes clients for interacting with Istio resources programmatically.
+
+> [!NOTE]
+> Only the `istio/api` and `istio/client-go` repositories expose stable interfaces intended for direct usage as libraries.
+
+## Issue management
+
+We use GitHub to track all of our bugs and feature requests. Each issue we track has a variety of metadata:
+
+- **Epic**. An epic represents a feature area for Istio as a whole. Epics are fairly broad in scope and are basically product-level things.
+Each issue is ultimately part of an epic.
+
+- **Milestone**. Each issue is assigned a milestone. This is 0.1, 0.2, ..., or 'Nebulous Future'. The milestone indicates when we
+think the issue should get addressed.
+
+- **Priority**. Each issue has a priority which is represented by the column in the [Prioritization](https://github.com/orgs/istio/projects/6) project. Priority can be one of
+P0, P1, P2, or >P2. The priority indicates how important it is to address the issue within the milestone. P0 says that the
+milestone cannot be considered achieved if the issue isn't resolved.
+
+---
+
+
diff --git a/data/readmes/jaeger-v1760.md b/data/readmes/jaeger-v1760.md
new file mode 100644
index 0000000..61a3603
--- /dev/null
+++ b/data/readmes/jaeger-v1760.md
@@ -0,0 +1,176 @@
+# Jaeger - README (v1.76.0)
+
+**Repository**: https://github.com/jaegertracing/jaeger
+**Version**: v1.76.0
+
+---
+
+[](https://stand-with-ukraine.pp.ua)
+
+
+
+[![Slack chat][slack-img]](#get-in-touch)
+[![Unit Tests][ci-img]][ci]
+[![Coverage Status][cov-img]][cov]
+[![Project+Community stats][community-badge]][community-stats]
+[![FOSSA Status][fossa-img]][fossa]
+[![OpenSSF Scorecard][openssf-img]][openssf]
+[![OpenSSF Best Practices][openssf-bp-img]][openssf-bp]
+[![CLOMonitor][clomonitor-img]][clomonitor]
+[![Artifact Hub][artifacthub-img]][artifacthub]
+
+
+
+# Jaeger - a Distributed Tracing System
+
+💥💥💥 Jaeger v2 is out! Read the [blog post](https://medium.com/jaegertracing/jaeger-v2-released-09a6033d1b10) and [try it out](https://www.jaegertracing.io/docs/latest/getting-started/).
+
+```mermaid
+graph TD
+ SDK["OpenTelemetry SDK"] --> |HTTP or gRPC| COLLECTOR
+ COLLECTOR["Jaeger Collector"] --> STORE[Storage]
+ COLLECTOR --> |gRPC| PLUGIN[Storage Plugin]
+ COLLECTOR --> |gRPC/sampling| SDK
+ PLUGIN --> STORE
+ QUERY[Jaeger Query Service] --> STORE
+ QUERY --> |gRPC| PLUGIN
+ UI[Jaeger UI] --> |HTTP| QUERY
+ subgraph Application Host
+ subgraph User Application
+ SDK
+ end
+ end
+```
+
+Jaeger is a distributed tracing platform created by [Uber Technologies](https://eng.uber.com/distributed-tracing/) and donated to [Cloud Native Computing Foundation](https://cncf.io).
+
+See Jaeger [documentation][doc] for getting started, operational details, and other information.
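+
+For a quick local try-out, the getting-started documentation describes running Jaeger in Docker; a sketch for v2 looks like this (image name and ports as documented there; v1 instead uses the `jaegertracing/all-in-one` image):
+
+```shell
+# UI on 16686, OTLP ingestion on 4317 (gRPC) and 4318 (HTTP)
+docker run --rm --name jaeger \
+  -p 16686:16686 \
+  -p 4317:4317 \
+  -p 4318:4318 \
+  jaegertracing/jaeger:latest
+```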
+
+Jaeger is hosted by the [Cloud Native Computing Foundation](https://cncf.io) (CNCF) as the 7th top-level project, graduated in October 2019. See the CNCF [Jaeger incubation announcement](https://www.cncf.io/blog/2017/09/13/cncf-hosts-jaeger/) and [Jaeger graduation announcement](https://www.cncf.io/announcement/2019/10/31/cloud-native-computing-foundation-announces-jaeger-graduation/).
+
+## Get Involved
+
+Jaeger is an open source project with open governance. We welcome contributions from the community, and we would love your help to improve and extend the project. Here are [some ideas](https://www.jaegertracing.io/get-involved/) for how to get involved. Many of them do not even require any coding.
+
+## Version Compatibility Guarantees
+
+Since Jaeger uses many components from the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector/) we try to maintain configuration compatibility between Jaeger releases. Occasionally, configuration options in Jaeger (or in Jaeger v1 CLI flags) can be deprecated due to usability improvements, new functionality, or changes in our dependencies.
+In such situations, developers introducing the deprecation are required to follow [these guidelines](./CONTRIBUTING.md#deprecating-cli-flags).
+
+In short, for a deprecated configuration option, you should expect to see the following message in the documentation or release notes:
+```
+(deprecated, will be removed after yyyy-mm-dd or in release vX.Y.Z, whichever is later)
+```
+
+A grace period of at least **3 months** or **two minor version bumps** (whichever is later) from the first release
+containing the deprecation notice will be provided before the deprecated configuration option _can_ be deleted.
+
+For example, consider a scenario where v2.0.0 is released on 01-Sep-2024 containing a deprecation notice for a configuration option.
+This configuration option will remain in a deprecated state until the later of 01-Dec-2024 or v2.2.0 where it _can_ be removed on or after either of those events.
+It may remain deprecated for longer than the aforementioned grace period.
+
+## Go Version Compatibility Guarantees
+
+The Jaeger project attempts to track the currently supported versions of Go, as [defined by the Go team](https://go.dev/doc/devel/release#policy).
+Removing support for an unsupported Go version is not considered a breaking change.
+
+Starting with the release of Go 1.21, support for Go versions will be updated as follows:
+
+1. Soon after the release of a new Go minor version `N`, updates will be made to the build and tests steps to accommodate the latest Go minor version.
+2. Soon after the release of a new Go minor version `N`, support for Go version `N-2` will be removed and version `N-1` will become the minimum required version.
+
+## Related Repositories
+
+### Components
+
+ * [UI](https://github.com/jaegertracing/jaeger-ui)
+ * [Data model](https://github.com/jaegertracing/jaeger-idl)
+
+### Documentation
+
+ * Published: https://www.jaegertracing.io/docs/
+ * Source: https://github.com/jaegertracing/documentation
+
+## Building From Source
+
+See [CONTRIBUTING](./CONTRIBUTING.md).
+
+## Contributing
+
+See [CONTRIBUTING](./CONTRIBUTING.md).
+
+Thanks to all the people who already contributed!
+
+
+
+
+
+### Maintainers
+
+Rules for becoming a maintainer are defined in the [GOVERNANCE](./GOVERNANCE.md) document.
+The official maintainers of the Jaeger project are listed in the [MAINTAINERS](./MAINTAINERS.md) file.
+Please use `@jaegertracing/jaeger-maintainers` to tag them on issues / PRs.
+
+Some repositories under [jaegertracing](https://github.com/jaegertracing) org have additional maintainers.
+
+## Project Status Meetings
+
+The Jaeger maintainers and contributors meet regularly on a video call. Everyone is welcome to join, including end users. For meeting details, see https://www.jaegertracing.io/get-in-touch/.
+
+## Roadmap
+
+See https://www.jaegertracing.io/docs/roadmap/
+
+## Get in Touch
+
+Have questions, suggestions, bug reports? Reach the project community via these channels:
+
+ * [Slack chat room `#jaeger`][slack] (you need to join [CNCF Slack][slack-join] first)
+ * [`jaeger-tracing` mail group](https://groups.google.com/forum/#!forum/jaeger-tracing)
+ * GitHub [issues](https://github.com/jaegertracing/jaeger/issues) and [discussions](https://github.com/jaegertracing/jaeger/discussions)
+
+## Security
+
+Third-party security audits of Jaeger are available at https://github.com/jaegertracing/security-audits. Please see [Issue #1718](https://github.com/jaegertracing/jaeger/issues/1718) for a summary of the security mechanisms available in Jaeger.
+
+## Adopters
+
+Jaeger as a product consists of multiple components. We want to support different types of users,
+whether they are only using our instrumentation libraries or a full end-to-end Jaeger installation,
+and whether it runs in production or is used to troubleshoot issues in development.
+
+Please see [ADOPTERS.md](./ADOPTERS.md) for some of the organizations using Jaeger today.
+If you would like to add your organization to the list, please comment on our
+[survey issue](https://github.com/jaegertracing/jaeger/issues/207).
+
+## Sponsors
+
+The Jaeger project owes its success in open source largely to the Cloud Native Computing Foundation (CNCF), our primary supporter. We deeply appreciate their vital support. Furthermore, we are grateful to Uber for their initial, project-launching donation, and for the continuous contributions of software and infrastructure from 1Password, Codecov.io, Dosu, GitHub, Google Analytics, Netlify, and Oracle Cloud Infrastructure. Thank you for your generous support.
+
+## License
+
+Copyright (c) The Jaeger Authors. [Apache 2.0 License](./LICENSE).
+
+[doc]: https://jaegertracing.io/docs/
+[ci-img]: https://github.com/jaegertracing/jaeger/actions/workflows/ci-unit-tests.yml/badge.svg?branch=main
+[ci]: https://github.com/jaegertracing/jaeger/actions/workflows/ci-unit-tests.yml?query=branch%3Amain
+[cov-img]: https://codecov.io/gh/jaegertracing/jaeger/branch/main/graph/badge.svg
+[cov]: https://codecov.io/gh/jaegertracing/jaeger/branch/main/
+[fossa-img]: https://app.fossa.com/api/projects/git%2Bgithub.com%2Fjaegertracing%2Fjaeger.svg?type=shield
+[fossa]: https://app.fossa.io/projects/git%2Bgithub.com%2Fjaegertracing%2Fjaeger?ref=badge_shield
+[openssf-img]: https://api.securityscorecards.dev/projects/github.com/jaegertracing/jaeger/badge
+[openssf]: https://securityscorecards.dev/viewer/?uri=github.com/jaegertracing/jaeger
+[openssf-bp-img]: https://www.bestpractices.dev/projects/1273/badge
+[openssf-bp]: https://www.bestpractices.dev/projects/1273
+[clomonitor-img]: https://img.shields.io/endpoint?url=https://clomonitor.io/api/projects/cncf/jaeger/badge
+[clomonitor]: https://clomonitor.io/projects/cncf/jaeger
+[artifacthub-img]: https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/jaegertracing
+[artifacthub]: https://artifacthub.io/packages/search?repo=jaegertracing
+
+
+[community-badge]: https://img.shields.io/badge/Project+Community-stats-blue.svg
+[community-stats]: https://all.devstats.cncf.io/d/54/project-health?orgId=1&var-repogroup_name=Jaeger
+[hotrod-tutorial]: https://medium.com/jaegertracing/take-jaeger-for-a-hotrod-ride-233cf43e46c2
+[slack]: https://cloud-native.slack.com/archives/CGG7NFUJ3
+[slack-join]: https://slack.cncf.io
+[slack-img]: https://img.shields.io/badge/slack-join%20chat%20%E2%86%92-brightgreen?logo=slack
diff --git a/data/readmes/java-openjdk-jdk-271.md b/data/readmes/java-openjdk-jdk-271.md
new file mode 100644
index 0000000..7f426a2
--- /dev/null
+++ b/data/readmes/java-openjdk-jdk-271.md
@@ -0,0 +1,19 @@
+# Java (OpenJDK) - README (jdk-27+1)
+
+**Repository**: https://github.com/openjdk/jdk
+**Version**: jdk-27+1
+
+---
+
+# Welcome to the JDK!
+
+For build instructions please see the
+[online documentation](https://openjdk.org/groups/build/doc/building.html),
+or either of these files:
+
+- [doc/building.html](doc/building.html) (html version)
+- [doc/building.md](doc/building.md) (markdown version)
+
+See <https://openjdk.org/> for more information about the OpenJDK
+Community and the JDK and see <https://bugs.openjdk.org> for JDK issue
+tracking.
diff --git a/data/readmes/jenkins-jenkins-2540.md b/data/readmes/jenkins-jenkins-2540.md
new file mode 100644
index 0000000..870d000
--- /dev/null
+++ b/data/readmes/jenkins-jenkins-2540.md
@@ -0,0 +1,127 @@
+# Jenkins - README (jenkins-2.540)
+
+**Repository**: https://github.com/jenkinsci/jenkins
+**Version**: jenkins-2.540
+
+---
+
+
+
+
+
+[](https://www.jenkins.io/changelog)
+[](https://www.jenkins.io/changelog-stable)
+[](https://hub.docker.com/r/jenkins/jenkins/)
+[](https://bestpractices.coreinfrastructure.org/projects/3538)
+[](https://maven.apache.org/guides/mini/guide-reproducible-builds.html)
+[](https://app.gitter.im/#/room/#jenkinsci_jenkins:gitter.im)
+
+---
+
+# Table of Contents
+
+- [About](#about)
+- [What to Use Jenkins for and When to Use It](#what-to-use-jenkins-for-and-when-to-use-it)
+- [Downloads](#downloads)
+- [Getting Started (Development)](#getting-started-development)
+- [Source](#source)
+- [Contributing to Jenkins](#contributing-to-jenkins)
+- [News and Website](#news-and-website)
+- [Governance](#governance)
+- [Adopters](#adopters)
+- [License](#license)
+
+---
+
+# About
+
+In a nutshell, Jenkins is the leading open-source automation server.
+Built with Java, it provides over 2,000 [plugins](https://plugins.jenkins.io/) to support automating virtually anything,
+so that humans can spend their time doing things machines cannot.
+
+# What to Use Jenkins for and When to Use It
+
+Use Jenkins to automate your development workflow, so you can focus on work that matters most. Jenkins is commonly used for:
+
+- Building projects
+- Running tests to detect bugs and other issues as soon as they are introduced
+- Static code analysis
+- Deployment
+
+Execute repetitive tasks, save time, and optimize your development process with Jenkins.
+
+# Downloads
+
+The Jenkins project provides official distributions as WAR files, Docker images, native packages and installers for platforms including several Linux distributions and Windows.
+See the [Downloads](https://www.jenkins.io/download) page for references.
+
+For all distributions Jenkins offers two release lines:
+
+- [Weekly](https://www.jenkins.io/download/weekly/) -
+ Frequent releases which include all new features, improvements, and bug fixes.
+- [Long-Term Support (LTS)](https://www.jenkins.io/download/lts/) -
+ Older release line which gets periodically updated via bug fix backports.
+
+Latest releases:
+
+[](https://www.jenkins.io/changelog)
+[](https://www.jenkins.io/changelog-stable)
+
+# Getting Started (Development)
+
+For more information on setting up your development environment, contributing, and working with Jenkins internals, check the [contributing guide](CONTRIBUTING.md) and the [Jenkins Developer Documentation](https://www.jenkins.io/doc/developer/).
+
+# Source
+
+Our latest and greatest source of Jenkins can be found on [GitHub](https://github.com/jenkinsci/jenkins). Fork us!
+
+# Contributing to Jenkins
+
+New to open source or Jenkins? Here’s how to get started:
+
+- Read the [Contribution Guidelines](CONTRIBUTING.md)
+- Check our [good first issues](https://github.com/jenkinsci/jenkins/issues?q=is%3Aissue%20is%3Aopen%20label%3A%22good%20first%20issue%22)
+- Join our [Gitter chat](https://app.gitter.im/#/room/#jenkinsci_newcomer-contributors:gitter.im) for questions and help
+
+For more information about participating in the community and contributing to the Jenkins project,
+see [this page](https://www.jenkins.io/participate/).
+
+Documentation for Jenkins core maintainers is in the [maintainers guidelines](docs/MAINTAINERS.adoc).
+
+# News and Website
+
+All information about Jenkins can be found on our [official website](https://www.jenkins.io/), including documentation, blog posts, plugin listings, community updates, and more.
+
+Stay up-to-date with the latest Jenkins news, tutorials, and release notes:
+
+- [Jenkins Blog](https://www.jenkins.io/blog/)
+- [Documentation](https://www.jenkins.io/doc/)
+- [Plugins Index](https://plugins.jenkins.io/)
+- [Events](https://www.jenkins.io/events/)
+
+Follow Jenkins on social media to stay connected with the community:
+
+- [Twitter / X](https://x.com/jenkinsci)
+- [YouTube](https://www.youtube.com/@jenkinscicd)
+- [LinkedIn](https://www.linkedin.com/company/jenkins-project/)
+
+# Governance
+
+The Jenkins project is governed by an open source community.
+To learn more about the governance structure, project leadership, and how decisions are made, visit the [Governance Page](https://www.jenkins.io/project/governance/).
+
+# Adopters
+
+Jenkins is trusted by **millions of users** and adopted by **thousands of companies** around the world — from startups to enterprises — to automate their software delivery pipelines.
+
+Explore the [Adopters Page](https://www.jenkins.io/project/adopters/) and https://stories.jenkins.io to see:
+
+- Companies and organizations using Jenkins
+- Success stories and case studies
+- How Jenkins is used in different industries
+
+> If your company uses Jenkins and you'd like to be featured, feel free to [submit your story](https://www.jenkins.io/project/adopters/contributing/#share-your-story)!
+
+# License
+
+Jenkins is **licensed** under the **[MIT License](LICENSE.txt)**.
diff --git a/data/readmes/jq-jq-181.md b/data/readmes/jq-jq-181.md
new file mode 100644
index 0000000..3a035d3
--- /dev/null
+++ b/data/readmes/jq-jq-181.md
@@ -0,0 +1,85 @@
+# jq - README (jq-1.8.1)
+
+**Repository**: https://github.com/jqlang/jq
+**Version**: jq-1.8.1
+
+---
+
+# jq
+
+`jq` is a lightweight and flexible command-line JSON processor akin to `sed`, `awk`, `grep`, and friends for JSON data. It's written in portable C and has zero runtime dependencies, allowing you to easily slice, filter, map, and transform structured data.
+
+## Documentation
+
+- **Official Documentation**: [jqlang.org](https://jqlang.org)
+- **Try jq Online**: [play.jqlang.org](https://play.jqlang.org)
+
+## Installation
+
+### Prebuilt Binaries
+
+Download the latest releases from the [GitHub release page](https://github.com/jqlang/jq/releases).
+
+### Docker Image
+
+Pull the [jq image](https://github.com/jqlang/jq/pkgs/container/jq) to start quickly with Docker.
+
+#### Run with Docker
+
+##### Example: Extracting the version from a `package.json` file
+
+```bash
+docker run --rm -i ghcr.io/jqlang/jq:latest < package.json '.version'
+```
+
+##### Example: Extracting the version from a `package.json` file with a mounted volume
+
+```bash
+docker run --rm -i -v "$PWD:$PWD" -w "$PWD" ghcr.io/jqlang/jq:latest '.version' package.json
+```
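+
+The same filter also works with a local `jq` binary, no Docker required (illustrative input; assumes `jq` is on your `PATH`):
+
+```bash
+# Hypothetical package.json contents piped via stdin; -r prints the raw string
+echo '{"name": "demo", "version": "1.2.3"}' | jq -r '.version'
+```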
+
+### Building from source
+
+#### Dependencies
+
+- libtool
+- make
+- automake
+- autoconf
+
+#### Instructions
+
+```console
+git submodule update --init # if building from git to get oniguruma
+autoreconf -i # if building from git
+./configure --with-oniguruma=builtin
+make clean # if upgrading from a version previously built from source
+make -j8
+make check
+sudo make install
+```
+
+Build a statically linked version:
+
+```console
+make LDFLAGS=-all-static
+```
+
+If you're not using the latest git version but instead building a released tarball (available on the release page), skip the `autoreconf` step, and flex or bison won't be needed.
+
+##### Cross-Compilation
+
+For details on cross-compilation, check out the [GitHub Actions file](.github/workflows/ci.yml) and the [cross-compilation wiki page](https://github.com/jqlang/jq/wiki/Cross-compilation).
+
+## Community & Support
+
+- Questions & Help: [Stack Overflow (jq tag)](https://stackoverflow.com/questions/tagged/jq)
+- Chat & Community: [Join us on Discord](https://discord.gg/yg6yjNmgAC)
+- Wiki & Advanced Topics: [Explore the Wiki](https://github.com/jqlang/jq/wiki)
+
+## License
+
+`jq` is released under the [MIT License](COPYING). `jq`'s documentation is
+licensed under the [Creative Commons CC BY 3.0](COPYING).
+`jq` uses parts of the open source C library "decNumber", which is distributed
+under the [ICU License](COPYING).
diff --git a/data/readmes/k0s-v1342k0s0.md b/data/readmes/k0s-v1342k0s0.md
new file mode 100644
index 0000000..eb60187
--- /dev/null
+++ b/data/readmes/k0s-v1342k0s0.md
@@ -0,0 +1,192 @@
+# k0s - README (v1.34.2+k0s.0)
+
+**Repository**: https://github.com/k0sproject/k0s
+**Version**: v1.34.2+k0s.0
+
+---
+
+# k0s - The Zero Friction Kubernetes
+
+
+
+[](https://www.bestpractices.dev/projects/9994)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fk0sproject%2Fk0s?ref=badge_shield)
+[](https://github.com/k0sproject/k0s/actions/workflows/go.yml?query=branch%3Amain)
+[](https://github.com/k0sproject/k0s/actions/workflows/ostests-nightly.yaml)
+
+[](https://github.com/k0sproject/k0s/tags?label=Downloads)
+
+
+ 
+
+
+
+
+
+## Overview
+
+k0s is an open source, all-inclusive Kubernetes distribution, which is configured with all of the features needed to build a Kubernetes cluster and packaged as a single binary for ease of use. Due to its simple design, flexible deployment options and modest system requirements, k0s is well suited for
+
+- Any cloud
+- Bare metal
+- Edge and IoT
+
+k0s drastically reduces the complexity of installing and running a CNCF certified Kubernetes distribution. With k0s, new clusters can be bootstrapped in minutes, and developer friction is reduced to zero. Anyone with no special skills or expertise in Kubernetes can easily get started.
+
+k0s is distributed as a single binary with zero host OS dependencies besides the host OS kernel. It works with any Linux without additional software packages or configuration. Any security vulnerabilities or performance issues can be fixed directly in the k0s distribution, which makes it extremely straightforward to keep clusters up-to-date and secure.
+
+
+
+## Key Features
+
+- Certified and 100% upstream Kubernetes
+- Multiple installation methods: [single-node](docs/install.md), [multi-node](docs/k0sctl-install.md), [airgap](docs/airgap-install.md) and [Docker](docs/k0s-in-docker.md)
+- Automatic lifecycle management with k0sctl: [upgrade](docs/upgrade.md), [backup and restore](docs/backup.md)
+- Modest [system requirements](docs/system-requirements.md) (1 vCPU, 1 GB RAM)
+- Available as a single binary with no [external runtime dependencies](docs/external-runtime-deps.md) besides the kernel
+- Flexible deployment options with [control plane isolation](docs/networking.md#controller-worker-communication) as default
+- Scalable from a single node to large, [highly available](docs/high-availability.md) clusters
+- Supports custom [Container Network Interface (CNI)](docs/networking.md) plugins (Kube-Router is the default, Calico is offered as a preconfigured alternative)
+- Supports custom [Container Runtime Interface (CRI)](docs/runtime.md) plugins (containerd is the default)
+- Supports all Kubernetes storage options with [Container Storage Interface (CSI)](docs/storage.md)
+- Supports a variety of [datastore backends](docs/configuration.md#specstorage): etcd (default for multi-node clusters), SQLite (default for single node clusters), MySQL, and PostgreSQL
+- Supports x86-64, ARM64, ARMv7 and RISC-V
+- Includes [Konnectivity service](docs/networking.md#controller-worker-communication), CoreDNS and Metrics Server
+
+
+## Getting Started
+
+If you'd like to try k0s, please jump in to our:
+
+- [Quick Start Guide](https://docs.k0sproject.io/stable/install/) - Create a full Kubernetes cluster with a single node that includes both the controller and the worker.
+- [Install using k0sctl](https://docs.k0sproject.io/stable/k0sctl-install/) - Deploy and upgrade multi-node clusters with one command.
+- [NanoDemo](https://docs.k0sproject.io/stable/#demo) - Watch a .gif recording on how to create a k0s instance.
+- [Run k0s in Docker](https://docs.k0sproject.io/stable/k0s-in-docker/) - Run k0s controllers and workers in containers.
+- For docs, tutorials, and other k0s resources, see [docs main page](https://docs.k0sproject.io).
+
+
+## Join the Community
+
+- [k8s Slack] - Reach out for support and help from the k0s community.
+- [GitHub Issues] - Submit your issues and feature requests via GitHub.
+
+We welcome your help in building k0s! If you are interested, we invite you to
+check out the [Contributing Guide] and the [Code of Conduct].
+
+[k8s Slack]: https://kubernetes.slack.com/archives/C07VAPJUECS
+[GitHub Issues]: https://github.com/k0sproject/k0s/issues
+[Contributing Guide]: https://docs.k0sproject.io/stable/contributors/overview/
+[Code of Conduct]: https://docs.k0sproject.io/stable/contributors/CODE_OF_CONDUCT/
+
+### Community hours
+
+We will be holding regular community hours. Everyone in the community is welcome to drop by and ask questions, talk about projects, and chat.
+
+We currently have a monthly office hours call on the last Tuesday of the month.
+
+To see the call details in your local timezone, check out [https://dateful.com/eventlink/2735919704](https://dateful.com/eventlink/2735919704).
+
+
+### Adopters
+
+k0s is used across diverse environments, from small-scale far-edge deployments
+to large data centers. Share your use case and add yourself to the list of
+[adopters].
+
+[adopters]: ADOPTERS.md
+
+
+## Motivation
+
+_We have seen a gap between the host OS and Kubernetes that runs on top of it: How to ensure they work together as they are upgraded independent from each other? Who is responsible for vulnerabilities or performance issues originating from the host OS that affect the K8S on top?_
+
+**→** k0s is fully self contained. It’s distributed as a single binary with no host OS deps besides the kernel. Any vulnerability or perf issues may be fixed in k0s Kubernetes.
+
+_We have seen K8S with partial FIPS security compliance: How to ensure security compliance for critical applications if only part of the system is FIPS compliant?_
+
+**→** k0s core + all included host OS dependencies + components on top may be compiled and packaged as a 100% FIPS compliant distribution using a proper toolchain.
+
+_We have seen Kubernetes with cumbersome lifecycle management, high minimum system requirements, weird host OS and infra restrictions, and/or need to use different distros to meet different use cases._
+
+**→** k0s is designed to be lightweight at its core. It comes with a tool to automate cluster lifecycle management. It works on any host OS and infrastructure, and may be extended to work with any use cases such as edge, IoT, telco, public clouds, private data centers, and hybrid & hyper converged cloud applications without sacrificing the pure Kubernetes compliance or amazing developer experience.
+
+
+## Status
+
+k0s is ready for production (starting from v1.21.0+k0s.0). Since the initial release of k0s back in November 2020, we have made numerous releases, improved stability, added new features, and most importantly, listened to our users and community in an effort to create the most modern Kubernetes product out there. The active development continues to make k0s even better.
+
+
+## Scope
+
+While some Kubernetes distros package everything and the kitchen sink, k0s tries to minimize the amount of "add-ons" to bundle in. Instead, we aim to provide a robust and versatile "base" for running Kubernetes in various setups. Of course we will provide some ways to easily control and setup various "add-ons", but we will not bundle many of those into k0s itself. There are a couple of reasons why we think this is the correct way:
+
+- Many of the addons such as ingresses, service meshes, storage etc. are VERY opinionated. We try to build this base with fewer opinions. :D
+- Keeping up with the upstream releases with many external addons is very maintenance heavy. Shipping with old versions does not make much sense either.
+
+With strong enough arguments we might take in new addons, but in general those should be something that are essential for the "core" of k0s.
+
+
+## Build
+
+The requirements for building k0s from source are as follows:
+
+- GNU Make (v3.81 or newer)
+- A POSIX shell
+- coreutils
+- findutils
+- Docker
+
+All of the compilation steps are performed inside Docker containers, no
+installation of Go is required.
+
+The k0s binary can be built in different ways:
+
+The "k0s" way, self-contained, all binaries compiled from source, statically
+linked, including embedded binaries:
+
+```shell
+make
+```
+
+The "package maintainer" way, without building and embedding the required
+binaries. This assumes necessary binaries are provided separately at runtime:
+
+```shell
+make EMBEDDED_BINS_BUILDMODE=none
+```
+
+Docker build integration is enabled by default. However, in environments without
+Docker, you can use the Go toolchain installed on the host system to build k0s
+without embedding binaries. Note that static linking is not possible with
+glibc-based toolchains:
+
+```shell
+make DOCKER='' EMBEDDED_BINS_BUILDMODE=none BUILD_GO_LDFLAGS_EXTRA=''
+```
+
+Note that the k0s build system does not currently support building the embedded
+binaries without Docker. However, the embedded binaries can be built
+independently using Docker:
+
+```shell
+make -C embedded-bins
+```
+
+Builds can be done in parallel:
+
+```shell
+make -j$(nproc)
+```
+
+## Smoke test
+
+In addition to the requirements for building k0s, the smoke tests _do_ require
+a local Go installation. Run `./vars.sh go_version` in a terminal to print the
+Go version used to build k0s to stdout.
+
+To run a basic smoke test after build:
+
+```shell
+make check-basic
+```
diff --git a/data/readmes/k3s-v1342k3s1.md b/data/readmes/k3s-v1342k3s1.md
new file mode 100644
index 0000000..e41882c
--- /dev/null
+++ b/data/readmes/k3s-v1342k3s1.md
@@ -0,0 +1,202 @@
+# k3s - README (v1.34.2+k3s1)
+
+**Repository**: https://github.com/k3s-io/k3s
+**Version**: v1.34.2+k3s1
+
+---
+
+K3s - Lightweight Kubernetes
+===============================================
+[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Fk3s-io%2Fk3s?ref=badge_shield)
+[](https://github.com/k3s-io/k3s/actions/workflows/nightly-install.yaml)
+[](https://drone-publish.k3s.io/k3s-io/k3s)
+[](https://github.com/k3s-io/k3s/actions/workflows/integration.yaml)
+[](https://github.com/k3s-io/k3s/actions/workflows/unitcoverage.yaml)
+[](https://www.bestpractices.dev/projects/6835)
+[](https://scorecard.dev/viewer/?uri=github.com/k3s-io/k3s)
+[](https://github.com/k3s-io/k3s/tags?label=Downloads)
+[](https://clomonitor.io/projects/cncf/k3s)
+
+Lightweight Kubernetes. Production ready, easy to install, half the memory, all in a binary less than 100 MB.
+
+Great for:
+
+* Edge
+* IoT
+* CI
+* Development
+* ARM
+* Embedding k8s
+* Situations where a PhD in k8s clusterology is infeasible
+
+What is this?
+---
+
+K3s is a [fully conformant](https://github.com/cncf/k8s-conformance/pulls?q=is%3Apr+k3s) production-ready Kubernetes distribution with the following changes:
+
+1. It is packaged as a single binary.
+1. It adds support for sqlite3 as the default storage backend. Etcd3, MariaDB, MySQL, and Postgres are also supported.
+1. It wraps Kubernetes and other components in a single, simple launcher.
+1. It is secure by default with reasonable defaults for lightweight environments.
+1. It has minimal to no OS dependencies (just a sane kernel and cgroup mounts needed).
+1. It eliminates the need to expose a port on Kubernetes worker nodes for the kubelet API by exposing this API to the Kubernetes control plane nodes over a websocket tunnel.
+
+K3s bundles the following technologies together into a single cohesive distribution:
+
+* [Containerd](https://containerd.io/) & [runc](https://github.com/opencontainers/runc)
+* [Flannel](https://github.com/flannel-io/flannel) for CNI
+* [CoreDNS](https://coredns.io/)
+* [Metrics Server](https://github.com/kubernetes-sigs/metrics-server)
+* [Traefik](https://containo.us/traefik/) for ingress
+* [Klipper-lb](https://github.com/k3s-io/klipper-lb) as an embedded service load balancer provider
+* [Kube-router](https://www.kube-router.io/) netpol controller for network policy
+* [Helm-controller](https://github.com/k3s-io/helm-controller) to allow for CRD-driven deployment of helm manifests
+* [Kine](https://github.com/k3s-io/kine) as a datastore shim that allows etcd to be replaced with other databases
+* [Local-path-provisioner](https://github.com/rancher/local-path-provisioner) for provisioning volumes using local storage
+* [Host utilities](https://github.com/k3s-io/k3s-root) such as iptables/nftables, ebtables, ethtool, & socat
+
+These technologies can be disabled or swapped out for technologies of your choice.
+
+Additionally, K3s simplifies Kubernetes operations by maintaining functionality for:
+
+* Managing the TLS certificates of Kubernetes components
+* Managing the connection between worker and server nodes
+* Auto-deploying Kubernetes resources from local manifests in realtime as they are changed
+* Managing an embedded etcd cluster
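+
+For instance, the auto-deploy mechanism watches `/var/lib/rancher/k3s/server/manifests/` on server nodes; a manifest placed there is applied to the cluster and re-applied whenever the file changes. A minimal illustrative manifest (hypothetical file name `hello.yaml`):
+
+```yaml
+# /var/lib/rancher/k3s/server/manifests/hello.yaml (hypothetical)
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: hello
+  namespace: default
+data:
+  greeting: world
+```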
+
+What's with the name?
+--------------------
+
+We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a
+10 letter word stylized as k8s. So something half as big as Kubernetes would be a 5 letter word stylized as
+K3s. A '3' is also an '8' cut in half vertically. There is neither a long-form of K3s nor official pronunciation.
+
+Is this a fork?
+---------------
+
+No, it's a distribution. A fork implies continued divergence from the original. This is not K3s's goal or practice. K3s explicitly intends not to change any core Kubernetes functionality. We seek to remain as close to upstream Kubernetes as possible. However, we maintain a small set of patches (well under 1000 lines) important to K3s's use case and deployment model. We maintain patches for other components as well. When possible, we contribute these changes back to the upstream projects, for example, with [SELinux support in containerd](https://github.com/containerd/cri/pull/1487/commits/24209b91bf361e131478d15cfea1ab05694dc3eb). This is a common practice amongst software distributions.
+
+K3s is a distribution because it packages additional components and services necessary for a fully functional cluster that go beyond vanilla Kubernetes. These are opinionated choices on technologies for components like ingress, storage class, network policy, service load balancer, and even container runtime. These choices and technologies are touched on in more detail in the [What is this?](#what-is-this) section.
+
+How is this lightweight or smaller than upstream Kubernetes?
+---
+
+There are two major ways that K3s is lighter weight than upstream Kubernetes:
+1. The memory footprint to run it is smaller
+2. The binary, which contains all the non-containerized components needed to run a cluster, is smaller
+
+The memory footprint is reduced primarily by running many components inside of a single process. This eliminates significant overhead that would otherwise be duplicated for each component.
+
+The binary is smaller by removing third-party storage drivers and cloud providers, explained in more detail below.
+
+What have you removed from upstream Kubernetes?
+---
+
+This is a common point of confusion because it has changed over time. Early versions of K3s had much more removed than the current version. K3s currently removes two things:
+
+1. In-tree storage drivers
+1. In-tree cloud provider
+
+Both of these have out-of-tree alternatives in the form of [CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md) and [CCM](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/), which work in K3s and which upstream is moving towards.
+
+We remove these to achieve a smaller binary size. They can be removed while remaining conformant because neither affects core Kubernetes functionality. They are also dependent on third-party cloud or data center technologies/services, which may not be available in many K3s use cases.
+
+Getting Started
+---
+- [Quick Install](https://docs.k3s.io/quick-start)
+- [Architecture](https://docs.k3s.io/architecture)
+- [FAQ](https://docs.k3s.io/faq)
+- [Contribute](CONTRIBUTING.md)
+
+Community
+---
+- ### Slack
+
+Join [Slack](https://slack.rancher.io/) to chat with K3s developers and other K3s users. It's a great place to learn and ask questions: the [#k3s](https://rancher-users.slack.com/archives/CGGQEHPPW) and [#k3s-contributor](https://rancher-users.slack.com/archives/CGXR87T8B) channels in Rancher Slack, and the [#k3s](https://cloud-native.slack.com/archives/C0196ULKX8S) channel in [CNCF Slack](https://cloud-native.slack.com).
+
+- ### Getting involved
+[GitHub Issues](https://github.com/k3s-io/k3s/issues) - Submit your issues and feature requests via GitHub.
+
+- ### Community Meetings and Office hours
+The K3s developer community hangs out on Zoom to chat. Everybody is welcome.
+
+**Add the [Linux Foundation iCal](https://webcal.prod.itx.linuxfoundation.org/lfx/a092M00001IkYIjQAN) to your calendar**:
+- AMS/EMEA TZ 10:00 am PST - every *second* Tuesday of the month
+- EMEA/APAC TimeZone friendly - every *third* Tuesday of the month
+
+**Meeting notes and agenda**: https://hackmd.io/@k3s/meet-notes/
+
+**Meeting recordings**: [K3s Channel](https://www.youtube.com/watch?v=HRuJROA6Z3k&list=PLlBG85HKlLE9KFDqJ_K6NOpup-zVw8ANl&pp=gAQB)
+
+You can check also the full details on the website: https://k3s.io/community
+
+
+What's next?
+---
+
+Check out our [roadmap](ROADMAP.md) to see what we have planned moving forward.
+
+Release cadence
+---
+
+K3s maintains pace with upstream Kubernetes releases. Our goal is to release patch releases within one week, and new minors within 30 days.
+
+Our release versioning reflects the version of upstream Kubernetes that is being released. For example, the K3s release [v1.27.4+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.27.4%2Bk3s1) maps to the `v1.27.4` Kubernetes release. We add a postfix in the form of `+k3s` to allow us to make additional releases using the same version of upstream Kubernetes while remaining [semver](https://semver.org/) compliant. For example, if we discovered a high severity bug in `v1.27.4+k3s1` and needed to release an immediate fix for it, we would release `v1.27.4+k3s2`.
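+
+The `+k3s` postfix can be split off with plain shell parameter expansion (illustrative):
+
+```bash
+# Split a K3s version tag into the upstream Kubernetes version and k3s revision
+v="v1.27.4+k3s2"
+upstream="${v%%+*}"   # upstream Kubernetes version
+revision="${v##*+}"   # k3s revision for that upstream version
+echo "$upstream $revision"
+```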
+
+Documentation
+-------------
+
+Please see [the official docs site](https://docs.k3s.io) for complete documentation.
+
+Quick-Start - Install Script
+--------------
+
+The `install.sh` script provides a convenient way to download K3s and add a service to systemd or openrc.
+
+To install k3s as a service, run:
+
+```bash
+curl -sfL https://get.k3s.io | sh -
+```
+
+A kubeconfig file is written to `/etc/rancher/k3s/k3s.yaml` and the service is automatically started or restarted.
+The install script also installs additional utilities, such as `kubectl`, `crictl`, `k3s-killall.sh`, and `k3s-uninstall.sh`. For example:
+
+```bash
+sudo kubectl get nodes
+```
+
+The `K3S_TOKEN` value is created at `/var/lib/rancher/k3s/server/node-token` on your server.
+To install on worker nodes, pass the `K3S_URL` and
+`K3S_TOKEN` environment variables, for example:
+
+```bash
+curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -
+```
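+
+The install script also honors a few environment variables to customize the installation. For example, `INSTALL_K3S_VERSION` pins a specific release and `INSTALL_K3S_EXEC` passes extra flags to the service (both are read by the `get.k3s.io` script; see its header comments for the full list):
+
+```bash
+# Pin the release to install instead of tracking the latest stable channel.
+curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.27.4+k3s1" sh -
+
+# Pass extra flags to the server, e.g. disable the bundled Traefik ingress.
+curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -
+```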
+
+Manual Download
+---------------
+
+1. Download `k3s` from the latest [release](https://github.com/k3s-io/k3s/releases/latest); x86_64, armhf, arm64, and s390x are supported.
+1. Run the server.
+
+```bash
+sudo k3s server &
+# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
+sudo k3s kubectl get nodes
+
+# On a different node run the below. NODE_TOKEN comes from
+# /var/lib/rancher/k3s/server/node-token on your server
+sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
+```
+
+Contributing
+------------
+
+Please check out our [contributing guide](CONTRIBUTING.md) if you're interested in contributing to K3s.
+
+Security
+--------
+
+Security issues in K3s can be reported by sending an email to [security@k3s.io](mailto:security@k3s.io).
+Please do not report security vulnerabilities through public GitHub issues.
diff --git a/data/readmes/k8gb-v0160.md b/data/readmes/k8gb-v0160.md
new file mode 100644
index 0000000..b9a5fa4
--- /dev/null
+++ b/data/readmes/k8gb-v0160.md
@@ -0,0 +1,166 @@
+# k8gb - README (v0.16.0)
+
+**Repository**: https://github.com/k8gb-io/k8gb
+**Version**: v0.16.0
+
+---
+
+
+
+---
+
+**kagent** is a Kubernetes native framework for building AI agents. Kubernetes is the most popular orchestration platform for running workloads, and **kagent** makes it easy to build, deploy and manage AI agents in Kubernetes. The **kagent** framework is designed to be easy to understand and use, and to provide a flexible and powerful way to build and manage AI agents.
+
+
+
+
+---
+
+## Getting started
+
+- [Quick Start](https://kagent.dev/docs/kagent/getting-started/quickstart)
+- [Installation guide](https://kagent.dev/docs/kagent/introduction/installation)
+
+## Technical Details
+
+### Core Concepts
+
+- **Agents**: Agents are the main building block of kagent. An agent combines a system prompt, a set of tools and other agents, and an LLM configuration, and is represented by a Kubernetes custom resource called `Agent`.
+- **LLM Providers**: Kagent supports multiple LLM providers, including [OpenAI](https://kagent.dev/docs/kagent/supported-providers/openai), [Azure OpenAI](https://kagent.dev/docs/kagent/supported-providers/azure-openai), [Anthropic](https://kagent.dev/docs/kagent/supported-providers/anthropic), [Google Vertex AI](https://kagent.dev/docs/kagent/supported-providers/google-vertexai), [Ollama](https://kagent.dev/docs/kagent/supported-providers/ollama) and any other [custom providers and models](https://kagent.dev/docs/kagent/supported-providers/custom-models) accessible via AI gateways. Providers are represented by the ModelConfig resource.
+- **MCP Tools**: Agents can connect to any MCP server that provides tools. Kagent comes with an MCP server with tools for Kubernetes, Istio, Helm, Argo, Prometheus, Grafana, Cilium, and others. All tools are Kubernetes custom resources (ToolServers) and can be used by multiple agents.
+- **Observability**: Kagent supports [OpenTelemetry tracing](https://kagent.dev/docs/kagent/getting-started/tracing), which allows you to monitor what's happening with your agents and tools.
+
+### Core Principles
+
+- **Kubernetes Native**: Kagent is implemented with Kubernetes custom resources, so agents and tools are managed with the same APIs and tooling as the rest of your cluster.
+- **Extensible**: Kagent is designed to be extensible, so you can add your own agents and tools.
+- **Flexible**: Kagent is designed to be flexible, to suit any AI agent use case.
+- **Observable**: Kagent is designed to be observable, so you can monitor the agents and tools using all common monitoring frameworks.
+- **Declarative**: Kagent is designed to be declarative, so you can define the agents and tools in a YAML file.
+- **Testable**: Kagent is designed to be tested and debugged easily. This is especially important for AI agent applications.
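+
+As an illustration of the declarative model, an agent might be defined with a manifest along these lines (a hypothetical sketch only — the field names here are illustrative, not the exact CRD schema; see the installation and quick-start docs for the real API):
+
+```yaml
+apiVersion: kagent.dev/v1alpha1        # illustrative group/version
+kind: Agent
+metadata:
+  name: k8s-helper
+spec:
+  description: Answers questions about the cluster
+  systemMessage: You are a helpful Kubernetes assistant.
+  modelConfig: default-model-config    # references a ModelConfig resource
+  tools:
+    - type: McpServer                  # tools served by an MCP ToolServer
+      mcpServer:
+        toolServer: kagent-tools
+        toolNames:
+          - k8s_get_resources
+```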
+
+### Architecture
+
+The kagent framework is designed to be easy to understand and use, and to provide a flexible and powerful way to build and manage AI agents.
+
+
+
+
+
+Kagent has 4 core components:
+
+- **Controller**: The controller is a Kubernetes controller that watches the kagent custom resources and creates the necessary resources to run the agents.
+- **UI**: The UI is a web UI that allows you to manage the agents and tools.
+- **Engine**: The engine runs your agents using [ADK](https://google.github.io/adk-docs/).
+- **CLI**: The CLI is a command-line tool that allows you to manage the agents and tools.
+
+## Get Involved
+
+_We welcome contributions! Contributors are expected to [respect the kagent Code of Conduct](https://github.com/kagent-dev/community/blob/main/CODE-OF-CONDUCT.md)_
+
+There are many ways to get involved:
+
+- 🐛 [Report bugs and issues](https://github.com/kagent-dev/kagent/issues/)
+- 💡 [Suggest new features](https://github.com/kagent-dev/kagent/issues/)
+- 📖 [Improve documentation](https://github.com/kagent-dev/website/)
+- 🔧 [Submit pull requests](/CONTRIBUTION.md)
+- ⭐ Star the repository
+- 💬 [Help others in Discord](https://discord.gg/Fu3k65f2k3)
+- 💬 [Join the kagent community meetings](https://calendar.google.com/calendar/u/0?cid=Y183OTI0OTdhNGU1N2NiNzVhNzE0Mjg0NWFkMzVkNTVmMTkxYTAwOWVhN2ZiN2E3ZTc5NDA5Yjk5NGJhOTRhMmVhQGdyb3VwLmNhbGVuZGFyLmdvb2dsZS5jb20)
+- 🤝 [Share tips in the CNCF #kagent slack channel](https://cloud-native.slack.com/archives/C08ETST0076)
+- 🔒 [Report security concerns](SECURITY.md)
+
+### Roadmap
+
+`kagent` is currently in active development. You can check out the full roadmap in the project Kanban board [here](https://github.com/orgs/kagent-dev/projects/3).
+
+### Local development
+
+For instructions on how to run everything locally, see the [DEVELOPMENT.md](DEVELOPMENT.md) file.
+
+### Contributors
+
+Thanks to all contributors who are helping to make kagent better.
+
+
+
+
+
+### Star History
+
+
+
+
+
+
+
+
+
+## Reference
+
+### License
+
+This project is licensed under the [Apache 2.0 License.](/LICENSE)
+
+---
+
+
\ No newline at end of file
diff --git a/data/readmes/kairos-v3510.md b/data/readmes/kairos-v3510.md
new file mode 100644
index 0000000..dc0befb
--- /dev/null
+++ b/data/readmes/kairos-v3510.md
@@ -0,0 +1,86 @@
+# Kairos - README (v3.5.10)
+
+**Repository**: https://github.com/kairos-io/kairos
+**Version**: v3.5.10
+
+---
+
+
+
+
+
+
+
+
+
Kairos - Kubernetes-focused, Cloud Native Linux meta-distribution
+
+
+
+
+
+
+
+
+
+ The immutable Linux meta-distribution for edge Kubernetes.
+
+
+
+
+With Kairos you can build immutable, bootable Kubernetes and OS images for your edge devices as easily as writing a Dockerfile. Optional P2P mesh with distributed ledger automates node bootstrapping and coordination. Updating nodes is as easy as CI/CD: push a new image to your container registry and let secure, risk-free A/B atomic upgrades do the rest. Kairos is part of the Secure Edge-Native Architecture (SENA) to securely run workloads at the Edge ([whitepaper](https://github.com/kairos-io/kairos/files/11250843/Secure-Edge-Native-Architecture-white-paper-20240417.3.pdf)).
+
+Kairos (formerly `c3os`) is an open-source project that brings Edge, cloud, and bare metal lifecycle OS management under the same design principles with a unified Cloud Native API.
+
+At-a-glance:
+
+- :bowtie: Community Driven
+- :octocat: Open Source
+- :lock: Linux immutable, meta-distribution
+- :key: Secure
+- :whale: Container-based
+- :penguin: Distribution agnostic
+
+Kairos can be used to:
+
+- Easily spin up a Kubernetes cluster, with the Linux distribution of your choice :penguin:
+- Create your immutable infrastructure: no more infrastructure drift! :lock:
+- Manage the cluster lifecycle with Kubernetes, from building to provisioning and upgrading :rocket:
+- Create a single cluster with multiple nodes that spans across regions :earth_africa:
+
+For comprehensive docs, tutorials, and examples see our [documentation](https://kairos.io/getting-started/).
+
+## Project status
+
+Check the [Roadmap](https://github.com/orgs/kairos-io/projects/2) for a high-level view of what features are coming to Kairos.
+
+Or go to the [Project Board](https://github.com/orgs/kairos-io/projects/1/views/1) to check what the team is working on right now!
+
+To stay up-to-date, check out the [Kairos Blog](https://kairos.io/blog/). You will also find release announcements and deep dives into Kairos features!
+
+## Community
+
+You can find us at:
+
+- [Cloud Native Slack #kairos channel](https://cloud-native.slack.com/archives/C0707M8UEU8)
+- [#kairos-io at matrix.org](https://matrix.to/#/#kairos-io:matrix.org)
+- [IRC #kairos in libera.chat](https://web.libera.chat/#kairos)
+- [GitHub Discussions](https://github.com/kairos-io/kairos/discussions)
+
+The [:handshake: community repository](https://github.com/kairos-io/community) contains information on how to get involved, the Code of Conduct, maintainers, and contribution guidelines, as well as links to our weekly meeting notes, roadmap, and more.
+
+## Governance
+
+The Kairos project governance can be found [in the community repository](https://github.com/kairos-io/community/blob/main/GOVERNANCE.md).
+
+**Note:** Kairos adopts the CNCF Code of conduct - please make sure to read the CNCF [Code of Conduct document](https://github.com/kairos-io/community/blob/main/CODE_OF_CONDUCT.md).
+
+### Project Office Hours
+
+Project Office Hours is an opportunity for attendees to meet the maintainers of the project, learn more about the project, ask questions, and learn about new features and upcoming updates.
+
+[Add to Google Calendar](https://calendar.google.com/calendar/embed?src=c_6d65f26502a5a67c9570bb4c16b622e38d609430bce6ce7fc1d8064f2df09c11%40group.calendar.google.com&ctz=Europe%2FRome)
+
+---
+
+Kairos is a [Cloud Native Computing Foundation (CNCF) Sandbox project](https://www.cncf.io/sandbox-projects/) and was contributed by [Spectrocloud](https://spectrocloud.com).
+
diff --git a/data/readmes/kaito-v080-rc0.md b/data/readmes/kaito-v080-rc0.md
new file mode 100644
index 0000000..a4402fd
--- /dev/null
+++ b/data/readmes/kaito-v080-rc0.md
@@ -0,0 +1,177 @@
+# KAITO - README (v0.8.0-rc.0)
+
+**Repository**: https://github.com/kaito-project/kaito
+**Version**: v0.8.0-rc.0
+
+---
+
+# Kubernetes AI Toolchain Operator (KAITO)
+
+
+[](https://goreportcard.com/report/github.com/kaito-project/kaito)
+
+[](https://codecov.io/gh/kaito-project/kaito)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkaito-project%2Fkaito?ref=badge_shield)
+
+|  What is NEW! |
+| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Mistral 3 models are supported in the latest [release](https://github.com/kaito-project/kaito/releases). Learn about the new models from [here](https://mistral.ai/news/mistral-3)! |
+| Latest Release: Dec 7th, 2025. KAITO v0.8.0-rc.0 |
+| First Release: Nov 15th, 2023. KAITO v0.1.0. |
+
+KAITO is an operator that automates AI/ML model inference and tuning workloads in a Kubernetes cluster.
+The target models are popular open-source large models such as [phi-4](https://huggingface.co/microsoft/phi-4) and [llama](https://huggingface.co/meta-llama).
+KAITO has the following key differentiators compared to most mainstream model deployment methodologies built on virtual machine infrastructure:
+
+- Provides an OpenAI-compatible server to perform inference calls.
+- Provides preset configurations to avoid adjusting workload parameters based on GPU hardware.
+- Supports popular open-source inference runtimes: [vLLM](https://github.com/vllm-project/vllm) and [transformers](https://github.com/huggingface/transformers).
+- Auto-provisions GPU nodes based on model requirements.
+- Autoscales the inference workload based on service monitoring metrics.
+- Leverages local NVMe as the primary storage for model weight files.
+
+Using KAITO, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
+
+## Architecture
+
+KAITO follows the classic Kubernetes Custom Resource Definition (CRD)/controller design pattern. Users manage a `workspace` custom resource that describes the GPU requirements and the inference or tuning specification. KAITO controllers automate the deployment by reconciling the `workspace` custom resource.
+
+
+
+
+The above figure presents the KAITO architecture overview. Its major components consist of:
+
+- **Workspace controller**: It reconciles the `workspace` custom resource, creates `NodeClaim` (explained below) custom resources to trigger node auto provisioning, and creates the inference or tuning workload (`deployment`, `statefulset` or `job`) based on the model preset configurations.
+- **Node provisioner controller**: The controller's name is *gpu-provisioner* in the [gpu-provisioner helm chart](https://github.com/Azure/gpu-provisioner/tree/main/charts/gpu-provisioner). It uses the `NodeClaim` CRD, which originates from [Karpenter](https://sigs.k8s.io/karpenter), to interact with the workspace controller. It integrates with Azure Resource Manager REST APIs to add new GPU nodes to the AKS or AKS Arc cluster.
+> Note: The [*gpu-provisioner*](https://github.com/Azure/gpu-provisioner) is an open-source component. It can be replaced by other controllers if they support the [Karpenter-core](https://sigs.k8s.io/karpenter) APIs.
+
+**NEW!** Starting with version v0.5.0, KAITO releases a new operator, **RAGEngine**, which streamlines the process of managing a Retrieval-Augmented Generation (RAG) service.
+
+
+
+
+As illustrated in the above figure, the **RAGEngine controller** reconciles the `ragengine` custom resource and creates a `RAGService` deployment. The `RAGService` provides the following capabilities:
+ - **Orchestration**: uses the [LlamaIndex](https://github.com/run-llama/llama_index) orchestrator.
+ - **Embedding**: supports both local and remote embedding services to embed queries and documents in the vector database.
+ - **Vector database**: supports a built-in [faiss](https://github.com/facebookresearch/faiss) in-memory vector database. Remote vector database support will be added soon.
+ - **Backend inference**: supports any OpenAI-compatible inference service.
+
+The details of the service APIs can be found in this [document](https://kaito-project.github.io/kaito/docs/rag).
+
+
+## Installation
+
+- **Workspace**: Please check the installation guidance [here](https://kaito-project.github.io/kaito/docs/installation) for deployment using helm and [here](./terraform/README.md) for deployment using Terraform.
+- **RAGEngine**: Please check the installation guidance [here](https://kaito-project.github.io/kaito/docs/rag).
+
+## Workspace quick start
+
+After installing KAITO, one can try the following commands to start a phi-3.5-mini-instruct inference service.
+
+```sh
+$ cat examples/inference/kaito_workspace_phi_3.5-instruct.yaml
+apiVersion: kaito.sh/v1beta1
+kind: Workspace
+metadata:
+ name: workspace-phi-3-5-mini
+resource:
+ instanceType: "Standard_NC24ads_A100_v4"
+ labelSelector:
+ matchLabels:
+ apps: phi-3-5
+inference:
+ preset:
+ name: phi-3.5-mini-instruct
+
+$ kubectl apply -f examples/inference/kaito_workspace_phi_3.5-instruct.yaml
+```
+
+The workspace status can be tracked by running the following command. When the WORKSPACESUCCEEDED column becomes `True`, the model has been deployed successfully.
+
+```sh
+$ kubectl get workspace workspace-phi-3-5-mini
+NAME INSTANCE RESOURCEREADY INFERENCEREADY JOBSTARTED WORKSPACESUCCEEDED AGE
+workspace-phi-3-5-mini Standard_NC24ads_A100_v4 True True True 4h15m
+```
+
+Next, one can find the inference service's cluster IP and use a temporary `curl` pod to test the service endpoint in the cluster.
+
+```sh
+# find service endpoint
+$ kubectl get svc workspace-phi-3-5-mini
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+workspace-phi-3-5-mini ClusterIP 80/TCP,29500/TCP 10m
+$ export CLUSTERIP=$(kubectl get svc workspace-phi-3-5-mini -o jsonpath="{.spec.clusterIPs[0]}")
+
+# find available models
+$ kubectl run -it --rm --restart=Never curl --image=curlimages/curl -- curl -s http://$CLUSTERIP/v1/models | jq
+{
+ "object": "list",
+ "data": [
+ {
+ "id": "phi-3.5-mini-instruct",
+ "object": "model",
+ "created": 1733370094,
+ "owned_by": "vllm",
+ "root": "/workspace/vllm/weights",
+ "parent": null,
+ "max_model_len": 16384
+ }
+ ]
+}
+
+# make an inference call using the model id (phi-3.5-mini-instruct) from previous step
+$ kubectl run -it --rm --restart=Never curl --image=curlimages/curl -- curl -X POST http://$CLUSTERIP/v1/chat/completions \
+ -H "Content-Type: application/json" \
+ -d '{
+ "model": "phi-3.5-mini-instruct",
+ "messages": [{"role": "user", "content": "What is kubernetes?"}],
+ "max_tokens": 50,
+ "temperature": 0
+ }'
+```
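+
+Because the endpoint is OpenAI-compatible, any OpenAI-style client can issue the same call. As a minimal sketch, the request body used by the `curl` example above can be built in plain Python (this only prepares the payload; sending it still requires network reachability to the cluster IP):
+
+```python
+import json
+
+def chat_request(model: str, prompt: str, max_tokens: int = 50, temperature: float = 0.0) -> str:
+    """Build the JSON body for a POST to /v1/chat/completions."""
+    payload = {
+        "model": model,
+        "messages": [{"role": "user", "content": prompt}],
+        "max_tokens": max_tokens,
+        "temperature": temperature,
+    }
+    return json.dumps(payload)
+
+body = chat_request("phi-3.5-mini-instruct", "What is kubernetes?")
+```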
+
+## Usage
+
+The detailed usage of KAITO-supported models can be found [**here**](https://kaito-project.github.io/kaito/docs/presets). In case users want to deploy their own containerized models, they can provide the pod template in the `inference` field of the workspace custom resource (please see [API definitions](./api/v1alpha1/workspace_types.go) for details).
+
+> Note: Currently the controller does **NOT** handle automatic model upgrade. It only creates inference workloads based on the preset configurations if the workloads do not exist.
+
+The number of supported models in KAITO is growing! Please check [this](https://kaito-project.github.io/kaito/docs/preset-onboarding) document to see how to add a new supported model. Refer to the [tuning document](https://kaito-project.github.io/kaito/docs/tuning), [inference document](https://kaito-project.github.io/kaito/docs/inference), [RAGEngine document](https://kaito-project.github.io/kaito/docs/rag) and [FAQ](https://kaito-project.github.io/kaito/docs/faq) for more information.
+
+## Contributing
+
+[Read more](https://kaito-project.github.io/kaito/docs/contributing)
+
+This project welcomes contributions and suggestions. Contributions require you to agree to a
+Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
+the rights to use your contribution. For details, visit [CLAs for CNCF](https://github.com/cncf/cla?tab=readme-ov-file).
+
+When you submit a pull request, a CLA bot will automatically determine whether you need to provide
+a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
+provided by the bot. You will only need to do this once across all repos using our CLA.
+
+This project has adopted the CNCF CLAs; please electronically sign the CLA via
+https://easycla.lfx.linuxfoundation.org. If you encounter issues, you can submit a ticket with the
+Linux Foundation ID group through the [Linux Foundation Support website](https://jira.linuxfoundation.org/plugins/servlet/desk/portal/4/create/143).
+
+## Get Involved!
+
+- Visit [#KAITO channel in CNCF Slack](https://cloud-native.slack.com/archives/C09B4EWCZ5M) to discuss features in development and proposals.
+- We host a weekly community meeting for contributors on Tuesdays at 4:00pm PST. Please join here: [meeting link](https://zoom-lfx.platform.linuxfoundation.org/meeting/99948431028?password=05912bb9-53fb-4b22-a634-ab5f8261e94c).
+- Reference the weekly meeting notes in our [KAITO community calls doc](https://docs.google.com/document/d/1OEC-WUQ2wn0TDQPsU09shMoXn5cW3dSrdu-M43Q79dA/edit?usp=sharing)!
+
+## License
+
+See [Apache License 2.0](LICENSE).
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkaito-project%2Fkaito?ref=badge_large)
+
+## Code of Conduct
+
+KAITO has adopted the [Cloud Native Compute Foundation Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md). For more information see the [KAITO Code of Conduct](CODE_OF_CONDUCT.md).
+
+
+## Contact
+
+- Please send emails to "KAITO devs" for any issues.
diff --git a/data/readmes/kanister-01160.md b/data/readmes/kanister-01160.md
new file mode 100644
index 0000000..7a7ce56
--- /dev/null
+++ b/data/readmes/kanister-01160.md
@@ -0,0 +1,141 @@
+# Kanister - README (0.116.0)
+
+**Repository**: https://github.com/kanisterio/kanister
+**Version**: 0.116.0
+
+---
+
+
+
+# Kanister
+
+[](https://goreportcard.com/report/github.com/kanisterio/kanister)
+[](https://github.com/kanisterio/kanister/actions)
+
+[](https://www.bestpractices.dev/projects/8699)
+[](https://securityscorecards.dev/viewer/?uri=github.com/kanisterio/kanister)
+
+Kanister is a data protection workflow management tool. It provides a set of
+cohesive APIs for defining and curating data operations by abstracting away
+tedious details around executing data operations on Kubernetes. It's extensible
+and easy to install, operate and scale.
+
+## Highlights
+
+✅ _Kubernetes centric_ - Kanister's APIs are implemented as [Custom Resource
+Definitions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
+that conform to Kubernetes' declarative management, security, and distribution
+models.
+
+✅ _Storage agnostic_ - Kanister allows you to efficiently and securely transfer
+backup data between your services and the object storage service of your choice.
+Use Kanister to backup, restore, and copy your data using your storage's APIs,
+and Kanister won't get in the way.
+
+✅ _Asynchronous or synchronous task execution_ - Kanister can schedule your data
+operation to run asynchronously in dedicated job pods, or synchronously via the
+Kubernetes apimachinery `ExecStream` framework.
+
+✅ _Re-usable workflow artifacts_ - A Kanister blueprint can be re-used across
+multiple workflows to protect different environment deployments.
+
+✅ _Extensible, atomic data operation functions_ - Kanister provides a collection
+of easy-to-use
+[data operation functions](https://docs.kanister.io/functions.html) that you can
+add to your blueprint to express detailed backup and restore operation steps,
+including pre-backup scale-down of replicas, working with all mounted volumes
+in a pod, etc.
+
+✅ _Secured via RBAC_ - Prevent unauthorized access to your workflows via Kubernetes
+[role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
+model.
+
+✅ _Observability_ - Kanister exposes logs, events and metrics to popular
+observability tools like Prometheus, Grafana and Loki to provide you with
+operational insights into your data protection workflows.
+
+## Quickstart
+
+Follow the instructions in
+[the installation documentation](https://docs.kanister.io/install.html), to
+install Kanister on your Kubernetes cluster.
+
+Walk through the [tutorial](https://docs.kanister.io/tutorial.html) to define,
+curate and run your first data protection workflow using Kanister blueprints,
+actionsets and profiles.
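+
+To give a flavor of what a blueprint looks like, here is a minimal sketch of a Blueprint whose `backup` action runs a single task in a pod (illustrative only; `KubeTask` and its arguments are described in the [functions documentation](https://docs.kanister.io/functions.html), which should be consulted for the exact schema):
+
+```yaml
+apiVersion: cr.kanister.io/v1alpha1
+kind: Blueprint
+metadata:
+  name: hello-blueprint
+  namespace: kanister
+actions:
+  backup:
+    phases:
+      - func: KubeTask          # run a task in a new pod
+        name: sayHello
+        args:
+          namespace: "{{ .Deployment.Namespace }}"
+          image: busybox
+          command:
+            - echo
+            - hello from kanister
+```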
+
+The [`examples`](./examples) directory contains many sample blueprints that you
+can use to define data operations for:
+
+- [AWS RDS](./examples/aws-rds)
+- [Cassandra](./examples/cassandra)
+- [Couchbase](./examples/couchbase)
+- [Elasticsearch](./examples/elasticsearch)
+- [etcd](./examples/etcd/etcd-in-cluster)
+- [FoundationDB](./examples/foundationdb)
+- [K8ssandra](./examples/k8ssandra)
+- [MongoDB](./examples/mongodb)
+- [MongoDB on OpenShift using DeploymentConfig](./examples/mongodb-deploymentconfig)
+- [MySQL](./examples/mysql)
+- [MySQL on OpenShift using DeploymentConfig](./examples/mysql-deploymentconfig)
+- [PostgreSQL](./examples/postgresql)
+- [PostgreSQL on OpenShift using DeploymentConfig](./examples/postgresql-deploymentconfig)
+- [Redis](./examples/redis)
+
+The Kanister architecture is documented
+[here](https://docs.kanister.io/architecture.html).
+
+## Getting Help
+
+If you have any questions or run into issues, feel free to reach out to us on
+[Slack](https://kanisterio.slack.com).
+
+GitHub issues or pull requests that have been inactive for more than 60 days
+will be labeled as stale. If they remained inactive for another 30 days, they
+will be automatically closed. To be exempted from the issue lifecycle, discuss
+with a [maintainer](MAINTAINERS.md) the reasons behind the exemption, and add the `frozen` label
+to the issue or pull request.
+
+If you discovered any security issues, refer to our [`SECURITY.md`](SECURITY.md)
+documentation for our security policy, including steps on how to report
+vulnerabilities.
+
+## Community
+
+The Kanister community meetings happen once every two weeks on Thursday, 16:00
+UTC, where we discuss ongoing interesting features, issues, and pull requests.
+Come join us! Everyone is welcome! 🙌 (Zoom link is bookmarked on Slack.)
+
+If you are currently using Kanister, we would love to hear about it! Feel free
+to add your organization to the [`ADOPTERS.md`](ADOPTERS.md) by submitting a
+pull request.
+
+## Code of Conduct
+
+Kanister is for everyone. We ask that our users and contributors take a few
+minutes to review our [Code of Conduct](CODE_OF_CONDUCT.md).
+
+## Contributing to Kanister
+
+We welcome contributions to Kanister! If you're interested in getting involved, please take a look at our guidelines:
+
+- **BUILD.md:** Contains detailed instructions on how to build and test Kanister locally or within a CI/CD pipeline. Please refer to this guide if you want to make changes to Kanister's codebase.
+ [Build and Test Instructions](BUILD.md)
+
+- **CONTRIBUTING.md:** Provides essential information on how to contribute code, documentation, or bug reports, as well as our coding style and commit message conventions.
+ [Contribution Guidelines](CONTRIBUTING.md)
+
+## Resources
+
+- [CNCF - Enhancing data protection workflows with Kanister and Argo workflows](https://youtu.be/nqfP1e9jeU4)
+- [CNCF - Kanister: Application-Level Data Protection on Kubernetes](https://youtu.be/GSgFwAHLziA)
+- [CNCF - Integrating Backup Into Your GitOps CI/CD Pipeline](https://www.youtube.com/watch?v=2zik5jDjVvM)
+- [DoK - Kanister & Kopia: An Open-Source Data Protection Match Made in Heaven](https://www.youtube.com/watch?v=hN8sn3A_oEs)
+- [DoK - Kanister: Application-Level Data Operations on Kubernetes](https://www.youtube.com/watch?v=ooJFt0bid1I&t=791s)
+- [Kanister Overview 2021](https://www.youtube.com/watch?v=wFD42Zpbfts&t=1s)
+- [SIG Apps Demo](https://youtu.be/uzIp-CjsX1c?t=82)
+- [Percona Live 2018](https://www.youtube.com/watch?v=dS0kv0k8D_E)
+
+## License
+
+Apache License 2.0, see [LICENSE](https://github.com/kanisterio/kanister/blob/master/LICENSE).
diff --git a/data/readmes/karmada-v1160.md b/data/readmes/karmada-v1160.md
new file mode 100644
index 0000000..4ea4828
--- /dev/null
+++ b/data/readmes/karmada-v1160.md
@@ -0,0 +1,255 @@
+# Karmada - README (v1.16.0)
+
+**Repository**: https://github.com/karmada-io/karmada
+**Version**: v1.16.0
+
+---
+
+# Karmada
+
+
+
+
+[](/LICENSE)
+[](https://github.com/karmada-io/karmada/releases/latest)
+[](https://slack.cncf.io)
+[](https://bestpractices.coreinfrastructure.org/projects/5301)
+[](https://securityscorecards.dev/viewer/?uri=github.com/karmada-io/karmada)
+
+[](https://goreportcard.com/report/github.com/karmada-io/karmada)
+[](https://codecov.io/gh/karmada-io/karmada)
+[](https://app.fossa.com/projects/custom%2B28176%2Fgithub.com%2Fkarmada-io%2Fkarmada?ref=badge_shield)
+[](https://artifacthub.io/packages/krew/krew-index/karmada)
+[](https://clomonitor.io/projects/cncf/karmada)
+
+## Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
+
+Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to your applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.
+
+Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios,
+with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.
+
+
+
+Karmada is an incubation project of the [Cloud Native Computing Foundation](https://cncf.io/) (CNCF).
+
+## Why Karmada:
+- __K8s Native API Compatible__
+ - Zero change upgrade, from single-cluster to multi-cluster
+ - Seamless integration of existing K8s tool chain
+
+- __Out of the Box__
+ - Built-in policy sets for scenarios, including: Active-active, Remote DR, Geo Redundant, etc.
+ - Cross-cluster application auto-scaling, failover, and load-balancing across clusters.
+
+- __Avoid Vendor Lock-in__
+ - Integration with mainstream cloud providers
+ - Automatic allocation, migration across clusters
+ - Not tied to proprietary vendor orchestration
+
+- __Centralized Management__
+ - Location agnostic cluster management
+ - Support clusters in Public cloud, on-prem or edge
+
+- __Fruitful Multi-Cluster Scheduling Policies__
+ - Cluster Affinity, Multi Cluster Splitting/Rebalancing
+ - Multi-Dimension HA: Region/AZ/Cluster/Provider
+
+- __Open and Neutral__
+ - Jointly initiated by Internet, finance, manufacturing, telecom, and cloud providers, among others
+ - Targeting open governance with CNCF
+
+
+
+**Notice: this project is developed in continuation of Kubernetes [Federation v1](https://github.com/kubernetes-retired/federation) and [v2](https://github.com/kubernetes-sigs/kubefed). Some basic concepts are inherited from these two versions.**
+
+
+## Architecture
+
+
+
+The Karmada Control Plane consists of the following components:
+
+- Karmada API Server
+- Karmada Controller Manager
+- Karmada Scheduler
+
+etcd stores the Karmada API objects, the API Server is the REST endpoint all other components talk to, and the Karmada Controller Manager performs operations based on the API objects you create through the API server.
+
+The Karmada Controller Manager runs the various controllers; the controllers watch Karmada objects and then talk to the underlying clusters' API servers to create regular Kubernetes resources.
+
+1. Cluster Controller: attaches Kubernetes clusters to Karmada and manages the lifecycle of the clusters by creating cluster objects.
+2. Policy Controller: watches PropagationPolicy objects. When a PropagationPolicy object is added, it selects a group of resources matching the resourceSelector and creates a ResourceBinding for each resource object.
+3. Binding Controller: watches ResourceBinding objects and creates a Work object for each cluster, containing a single resource manifest.
+4. Execution Controller: watches Work objects. When Work objects are created, it distributes the resources to member clusters.
+
+
+## Concepts
+
+**Resource template**: Karmada uses the Kubernetes native API definition as the federated resource template, making it easy to integrate with existing tools that are already built on Kubernetes.
+
+**Propagation Policy**: Karmada offers a standalone Propagation(placement) Policy API to define multi-cluster scheduling and spreading requirements.
+- Supports 1:n mapping of policy to workloads, so users don't need to specify scheduling constraints every time they create federated applications.
+- With default policies, users can interact directly with the K8s API.
+
+**Override Policy**: Karmada provides a standalone Override Policy API for specializing cluster-relevant configuration automation, e.g.:
+- Override image prefix according to member cluster region
+- Override StorageClass according to cloud provider
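+
+For example, an override that rewrites the image registry for workloads sent to one member cluster might be sketched as follows (assuming the `policy.karmada.io/v1alpha1` API; the registry value and policy name are illustrative):
+
+```yaml
+# Illustrative sketch: replace the image registry for workloads sent to member1.
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+  name: nginx-override
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+  overrideRules:
+    - targetCluster:
+        clusterNames:
+          - member1
+      overriders:
+        imageOverrider:
+          - component: Registry
+            operator: replace
+            value: registry.member1.example.com   # hypothetical registry
+```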
+
+
+The following diagram shows how Karmada resources are involved when propagating resources to member clusters.
+
+
+
+## Quick Start
+
+This guide covers:
+- Installing the `karmada` control plane components in a Kubernetes cluster, known as the `host cluster`.
+- Joining a member cluster to the `karmada` control plane.
+- Propagating an application with `karmada`.
+
+### Prerequisites
+- [Go](https://golang.org/) version follows [go.mod](https://github.com/karmada-io/karmada/blob/master/go.mod#L3)
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) version v1.19+
+- [kind](https://kind.sigs.k8s.io/) version v0.14.0+
+
+### Install the Karmada control plane
+
+#### 1. Clone this repo to your machine:
+
+```bash
+git clone https://github.com/karmada-io/karmada
+```
+
+#### 2. Change to the karmada directory:
+
+```bash
+cd karmada
+```
+
+#### 3. Deploy and run Karmada control plane:
+
+Run the following script:
+
+```bash
+hack/local-up-karmada.sh
+```
+This script performs the following tasks for you:
+- Starts a Kubernetes cluster to run the Karmada control plane, i.e. the `host cluster`.
+- Builds the Karmada control plane components from the current codebase.
+- Deploys the Karmada control plane components on the `host cluster`.
+- Creates member clusters and joins them to Karmada.
+
+If everything goes well, at the end of the script output you will see messages similar to the following:
+
+```bash
+Local Karmada is running.
+
+To start using your Karmada environment, run:
+ export KUBECONFIG="$HOME/.kube/karmada.config"
+Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.
+
+To manage your member clusters, run:
+ export KUBECONFIG="$HOME/.kube/members.config"
+Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.
+```
+
+There are two contexts in Karmada:
+- karmada-apiserver `kubectl config use-context karmada-apiserver`
+- karmada-host `kubectl config use-context karmada-host`
+
+The `karmada-apiserver` context is the **main kubeconfig** to use when interacting with the Karmada control plane, while `karmada-host` is only used for debugging the Karmada installation on the host cluster. You can check all clusters at any time by running `kubectl config view`. To switch cluster contexts, run `kubectl config use-context [CONTEXT_NAME]`.
+
+
+### Demo
+
+
+
+### Propagate application
+In the following steps, we are going to propagate a deployment with Karmada.
+
+#### 1. Create nginx deployment in Karmada.
+First, create a [deployment](samples/nginx/deployment.yaml) named `nginx`:
+
+```bash
+kubectl create -f samples/nginx/deployment.yaml
+```
+
+#### 2. Create PropagationPolicy that will propagate nginx to member cluster
+Then, we need to create a policy to propagate the deployment to our member cluster.
+
+```bash
+kubectl create -f samples/nginx/propagationpolicy.yaml
+```
+
+#### 3. Check the deployment status from Karmada
+You can check the deployment status from Karmada without accessing the member clusters:
+
+```bash
+$ kubectl get deployment
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx 2/2 2 2 20s
+```
+
+## Kubernetes compatibility
+
+Karmada is compatible with a wide range of Kubernetes versions. For detailed compatibility instructions,
+please refer to the [Kubernetes Compatibility](https://karmada.io/docs/administrator/compatibility/).
+
+The following table shows the compatibility test results against the latest 10 Kubernetes versions:
+
+| | Kubernetes 1.34 | Kubernetes 1.33 | Kubernetes 1.32 | Kubernetes 1.31 | Kubernetes 1.30 | Kubernetes 1.29 | Kubernetes 1.28 | Kubernetes 1.27 | Kubernetes 1.26 | Kubernetes 1.25 | Kubernetes 1.24 | Kubernetes 1.23 |
+|-----------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
+| Karmada v1.13 | | | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Karmada v1.14 | | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Karmada v1.15 | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
+| Karmada HEAD (master) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | |
+
+Key:
+* `✓` Karmada and the Kubernetes version are exactly compatible.
+* `+` Karmada has features or API objects that may not be present in the Kubernetes version.
+* `-` The Kubernetes version has features or API objects that Karmada can't use.
+
+## Meeting
+
+Regular Community Meeting:
+* Tuesday at 14:30 UTC+8 (Chinese)(biweekly). [Convert to your timezone](https://dateful.com/convert/utc8?t=1430).
+* Tuesday at 15:00 UTC+0 (English)(biweekly). [Convert to your timezone](https://dateful.com/convert/coordinated-universal-time-utc?t=15).
+
+Resources:
+- [Meeting Notes and Agenda](https://docs.google.com/document/d/1y6YLVC-v7cmVAdbjedoyR5WL0-q45DBRXTvz5_I7bkA/edit)
+- [Meeting Calendar](https://calendar.google.com/calendar/embed?src=karmadaoss%40gmail.com&ctz=Asia%2FShanghai) | [Subscribe](https://calendar.google.com/calendar/u/1?cid=a2FybWFkYW9zc0BnbWFpbC5jb20)
+- [Meeting Link](https://zoom.com/my/karmada)
+
+## Contact
+
+If you have questions, feel free to reach out to us in the following ways:
+
+- [mailing list](https://groups.google.com/forum/#!forum/karmada)
+- [slack](https://cloud-native.slack.com/archives/C02MUF8QXUN) | [Join](https://slack.cncf.io/)
+- [twitter](https://twitter.com/karmada_io)
+
+## Talks and References
+
+| | Link |
+|------------------|-------------------------------------------------------------------------------------------------------------------------|
+| KubeCon(EU 2021) | [Beyond federation: automating multi-cloud workloads with K8s native APIs](https://www.youtube.com/watch?v=LJJoaGszBVk) |
+| KubeCon(EU 2022) | [Sailing Multi Cloud Traffic Management With Karmada](https://youtu.be/rzFbxeZQHWI) |
+| KubeDay(Israel 2023)| [Simplifying Multi-cluster Kubernetes Management with Karmada](https://www.youtube.com/watch?v=WCrIhRNBZ9I) |
+| KubeCon(China 2023) | [Multi-Cloud Multi-Cluster HPA Helps Trip.com Group Deal with Business Downturn and Rapid Recovery](https://www.youtube.com/watch?v=uninSyVBKO4) |
+| KubeCon(China 2023) | [Break Through Cluster Boundaries to Autoscale Workloads Across Them on a Large Scale](https://www.youtube.com/watch?v=22W1yrEJjtQ) |
+| KubeCon(China 2023) | [Cross-Cluster Traffic Orchestration with eBPF](https://www.youtube.com/watch?v=e4GA5e-C7n0) |
+| KubeCon(China 2023) | [Non-Intrusively Enable OpenKruise and Argo Workflow in a Multi-Cluster Federation](https://www.youtube.com/watch?v=gcllTXRkz-E) |
+| KubeCon(NA 2025) | [Maximizing Global Potential: Cost-Optimized, High-Availability Workloads Across Regions](https://www.youtube.com/watch?v=VrXt_T8DkIo) |
+
+For blogs, please refer to [website](https://karmada.io/blog/).
+
+## Contributing
+
+If you're interested in being a contributor and want to get involved in
+developing the Karmada code, please see [CONTRIBUTING](CONTRIBUTING.md) for
+details on submitting patches and the contribution workflow.
+
+## License
+
+Karmada is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
diff --git a/data/readmes/kcl-v0112.md b/data/readmes/kcl-v0112.md
new file mode 100644
index 0000000..a9b039c
--- /dev/null
+++ b/data/readmes/kcl-v0112.md
@@ -0,0 +1,95 @@
+# KCL - README (v0.11.2)
+
+**Repository**: https://github.com/kcl-lang/kcl
+**Version**: v0.11.2
+
+---
+
+
+# KCL: Constraint-based Record & Functional Language
+
+## Introduction
+
+KCL is an open-source, constraint-based record and functional language that enhances the writing of complex configurations, including those for cloud-native scenarios. With its advanced programming language technology and practices, KCL is dedicated to promoting better modularity, scalability, and stability for configurations. It enables simpler logic writing and offers easy-to-use automation APIs and integration with homegrown systems.
+
+
+
+
+
+## What is it for?
+
+You can use KCL to
+
++ [Generate low-level static configuration data](https://kcl-lang.io/docs/user_docs/guides/configuration) such as JSON, YAML, etc., or [integrate with existing data](https://kcl-lang.io/docs/user_docs/guides/data-integration).
++ Reduce boilerplate in configuration data with the [schema modeling](https://kcl-lang.io/docs/user_docs/guides/schema-definition).
++ Define schemas with [rule constraints for configuration data and validate](https://kcl-lang.io/docs/user_docs/guides/validation) them automatically.
++ Organize, simplify, unify and manage large configurations without side effects through [gradient automation schemes and GitOps](https://kcl-lang.io/docs/user_docs/guides/automation).
++ Manage large configurations in a scalable way for different environments with [isolated configuration blocks](https://kcl-lang.io/docs/reference/lang/tour#config-operations).
++ Mutate or validate Kubernetes resources with [cloud-native configuration tool plugins](https://www.kcl-lang.io/docs/user_docs/guides/working-with-k8s/mutate-manifests/kubectl-kcl-plugin).
++ Use it as a platform engineering programming language to deliver modern applications with [KusionStack](https://kusionstack.io).
+
+## Features
+
++ **Easy-to-use**: Rooted in high-level languages such as Python and Golang, incorporating functional language features with low side effects.
++ **Well-designed**: Independent spec-driven syntax, semantics, runtime and system modules design.
++ **Quick modeling**: [Out-of-the-box modules](https://artifacthub.io/packages/search?org=kcl&sort=relevance&page=1) and [Schema](https://kcl-lang.io/docs/reference/lang/tour#schema)-centric configuration types and modular abstraction.
++ **Rich capabilities**: Configuration with type, logic and policy based on [Config](https://kcl-lang.io/docs/reference/lang/tour#config-operations), [Schema](https://kcl-lang.io/docs/reference/lang/tour#schema), [Lambda](https://kcl-lang.io/docs/reference/lang/tour#function), [Rule](https://kcl-lang.io/docs/reference/lang/tour#rule).
++ **Stability**: Configuration stability is achieved through a [static type system](https://kcl-lang.io/docs/reference/lang/tour/#type-system), [constraints](https://kcl-lang.io/docs/reference/lang/tour/#validation), and [rules](https://kcl-lang.io/docs/reference/lang/tour#rule).
++ **Scalability**: High scalability is assured with an [automatic merge mechanism](https://kcl-lang.io/docs/reference/lang/tour/#-operators-1) of isolated config blocks.
++ **Fast automation**: Gradient automation scheme of [CRUD APIs](https://kcl-lang.io/docs/reference/lang/tour/#kcl-cli-variable-override), [multilingual SDKs](https://kcl-lang.io/docs/reference/xlang-api/overview), and [language plugins](https://github.com/kcl-lang/kcl-plugin).
++ **High performance**: High compile-time and runtime performance using Rust & C, and support compilation to native code and [WASM](https://webassembly.org/).
++ **API affinity**: Native support for ecological API specifications such as [OpenAPI](https://github.com/kcl-lang/kcl-openapi), Kubernetes CRD, Kubernetes Resource Model (KRM) spec.
++ **Developer-friendly**: Friendly development experiences with rich [language tools](https://kcl-lang.io/docs/tools/cli/kcl/) (Format, Lint, Test, Vet, Doc, package management tools etc.), and multiple [IDE extensions](https://kcl-lang.io/docs/tools/Ide/).
++ **Safe & maintainable**: Domain-oriented; no system-level functions such as native threads and I/O; low noise and security risk; easy maintenance and governance.
++ **Rich multi-language SDKs**: Rust, Go, Python, .NET, Java, and Node.js SDKs to meet the needs of different scenarios and applications.
++ **Integrations**: Abstract, mutate and validate manifests through [Kubectl KCL Plugin](https://github.com/kcl-lang/kubectl-kcl), [Kustomize KCL Plugin](https://github.com/kcl-lang/kustomize-kcl), [Helm KCL Plugin](https://github.com/kcl-lang/helm-kcl), [KPT KCL SDK](https://github.com/kcl-lang/kpt-kcl) or [Crossplane KCL Function](https://github.com/kcl-lang/crossplane-kcl).
++ **Production-ready**: Widely used in production practices of platform engineering and automation at Ant Group.
+
+## How to choose?
+
+A detailed feature and scenario comparison is [here](https://kcl-lang.io/docs/user_docs/getting-started/intro).
+
+## Installation
+
+For more information about installation, please check the [Installation Guide](https://kcl-lang.io/docs/user_docs/getting-started/install/) on the KCL official website.
+
+## Documentation
+
+Detailed documentation is available at the [KCL Website](https://kcl-lang.io/).
+
+There is also a very nice [Devin](https://devin.ai/)-generated document available at [**deepwiki**](https://deepwiki.com/kcl-lang/kcl).
+
+## Contributing
+
+See [Developing Guide](./docs/dev_guide/1.about_this_guide.md). You can also get started by opening the project in GitHub Codespaces.
+
+[](https://codespaces.new/kcl-lang/kcl)
+
+## Roadmap
+
+See [KCL Roadmap](https://github.com/kcl-lang/kcl/issues/882).
+
+## Community
+
+See the [community](https://github.com/kcl-lang/community) for ways to join us.
+
+## License
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkcl-lang%2Fkcl?ref=badge_large)
diff --git a/data/readmes/kcp-v0290.md b/data/readmes/kcp-v0290.md
new file mode 100644
index 0000000..0d7e921
--- /dev/null
+++ b/data/readmes/kcp-v0290.md
@@ -0,0 +1,91 @@
+# kcp - README (v0.29.0)
+
+**Repository**: https://github.com/kcp-dev/kcp
+**Version**: v0.29.0
+
+---
+
+# kcp
+
+[](https://www.bestpractices.dev/projects/8119)
+[](https://goreportcard.com/report/github.com/kcp-dev/kcp)
+[](https://github.com/kcp-dev/kcp/blob/main/LICENSE)
+[](https://github.com/kcp-dev/kcp/releases/latest)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkcp-dev%2Fkcp?ref=badge_shield)
+
+## Overview
+
+kcp is a Kubernetes-like control plane focusing on:
+
+- A **control plane** for many independent, **isolated** “clusters” known as **workspaces**
+- Enabling API service providers to **offer APIs centrally** using **multi-tenant operators**
+- Easy **API consumption** for users in their workspaces
+
+kcp can be a building block for SaaS service providers who need a **massively multi-tenant platform** to offer services
+to a large number of fully isolated tenants using Kubernetes-native APIs. The goal is to be useful to cloud
+providers as well as enterprise IT departments offering APIs within their company.
+
+**NB:** In May 2023, the kcp project was restructured, and components related to workload scheduling (e.g. the syncer) and the transparent multi-cluster (TMC) code were removed due to a lack of interest/maintainers. Please refer to the [`main-pre-tmc-removal` branch](https://github.com/kcp-dev/kcp/tree/main-pre-tmc-removal) if you are interested in the related code.
+
+## Getting Started
+
+**For Users:** Follow our [Setup & Quick Start](https://docs.kcp.io/kcp/main/setup/quickstart/) guide to download and run kcp.
+
+**For Developers:** To build and run kcp from source:
+```bash
+# Build and run from source
+go run ./cmd/kcp start
+
+# In another terminal:
+export KUBECONFIG=.kcp/admin.kubeconfig && kubectl get workspaces
+```
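+
+Workspaces are themselves ordinary API objects. As a sketch (assuming the `tenancy.kcp.io/v1alpha1` API group used by recent kcp releases; the workspace name is illustrative), one could be created declaratively like this:
+
+```yaml
+# Illustrative sketch: a workspace for one team.
+apiVersion: tenancy.kcp.io/v1alpha1
+kind: Workspace
+metadata:
+  name: my-team
+spec:
+  type:
+    name: universal   # a general-purpose workspace type
+```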
+
+**Learn More:**
+- [Concepts & Tenancy Guide](https://docs.kcp.io/kcp/main/concepts/quickstart-tenancy-and-apis/) - Learn about workspaces, workspace types, and APIs
+
+## Documentation
+
+Please visit [docs.kcp.io/kcp](https://docs.kcp.io/kcp/latest) for our documentation.
+
+## Contributing
+
+We ❤️ our contributors! If you're interested in helping us out, please check out [contributing to kcp](https://docs.kcp.io/kcp/main/contributing/).
+
+This community has a [Code of Conduct](./code-of-conduct.md). Please make sure to follow it.
+
+## Getting in touch
+
+There are several ways to communicate with us:
+
+- On the [Kubernetes Slack workspace](https://slack.k8s.io).
+ - [`#kcp-users`](https://app.slack.com/client/T09NY5SBT/C021U8WSAFK) for discussions and questions regarding kcp's setup and usage.
+ - [`#kcp-dev`](https://kubernetes.slack.com/archives/C09C7UP1VLM) for conversations about developing kcp itself.
+- Our mailing lists.
+ - [kcp-users](https://groups.google.com/g/kcp-users) for discussions among users and potential users.
+ - [kcp-dev](https://groups.google.com/g/kcp-dev) for development discussions.
+- The bi-weekly community meetings.
+ - By joining the kcp-dev mailing list, you should receive an invite to our bi-weekly community meetings.
+ - The next community meeting dates are also available via our [CNCF community group](https://community.cncf.io/kcp/).
+ - Check the [community meeting notes document](https://docs.google.com/document/d/1PrEhbmq1WfxFv1fTikDBZzXEIJkUWVHdqDFxaY1Ply4) for future and past meeting agendas.
+ - See recordings of past community meetings on [YouTube](https://www.youtube.com/channel/UCfP_yS5uYix0ppSbm2ltS5Q).
+- Browse the [shared Google Drive](https://drive.google.com/drive/folders/1FN7AZ_Q1CQor6eK0gpuKwdGFNwYI517M?usp=sharing) to share design docs, notes, etc.
+ - Members of the kcp-dev mailing list can view this drive.
+
+## Additional references
+
+- [CNCF CNL: kcp – Kubernetes-Like Control Planes for Declarative APIs - Marvin Beckers, Simon Bein](https://www.youtube.com/watch?v=OVjTKxPc92Y)
+- [ContainerDays 2025: Next Generation of Platform Engineering Using kcp and Crossplane - Lovro Sviben, Simon Bein](https://www.youtube.com/watch?v=JM1RnNYnuWg)
+- [Platform Engineering Day Europe 2024: Building a Platform Engineering API Layer with kcp – Marvin Beckers](https://www.youtube.com/watch?v=az5Rm8Snms4)
+- [KubeCon EU 2024: Why Kubernetes Is Inappropriate for Platforms, and How to Make It Better – Stefan Schimanski, Mangirdas Judeikis, Sebastian Scheele](https://www.youtube.com/watch?v=7op_r9R0fCo)
+- [KubeCon EU 2024: Kubernetes-style APIs for SaaS-like Control Planes with kcp – Marvin Beckers, Mangirdas Judeikis](https://www.youtube.com/watch?v=-P1kUo5zZR4)
+- [KubeCon US 2022: Kcp: Towards 1,000,000 Clusters, Name^WWorkspaced CRDs - Stefan Schimanski](https://www.youtube.com/watch?v=fGv5dpQ8X5I)
+- [Rejekts US 2022: What if namespaces provided more isolation than just names? – Stefan Schimanski](https://www.youtube.com/watch?v=WGrPUyx7qQE)
+- [Let's Learn kcp - A minimal Kubernetes API server with Saiyam Pathak - July 7, 2021](https://www.youtube.com/watch?v=M4mn_LlCyzk)
+- [TGI Kubernetes 157: Exploring kcp: apiserver without Kubernetes](https://youtu.be/FD_kY3Ey2pI)
+- [K8s SIG Architecture meeting discussing kcp - June 29, 2021](https://www.youtube.com/watch?v=YrdAYoo-UQQ)
+- [OpenShift Commons: Kubernetes as the Control Plane for the Hybrid Cloud - Clayton Coleman](https://www.youtube.com/watch?v=Y3Y11Aj_01I)
+- [KubeCon EU 2021: Kubernetes as the Hybrid Cloud Control Plane Keynote - Clayton Coleman](https://www.youtube.com/watch?v=oaPBYUfdFE8)
+
+
+## License
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkcp-dev%2Fkcp?ref=badge_large)
diff --git a/data/readmes/keda-v2182.md b/data/readmes/keda-v2182.md
new file mode 100644
index 0000000..d7521f5
--- /dev/null
+++ b/data/readmes/keda-v2182.md
@@ -0,0 +1,102 @@
+# KEDA - README (v2.18.2)
+
+**Repository**: https://github.com/kedacore/keda
+**Version**: v2.18.2
+
+---
+
+
+
+# Kubernetes-based Event Driven Autoscaling
+
+
+
+
+
+
+
+
+
+KEDA allows for fine-grained autoscaling (including to/from zero) for event-driven Kubernetes workloads. KEDA serves
+as a Kubernetes Metrics Server and allows users to define autoscaling rules using a dedicated Kubernetes custom
+resource definition.
+
+KEDA can run on both the cloud and the edge, integrates natively with Kubernetes components such as the Horizontal
+Pod Autoscaler, and has no external dependencies.
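+
+That custom resource is the `ScaledObject`. As a minimal sketch (trigger metadata varies per scaler; the workload and queue names here are illustrative):
+
+```yaml
+# Illustrative sketch: scale a consumer Deployment on RabbitMQ queue length.
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: rabbitmq-consumer
+spec:
+  scaleTargetRef:
+    name: rabbitmq-consumer   # the Deployment to scale
+  minReplicaCount: 0          # scale to zero when the queue is empty
+  maxReplicaCount: 30
+  triggers:
+    - type: rabbitmq
+      metadata:
+        queueName: hello
+        mode: QueueLength
+        value: "5"            # target messages per replica
+```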
+
+We are a Cloud Native Computing Foundation (CNCF) graduated project.
+
+
+
+
+**Table of contents**
+
+- [Getting started](#getting-started)
+ - [Deploying KEDA](#deploying-keda)
+- [Documentation](#documentation)
+- [Community](#community)
+- [Adopters - Become a listed KEDA user!](#adopters---become-a-listed-keda-user)
+- [Governance & Policies](#governance--policies)
+- [Support](#support)
+- [Roadmap](#roadmap)
+- [Releases](#releases)
+- [Contributing](#contributing)
+ - [Building & deploying locally](#building--deploying-locally)
+ - [Testing strategy](#testing-strategy)
+
+
+
+## Getting started
+
+* [QuickStart - RabbitMQ and Go](https://github.com/kedacore/sample-go-rabbitmq)
+* [QuickStart - Azure Functions and Queues](https://github.com/kedacore/sample-hello-world-azure-functions)
+* [QuickStart - Azure Functions and Kafka on Openshift 4](https://github.com/kedacore/sample-azure-functions-on-ocp4)
+* [QuickStart - Azure Storage Queue with ScaledJob](https://github.com/kedacore/sample-go-storage-queue)
+
+You can find several samples for various event sources [here](https://github.com/kedacore/samples).
+
+### Deploying KEDA
+
+There are many ways to [deploy KEDA including Helm, Operator Hub and YAML files](https://keda.sh/docs/latest/deploy/).
+
+## Documentation
+
+Interested to learn more? Head over to [keda.sh](https://keda.sh).
+
+## Community
+
+If interested in contributing or participating in the direction of KEDA, you can join our community meetings! Learn more about them on [our website](https://keda.sh/community/).
+
+Just want to learn or chat about KEDA? Feel free to join the conversation in
+**[#KEDA](https://kubernetes.slack.com/messages/CKZJ36A5D)** on the **[Kubernetes Slack](https://slack.k8s.io/)**!
+
+## Adopters - Become a listed KEDA user!
+
+We are always happy to [list users](https://keda.sh/community/#users) who run KEDA in production, learn more about it [here](https://github.com/kedacore/keda-docs#become-a-listed-keda-user).
+
+## Governance & Policies
+
+You can learn about the governance of KEDA [here](https://github.com/kedacore/governance).
+
+## Support
+
+Details on the KEDA support policy can be found [here](https://keda.sh/support/).
+
+## Roadmap
+
+We use GitHub issues to build our backlog; they provide a complete overview of all open items and our planning.
+
+Learn more about our roadmap [here](ROADMAP.md).
+
+## Releases
+
+You can find the latest releases [here](https://github.com/kedacore/keda/releases).
+
+## Contributing
+
+You can find the contributing guide [here](./CONTRIBUTING.md).
+
+### Building & deploying locally
+Learn how to build & deploy KEDA locally [here](./BUILD.md).
+
+### Testing strategy
+Learn more about our testing strategy [here](./TESTING.md).
diff --git a/data/readmes/keptn-metrics-operator-v210.md b/data/readmes/keptn-metrics-operator-v210.md
new file mode 100644
index 0000000..16b91e8
--- /dev/null
+++ b/data/readmes/keptn-metrics-operator-v210.md
@@ -0,0 +1,162 @@
+# Keptn - README (metrics-operator-v2.1.0)
+
+**Repository**: https://github.com/keptn/lifecycle-toolkit
+**Version**: metrics-operator-v2.1.0
+
+---
+
+# Keptn
+
+
+
+
+
+[](https://github.com/keptn/lifecycle-toolkit/discussions)
+[](https://artifacthub.io/packages/helm/lifecycle-toolkit/keptn)
+[](https://www.bestpractices.dev/projects/3588)
+[](https://securityscorecards.dev/viewer/?uri=github.com/keptn/lifecycle-toolkit)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkeptn%2Flifecycle-toolkit?ref=badge_shield&issueType=license)
+[](https://clomonitor.io/projects/cncf/keptn)
+
+This is the primary repository for
+the Keptn software and documentation.
+Keptn provides a “cloud-native” approach
+for managing the application release lifecycle:
+metrics, observability, health checks,
+and pre- and post-deployment evaluations and tasks.
+It is an incubating project, under the umbrella of the
+[Keptn Application Lifecycle working group](https://github.com/keptn/wg-app-lifecycle).
+
+> **Note** Keptn was developed under the code name of
+ "Keptn Lifecycle Toolkit" or "KLT" for short.
+ The source code contains many vestiges of these names.
+
+## Goals
+
+Keptn provides Cloud Native teams with the following capabilities:
+
+- Pre-requisite evaluation before deploying workloads and applications
+- Checking the Application Health in a declarative (cloud-native) way
+- Standardized way to run pre- and post-deployment tasks
+- Provide out-of-the-box Observability
+- Deployment lifecycle management
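+
+As an illustration of the standardized pre-/post-deployment tasks, a task definition might be sketched as follows (assuming the `lifecycle.keptn.sh/v1` API; runner and field names may differ between releases):
+
+```yaml
+# Illustrative sketch: a Deno-based pre-deployment check.
+apiVersion: lifecycle.keptn.sh/v1
+kind: KeptnTaskDefinition
+metadata:
+  name: pre-deployment-check
+spec:
+  deno:
+    inline:
+      code: |
+        console.log("checking deployment prerequisites...");
+```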
+
+
+
+Keptn can be seen as a general purpose and declarative
+[Level 3 operator](https://operatorframework.io/operator-capabilities/)
+for your Application.
+For this reason, Keptn is agnostic to the deployment
+tools in use and works with any GitOps solution.
+
+For more information about the core concepts of Keptn, see
+our core concepts
+[documentation section](https://keptn.sh/stable/docs/core-concepts/).
+
+## Status
+
+Status of the different features:
+
+- 
+ Observability: expose [OTel](https://opentelemetry.io/) metrics and traces of your deployment.
+- 
+ K8s Custom Metrics: expose your Observability platform via the [Custom Metric API](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/custom-metrics-api.md).
+- 
+ Release lifecycle: handle pre- and post-checks of your Application deployment.
+- 
+ Certificate Manager: automatically configure TLS certificates with the
+ [Keptn Certificate Manager](https://keptn.sh/stable/docs/components/certificate-operator/).
+ You can instead
+ [configure your own certificate manager](https://keptn.sh/stable/docs/installation/configuration/cert-manager/) to provide
+ [secure communication with the Kube API](https://kubernetes.io/docs/concepts/security/controlling-access/#transport-security).
+
+
+The status follows the
+[Kubernetes API versioning schema](https://kubernetes.io/docs/reference/using-api/#api-versioning).
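+
+For the custom-metrics capability above, metrics are declared through a `KeptnMetric` resource. A sketch (assuming the `metrics.keptn.sh/v1` API; the provider name and query are illustrative):
+
+```yaml
+# Illustrative sketch: expose a Prometheus query as a Kubernetes metric.
+apiVersion: metrics.keptn.sh/v1
+kind: KeptnMetric
+metadata:
+  name: cpu-usage
+spec:
+  provider:
+    name: my-prometheus        # a KeptnMetricsProvider defined elsewhere
+  query: "avg(rate(container_cpu_usage_seconds_total[5m]))"
+  fetchIntervalSeconds: 30
+```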
+
+## Community
+
+Find details on regular hosted community events in the [keptn/community repo](https://github.com/keptn/community)
+and our Slack channel(s) in the [CNCF Slack workspace.](https://cloud-native.slack.com/messages/keptn/)
+
+## Roadmap
+
+You can find our roadmap [here](https://github.com/orgs/keptn/projects/10).
+
+## Governance
+
+- [Community Membership](https://github.com/keptn/community/blob/main/community-membership.md):
+ Guidelines for community engagement, contribution expectations,
+ and the process for becoming a community member at different levels.
+
+- [Members and Charter](https://github.com/keptn/community/blob/main/governance/members-and-charter.md):
+ Describes the formation and responsibilities of the Keptn Governance Committee,
+ including its scope, members, and core responsibilities.
+
+## Installation
+
+Keptn can be installed on any cluster
+running Kubernetes >=1.27.
+
+Use the following command sequence
+to install the latest release of Keptn:
+
+```shell
+helm repo add keptn https://charts.lifecycle.keptn.sh
+helm repo update
+helm upgrade --install keptn keptn/keptn -n keptn-system --create-namespace --wait
+```
+
+### Monitored namespaces
+
+Keptn must be installed in its own namespace
+that does not run other major components or deployments.
+
+By default, the Keptn lifecycle orchestration
+monitors all namespaces in the cluster
+except for a few namespaces that are reserved
+for specific Kubernetes and other components.
+You can modify the Helm chart to specify the namespaces
+where the Keptn lifecycle orchestration is allowed.
+For more information, see the "Namespaces and Keptn" page in the
+[Configuration](https://keptn.sh/stable/docs/installation/configuration/)
+section of the documentation.
+
+## More information
+
+For more info about Keptn, please see our
+[documentation](https://keptn.sh/stable/docs/).
+
+You can also find a number of video presentations and demos
+about Keptn on the
+[YouTube Keptn channel](https://www.youtube.com/@keptn).
+Videos that refer to the "Keptn Lifecycle Controller"
+are relevant for the Keptn project.
+
+## Contributing
+
+For more information about contributing to Keptn, please
+refer to the [Contribution guide](https://keptn.sh/stable/docs/contribute/)
+section of the documentation.
+
+To set up your local Keptn development environment, please follow
+[these steps](https://keptn.sh/stable/docs/contribute/software/dev-environ/#first-steps)
+for new contributors.
+
+## License
+
+Please find more information in the [LICENSE](LICENSE) file.
+
+## Thanks to all the people who have contributed 💜
+
+
+
+
+
+
+
+Made with [contrib.rocks](https://contrib.rocks).
diff --git a/data/readmes/keycloak-nightly.md b/data/readmes/keycloak-nightly.md
new file mode 100644
index 0000000..ad4bacc
--- /dev/null
+++ b/data/readmes/keycloak-nightly.md
@@ -0,0 +1,87 @@
+# Keycloak - README (nightly)
+
+**Repository**: https://github.com/keycloak/keycloak
+**Version**: nightly
+
+---
+
+
+
+
+[](https://bestpractices.coreinfrastructure.org/projects/6818)
+[](https://clomonitor.io/projects/cncf/keycloak)
+[](https://securityscorecards.dev/viewer/?uri=github.com/keycloak/keycloak)
+[](https://artifacthub.io/packages/olm/community-operators/keycloak-operator)
+
+
+[](docs/translation.md)
+
+# Open Source Identity and Access Management
+
+Add authentication to applications and secure services with minimum effort. No need to deal with storing or authenticating users.
+
+Keycloak provides user federation, strong authentication, user management, fine-grained authorization, and more.
+
+
+## Help and Documentation
+
+* [Documentation](https://www.keycloak.org/documentation.html)
+* [User Mailing List](https://groups.google.com/d/forum/keycloak-user) - Mailing list for help and general questions about Keycloak
+* Join [#keycloak](https://cloud-native.slack.com/archives/C056HC17KK9) for general questions, or [#keycloak-dev](https://cloud-native.slack.com/archives/C056XU905S6) on Slack for design and development discussions, by creating an account at [https://slack.cncf.io/](https://slack.cncf.io/).
+
+
+## Reporting Security Vulnerabilities
+
+If you have found a security vulnerability, please look at the [instructions on how to properly report it](https://github.com/keycloak/keycloak/security/policy).
+
+
+## Reporting an issue
+
+If you believe you have discovered a defect in Keycloak, please open [an issue](https://github.com/keycloak/keycloak/issues).
+Please remember to provide a good summary and description, as well as steps to reproduce the issue.
+
+
+## Getting started
+
+To run Keycloak, download the distribution from our [website](https://www.keycloak.org/downloads.html). Unzip and run:
+
+ bin/kc.[sh|bat] start-dev
+
+Alternatively, you can use the Docker image by running:
+
+ docker run quay.io/keycloak/keycloak start-dev
+
+For more details refer to the [Keycloak Documentation](https://www.keycloak.org/documentation.html).
+
+
+## Building from Source
+
+To build from source, refer to the [building and working with the code base](docs/building.md) guide.
+
+
+### Testing
+
+To run tests, refer to the [running tests](docs/tests.md) guide.
+
+
+### Writing Tests
+
+To write tests, refer to the [writing tests](docs/tests-development.md) guide.
+
+
+## Contributing
+
+Before contributing to Keycloak, please read our [contributing guidelines](CONTRIBUTING.md). Participation in the Keycloak project is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
+
+Joining a [community meeting](https://www.keycloak.org/community) is a great way to get involved and help shape the future of Keycloak.
+
+## Other Keycloak Projects
+
+* [Keycloak](https://github.com/keycloak/keycloak) - Keycloak Server and Java adapters
+* [Keycloak QuickStarts](https://github.com/keycloak/keycloak-quickstarts) - QuickStarts for getting started with Keycloak
+* [Keycloak Node.js Connect](https://github.com/keycloak/keycloak-nodejs-connect) - Node.js adapter for Keycloak
+
+
+## License
+
+* [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
diff --git a/data/readmes/keylime-v7130.md b/data/readmes/keylime-v7130.md
new file mode 100644
index 0000000..8c5f3e8
--- /dev/null
+++ b/data/readmes/keylime-v7130.md
@@ -0,0 +1,239 @@
+# Keylime - README (v7.13.0)
+
+**Repository**: https://github.com/keylime/keylime
+**Version**: v7.13.0
+
+---
+
+# Keylime
+
+[](https://cloud-native.slack.com/archives/C01ARE2QUTZ)
+[](https://keylime.readthedocs.io/en/latest/?badge=latest)
+
+
+
+Keylime is an open-source scalable trust system harnessing TPM Technology.
+
+Keylime provides an end-to-end solution for bootstrapping hardware-rooted
+cryptographic trust for remote machines, the provisioning of encrypted payloads,
+and run-time system integrity monitoring. It also provides a flexible
+framework for the remote attestation of any given `PCR` (Platform Configuration
+Register). Users can create their own customized actions that will trigger when
+a machine fails its attested measurements.
+
+Keylime's mission is to make TPM Technology easily accessible to developers and
+users alike, without the need for a deep understanding of the lower levels of a
+TPM's operations. Amongst many scenarios, it is well suited to tenants who need to
+remotely attest machines not under their full control (such as a consumer of
+hybrid cloud or a remote Edge / IoT device in an insecure, physically tamper-prone
+location).
+
+Keylime can be driven with a CLI application and a set of RESTful APIs.
+
+Keylime consists of three main components: the Verifier, the Registrar, and the
+Agent.
+
+* The Verifier continuously verifies the integrity state of the machine that
+the agent is running on.
+
+* The Registrar is a database of all agents registered
+with Keylime and hosts the public keys of the TPM vendors.
+
+* The Agent is deployed to the remote machine that is to be measured or provisioned
+with secrets, which are stored within an encrypted payload that is released once trust is established.
+
+### Rust based Keylime Agent
+
+The verifier, registrar, and agent were all originally developed in Python and
+live in this repository, `keylime`. The agent has since been ported to the
+[Rust programming language](https://www.rust-lang.org); its code can be found
+in the [rust-keylime repository](https://github.com/keylime/rust-keylime).
+
+The agent was ported to Rust because Rust is a performant, low-level systems
+language designed with security as a central tenet, enforced by the Rust
+compiler's ownership model.
+
+Starting with its 0.1.0 release, the Rust-based agent is the official Keylime agent.
+
+| IMPORTANT: The Python version is deprecated and will be removed with the next major version (7.0.0)! |
+|------------------------------------------------------------------------------------------------------|
+
+
+### TPM Support
+
+Keylime supports TPM version *2.0*.
+
+Keylime can be used with a hardware TPM, or a software TPM emulator for
+development, testing, or demonstration purposes. However, DO NOT USE Keylime in
+production with a TPM emulator! A software TPM emulator does not provide a
+hardware root of trust and dramatically lowers the security benefits of using
+Keylime.
+
+A hardware TPM should always be used when real secrets and trust are required.
+
+## Table of Contents
+
+* [Installation](#installation)
+* [Usage](#usage)
+ * [Configuring Keylime](#configuring-keylime)
+ * [Running Keylime](#running-keylime)
+ * [Provisioning](#provisioning)
+* [Request a Feature](#request-a-feature)
+* [Security Vulnerability Management Policy](#security-vulnerability-management-policy)
+* [Meeting Information](#project-meetings)
+* [Contributing: First Timers Support](#contributing-first-timers-support)
+* [Testing](#testing)
+* [Additional Reading](#additional-reading)
+* [Disclaimer](#disclaimer)
+
+## Installation
+
+To install Keylime refer to [the instructions found in the documentation](https://keylime.readthedocs.io/en/latest/installation.html).
+
+
+## Usage
+
+### Configuring Keylime
+
+Keylime reads its configuration from `/etc/keylime/*.conf` or `/usr/etc/keylime/*.conf`.
+An alternate location for each config file can be given via the environment variables
+`KEYLIME_{VERIFIER,REGISTRAR,TENANT,CA,LOGGING}_CONFIG`.
+
+Those files are documented with comments and should be self-explanatory in most cases.
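
The environment-variable override can be sketched as follows (assuming the upper-case `KEYLIME_*_CONFIG` variable names; the paths here are illustrative, not defaults):

```shell
# Point individual Keylime components at non-default config files.
export KEYLIME_VERIFIER_CONFIG=/opt/keylime/verifier.conf
export KEYLIME_REGISTRAR_CONFIG=/opt/keylime/registrar.conf

# The respective services would now read these files instead of
# /etc/keylime/*.conf, e.g.:
#   keylime_verifier
#   keylime_registrar
echo "verifier config: $KEYLIME_VERIFIER_CONFIG"
```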
+
+### Running Keylime
+
+Keylime runs three major component services, the registrar, the verifier, and the agent:
+
+* The *registrar* is a simple HTTPS service that accepts TPM public keys. It then
+presents an interface to obtain these public keys for checking quotes.
+
+* The *verifier* is the most important component in Keylime. It does initial and
+periodic checks of system integrity and supports bootstrapping a cryptographic key
+securely with the agent. The verifier uses mutual TLS for its control interface.
+
+ By default, the verifier will create appropriate TLS certificates for itself
+ in `/var/lib/keylime/cv_ca/`. The registrar and tenant will use this as well. If
+ you use the generated TLS certificates then all the processes need to run as root
+ to allow reading of private key files in `/var/lib/keylime/`.
+
+* The *agent* is the target of bootstrapping and integrity measurements. It stores
+ its state in `/var/lib/keylime/`.
+
+
+### Provisioning
+
+To kick everything off you need to tell Keylime to provision a machine. This can be
+done with the Keylime tenant.
+
+#### Provisioning with keylime_tenant
+
+The `keylime_tenant` utility can be used to provision your agent.
+
+As an example, the following command tells Keylime to provision a new agent
+at 127.0.0.1 with UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000 and talk to a
+verifier at 127.0.0.1. Finally, it will encrypt a file called `filetosend`
+and send it to the agent allowing it to decrypt it only if the configured TPM
+policy is satisfied:
+
+`keylime_tenant -c add -t 127.0.0.1 -v 127.0.0.1 -u d432fbb3-d2f1-4a97-9ef7-75bd81c00000 -f filetosend`
+
+To stop Keylime from requesting attestations:
+
+`keylime_tenant -c delete -t 127.0.0.1 -u d432fbb3-d2f1-4a97-9ef7-75bd81c00000`
+
+For additional advanced options for the tenant utility run:
+
+`keylime_tenant -h`
+
+Documentation on how to create runtime and measured boot policies can be found in
+the [Keylime User Guide](https://keylime.readthedocs.io/en/latest/user_guide.html).
+
+## Systemd service support
+
+The directory `services/` includes `systemd` service files for the verifier,
+agent and registrar.
+
+You can install the services with the following command:
+
+`sudo ./services/installer.sh`
+
+Once installed, you can run and inspect the services `keylime_verifier` and `keylime_registrar` via `systemctl`.
+The Rust agent repository also contains a systemd service file for the agent.
+
+## Request a feature
+
+Keylime feature requests are tracked as enhancements in the [enhancements repository](https://github.com/keylime/enhancements).
+
+The enhancement process has been implemented to provide a way to review and
+assess the impact(s) of significant changes to Keylime.
+
+## Security Vulnerability Management Policy
+
+If you have found a security vulnerability in Keylime and would like to
+report it, first of all: thank you.
+
+Please contact us directly at [security@keylime.groups.io](mailto:security@keylime.groups.io)
+for any bug that might impact the security of this project. **Do not** use a
+Github issue to report any potential security bugs.
+
+
+## Project Meetings
+
+We meet on the fourth Wednesday of each month, from 16:00 to 17:00 UK time (GMT/BST). Anyone is welcome to join the meeting.
+
+The meeting is normally announced on [CNCF chat (Slack)](https://cloud-native.slack.com/archives/C01ARE2QUTZ).
+
+Meeting agendas are hosted and archived in the [meetings repo](https://github.com/keylime/meetings) as GitHub issues.
+
+## Contributing: First Timers Support
+
+We welcome new contributors to Keylime of all kinds, including those of you who may be new to working on an open source project.
+
+If you are new to open source development, don't worry: there are many ways you can get involved in our project. As a start, try exploring issues with the [`good first issue`](https://github.com/keylime/keylime/labels/good%20first%20issue) label.
+We understand that the process of creating a Pull Request (PR) can be a barrier for new contributors. These issues are reserved for new contributors like you. If you need any help or advice in making the PR, feel free to jump into our [chat room](https://cloud-native.slack.com/archives/C01ARE2QUTZ) and ask for help there.
+
+Your contribution is our gift to make our project even more robust. Check out [CONTRIBUTING.md](https://github.com/keylime/keylime/blob/master/CONTRIBUTING.md) to find out more about how to contribute to our project.
+
+Keylime uses [Semantic Versioning](https://semver.org/). It is recommended that you also read the [RELEASE.md](RELEASE.md)
+file to learn more about it and familiarise yourself with simple examples of its use.
+
+## Testing
+
+Please see [TESTING.md](TESTING.md) for details.
+
+## Additional Reading
+
+* Executive summary Keylime slides: [docs/old/keylime-elevator-slides.pptx](https://github.com/keylime/keylime/raw/master/docs/old/keylime-elevator-slides.pptx)
+* Detailed Keylime Architecture slides: [docs/old/keylime-detailed-architecture-v7.pptx](https://github.com/keylime/keylime/raw/master/docs/old/keylime-detailed-architecture-v7.pptx)
+* See ACSAC 2016 paper in doc directory: [docs/old/tci-acm.pdf](https://github.com/keylime/keylime/blob/master/docs/old/tci-acm.pdf)
+ * and the ACSAC presentation on Keylime: [docs/old/llsrc-keylime-acsac-v6.pptx](https://github.com/keylime/keylime/raw/master/docs/old/llsrc-keylime-acsac-v6.pptx)
+* See the HotCloud 2018 paper: [docs/old/hotcloud18.pdf](https://github.com/keylime/keylime/blob/master/docs/old/hotcloud18.pdf)
+* Details about Keylime REST API: [docs/old/keylime RESTful API.docx](https://github.com/keylime/keylime/raw/master/docs/old/keylime%20RESTful%20API.docx)
+* [Demo files](demo/) - Some pre-packaged demos to show off what Keylime can do.
+* [IMA stub service](https://github.com/keylime/rust-keylime/tree/master/keylime-ima-emulator) - Allows you to test IMA and Keylime on a machine without a TPM. Service keeps emulated TPM synchronized with IMA.
+
+#### Errata from the ACSAC Paper
+
+We discovered a typo in Figure 5 of the published ACSAC paper. The final interaction
+between the Tenant and Cloud Verifier showed an HMAC of the node's ID using the key
+K_e. This should be using K_b. The paper in this repository and the ACSAC presentation
+have been updated to correct this typo.
+
+The software that runs on the system with the TPM is now called the Keylime *agent* rather
+than the *node*. We have made this change in the documentation and code. The ACSAC paper
+will remain as it was published using *node*.
+
+## Disclaimer
+
+DISTRIBUTION STATEMENT A. Approved for public release: distribution unlimited.
+
+This material is based upon work supported by the Assistant Secretary of Defense for
+Research and Engineering under Air Force Contract No. FA8721-05-C-0002 and/or
+FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this
+material are those of the author(s) and do not necessarily reflect the views of the
+Assistant Secretary of Defense for Research and Engineering.
+
+Keylime's license was changed from BSD Clause-2 to Apache 2.0. The original BSD
+Clause-2 licensed code can be found on the [MIT GitHub
+organization](https://github.com/mit-ll/MIT-keylime).
diff --git a/data/readmes/kgateway-v212.md b/data/readmes/kgateway-v212.md
new file mode 100644
index 0000000..7000d20
--- /dev/null
+++ b/data/readmes/kgateway-v212.md
@@ -0,0 +1,66 @@
+# Kgateway - README (v2.1.2)
+
+**Repository**: https://github.com/kgateway-dev/kgateway
+**Version**: v2.1.2
+
+---
+
+
+
+
+
+
+
+
+ An Envoy-Powered, Kubernetes-Native API Gateway
+
+
+[](https://bestpractices.coreinfrastructure.org/projects/10534)
+
+## About kgateway
+
+Kgateway is:
+
+* **An ingress/edge router for Kubernetes**: Powered by [Envoy](https://www.envoyproxy.io) and programmed with the [Gateway API](https://gateway-api.sigs.k8s.io/), kgateway is a world-leading Cloud Native ingress.
+* **An advanced API gateway**: Aggregate web APIs and apply key functions like authentication, authorization and rate limiting in one place
+* **A better waypoint proxy for [ambient mesh](https://ambientmesh.io/)**: Use the same stack for east-west management as you do for north-south.
+* **An AI gateway for securing LLM usage**: Protect applications, models, and data from inappropriate access or use, whether you're producing or consuming. Manage traffic to LLM providers, and enrich prompts at a system level.
+* **An LLM Gateway utilizing the [Inference Extension](https://gateway-api-inference-extension.sigs.k8s.io/) project**: Intelligently route to AI inference workloads and LLMs in your Kubernetes environment.
+* **A model context protocol (MCP) gateway**: Federate MCP tool servers into a single, scalable and secure endpoint.
+* **A migration engine for hybrid apps**: Route to backends implemented as microservices, serverless functions or legacy apps. This can help you gradually migrate from legacy code to microservices and serverless, add new functionalities using cloud-native technologies while maintaining a legacy codebase or allow different teams in an organization to choose different architectures.
+
+Kgateway is feature-rich, fast, and flexible. It excels in function-level routing, supports legacy apps, microservices and serverless, offers robust discovery capabilities, integrates seamlessly with open-source projects, and is designed to support hybrid applications with various technologies, architectures, protocols, and clouds.
+
+The project was previously known as Gloo, and has been [production-ready since 2019](https://www.solo.io/blog/announcing-gloo-1-0-a-production-ready-envoy-based-api-gateway). Please see [the migration plan](https://github.com/kgateway-dev/kgateway/issues/10363) for more information and the current status of the change from Gloo to kgateway.
+
+## Get involved
+
+- [Join us on our Slack channel](https://kgateway.dev/slack/)
+- [Check out the docs](https://kgateway.dev/docs)
+- [Read the kgateway blog](https://kgateway.dev/blog/)
+- [Learn more about the community](https://github.com/kgateway-dev/community)
+- [Watch a video on our YouTube channel](https://www.youtube.com/@kgateway-dev)
+- Follow us on [X](https://x.com/kgatewaydev), [Bluesky](https://bsky.app/profile/kgateway.dev), [Mastodon](https://mastodon.social/@kgateway) or [LinkedIn](https://www.linkedin.com/company/kgateway/)
+
+## Contributing to kgateway
+
+Please refer to [devel/contributing/README.md](/devel/contributing/README.md) as a starting point for contributing to the project.
+
+## Releasing kgateway
+
+Please refer to [devel/contributing/releasing.md](devel/contributing/releasing.md) as a starting point for understanding releases of the project.
+
+## Thanks
+
+Kgateway would not be possible without the valuable open source work of projects in the community. We would like to extend a special thank-you to [Envoy](https://www.envoyproxy.io), upon whose shoulders we stand.
+
+## Security
+
+See our [SECURITY.md](SECURITY.md) file for details.
+
+---
+
+
diff --git a/data/readmes/kibana-v922.md b/data/readmes/kibana-v922.md
new file mode 100644
index 0000000..bb759f6
--- /dev/null
+++ b/data/readmes/kibana-v922.md
@@ -0,0 +1,71 @@
+# Kibana - README (v9.2.2)
+
+**Repository**: https://github.com/elastic/kibana
+**Version**: v9.2.2
+
+---
+
+# Kibana
+
+Kibana is the open source interface to query, analyze, visualize, and manage your data stored in Elasticsearch.
+
+- [Getting Started](#getting-started)
+ - [Using a Kibana Release](#using-a-kibana-release)
+ - [Building and Running Kibana, and/or Contributing Code](#building-and-running-kibana-andor-contributing-code)
+- [Documentation](#documentation)
+- [Version Compatibility with Elasticsearch](#version-compatibility-with-elasticsearch)
+- [Questions? Problems? Suggestions?](#questions-problems-suggestions)
+
+## Getting Started
+
+If you just want to try Kibana out, check out the [Elastic Stack Getting Started Page](https://www.elastic.co/start) to give it a whirl.
+
+If you're interested in diving a bit deeper and getting a taste of Kibana's capabilities, head over to the [Kibana Getting Started Page](https://www.elastic.co/guide/en/kibana/current/get-started.html).
+
+### Using a Kibana Release
+
+If you want to use a Kibana release in production, give it a test run, or just play around:
+
+- Download the latest version on the [Kibana Download Page](https://www.elastic.co/downloads/kibana).
+- Learn more about Kibana's features and capabilities on the
+[Kibana Product Page](https://www.elastic.co/kibana).
+- We also offer a hosted version of Kibana on our
+[Cloud Service](https://www.elastic.co/cloud/as-a-service).
+
+### Building and Running Kibana, and/or Contributing Code
+
+You might want to build Kibana locally to contribute some code, test out the latest features, or try
+out an open PR:
+
+- [CONTRIBUTING.md](CONTRIBUTING.md) will help you get Kibana up and running.
+- If you would like to contribute code, please follow our [STYLEGUIDE.mdx](STYLEGUIDE.mdx).
+- For all other questions, check out the [FAQ.md](FAQ.md).
+
+## Documentation
+
+Visit [Elastic.co](http://www.elastic.co/guide/en/kibana/current/index.html) for the full Kibana documentation.
+
+For information about building the documentation, see the README in [elastic/docs](https://github.com/elastic/docs).
+
+## Version Compatibility with Elasticsearch
+
+Ideally, you should run Elasticsearch and Kibana with matching version numbers. If your Elasticsearch has an older _minor_ or _major_ number than Kibana, or a newer _major_ number, then Kibana will fail to run. If Elasticsearch has a newer minor or patch number than Kibana, or an older patch number, then the Kibana server will log a warning.
+
+_Note: The version numbers below are only examples, meant to illustrate the relationships between different types of version numbers._
+
+| Situation | Example Kibana version | Example ES version | Outcome |
+| ------------------------- | -------------------------- |------------------- | ------- |
+| Versions are the same. | 7.15.1 | 7.15.1 | 💚 OK |
+| ES patch number is newer. | 7.15.__0__ | 7.15.__1__ | ⚠️ Logged warning |
+| ES minor number is newer. | 7.__14__.2 | 7.__15__.0 | ⚠️ Logged warning |
+| ES major number is newer. | __7__.15.1 | __8__.0.0 | 🚫 Fatal error |
+| ES patch number is older. | 7.15.__1__ | 7.15.__0__ | ⚠️ Logged warning |
+| ES minor number is older. | 7.__15__.1 | 7.__14__.2 | 🚫 Fatal error |
+| ES major number is older. | __8__.0.0 | __7__.15.1 | 🚫 Fatal error |
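
The rules in the table above can be sketched as a small shell function (illustrative only; this is not Kibana's actual startup check, and versions are assumed to be plain `MAJOR.MINOR.PATCH`):

```shell
#!/bin/sh
# Sketch of the Kibana/Elasticsearch version-compatibility rules.
compat() {
  kib=$1; es=$2
  k_major=${kib%%.*}; k_rest=${kib#*.}; k_minor=${k_rest%%.*}; k_patch=${k_rest#*.}
  e_major=${es%%.*};  e_rest=${es#*.};  e_minor=${e_rest%%.*}; e_patch=${e_rest#*.}

  if [ "$k_major" -ne "$e_major" ]; then echo "fatal";   return; fi  # major mismatch
  if [ "$e_minor" -lt "$k_minor" ]; then echo "fatal";   return; fi  # ES minor older
  if [ "$e_minor" -gt "$k_minor" ] || [ "$e_patch" -ne "$k_patch" ]; then
    echo "warning"; return                                           # minor/patch skew
  fi
  echo "ok"
}

compat 7.15.1 7.15.1   # same versions -> ok
compat 7.15.1 8.0.0    # ES major newer -> fatal
```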
+
+## Questions? Problems? Suggestions?
+
+- If you've found a bug or want to request a feature, please create a [GitHub Issue](https://github.com/elastic/kibana/issues/new/choose).
+ Please check to make sure someone else hasn't already created an issue for the same topic.
+- Need help using Kibana? Ask away on our [Kibana Discuss Forum](https://discuss.elastic.co/c/kibana) and a fellow community member or
+Elastic engineer will be glad to help you out.
diff --git a/data/readmes/kitops-v1100.md b/data/readmes/kitops-v1100.md
new file mode 100644
index 0000000..bb6ef90
--- /dev/null
+++ b/data/readmes/kitops-v1100.md
@@ -0,0 +1,149 @@
+# KitOps - README (v1.10.0)
+
+**Repository**: https://github.com/kitops-ml/kitops
+**Version**: v1.10.0
+
+---
+
+
+
+
+
+## KitOps: Standards-based packaging & versioning for AI/ML projects
+
+[](./LICENSE)
+[](https://discord.gg/Tapeh8agYy)
+[](https://twitter.com/kit_ops)
+
+## 📚 Table of Contents
+- [What is KitOps?](#what-is-kitops)
+- [KitOps Architecture](#kitops-architecture)
+- [Try KitOps](#try-kitops-in-under-15-minutes)
+- [Benefits](#key-benefits)
+- [Community & Support](#join-kitops-community)
+
+## What is KitOps?
+
+KitOps is a CNCF open standards project for packaging, versioning, and securely sharing AI/ML projects. Built on the OCI ([Open Container Initiative](https://opencontainers.org/)) standard, it integrates seamlessly with your existing AI/ML, software development, and DevOps tools.
+
+It’s the preferred solution for packaging, versioning, and managing assets in security-conscious enterprises, governments, and cloud operators who need to self-host AI models and agents.
+
+### KitOps and the CNCF
+
+KitOps is a CNCF project, and is governed by the same organization and policies that manage Kubernetes, OpenTelemetry, and Prometheus. [This video provides an outline of KitOps in the CNCF](https://youtu.be/iK9mnU0prRU?feature=shared).
+
+KitOps is also the reference implementation of the [CNCF's ModelPack specification](https://github.com/modelpack/model-spec) for a vendor-neutral AI/ML interchange format.
+
+
+[![Official Website]()](https://kitops.org?utm_source=github&utm_medium=kitops-readme)
+
+[![Use Cases]()](https://kitops.org/docs/get-started/?utm_source=github&utm_medium=kitops-readme)
+
+## KitOps Architecture
+
+### ModelKit
+
+KitOps packages your project into a [ModelKit](https://kitops.org/docs/modelkit/intro/) — a self-contained, immutable bundle that includes everything required to reproduce, test, or deploy your AI/ML model.
+
+ModelKits can include code, model weights, datasets, prompts, experiment run results and hyperparameters, metadata, environment configurations, and more.
+
+ModelKits are:
+* Tamper-proof – Ensuring consistency and traceability
+* Signable – Enabling trust and verification
+* Compatible – Natively stored and retrieved in all major container registries
+
+> *ModelKits elevate AI artifacts to first-class, governed assets — just like application code.*
+
+### Kitfile
+A [Kitfile](https://kitops.org/docs/kitfile/kf-overview/) defines your ModelKit. Written in YAML, it maps where each artifact lives and how it fits into the project.
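
As an illustration, a minimal Kitfile might look like the following sketch (field names follow the Kitfile reference as best recalled; all paths and names here are hypothetical, so check the linked docs for the authoritative schema):

```yaml
# Illustrative Kitfile sketch; see the Kitfile reference for the full schema.
manifestVersion: "1.0.0"
package:
  name: my-model
  version: 1.0.0
  description: Example packaging of a trained model
model:
  path: ./model.safetensors
code:
  - path: ./src
datasets:
  - name: training-data
    path: ./data/train.csv
```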
+
+### Kit CLI
+
+The [Kit CLI](https://kitops.org/docs/cli/cli-reference/) not only enables you to create, manage, run, and deploy ModelKits -- it lets you pull only the pieces you need.
+
+### 🎥 Watch KitOps in Action
+
+[](https://www.youtube.com/watch?v=j2qjHf2HzSQ)
+
+This video shows how KitOps streamlines collaboration between data scientists, developers, and SREs using ModelKits.
+
+## 🚀 Try KitOps in under 15 Minutes
+
+1. **Install the CLI**: [for MacOS, Windows, and Linux](https://kitops.org/docs/cli/installation/).
+2. **Pack your first ModelKit**: Learn how to pack, push, and pull using our [Getting Started](...) guide.
+3. **Explore a Quick Start**: [Try pre-built ModelKits](https://jozu.ml/organization/jozu-quickstarts) for LLMs, CVs, and more.
+
+For those who prefer to build from source, follow [these steps](https://kitops.org/docs/cli/installation/#🛠️-install-from-source) to get the latest version from our repository.
+
+## Key Benefits
+
+KitOps was built to bring discipline to productizing AI/ML projects, with:
+* 📦 Unified packaging and versioning of AI/ML assets
+* 🔐 Secure, signed distribution
+* 🛠️ Toolchain compatibility via OCI
+* ⚙️ Production-ready for enterprise ML workflows
+* 🚢 Create runnable containers for Kubernetes or Docker
+* 📈 Audit-ready lineage tracking
+
+To get the most out of KitOps' ModelKits, use them with the **[Jozu Hub](https://jozu.com/)**. Jozu Hub can be installed behind your firewall and can use your existing OCI registry in a private cloud, datacenter, or even an air-gapped environment.
+
+### Simplify Team Collaboration
+
+ModelKits streamline handoffs between:
+* Data scientists preparing and training models
+* Application developers integrating models into services
+* SREs deploying and maintaining models in production
+
+This ensures reliable, repeatable workflows for both development and operations.
+
+### Use KitOps to Speed Up and De-risk AI/ML Projects
+
+KitOps supports packaging for a wide variety of models:
+* Large language models
+* Computer vision models
+* Multi-modal models
+* Predictive models
+* Audio models
+* etc...
+
+> 🇪🇺 EU AI Act Compliance 🔒
+>
+> For our friends in the EU - ModelKits are the perfect way to create a library of model versions for EU AI Act compliance because they're tamper-proof, signable, and auditable.
+
+## Join KitOps Community
+
+For support, release updates, and general KitOps discussion, please join the [KitOps Discord](https://discord.gg/Tapeh8agYy). Follow [KitOps on X](https://twitter.com/Kit_Ops) for daily updates.
+
+If you need help, there are several ways to reach our community and [Maintainers](./MAINTAINERS.md), outlined in our [support doc](./SUPPORT.md).
+
+### Joining the KitOps Contributors
+
+We ❤️ our KitOps community and contributors. To learn more about the many ways you can contribute (you don't need to be a coder) and how to get started see our [Contributor's Guide](./CONTRIBUTING.md). Please read our [Governance](./GOVERNANCE.md) and our [Code of Conduct](./CODE-OF-CONDUCT.md) before contributing.
+
+
+### Reporting Issues and Suggesting Features
+
+Your insights help KitOps evolve as an open standard for AI/ML. We *deeply value* the issues and feature requests we get from users in our community :sparkling_heart:. To contribute your thoughts, navigate to the **Issues** tab and click the **New Issue** button.
+
+### KitOps Community Calls (bi-weekly)
+
+**📅 Wednesdays @ 13:30 – 14:00 (America/Toronto)**
+- 🔗 [Google Meet](https://meet.google.com/zfq-uprp-csd)
+- ☎️ +1 647-736-3184 (PIN: 144 931 404#)
+- 🌐 [More numbers](https://tel.meet/zfq-uprp-csd?pin=1283456375953)
+
+
+### A Community Built on Respect
+
+At KitOps, inclusivity, empathy, and responsibility are at our core. Please read our [Code of Conduct](./CODE-OF-CONDUCT.md) to understand the values guiding our community.
+
+---
+
+
+
+
+
diff --git a/data/readmes/kmesh-v120-rc0.md b/data/readmes/kmesh-v120-rc0.md
new file mode 100644
index 0000000..67cf72e
--- /dev/null
+++ b/data/readmes/kmesh-v120-rc0.md
@@ -0,0 +1,133 @@
+# Kmesh - README (v1.2.0-rc.0)
+
+**Repository**: https://github.com/kmesh-net/kmesh
+**Version**: v1.2.0-rc.0
+
+---
+
+
+
+[](/LICENSE) [](https://codecov.io/gh/kmesh-net/kmesh)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkmesh-net%2Fkmesh?ref=badge_shield)
+
+## Introduction
+
+Kmesh is a high-performance, low-overhead service mesh data plane based on eBPF and a programmable kernel. Kmesh brings traffic management, security, and monitoring to service communication without requiring application code changes. It is natively sidecarless and zero-intrusion, and adds no resource cost to application containers.
+
+## Why Kmesh
+
+### Challenges of the Service Mesh Data Plane
+
+Service mesh software, represented by Istio, has gradually become popular and is now an important component of cloud native infrastructure. However, several challenges remain:
+
+- **Extra latency overhead at the proxy layer**: Adds [2~3ms](https://istio.io/v1.19/docs/ops/deployment/performance-and-scalability/) of latency, which cannot meet the SLA requirements of latency-sensitive applications. Although the community has come up with a variety of optimizations, the overhead introduced by the sidecar cannot be completely eliminated.
+- **High resource occupation**: The proxy occupies 0.5 vCPU and 50 MB of memory per 1000 requests per second, which decreases the deployment density of service containers.
+
+### Kmesh Architecture
+
+Kmesh transparently intercepts and forwards traffic using node-local eBPF without introducing extra connection hops; both the latency and resource overhead are negligible.
+
+
+
+
+Kmesh Architecture
+
+
+The main components of Kmesh include:
+
+- **Kmesh-daemon**: The per-node management component, responsible for bpf program management, xDS configuration subscription, observability, and more.
+- **eBPF Orchestration**: Traffic orchestration implemented in eBPF; it supports L4 load balancing, traffic encryption, monitoring, and simple L7 dynamic routing.
+- **Waypoint**: Responsible for advanced L7 traffic governance; it can be deployed separately per namespace or per service.
+
+Kmesh innovatively sinks Layer 4 and simple Layer 7 (HTTP) traffic governance into the kernel, building a transparent, sidecarless service mesh that avoids the proxy layer on the data path. We call this the Kernel-Native mode.
+
+
+
+
+Kernel-Native Mode
+
+
+Kmesh also provides a Dual-Engine mode, which uses eBPF and waypoints to process L4 and L7 traffic separately. This allows you to adopt Kmesh incrementally, enabling a smooth transition from no mesh, to secure L4 processing, to full L7 processing.
+
+
+
+
+Dual-Engine Mode
+
+
+### Key features of Kmesh
+
+#### Smooth Compatibility
+
+- Application-transparent Traffic Management
+
+#### High Performance
+
+- Forwarding delay **60%↓**
+- Workload startup performance **40%↑**
+
+#### Low Resource Overhead
+
+- ServiceMesh data plane overhead **70%↓**
+
+#### Zero Trust
+
+- Provide zero trust security with default mutual TLS
+- Policy enforcement both in eBPF and waypoints
+
+#### Safety Isolation
+
+- eBPF Virtual machine security
+- Cgroup level orchestration isolation
+
+#### Open Ecology
+
+- Supports XDS protocol standards
+- Support [Gateway API](https://gateway-api.sigs.k8s.io/)
+
+## Quick Start
+
+Please refer to [quick start](https://kmesh.net/en/docs/setup/quick-start/) and [user guide](docs/kmesh_demo.md) to try Kmesh quickly.
+
+## Performance
+
+Based on [Fortio](https://github.com/fortio/fortio), the performance of Kmesh and Envoy was tested. The test results are as follows:
+
+
+
+For a complete performance test result, please refer to [Kmesh Performance Test](test/performance/README.md).
+
+## Contact
+
+If you have any questions, feel free to reach out to us in the following ways:
+
+- [meeting notes](https://docs.google.com/document/d/1fFqolwWMVMk92yXPHvWGrMgsrb8Xru_v4Cve5ummjbk)
+- [mailing list](https://groups.google.com/forum/#!forum/kmesh)
+- [slack](https://cloud-native.slack.com/archives/C06BU2GB8NL)
+- [twitter](https://twitter.com/kmesh_net)
+
+## Community Meeting
+
+Regular Community Meeting:
+
+- Thursday at 16:00 UTC+8 (Chinese)(weekly). [Convert to your timezone](https://www.thetimezoneconverter.com/?t=14%3A30&tz=GMT%2B8&).
+
+Resources:
+
+- [Meeting Link](https://zoom-lfx.platform.linuxfoundation.org/meeting/99299011908?password=f4c31ddd-11ed-42ae-a617-3e0842c39c58)
+
+## Contributing
+
+If you're interested in being a contributor and want to get involved in developing Kmesh, please see [CONTRIBUTING](CONTRIBUTING.md) for more details on submitting patches and the contribution workflow.
+
+## License
+
+The Kmesh user-space components are licensed under the
+[Apache License, Version 2.0](./LICENSE).
+The BPF code templates, the kernel module (ko), and the mesh data acceleration are dual-licensed under the
+[General Public License, Version 2.0 (only)](./bpf/LICENSE.GPL-2.0)
+and the [2-Clause BSD License](./bpf/LICENSE.BSD-2-Clause)
+(you can use the terms of either license, at your option).
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkmesh-net%2Fkmesh?ref=badge_large)
+
+## Credit
+
+This project was initially incubated in the [openEuler community](https://gitee.com/openeuler/Kmesh), thanks openEuler Community for the help on promoting this project in early days.
diff --git a/data/readmes/knative-knative-v1200.md b/data/readmes/knative-knative-v1200.md
new file mode 100644
index 0000000..4155871
--- /dev/null
+++ b/data/readmes/knative-knative-v1200.md
@@ -0,0 +1,39 @@
+# Knative - README (knative-v1.20.0)
+
+**Repository**: https://github.com/knative/serving
+**Version**: knative-v1.20.0
+
+---
+
+# Knative Serving
+
+[](https://pkg.go.dev/github.com/knative/serving)
+[](https://goreportcard.com/report/knative/serving)
+[](https://github.com/knative/serving/releases)
+[](https://github.com/knative/serving/blob/main/LICENSE)
+[](https://cloud-native.slack.com/archives/C04LGHDR9K7)
+[](https://codecov.io/gh/knative/serving)
+[](https://bestpractices.coreinfrastructure.org/projects/5913)
+
+Knative Serving builds on Kubernetes to support deploying and serving
+applications and functions as serverless containers. Serving is easy to get
+started with and scales to support advanced scenarios.
+
+The Knative Serving project provides middleware primitives that enable:
+
+- Rapid deployment of serverless containers
+- Automatic scaling up and down to zero
+- Routing and network programming
+- Point-in-time snapshots of deployed code and configurations
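+
+As a minimal sketch of these primitives in action (the image below is Knative's public hello-world sample; any container image works), a Service can be deployed with plain YAML:
+
+```
+cat << EOF | kubectl apply -f -
+apiVersion: serving.knative.dev/v1
+kind: Service
+metadata:
+  name: hello
+spec:
+  template:
+    spec:
+      containers:
+        - image: ghcr.io/knative/helloworld-go:latest
+          env:
+            - name: TARGET
+              value: "World"
+EOF
+
+# Each deployment creates an immutable Revision, the point-in-time
+# snapshot of code and configuration mentioned above.
+kubectl get ksvc,revisions
+```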
+
+For documentation on using Knative Serving, see the
+[serving section](https://www.knative.dev/docs/serving/) of the
+[Knative documentation site](https://www.knative.dev/docs).
+
+For documentation on the Knative Serving specification, see the
+[docs](https://github.com/knative/serving/tree/main/docs) folder of this
+repository.
+
+If you are interested in contributing, see [CONTRIBUTING.md](./CONTRIBUTING.md)
+and [DEVELOPMENT.md](./DEVELOPMENT.md). For a list of all help wanted issues
+across Knative, take a look at [CLOTRIBUTOR](https://clotributor.dev/search?project=knative&page=1).
diff --git a/data/readmes/ko-v0180.md b/data/readmes/ko-v0180.md
new file mode 100644
index 0000000..c52bfa8
--- /dev/null
+++ b/data/readmes/ko-v0180.md
@@ -0,0 +1,48 @@
+# ko - README (v0.18.0)
+
+**Repository**: https://github.com/ko-build/ko
+**Version**: v0.18.0
+
+---
+
+# `ko`: Easy Go Containers
+
+[](https://github.com/ko-build/ko/actions?query=workflow%3ABuild)
+[](https://godoc.org/github.com/google/ko)
+[](https://goreportcard.com/report/ko-build/ko)
+[](https://slsa.dev/images/gh-badge-level3.svg)
+
+
+
+---
+
+> 🎉 Google has applied for `ko` to join the Cloud Native Computing Foundation as a Sandbox project! Learn more [here](https://opensource.googleblog.com/2022/10/ko-applies-to-become-a-cncf-sandbox-project.html)!
+
+`ko` is a simple, fast container image builder for Go applications.
+
+It's ideal for use cases where your image contains a single Go application
+with few or no dependencies on the OS base image (e.g., no cgo, no OS package
+dependencies).
+
+`ko` builds images by effectively executing `go build` on your local machine,
+and as such doesn't require `docker` to be installed. This can make it a good
+fit for lightweight CI/CD use cases.
+
+`ko` makes [multi-platform builds](https://ko.build/features/multi-platform/) easy, produces [SBOMs](https://ko.build/features/sboms/) by default, and includes support for simple YAML templating which makes it a powerful tool for [Kubernetes applications](https://ko.build/features/k8s/).
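+
+A typical invocation looks like the following sketch (the registry and package path are placeholders for your own):
+
+```
+# Tell ko which registry to push images to.
+export KO_DOCKER_REPO=ghcr.io/my-org
+
+# Build the Go package at ./cmd/app into an image, push it, and print
+# the resulting image reference.
+ko build ./cmd/app
+
+# Or resolve ko:// image references in Kubernetes YAML and apply in one step.
+ko apply -f config/
+```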
+
+# [Install `ko`](https://ko.build/install/) and [get started](https://ko.build/get-started/)!
+
+### Acknowledgements
+
+This work is based heavily on experience from having built the [Docker](https://github.com/bazelbuild/rules_docker) and [Kubernetes](https://github.com/bazelbuild/rules_k8s) support for [Bazel](https://bazel.build).
+That work was presented [here](https://www.youtube.com/watch?v=RS1aiQqgUTA).
+
+### Discuss
+
+Questions? Comments? Ideas?
+Come discuss `ko` with us in the `#ko-build` channel on the [Kubernetes Slack](https://slack.k8s.io)!
+See you there!
+
+### Community Meetings
+
+You can find all the necessary details about the community meetings in this [page](https://ko.build/community).
diff --git a/data/readmes/konveyor-v081-beta2.md b/data/readmes/konveyor-v081-beta2.md
new file mode 100644
index 0000000..524c7ef
--- /dev/null
+++ b/data/readmes/konveyor-v081-beta2.md
@@ -0,0 +1,309 @@
+# Konveyor - README (v0.8.1-beta.2)
+
+**Repository**: https://github.com/konveyor/operator
+**Version**: v0.8.1-beta.2
+
+---
+
+# Konveyor Operator
+
+[](http://www.apache.org/licenses/LICENSE-2.0.html) [](https://github.com/konveyor/tackle2-operator/pulls) [](https://bestpractices.coreinfrastructure.org/projects/7355)
+
+The Konveyor Operator fully manages the deployment and life cycle of Konveyor on Kubernetes and OpenShift.
+
+## Prerequisites
+
+Please ensure the following requirements are met prior to installation.
+
+* [__k8s v1.22+__](https://kubernetes.io/) or [__OpenShift 4.9+__](https://www.openshift.com/)
+* [__Persistent Storage__](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
+* [__Operator Lifecycle Manager (OLM) support__](https://olm.operatorframework.io/)
+* [__Ingress support__](https://kubernetes.io/docs/concepts/services-networking/ingress/)
+* [__Network policy support__](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+
+### Installing OLM support
+
+We strongly suggest OLM support for Tackle deployments. In some production Kubernetes clusters, OLM might already be present; if not, see the following examples on how to add OLM support to Minikube or standard Kubernetes clusters:
+
+#### Minikube:
+
+`$ minikube addons enable olm`
+
+`$ minikube addons enable ingress`
+
+#### Kubernetes:
+
+`$ kubectl apply -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml`
+
+`$ kubectl apply -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml`
+
+For detailed and official instructions on adding OLM support to Kubernetes and customizing your installation, refer to [here](https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/install/install.md).
+
+**Note:** Please wait a few minutes for OLM support to become available if this is a new deployment.
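+
+One way to confirm OLM is ready before proceeding (a sketch using standard `kubectl` commands against the default `olm` namespace created by the quickstart manifests):
+
+```
+# Wait for the core OLM deployments to become available.
+kubectl -n olm rollout status deployment/olm-operator
+kubectl -n olm rollout status deployment/catalog-operator
+
+# The packageserver CSV should eventually report phase "Succeeded".
+kubectl -n olm get csv
+```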
+
+#### Kubernetes Network Policies
+
+Tackle can provide namespace network isolation if a supported CNI, such as [Calico](https://minikube.sigs.k8s.io/docs/handbook/network_policy/#further-reading), is installed.
+
+`$ minikube start --network-plugin=cni --cni=calico`
+
+## Konveyor Operator Installation on k8s
+
+### Installing _released versions_
+
+Released versions (or public betas) of Konveyor are installable on Kubernetes via [OperatorHub](https://operatorhub.io/operator/konveyor-operator).
+
+### Installing _latest_
+
+Deploy Konveyor using manifest:
+
+`$ kubectl apply -f https://raw.githubusercontent.com/konveyor/tackle2-operator/main/tackle-k8s.yaml`
+
+This step will create the konveyor-tackle namespace, catalogsource, and other OLM-related objects.
+
+### Installing _beta_ (or special branches)
+
+If you need to deploy a beta release (or a special branch), replace the *main* branch in the URL with the desired branch or tag (e.g., v2.0.0-beta.0):
+
+`$ kubectl apply -f https://raw.githubusercontent.com/konveyor/tackle2-operator/v2.0.0-beta.0/tackle-k8s.yaml`
+
+**Note:** Upgrades between beta releases are **not guaranteed**. Once installed, we strongly suggest editing your subscription and switching to Manual upgrade mode for beta releases: `$ kubectl edit subscription` -> installPlanApproval: Manual
+
+### Creating a _Tackle_ CR
+
+**Note:** Tackle **requires** a storage class that supports RWX volumes. Please review storage requirements **prior** to creating the Tackle CR, in case you need to adjust settings.
+
+Use the following command to create the CR:
+
+```
+$ cat << EOF | kubectl apply -f -
+kind: Tackle
+apiVersion: tackle.konveyor.io/v1alpha1
+metadata:
+ name: tackle
+ namespace:
+spec:
+EOF
+```
+
+Once the CR is created, the operator will deploy the Hub and UI and configure the rest of the required components.
+
+### Verify _Tackle_ Deployment
+
+Depending on your hardware, it should take around 1-3 minutes to deploy Tackle. Below is a sample output of a successful deployment:
+
+```
+$ kubectl get pods
+NAME READY STATUS RESTARTS AGE
+c4af2f0f9eab63b6ac49c81b0e517eb37c2efe1bb2ede02e8642cd--1-ghq 0/1 Completed 0 134m
+konveyor-tackle-rm6jb 1/1 Running 0 134m
+tackle-hub-6b6ff674dd-c6xbr 1/1 Running 0 130m
+tackle-keycloak-postgresql-57f5c44bcc-r9w9s 1/1 Running 0 131m
+tackle-keycloak-sso-c65cd79bf-6j4xr 1/1 Running 0 130m
+tackle-operator-6b65fccb7f-q9lpf 1/1 Running 0 133m
+tackle-ui-5f694bddcb-scbh5 1/1 Running 0 130m
+```
+You can access the Konveyor UI in your browser through the `$(minikube ip)` IP.
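+
+For example, on Minikube (a sketch; assumes the default ingress created by the operator in the `konveyor-tackle` namespace):
+
+```
+# Print the UI address.
+echo "http://$(minikube ip)"
+
+# Or inspect the ingress resource directly.
+kubectl get ingress -n konveyor-tackle
+```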
+
+If you're looking to install Konveyor operator on macOS, follow the guide [here](docs/installation-macos.md).
+
+## Konveyor Operator Installation on OKD/OpenShift
+
+### Installing _released versions_
+
+Released versions (or public betas) of Konveyor are installable on OpenShift via community operators, which appear in [OCP](https://openshift.com/) and [OKD](https://www.okd.io/).
+
+1. Visit the OpenShift Web Console.
+2. Navigate to _Operators => OperatorHub_.
+3. Search for _Konveyor_.
+4. Install the desired _Konveyor_ version.
+
+### Installing _latest_
+
+Installing the latest version is almost identical to installing released versions but requires creating a new catalog source.
+
+1. `oc apply -f https://raw.githubusercontent.com/konveyor/tackle2-operator/main/konveyor-operator-catalog.yaml`
+
+2. Visit the OpenShift Web Console.
+3. Navigate to _Operators => OperatorHub_.
+4. Search for _Konveyor_.
+5. There should now be two _Konveyor_ operators available for installation.
+6. Select the _Konveyor_ version **without** the _community_ tag.
+7. Proceed to install the latest version using the _development_ channel during the subscription step.
+
+### Tackle CR Creation
+
+1. Visit the OpenShift Web Console, navigate to _Operators => Installed Operators_.
+2. Select _Konveyor_.
+3. Locate _Konveyor_ in the top menu and click on it.
+4. Adjust settings if desired and click "Create instance".
+
+## Tackle CR Settings
+
+If operator defaults need to be altered, the Tackle CR spec can be customized to meet your needs. See the table below for some of the most significant settings:
+
+Name | Default | Description
+--- | --- | ---
+feature_auth_required | true | Enable Keycloak auth; set to false for single-user/no-auth mode
+feature_isolate_namespace | true | Enable namespace isolation via network policies
+feature_analysis_archiver | true | If enabled, automatically archives old analysis reports when a new one is created
+rwx_supported | true | Whether or not RWX volumes are supported in the cluster
+hub_database_volume_size | 5Gi | Size requested for Hub database volume
+hub_bucket_volume_size | 100Gi | Size requested for Hub bucket volume
+keycloak_database_data_volume_size | 1Gi | Size requested for Keycloak DB volume
+cache_data_volume_size | 100Gi | Size requested for Tackle Cache volume
+cache_storage_class | N/A | Storage class requested for Tackle Cache volume
+hub_bucket_storage_class | N/A | Storage class requested for Tackle Hub Bucket volume
+rwo_storage_class | N/A | Storage class requested for RWO database volumes
+
+## Tackle CR Customize Settings
+
+Custom settings can be applied by editing the `Tackle` CR.
+
+`oc edit tackle -n <namespace>`
+
+## Keycloak Auth
+If `feature_auth_required` is enabled, Keycloak will be installed and a random password will be generated.
+
+To view these credentials:
+`oc -n konveyor-tackle extract secret/tackle-keycloak-sso --to=-`
+
+## Enable KAI (Solution Server)
+
+KAI is an optional, experimental “solution server” that integrates with the Hub. To enable it:
+
+1. Update the Tackle CR to enable KAI
+
+```
+kind: Tackle
+apiVersion: tackle.konveyor.io/v1alpha1
+metadata:
+ name: tackle
+ namespace: konveyor-tackle
+spec:
+ kai_solution_server_enabled: true
+```
+
+2. Create a credentials secret named `kai-api-keys` in the same namespace. Choose the provider you will use:
+
+- OpenAI-compatible
+```
+kubectl create secret generic kai-api-keys -n konveyor-tackle \
+ --from-literal=OPENAI_API_BASE='https://api.openai.com/v1' \
+ --from-literal=OPENAI_API_KEY=''
+```
+- Google
+```
+kubectl create secret generic kai-api-keys -n konveyor-tackle \
+ --from-literal=GOOGLE_API_KEY=''
+```
+
+3. Set the provider in the Tackle CR to match your secret
+
+- OpenAI-compatible: `spec.kai_llm_provider: openai`
+- Google: `spec.kai_llm_provider: google`
+
+Example (OpenAI-compatible):
+```
+kind: Tackle
+apiVersion: tackle.konveyor.io/v1alpha1
+metadata:
+ name: tackle
+ namespace: konveyor-tackle
+spec:
+ kai_solution_server_enabled: true
+ kai_llm_provider: openai
+ # optional, pick a suitable model for your provider
+ kai_llm_model: gpt-4o-mini
+```
+
+4. (Optional) Force a reconcile so the operator picks up the secret immediately
+```
+kubectl patch tackle tackle -n konveyor-tackle --type=merge -p \
+'{"metadata":{"annotations":{"konveyor.io/force-reconcile":"'"$(date +%s)"'"}}}'
+```
+
+5. Verify KAI resources
+```
+kubectl get deploy,svc -n konveyor-tackle | grep -E 'kai-(api|db)'
+```
+
+6. Access KAI
+
+**Option A: Port-forward**
+```
+kubectl -n konveyor-tackle port-forward services/tackle-ui 8080:8080
+# Solution server should be accessible at localhost:8080/api
+```
+
+**Option B: Ingress resource**
+An ingress resource is also created for external access to the solution server.
+
+Advanced configuration: you can override defaults by setting fields on the `Tackle` CR under `spec`, for example:
+
+```
+spec:
+ kai_solution_server_enabled: true
+ kai_api_key_secret_name: my-kai-secret
+ kai_llm_provider: openai
+ kai_llm_model: gpt-4o-mini
+ kai_enable_demo_mode: "false"
+ kai_enable_trace: "true"
+```
+
+### Supported LLM Providers and Models
+
+The following table shows popular provider/model combinations (not exhaustive):
+
+| Provider (`kai_llm_provider`) | Model (`kai_llm_model`) |
+|----------|----------------|
+| `openai` | `gpt-4`, `gpt-4o`, `gpt-4o-mini`, `gpt-3.5-turbo` |
+| `azure_openai` | `gpt-4`, `gpt-35-turbo` |
+| `bedrock` | `anthropic.claude-3-5-sonnet-20241022-v2:0`, `meta.llama3-1-70b-instruct-v1:0` |
+| `google` | `gemini-2.0-flash-exp`, `gemini-1.5-pro` |
+| `ollama` | `llama3.1`, `codellama`, `mistral` |
+| `groq` | `llama-3.1-70b-versatile`, `mixtral-8x7b-32768` |
+| `anthropic` | `claude-3-5-sonnet-20241022`, `claude-3-haiku-20240307` |
+
+## Konveyor Storage Requirements
+
+Konveyor requires a total of 4 persistent volumes (PVs): 2 RWO and 2 RWX. When
+`kai_solution_server_enabled: true` is enabled, an additional RWO volume is required
+for the Kai database.
+
+Name | Default Size | Access Mode | Description
+--- | --- | --- | ---
+hub database | 5Gi | RWO | Hub DB
+hub bucket | 100Gi | RWX | Hub file storage
+keycloak postgresql | 1Gi | RWO | Keycloak backend DB
+cache | 100Gi | RWX | cache repository
+kai database | 5Gi | RWO | Kai DB (when kai_solution_server_enabled: true)
+
+### Konveyor Storage Custom Settings Example
+
+The example below requests a custom Hub bucket volume size and an RWX storage class for the cache volume:
+
+```
+kind: Tackle
+apiVersion: tackle.konveyor.io/v1alpha1
+metadata:
+ name: tackle
+ namespace:
+spec:
+ hub_bucket_volume_size: "50Gi"
+ cache_storage_class: "nfs"
+```
+
+## Backing up Konveyor on OpenShift
+See [oadp-backups](docs/oadp-backups.md)
+
+## Development
+
+See [development.md](docs/development.md) for details on how to contribute to the Tackle operator.
+
+## Konveyor Documentation
+
+See the [Konveyor Documentation](https://konveyor.github.io/konveyor/) for detailed installation instructions as well as how to use Konveyor.
+
+## Code of Conduct
+Refer to Konveyor's Code of Conduct [here](https://github.com/konveyor/community/blob/main/CODE_OF_CONDUCT.md).
diff --git a/data/readmes/koordinator-v170.md b/data/readmes/koordinator-v170.md
new file mode 100644
index 0000000..7a197f5
--- /dev/null
+++ b/data/readmes/koordinator-v170.md
@@ -0,0 +1,91 @@
+# Koordinator - README (v1.7.0)
+
+**Repository**: https://github.com/koordinator-sh/koordinator
+**Version**: v1.7.0
+
+---
+
+
+# Koordinator
+
+[](https://opensource.org/licenses/Apache-2.0)
+[](https://github.com/koordinator-sh/koordinator/releases/latest)
+[](https://github.com/koordinator-sh/koordinator/actions/workflows/ci.yaml)
+[](https://goreportcard.com/report/github.com/koordinator-sh/koordinator)
+[](https://codecov.io/github/koordinator-sh/koordinator)
+[](CONTRIBUTING.md)
+[](https://join.slack.com/t/koordinator-sh/shared_invite/zt-1756qoub4-Cn4~esfdlfAPsD7cwO2NzA)
+[](https://www.bestpractices.dev/projects/8846)
+
+English | [简体中文](./README-zh_CN.md)
+## Introduction
+
+Koordinator is a QoS based scheduling system for hybrid orchestration workloads on Kubernetes. Its goal is to improve the
+runtime efficiency and reliability of both latency sensitive workloads and batch jobs, simplify the complexity of
+resource-related configuration tuning, and increase pod deployment density to improve resource utilization.
+
+Koordinator enhances the Kubernetes user experience in workload management by providing the following:
+
+- Improved Resource Utilization: Koordinator is designed to optimize the utilization of cluster resources, ensuring that all nodes are used effectively and efficiently.
+- Enhanced Performance: By using advanced algorithms and techniques, Koordinator aims to improve the performance of Kubernetes clusters, reducing interference between containers and increasing the overall speed of the system.
+- Flexible Scheduling Policies: Koordinator provides a range of options for customizing scheduling policies, allowing administrators to fine-tune the behavior of the system to suit their specific needs.
+- Easy Integration: Koordinator is designed to be easy to integrate into existing Kubernetes clusters, allowing users to start using it quickly and with minimal hassle.
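+
+For a flavor of how workloads opt in to colocation, the sketch below marks a pod as best-effort (BE) QoS and requests Koordinator's batch resources. The label, resource names, and scheduler name follow Koordinator's documented conventions, but the image and sizes are illustrative:
+
+```
+cat << EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: batch-demo
+  labels:
+    # BE pods run on reclaimed capacity and may be throttled or
+    # evicted to protect latency-sensitive workloads.
+    koordinator.sh/qosClass: BE
+spec:
+  schedulerName: koord-scheduler
+  containers:
+    - name: worker
+      image: busybox
+      command: ["sleep", "3600"]
+      resources:
+        requests:
+          kubernetes.io/batch-cpu: "1000"
+          kubernetes.io/batch-memory: 1Gi
+        limits:
+          kubernetes.io/batch-cpu: "1000"
+          kubernetes.io/batch-memory: 1Gi
+EOF
+```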
+
+## Quick Start
+
+You can view the full documentation from the [Koordinator website](https://koordinator.sh/docs).
+
+- Install or upgrade Koordinator with [the latest version](https://koordinator.sh/docs/installation).
+- Refer to [best practices](https://koordinator.sh/docs/best-practices/colocation-of-spark-jobs) for
+  examples of running co-located workloads.
+
+## Code of conduct
+
+The Koordinator community is guided by our [Code of Conduct](CODE_OF_CONDUCT.md), which we encourage everybody to read
+before participating.
+
+In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make
+participation in our project and our community a harassment-free experience for everyone, regardless of age, body size,
+disability, ethnicity, level of experience, education, socio-economic status,
+nationality, personal appearance, race, religion, or sexual identity and orientation.
+
+## Contributing
+
+You are warmly welcome to hack on Koordinator. We have prepared a detailed guide [CONTRIBUTING.md](CONTRIBUTING.md).
+
+## Community
+
+The [koordinator-sh/community repository](https://github.com/koordinator-sh/community) hosts all information about
+the community: membership and how to become a member, development guidance, who to contact about what, and more.
+
+We encourage all contributors to become members. We aim to grow an active, healthy community of contributors, reviewers,
+and code owners. Learn more about requirements and responsibilities of membership in
+the [community membership](https://github.com/koordinator-sh/community/blob/main/community-membership.md) page.
+
+Active communication channels:
+
+- Bi-weekly Community Meeting (APAC, *Chinese*):
+ - Tuesday 19:30 GMT+8 (Asia/Shanghai)
+ - [Meeting Link(DingTalk)](https://meeting.dingtalk.com/j/ptVteJpQx5W)
+ - [Notes and agenda](https://alidocs.dingtalk.com/i/nodes/2Amq4vjg89jyZdNnCLw1Abx0W3kdP0wQ)
+- Slack(English): [koordinator channel](https://kubernetes.slack.com/channels/koordinator) in Kubernetes workspace
+- DingTalk(Chinese): Search Group ID `33383887` or scan the following QR Code
+
+
+
+
+
+## License
+
+Koordinator is licensed under the Apache License, Version 2.0. See [LICENSE](./LICENSE) for the full license text.
+
+
+## Security
+Please report vulnerabilities by email to kubernetes-security@service.aliyun.com. Also see our [SECURITY.md](./SECURITY.md) file for details.
\ No newline at end of file
diff --git a/data/readmes/kpt-v100-beta59.md b/data/readmes/kpt-v100-beta59.md
new file mode 100644
index 0000000..52f41a1
--- /dev/null
+++ b/data/readmes/kpt-v100-beta59.md
@@ -0,0 +1,102 @@
+# kpt - README (v1.0.0-beta.59)
+
+**Repository**: https://github.com/kptdev/kpt
+**Version**: v1.0.0-beta.59
+
+---
+
+
+
+
+[](https://www.bestpractices.dev/projects/10656)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkptdev%2Fkpt?ref=badge_shield)
+
+# kpt: Automate Kubernetes Configuration Editing
+
+kpt is a package-centric toolchain that enables a WYSIWYG configuration authoring, automation, and delivery experience,
+which simplifies managing Kubernetes platforms and KRM-driven infrastructure (e.g.,
+[Config Connector](https://github.com/GoogleCloudPlatform/k8s-config-connector), [Crossplane](https://crossplane.io)) at
+scale by manipulating declarative [Configuration as Data](docs/design-docs/06-config-as-data.md).
+
+*Configuration as Data* is an approach to management of configuration which:
+
+* makes configuration data the source of truth, stored separately from the live
+ state
+* uses a uniform, serializable data model to represent configuration
+* separates code that acts on the configuration from the data and from packages
+ / bundles of the data
+* abstracts configuration file structure and storage from operations that act
+ upon the configuration data; clients manipulating configuration data don’t
+ need to directly interact with storage (git, container images).
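+
+In practice, that workflow is a handful of CLI steps (a sketch using kpt's documented subcommands; the package URL is the nginx example from the kpt docs):
+
+```
+# Fetch a package (configuration data) into a local directory.
+kpt pkg get https://github.com/kptdev/kpt/package-examples/nginx@v0.9 nginx
+
+# Run the functions declared in the package's Kptfile against its resources.
+kpt fn render nginx
+
+# Apply with inventory tracking, which enables pruning and aggregated status.
+kpt live init nginx
+kpt live apply nginx --reconcile-timeout=2m
+```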
+
+See [the FAQ](https://kpt.dev/faq/) for more details about how kpt is different from alternatives.
+
+## Why kpt?
+
+kpt enables WYSIWYG editing and interoperable automation applied to declarative configuration data, similar to how the
+live state can be modified with imperative tools.
+
+See [the rationale](https://kpt.dev/guides/rationale) for more background.
+
+The best place to get started and learn about specific features of kpt is to visit the [kpt website](https://kpt.dev/).
+
+## Install kpt
+
+kpt installation instructions can be found on [kpt.dev/installation](https://github.com/kptdev/kpt/blob/main/documentation/content/en/installation/kpt-cli.md)
+
+## kpt components
+
+The kpt toolchain includes the following components:
+
+- **kpt CLI**: The [kpt CLI](https://kpt.dev/reference/cli/) supports package and function operations, and also
+ deployment, via either direct apply or GitOps. By keeping an inventory of deployed resources, kpt enables resource
+ pruning, aggregated status and observability, and an improved preview experience.
+
+- [**Function SDK**](https://github.com/kptdev/krm-functions-sdk): Any general-purpose or domain-specific language can
+ be used to create functions to transform and/or validate the YAML KRM input/output format, but we provide SDKs to
+ simplify the function authoring process in [Go](https://kpt.dev/book/05-developing-functions/#developing-in-Go).
+
+- [**Function catalog**](https://github.com/kptdev/krm-functions-catalog): A [catalog](https://catalog.kpt.dev/function-catalog) of
+ off-the-shelf, tested functions. kpt makes configuration easy to create and transform, via reusable functions. Because
+ they are expected to be used for in-place transformation, the functions need to be idempotent.
+
+## Roadmap
+
+You can read about the big upcoming features in the [roadmap doc](/docs/ROADMAP.md).
+
+## Contributing
+
+If you are interested in contributing please start with [contribution guidelines](CONTRIBUTING.md).
+
+## Contact
+
+We would love to keep in touch:
+
+1. Join our [Slack channel](https://kubernetes.slack.com/channels/kpt). You'll
+ need to join [Kubernetes on Slack](https://slack.k8s.io/) first.
+1. Join our [Discussions](https://github.com/kptdev/kpt/discussions)
+1. Join our [community meetings](https://zoom-lfx.platform.linuxfoundation.org/meeting/98980817322?password=c09cdcc7-59c0-49c4-9802-ad4d50faafcd&invite=true)
+
+## License
+
+Code is under the [Apache License 2.0](LICENSE), documentation is [CC BY 4.0](LICENSE-documentation).
+
+### License scanning status
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkptdev%2Fkpt?ref=badge_large)
+
+## Governance
+
+The governance of the kpt project and the KRM Functions Catalog is described in the
+[governance repo](https://github.com/kptdev/governance).
+
+## Code of Conduct
+
+The kpt project and the KRM Functions Catalog are following the
+[CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
+More information and links about the CNCF Code of Conduct are [here](code-of-conduct.md).
+
+## CNCF
+
+The kpt project including the KRM Functions Catalog is a [CNCF Sandbox](https://www.cncf.io/sandbox-projects/) project.
+
diff --git a/data/readmes/krator-v060.md b/data/readmes/krator-v060.md
new file mode 100644
index 0000000..35f3a55
--- /dev/null
+++ b/data/readmes/krator-v060.md
@@ -0,0 +1,50 @@
+# Krator - README (v0.6.0)
+
+**Repository**: https://github.com/krator-rs/krator
+**Version**: v0.6.0
+
+---
+
+# Krator: Kubernetes Operators using State Machines
+
+[](https://bestpractices.coreinfrastructure.org/projects/5066)
+
+:construction: :construction: **This project is highly experimental.**
+:construction: :construction: It should not be used in production workloads.
+
+Krator acts as an Operator by watching Kubernetes resources and running
+control loops to reconcile cluster state with desired state. Control loops are
+specified using a State Machine API pattern which improves reliability and
+reduces complexity.
+
+## Documentation
+
+[API Documentation](https://docs.rs/krator)
+
+Looking for the developer guide? [Start here](docs/community/developers.md).
+
+## Examples
+
+[Moose Operator](krator/examples)
+
+## Community, discussion, contribution, and support
+
+You can reach the Krator community and developers via the following channels:
+
+- [Kubernetes Slack](https://kubernetes.slack.com):
+ - [#krator](https://kubernetes.slack.com/messages/krator)
+
+## Code of Conduct
+
+This project has adopted the [Microsoft Open Source Code of
+Conduct](https://opensource.microsoft.com/codeofconduct/).
+
+For more information see the [Code of Conduct
+FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
+[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional
+questions or comments.
+
+## Vulnerability Reporting
+
+For sensitive issues, please email one of the project maintainers. For
+other issues, please open an issue in this GitHub repository.
diff --git a/data/readmes/krkn-v4015.md b/data/readmes/krkn-v4015.md
new file mode 100644
index 0000000..fb5033b
--- /dev/null
+++ b/data/readmes/krkn-v4015.md
@@ -0,0 +1,50 @@
+# Krkn - README (v4.0.15)
+
+**Repository**: https://github.com/krkn-chaos/krkn
+**Version**: v4.0.15
+
+---
+
+# Krkn aka Kraken
+
+
+
+[](https://www.bestpractices.dev/projects/10548)
+
+
+
+Chaos and resiliency testing tool for Kubernetes.
+Kraken injects deliberate failures into Kubernetes clusters to check whether they are resilient to turbulent conditions.
+
+
+### Workflow
+
+
+
+
+
+
+### How to Get Started
+Instructions on how to setup, configure and run Kraken can be found in the [documentation](https://krkn-chaos.dev/docs/).
+
+
+### Blogs, podcasts and interviews
+Additional resources, including blog posts, podcasts, and community interviews, can be found on the [website](https://krkn-chaos.dev/blog)
+
+
+### Roadmap
+Enhancements being planned can be found in the [roadmap](ROADMAP.md).
+
+
+### Contributions
+We are always looking for enhancements and fixes to make it better; any contributions are most welcome. Feel free to report or work on the issues filed on GitHub.
+
+[More information on how to Contribute](https://krkn-chaos.dev/docs/contribution-guidelines/)
+
+
+### Community
+Key members (Slack username/full name): paigerube14/Paige Rubendall, mffiedler/Mike Fiedler, tsebasti/Tullio Sebastiani, yogi/Yogananth Subramanian, sahil/Sahil Shah, pradeep/Pradeep Surisetty, and ravielluri/Naga Ravi Chaitanya Elluri.
+* [**#krkn on Kubernetes Slack**](https://kubernetes.slack.com/messages/C05SFMHRWK1)
+
+The Linux Foundation® (TLF) has registered trademarks and uses trademarks. For a list of TLF trademarks, see [Trademark Usage](https://www.linuxfoundation.org/legal/trademark-usage).
diff --git a/data/readmes/krustlet-v100-alpha1.md b/data/readmes/krustlet-v100-alpha1.md
new file mode 100644
index 0000000..570c102
--- /dev/null
+++ b/data/readmes/krustlet-v100-alpha1.md
@@ -0,0 +1,48 @@
+# Krustlet - README (v1.0.0-alpha.1)
+
+**Repository**: https://github.com/krustlet/krustlet
+**Version**: v1.0.0-alpha.1
+
+---
+
+⚠️ This project is currently not actively maintained, and most of the maintainers have moved on to other WebAssembly-related projects. The project could still be useful to anyone who wants to write a custom Kubelet, and its sister project [Krator](https://github.com/krustlet/krator) is a state-machine-based solution for writing Kubernetes controllers/operators in Rust. If anyone is interested in maintaining these projects, please feel free to reach out!
+
+
+[](https://bestpractices.coreinfrastructure.org/projects/5292)
+
+# Krustlet: Kubernetes Kubelet in Rust for running WASM
+
+:postal_horn: Krustlet 1.0 coming soon!
+
+Krustlet acts as a Kubelet by listening on the event stream for new pods that
+the scheduler assigns to it based on specific Kubernetes
+[tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).
+
+The default implementation of Krustlet listens for the architecture
+`wasm32-wasi` and schedules those workloads to run in a `wasmtime`-based runtime
+instead of a container runtime.
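+
+A workload targets a Krustlet node via a node selector and tolerations matching the node's taints. The sketch below follows the pattern from Krustlet's demos (the module image is a placeholder for a published wasm32-wasi module):
+
+```
+cat << EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: wasi-demo
+spec:
+  containers:
+    - name: wasi-demo
+      image: registry.example.com/wasi-demo:v1.0.0  # wasm32-wasi module image
+  nodeSelector:
+    kubernetes.io/arch: wasm32-wasi
+  tolerations:
+    # Krustlet nodes are tainted so ordinary container pods avoid them;
+    # wasm workloads must tolerate those taints explicitly.
+    - key: "kubernetes.io/arch"
+      operator: "Equal"
+      value: "wasm32-wasi"
+      effect: "NoExecute"
+    - key: "kubernetes.io/arch"
+      operator: "Equal"
+      value: "wasm32-wasi"
+      effect: "NoSchedule"
+EOF
+```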
+
+## Documentation
+
+If you're new to the project, get started with [the
+introduction](https://docs.krustlet.dev/intro). For more in-depth information about
+Krustlet, plunge right into the [topic guides](https://docs.krustlet.dev/topics).
+
+Looking for the developer guide? [Start here](https://docs.krustlet.dev/community/developers).
+
+## Community, discussion, contribution, and support
+
+You can reach the Krustlet community and developers via the following channels:
+
+- [Kubernetes Slack](https://kubernetes.slack.com):
+ - [#krustlet](https://kubernetes.slack.com/messages/krustlet)
+- Public Community Call on Mondays at 1:00 PM PT:
+ - [Zoom](https://us04web.zoom.us/j/71695031152?pwd=T0g1d0JDZVdiMHpNNVF1blhxVC9qUT09)
+ - Download the meeting calendar invite
+ [here](./community_meeting.ics)
+
+## Code of Conduct
+
+This project has adopted the [CNCF Code of
+Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
diff --git a/data/readmes/kserve-v0160.md b/data/readmes/kserve-v0160.md
new file mode 100644
index 0000000..82a2805
--- /dev/null
+++ b/data/readmes/kserve-v0160.md
@@ -0,0 +1,79 @@
+# KServe - README (v0.16.0)
+
+**Repository**: https://github.com/kserve/kserve
+**Version**: v0.16.0
+
+---
+
+# KServe
+[](https://pkg.go.dev/github.com/kserve/kserve)
+[](https://github.com/kserve/kserve/actions/workflows/go.yml)
+[](https://goreportcard.com/report/github.com/kserve/kserve)
+[](https://bestpractices.coreinfrastructure.org/projects/6643)
+[](https://github.com/kserve/kserve/releases)
+[](https://github.com/kserve/kserve/blob/master/LICENSE)
+[](https://github.com/kserve/community/blob/main/README.md#questions-and-issues)
+[](https://gurubase.io/g/kserve)
+
+KServe is a standardized distributed generative and predictive AI inference platform for scalable, multi-framework deployment on Kubernetes.
+
+KServe is being [used by many organizations](https://kserve.github.io/website/docs/community/adopters) and is a [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/) incubating project.
+
+For more details, visit the [KServe website](https://kserve.github.io/website/).
+
+
+
+### Why KServe?
+
+KServe is a single platform that unifies Generative and Predictive AI inference on Kubernetes. It is simple enough for quick deployments, yet powerful enough to handle enterprise-scale AI workloads with advanced features.
+
+### Features
+
+**Generative AI**
+ * 🧠 **LLM-Optimized**: OpenAI-compatible inference protocol for seamless integration with large language models
+ * 🚅 **GPU Acceleration**: High-performance serving with GPU support and optimized memory management for large models
+ * 💾 **Model Caching**: Intelligent model caching to reduce loading times and improve response latency for frequently used models
+ * 🗂️ **KV Cache Offloading**: Advanced memory management with KV cache offloading to CPU/disk for handling longer sequences efficiently
+ * 📈 **Autoscaling**: Request-based autoscaling capabilities optimized for generative workload patterns
+ * 🔧 **Hugging Face Ready**: Native support for Hugging Face models with streamlined deployment workflows
+
+**Predictive AI**
+ * 🧮 **Multi-Framework**: Support for TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX, and more
+ * 🔀 **Intelligent Routing**: Seamless request routing between predictor, transformer, and explainer components with automatic traffic management
+ * 🔄 **Advanced Deployments**: Canary rollouts, inference pipelines, and ensembles with InferenceGraph
+ * ⚡ **Autoscaling**: Request-based autoscaling with scale-to-zero for predictive workloads
+ * 🔍 **Model Explainability**: Built-in support for model explanations and feature attribution to understand prediction reasoning
+ * 📊 **Advanced Monitoring**: Enables payload logging, outlier detection, adversarial detection, and drift detection
+ * 💰 **Cost Efficient**: Scale-to-zero on expensive resources when not in use, reducing infrastructure costs
+
+### Learn More
+To learn more about KServe, how to use various supported features, and how to participate in the KServe community,
+please follow the [KServe website documentation](https://kserve.github.io/website).
+Additionally, we have compiled a list of [presentations and demos](https://kserve.github.io/website/docs/community/presentations) that dive into various details.
+
+### :hammer_and_wrench: Installation
+
+#### Standalone Installation
+- **[Standard Kubernetes Installation](https://kserve.github.io/website/docs/admin-guide/overview#raw-kubernetes-deployment)**: Compared to the Serverless Installation, this is a more **lightweight** installation. However, this option does not support canary deployment or request-based autoscaling with scale-to-zero.
+- **[Knative Installation](https://kserve.github.io/website/docs/admin-guide/overview#serverless-deployment)**: KServe by default installs Knative for **serverless deployment** of InferenceServices.
+- **[ModelMesh Installation](https://kserve.github.io/website/docs/admin-guide/overview#modelmesh-deployment)**: You can optionally install ModelMesh to enable **high-scale**, **high-density** and **frequently-changing model serving** use cases.
+- **[Quick Installation](https://kserve.github.io/website/docs/getting-started/quickstart-guide)**: Install KServe on your local machine.
+
+#### Kubeflow Installation
+KServe is an important add-on component of Kubeflow; learn more from the [Kubeflow KServe documentation](https://www.kubeflow.org/docs/external-add-ons/kserve/kserve). Check out the following guides for running [on AWS](https://awslabs.github.io/kubeflow-manifests/main/docs/component-guides/kserve) or [on OpenShift Container Platform](https://github.com/kserve/kserve/blob/master/docs/OPENSHIFT_GUIDE.md).
+
+### :flight_departure: [Create your first InferenceService](https://kserve.github.io/website/docs/getting-started/genai-first-isvc)
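+
+For a taste of the API, a minimal predictive `InferenceService` looks roughly like the following sketch (modeled on the quickstart guides; the model name and `storageUri` here are illustrative):
+
+```yaml
+apiVersion: serving.kserve.io/v1beta1
+kind: InferenceService
+metadata:
+  name: sklearn-iris
+spec:
+  predictor:
+    model:
+      modelFormat:
+        name: sklearn
+      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
+```
+
+Applying it with `kubectl apply -f` creates the service; the getting-started guide linked above walks through sending requests to it.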
+
+### :bulb: [Roadmap](./ROADMAP.md)
+
+### :blue_book: [InferenceService API Reference](https://kserve.github.io/website/docs/reference/crd-api)
+
+### :toolbox: [Developer Guide](https://kserve.github.io/website/docs/developer-guide)
+
+### :writing_hand: [Contributor Guide](https://kserve.github.io/website/docs/developer-guide/contribution)
+
+### :handshake: [Adopters](https://kserve.github.io/website/docs/community/adopters)
+
+### Star History
+
+[](https://www.star-history.com/#kserve/kserve&Date)
diff --git a/data/readmes/kuadrant-v131.md b/data/readmes/kuadrant-v131.md
new file mode 100644
index 0000000..f8bd7a7
--- /dev/null
+++ b/data/readmes/kuadrant-v131.md
@@ -0,0 +1,41 @@
+# Kuadrant - README (v1.3.1)
+
+**Repository**: https://github.com/Kuadrant/kuadrant-operator
+**Version**: v1.3.1
+
+---
+
+# Kuadrant Operator
+
+[](https://github.com/Kuadrant/kuadrant-operator/actions/workflows/code-style.yaml)
+[](https://github.com/Kuadrant/kuadrant-operator/actions/workflows/test.yaml)
+[](https://codecov.io/gh/Kuadrant/kuadrant-operator)
+[](http://www.apache.org/licenses/LICENSE-2.0)
+[](https://www.bestpractices.dev/projects/9242)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2FKuadrant%2Fkuadrant-operator?ref=badge_shield)
+
+## Overview
+
+Kuadrant leverages [Gateway API](https://gateway-api.sigs.k8s.io/) and [Policy Attachment](https://gateway-api.sigs.k8s.io/geps/gep-2648/) to enhance gateway providers like [Istio](https://istio.io/latest/docs/tasks/traffic-management/ingress/gateway-api/) and [Envoy Gateway](https://gateway.envoyproxy.io/) with additional features via Policies. Those features include TLS, DNS, application authentication & authorization, and rate limiting.
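+
+As a flavor of the policy-attachment model, a rate-limit policy targeting an HTTPRoute looks roughly like this (a hedged sketch — field names and the rate format vary between Kuadrant API versions, so check the docs for your release; the route name and limit values are illustrative):
+
+```yaml
+apiVersion: kuadrant.io/v1
+kind: RateLimitPolicy
+metadata:
+  name: toystore-rate-limit
+spec:
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: toystore
+  limits:
+    "global":
+      rates:
+        - limit: 5
+          window: 10s
+```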
+
+You can find more information on the different aspects of Kuadrant at the documentation links below:
+
+- [Overview](https://docs.kuadrant.io)
+- [Getting Started & Installation](https://docs.kuadrant.io/dev/getting-started/)
+- [Architecture](https://docs.kuadrant.io/dev/architecture/docs/design/architectural-overview-v1/)
+
+## Contributing
+
+The [Development guide](doc/overviews/development.md) describes how to build the kuadrant operator and
+how to test your changes before submitting a patch or opening a PR.
+
+Join us on the [#kuadrant](https://kubernetes.slack.com/archives/C05J0D0V525) channel in the Kubernetes Slack workspace,
+for live discussions about the roadmap and more.
+
+## Licensing
+
+This software is licensed under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
+
+See the LICENSE and NOTICE files that should have been provided along with this software for details.
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2FKuadrant%2Fkuadrant-operator?ref=badge_large)
diff --git a/data/readmes/kuasar-v101-alpha1.md b/data/readmes/kuasar-v101-alpha1.md
new file mode 100644
index 0000000..9ea47f7
--- /dev/null
+++ b/data/readmes/kuasar-v101-alpha1.md
@@ -0,0 +1,200 @@
+# Kuasar - README (v1.0.1-alpha1)
+
+**Repository**: https://github.com/kuasar-io/kuasar
+**Version**: v1.0.1-alpha1
+
+---
+
+
+
+
+
+Kuasar is an efficient container runtime that provides cloud-native, all-scenario container solutions by supporting multiple sandbox techniques. Written in Rust, it offers a standard sandbox abstraction based on the sandbox API. Additionally, Kuasar provides an optimized framework to accelerate container startup and reduce unnecessary overheads.
+
+# Supported Sandboxes
+
+| Sandboxer | Sandbox | Status |
+|------------|------------------|-----------------|
+| MicroVM | Cloud Hypervisor | Supported |
+| | QEMU | Supported |
+| | Firecracker | Planned in 2024 |
+| | StratoVirt | Supported |
+| Wasm | WasmEdge | Supported |
+| | Wasmtime | Supported |
+| | Wasmer | Planned in 2024 |
+| App Kernel | gVisor | Planned in 2024 |
+| | Quark | Supported |
+| runC | runC | Supported |
+# Why Kuasar?
+
+In the container world, a sandbox is a technique used to separate container processes from each other and from the operating system itself. After the introduction of the [Sandbox API](https://github.com/containerd/containerd/issues/4131), sandboxes became first-class citizens in containerd. With more and more sandbox techniques available in the container world, a management service called a "sandboxer" is needed.
+
+Kuasar supports various types of sandboxers, making it possible for users to select the most appropriate sandboxer for each application, according to application requirements.
+
+Compared with other container runtimes, Kuasar has the following advantages:
+
++ **Unified Sandbox Abstraction**: The sandbox is a first-class citizen in Kuasar, as Kuasar is built entirely upon the Sandbox API, which was previewed by the containerd community in October 2022. Kuasar fully utilizes the advantages of the Sandbox API, providing a unified way to access and manage sandboxes and improving sandbox O&M efficiency.
++ **Multi-Sandbox Colocation**: Kuasar has built-in support for mainstream sandboxes, allowing multiple types of sandboxes to run on a single node. Kuasar is able to balance user's demands for security isolation, fast startup, and standardization, and enables a serverless node resource pool to meet various cloud-native scenario requirements.
++ **Optimized Framework**: Kuasar is optimized by removing all pause containers and replacing shim processes with a single resident sandboxer process, resulting in a 1:N process management model that performs better than the current 1:1 shim v2 process model. Benchmark results showed that Kuasar's sandbox startup is 2x faster, while the resource overhead for management is reduced by 99%. For more details, please refer to [Performance](#performance).
++ **Open and Neutral**: Kuasar is committed to building an open and compatible multi-sandbox technology ecosystem. Thanks to the Sandbox API, it is more convenient and time-saving to integrate sandbox technologies. Kuasar keeps an open and neutral attitude towards sandbox technologies, therefore all sandbox technologies are welcome. Currently, the Kuasar project is collaborating with open-source communities and projects such as WasmEdge, openEuler and QuarkContainers.
+
+# Kuasar Architecture
+
+
+
+Sandboxers in Kuasar use their own isolation techniques for the containers, and they are also external plugins of containerd built on the new sandbox plugin mechanism. A discussion about the sandboxer plugin was raised in this [containerd issue](https://github.com/containerd/containerd/issues/7739), with a community meeting record and slides attached in this [comment](https://github.com/containerd/containerd/issues/7739#issuecomment-1384797825). This feature has now been put into the containerd 2.0 milestone.
+
+Currently, Kuasar provides three types of sandboxers - **MicroVM Sandboxer**, **App Kernel Sandboxer** and **Wasm Sandboxer** - all of which have been proven to be secure isolation techniques in a multi-tenant environment. The general architecture of a sandboxer consists of two modules: one that implements the Sandbox API to manage the sandbox's lifecycle, and the other that implements the Task API to handle operations related to containers.
+
+Additionally, Kuasar is a platform under active development, and we welcome more sandboxers built on top of it, such as the runC sandboxer.
+
+## MicroVM Sandboxer
+
+In the microVM sandbox scenario, the VM process provides complete virtual machines and Linux kernels based on open-source VMMs such as [Cloud Hypervisor](https://www.cloudhypervisor.org/), [StratoVirt](https://gitee.com/openeuler/stratovirt), [Firecracker](https://firecracker-microvm.github.io/) and [QEMU](https://www.qemu.org/). **All of these VMs must run on a virtualization-enabled node; otherwise, they won't work!** Hence, the `vmm-sandboxer` of the MicroVM sandboxer is responsible for launching VMs and calling their APIs, and the `vmm-task`, as the init process inside the VM, runs the container processes. Container IO can be exported via vsock or uds.
+
+The microVM sandboxer avoids the need to run a shim process on the host, resulting in a cleaner and more manageable architecture with only one process per pod.
+
+
+
+*Please note that only Cloud Hypervisor, StratoVirt and QEMU are supported currently.*
+
+## App Kernel Sandboxer
+
+The app kernel sandbox launches a KVM virtual machine with a tailored guest kernel, rather than a full application-level hypervisor and general-purpose Linux kernel. This allows for customized optimization to speed up the startup procedure, reduce memory overhead, and improve IO and network performance. Examples of such app kernel sandboxes include [gVisor](https://gvisor.dev/) and [Quark](https://github.com/QuarkContainer/Quark).
+
+Quark is an application kernel sandbox that utilizes its own hypervisor named `QVisor` and a customized kernel called `QKernel`. With customized modifications to these components, Quark can achieve significant performance gains.
+
+The `quark-sandboxer` of the app kernel sandboxer starts `QVisor` and an app kernel named `QKernel`. Whenever containerd needs to start a container in the sandbox, the `quark-task` in `QVisor` calls `QKernel` to launch a new container. All containers within the same pod run within the same process.
+
+
+
+*Please note that only Quark is currently supported.*
+
+## Wasm Sandboxer
+
+The wasm sandbox, such as [WasmEdge](https://wasmedge.org/) or [Wasmtime](https://wasmtime.dev/), is incredibly lightweight, but it may have constraints for some applications at present. The `wasm-sandboxer` and `wasm-task` launch containers within a WebAssembly runtime. Whenever containerd needs to start a container in the sandbox, the `wasm-task` will fork a new process, start a new WasmEdge runtime, and run the Wasm code inside it. All containers within the same pod will share the same Namespace/Cgroup resources with the `wasm-task` process.
+
+
+## Runc Sandboxer
+
+Besides secure containers, Kuasar also supports [runC](https://github.com/opencontainers/runc) containers. To create a separate namespace, the `runc-sandboxer` spawns a lightweight process via a double fork, which then becomes PID 1. Based on this namespace, the `runc-task` can create the container process and join the namespace. If a container needs a private namespace, it unshares a new namespace for itself.
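The double-fork trick can be illustrated outside of Kuasar. The sketch below (in Python for brevity — Kuasar itself is written in Rust) shows how the grandchild ends up re-parented away from the caller; the privileged `unshare`/PID-namespace step that would make it PID 1 of a new namespace is omitted:

```python
import os

def spawn_detached():
    """Double-fork so the final process is re-parented away from the caller.

    A sandboxer would additionally unshare() a new PID namespace so the
    grandchild becomes PID 1 of that namespace; that step needs privileges
    and is omitted from this sketch.
    """
    r, w = os.pipe()
    pid = os.fork()
    if pid > 0:                    # original process
        os.close(w)
        os.waitpid(pid, 0)         # reap the intermediate child
        msg = os.read(r, 64).decode()
        os.close(r)
        return msg
    # Intermediate child: fork again and exit immediately, so the
    # grandchild is orphaned and re-parented (to init or a subreaper).
    if os.fork() > 0:
        os._exit(0)
    # Grandchild: detached from the original process tree.
    os.close(r)
    os.write(w, b"grandchild detached")
    os._exit(0)

print(spawn_detached())  # -> grandchild detached
```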
+
+
+
+# Performance
+
+The performance of Kuasar is measured by two metrics:
+
++ End-to-End containers startup time.
++ Process memory consumption to run containers.
+
+We used Cloud Hypervisor in the benchmark test and measured the startup time of 100 pods under both serial and parallel scenarios. The results demonstrate that Kuasar outperforms the open-source [Kata-containers](https://github.com/kata-containers/kata-containers) in terms of both startup speed and memory consumption.
+
+For detailed test scripts, test data, and results, please refer to the [benchmark test](tests/benchmark/Benchmark.md).
+
+# Quick Start
+
+## Prerequisites
+
+### 1. OS
+The minimum versions of Linux distributions supported by Kuasar are *Ubuntu 22.04*, *CentOS 8*, or *openEuler 23.03*.
+
+Please also note that Quark requires a Linux kernel version >= 5.15.
+
+### 2. Sandbox
+
++ MicroVM: To launch a microVM-based sandbox, a hypervisor must be installed on the **virtualization-enabled** host.
+ + It is recommended to install Cloud Hypervisor by default. You can find Cloud Hypervisor installation instructions [here](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/main/docs/building.md).
+ + If you want to run kuasar with iSulad container engine and StratoVirt hypervisor, you can refer to this guide [how-to-run-kuasar-with-isulad-and-stratovirt](docs/vmm/how-to-run-kuasar-with-isulad-and-stratovirt.md).
++ Quark: To use Quark, please refer to the installation instructions [here](docs/quark/README.md).
++ WasmEdge: To start WebAssembly sandboxes, you need to install WasmEdge v0.13.5. Instructions for installing WasmEdge can be found in [install-a-specific-version-of-wasmedge](https://wasmedge.org/docs/start/install/#install-a-specific-version-of-wasmedge).
+
+### 3. containerd
+
+Kuasar sandboxers are external plugins of containerd, so both containerd and its CRI plugin are required in order to manage the sandboxes and containers.
+
+We offer two ways to integrate Kuasar with containerd:
+
++ **EXPERIMENTAL in the containerd 2.0 milestone**: If you desire the full experience of Kuasar, please install the [containerd under the kuasar-io organization](docs/containerd.md). Rest assured that this containerd is built on the official v1.7.0, so there is no need to worry about missing functionality.
+
++ If compatibility is a concern, you can install the official containerd v1.7.0 with an extra [kuasar-shim](shim) for request forwarding; see [here](docs/shim/README.md). However, this approach may be deprecated in the future as containerd 2.0 evolves.
+
+### 4. crictl
+
+Since Kuasar is built on top of the Sandbox API, which has already been integrated into the CRI of containerd, it makes sense to experience Kuasar from the CRI level.
+
+`crictl` is a debug CLI for CRI. To install it, please see [here](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md#install-crictl)
+
+### 5. virtiofsd
+
+MicroVM hypervisors like Cloud Hypervisor need a virtiofs daemon to share directories on the host. Please refer to the [virtiofsd guide](https://gitlab.com/virtio-fs/virtiofsd).
+
+## Build from source
+
+Rust 1.67 or higher is required to compile Kuasar. Build it as the root user:
+
+```shell
+git clone https://github.com/kuasar-io/kuasar.git
+cd kuasar
+make all
+make install
+```
+
+> Tips: The `make all` build command downloads Rust and Golang packages from the internet, so you may need to set the `http_proxy` and `https_proxy` environment variables for the `make all` command.
+>
+> If a self-signed certificate is used in the environment where `make all` is executed, you may encounter SSL failures when downloading resources from HTTPS URLs. In that case, provide a CA-signed certificate, copy it into the root directory of the Kuasar project, and rename it to "proxy.crt". The build script will then use the "proxy.crt" certificate to access the HTTPS URLs of the Rust and Golang installation packages.
+
+## Start Kuasar
+
+Launch the sandboxers by the following commands:
+
++ For vmm: `nohup vmm-sandboxer --listen /run/vmm-sandboxer.sock --dir /run/kuasar-vmm &`
++ For quark: `nohup quark-sandboxer --listen /run/quark-sandboxer.sock --dir /var/lib/kuasar-quark &`
++ For wasm: `nohup wasm-sandboxer --listen /run/wasm-sandboxer.sock --dir /run/kuasar-wasm &`
++ For runc: `nohup runc-sandboxer --listen /run/runc-sandboxer.sock --dir /run/kuasar-runc &`
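+
+On the containerd side, each sandboxer is typically registered as a proxy plugin of type `sandbox` in containerd's `config.toml` — roughly like the fragment below (a hedged sketch; consult docs/containerd.md and docs/shim/README.md for the exact configuration for your setup):
+
+```toml
+[proxy_plugins.vmm]
+  type = "sandbox"
+  address = "/run/vmm-sandboxer.sock"
+```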
+
+## Start Container
+
+Since Kuasar is a low-level container runtime, all interactions should go through the CRI in containerd, e.g. via crictl or Kubernetes. The examples below use crictl:
+
++ For vmm, quark or runc, run the following scripts:
+
+ `examples/run_example_container.sh kuasar-vmm`, `examples/run_example_container.sh kuasar-quark` or `examples/run_example_container.sh kuasar-runc`
+
++ For wasm: A Wasm container needs its own container image, so the script first builds and imports the container image.
+
+ `examples/run_example_wasm_container.sh`
+
+# Contact
+
+If you have questions, feel free to reach out to us in the following ways:
+
+- [mailing list](https://groups.google.com/forum/#!forum/kuasar)
+- [slack](https://cloud-native.slack.com/archives/C052JRURD8V) | [Join](https://slack.cncf.io/)
+
+# Contributing
+
+If you're interested in being a contributor and want to get involved in developing the Kuasar code, please see [CONTRIBUTING](CONTRIBUTING.md) for details on submitting patches and the contribution workflow.
+
+# License
+
+Kuasar is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
+
+Kuasar documentation is under the [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/legalcode).
diff --git a/data/readmes/kube-burner-v200.md b/data/readmes/kube-burner-v200.md
new file mode 100644
index 0000000..5d3ae50
--- /dev/null
+++ b/data/readmes/kube-burner-v200.md
@@ -0,0 +1,74 @@
+# Kube-burner - README (v2.0.0)
+
+**Repository**: https://github.com/kube-burner/kube-burner
+**Version**: v2.0.0
+
+---
+
+
+
+[](https://goreportcard.com/report/github.com/kube-burner/kube-burner)
+[](https://opensource.org/licenses/Apache-2.0)
+[](https://www.bestpractices.dev/projects/8264)
+
+# What is Kube-burner
+
+Kube-burner is a Kubernetes performance and scale test orchestration toolset. It provides multi-faceted functionality, the most important of which is summarized below.
+
+- Create, delete and patch Kubernetes resources at scale.
+- Prometheus metric collection and indexing.
+- Measurements.
+- Alerting.
+
+Kube-burner is a binary application written in Golang that makes extensive use of the official k8s client library, [client-go](https://github.com/kubernetes/client-go).
+
+
+
+
+## Documentation
+
+Documentation is [available here](https://kube-burner.github.io/kube-burner/)
+
+## Quick start
+
+Install latest stable version with:
+
+```shell
+curl -Ls https://raw.githubusercontent.com/kube-burner/kube-burner/refs/heads/main/hack/install.sh | sh
+```
+
+> [!NOTE]
+> The default installation path is `${HOME}/.local/bin/`; you can change it by setting the `INSTALL_DIR` environment variable
+> before running the script
+
+## Downloading Kube-burner
+
+In case you want to start tinkering with Kube-burner now:
+
+- You can find the binaries in the [releases section of the repository](https://github.com/kube-burner/kube-burner/releases).
+- There's also a container image available at [quay](https://quay.io/repository/kube-burner/kube-burner?tab=tags).
+
+### Example configs
+
+Example configuration files can be found at the [examples directory](./examples/workloads).
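+
+For orientation, a kube-burner job configuration is a YAML file roughly along these lines (a hedged sketch, not a complete config — see the examples directory for working workloads; the template path and values here are illustrative):
+
+```yaml
+jobs:
+  - name: api-intensive
+    jobIterations: 10        # how many times to run the job
+    qps: 20                  # max requests per second to the API server
+    burst: 40
+    namespace: kube-burner-test
+    objects:
+      - objectTemplate: templates/deployment.yml
+        replicas: 1
+```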
+
+## Building from Source
+
+Kube-burner provides multiple build options:
+
+- `make build` - Standard build for development
+- `make build-release` - Optimized release build
+- `make build-hardened` - Security-hardened static binary
+- `make build-hardened-cgo` - Full security hardening with CGO (requires glibc)
+
+The default builds produce static binaries that work across all Linux distributions. The CGO-hardened build offers additional security features but requires glibc to be present on the target system.
+
+## Contributing Guidelines, CI, and Code Style
+
+Please read the [Contributing section](https://kube-burner.github.io/kube-burner/latest/contributing/) before contributing to this project. It provides information on how to contribute, guidelines for setting up an environment, and CI checks to run before committing code.
+
+This project utilizes a Continuous Integration (CI) pipeline to ensure code quality and maintain project standards. The CI process automatically builds, tests, and verifies the project on each commit and pull request.
+
+## Code of Conduct
+
+This project is for everyone. We ask that our users and contributors take a few minutes to review our [Code of Conduct](./code-of-conduct.md).
diff --git a/data/readmes/kube-ovn-v11418.md b/data/readmes/kube-ovn-v11418.md
new file mode 100644
index 0000000..7a8a580
--- /dev/null
+++ b/data/readmes/kube-ovn-v11418.md
@@ -0,0 +1,110 @@
+# Kube-OVN - README (v1.14.18)
+
+**Repository**: https://github.com/kubeovn/kube-ovn
+**Version**: v1.14.18
+
+---
+
+
+
+[](https://github.com/kubeovn/kube-ovn/blob/master/LICENSE)
+[](https://github.com/kubeovn/kube-ovn/releases)
+[](https://img.shields.io/docker/pulls/kubeovn/kube-ovn)
+
+[](https://goreportcard.com/report/github.com/kubeovn/kube-ovn)
+
+[Chinese Documentation](https://kubeovn.github.io/docs/)
+
+If you are looking for a powerful networking solution that excels in both container and VM scenarios, or if you need robust multi-tenant networking capabilities, Kube-OVN is your ideal choice in the Cloud Native era.
+
+Kube-OVN, a [CNCF Sandbox Level Project](https://www.cncf.io/sandbox-projects/), integrates OVN-based Network Virtualization with Kubernetes. It provides enhanced support for KubeVirt workloads and unique multi-tenant capabilities, offering enterprise-grade network features with superior performance and simplified operations.
+
+## Community
+The Kube-OVN community is waiting for your participation!
+
+- 💭 Chat with us at [Slack](https://communityinviter.com/apps/kube-ovn/kube-ovn)
+- 📅 Join the [Online Meeting](https://docs.google.com/document/d/1OPFC3s0rVxGkLR5GaUayNC6Nx9lwvjapg_hQl4MWt3E/edit#heading=h.1e73t98gdg9l)
+- 💬 WeChat users: please [fill in this form](https://jinshuju.net/f/lyrEow) to join the discussion group!
+
+## Features
+- **Namespaced Subnets**: Each Namespace can have a unique Subnet (backed by a Logical Switch). Pods within the Namespace will have IP addresses allocated from the Subnet. It's also possible for multiple Namespaces to share a Subnet.
+- **VLAN/Underlay Support**: In addition to the overlay network, Kube-OVN also supports underlay and VLAN mode networks for better performance and direct connectivity with the physical network.
+- **VPC Support**: Multi-tenant network with independent address spaces, where each tenant has its own network infrastructure such as eips, nat gateways, security groups and loadbalancers.
+- **Static IP Addresses for Workloads**: Allocate random or static IP addresses to workloads.
+- **Multi-Cluster Network**: Connect different Kubernetes/Openstack clusters into one L3 network.
+- **Troubleshooting Tools**: Handy tools to diagnose, trace, monitor, and dump container network traffic to help troubleshoot complicated network issues.
+- **Prometheus & Grafana Integration**: Exposing network quality metrics like pod/node/service/dns connectivity/latency in Prometheus format.
+- **ARM Support**: Kube-OVN can run on x86_64 and arm64 platforms.
+- **Windows Support**: Kube-OVN can run on Windows worker nodes.
+- **Subnet Isolation**: Can configure a Subnet to deny any traffic from source IP addresses not within the same Subnet. Can whitelist specific IP addresses and IP ranges.
+- **Network Policy**: Implements the networking.k8s.io/NetworkPolicy API via high-performance OVN ACLs.
+- **DualStack IP Support**: Pod can run in IPv4-Only/IPv6-Only/DualStack mode.
+- **Pod NAT and EIP**: Manage pod external traffic and external IPs like a traditional VM.
+- **IPAM for Multi NIC**: A cluster-wide IPAM for CNI plugins other than Kube-OVN, such as macvlan/vlan/host-device to take advantage of subnet and static ip allocation functions in Kube-OVN.
+- **Dynamic QoS**: Configure Pod/Gateway Ingress/Egress traffic rate/priority/loss/latency on the fly.
+- **Embedded Load Balancers**: Replace kube-proxy with the OVN embedded high performance distributed L2 Load Balancer.
+- **Distributed Gateways**: Every Node can act as a Gateway to provide external network connectivity.
+- **Namespaced Gateways**: Every Namespace can have a dedicated Gateway for Egress traffic.
+- **Direct External Connectivity**: Pod IPs can be exposed to the external network directly.
+- **BGP Support**: Pod/Subnet IPs can be exposed externally via the BGP routing protocol.
+- **Traffic Mirror**: Duplicated container network traffic for monitoring, diagnosing and replay.
+- **Hardware Offload**: Boost network performance and save CPU resource by offloading OVS flow table to hardware.
+- **DPDK Support**: DPDK application now can run in Pod with OVS-DPDK.
+- **Cilium Integration**: Cilium can take over the work of kube-proxy.
+- **F5 CES Integration**: F5 can help better manage the outgoing traffic of k8s pod/container.
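+
+To illustrate the namespaced-subnet feature above, a Subnet resource bound to a namespace looks roughly like this (a sketch based on the Kube-OVN docs; names and CIDRs are illustrative):
+
+```yaml
+apiVersion: kubeovn.io/v1
+kind: Subnet
+metadata:
+  name: demo-subnet
+spec:
+  protocol: IPv4
+  cidrBlock: 10.17.0.0/16
+  gateway: 10.17.0.1
+  namespaces:
+    - demo-ns
+```
+
+Pods created in `demo-ns` then receive addresses from `10.17.0.0/16`; a static address can be requested with the `ovn.kubernetes.io/ip_address` pod annotation.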
+
+## Network Topology
+
+The Switch, Router and Firewall shown in the diagram below are all distributed across all nodes. There is no single point of failure in the in-cluster network.
+
+
+
+## Monitoring Dashboard
+
+Kube-OVN offers Prometheus integration with Grafana dashboards to visualize network quality.
+
+
+
+## Quick Start
+Kube-OVN is easy to install with all necessary components/dependencies included. If you already have a Kubernetes cluster without any CNI plugin, please refer to the [Installation Guide](https://kubeovn.github.io/docs/stable/en/start/one-step-install/).
+
+If you want to install Kubernetes from scratch, you can try [kubespray](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/kube-ovn.md) or for Chinese users try [kubeasz](https://github.com/easzlab/kubeasz/blob/master/docs/setup/network-plugin/kube-ovn.md) to deploy a production ready Kubernetes cluster with Kube-OVN embedded.
+
+## Documents
+
+- [Overview](https://kubeovn.github.io/docs/en/)
+- [Getting Started](https://kubeovn.github.io/docs/en/start/prepare/)
+- [User Guide](https://kubeovn.github.io/docs/en/guide/setup-options/)
+- [Operations](https://kubeovn.github.io/docs/en/ops/kubectl-ko/)
+- [Advanced Usage](https://kubeovn.github.io/docs/en/advance/multi-nic/)
+- [Reference](https://kubeovn.github.io/docs/en/reference/architecture/)
+
+## Contribution
+We are looking forward to your PR!
+
+- [Development Guide](https://kubeovn.github.io/docs/en/reference/dev-env/)
+- [Architecture Guide](https://kubeovn.github.io/docs/en/reference/architecture/)
+
+
+## FAQ
+1. Q: What's the difference from other CNIs?
+
+   A: Different CNI implementations have different scopes; no single implementation can solve every network problem. Kube-OVN aims to bring SDN to Cloud Native. If you miss traditional
+   network concepts like VPCs, subnets, custom routes, and security groups, you will not find corresponding functions in other CNIs. Kube-OVN is then your choice when you need these
+   functions to build a datacenter or enterprise network fabric.
+
+2. Q: How about the scalability of Kube-OVN?
+
+ A: We have simulated 200 nodes with 10k pods using kubemark, and it works fine. Some community users have deployed a single cluster with 500 nodes and 10k+ pods in production. This still has not reached the limit, but we don't have enough resources to find the limit.
+
+3. Q: What's the Addressing/IPAM? Node-specific or cluster-wide?
+
+ A: Kube-OVN uses cluster-wide IPAM; a pod's address can float to any node in the cluster.
+
+4. Q: What's the encapsulation?
+
+ A: For overlay mode, Kube-OVN uses Geneve/VXLAN/STT to encapsulate packets between nodes. For VLAN/underlay mode there is no encapsulation.
+
+
+
+
diff --git a/data/readmes/kube-rs-201.md b/data/readmes/kube-rs-201.md
new file mode 100644
index 0000000..46a1378
--- /dev/null
+++ b/data/readmes/kube-rs-201.md
@@ -0,0 +1,185 @@
+# kube-rs - README (2.0.1)
+
+**Repository**: https://github.com/kube-rs/kube
+**Version**: 2.0.1
+
+---
+
+# kube-rs
+
+[](https://crates.io/crates/kube)
+[](https://github.com/rust-lang/rust/releases/tag/1.85.0)
+[](https://kube.rs/kubernetes-version)
+[](https://bestpractices.coreinfrastructure.org/projects/5413)
+[](https://discord.gg/tokio)
+
+A [Rust](https://rust-lang.org/) client for [Kubernetes](http://kubernetes.io) in the style of a more generic [client-go](https://github.com/kubernetes/client-go), a runtime abstraction inspired by [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime), and a derive macro for [CRDs](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) inspired by [kubebuilder](https://book.kubebuilder.io/reference/generating-crd.html). Hosted by [CNCF](https://cncf.io/) as a [Sandbox Project](https://www.cncf.io/sandbox-projects/).
+
+These crates build upon Kubernetes [apimachinery](https://github.com/kubernetes/apimachinery/blob/master/pkg/apis/meta/v1/types.go) + [api concepts](https://kubernetes.io/docs/reference/using-api/api-concepts/) to enable generic abstractions. These abstractions allow Rust reinterpretations of reflectors, controllers, and custom resource interfaces, so that you can write applications easily.
+
+## Installation
+
+Select a version of `kube` along with matching versions of [k8s-openapi](https://github.com/Arnavion/k8s-openapi) and [schemars](https://github.com/GREsau/schemars) for Kubernetes structs and matching schemas. See also historical [Kubernetes versions](https://kube.rs/kubernetes-version/).
+
+```toml
+[dependencies]
+kube = { version = "2.0.1", features = ["runtime", "derive"] }
+k8s-openapi = { version = "0.26.0", features = ["latest", "schemars"] }
+schemars = { version = "1" }
+```
+
+See [features](https://kube.rs/features/) for a quick overview of default-enabled / opt-in functionality. You can remove `schemars` parts if you do not need the `kube/derive` feature.
+
+## Upgrading
+
+See [kube.rs/upgrading](https://kube.rs/upgrading/).
+Noteworthy changes are highlighted in [releases](https://github.com/kube-rs/kube/releases), and archived in the [changelog](https://kube.rs/changelog/).
+
+## Usage
+
+See the **[examples directory](https://github.com/kube-rs/kube/blob/main/examples)** for how to use any of these crates.
+
+- **[kube API Docs](https://docs.rs/kube/)**
+- **[kube.rs](https://kube.rs)**
+
+Official examples:
+
+- [version-rs](https://github.com/kube-rs/version-rs): lightweight deployment `reflector` using axum
+- [controller-rs](https://github.com/kube-rs/controller-rs): `Controller` of a crd inside actix
+
+For real world projects see [ADOPTERS](https://kube.rs/adopters/).
+
+## Api
+
+The [`Api`](https://docs.rs/kube/latest/kube/struct.Api.html) is what interacts with Kubernetes resources, and is generic over [`Resource`](https://docs.rs/kube/latest/kube/trait.Resource.html):
+
+```rust
+use k8s_openapi::api::core::v1::Pod;
+use kube::api::{Api, DeleteParams, Patch, PatchParams};
+use serde_json::json;
+
+let pods: Api<Pod> = Api::default_namespaced(client);
+
+let pod = pods.get("blog").await?;
+println!("Got pod: {pod:?}");
+
+let patch = json!({"spec": {
+ "activeDeadlineSeconds": 5
+}});
+let pp = PatchParams::apply("kube");
+let patched = pods.patch("blog", &pp, &Patch::Apply(patch)).await?;
+assert_eq!(patched.spec.active_deadline_seconds, Some(5));
+
+pods.delete("blog", &DeleteParams::default()).await?;
+```
+
+See the examples ending in `_api` for more detail.
+
+## Custom Resource Definitions
+
+Working with custom resources uses automatic code-generation via [proc_macros in kube-derive](https://docs.rs/kube/latest/kube/derive.CustomResource.html).
+
+You need to add `#[derive(CustomResource, JsonSchema)]` and some `#[kube(attrs..)]` on a __spec__ struct:
+
+```rust
+#[derive(CustomResource, Debug, Serialize, Deserialize, Default, Clone, JsonSchema)]
+#[kube(group = "kube.rs", version = "v1", kind = "Document", namespaced)]
+pub struct DocumentSpec {
+ title: String,
+ content: String,
+}
+```
+
+Then you can use the generated wrapper struct `Document` as a [`kube::Resource`](https://docs.rs/kube/*/kube/trait.Resource.html):
+
+```rust
+let docs: Api<Document> = Api::default_namespaced(client);
+let d = Document::new("guide", DocumentSpec::default());
+println!("doc: {:?}", d);
+println!("crd: {:?}", serde_yaml::to_string(&Document::crd()));
+```
+
+There are a ton of kubebuilder-like instructions that you can annotate with here. See the [documentation](https://docs.rs/kube/latest/kube/derive.CustomResource.html) or the `crd_` prefixed [examples](https://github.com/kube-rs/kube/blob/main/examples) for more.
+
+**NB:** `#[derive(CustomResource)]` requires the `derive` feature enabled on `kube`.
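+
+For illustration, an instance of the generated `Document` resource would be a manifest like the following (a sketch derived from the `#[kube(...)]` attributes above; the metadata and field values are hypothetical):
+
+```yaml
+apiVersion: kube.rs/v1    # group "kube.rs", version "v1" from the derive attributes
+kind: Document
+metadata:
+  name: guide
+  namespace: default      # "namespaced" attribute makes this a namespaced resource
+spec:
+  title: "My guide"       # hypothetical values for the spec fields
+  content: "Hello, world"
+```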
+
+## Runtime
+
+The `runtime` module exports the `kube_runtime` crate and contains higher level abstractions on top of the `Api` and `Resource` types so that you don't have to do all the `watch`/`resourceVersion`/storage book-keeping yourself.
+
+### Watchers
+
+A streaming interface (similar to informers) that presents [`watcher::Event`](https://docs.rs/kube/latest/kube/runtime/watcher/enum.Event.html)s and does automatic relists under the hood.
+
+```rust
+use k8s_openapi::api::core::v1::Pod;
+let api = Api::<Pod>::default_namespaced(client);
+let stream = watcher(api, Config::default()).default_backoff().applied_objects();
+```
+
+This gives a continual stream of events, without you having to care about the watch restarting or connections dropping.
+
+```rust
+while let Some(event) = stream.try_next().await? {
+ println!("Applied: {}", event.name_any());
+}
+```
+
+
+Note the base items from a `watcher` stream are an abstraction above the native `WatchEvent` to allow for store buffering. If you are following along to "see what changed", you can use utilities from [`WatchStreamExt`](https://docs.rs/kube/latest/kube/runtime/trait.WatchStreamExt.html), such as `applied_objects` to get a more conventional stream.
+
+### Reflectors
+
+A `reflector` is a `watcher` with a `Store` for `K`. It acts on every `Event` exposed by the `watcher` to keep the state in the `Store` as accurate as possible.
+
+```rust
+use k8s_openapi::api::core::v1::Node;
+let nodes: Api<Node> = Api::all(client);
+let lp = Config::default().labels("kubernetes.io/arch=amd64");
+let (reader, writer) = reflector::store();
+let rf = reflector(writer, watcher(nodes, lp));
+```
+
+At this point you can listen to the `reflector` as if it was a `watcher`, but you can also query the `reader` at any point.
+
+### Controllers
+
+A `Controller` is a `reflector` along with an arbitrary number of watchers that schedule events internally to send events through a reconciler:
+
+```rust
+Controller::new(root_kind_api, Config::default())
+ .owns(child_kind_api, Config::default())
+ .run(reconcile, error_policy, context)
+ .for_each(|res| async move {
+ match res {
+ Ok(o) => info!("reconciled {:?}", o),
+ Err(e) => warn!("reconcile failed: {}", Report::from(e)),
+ }
+ })
+ .await;
+```
+
+Here `reconcile` and `error_policy` refer to functions you define. The first will be called when the root or child elements change, and the second when the `reconciler` returns an `Err`.
+
+See the [controller guide](https://kube.rs/controllers/intro/) for how to write these.
+
+## TLS
+
+Uses [rustls](https://github.com/rustls/rustls) with the `ring` provider (default) or the `aws-lc-rs` provider (optional).
+
+To switch [rustls providers](https://docs.rs/rustls/latest/rustls/crypto/struct.CryptoProvider.html), turn off `default-features` and enable the `aws-lc-rs` feature:
+
+```toml
+kube = { version = "2.0.1", default-features = false, features = ["client", "rustls-tls", "aws-lc-rs"] }
+```
+
+To switch to `openssl`, turn off `default-features`, and enable the `openssl-tls` feature:
+
+```toml
+kube = { version = "2.0.1", default-features = false, features = ["client", "openssl-tls"] }
+```
+
+This will pull in `openssl` and `hyper-openssl`. If `default-features` is left enabled, you will pull in two TLS stacks, and the default will remain as `rustls`.
+
+## musl-libc
+
+Kube will work with [distroless](https://github.com/kube-rs/controller-rs/blob/main/Dockerfile), [scratch](https://github.com/constellation-rs/constellation/blob/27dc89d0d0e34896fd37d638692e7dfe60a904fc/Dockerfile), and `alpine` (it's also possible to use alpine as a builder [with some caveats](https://github.com/kube-rs/kube/issues/331#issuecomment-715962188)).
+
+## License
+
+Apache 2.0 licensed. See LICENSE for details.
diff --git a/data/readmes/kube-vip-v102.md b/data/readmes/kube-vip-v102.md
new file mode 100644
index 0000000..90b6121
--- /dev/null
+++ b/data/readmes/kube-vip-v102.md
@@ -0,0 +1,78 @@
+# kube-vip - README (v1.0.2)
+
+**Repository**: https://github.com/kube-vip/kube-vip
+**Version**: v1.0.2
+
+---
+
+# kube-vip
+
+High Availability and Load-Balancing
+
+
+
+[](https://github.com/kube-vip/kube-vip/actions/workflows/main.yaml) [](https://insights.linuxfoundation.org/project/kube-vip)
+
+## Overview
+Kubernetes Virtual IP and Load-Balancer for both control plane and Kubernetes services
+
+The idea behind `kube-vip` is to provide a small, self-contained, highly available option for all environments, especially:
+
+- Bare-Metal
+- Edge (arm / Raspberry PI)
+- Virtualisation
+- Pretty much anywhere else :)
+
+**NOTE:** All documentation of both usage and architecture is now available at [https://kube-vip.io](https://kube-vip.io).
+
+## Features
+
+Kube-Vip was originally created to provide an HA solution for the Kubernetes control plane; over time it has evolved to incorporate that same functionality into Kubernetes service type [load-balancers](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).
+
+- VIP addresses can be either IPv4 or IPv6
+- Control Plane with ARP (Layer 2) or BGP (Layer 3)
+- Control Plane using either [leader election](https://godoc.org/k8s.io/client-go/tools/leaderelection) or [raft](https://en.wikipedia.org/wiki/Raft_(computer_science))
+- Control Plane HA with kubeadm (static Pods)
+- Control Plane HA with K3s and others (daemonsets)
+- Service LoadBalancer using [leader election](https://godoc.org/k8s.io/client-go/tools/leaderelection) for ARP (Layer 2)
+- Service LoadBalancer using multiple nodes with BGP
+- Service LoadBalancer address pools per namespace or global
+- Service LoadBalancer address via existing network DHCP
+- Service LoadBalancer address exposure to the gateway via UPnP
+- Egress! Kube-vip will utilise a service loadbalancer as both the ingress and **egress** point for a pod.
+- ... manifest generation, vendor API integrations and many more...
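+
+To sketch the Service side of this: a `LoadBalancer` Service can request a specific VIP from kube-vip through an annotation (the address and app labels below are hypothetical; see [kube-vip.io](https://kube-vip.io) for the authoritative annotations and address-pool configuration):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: demo-app
+  annotations:
+    kube-vip.io/loadbalancerIPs: "192.168.0.220"  # hypothetical VIP from your pool
+spec:
+  type: LoadBalancer
+  selector:
+    app: demo-app
+  ports:
+    - port: 80
+      targetPort: 8080
+```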
+
+## Why?
+
+The purpose of `kube-vip` is to simplify the building of HA Kubernetes clusters, which at this time can involve a few components and configurations that all need to be managed. This was blogged about in detail by [thebsdbox](https://twitter.com/thebsdbox/) here -> [https://thebsdbox.co.uk/2020/01/02/Designing-Building-HA-bare-metal-Kubernetes-cluster/#Networking-load-balancing](https://thebsdbox.co.uk/2020/01/02/Designing-Building-HA-bare-metal-Kubernetes-cluster/#Networking-load-balancing).
+
+### Alternative HA Options
+
+`kube-vip` provides both a floating (virtual) IP address for your Kubernetes cluster and load-balancing of incoming traffic across control-plane replicas. At the current time, replicating this functionality requires a minimum of two pieces of tooling:
+
+**VIP**:
+- [Keepalived](https://www.keepalived.org/)
+- [UCARP](https://ucarp.wordpress.com/)
+- Hardware Load-balancer (functionality differs per vendor)
+
+
+**LoadBalancing**:
+- [HAProxy](http://www.haproxy.org/)
+- [Nginx](http://nginx.com)
+- Hardware Load-balancer (functionality differs per vendor)
+
+All of these would require a separate level of configuration, and in some infrastructures multiple teams, to implement. Also, when considering the software components, they may require packaging into containers, or, if they're pre-packaged, security and transparency may be an issue. Finally, in edge environments we may have limited room for hardware (no HW load-balancer), or packaged solutions in the correct architectures might not exist (e.g. ARM). Luckily, with `kube-vip` being written in Go, it's small(ish) and easy to build for multiple architectures, with the added security benefit of being the only thing needed in the container.
+
+## Troubleshooting and Feedback
+
+Please raise issues on the GitHub repository and as mentioned check the documentation at [https://kube-vip.io](https://kube-vip.io/).
+
+## Contributing
+
+Thanks for taking the time to join our community and start contributing! We welcome pull requests. Feel free to dig through the [issues](https://github.com/kube-vip/kube-vip/issues) and jump in.
+
+:warning: This project has issues compiling on macOS; please compile it on a Linux distribution.
+
+## Star History
+
+[](https://star-history.com/#kube-vip/kube-vip&Date)
diff --git a/data/readmes/kubean-v0302.md b/data/readmes/kubean-v0302.md
new file mode 100644
index 0000000..fe2b659
--- /dev/null
+++ b/data/readmes/kubean-v0302.md
@@ -0,0 +1,141 @@
+# Kubean - README (v0.30.2)
+
+**Repository**: https://github.com/kubean-io/kubean
+**Version**: v0.30.2
+
+---
+
+# :seedling: Kubean
+
+> [简体中文](./README_zh.md)
+
+
+
+
+
+
+
+ Kubean is a production-ready cluster lifecycle management toolchain based on kubespray and other cluster LCM engines.
+
+
+## :anchor: Awesome features
+
+- **Simplicity:** Deploy Kubean and manage the entire Kubernetes cluster lifecycle through a declarative API.
+- **Offline Support**: Offline packages (os-pkgs, images, binaries) are published with each release, so you won't have to worry about gathering all the resources you need.
+- **Compatibility**: Multi-arch delivery support, such as AMD and ARM with common Linux distributions, as well as Kunpeng with Kylin.
+- **Expandability**: Allows custom actions to be added to a cluster without any changes to Kubespray.
+
+## :surfing_man: Quick start
+
+### Killercoda tutorials
+
+We created a [scenario](https://killercoda.com/kubean) on [killercoda](https://killercoda.com), an online platform for interactive technical learning. You can try it out there.
+
+### Local install
+
+1. Ensure that you have a Kubernetes cluster running, on which Helm is installed
+
+2. Deploy kubean-operator
+
+ ```shell
+ helm repo add kubean-io https://kubean-io.github.io/kubean-helm-chart/
+ helm install kubean kubean-io/kubean --create-namespace -n kubean-system
+ ```
+
+ Then check kubean-operator status by running:
+
+ ```shell
+ kubectl get pods -n kubean-system
+ ```
+
+3. Deploy an all-in-one cluster online with minimal configuration
+
+   1. A simple way is to use [AllInOne.yml](./examples/install/1.minimal/),
+   replacing the placeholder strings with actual values.
+
+ 2. Start `kubeanClusterOps` to run the kubespray job.
+
+ ```shell
+ kubectl apply -f examples/install/1.minimal
+ ```
+
+ 3. Check the kubespray job status.
+
+ ```shell
+ kubectl get job -n kubean-system
+ ```
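+
+As a rough sketch of what such a minimal manifest contains, a Kubean `Cluster` points at ConfigMaps holding the hosts inventory and kubespray vars (names below are placeholders; the files under `examples/install/1.minimal` are authoritative):
+
+```yaml
+apiVersion: kubean.io/v1alpha1
+kind: Cluster
+metadata:
+  name: cluster-mini
+spec:
+  hostsConfRef:            # ConfigMap with the ansible hosts inventory
+    namespace: kubean-system
+    name: mini-hosts-conf
+  varsConfRef:             # ConfigMap with the kubespray group vars
+    namespace: kubean-system
+    name: mini-vars-conf
+```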
+
+
+
+
+
+## :ocean: Kubernetes compatibility
+
+| Kubean Version | Kubernetes Version Range | Kubernetes Default Version | kubespray SHA |
+| :-----: | :-----------: | :-----: | :-----: |
+| v0.29.1 | v1.31 ~ v1.33 | v1.32.9 | fbf957a |
+| v0.28.5 | v1.31 ~ v1.33 | v1.31.6 | 13c70d3 |
+| v0.27.3 | v1.31 ~ v1.33 | v1.31.6 | 502ba66 |
+| v0.26.4 | v1.31 ~ v1.33 | v1.31.6 | 739e5e1 |
+| v0.25.2 | v1.30 ~ v1.32 | v1.31.6 | d0e9088 |
+| v0.24.2 | v1.30 ~ v1.32 | v1.31.6 | 4ad9f9b |
+| v0.23.9 | v1.30 ~ v1.32 | v1.31.6 | a4843ea |
+| v0.22.5 | v1.29 ~ v1.31 | v1.30.5 | d173f1d |
+
+To check the list of Kubernetes versions supported by Kubean, refer to the [Kubernetes versions list](./docs/zh/usage/support_k8s_version.md).
+
+## :book: Roadmap
+
+For detailed information about all the planned features, refer to the [roadmap](docs/en/develop/roadmap.md).
+
+## :book: Documents
+
+Please visit our website: [kubean-io.github.io/kubean/](https://kubean-io.github.io/kubean/)
+
+## :envelope: Join us
+
+You can connect with us on the following channels:
+
+- Slack: join the [#Kubean](https://cloud-native.slack.com/messages/kubean) channel on CNCF Slack by requesting an [invitation](https://slack.cncf.io/) from CNCF Slack. Once you have access to CNCF Slack, you can join the Kubean channel.
+- Email: refer to the [MAINTAINERS.md](./MAINTAINERS.md) to find the email addresses of all maintainers. Feel free to contact them via email to report any issues or ask questions.
+
+## :thumbsup: Contributors
+
+
\ No newline at end of file
diff --git a/data/readmes/kubearmor-v165.md b/data/readmes/kubearmor-v165.md
new file mode 100644
index 0000000..9fd050e
--- /dev/null
+++ b/data/readmes/kubearmor-v165.md
@@ -0,0 +1,69 @@
+# KubeArmor - README (v1.6.5)
+
+**Repository**: https://github.com/kubearmor/KubeArmor
+**Version**: v1.6.5
+
+---
+
+
+
+[](https://github.com/kubearmor/KubeArmor/actions/workflows/ci-test-ginkgo.yml/)
+[](https://bestpractices.coreinfrastructure.org/projects/5401)
+[](https://clomonitor.io/projects/cncf/kubearmor)
+[](https://securityscorecards.dev/viewer/?uri=github.com/kubearmor/KubeArmor)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkubearmor%2FKubeArmor?ref=badge_shield)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkubearmor%2FKubeArmor?ref=badge_shield)
+[](https://cloud-native.slack.com/archives/C02R319HVL3)
+[](https://github.com/kubearmor/KubeArmor/discussions)
+[](https://hub.docker.com/r/kubearmor/kubearmor)
+[](https://artifacthub.io/packages/search?kind=19)
+
+KubeArmor is a cloud-native runtime security enforcement system that restricts the behavior \(such as process execution, file access, and networking operations\) of pods, containers, and nodes (VMs) at the system level.
+
+KubeArmor leverages [Linux security modules \(LSMs\)](https://en.wikipedia.org/wiki/Linux_Security_Modules) such as [AppArmor](https://en.wikipedia.org/wiki/AppArmor), [SELinux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux), or [BPF-LSM](https://docs.kernel.org/bpf/prog_lsm.html) to enforce the user-specified policies. KubeArmor generates rich alerts/telemetry events with container/pod/namespace identities by leveraging eBPF.
+
+| | |
+|:---|:---|
+| :muscle: **[Harden Infrastructure](getting-started/hardening_guide.md)** :chains: Protect critical paths such as cert bundles :clipboard: MITRE, STIGs, CIS based rules :left_luggage: Restrict access to raw DB table | :ring: **[Least Permissive Access](getting-started/least_permissive_access.md)** :traffic_light: Process Whitelisting :traffic_light: Network Whitelisting :control_knobs: Control access to sensitive assets |
+| :telescope: **[Application Behavior](getting-started/workload_visibility.md)** :dna: Process execs, File System accesses :compass: Service binds, Ingress, Egress connections :microscope: Sensitive system call profiling | :snowflake: **[Deployment Models](getting-started/deployment_models.md)** :wheel_of_dharma: Kubernetes Deployment :whale2: Containerized Deployment :computer: VM/Bare-Metal Deployment |
+
+## Architecture Overview
+
+
+
+## Documentation :notebook:
+
+* :point_right: [Getting Started](getting-started/deployment_guide.md)
+* :dart: [Use Cases](getting-started/use-cases/hardening.md)
+* :heavy_check_mark: [KubeArmor Support Matrix](getting-started/support_matrix.md)
+* :chess_pawn: [How is KubeArmor different?](getting-started/differentiation.md)
+* :scroll: Security Policy for Pods/Containers [[Spec](getting-started/security_policy_specification.md)] [[Examples](getting-started/security_policy_examples.md)]
+* :scroll: Cluster level security Policy for Pods/Containers [[Spec](getting-started/cluster_security_policy_specification.md)] [[Examples](getting-started/cluster_security_policy_examples.md)]
+* :scroll: Security Policy for Hosts/Nodes [[Spec](getting-started/host_security_policy_specification.md)] [[Examples](getting-started/host_security_policy_examples.md)]
+... [detailed documentation](https://docs.kubearmor.io/kubearmor/)
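+
+To give a flavor of the policy language, below is a minimal sketch of a `KubeArmorPolicy` that blocks execution of a process path in labeled pods (the selector and path are hypothetical; see the spec links above for the full schema):
+
+```yaml
+apiVersion: security.kubearmor.com/v1
+kind: KubeArmorPolicy
+metadata:
+  name: block-pkg-mgmt
+  namespace: default
+spec:
+  selector:
+    matchLabels:
+      app: nginx           # hypothetical workload label
+  process:
+    matchPaths:
+      - path: /usr/bin/apt # block package-manager execution
+  action: Block
+```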
+
+### Contributors :busts_in_silhouette:
+
+* :blue_book: [Contribution Guide](contribution/contribution_guide.md)
+* :technologist: [Development Guide](contribution/development_guide.md), [Testing Guide](contribution/testing_guide.md)
+* :raised_hand: [Join KubeArmor Slack](https://cloud-native.slack.com/archives/C02R319HVL3)
+* :question: [FAQs](getting-started/FAQ.md)
+
+### Biweekly Meeting
+
+- :speaking_head: [Zoom Link](http://zoom.kubearmor.io)
+- :page_facing_up: Minutes: [Document](https://docs.google.com/document/d/1IqIIG9Vz-PYpbUwrH0u99KYEM1mtnYe6BHrson4NqEs/edit)
+- :calendar: Calendar invite: [Google Calendar](http://www.google.com/calendar/event?action=TEMPLATE&dates=20220210T150000Z%2F20220210T153000Z&text=KubeArmor%20Community%20Call&location=&details=%3Ca%20href%3D%22https%3A%2F%2Fdocs.google.com%2Fdocument%2Fd%2F1IqIIG9Vz-PYpbUwrH0u99KYEM1mtnYe6BHrson4NqEs%2Fedit%22%3EMinutes%20of%20Meeting%3C%2Fa%3E%0A%0A%3Ca%20href%3D%22%20http%3A%2F%2Fzoom.kubearmor.io%22%3EZoom%20Link%3C%2Fa%3E&recur=RRULE:FREQ=WEEKLY;INTERVAL=2;BYDAY=TH&ctz=Asia/Calcutta), [ICS file](getting-started/resources/KubeArmorMeetup.ics)
+
+## Notice/Credits :handshake:
+
+- KubeArmor uses [Tracee](https://github.com/aquasecurity/tracee/)'s system call utility functions.
+
+## CNCF
+
+KubeArmor is a [Sandbox Project](https://www.cncf.io/projects/kubearmor/) of the Cloud Native Computing Foundation.
+
+
+## ROADMAP
+
+The KubeArmor roadmap is tracked via [KubeArmor Projects](https://github.com/orgs/kubearmor/projects?query=is%3Aopen).
diff --git a/data/readmes/kubeclipper-v141.md b/data/readmes/kubeclipper-v141.md
new file mode 100644
index 0000000..d925298
--- /dev/null
+++ b/data/readmes/kubeclipper-v141.md
@@ -0,0 +1,286 @@
+# KubeClipper - README (v1.4.1)
+
+**Repository**: https://github.com/kubeclipper/kubeclipper
+**Version**: v1.4.1
+
+---
+
+
+
+
+
+
+Manage Kubernetes in the lightest and most convenient way
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+---
+
+## What is KubeClipper
+
+> English | [中文](README_zh.md)
+
+[KubeClipper](https://kubeclipper.io/) is a lightweight web service that provides a friendly web console GUI, APIs, and a CLI tool for **Kubernetes cluster lifecycle management**.
+KubeClipper provides flexible Kubernetes as a Service (KaaS), which allows users to rapidly deploy K8S clusters
+anywhere (cloud, hypervisor, bare metal) and provides continuous lifecycle management capabilities
+(installation, deletion, upgrade, backup and restoration, cluster scaling, remote access, plug-in management,
+application store). See the [Feature List](https://github.com/kubeclipper/kubeclipper#features) for details.
+
+**🎯 Project Goal**: Manage Kubernetes in the lightest and most convenient way.
+
+## Features
+
+### ☸️ Cluster Lifecycle Management
+
+Supports deployment of Kubernetes on any infrastructure and provides comprehensive cluster lifecycle management.
+
+- **Multiple Deployment Modes**: online/offline deployment support
+- **Multi-Architecture**: x86/64 & arm64 support
+- **Cluster Import**: registration and management of external clusters (not created by KubeClipper)
+- ...
+
+### 🌐 Node Management
+
+- Automatic node registration
+- Node information collection
+- Node terminal
+- ...
+
+### 🚪 Identity and Access Management (IAM)
+
+Provides a unified authentication and authorization system with fine-grained role-based access control.
+
+- RBAC-based user permission system
+- OIDC integration
+- ...
+
+## Roadmap & Todo list
+
+* 🚀 Cluster Installation Optimization
+ * Use images to encapsulate installation package resources to reduce complexity. Reuse mature image technology
+* 💻 Kubernetes Web Console
+ * Workload resources & monitoring presentation
+ * Tenant based cluster access control
+* 📦 Application Store
+ * Application lifecycle management
+ * Support web UI & CLI interface
+* 🧩 Common Application and Plugin Integrations
+ * LB & Ingress
+ * Monitoring
+ * Kubernetes Dashboard
+ * KubeEdge
+* 🕸 Managed Clusters
+ * Support KoK clusters.
+
+## Architecture
+
+### Core
+
+
+
+### Node
+
+
+
+### Network
+
+
+
+Explore the architecture of Kubeclipper on [kubeclipper.io](https://kubeclipper.io/docs/overview/).
+
+## Quick Start
+
+For users who are new to KubeClipper and want to get started quickly, it is recommended to use the All-in-One installation mode, which can help you quickly deploy KubeClipper with zero configuration.
+
+### Preparations
+
+KubeClipper itself does not take up too many resources, but in order to run Kubernetes better in the future, it is recommended that the hardware configuration should not be lower than the minimum requirements.
+
+You only need to prepare a host with reference to the following requirements for machine hardware and operating system.
+
+#### Hardware recommended configuration
+
+* Make sure your machine meets the minimum hardware requirements: CPU >= 2 cores, RAM >= 2GB.
+* Operating System: CentOS 7.x / Ubuntu 18.04 / Ubuntu 20.04.
+
+#### Node requirements
+
+* Nodes must be able to connect via `SSH`.
+
+* You can use the `sudo` / `curl` / `wget` / `tar` command on this node.
+
+> It is recommended that your operating system is in a clean state (no additional software is installed), otherwise, conflicts may occur.
+
+### Deploy KubeClipper
+
+#### Download kcctl
+
+KubeClipper provides the command line tool 🔧 kcctl to simplify operations.
+
+You can download the latest version of kcctl directly with the following command:
+
+```bash
+# Install latest release
+curl -sfL https://oss.kubeclipper.io/get-kubeclipper.sh | bash -
+# In China, you can add env "KC_REGION=cn", we use registry.aliyuncs.com/google_containers instead of k8s.gcr.io
+curl -sfL https://oss.kubeclipper.io/get-kubeclipper.sh | KC_REGION=cn bash -
+# The latest release version is downloaded by default. You can download the specified version. For example, specify the master development version to be installed
+curl -sfL https://oss.kubeclipper.io/get-kubeclipper.sh | KC_REGION=cn KC_VERSION=master bash -
+```
+
+> It is highly recommended that you install the latest release to experience more features.
+> You can also download the specified version on the **[GitHub Release Page](https://github.com/kubeclipper/kubeclipper/releases)**.
+
+Check if the installation is successful with the following command:
+
+```bash
+kcctl version
+```
+
+#### Get Started with Installation
+
+In this quick start tutorial, you only need to run just one command for installation:
+
+If you want to install AIO mode
+
+```bash
+# install default release
+kcctl deploy
+# you can use KC_VERSION to install the specified version, default is latest release
+KC_VERSION=master kcctl deploy
+```
+
+If you want to install multiple nodes, use `kcctl deploy -h` for more information about the command.
+
+After you run this command, kcctl will check your installation environment and, if the conditions are met, enter the installation process.
+
+After printing the KubeClipper banner, the installation is complete.
+
+```bash
+ _ __ _ _____ _ _
+| | / / | | / __ \ (_)
+| |/ / _ _| |__ ___| / \/ |_ _ __ _ __ ___ _ __
+| \| | | | '_ \ / _ \ | | | | '_ \| '_ \ / _ \ '__|
+| |\ \ |_| | |_) | __/ \__/\ | | |_) | |_) | __/ |
+\_| \_/\__,_|_.__/ \___|\____/_|_| .__/| .__/ \___|_|
+ | | | |
+ |_| |_|
+```
+
+### Login Console
+
+When deployed successfully, you can open a browser and visit `http://$IP` to enter the KubeClipper console.
+
+
+
+ You can log in with the default account and password `admin / Thinkbig1`.
+
+> You may need to configure port forwarding rules and open ports in security groups for external users to access the console.
+
+### Create a k8s cluster
+
+When `kubeclipper` is deployed successfully, you can use the **kcctl** tool or the **console** to create a k8s cluster. In this quick start tutorial, we use the kcctl tool.
+
+Then create a k8s cluster with the following command:
+
+```bash
+NODE=$(kcctl get node -o yaml|grep ipv4DefaultIP:|sed 's/ipv4DefaultIP: //')
+
+kcctl create cluster --master $NODE --name demo --untaint-master
+```
+
+The cluster creation will complete in about 3 minutes. You can use the following command to view the cluster status:
+
+```bash
+kcctl get cluster -o yaml|grep status -A5
+```
+
+> You can also enter the console to view real-time logs.
+
+Once the cluster enters the `Running` state, the creation is complete. You can use the `kubectl get cs` command to view the cluster status.
+
+## Development and Debugging
+
+Reference: [development-guide](docs/dev-guide.md)
+
+1. fork repo and clone
+2. run etcd locally, usually by using docker / podman to run an etcd container
+
+ ```bash
+ export HostIP="Your-IP"
+ docker run -d \
+ --net host \
+ k8s.gcr.io/etcd:3.5.0-0 etcd \
+ --advertise-client-urls http://${HostIP}:2379 \
+ --initial-advertise-peer-urls http://${HostIP}:2380 \
+ --initial-cluster=infra0=http://${HostIP}:2380 \
+ --listen-client-urls http://${HostIP}:2379,http://127.0.0.1:2379 \
+ --listen-metrics-urls http://127.0.0.1:2381 \
+ --listen-peer-urls http://${HostIP}:2380 \
+ --name infra0 \
+ --snapshot-count=10000 \
+ --data-dir=/var/lib/etcd
+ ```
+
+3. change `etcd.serverList` in `kubeclipper-server.yaml` to point to your local etcd cluster
+4. `make build`
+5. `./dist/kubeclipper-server serve`
+
+## Contributing
+
+Please follow [Community](https://github.com/kubeclipper/community) to join us.
+
+See [GOVERNANCE.md](./GOVERNANCE.md) for project governance and decision-making process.
+
+## Code of Conduct
+
+This project follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
+Please read our [CODE_OF_CONDUCT.md](./CODE_OF_CONDUCT.md) for details.
+
+## Star History
+
+
+
+## Landscapes
+
+
+
+KubeDL enables deep learning workloads to run on Kubernetes more easily and efficiently.
+
+KubeDL is a [CNCF sandbox](https://www.cncf.io/sandbox-projects/) project.
+
+
+
+
+
+## Features
+
+- Supports training and inference workloads (TensorFlow, PyTorch, [Mars](https://github.com/mars-project/mars), etc.) in a single unified controller. Features include advanced scheduling, acceleration using cache, metadata persistency, file sync, and service discovery for training in host network.
+- Automatically tunes the best configurations for ML model deployment - [Morphling Github](https://github.com/alibaba/morphling)
+- Packages and deploys ML models in containers and tracks the model lineage natively with Kubernetes CRDs.
+
+Check the website: https://kubedl.io
+
+
+
+
+
+
+## Getting Involved
+
+| Platform | Purpose | Estimated Response Time |
+|-------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------|
+| [DingTalk](https://github.com/kubedl-io/kubedl/blob/master/docs/img/kubedl-dingtalk.png ) | For discussions about development and questions about usage. | < 1 day |
+| [Github Issues](https://github.com/kubedl-io/kubedl/issues) | For reporting bugs and filing feature requests. | < 2 days |
+| E-Mail(cncf-kubedl-maintainers@lists.cncf.io) | For discussing specific topics or ask for help from community members/maintainers. | < 3 days |
+
+## Publications
+
+Morphling: Fast, Near-Optimal Auto-Configuration for Cloud-Native Model Serving. ACM SoCC 2021 [link](https://dl.acm.org/doi/10.1145/3472883.3486987)
+
+## License
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkubedl-io%2Fkubedl?ref=badge_large)
diff --git a/data/readmes/kubeedge-v1220.md b/data/readmes/kubeedge-v1220.md
new file mode 100644
index 0000000..f0101d4
--- /dev/null
+++ b/data/readmes/kubeedge-v1220.md
@@ -0,0 +1,139 @@
+# KubeEdge - README (v1.22.0)
+
+**Repository**: https://github.com/kubeedge/kubeedge
+**Version**: v1.22.0
+
+---
+
+# KubeEdge
+[](https://goreportcard.com/report/github.com/kubeedge/kubeedge)
+[](/LICENSE)
+[](https://github.com/kubeedge/kubeedge/releases)
+[](https://bestpractices.coreinfrastructure.org/projects/3018)
+
+
+
+English | [简体中文](./README_zh.md)
+
+KubeEdge is built upon Kubernetes and extends native containerized application orchestration and device management to hosts at the Edge.
+It consists of a cloud part and an edge part, and provides core infrastructure support for networking, application deployment, and metadata synchronization
+between cloud and edge. It also supports **MQTT**, which enables edge devices to connect through edge nodes.
+
+With KubeEdge it is easy to get and deploy existing complicated machine learning, image recognition, event processing and other high level applications to the Edge.
+With business logic running at the Edge, much larger volumes of data can be secured & processed locally where the data is produced.
+With data processed at the Edge, the responsiveness is increased dramatically and data privacy is protected.
+
+KubeEdge is a graduation-level hosted project by the [Cloud Native Computing Foundation](https://cncf.io) (CNCF). KubeEdge graduation [announcement](https://www.cncf.io/announcements/2024/10/15/cloud-native-computing-foundation-announces-kubeedge-graduation/) by CNCF.
+
+## Advantages
+
+- **Kubernetes-native support**: Manage edge applications and edge devices from the cloud with fully compatible Kubernetes APIs.
+- **Cloud-Edge Reliable Collaboration**: Ensures reliable message delivery, without loss, over unstable cloud-edge networks.
+- **Edge Autonomy**: Ensures edge nodes run autonomously, and that applications at the edge keep running normally, when the cloud-edge network is unstable or the edge is offline and restarted.
+- **Edge Device Management**: Manage edge devices through Kubernetes-native APIs implemented with CRDs.
+- **Extremely Lightweight Edge Agent**: The extremely lightweight edge agent (EdgeCore) runs on resource-constrained edge hardware.
+
+
+## How It Works
+
+KubeEdge consists of a cloud part and an edge part.
+
+### Architecture
+
+
+
+
+
+### In the Cloud
+- [CloudHub](https://kubeedge.io/en/docs/architecture/cloud/cloudhub): a WebSocket server responsible for watching changes on the cloud side, caching messages, and sending them to EdgeHub.
+- [EdgeController](https://kubeedge.io/en/docs/architecture/cloud/edge_controller): an extended Kubernetes controller that manages edge node and pod metadata so that data can be targeted to a specific edge node.
+- [DeviceController](https://kubeedge.io/en/docs/architecture/cloud/device_controller): an extended Kubernetes controller that manages devices so that device metadata and status data can be synced between edge and cloud.
+
+
+### On the Edge
+- [EdgeHub](https://kubeedge.io/en/docs/architecture/edge/edgehub): a WebSocket client responsible for interacting with the cloud services for edge computing (such as EdgeController in the KubeEdge architecture). This includes syncing cloud-side resource updates to the edge, and reporting edge-side host and device status changes to the cloud.
+- [Edged](https://kubeedge.io/en/docs/architecture/edge/edged): an agent that runs on edge nodes and manages containerized applications.
+- [EventBus](https://kubeedge.io/en/docs/architecture/edge/eventbus): an MQTT client that interacts with MQTT servers (such as mosquitto), offering publish and subscribe capabilities to other components.
+- [ServiceBus](https://kubeedge.io/en/docs/architecture/edge/servicebus): an HTTP client that interacts with HTTP (REST) servers, allowing cloud components to reach HTTP servers running at the edge.
+- [DeviceTwin](https://kubeedge.io/en/docs/architecture/edge/devicetwin): responsible for storing device status and syncing it to the cloud. It also provides query interfaces for applications.
+- [MetaManager](https://kubeedge.io/en/docs/architecture/edge/metamanager): the message processor between Edged and EdgeHub. It is also responsible for storing and retrieving metadata to and from a lightweight database (SQLite).
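+
+As an illustrative sketch of the CRD-based device management described above (field names approximate the KubeEdge device API and vary between versions; consult the KubeEdge documentation for the exact schema), a managed edge device might be declared like this:
+
+```yaml
+apiVersion: devices.kubeedge.io/v1beta1
+kind: Device
+metadata:
+  name: temperature-sensor
+  namespace: default
+spec:
+  deviceModelRef:
+    name: temperature-sensor-model   # references a DeviceModel resource
+  nodeName: edge-node-01             # the edge node this device is attached to
+  protocol:
+    protocolName: modbus
+```
+
+DeviceController syncs this metadata to the edge, where DeviceTwin stores the device status and reports it back to the cloud.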
+
+## Kubernetes compatibility
+
+| | Kubernetes 1.26 | Kubernetes 1.27 | Kubernetes 1.28 | Kubernetes 1.29 | Kubernetes 1.30 | Kubernetes 1.31 |
+|------------------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
+| KubeEdge 1.18 | + | ✓ | ✓ | ✓ | - | - |
+| KubeEdge 1.19 | + | ✓ | ✓ | ✓ | - | - |
+| KubeEdge 1.20 | + | + | ✓ | ✓ | ✓ | - |
+| KubeEdge 1.21 | + | + | ✓ | ✓ | ✓ | - |
+| KubeEdge 1.22 | + | + | + | ✓ | ✓ | ✓ |
+| KubeEdge HEAD (master) | + | + | + | ✓ | ✓ | ✓ |
+
+Key:
+* `✓` KubeEdge and the Kubernetes version are exactly compatible.
+* `+` KubeEdge has features or API objects that may not be present in the Kubernetes version.
+* `-` The Kubernetes version has features or API objects that KubeEdge can't use.
+
+## Guides
+
+Get started with this [doc](https://kubeedge.io/en/docs).
+
+See our documentation on [kubeedge.io](https://kubeedge.io) for more details.
+
+To dive deeper into KubeEdge, try some of the samples in the [examples repository](https://github.com/kubeedge/examples).
+
+## Roadmap
+
+* [2024 Roadmap](https://github.com/kubeedge/community/blob/master/roadmap.md)
+
+## Meeting
+
+Technical Steering Committees (TSC) Meeting:
+- Pacific Time: **Wednesdays at 10:00-11:00 Beijing Time** (biweekly, starting from Feb. 26th 2020).
+([Convert to your timezone.](https://www.thetimezoneconverter.com/?t=10%3A00&tz=GMT%2B8&))
+
+Regular Community Meeting:
+- Europe Time: **Wednesdays at 16:00-17:30 Beijing Time** (weekly, starting from Feb. 19th 2020).
+([Convert to your timezone.](https://www.thetimezoneconverter.com/?t=16%3A30&tz=GMT%2B8&))
+
+Resources:
+- [Meeting notes and agenda](https://docs.google.com/document/d/1Sr5QS_Z04uPfRbA7PrXr3aPwCRpx7EtsyHq7mp6CnHs/edit)
+- [Meeting recordings](https://www.youtube.com/playlist?list=PLQtlO1kVWGXkRGkjSrLGEPJODoPb8s5FM)
+- [Meeting link](https://zoom.us/j/4167237304)
+- [Meeting Calendar](https://calendar.google.com/calendar/embed?src=8rjk8o516vfte21qibvlae3lj4%40group.calendar.google.com) | [Subscribe](https://calendar.google.com/calendar?cid=OHJqazhvNTE2dmZ0ZTIxcWlidmxhZTNsajRAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ)
+
+## Contact
+
+If you need support, start with the [troubleshooting guide](https://kubeedge.io/en/docs/developer/troubleshooting), and work your way through the process that we've outlined.
+
+If you have questions, feel free to reach out to us in the following ways:
+
+- [mailing list](https://groups.google.com/forum/#!forum/kubeedge)
+- [slack](https://kubeedge.io/docs/community/slack)
+- [twitter](https://twitter.com/kubeedge)
+
+## Contributing
+
+If you're interested in being a contributor and want to get involved in
+developing the KubeEdge code, please see [CONTRIBUTING](./CONTRIBUTING.md) for
+details on submitting patches and the contribution workflow.
+
+## Security
+
+### Security Audit
+
+A third-party security audit of KubeEdge was completed in July 2022. Additionally, the KubeEdge community completed an overall system security analysis of KubeEdge. The detailed reports are as follows.
+
+- [Security audit](https://github.com/kubeedge/community/blob/master/sig-security/sig-security-audit/KubeEdge-security-audit-2022.pdf)
+
+- [Threat model and security protection analysis paper](https://github.com/kubeedge/community/blob/master/sig-security/sig-security-audit/KubeEdge-threat-model-and-security-protection-analysis.md)
+
+### Reporting security vulnerabilities
+
+We encourage security researchers, industry organizations, and users to proactively report suspected vulnerabilities to our security team (`cncf-kubeedge-security@lists.cncf.io`). The team will help diagnose the severity of the issue and determine how to address it as quickly as possible.
+
+For further details, see the [Security Policy](https://github.com/kubeedge/community/blob/master/team-security/SECURITY.md) for our security process and how to report vulnerabilities.
+
+## License
+
+KubeEdge is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
diff --git a/data/readmes/kubeflow-v1100.md b/data/readmes/kubeflow-v1100.md
new file mode 100644
index 0000000..5b9ac89
--- /dev/null
+++ b/data/readmes/kubeflow-v1100.md
@@ -0,0 +1,69 @@
+# Kubeflow - README (v1.10.0)
+
+**Repository**: https://github.com/kubeflow/kubeflow
+**Version**: v1.10.0
+
+---
+
+# Kubeflow
+
+[](https://www.kubeflow.org/docs/about/community/#kubeflow-slack-channels)
+[](https://clomonitor.io/projects/cncf/kubeflow)
+
+
+
+## What is Kubeflow
+
+[Kubeflow](https://www.kubeflow.org/) is the foundation of tools for AI Platforms on Kubernetes.
+
+AI platform teams can build on top of Kubeflow by using each project independently or deploying the
+entire AI reference platform to meet their specific needs. The Kubeflow AI reference platform is
+composable, modular, portable, and scalable, backed by an ecosystem of Kubernetes-native
+projects that cover every stage of the [AI lifecycle](https://www.kubeflow.org/docs/started/architecture/#kubeflow-projects-in-the-ai-lifecycle).
+
+Whether you’re an AI practitioner, a platform administrator, or a team of developers, Kubeflow
+offers modular, scalable, and extensible tools to support your AI use cases.
+
+Please refer to [the official documentation](https://www.kubeflow.org/docs/) for more information.
+
+## What are Kubeflow Projects
+
+Kubeflow is composed of multiple open source projects that address different aspects
+of the AI lifecycle. These projects are designed to be usable both independently and as part of the
+Kubeflow AI reference platform. This provides flexibility for users who may not need the full
+end-to-end AI platform capabilities but want to leverage specific functionalities, such as model
+training or model serving.
+
+| Kubeflow Project | Source Code |
+| ----------------------------------------------------------------------------------- | ----------------------------------------------------------------------- |
+| [KServe](https://www.kubeflow.org/docs/external-add-ons/kserve/) | [`kserve/kserve`](https://github.com/kserve/kserve) |
+| [Kubeflow Katib](https://www.kubeflow.org/docs/components/katib/) | [`kubeflow/katib`](https://github.com/kubeflow/katib) |
+| [Kubeflow Model Registry](https://www.kubeflow.org/docs/components/model-registry/) | [`kubeflow/model-registry`](https://github.com/kubeflow/model-registry) |
+| [Kubeflow Notebooks](https://www.kubeflow.org/docs/components/notebooks/) | [`kubeflow/notebooks`](https://github.com/kubeflow/notebooks) |
+| [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/) | [`kubeflow/pipelines`](https://github.com/kubeflow/pipelines) |
+| [Kubeflow SDK](https://github.com/kubeflow/sdk) | [`kubeflow/sdk`](https://github.com/kubeflow/sdk) |
+| [Kubeflow Spark Operator](https://www.kubeflow.org/docs/components/spark-operator/) | [`kubeflow/spark-operator`](https://github.com/kubeflow/spark-operator) |
+| [Kubeflow Trainer](https://www.kubeflow.org/docs/components/trainer/) | [`kubeflow/trainer`](https://github.com/kubeflow/trainer) |
+
+## What is the Kubeflow AI Reference Platform
+
+The Kubeflow AI reference platform refers to the full suite of Kubeflow projects bundled together
+with additional integration and management tools. Deploying it provides a comprehensive toolkit
+covering the entire AI lifecycle. The Kubeflow AI reference platform can be
+installed via [Packaged Distributions](https://www.kubeflow.org/docs/started/installing-kubeflow/#packaged-distributions)
+or [Kubeflow Manifests](https://www.kubeflow.org/docs/started/installing-kubeflow/#kubeflow-manifests).
+
+| Kubeflow AI Reference Platform Tool | Source Code |
+| --------------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |
+| [Central Dashboard](https://www.kubeflow.org/docs/components/central-dash/) | [`kubeflow/dashboard`](https://github.com/kubeflow/dashboard) |
+| [Profile Controller](https://www.kubeflow.org/docs/components/central-dash/profiles/) | [`kubeflow/dashboard`](https://github.com/kubeflow/dashboard) |
+| [Kubeflow Manifests](https://www.kubeflow.org/docs/started/installing-kubeflow/#kubeflow-manifests) | [`kubeflow/manifests`](https://github.com/kubeflow/manifests) |
+
+## Kubeflow Community
+
+Kubeflow is a community-led project maintained by the
+[Kubeflow Working Groups](https://www.kubeflow.org/docs/about/governance/#4-working-groups)
+under the guidance of the [Kubeflow Steering Committee](https://www.kubeflow.org/docs/about/governance/#2-kubeflow-steering-committee-ksc).
+
+We encourage you to learn about the [Kubeflow Community](https://www.kubeflow.org/docs/about/community/)
+and how to [contribute](https://www.kubeflow.org/docs/about/contributing/) to the project!
diff --git a/data/readmes/kuberhealthy-v280-rc2.md b/data/readmes/kuberhealthy-v280-rc2.md
new file mode 100644
index 0000000..972daf7
--- /dev/null
+++ b/data/readmes/kuberhealthy-v280-rc2.md
@@ -0,0 +1,136 @@
+# Kuberhealthy - README (v2.8.0-rc2)
+
+**Repository**: https://github.com/kuberhealthy/kuberhealthy
+**Version**: v2.8.0-rc2
+
+---
+
+
+
+
+**Note: Kuberhealthy is currently undergoing a total rewrite in the `main` branch.**
+
+**Kuberhealthy is a [Kubernetes](https://kubernetes.io) [operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) for [synthetic monitoring](https://en.wikipedia.org/wiki/Synthetic_monitoring) and [continuous process verification](https://en.wikipedia.org/wiki/Continued_process_verification).** [Write your own tests](docs/CHECK_CREATION.md) in any language and Kuberhealthy will run them for you. Automatically creates metrics for [Prometheus](https://prometheus.io). Includes simple JSON status page. **Now part of the CNCF!**
+
+[](https://opensource.org/licenses/Apache-2.0)
+[](https://goreportcard.com/report/github.com/kuberhealthy/kuberhealthy)
+[](https://bestpractices.coreinfrastructure.org/projects/2822)
+[](https://twitter.com/kuberhealthy)
+[](https://kubernetes.slack.com/messages/CB9G7HWTE)
+
+## Table of Contents
+
+- ❓ [What is Kuberhealthy?](#what-is-kuberhealthy)
+- 🚀 [Installation](#installation)
+- 📈 [Visualized](#visualized)
+- 🧪 [Create Synthetic Checks](#create-synthetic-checks-for-your-apis)
+- 📊 [Status Page](#status-page)
+- 🤝 [Contributing](#contributing)
+- 📅 [Monthly Community Meeting](#monthly-community-meeting)
+
+## What is Kuberhealthy?
+
+Kuberhealthy lets you continuously verify that your applications and Kubernetes clusters are working as expected. By creating a custom resource (a [`KuberhealthyCheck`](docs/CHECK_CREATION.md#creating-your-khcheck-resource)) in your cluster, you can easily enable [various synthetic tests](docs/CHECKS_REGISTRY.md) and get Prometheus metrics for them.
+
+Kuberhealthy comes with [lots of useful checks already available](docs/CHECKS_REGISTRY.md) to ensure the core functionality of Kubernetes, but checks can be used to test anything you like. We encourage you to [write your own check container](docs/CHECK_CREATION.md) in any language to test your own applications. It really is quick and easy!
+
+Kuberhealthy serves the status of all checks on a simple JSON status page, a [Prometheus](https://prometheus.io/) metrics endpoint (at `/metrics`), and supports InfluxDB metric forwarding for integration into your choice of alerting solution.
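+
+As a sketch, a static Prometheus scrape job for that metrics endpoint might look like the following (the service DNS name and port are assumptions; adjust them to match your installation, or use service discovery instead of static targets):
+
+```yaml
+scrape_configs:
+  - job_name: "kuberhealthy"
+    scrape_interval: 1m
+    metrics_path: /metrics
+    static_configs:
+      - targets: ["kuberhealthy.kuberhealthy.svc.cluster.local:80"]
+```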
+
+
+
+## Installation
+
+Kuberhealthy requires Kubernetes 1.16 or above. You can install it with plain YAML manifests or with Helm.
+
+- For detailed installation steps, see the [installation guide](docs/INSTALLATION.md).
+- To configure Kuberhealthy after installation, see the [configuration documentation](docs/CONFIGURATION.md).
+
+## Visualized
+
+Here is an illustration of how Kuberhealthy provisions and operates checker pods. The process works as follows:
+
+- An admin creates a [`KuberhealthyCheck`](docs/CHECK_CREATION.md#creating-your-khcheck-resource) resource that calls for a synthetic Kubernetes daemonset to be deployed and tested every 15 minutes. This will ensure that all nodes in the Kubernetes cluster can provision containers properly.
+- Kuberhealthy observes this new `KuberhealthyCheck` resource.
+- Kuberhealthy schedules a checker pod to manage the lifecycle of this check.
+- The checker pod creates a daemonset using the Kubernetes API.
+- The checker pod observes the daemonset and waits for all daemonset pods to become `Ready`.
+- The checker pod deletes the daemonset using the Kubernetes API.
+- The checker pod observes the daemonset being fully cleaned up and removed.
+- The checker pod reports a successful test result back to Kuberhealthy's API.
+- Kuberhealthy stores this check's state and makes it available to various metrics systems.
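+
+The 15-minute daemonset check described above can be declared as a `KuberhealthyCheck` resource. This is a hedged sketch — the image tag and timeout values are illustrative; see the [check creation docs](docs/CHECK_CREATION.md#creating-your-khcheck-resource) for the exact schema:
+
+```yaml
+apiVersion: comcast.github.io/v1
+kind: KuberhealthyCheck
+metadata:
+  name: daemonset
+  namespace: kuberhealthy
+spec:
+  runInterval: 15m   # how often the check runs
+  timeout: 12m       # how long a checker pod may run before it is failed
+  podSpec:
+    containers:
+      - name: daemonset
+        image: kuberhealthy/daemonset-check:v3  # illustrative tag
+```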
+
+
+
+
+## Included Checks
+
+You can use any of [the pre-made checks](https://github.com/kuberhealthy/kuberhealthy/blob/master/docs/CHECKS_REGISTRY.md#khcheck-registry) by simply enabling them. By default, Kuberhealthy comes with several checks to test Kubernetes deployments, daemonsets, and DNS.
+
+#### Some checks you can easily enable:
+
+- [SSL Handshake Check](https://github.com/kuberhealthy/kuberhealthy/blob/master/cmd/ssl-handshake-check/README.md) - checks SSL certificate validity and warns when certs are about to expire.
+- [CronJob Scheduling Failures](https://github.com/kuberhealthy/kuberhealthy/blob/master/cmd/cronjob-checker/README.md) - checks for events indicating that a CronJob has failed to create Job pods.
+- [Image Pull Check](https://github.com/kuberhealthy/kuberhealthy/blob/master/cmd/test-check#image-pull-check) - checks that an image can be pulled from an image repository.
+- [Deployment Check](https://github.com/kuberhealthy/kuberhealthy/blob/master/cmd/deployment-check/README.md) - verifies that a fresh deployment can run, deploy multiple pods, pass traffic, do a rolling update (without dropping connections), and clean up successfully.
+- [Daemonset Check](https://github.com/kuberhealthy/kuberhealthy/blob/master/cmd/daemonset-check/README.md) - verifies that a daemonset can be created, fully provisioned, and torn down. This checks the full kubelet functionality of every node in your Kubernetes cluster.
+- [Storage Provisioner Check](https://github.com/ChrisHirsch/kuberhealthy-storage-check) - verifies that a pod with persistent storage can be configured on every node in your cluster.
+
+
+## Create Synthetic Checks for Your APIs
+
+You can easily create synthetic tests to check your applications and APIs with real world use cases. This is a great way to be confident that your application functions as expected in the real world at all times.
+
+Here is a full check example written in Go. Just implement `doCheckStuff` and you're off!
+
+
+```go
+package main
+
+import (
+	"github.com/kuberhealthy/kuberhealthy/v2/pkg/checks/external/checkclient"
+)
+
+func main() {
+	// Run your check logic, then report the result back to Kuberhealthy.
+	ok := doCheckStuff()
+	if !ok {
+		checkclient.ReportFailure([]string{"Test has failed!"})
+		return
+	}
+	checkclient.ReportSuccess()
+}
+```
+
+You can read more about [how checks are configured](docs/CHECK_CREATION.md#creating-your-khcheck-resource) and [learn how to create your own check container](docs/CHECK_CREATION.md). Checks can be written in any language and helpful clients for checks not written in Go can be found in the [clients directory](/clients).
+
+### Status Page
+
+Kuberhealthy serves a simple JSON status page and Prometheus metrics endpoint. See the [status page guide](docs/STATUS_PAGE.md) for output examples and details.
+
+## Contributing
+
+If you're interested in contributing to this project:
+- Check out the [Contributing Guide](CONTRIBUTING.md).
+ - If you use Kuberhealthy in a production environment, add yourself to the list of [Kuberhealthy adopters](ADOPTERS.md)!
+- Check out [open issues](https://github.com/kuberhealthy/kuberhealthy/issues). If you're new to the project, look for the `good first issue` tag.
+- We're always looking for check contributions (as suggestions or PRs), as well as feedback from folks running
+Kuberhealthy locally or in a test environment.
+
+### Hermit
+
+While working on Kuberhealthy, you can take advantage of the included [Hermit](https://cashapp.github.io/hermit/) dev
+environment to get Go & other tooling without having to install them separately on your local machine.
+
+Just use the following command to activate the environment, and you're good to go:
+
+```zsh
+. ./bin/activate-hermit
+```
+
+## Monthly Community Meeting
+
+If you would like to talk directly to the core maintainers to discuss ideas, code reviews, or other complex issues, we have a monthly Zoom meeting on the **24th day** of every month at **04:30 PM Pacific Time**.
+
+- [Click here to download the invite file](https://zoom.us/meeting/tJIlcuyrqT8qHNWDSx3ZozYamoq2f0ruwfB0/ics?icsToken=98tyKuCupj4vGdORsB-GRowAGo_4Z-nwtilfgo1quCz9UBpceDr3O-1TYLQvAs3H)
+or
+- [Click here to join the zoom meeting right now (968 5537 4061)](https://zoom.us/j/96855374061)
diff --git a/data/readmes/kubernetes-v1350-rc0.md b/data/readmes/kubernetes-v1350-rc0.md
new file mode 100644
index 0000000..ad734ee
--- /dev/null
+++ b/data/readmes/kubernetes-v1350-rc0.md
@@ -0,0 +1,107 @@
+# Kubernetes - README (v1.35.0-rc.0)
+
+**Repository**: https://github.com/kubernetes/kubernetes
+**Version**: v1.35.0-rc.0
+
+---
+
+# Kubernetes (K8s)
+
+[](https://bestpractices.coreinfrastructure.org/projects/569) [](https://goreportcard.com/report/github.com/kubernetes/kubernetes) 
+
+
+
+----
+
+Kubernetes, also known as K8s, is an open source system for managing [containerized applications]
+across multiple hosts. It provides basic mechanisms for the deployment, maintenance,
+and scaling of applications.
+
+Kubernetes builds upon a decade and a half of experience at Google running
+production workloads at scale using a system called [Borg],
+combined with best-of-breed ideas and practices from the community.
+
+Kubernetes is hosted by the Cloud Native Computing Foundation ([CNCF]).
+If your company wants to help shape the evolution of
+technologies that are container-packaged, dynamically scheduled,
+and microservices-oriented, consider joining the CNCF.
+For details about who's involved and how Kubernetes plays a role,
+read the CNCF [announcement].
+
+----
+
+## To start using K8s
+
+See our documentation on [kubernetes.io].
+
+Take a free course on [Scalable Microservices with Kubernetes].
+
+To use Kubernetes code as a library in other applications, see the [list of published components](https://git.k8s.io/kubernetes/staging/README.md).
+Use of the `k8s.io/kubernetes` module or `k8s.io/kubernetes/...` packages as libraries is not supported.
+
+## To start developing K8s
+
+The [community repository] hosts all information about
+building Kubernetes from source, how to contribute code
+and documentation, who to contact about what, etc.
+
+If you want to build Kubernetes right away there are two options:
+
+##### You have a working [Go environment].
+
+```
+git clone https://github.com/kubernetes/kubernetes
+cd kubernetes
+make
+```
+
+##### You have a working [Docker environment].
+
+```
+git clone https://github.com/kubernetes/kubernetes
+cd kubernetes
+make quick-release
+```
+
+For the full story, head over to the [developer's documentation].
+
+## Support
+
+If you need support, start with the [troubleshooting guide],
+and work your way through the process that we've outlined.
+
+That said, if you have questions, reach out to us
+[one way or another][communication].
+
+[announcement]: https://cncf.io/news/announcement/2015/07/new-cloud-native-computing-foundation-drive-alignment-among-container
+[Borg]: https://research.google.com/pubs/pub43438.html?authuser=1
+[CNCF]: https://www.cncf.io/about
+[communication]: https://git.k8s.io/community/communication
+[community repository]: https://git.k8s.io/community
+[containerized applications]: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
+[developer's documentation]: https://git.k8s.io/community/contributors/devel#readme
+[Docker environment]: https://docs.docker.com/engine
+[Go environment]: https://go.dev/doc/install
+[kubernetes.io]: https://kubernetes.io
+[Scalable Microservices with Kubernetes]: https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615
+[troubleshooting guide]: https://kubernetes.io/docs/tasks/debug/
+
+## Community Meetings
+
+The [Calendar](https://www.kubernetes.dev/resources/calendar/) has the list of all the meetings in the Kubernetes community in a single location.
+
+## Adopters
+
+The [User Case Studies](https://kubernetes.io/case-studies/) website has real-world use cases of organizations across industries that are deploying/migrating to Kubernetes.
+
+## Governance
+
+The Kubernetes project is governed by a framework of principles, values, policies, and processes that help our community and constituents work towards our shared goals.
+
+The [Kubernetes Community](https://github.com/kubernetes/community/blob/master/governance.md) is the launching point for learning about how we organize ourselves.
+
+The [Kubernetes Steering community repo](https://github.com/kubernetes/steering) is used by the Kubernetes Steering Committee, which oversees governance of the Kubernetes project.
+
+## Roadmap
+
+The [Kubernetes Enhancements repo](https://github.com/kubernetes/enhancements) provides information about Kubernetes releases, as well as feature tracking and backlogs.
diff --git a/data/readmes/kubescape-v3046.md b/data/readmes/kubescape-v3046.md
new file mode 100644
index 0000000..3d8e7e8
--- /dev/null
+++ b/data/readmes/kubescape-v3046.md
@@ -0,0 +1,507 @@
+# Kubescape - README (v3.0.46)
+
+**Repository**: https://github.com/kubescape/kubescape
+**Version**: v3.0.46
+
+---
+
+[](https://github.com/kubescape/kubescape/releases)
+[](https://github.com/kubescape/kubescape/actions/workflows/02-release.yaml)
+[](https://goreportcard.com/report/github.com/kubescape/kubescape)
+[](https://gitpod.io/#https://github.com/kubescape/kubescape)
+[](https://github.com/kubescape/kubescape/blob/master/LICENSE)
+[](https://landscape.cncf.io/?item=provisioning--security-compliance--kubescape)
+[](https://artifacthub.io/packages/search?repo=kubescape)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkubescape%2Fkubescape?ref=badge_shield&issueType=license)
+[](https://www.bestpractices.dev/projects/6944)
+[](https://securityscorecards.dev/viewer/?uri=github.com/kubescape/kubescape)
+[](https://kubescape.io/docs/)
+[](https://github.com/kubescape/kubescape/stargazers)
+[](https://twitter.com/kubescape)
+[](https://cloud-native.slack.com/archives/C04EY3ZF9GE)
+
+# Kubescape
+
+
+
+
+
+
+
+_Comprehensive Kubernetes Security from Development to Runtime_
+
+Kubescape is an open-source Kubernetes security platform that provides comprehensive security coverage, from left to right across the entire development and deployment lifecycle. It offers hardening, posture management, and runtime security capabilities to ensure robust protection for Kubernetes environments.
+
+Kubescape was created by [ARMO](https://www.armosec.io/?utm_source=github&utm_medium=repository) and is a [Cloud Native Computing Foundation (CNCF) incubating project](https://www.cncf.io/projects/).
+
+_Please [star ⭐](https://github.com/kubescape/kubescape/stargazers) the repo if you want us to continue developing and improving Kubescape!_
+
+---
+
+## 📑 Table of Contents
+
+- [Features](#-features)
+- [Demo](#-demo)
+- [Quick Start](#-quick-start)
+- [Installation](#-installation)
+- [CLI Commands](#-cli-commands)
+- [Usage Examples](#-usage-examples)
+- [Architecture](#-architecture)
+- [In-Cluster Operator](#-in-cluster-operator)
+- [Integrations](#-integrations)
+- [Community](#-community)
+- [Changelog](#changelog)
+- [License](#license)
+
+---
+
+## ✨ Features
+
+| Feature | Description |
+|---------|-------------|
+| 🔍 **Misconfiguration Scanning** | Scan clusters, YAML files, and Helm charts against NSA-CISA, MITRE ATT&CK®, and CIS Benchmarks |
+| 🐳 **Image Vulnerability Scanning** | Detect CVEs in container images using [Grype](https://github.com/anchore/grype) |
+| 🩹 **Image Patching** | Automatically patch vulnerable images using [Copacetic](https://github.com/project-copacetic/copacetic) |
+| 🔧 **Auto-Remediation** | Automatically fix misconfigurations in Kubernetes manifests |
+| 🛡️ **Admission Control** | Enforce security policies with Validating Admission Policies (VAP) |
+| 📊 **Runtime Security** | eBPF-based runtime monitoring via [Inspektor Gadget](https://github.com/inspektor-gadget) |
+| 🤖 **AI Integration** | MCP server for AI assistant integration |
+
+---
+
+## 🎬 Demo
+
+
+
+---
+
+## 🚀 Quick Start
+
+### 1. Install Kubescape
+
+```sh
+curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
+```
+
+> 💡 See [Installation](#-installation) for more options (Homebrew, Krew, Windows, etc.)
+
+### 2. Run Your First Scan
+
+```sh
+# Scan your current cluster
+kubescape scan
+
+# Scan a specific YAML file or directory
+kubescape scan /path/to/manifests/
+
+# Scan a container image for vulnerabilities
+kubescape scan image nginx:latest
+```
+
+### 3. Explore the Results
+
+Kubescape provides a detailed security posture overview including:
+- Control plane security status
+- Access control risks
+- Workload misconfigurations
+- Network policy gaps
+- Compliance scores (MITRE, NSA)
+
+---
+
+## 📦 Installation
+
+### One-Line Install (Linux/macOS)
+
+```bash
+curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
+```
+
+### Package Managers
+
+| Platform | Command |
+|----------|---------|
+| **Homebrew** | `brew install kubescape` |
+| **Krew** | `kubectl krew install kubescape` |
+| **Arch Linux** | `yay -S kubescape` |
+| **Ubuntu** | `sudo add-apt-repository ppa:kubescape/kubescape && sudo apt install kubescape` |
+| **NixOS** | `nix-shell -p kubescape` |
+| **Chocolatey** | `choco install kubescape` |
+| **Scoop** | `scoop install kubescape` |
+
+### Windows (PowerShell)
+
+```powershell
+iwr -useb https://raw.githubusercontent.com/kubescape/kubescape/master/install.ps1 | iex
+```
+
+📖 **[Full Installation Guide →](docs/installation.md)**
+
+---
+
+## 🛠️ CLI Commands
+
+Kubescape provides a comprehensive CLI with the following commands:
+
+| Command | Description |
+|---------|-------------|
+| [`kubescape scan`](#scanning) | Scan cluster, files, or images for security issues |
+| [`kubescape scan image`](#image-scanning) | Scan container images for vulnerabilities |
+| [`kubescape fix`](#auto-fix) | Auto-fix misconfigurations in manifest files |
+| [`kubescape patch`](#image-patching) | Patch container images to fix vulnerabilities |
+| [`kubescape list`](#list-frameworks-and-controls) | List available frameworks and controls |
+| [`kubescape download`](#offline-support) | Download artifacts for offline/air-gapped use |
+| [`kubescape config`](#configuration) | Manage cached configurations |
+| [`kubescape operator`](#operator-commands) | Interact with in-cluster Kubescape operator |
+| [`kubescape vap`](#validating-admission-policies) | Manage Validating Admission Policies |
+| [`kubescape mcpserver`](#mcp-server) | Start MCP server for AI assistant integration |
+| `kubescape completion` | Generate shell completion scripts |
+| `kubescape version` | Display version information |
+
+---
+
+## 📖 Usage Examples
+
+### Scanning
+
+#### Scan a Running Cluster
+
+```bash
+# Default scan (all frameworks)
+kubescape scan
+
+# Scan with a specific framework
+kubescape scan framework nsa
+kubescape scan framework mitre
+kubescape scan framework cis-v1.23-t1.0.1
+
+# Scan a specific control
+kubescape scan control C-0005 -v
+```
+
+#### Scan Files and Repositories
+
+```bash
+# Scan local YAML files
+kubescape scan /path/to/manifests/
+
+# Scan a Helm chart
+kubescape scan /path/to/helm/chart/
+
+# Scan a Git repository
+kubescape scan https://github.com/kubescape/kubescape
+
+# Scan with Kustomize
+kubescape scan /path/to/kustomize/directory/
+```
+
+#### Scan Options
+
+```bash
+# Include/exclude namespaces
+kubescape scan --include-namespaces production,staging
+kubescape scan --exclude-namespaces kube-system,kube-public
+
+# Use alternative kubeconfig
+kubescape scan --kubeconfig /path/to/kubeconfig
+
+# Set compliance threshold (exit code 1 if below threshold)
+kubescape scan --compliance-threshold 80
+
+# Set severity threshold
+kubescape scan --severity-threshold high
+```
+
+#### Output Formats
+
+```bash
+# JSON output
+kubescape scan --format json --output results.json
+
+# JUnit XML (for CI/CD)
+kubescape scan --format junit --output results.xml
+
+# SARIF (for GitHub Code Scanning)
+kubescape scan --format sarif --output results.sarif
+
+# HTML report
+kubescape scan --format html --output report.html
+
+# PDF report
+kubescape scan --format pdf --output report.pdf
+```
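+
+As an example of wiring the SARIF output into CI, a minimal GitHub Actions job could run a scan and upload the report to GitHub Code Scanning. This is a sketch: the install step and file names are assumptions, and `github/codeql-action/upload-sarif` is GitHub's standard SARIF upload action.
+
+```yaml
+jobs:
+  kubescape:
+    runs-on: ubuntu-latest
+    permissions:
+      security-events: write   # required to upload SARIF results
+    steps:
+      - uses: actions/checkout@v4
+      - name: Install Kubescape
+        run: curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
+      - name: Scan manifests
+        run: kubescape scan . --format sarif --output results.sarif
+      - name: Upload SARIF report
+        uses: github/codeql-action/upload-sarif@v3
+        with:
+          sarif_file: results.sarif
+```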
+
+### Image Scanning
+
+```bash
+# Scan a public image
+kubescape scan image nginx:1.21
+
+# Scan with verbose output
+kubescape scan image nginx:1.21 -v
+
+# Scan a private registry image
+kubescape scan image myregistry/myimage:tag --username user --password pass
+```
+
+### Auto-Fix
+
+Automatically fix misconfigurations in your manifest files:
+
+```bash
+# First, scan and save results to JSON
+kubescape scan /path/to/manifests --format json --output results.json
+
+# Then apply fixes
+kubescape fix results.json
+
+# Dry run (preview changes without applying)
+kubescape fix results.json --dry-run
+
+# Apply fixes without confirmation prompts
+kubescape fix results.json --no-confirm
+```
+
+### Image Patching
+
+Patch container images to fix OS-level vulnerabilities:
+
+```bash
+# Start buildkitd (required)
+sudo buildkitd &
+
+# Patch an image
+sudo kubescape patch --image docker.io/library/nginx:1.22
+
+# Specify custom output tag
+sudo kubescape patch --image nginx:1.22 --tag nginx:1.22-patched
+
+# See detailed vulnerability report
+sudo kubescape patch --image nginx:1.22 -v
+```
+
+📖 **[Full Patch Command Documentation →](cmd/patch/README.md)**
+
+### List Frameworks and Controls
+
+```bash
+# List available frameworks
+kubescape list frameworks
+
+# List all controls
+kubescape list controls
+
+# Output as JSON
+kubescape list controls --format json
+```
+
+### Offline Support
+
+Download artifacts for air-gapped environments:
+
+```bash
+# Download all artifacts
+kubescape download artifacts --output /path/to/offline/dir
+
+# Download a specific framework
+kubescape download framework nsa --output /path/to/nsa.json
+
+# Scan using downloaded artifacts
+kubescape scan --use-artifacts-from /path/to/offline/dir
+```
+
+### Configuration
+
+```bash
+# View current configuration
+kubescape config view
+
+# Set account ID
+kubescape config set accountID <ACCOUNT_ID>
+
+# Delete cached configuration
+kubescape config delete
+```
+
+### Operator Commands
+
+Interact with the in-cluster Kubescape operator:
+
+```bash
+# Trigger a configuration scan
+kubescape operator scan configurations
+
+# Trigger a vulnerability scan
+kubescape operator scan vulnerabilities
+```
+
+### Validating Admission Policies
+
+Manage Kubernetes Validating Admission Policies:
+
+```bash
+# Deploy the Kubescape CEL admission policy library
+kubescape vap deploy-library | kubectl apply -f -
+
+# Create a policy binding
+kubescape vap create-policy-binding \
+ --name my-policy-binding \
+ --policy c-0016 \
+ --namespace my-namespace | kubectl apply -f -
+```
+
+### MCP Server
+
+Start an MCP (Model Context Protocol) server for AI assistant integration:
+
+```bash
+kubescape mcpserver
+```
+
+The MCP server exposes Kubescape's vulnerability and configuration scan data to AI assistants, enabling natural language queries about your cluster's security posture.
+
+**Available MCP Tools:**
+- `list_vulnerability_manifests` - Discover vulnerability manifests
+- `list_vulnerabilities_in_manifest` - List CVEs in a manifest
+- `list_vulnerability_matches_for_cve` - Get details for a specific CVE
+- `list_configuration_security_scan_manifests` - List configuration scan results
+- `get_configuration_security_scan_manifest` - Get configuration scan details
+
+---
+
+## 🏗️ Architecture
+
+Kubescape can run in two modes:
+
+### CLI Mode
+
+The CLI is a standalone tool that scans clusters, files, and images on-demand.
+
+
+
+---
+
+## 👥 Community
+
+Kubescape is a CNCF incubating project with an active community.
+
+### Get Involved
+
+- 💬 **[Slack - Users Channel](https://cloud-native.slack.com/archives/C04EY3ZF9GE)** - Ask questions, get help
+- 💬 **[Slack - Developers Channel](https://cloud-native.slack.com/archives/C04GY6H082K)** - Contribute to development
+- 🐛 **[GitHub Issues](https://github.com/kubescape/kubescape/issues)** - Report bugs and request features
+- 📋 **[Project Board](https://github.com/orgs/kubescape/projects/4)** - See what we're working on
+- 🗺️ **[Roadmap](https://github.com/kubescape/project-governance/blob/main/ROADMAP.md)** - Future plans
+
+### Contributing
+
+We welcome contributions! Please see our:
+- **[Contributing Guide](https://github.com/kubescape/project-governance/blob/main/CONTRIBUTING.md)**
+- **[Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)**
+
+### Community Resources
+
+- **[Community Info](https://github.com/kubescape/project-governance/blob/main/COMMUNITY.md)**
+- **[Governance](https://github.com/kubescape/project-governance/blob/main/GOVERNANCE.md)**
+- **[Security Policy](https://github.com/kubescape/project-governance/blob/main/SECURITY.md)**
+- **[Maintainers](https://github.com/kubescape/project-governance/blob/main/MAINTAINERS.md)**
+
+### Contributors
+
+
+
+
+
+---
+
+## Changelog
+
+Kubescape changes are tracked on the [releases page](https://github.com/kubescape/kubescape/releases).
+
+---
+
+## License
+
+Copyright 2021-2025, the Kubescape Authors. All rights reserved.
+
+Kubescape is released under the [Apache 2.0 license](LICENSE).
+
+Kubescape is a [Cloud Native Computing Foundation (CNCF) incubating project](https://www.cncf.io/projects/kubescape/) and was contributed by [ARMO](https://www.armosec.io/?utm_source=github&utm_medium=repository).
+
+
+
+
\ No newline at end of file
diff --git a/data/readmes/kubeslice-kubeslice-worker-140.md b/data/readmes/kubeslice-kubeslice-worker-140.md
new file mode 100644
index 0000000..bcfa098
--- /dev/null
+++ b/data/readmes/kubeslice-kubeslice-worker-140.md
@@ -0,0 +1,43 @@
+# KubeSlice - README (kubeslice-worker-1.4.0)
+
+**Repository**: https://github.com/kubeslice/kubeslice
+**Version**: kubeslice-worker-1.4.0
+
+---
+
+# Kubeslice Community Kubernetes Helm Charts
+
+[](https://opensource.org/licenses/Apache-2.0)
+
+KubeSlice provides network services to applications that need secure and highly available connectivity across multiple clusters. It creates a flat overlay network connecting the clusters: an application slice that provides connectivity between the pods of an application running in multiple clusters, or, put another way, an application-specific VPC that spans clusters. Pods connect to the slice overlay network and communicate with each other seamlessly across cluster boundaries.
+
+# Architecture Overview
+See the [KubeSlice Reference Architecture](https://kubeslice.io/documentation/open-source/latest/overview/architecture) for an overview of the overall architecture and core components.
+
+# Usage
+
+[Helm](https://helm.sh) must be installed to use these charts. See the Helm [documentation](https://helm.sh/docs/) to get started.
+
+
+Once Helm is set up properly, add the repo as follows:
+
+```console
+helm repo add kubeslice https://kubeslice.github.io/kubeslice/
+```
+
+You can then run `helm search repo kubeslice` to see the charts.
+
+Note: See the GitHub documentation for details on [generating a personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).
+
+
+Quick Start
+---
+
+See [Sandbox](https://kubeslice.io/documentation/open-source/1.3.0/playground/sandbox) for instructions on setting up a local KubeSlice environment with [`kind`](https://kind.sigs.k8s.io/) for non-production use.
+
+To set up KubeSlice on cloud platforms, refer to the [Cloud Clusters Demo](https://kubeslice.io/documentation/open-source/latest/tutorials/kubeslice-cli-tutorials/kubeslice-cli-demo-on-cloud-clusters).
+
+
+Guide
+---
+Full and comprehensive documentation is available on our open-source [documentation](https://kubeslice.io/documentation/open-source/) website.
diff --git a/data/readmes/kubestellar-v0290.md b/data/readmes/kubestellar-v0290.md
new file mode 100644
index 0000000..62b3ee8
--- /dev/null
+++ b/data/readmes/kubestellar-v0290.md
@@ -0,0 +1,117 @@
+# KubeStellar - README (v0.29.0)
+
+**Repository**: https://github.com/kubestellar/kubestellar
+**Version**: v0.29.0
+
+---
+
+
+
+
+
+
+
+
+
+
+## Multi-cluster Configuration Management for Edge, Multi-Cloud, and Hybrid Cloud
+
+[](https://www.firsttimersonly.com/)
+[](https://github.com/kubestellar/kubestellar/actions/workflows/broken-links-crawler.yml)
+[](https://www.bestpractices.dev/projects/8266)
+[](https://scorecard.dev/viewer/?uri=github.com/kubestellar/kubestellar)
+[](https://artifacthub.io/packages/search?repo=kubestellar)
+
+
+
+
+
+
+
+**KubeStellar** is a Cloud Native Computing Foundation (CNCF) Sandbox project that simplifies the deployment and configuration of applications across multiple Kubernetes clusters. It provides a seamless experience akin to using a single cluster, and it integrates with the tools you're already familiar with, eliminating the need to modify existing resources.
+
+KubeStellar is particularly beneficial if you're currently deploying in a single cluster and are looking to expand to multiple clusters, or if you're already using multiple clusters and are seeking a more streamlined developer experience.
+
+
+
+
+
+The use of multiple clusters offers several advantages, including:
+
+- Separation of environments (e.g., development, testing, staging)
+- Isolation of groups, teams, or departments
+- Compliance with enterprise security or data governance requirements
+- Enhanced resiliency, including across different clouds
+- Improved resource availability
+- Access to heterogeneous resources
+- Capability to run applications on the edge, including in disconnected environments
+
+In a single-cluster setup, developers typically access the cluster and deploy Kubernetes objects directly. Without KubeStellar, multiple clusters are usually deployed and configured individually, which can be time-consuming and complex.
+
+KubeStellar simplifies this process by allowing developers to define a binding policy between clusters and Kubernetes objects. It then uses your regular single-cluster tooling to deploy and configure each cluster based on these binding policies, making multi-cluster operations as straightforward as managing a single cluster. This approach enhances productivity and efficiency, making KubeStellar a valuable tool in a multi-cluster Kubernetes environment.
+
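As an illustration, such a binding policy can select both the clusters and the objects by label. The following is a hedged sketch in the shape of KubeStellar's `BindingPolicy` API; the name and labels here are hypothetical:

```yaml
# Hypothetical example: downsync every object labeled
# app.kubernetes.io/name=nginx to every cluster labeled
# location-group=edge.
apiVersion: control.kubestellar.io/v1alpha1
kind: BindingPolicy
metadata:
  name: nginx-bindingpolicy
spec:
  clusterSelectors:
  - matchLabels:
      location-group: edge
  downsync:
  - objectSelectors:
    - matchLabels:
        app.kubernetes.io/name: nginx
```
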
+## Website
+
+For usage, architecture, and other documentation, see [the website](https://kubestellar.io).
+
+## Contributing
+
+We ❤️ our contributors! If you're interested in helping us out, please head over to our [Contributing](https://github.com/kubestellar/kubestellar/blob/main/CONTRIBUTING.md) guide and be sure to look at `main` or the release of interest to you.
+
+This community has a [Code of Conduct](./CODE_OF_CONDUCT.md). Please make sure to follow it.
+
+## Our Roadmap
+To see what we are working on next, have a look at our [Roadmap](docs/content/direct/roadmap.md).
+
+## Getting in touch
+
+There are several ways to communicate with us:
+
+- Instantly get access to our documents and meeting invites at http://kubestellar.io/joinus
+- The [`#kubestellar-dev` channel](https://cloud-native.slack.com/archives/C097094RZ3M) in the [CNCF Slack workspace](https://communityinviter.com/apps/cloud-native/cncf)
+- Our mailing lists:
+ - [kubestellar-dev](https://groups.google.com/g/kubestellar-dev) for development discussions
+ - [kubestellar-users](https://groups.google.com/g/kubestellar-users) for discussions among users and potential users
+- Subscribe to the [community meeting calendar](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=MWM4a2loZDZrOWwzZWQzZ29xanZwa3NuMWdfMjAyMzA1MThUMTQwMDAwWiBiM2Q2NWM5MmJlZDdhOTg4NGVmN2ZlOWUzZjZjOGZlZDE2ZjZmYjJmODExZjU3NTBmNTQ3NTY3YTVkZDU4ZmVkQGc&tmsrc=b3d65c92bed7a9884ef7fe9e3f6c8fed16f6fb2f811f5750f547567a5dd58fed%40group.calendar.google.com&scp=ALL) for community meetings and events
+ - The [kubestellar-dev](https://groups.google.com/g/kubestellar-dev) mailing list is subscribed to this calendar
+- See recordings of past KubeStellar community meetings on [YouTube](https://www.youtube.com/@kubestellar)
+- See [upcoming](https://github.com/kubestellar/kubestellar/issues?q=is%3Aissue+is%3Aopen+label%3Acommunity-meeting) and [past](https://github.com/kubestellar/kubestellar/issues?q=is%3Aissue+is%3Aclosed+label%3Acommunity-meeting) community meeting agendas and notes
+- Browse the [shared Google Drive](https://drive.google.com/drive/folders/1p68MwkX0sYdTvtup0DcnAEsnXElobFLS?usp=sharing) to share design docs, notes, etc.
+ - Members of the [kubestellar-dev](https://groups.google.com/g/kubestellar-dev) mailing list can view this drive
+- Follow us on:
+ - LinkedIn - [#kubestellar](https://www.linkedin.com/feed/hashtag/?keywords=kubestellar)
+ - Medium - [kubestellar.medium.com](https://medium.com/@kubestellar/list/predefined:e785a0675051:READING_LIST)
+
+
+
+
+
+[](https://goreportcard.com/report/github.com/kubevela/kubevela)
+
+[](https://codecov.io/gh/kubevela/kubevela)
+[](/LICENSE)
+[](https://github.com/kubevela/kubevela/releases)
+[](https://www.tickgit.com/browse?repo=github.com/kubevela/kubevela)
+[](https://twitter.com/oam_dev)
+[](https://artifacthub.io/packages/search?repo=kubevela)
+[](https://bestpractices.coreinfrastructure.org/projects/4602)
+
+[](https://scorecard.dev/viewer/?uri=github.com/kubevela/kubevela)
+[](https://opensource.alibaba.com/contribution_leaderboard/details?projectValue=kubevela)
+
+## Introduction
+
+KubeVela is a modern application delivery platform that makes deploying and operating applications across today's hybrid, multi-cloud environments easier, faster and more reliable.
+
+
+
+## Highlights
+
+KubeVela follows a "render, orchestrate, deploy" workflow and adds the following value to the existing ecosystem:
+
+#### **Deployment as Code**
+
+Declare your deployment plan as a workflow and run it automatically with any CI/CD or GitOps system; extend or re-program the workflow steps with [CUE](https://cuelang.org/).
+No ad-hoc scripts, no dirty glue code, just deploy. The deployment workflow in KubeVela is powered by the [Open Application Model](https://oam.dev/).
+
+#### **Built-in observability, multi-tenancy and security support**
+
+Choose from the wide range of LDAP integrations we provide out of the box, enjoy enhanced [multi-tenancy and multi-cluster authorization and authentication](https://kubevela.net/docs/platform-engineers/auth/advance),
+pick and apply fine-grained RBAC modules and customize them to your own supply-chain requirements.
+The whole delivery process is covered by fully [automated observability dashboards](https://kubevela.net/docs/platform-engineers/operations/observability).
+
+#### **Multi-cloud/hybrid-environments app delivery as first-class citizen**
+
+Natively supports multi-cluster and hybrid-cloud scenarios such as progressive rollout across test/staging/production environments,
+automatic canary, blue-green, and continuous verification, rich placement strategies across clusters and clouds,
+along with automated cloud environment provisioning.
+
+#### **Lightweight but highly extensible architecture**
+
+Minimize your control-plane deployment to a single pod with 0.5 CPU core and 1 GB of memory that can handle thousands of application deliveries.
+Glue and orchestrate all your infrastructure capabilities as reusable modules with a highly extensible architecture,
+and share them with the large and growing community of [addons](https://kubevela.net/docs/reference/addons/overview).
+
+## Getting Started
+
+* [Introduction](https://kubevela.io/docs)
+* [Installation](https://kubevela.io/docs/install)
+* [Deploy Your Application](https://kubevela.io/docs/quick-start)
+
+### Get Your Own Demo with Alibaba Cloud
+
+- Install KubeVela on a serverless Kubernetes cluster in 3 minutes:
+
+
+
+
+
+## Documentation
+
+Full documentation is available on the [KubeVela website](https://kubevela.io/).
+
+## Blog
+
+Official blog is available on [KubeVela blog](https://kubevela.io/blog).
+
+## Community
+
+We want your contributions and suggestions!
+One of the easiest ways to contribute is to participate in discussions on GitHub Issues/Discussions, chat on IM, or join the bi-weekly community calls.
+For more information on community engagement, developer and contributing guidelines, and more, head over to the [KubeVela community repo](https://github.com/kubevela/community).
+
+### Contact Us
+
+Reach out with any questions you may have and we'll make sure to answer them as soon as possible!
+
+- Slack: [CNCF Slack kubevela channel](https://cloud-native.slack.com/archives/C01BLQ3HTJA) (*English*)
+- [DingTalk Group](https://page.dingtalk.com/wow/dingtalk/act/en-home): `23310022` (*Chinese*)
+- WeChat Group (*Chinese*): add the broker's WeChat account, and they will invite you into the user group.
+
+
+
+### Community Call
+
+Every two weeks we host a community call to showcase new features, review upcoming milestones, and engage in a Q&A. All are welcome!
+
+- Bi-weekly Community Call:
+ - [Meeting Notes](https://docs.google.com/document/d/1nqdFEyULekyksFHtFvgvFAYE-0AMHKoS3RMnaKsarjs).
+ - [Video Records](https://www.youtube.com/channel/UCSCTHhGI5XJ0SEhDHVakPAA/videos).
+- Bi-weekly Chinese Community Call:
+ - [Video Records](https://space.bilibili.com/180074935/channel/seriesdetail?sid=1842207).
+
+## Talks and Conferences
+
+Check out [KubeVela videos](https://kubevela.io/videos/talks/en/oam-dapr) for these talks and conferences.
+
+## Contributing
+
+Check out [CONTRIBUTING](https://kubevela.io/docs/contributor/overview) to see how to develop with KubeVela.
+
+## Report Vulnerability
+
+Security is a top priority for us at KubeVela. If you come across a security issue, please send an email to security@mail.kubevela.io.
+
+## Code of Conduct
+
+KubeVela adopts [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
\ No newline at end of file
diff --git a/data/readmes/kubevirt-v170.md b/data/readmes/kubevirt-v170.md
new file mode 100644
index 0000000..8d8feb7
--- /dev/null
+++ b/data/readmes/kubevirt-v170.md
@@ -0,0 +1,156 @@
+# KubeVirt - README (v1.7.0)
+
+**Repository**: https://github.com/kubevirt/kubevirt
+**Version**: v1.7.0
+
+---
+
+# KubeVirt
+
+
+
+
+
+**KubeVirt** is a virtual machine management add-on for Kubernetes.
+The aim is to provide a common ground for virtualization solutions on top of
+Kubernetes.
+
+## Introduction
+
+### Virtualization extension for Kubernetes
+
+At its core, KubeVirt extends [Kubernetes][k8s] by adding
+additional virtualization resource types (especially the `VM` type) through
+[Kubernetes's Custom Resource Definitions API][crd].
+By using this mechanism, the Kubernetes API can be used to manage these `VM`
+resources alongside all other resources Kubernetes provides.
+
+The resources themselves are not enough to launch virtual machines.
+For this to happen the _functionality and business logic_ needs to be added to
+the cluster. The functionality is not added to Kubernetes itself, but rather
+added to a Kubernetes cluster by _running_ additional controllers and agents
+on an existing cluster.
+
+The necessary controllers and agents are provided by KubeVirt.
+
+As of today KubeVirt can be used to declaratively
+
+ * Create a predefined VM
+ * Schedule a VM on a Kubernetes cluster
+ * Launch a VM
+ * Stop a VM
+ * Delete a VM
+
+[](https://asciinema.org/a/497168)
+
+## To start using KubeVirt
+
+Try our quickstart at [kubevirt.io](https://kubevirt.io/get_kubevirt/).
+
+See our user documentation at [kubevirt.io/docs](https://kubevirt.io/user-guide).
+
+Once you have the basics, you can learn more about how to run KubeVirt and its newest features by taking a look at:
+
+ * [KubeVirt blog](https://kubevirt.io/blogs/)
+ * [KubeVirt Youtube channel](https://www.youtube.com/channel/UC2FH36TbZizw25pVT1P3C3g)
+
+## To start developing KubeVirt
+
+To set up a development environment please read our
+[Getting Started Guide](docs/getting-started.md). To learn how to contribute, please read our [contribution guide](https://github.com/kubevirt/kubevirt/blob/main/CONTRIBUTING.md).
+
+You can learn more about how KubeVirt is designed (and why it is that way),
+and learn more about the major components by taking a look at
+[our developer documentation](docs/):
+
+ * [Architecture](docs/architecture.md) - High-level view on the architecture
+ * [Components](docs/components.md) - Detailed look at all components
+ * [API Reference](https://kubevirt.io/api-reference/)
+
+## Useful links
+
+The KubeVirt SIG-release repo maintains information about upcoming and previous releases.
+
+ * [KubeVirt to Kubernetes version support matrix](https://github.com/kubevirt/sig-release/blob/main/releases/k8s-support-matrix.md) - Verify the versions of KubeVirt that are built and supported for your version of Kubernetes
+ * [Noteworthy changes for the next KubeVirt release](https://github.com/kubevirt/sig-release/blob/main/upcoming-changes.md) - Pre-release notes for the upcoming release
+ * [Release schedule](https://github.com/kubevirt/sig-release/blob/main/releases/) - For our current and previous releases
+
+## Community
+
+If you have had enough of code and want to talk to people, you have a couple
+of options:
+
+* Follow us on [Twitter](https://twitter.com/kubevirt)
+* Chat with us on Slack via [#virtualization @ kubernetes.slack.com](https://kubernetes.slack.com/?redir=%2Farchives%2FC8ED7RKFE)
+* Discuss with us on the [kubevirt-dev Google Group](https://groups.google.com/forum/#!forum/kubevirt-dev)
+* Stay informed about designs and upcoming events by watching our [community content](https://github.com/kubevirt/community/)
+
+### Related resources
+
+ * [Kubernetes][k8s]
+ * [Libvirt][libvirt]
+ * [Cockpit][cockpit]
+ * [kubevirt.core][kubevirt.core] Ansible collection
+
+### Submitting patches
+
+When sending patches to the project, the submitter is required to certify that
+they have the legal right to submit the code. This is achieved by adding a line
+
+ Signed-off-by: Real Name
+
+to the bottom of every commit message. The existence of such a line certifies
+that the submitter has complied with the Developer's Certificate of Origin 1.1
+(as defined in the file docs/developer-certificate-of-origin).
+
+This line can be added to a commit automatically, in the correct format, by
+using the `-s` option of `git commit`.
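For example, the trailer can be checked locally in a scratch repository (the path and identity below are illustrative):

```shell
# Create a throwaway repository and make a signed-off commit.
tmp=/tmp/dco-demo && rm -rf "$tmp" && mkdir -p "$tmp" && cd "$tmp"
git init -q .
git config user.name "Real Name"
git config user.email "real.name@example.com"
echo demo > file.txt && git add file.txt
git commit -q -s -m "Add demo file"   # -s appends the Signed-off-by trailer
git log -1 --format=%B                # last line: Signed-off-by: Real Name <real.name@example.com>
```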
+
+## License
+
+KubeVirt is distributed under the
+[Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.txt).
+
+ This file is part of the KubeVirt project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+ Copyright The KubeVirt Authors.
+
+[//]: # (Reference links)
+ [k8s]: https://kubernetes.io
+ [crd]: https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/
+ [ovirt]: https://www.ovirt.org
+ [cockpit]: https://cockpit-project.org/
+ [libvirt]: https://www.libvirt.org
+ [kubevirt.core]: https://github.com/kubevirt/kubevirt.core
+
+## FOSSA Status
+
+[](https://app.fossa.com/projects/custom%2B13072%2Fgit%40github.com%3Akubevirt%2Fkubevirt.git?ref=badge_large)
diff --git a/data/readmes/kubewarden-v1310.md b/data/readmes/kubewarden-v1310.md
new file mode 100644
index 0000000..c994eff
--- /dev/null
+++ b/data/readmes/kubewarden-v1310.md
@@ -0,0 +1,252 @@
+# Kubewarden - README (v1.31.0)
+
+**Repository**: https://github.com/kubewarden/kubewarden-controller
+**Version**: v1.31.0
+
+---
+
+[](https://github.com/kubewarden/community/blob/main/REPOSITORIES.md#core-scope)
+[](https://github.com/kubewarden/community/blob/main/REPOSITORIES.md#stable)
+[](https://artifacthub.io/packages/helm/kubewarden/kubewarden-controller)
+[](https://www.bestpractices.dev/projects/6502)
+[](https://app.fossa.com/projects/custom%252B25850%252Fgithub.com%252Fkubewarden%252Fkubewarden-controller?ref=badge_shield)
+[](https://scorecard.dev/viewer/?uri=github.com/kubewarden/kubewarden-controller)
+[](https://clomonitor.io/projects/cncf/kubewarden)
+
+Kubewarden is a Kubernetes Dynamic Admission Controller that uses policies written
+in WebAssembly.
+
+For more information refer to the [official Kubewarden website](https://kubewarden.io/).
+
+# kubewarden-controller
+
+`kubewarden-controller` is a Kubernetes controller that allows you to
+dynamically register Kubewarden admission policies.
+
+The `kubewarden-controller` reconciles the admission policies you
+have registered with the Kubernetes webhooks of the cluster where
+it's deployed.
+
+## Installation
+
+The kubewarden-controller can be deployed using a Helm chart. For instructions,
+see https://charts.kubewarden.io.
+
+## Usage
+
+Once the kubewarden-controller is up and running, you can define Kubewarden policies
+using the `ClusterAdmissionPolicy` resource.
+
+The documentation of this Custom Resource can be found
+[here](https://github.com/kubewarden/kubewarden-controller/blob/main/docs/crds/README.asciidoc)
+or on [docs.crds.dev](https://doc.crds.dev/github.com/kubewarden/kubewarden-controller).
+
+**Note:** `ClusterAdmissionPolicy` resources are cluster-wide.
+
+### Deploy your first admission policy
+
+The following snippet defines a Kubewarden Policy based on the
+[psp-capabilities](https://github.com/kubewarden/psp-capabilities)
+policy:
+
+```yaml
+apiVersion: policies.kubewarden.io/v1alpha2
+kind: ClusterAdmissionPolicy
+metadata:
+ name: psp-capabilities
+spec:
+ module: registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.3
+ rules:
+ - apiGroups: [""]
+ apiVersions: ["v1"]
+ resources: ["pods"]
+ operations:
+ - CREATE
+ - UPDATE
+ mutating: true
+ settings:
+ allowed_capabilities:
+ - CHOWN
+ required_drop_capabilities:
+ - NET_ADMIN
+```
+
+This `ClusterAdmissionPolicy` evaluates all the `CREATE` and `UPDATE` operations
+performed against Pods. The homepage of this policy provides more insights about
+how this policy behaves.
+
+Creating the resource inside Kubernetes is sufficient to enforce the policy:
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubewarden/kubewarden-controller/main/config/samples/policies_v1alpha2_clusteradmissionpolicy.yaml
+```
+
+### Remove your first admission policy
+
+You can delete the admission policy you just created:
+
+```console
+kubectl delete clusteradmissionpolicy psp-capabilities
+kubectl patch clusteradmissionpolicy psp-capabilities -p '{"metadata":{"finalizers":null}}' --type=merge
+```
+
+## Learn more
+
+The [documentation](https://docs.kubewarden.io) provides more insights
+about how the project works and how to use it.
+
+# Software bill of materials & provenance
+
+The Kubewarden controller has its software bill of materials (SBOM) and build
+[provenance](https://slsa.dev/spec/v1.0/provenance) information published with every
+release. The SBOM follows the [SPDX](https://spdx.dev/) format, and the provenance the
+[SLSA](https://slsa.dev/provenance/v0.2#schema) schema.
+Both files are generated by [Docker
+Buildx](https://docs.docker.com/build/metadata/attestations/) during the build
+process, stored in the container registry together with the container image,
+and uploaded to the release page.
+
+You can find them together with the signature and certificate used to sign them
+in the [release
+assets](https://github.com/kubewarden/kubewarden-controller/releases), and
+attached to the image as JSON-encoded documents following the [in-toto SPDX
+predicate](https://github.com/in-toto/attestation/blob/main/spec/predicates/spdx.md)
+format. You can obtain them with
+[`crane`](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md)
+or [`docker buildx imagetools
+inspect`](https://docs.docker.com/reference/cli/docker/buildx/imagetools/inspect).
+
+You can verify the container image with:
+
+```shell
+cosign verify-blob --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
+ --certificate-identity="https://github.com/${{github.repository_owner}}/kubewarden-controller/.github/workflows/attestation.yml@" \
+ --bundle kubewarden-controller-attestation-amd64-provenance-cosign.bundle \
+ kubewarden-controller-attestation-amd64-provenance.json
+```
+
+To verify the attestation manifest and its layer signatures:
+
+```shell
+cosign verify --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
+ --certificate-identity="https://github.com/${{github.repository_owner}}/kubewarden-controller/.github/workflows/attestation.yml@" \
+ ghcr.io/kubewarden/kubewarden-controller@sha256:1abc0944378d9f3ee2963123fe84d045248d320d76325f4c2d4eb201304d4c4e
+```
+
+That sha256 hash is the digest of the attestation manifest or its layers.
+Therefore, you need to find this hash in the registry using the UI or tools
+like `crane`. For example, the following command will show you all the
+attestation manifests of the `latest` tag:
+
+```shell
+crane manifest ghcr.io/kubewarden/kubewarden-controller:latest | jq '.manifests[] | select(.annotations["vnd.docker.reference.type"]=="attestation-manifest")'
+{
+ "mediaType": "application/vnd.oci.image.manifest.v1+json",
+ "digest": "sha256:fc01fa6c82cffeffd23b737c7e6b153357d1e499295818dad0c7d207f64e6ee8",
+ "size": 1655,
+ "annotations": {
+ "vnd.docker.reference.digest": "sha256:611d499ec9a26034463f09fa4af4efe2856086252d233b38e3fc31b0b982d369",
+ "vnd.docker.reference.type": "attestation-manifest"
+ },
+ "platform": {
+ "architecture": "unknown",
+ "os": "unknown"
+ }
+}
+{
+ "mediaType": "application/vnd.oci.image.manifest.v1+json",
+ "digest": "sha256:e0cd736c2241407114256e09a4cdeef55eb81dcd374c5785c4e5c9362a0088a2",
+ "size": 1655,
+ "annotations": {
+ "vnd.docker.reference.digest": "sha256:03e5db83a25ea2ac498cf81226ab8db8eb53a74a2c9102e4a1da922d5f68b70f",
+ "vnd.docker.reference.type": "attestation-manifest"
+ },
+ "platform": {
+ "architecture": "unknown",
+ "os": "unknown"
+ }
+}
+```
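Extracting that digest can be scripted. A minimal sketch, assuming `jq` is available, run against a saved copy of the image index (sample data trimmed from the output above):

```shell
# Save a trimmed copy of the image index, then extract the digests of
# all attestation manifests with jq.
cat > /tmp/index.json <<'EOF'
{
  "manifests": [
    {
      "digest": "sha256:fc01fa6c82cffeffd23b737c7e6b153357d1e499295818dad0c7d207f64e6ee8",
      "annotations": { "vnd.docker.reference.type": "attestation-manifest" }
    },
    {
      "digest": "sha256:611d499ec9a26034463f09fa4af4efe2856086252d233b38e3fc31b0b982d369",
      "annotations": { "vnd.docker.reference.type": "other" }
    }
  ]
}
EOF
jq -r '.manifests[]
       | select(.annotations["vnd.docker.reference.type"] == "attestation-manifest")
       | .digest' /tmp/index.json
```

In live use, pipe `crane manifest …` into the same `jq` filter instead of reading the saved file.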
+
+Then you can use the `digest` field to verify the attestation manifest and its
+layers' signatures.
+
+```shell
+cosign verify --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
+ --certificate-identity="https://github.com/${{github.repository_owner}}/kubewarden-controller/.github/workflows/attestation.yml@" \
+ ghcr.io/kubewarden/kubewarden-controller@sha256:fc01fa6c82cffeffd23b737c7e6b153357d1e499295818dad0c7d207f64e6ee8
+
+crane manifest ghcr.io/kubewarden/kubewarden-controller@sha256:fc01fa6c82cffeffd23b737c7e6b153357d1e499295818dad0c7d207f64e6ee8
+{
+ "schemaVersion": 2,
+ "mediaType": "application/vnd.oci.image.manifest.v1+json",
+ "config": {
+ "mediaType": "application/vnd.oci.image.config.v1+json",
+ "digest": "sha256:eda788a0e94041a443eca7286a9ef7fce40aa2832263f7d76c597186f5887f6a",
+ "size": 463
+ },
+ "layers": [
+ {
+ "mediaType": "application/vnd.in-toto+json",
+ "digest": "sha256:563689cdee407ab514d057fe2f8f693189279e10bfe4f31f277e24dee00793ea",
+ "size": 94849,
+ "annotations": {
+ "in-toto.io/predicate-type": "https://spdx.dev/Document"
+ }
+ },
+ {
+ "mediaType": "application/vnd.in-toto+json",
+ "digest": "sha256:7ce0572628290373e17ba0bbb44a9ec3c94ba36034124931d322ca3fbfb768d9",
+ "size": 7363045,
+ "annotations": {
+ "in-toto.io/predicate-type": "https://spdx.dev/Document"
+ }
+ },
+ {
+ "mediaType": "application/vnd.in-toto+json",
+ "digest": "sha256:dacf511c5ec7fd87e8692bd08c3ced2c46f4da72e7271b82f1b3720d5b0a8877",
+ "size": 71331,
+ "annotations": {
+ "in-toto.io/predicate-type": "https://spdx.dev/Document"
+ }
+ },
+ {
+ "mediaType": "application/vnd.in-toto+json",
+ "digest": "sha256:594da3e8bd8c6ee2682b0db35857933f9558fd98ec092344a6c1e31398082f4d",
+ "size": 980,
+ "annotations": {
+ "in-toto.io/predicate-type": "https://spdx.dev/Document"
+ }
+ },
+ {
+ "mediaType": "application/vnd.in-toto+json",
+ "digest": "sha256:7738d8d506c6482aaaef1d22ed920468ffaf4975afd28f49bb50dba2c20bf2ca",
+ "size": 13838,
+ "annotations": {
+ "in-toto.io/predicate-type": "https://slsa.dev/provenance/v0.2"
+ }
+ }
+ ]
+}
+
+cosign verify --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
+ --certificate-identity="https://github.com/${{github.repository_owner}}/kubewarden-controller/.github/workflows/attestation.yml@" \
+ ghcr.io/kubewarden/kubewarden-controller@sha256:594da3e8bd8c6ee2682b0db35857933f9558fd98ec092344a6c1e31398082f4d
+```
+
+Note that each attestation manifest (one per architecture) has its own layers.
+Each layer is a separate SBOM (SPDX) or provenance file generated by Docker
+Buildx during the multi-stage build process. You can also use `crane` to
+download an attestation file:
+
+```shell
+crane blob ghcr.io/kubewarden/kubewarden-controller@sha256:7738d8d506c6482aaaef1d22ed920468ffaf4975afd28f49bb50dba2c20bf2ca
+```
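+As a sketch of how the digest above was located: the snippet below filters the
+attestation manifest's layers by their `in-toto.io/predicate-type` annotation.
+It is shown in Python on a trimmed-down copy of the manifest printed earlier;
+a `jq` filter over the `crane manifest` output works just as well.
+
+```python
+import json
+
+# Trimmed-down copy of the attestation manifest shown above.
+manifest = json.loads("""
+{
+  "layers": [
+    {"mediaType": "application/vnd.in-toto+json",
+     "digest": "sha256:563689cdee407ab514d057fe2f8f693189279e10bfe4f31f277e24dee00793ea",
+     "annotations": {"in-toto.io/predicate-type": "https://spdx.dev/Document"}},
+    {"mediaType": "application/vnd.in-toto+json",
+     "digest": "sha256:7738d8d506c6482aaaef1d22ed920468ffaf4975afd28f49bb50dba2c20bf2ca",
+     "annotations": {"in-toto.io/predicate-type": "https://slsa.dev/provenance/v0.2"}}
+  ]
+}
+""")
+
+def layer_digests(manifest, predicate_type):
+    """Return the digests of every layer carrying the given in-toto predicate type."""
+    return [layer["digest"]
+            for layer in manifest["layers"]
+            if layer.get("annotations", {}).get("in-toto.io/predicate-type") == predicate_type]
+
+# The SLSA provenance layer is the one to pass to `crane blob`.
+provenance = layer_digests(manifest, "https://slsa.dev/provenance/v0.2")
+print(provenance[0])
+```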
+
+## Security disclosure
+
+See [SECURITY.md](https://github.com/kubewarden/community/blob/main/SECURITY.md) on the kubewarden/community repo.
+
+# Changelog
+
+See [GitHub Releases content](https://github.com/kubewarden/kubewarden-controller/releases).
diff --git a/data/readmes/kudo-v0190.md b/data/readmes/kudo-v0190.md
new file mode 100644
index 0000000..2cd324a
--- /dev/null
+++ b/data/readmes/kudo-v0190.md
@@ -0,0 +1,42 @@
+# KUDO - README (v0.19.0)
+
+**Repository**: https://github.com/kudobuilder/kudo
+**Version**: v0.19.0
+
+---
+
+# KUDO
+
+
+
+[](https://circleci.com/gh/kudobuilder/kudo)
+
+Kubernetes Universal Declarative Operator (KUDO) provides a declarative approach to building production-grade Kubernetes Operators covering the entire application lifecycle.
+
+## Getting Started
+
+Please refer to the [getting started guide](https://kudo.dev/docs/) documentation.
+
+## Resources
+
+* Slack Channel: [#kudo](https://kubernetes.slack.com/archives/CG3HTFCMV)
+* Google Group: [kudobuilder@googlegroups.com](https://groups.google.com/forum/#!forum/kudobuilder)
+* Planned Work: [Sprint Dashboard](https://github.com/orgs/kudobuilder/projects/1)
+
+## Community Meetings
+
+We have open community meetings every 2 weeks on Thursday at 9:00 a.m. PT. (17:00 UTC)
+
+* [Agenda and Notes](https://docs.google.com/document/d/1UqgtCMUHSsOohZYF8K7zX8WcErttuMSx7NbvksIbZgg)
+* [Zoom Meeting](https://d2iq.zoom.us/j/443128842)
+
+
+## Community, Events, Discussion, Contribution, and Support
+
+Learn more on how to engage with the KUDO community on the [community page](https://kudo.dev/community/).
+
+## Getting Involved
+
+* Read the [code of conduct](code-of-conduct.md)
+* Read the [contribution guide](CONTRIBUTING.md)
+* Details on running and debugging locally read [development guide](development.md)
diff --git a/data/readmes/kudu-1171.md b/data/readmes/kudu-1171.md
new file mode 100644
index 0000000..e46e335
--- /dev/null
+++ b/data/readmes/kudu-1171.md
@@ -0,0 +1,490 @@
+# Kudu - README (1.17.1)
+
+**Repository**: https://github.com/apache/kudu
+**Version**: 1.17.1
+**Branch**: 1.17.1
+
+---
+
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+= Kudu Developer Documentation
+
+== Building and installing Kudu
+
+Follow the steps in the
+https://kudu.apache.org/docs/installation.html#build_from_source[documentation]
+to build and install Kudu from source.
+
+=== Building Kudu out of tree
+
+A single Kudu source tree may be used for multiple builds, each with its
+own build directory. Build directories may be placed anywhere in the
+filesystem with the exception of the root directory of the source tree. The
+Kudu build is invoked with a working directory of the build directory
+itself, so you must ensure it exists (i.e. create it with _mkdir -p_). It's
+recommended to place all build directories within the _build_ subdirectory;
+_build/latest_ will be symlinked to the most recently created one.
+
+The rest of this document assumes the build directory
+_<root of the source tree>/build/debug_.
+
+=== Automatic rebuilding of dependencies
+
+The script `thirdparty/build-if-necessary.sh` is invoked by cmake, so
+new thirdparty dependencies added by other developers will be downloaded
+and built automatically in subsequent builds if necessary.
+
+To disable the automatic invocation of `build-if-necessary.sh`, set the
+`NO_REBUILD_THIRDPARTY` environment variable:
+
+[source,bash]
+----
+$ cd build/debug
+$ NO_REBUILD_THIRDPARTY=1 cmake ../..
+----
+
+This can be particularly useful when trying to run tools like `git bisect`
+between two commits which may have different dependencies.
+
+
+=== Building Kudu itself
+
+[source,bash]
+----
+# Add <root of the source tree>/thirdparty/installed/common/bin to your $PATH
+# before other parts of $PATH that may contain cmake, such as /usr/bin
+# For example: "export PATH=$HOME/git/kudu/thirdparty/installed/common/bin:$PATH"
+# if using bash.
+$ mkdir -p build/debug
+$ cd build/debug
+$ cmake ../..
+$ make -j8 # or whatever level of parallelism your machine can handle
+----
+
+The build artifacts, including the test binaries, will be stored in
+_build/debug/bin/_.
+
+To omit the Kudu unit tests during the build, add -DNO_TESTS=1 to the
+invocation of cmake. For example:
+
+[source,bash]
+----
+$ cd build/debug
+$ cmake -DNO_TESTS=1 ../..
+----
+
+== Running unit/functional tests
+
+To run the Kudu unit tests, you can use the `ctest` command from within the
+_build/debug_ directory:
+
+[source,bash]
+----
+$ cd build/debug
+$ ctest -j8
+----
+
+This command will report any tests that failed, and the test logs will be
+written to _build/debug/test-logs_.
+
+Individual tests can be run by directly invoking the test binaries in
+_build/debug/bin_. Since Kudu uses the Google C++ Test Framework (gtest),
+specific test cases can be run with gtest flags:
+
+[source,bash]
+----
+# List all the tests within a test binary, then run a single test
+$ build/debug/bin/tablet-test --gtest_list_tests
+$ build/debug/bin/tablet-test --gtest_filter=TestTablet/9.TestFlush
+----
+
+gtest also allows more complex filtering patterns. See the upstream
+documentation for more details.
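+For intuition, gtest filters are `:`-separated glob patterns, with an optional
+negative section introduced by `-`. A rough Python model of the matching rules
+(an illustrative sketch, not gtest's actual code):
+
+[source,python]
+----
+from fnmatch import fnmatchcase
+
+def gtest_filter_matches(filter_expr, test_name):
+    """Model of --gtest_filter: ':'-separated glob patterns,
+    with an optional negative section introduced by '-'."""
+    positive, _, negative = filter_expr.partition('-')
+    pos = [p for p in positive.split(':') if p] or ['*']
+    neg = [p for p in negative.split(':') if p]
+    return (any(fnmatchcase(test_name, p) for p in pos)
+            and not any(fnmatchcase(test_name, p) for p in neg))
+
+# Select every TestFlush instantiation, then exclude one of them.
+assert gtest_filter_matches('TestTablet/*.TestFlush', 'TestTablet/9.TestFlush')
+assert not gtest_filter_matches('*.TestFlush:-TestTablet/9.*', 'TestTablet/9.TestFlush')
+----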
+
+=== Running tests with the clang AddressSanitizer enabled
+
+
+AddressSanitizer is a nice clang feature which can detect many types of memory
+errors. The Jenkins setup for Kudu runs these tests automatically on a regular
+basis, but if you make large changes it can be a good idea to run them locally
+before pushing. To do so, you'll need to build using `clang`:
+
+[source,bash]
+----
+$ mkdir -p build/asan
+$ cd build/asan
+$ CC=../../thirdparty/clang-toolchain/bin/clang \
+ CXX=../../thirdparty/clang-toolchain/bin/clang++ \
+ ../../thirdparty/installed/common/bin/cmake \
+ -DKUDU_USE_ASAN=1 ../..
+$ make -j8
+$ ctest -j8
+----
+
+The tests will run significantly slower than without ASAN enabled, and if any
+memory error occurs, the test that triggered it will fail. You can then use a
+command like:
+
+
+[source,bash]
+----
+$ cd build/asan
+$ ctest -R failing-test
+----
+
+to run just the failed test.
+
+NOTE: For more information on AddressSanitizer, please see the
+https://clang.llvm.org/docs/AddressSanitizer.html[ASAN web page].
+
+=== Running tests with the clang Undefined Behavior Sanitizer (UBSAN) enabled
+
+
+Similar to the above, you can use a special set of clang flags to enable the Undefined
+Behavior Sanitizer. This will generate errors on certain pieces of code which may
+not themselves crash but rely on behavior which isn't defined by the C++ standard
+(and thus are likely bugs). To enable UBSAN, follow the same directions as for
+ASAN above, but pass the `-DKUDU_USE_UBSAN=1` flag to the `cmake` invocation.
+
+In order to get a stack trace from UBSan, you can use gdb on the failing test, and
+set a breakpoint as follows:
+
+----
+(gdb) b __ubsan::Diag::~Diag
+----
+
+Then, when the breakpoint fires, gather a backtrace as usual using the `bt` command.
+
+=== Running tests with ThreadSanitizer enabled
+
+ThreadSanitizer (TSAN) is a feature of recent Clang and GCC compilers which can
+detect improperly synchronized access to data along with many other threading
+bugs. To enable TSAN, pass `-DKUDU_USE_TSAN=1` to the `cmake` invocation,
+recompile, and run tests. For example:
+
+[source,bash]
+----
+$ mkdir -p build/tsan
+$ cd build/tsan
+$ CC=../../thirdparty/clang-toolchain/bin/clang \
+ CXX=../../thirdparty/clang-toolchain/bin/clang++ \
+ ../../thirdparty/installed/common/bin/cmake \
+ -DKUDU_USE_TSAN=1 ../..
+$ make -j8
+$ ctest -j8
+----
+
+TSAN may truncate a few lines of the stack trace when reporting where the error
+is. This can be bewildering. It's documented for TSANv1 here:
+https://code.google.com/p/data-race-test/wiki/ThreadSanitizerAlgorithm
+It is not mentioned in the documentation for TSANv2, but has been observed.
+In order to find out what is _really_ happening, set a breakpoint on the TSAN
+report in GDB using the following incantation:
+
+[source,bash]
+----
+$ gdb -ex 'set disable-randomization off' -ex 'b __tsan::PrintReport' ./some-test
+----
+
+
+=== Generating code coverage reports
+
+
+In order to generate a code coverage report, you must use the following flags:
+
+[source,bash]
+----
+$ mkdir -p build/coverage
+$ cd build/coverage
+$ CC=../../thirdparty/clang-toolchain/bin/clang \
+ CXX=../../thirdparty/clang-toolchain/bin/clang++ \
+ cmake -DKUDU_GENERATE_COVERAGE=1 ../..
+$ make -j4
+$ ctest -j4
+----
+
+This will generate the code coverage files with extensions .gcno and .gcda. You can then
+use a tool like `gcovr` or `llvm-cov gcov` to visualize the results. For example, using
+gcovr:
+
+[source,bash]
+----
+$ cd build/coverage
+$ mkdir cov_html
+$ ../../thirdparty/installed/common/bin/gcovr \
+ --gcov-executable=$(pwd)/../../build-support/llvm-gcov-wrapper \
+ --html --html-details -o cov_html/coverage.html
+----
+
+Then open `cov_html/coverage.html` in your web browser.
+
+=== Running lint checks
+
+Kudu uses cpplint.py from Google to enforce coding style guidelines. You can run the
+lint checks via cmake using the `ilint` target:
+
+[source,bash]
+----
+$ make ilint
+----
+
+This will scan any file which is dirty in your working tree, or changed since the last
+gerrit-integrated upstream change in your git log. If you really want to do a full
+scan of the source tree, you may use the `lint` target instead.
+
+=== Running clang-tidy checks
+
+Kudu also uses the clang-tidy tool from LLVM to enforce coding style
+guidelines. You can run the tidy checks via cmake using the `tidy` target:
+
+[source,bash]
+----
+$ make tidy
+----
+
+This will scan any changes in the latest commit in the local tree. At the time
+of writing, it will not scan any changes that are not locally committed.
+
+=== Running include-what-you-use (IWYU) checks
+
+Kudu uses the https://github.com/include-what-you-use/include-what-you-use[IWYU]
+tool to keep the set of headers in the C++ source files consistent. For more
+information on what _consistent_ means, see
+https://github.com/include-what-you-use/include-what-you-use/blob/master/docs/WhyIWYU.md[_Why IWYU_].
+
+You can run the IWYU checks via cmake using the `iwyu` target:
+
+[source,bash]
+----
+$ make iwyu
+----
+
+This will scan any file which is dirty in your working tree, or changed since the last
+gerrit-integrated upstream change in your git log.
+
+If you want to run against a specific file, or against all files, you can use the
+`iwyu.py` script:
+
+[source,bash]
+----
+$ ./build-support/iwyu.py
+----
+
+See the output of `iwyu.py --help` for details on various modes of operation.
+
+[[building-docs]]
+=== Building Kudu documentation
+
+Kudu's documentation is written in asciidoc and lives in the _docs_ subdirectory.
+
+To build the documentation (this is primarily useful if you would like to
+inspect your changes before submitting them to Gerrit), use the `docs` target:
+
+[source,bash]
+----
+$ make docs
+----
+
+This will invoke `docs/support/scripts/make_docs.sh`, which requires
+`asciidoctor` to process the doc sources and produce the HTML documentation,
+emitted to _build/docs_. This script requires `ruby` and `gem` to be installed
+on the system path, and will attempt to install `asciidoctor` and other related
+dependencies into `$HOME/.gems` using https://bundler.io/[bundler].
+
+Some of the dependencies require a recent version of Ruby. To build the
+documentation on a system that comes with an older Ruby version (such as Ruby
+2.0 on CentOS 7), it is easiest to use https://github.com/rbenv/rbenv[rbenv] to
+install Ruby 2.7.
+
+WARNING: As the default values for some configuration options differ between Mac
+and Linux (e.g. file vs log block manager) and the configuration reference is
+generated by running the binaries with `-help`, the documentation *MUST NOT* be
+generated on Mac for publishing purposes, only for verification.
+
+NOTE: Java API docs can only be built on Java 8 due to Javadoc compatibility
+issues.
+
+[[updating-the-site]]
+=== Updating the Kudu web site documentation
+
+To update the documentation that is integrated into the Kudu web site,
+including Java and C++ client API documentation, you may run the following
+command:
+
+[source,bash]
+----
+$ ./docs/support/scripts/make_site.sh
+----
+
+This script will use your local Git repository to check out a shallow clone of
+the 'gh-pages' branch and use `make_docs.sh` to generate the HTML documentation
+for the web site. It will also build the Javadoc and Doxygen documentation.
+These will be placed inside the checked-out web site, along with a tarball
+containing only the generated documentation (the _docs/_ and _apidocs/_ paths
+on the web site). Everything can be found in the _build/site_ subdirectory.
+
+To build the C++ Client API docs you need Doxygen 1.8.19 or later, which is
+fairly new, so you might need to
+https://www.doxygen.nl/manual/install.html#install_src_unix[build it from
+source]. To build it on RHEL/CentOS you'll also need
+https://www.softwarecollections.org/en/scls/rhscl/devtoolset-8/[devtoolset], as
+Doxygen uses {cpp}14 since 1.8.17.
+
+You can proceed to commit the changes in the pages repository and send a code
+review for your changes. In the future, this step may be automated whenever
+changes are checked into the main Kudu repository.
+
+After making changes to the `gh-pages` branch, follow the instructions below
+when you want to deploy those changes to the live web site.
+
+WARNING: The site *MUST NOT* be built on Mac for publishing purposes, only for
+verification. See the warning in <<building-docs>> for details.
+
+=== Deploying changes to the Apache Kudu web site
+
+When the documentation is updated on the `gh-pages` branch, or when other web
+site files on that branch are updated, the following procedure can be used to
+deploy the changes to the official Apache Kudu web site. Committers have
+permissions to publish changes to the live site.
+
+[source,bash]
+----
+git checkout gh-pages
+git fetch origin
+git merge --ff-only origin/gh-pages
+./site_tool proof # Check for broken links (takes a long time to run)
+./site_tool publish # Generate the static HTML for the site.
+cd _publish && git push # Update the live web site.
+----
+
+NOTE: sometimes, due to glitches with the ASF gitpubsub system, a large commit,
+such as a change to the docs, will not get mirrored to the live site. Adding an
+empty commit and doing another git push tends to fix the problem. See the git
+log for examples of people doing this in the past.
+
+== Improving build times
+
+=== Caching build output
+
+The kudu build is compatible with ccache. Simply install your distro's _ccache_ package,
+prepend _/usr/lib/ccache_ to your `PATH`, and watch your object files get cached. Link
+times won't be affected, but you will see a noticeable improvement in compilation
+times. You may also want to increase the size of your cache using `ccache -M new_size`.
+
+=== Improving linker speed
+
+One of the major time sinks in the Kudu build is linking. GNU ld is historically
+quite slow at linking large C++ applications. The alternative linker `gold` is much
+better at it. It's part of the `binutils` package in modern distros (try `binutils-gold`
+in older ones). To enable it, simply repoint the _/usr/bin/ld_ symlink from `ld.bfd` to
+`ld.gold`.
+
+Note that gold doesn't handle weak symbol overrides properly (see
+https://sourceware.org/bugzilla/show_bug.cgi?id=16979[this bug report] for details).
+As such, it cannot be used with shared objects (see below) because it'll cause
+tcmalloc's alternative malloc implementation to be ignored.
+
+=== Building Kudu with dynamic linking
+
+Kudu can be built into shared objects, which, when used with ccache, can result in a
+dramatic build time improvement in the steady state. Even after a `make clean` in the build
+tree, all object files can be served from ccache. By default, `debug` and `fastdebug` will
+use dynamic linking, while other build types will use static linking. To enable
+dynamic linking explicitly, run:
+
+[source,bash]
+----
+$ cmake -DKUDU_LINK=dynamic ../..
+----
+
+Subsequent builds will create shared objects instead of archives and use them when
+linking the kudu binaries and unit tests. The full range of options for `KUDU_LINK` are
+`static`, `dynamic`, and `auto`. The default is `auto` and only the first letter
+matters for the purpose of matching.
+
+NOTE: Static linking is incompatible with TSAN.
+
+
+== Developing Kudu in Eclipse
+
+Eclipse can be used as an IDE for Kudu. To generate Eclipse project files, run:
+
+[source,bash]
+----
+$ mkdir -p <sibling directory of the source tree>
+$ cd <sibling directory of the source tree>
+$ rm -rf CMakeCache.txt CMakeFiles/
+$ cmake -G "Eclipse CDT4 - Unix Makefiles" -DCMAKE_CXX_COMPILER_ARG1=-std=c++17 <path to the source tree>
+----
+
+When the Eclipse generator is run in a subdirectory of the source tree, the
+resulting project is incomplete. That's why it's recommended to use a directory
+that's a sibling to the source tree. See [1] for more details.
+
+It's critical that _CMakeCache.txt_ be removed prior to running the generator,
+otherwise the extra Eclipse generator logic (the CMakeFindEclipseCDT4.make module)
+won't run and standard system includes will be missing from the generated project.
+
+Thanks to [2], the Eclipse generator ignores the `-std=c++17` definition and we must
+add it manually on the command line via `CMAKE_CXX_COMPILER_ARG1`.
+
+By default, the Eclipse CDT indexer will index everything under the _kudu/_
+source tree. It tends to choke on certain complicated source files within
+_thirdparty_. In CDT 8.7.0, the indexer will generate so many errors that it'll
+exit early, causing many spurious syntax errors to be highlighted. In older
+versions of CDT, it'll spin forever.
+
+Either way, these complicated source files must be excluded from indexing. To do
+this, right click on the project in the Project Explorer and select Properties. In
+the dialog box, select "C/C++ Project Paths", select the Source tab, highlight
+"Exclusion filter: (None)", and click "Edit...". In the new dialog box, click
+"Add Multiple...". Select every subdirectory inside _thirdparty_ except _installed_.
+Click OK all the way out and rebuild the project index by right clicking the project
+in the Project Explorer and selecting Index -> Rebuild.
+
+With this exclusion, the only false positives (shown as "red squigglies") that
+CDT presents appear to be in atomicops functions (`NoBarrier_CompareAndSwap` for
+example).
+
+Another way to tame indexing of the enormous source tree in Eclipse is to
+remove unnecessary source code from the _thirdparty/src_ directory right after
+building and before opening the project in Eclipse. You can remove all source
+code except the hadoop, hive and sentry directories.
+Additionally, if you encounter red squigglies in the code editor due to
+Eclipse's poor macro discovery, you may need to provide Eclipse with the
+preprocessor macro values it could not extract during auto-discovery:
+go to "Project Explorer" -> "Properties" -> "C/C++ General" ->
+"Preprocessor Include Paths, Macros, etc" -> "Entries" tab -> Language "GNU C++" ->
+Setting Entries "CDT User Setting Entries" -> button "Add"
+-> choose "Preprocessor Macro". [3]
+
+Another Eclipse annoyance stems from the "[Targets]" linked resource that Eclipse
+generates for each unit test. These are probably used for building within Eclipse,
+but one side effect is that nearly every source file appears in the indexer twice:
+once via a target and once via the raw source file. To fix this, simply delete the
+[Targets] linked resource via the Project Explorer. Doing this should have no effect
+on writing code, though it may affect your ability to build from within Eclipse.
+
+1. https://cmake.org/pipermail/cmake-developers/2011-November/014153.html
+2. https://public.kitware.com/Bug/view.php?id=15102
+3. https://www.eclipse.org/community/eclipse_newsletter/2013/october/article4.php
+
+== Export Control Notice
+
+This distribution uses cryptographic software and may be subject to export controls.
+Please refer to docs/export_control.adoc for more information.
diff --git a/data/readmes/kuma-v2125.md b/data/readmes/kuma-v2125.md
new file mode 100644
index 0000000..8a69e5a
--- /dev/null
+++ b/data/readmes/kuma-v2125.md
@@ -0,0 +1,168 @@
+# Kuma - README (v2.12.5)
+
+**Repository**: https://github.com/kumahq/kuma
+**Version**: v2.12.5
+
+---
+
+[![][kuma-logo]][kuma-url]
+
+**Builds**
+
+
+
+**Code quality**
+
+[](https://goreportcard.com/report/github.com/kumahq/kuma)
+[](https://www.bestpractices.dev/projects/5576)
+[](https://securityscorecards.dev/viewer/?uri=github.com/kumahq/kuma)
+[](https://github.com/kumahq/kuma/blob/master/LICENSE)
+[](https://clomonitor.io/projects/cncf/kuma)
+
+**Releases**
+
+[](https://hub.docker.com/u/kumahq)
+[](https://artifacthub.io/packages/search?repo=kuma)
+[](https://cloudsmith.io/~kong/repos/kuma-binaries-release/packages)
+
+**Social**
+
+[](https://join.slack.com/t/kuma-mesh/shared_invite/zt-1rcll3y6t-DkV_CAItZUoy0IvCwQ~jlQ)
+[](https://twitter.com/intent/follow?screen_name=KumaMesh)
+
+Kuma is a modern Envoy-based service mesh that can run on every cloud, in a single or multi-zone capacity, across both Kubernetes and VMs. Thanks to its broad universal workload support, combined with native support for Envoy as its data plane proxy technology (but with no Envoy expertise required), Kuma provides modern L4-L7 service connectivity, discovery, security, observability, routing and more across any service on any platform, databases included.
+
+Easy to use, with built-in service mesh policies for security, traffic control, discovery, observability and more, Kuma ships with an advanced multi-zone and multi-mesh support that automatically enables cross-zone communication across different clusters and clouds, and automatically propagates service mesh policies across the infrastructure. Kuma is currently being adopted by enterprise organizations around the world to support distributed service meshes across the application teams, on both Kubernetes and VMs.
+
+Originally created and donated by Kong, Kuma is today a CNCF (Cloud Native Computing Foundation) Sandbox project and therefore available with the same openness and neutrality as every other CNCF project. Kuma has been engineered to be powerful yet simple to use, reducing the complexity of running a service mesh across every organization with unique capabilities like multi-zone support, multi-mesh support, and a gradual and intuitive learning curve.
+
+Users that require enterprise-level support for Kuma can explore the [enterprise offerings](https://kuma.io/enterprise/) available.
+
+Built by Envoy contributors at Kong 🦍.
+
+[![][kuma-gui]][kuma-url]
+
+## Get Started
+
+- [Installation](https://kuma.io/install)
+- [Documentation](https://kuma.io/docs)
+
+## Get Involved
+
+- [Join the Kuma Slack](https://join.slack.com/t/kuma-mesh/shared_invite/zt-1wi2o3uok-x0KmKJiSzjQy7NgNkC8IAA). The #kuma channel in the [CNCF Slack](https://slack.cncf.io/) exists but is not actively used.
+- Attend a [Community Call](https://docs.google.com/document/d/1HgnK3wJIEg8uFlivdrhrPZYWTpElWWu3mhFDXj-bMWQ/edit?usp=sharing) monthly on the second Wednesday. [Add to Calendar](https://calendar.google.com/calendar/u/0/r/eventedit/copy/YzdmZmxtY2FuNmljMTM3cTZqZDZ2ZzNlZjNfMjAyMjAyMDlUMTYzMDAwWiBrb25naHEuY29tXzFtYTk2c3NkZ2dmaDlmcnJjczk3ZXB1MzhvQGc/dGFyeW4uam9uZXNAa29uZ2hxLmNvbQ?scp=ALL&sf=true)
+- Follow us on [Twitter](https://twitter.com/kumamesh)
+- Read the [blog](https://kuma.io/blog/)
+
+**Need help?** In your journey with Kuma you can get in touch with the broader community via the official [community channels](https://kuma.io/community).
+
+## Summary
+
+- [**Why Kuma?**](#why-kuma)
+- [**Features**](#features)
+- [**Distributions**](#distributions)
+- [**Development**](#development)
+- [**Enterprise Demo**](#enterprise-demo)
+- [**License**](#license)
+
+## Why Kuma?
+
+Built with enterprise use-cases in mind, Kuma is a universal service mesh that supports both Kubernetes and VMs deployments across single and multi-zone setups, with turnkey service mesh policies to get up and running easily while supporting multi-tenancy and multi-mesh on the same control plane. Kuma is a CNCF Sandbox project.
+
+Unlike other service mesh solutions, Kuma innovates the service mesh ecosystem by providing ease of use, native support for both Kubernetes and VMs on both the control plane and the data plane, multi-mesh support that can cross every boundary including Kubernetes namespaces, out of the box multi-zone and multi-cluster support with automatic policy synchronization and connectivity, zero-trust, observability and compliance in one-click, support for custom workload attributes that can be leveraged to accelerate PCI and GDPR compliance, and much more.
+
+Below is an example of using Kuma's attributes to route all traffic generated by any PCI-compliant service in Switzerland, to only be routed within the Swiss region:
+
+```yaml
+apiVersion: kuma.io/v1alpha1
+kind: TrafficRoute
+mesh: default
+metadata:
+ name: ch-pci-compliance
+spec:
+ sources:
+ - match:
+ kuma.io/service: '*'
+ kuma.io/zone: 'CH'
+ PCI: true
+ destinations:
+ - match:
+ kuma.io/service: '*'
+ conf:
+ loadBalancer:
+ roundRobin: {}
+ split:
+ - weight: 100
+ destination:
+ kuma.io/service: '*'
+ kuma.io/zone: 'CH'
+```
+
+The above example can also be applied on virtual machines via the built-in `kumactl` CLI.
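+Conceptually, each `match` block above is a tag selector: it selects a data
+plane proxy when every key it lists is present in the proxy's tags and the
+values match, with `*` as a wildcard. A minimal Python model of that selection
+rule (an illustrative sketch, not Kuma's actual implementation):
+
+```python
+from fnmatch import fnmatchcase
+
+def selector_matches(selector, dataplane_tags):
+    """Illustrative model of Kuma tag selection: every key in the selector
+    must be present on the data plane proxy, and its value must match,
+    with '*' acting as a wildcard."""
+    return all(
+        key in dataplane_tags and fnmatchcase(str(dataplane_tags[key]), str(value))
+        for key, value in selector.items()
+    )
+
+source = {"kuma.io/service": "*", "kuma.io/zone": "CH", "PCI": True}
+# A PCI-compliant service in the Swiss zone is selected...
+print(selector_matches(source, {"kuma.io/service": "billing", "kuma.io/zone": "CH", "PCI": True}))
+# ...while the same service in another zone is not.
+print(selector_matches(source, {"kuma.io/service": "billing", "kuma.io/zone": "US", "PCI": True}))
+```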
+
+With Kuma, our application teams can stop building connectivity management code in every service and every application, and they can rely on modern service mesh infrastructure instead to improve their efficiency and the overall agility of the organization:
+
+[![][kuma-benefits]][kuma-url]
+
+## Features
+
+* **Universal Control Plane**: Easy to use, distributed, runs anywhere on both Kubernetes and VM/Bare Metal.
+* **Lightweight Data Plane**: Powered by Envoy to process any L4/L7 traffic, with automatic Envoy bootstrapping.
+* **Automatic DP Injection**: No code changes required in K8s. Easy YAML specification for VM and Bare Metal deployments.
+* **Multi-Mesh**: To set up multiple isolated Meshes in one cluster and one Control Plane, lowering ops cost.
+* **Single and Multi Zone**: To deploy a service mesh that is cross-platform, cross-cloud and cross-cluster.
+* **Automatic Discovery & Ingress**: With built-in service discovery and connectivity across single and multi-zones.
+* **Global & Remote CPs**: For scalability across deployments with multiple zones, including hybrid VMs + K8s meshes.
+* **mTLS**: Automatic mTLS issuing, identity and encryption with optional support for third-party CA.
+* **TLS Rotation**: Automatic certificate rotation for all the data planes, with configurable settings.
+* **Internal & External Services**: Aggregation of internal services and support for services outside the mesh.
+* **Traffic Permissions**: To firewall traffic between the services of a Mesh.
+* **Traffic Routing**: With dynamic load-balancing for blue/green, canary, versioning and rollback deployments.
+* **Fault Injection**: To harden our systems by injecting controlled artificial faults and observe the behavior.
+* **Traffic Logs**: To log all the activity to a third-party service, like Splunk or ELK.
+* **Traffic Tracing**: To observe the full trace of the service traffic and determine bottlenecks.
+* **Traffic Metrics**: For every Envoy dataplane managed by Kuma with native Prometheus/Grafana support.
+* **Retries**: To improve application reliability by automatically retrying requests.
+* **Proxy Configuration Templating**: The easiest way to run and configure Envoy with low-level configuration.
+* **Gateway Support**: To support any API Gateway or Ingress, like [Kong Gateway](https://github.com/Kong/kong).
+* **Healthchecks**: Both active and passive.
+* **GUI**: Out of the box browser GUI to explore all the Service Meshes configured in the system.
+* **Tagging Selectors**: To apply sophisticated regional, cloud-specific and team-oriented policies.
+* **Platform-Agnostic**: Support for Kubernetes, VMs, and bare metal. Including hybrid deployments.
+* **Transparent Proxying**: Out of the box transparent proxying on Kubernetes, VMs and any other platform.
+* **Network Overlay**: Create a configurable Mesh overlay across different Kubernetes clusters and namespaces.
+
+## Distributions
+
+Kuma is a platform-agnostic product that ships in different distributions. You can explore the available installation options at [the official website](https://kuma.io/install).
+
+You can use Kuma for modern greenfield applications built on containers as well as existing applications running on more traditional infrastructure. Kuma can be fully configured via CRDs (Custom Resource Definitions) on Kubernetes and via a RESTful HTTP API in other environments that can be easily integrated with CI/CD workflows.
+
+Kuma also provides an easy to use `kumactl` CLI client for every environment, and an official GUI that can be accessed by the browser.
+
+## Roadmap
+
+Kuma releases a minor version on a 10-week release cycle.
+The roadmap is tracked using milestones: https://github.com/kumahq/kuma/milestones
+
+## Development
+
+Kuma is under active development and production-ready.
+
+See [Developer Guide](DEVELOPER.md) for further details.
+
+## Enterprise Support
+
+If you are implementing Kuma in a mission-critical environment and require enterprise support and features, please visit [Enterprise](https://kuma.io/enterprise/) to explore the available offerings.
+
+## Package Hosting
+
+Package repository hosting is graciously provided by [Cloudsmith](https://cloudsmith.com).
+Cloudsmith is the only fully hosted, cloud-native, universal package management solution, that
+enables your organization to create, store and share packages in any format, to any place, with total
+confidence.
+
+[kuma-url]: https://kuma.io/
+[kuma-logo]: https://kuma-public-assets.s3.amazonaws.com/kuma-logo-v2.png
+[kuma-gui]: https://kuma-public-assets.s3.amazonaws.com/kuma-gui-v3.jpg
+[kuma-benefits]: https://kuma-public-assets.s3.amazonaws.com/kuma-benefits-v2.png
diff --git a/data/readmes/kured-1200.md b/data/readmes/kured-1200.md
new file mode 100644
index 0000000..d02d33e
--- /dev/null
+++ b/data/readmes/kured-1200.md
@@ -0,0 +1,74 @@
+# Kured - README (1.20.0)
+
+**Repository**: https://github.com/kubereboot/kured
+**Version**: 1.20.0
+
+---
+
+# kured - Kubernetes Reboot Daemon
+
+[](https://artifacthub.io/packages/helm/kured/kured)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkubereboot%2Fkured?ref=badge_shield)
+[](https://clomonitor.io/projects/cncf/kured)
+[](https://www.bestpractices.dev/projects/8867)
+
+
+
+- [kured - Kubernetes Reboot Daemon](#kured---kubernetes-reboot-daemon)
+ - [Introduction](#introduction)
+ - [Documentation](#documentation)
+ - [Getting Help](#getting-help)
+ - [Trademarks](#trademarks)
+ - [License](#license)
+
+## Introduction
+
+Kured (KUbernetes REboot Daemon) is a Kubernetes daemonset that
+performs safe automatic node reboots when the need to do so is
+indicated by the package management system of the underlying OS.
+
+- Watches for the presence of a reboot sentinel file e.g. `/var/run/reboot-required`
+ or the successful run of a sentinel command.
+- Utilises a lock in the API server to ensure only one node reboots at
+ a time
+- Optionally defers reboots in the presence of active Prometheus alerts or selected pods
+- Cordons & drains worker nodes before reboot, uncordoning them after
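The reboot decision above boils down to a simple existence check. A minimal Python sketch of that logic (illustrative only: kured itself is written in Go, and it also supports running a sentinel command and acting on its exit code instead of checking a file):

```python
import os

def reboot_required(sentinel: str = "/var/run/reboot-required") -> bool:
    """Return True when the OS package manager has flagged a pending reboot.

    On Debian/Ubuntu, apt creates /var/run/reboot-required after updates
    (such as a new kernel) that need a restart to take effect.
    """
    return os.path.exists(sentinel)

# A path that does not exist means no reboot is needed.
print(reboot_required("/no/such/sentinel"))  # False
```

When the check succeeds, kured acquires the cluster-wide lock, cordons and drains the node, and only then triggers the reboot.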
+
+## Documentation
+
+Find all our docs at:
+
+- [All Kured Documentation](https://kured.dev/docs/)
+- [Installing Kured](https://kured.dev/docs/installation/)
+- [Configuring Kured](https://kured.dev/docs/configuration/)
+- [Operating Kured](https://kured.dev/docs/operation/)
+- [Developing Kured](https://kured.dev/docs/development/)
+
+And there's much more!
+
+## Getting Help
+
+If you have any questions about, feedback for, or problems with `kured`:
+
+- Invite yourself to the CNCF Slack.
+- Ask a question in the [#kured](https://cloud-native.slack.com/archives/kured) Slack channel.
+- [File an issue](https://github.com/kubereboot/kured/issues/new).
+- Join us in [our monthly meeting](https://docs.google.com/document/d/1AWT8YDdqZY-Se6Y1oAlwtujWLVpNVK2M_F_Vfqw06aI/edit),
+ every first Wednesday of the month at 16:00 UTC.
+- You might want to [join the kured-dev mailing list](https://lists.cncf.io/g/cncf-kured-dev) as well.
+
+We follow the [CNCF Code of Conduct](CODE_OF_CONDUCT.md).
+
+Your feedback is always welcome!
+
+## Trademarks
+
+**Kured is a [Cloud Native Computing Foundation](https://cncf.io/) Sandbox project.**
+
+
+
+The Linux Foundation® (TLF) has registered trademarks and uses trademarks. For a list of TLF trademarks, see [Trademark Usage](https://www.linuxfoundation.org/trademark-usage/).
+
+## License
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkubereboot%2Fkured?ref=badge_large)
diff --git a/data/readmes/kusionstack-v0150.md b/data/readmes/kusionstack-v0150.md
new file mode 100644
index 0000000..6acdd31
--- /dev/null
+++ b/data/readmes/kusionstack-v0150.md
@@ -0,0 +1,164 @@
+# KusionStack - README (v0.15.0)
+
+**Repository**: https://github.com/KusionStack/kusion
+**Version**: v0.15.0
+
+---
+
+
+
+## What is Kusion?
+
+Kusion is an intent-driven [Platform Orchestrator](https://internaldeveloperplatform.org/platform-orchestrators/), which sits at the core of an [Internal Developer Platform (IDP)](https://internaldeveloperplatform.org/what-is-an-internal-developer-platform/). With Kusion you can enable app-centric development: your developers only need to write a single application specification - [AppConfiguration](https://www.kusionstack.io/docs/concepts/appconfigurations). AppConfiguration defines the workload and all resource dependencies without requiring environment-specific values; Kusion ensures it provides everything needed for the application to run.
+
+Kusion helps app developers who are responsible for creating applications and the platform engineers responsible for maintaining the infrastructure the applications run on. These roles may overlap or align differently in your organization, but Kusion is intended to ease the workload for any practitioner responsible for either set of tasks.
+
+
+
+
+
+
+## How does Kusion work?
+
+As a Platform Orchestrator, Kusion enables you to address challenges often associated with Day 0 and Day 1. Both platform engineers and application engineers can benefit from Kusion.
+
+There are two key workflows for Kusion:
+
+1. **Day 0 - Set up the modules and workspaces:** Platform engineers create shared modules for deploying applications and their underlying infrastructure, and workspace definitions for the target landing zone. These standardized, shared modules codify the requirements of stakeholders across the organization including security, compliance, and finance.
+
+ Kusion modules abstract the complexity of underlying infrastructure tooling, enabling app developers to deploy their applications using a self-service model.
+
+
+
+ 
+
+
+2. **Day 1 - Set up the application:** Application developers leverage the workspaces and modules created by the platform engineers to deploy applications and their supporting infrastructure. The platform team maintains the workspaces and modules, which allows application developers to focus on building applications using a repeatable process on standardized infrastructure.
+
+
+
+ 
+
+
+
+## Introducing Kusion Server with a Developer Portal
+
+Starting with Kusion v0.14.0, we are officially introducing Kusion Server with a Developer Portal.
+
+Kusion Server runs as a long-running service, providing the same set of functionalities as the Kusion CLI, with additional capabilities to manage application metadata and visualized application resource graphs.
+
+Kusion Server manages instances of Projects, Stacks, Workspaces, Runs, etc. centrally via a Developer Portal and a set of RESTful APIs for other systems to integrate with.
+
+https://github.com/user-attachments/assets/d4edac23-cc01-417f-b08d-de137253c9eb
+
+## Quick Start with Kusion Server
+
+To get started with Kusion Server:
+
+1. First, follow the [Installation Guide](https://www.kusionstack.io/docs/getting-started/getting-started-with-kusion-server/install-kusion) to install Kusion Server.
+
+> **Note** that you need to configure your kubeconfig properly according to the guide, as Kusion Server requires access to a Kubernetes cluster to function.
+
+2. Then, follow the [QuickStart Guide](https://www.kusionstack.io/docs/getting-started/getting-started-with-kusion-server/deliver-quickstart) to learn how to use Kusion Server to deploy your first application.
+
+## Quick Start With Kusion CLI
+
+This guide will cover:
+
+1. Install the Kusion CLI.
+2. Deploy an application to Kubernetes with Kusion.
+
+### Install
+
+#### Homebrew (macOS & Linux)
+
+```shell
+# tap the formula repository KusionStack/tap
+brew tap KusionStack/tap
+
+# install Kusion
+brew install KusionStack/tap/kusion
+```
+
+#### PowerShell
+
+```
+# install the latest version of Kusion
+powershell -Command "iwr -useb https://www.kusionstack.io/scripts/install.ps1 | iex"
+```
+
+> For more installation options, please refer to the [CLI Installation Guide](https://www.kusionstack.io/docs/getting-started/getting-started-with-kusion-cli/install-kusion).
+
+### Deploy
+
+To deploy an application, you can run the `kusion apply` command.
+
+> To rapidly get Kusion up and running, please refer to the [Quick Start Guide](https://www.kusionstack.io/docs/getting-started/getting-started-with-kusion-cli/deliver-quickstart).
+
+
+
+## Case Studies
+
+Check out these case studies on how Kusion can be useful in production:
+
+- Jan 2025: [Configuration Management at Ant Group: Generated Manifest & Immutable Desired State](https://blog.kusionstack.io/configuration-management-at-ant-group-generated-manifest-immutable-desired-state-3c50e363a3fb)
+- Jan 2024: [Modeling application delivery using AppConfiguration](https://blog.kusionstack.io/modeling-application-delivery-using-appconfiguration-d291830de8f1)
+
+## Contact
+
+If you have any questions, feel free to reach out to us in the following ways:
+
+- [Slack](https://kusionstack.slack.com) | [Join](https://join.slack.com/t/kusionstack/shared_invite/zt-2drafxksz-VzCZZwlraHP4xpPeh_g8lg)
+- [DingTalk Group](https://page.dingtalk.com/wow/dingtalk/act/en-home): `42753001` (Chinese)
+- WeChat Group (Chinese): Add the WeChat assistant to bring you into the user group.
+
+
+
+## Contributing
+
+If you're interested in contributing, please refer to the [Contributing Guide](./CONTRIBUTING.md) **before submitting
+a pull request**.
+
+## License
+
+Kusion is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
+
+## OpenSSF Best Practice Badge
+[](https://www.bestpractices.dev/projects/9586)
diff --git a/data/readmes/kustomize-kyamlv0210.md b/data/readmes/kustomize-kyamlv0210.md
new file mode 100644
index 0000000..7d9714d
--- /dev/null
+++ b/data/readmes/kustomize-kyamlv0210.md
@@ -0,0 +1,242 @@
+# Kustomize - README (kyaml/v0.21.0)
+
+**Repository**: https://github.com/kubernetes-sigs/kustomize
+**Version**: kyaml/v0.21.0
+
+---
+
+# kustomize
+
+`kustomize` lets you customize raw, template-free YAML
+files for multiple purposes, leaving the original YAML
+untouched and usable as is.
+
+`kustomize` targets kubernetes; it understands and can
+patch [kubernetes style] API objects. It's like
+[`make`], in that what it does is declared in a file,
+and it's like [`sed`], in that it emits edited text.
+
+This tool is sponsored by [sig-cli] ([KEP]).
+
+ - [Installation instructions](https://kubectl.docs.kubernetes.io/installation/kustomize/)
+ - [General documentation](https://kubectl.docs.kubernetes.io/references/kustomize/)
+ - [Examples](examples)
+
+[](https://prow.k8s.io/job-history/kubernetes-jenkins/pr-logs/directory/kustomize-presubmit-master)
+[](https://goreportcard.com/report/github.com/kubernetes-sigs/kustomize)
+
+## kubectl integration
+
+To find the kustomize version embedded in recent versions of kubectl, run `kubectl version`:
+
+```sh
+> kubectl version --client
+Client Version: v1.31.0
+Kustomize Version: v5.4.2
+```
+
+The kustomize build flow at [v2.0.3] was added
+to [kubectl v1.14][kubectl announcement]. The kustomize
+flow in kubectl remained frozen at v2.0.3 until kubectl v1.21,
+which [updated it to v4.0.5][kust-in-kubectl update]. It will
+be updated on a regular basis going forward, and such updates
+will be reflected in the Kubernetes release notes.
+
+| Kubectl version | Kustomize version |
+| --------------- | ----------------- |
+| < v1.14 | n/a |
+| v1.14-v1.20 | v2.0.3 |
+| v1.21 | v4.0.5 |
+| v1.22 | v4.2.0 |
+| v1.23 | v4.4.1 |
+| v1.24 | v4.5.4 |
+| v1.25 | v4.5.7 |
+| v1.26 | v4.5.7 |
+| v1.27 | v5.0.1 |
+
+[v2.0.3]: https://github.com/kubernetes-sigs/kustomize/releases/tag/v2.0.3
+[#2506]: https://github.com/kubernetes-sigs/kustomize/issues/2506
+[#1500]: https://github.com/kubernetes-sigs/kustomize/issues/1500
+[kust-in-kubectl update]: https://github.com/kubernetes/kubernetes/blob/4d75a6238a6e330337526e0513e67d02b1940b63/CHANGELOG/CHANGELOG-1.21.md#kustomize-updates-in-kubectl
+
+For examples and guides for using the kubectl integration please
+see the [kubernetes documentation].
+
+## Usage
+
+
+### 1) Make a [kustomization] file
+
+In some directory containing your YAML [resource]
+files (deployments, services, configmaps, etc.), create a
+[kustomization] file.
+
+This file should declare those resources, and any
+customization to apply to them, e.g. _add a common
+label_.
+
+```
+
+base: kustomization + resources
+
+kustomization.yaml deployment.yaml service.yaml
++---------------------------------------------+ +-------------------------------------------------------+ +-----------------------------------+
+| apiVersion: kustomize.config.k8s.io/v1beta1 | | apiVersion: apps/v1 | | apiVersion: v1 |
+| kind: Kustomization | | kind: Deployment | | kind: Service |
+| labels: | | metadata: | | metadata: |
+| - includeSelectors: true | | name: myapp | | name: myapp |
+| pairs: | | spec: | | spec: |
+| app: myapp | | selector: | | selector: |
+| resources: | | matchLabels: | | app: myapp |
+| - deployment.yaml | | app: myapp | | ports: |
+| - service.yaml | | template: | | - port: 6060 |
+| configMapGenerator: | | metadata: | | targetPort: 6060 |
+| - name: myapp-map | | labels: | +-----------------------------------+
+| literals: | | app: myapp |
+| - KEY=value | | spec: |
++---------------------------------------------+ | containers: |
+ | - name: myapp |
+ | image: myapp |
+ | resources: |
+ | limits: |
+ | memory: "128Mi" |
+ | cpu: "500m" |
+ | ports: |
+ | - containerPort: 6060 |
+ +-------------------------------------------------------+
+
+```
+
+File structure:
+
+> ```
+> ~/someApp
+> ├── deployment.yaml
+> ├── kustomization.yaml
+> └── service.yaml
+> ```
+
+The resources in this directory could be a fork of
+someone else's configuration. If so, you can easily
+rebase from the source material to capture
+improvements, because you don't modify the resources
+directly.
+
+Generate customized YAML with:
+
+```
+kustomize build ~/someApp
+```
+
+The YAML can be directly [applied] to a cluster:
+
+> ```
+> kustomize build ~/someApp | kubectl apply -f -
+> ```
+
+
+### 2) Create [variants] using [overlays]
+
+Manage traditional [variants] of a configuration - like
+_development_, _staging_ and _production_ - using
+[overlays] that modify a common [base].
+
+```
+
+overlay: kustomization + patches
+
+kustomization.yaml replica_count.yaml cpu_count.yaml
++-----------------------------------------------+ +-------------------------------+ +------------------------------------------+
+| apiVersion: kustomize.config.k8s.io/v1beta1 | | apiVersion: apps/v1 | | apiVersion: apps/v1 |
+| kind: Kustomization | | kind: Deployment | | kind: Deployment |
+| labels: | | metadata: | | metadata: |
+| - includeSelectors: true | | name: myapp | | name: myapp |
+| pairs: | | spec: | | spec: |
+| variant: prod | | replicas: 80 | | template: |
+| resources: | +-------------------------------+ | spec: |
+| - ../../base | | containers: |
+| patches: | | - name: myapp |
+| - path: replica_count.yaml | | resources: |
+| - path: cpu_count.yaml | | limits: |
++-----------------------------------------------+ | memory: "128Mi" |
+ | cpu: "7000m" |
+ +------------------------------------------+
+```
+
+
+File structure:
+> ```
+> ~/someApp
+> ├── base
+> │ ├── deployment.yaml
+> │ ├── kustomization.yaml
+> │ └── service.yaml
+> └── overlays
+> ├── development
+> │ ├── cpu_count.yaml
+> │ ├── kustomization.yaml
+> │ └── replica_count.yaml
+> └── production
+> ├── cpu_count.yaml
+> ├── kustomization.yaml
+> └── replica_count.yaml
+> ```
+
+Take the work from step (1) above, move it into a
+`someApp` subdirectory called `base`, then
+place overlays in a sibling directory.
+
+An overlay is just another kustomization, referring to
+the base, and referring to patches to apply to that
+base.
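Conceptually, applying an overlay's patch is a recursive merge of its values onto the base. A toy Python sketch of that idea (not kustomize's real strategic-merge engine, which also handles lists, deletion directives, and multiple patch strategies):

```python
def merge(base: dict, patch: dict) -> dict:
    """Recursively overlay patch values onto base, leaving base untouched."""
    out = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # descend into nested objects
        else:
            out[key] = value  # overlay value wins
    return out

# Mirrors the diagram above: the production overlay bumps replicas on the base.
base = {"kind": "Deployment", "spec": {"replicas": 1, "template": {"app": "myapp"}}}
patch = {"spec": {"replicas": 80}}
merged = merge(base, patch)
print(merged["spec"])  # {'replicas': 80, 'template': {'app': 'myapp'}}
```

Because the base is never mutated, the same base can feed any number of overlays.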
+
+This arrangement makes it easy to manage your
+configuration with `git`. The base could have files
+from an upstream repository managed by someone else.
+The overlays could be in a repository you own.
+Arranging the repo clones as siblings on disk avoids
+the need for git submodules (though that works fine, if
+you are a submodule fan).
+
+Generate YAML with
+
+```sh
+kustomize build ~/someApp/overlays/production
+```
+
+The YAML can be directly [applied] to a cluster:
+
+> ```sh
+> kustomize build ~/someApp/overlays/production | kubectl apply -f -
+> ```
+
+## Community
+
+- [file a bug](https://kubectl.docs.kubernetes.io/contributing/kustomize/bugs/)
+- [contribute a feature](https://kubectl.docs.kubernetes.io/contributing/kustomize/features/)
+- [propose a larger enhancement](https://github.com/kubernetes-sigs/kustomize/tree/master/proposals)
+
+### Code of conduct
+
+Participation in the Kubernetes community
+is governed by the [Kubernetes Code of Conduct].
+
+[`make`]: https://www.gnu.org/software/make
+[`sed`]: https://www.gnu.org/software/sed
+[DAM]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#declarative-application-management
+[KEP]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/2377-Kustomize/README.md
+[Kubernetes Code of Conduct]: code-of-conduct.md
+[applied]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#apply
+[base]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#base
+[declarative configuration]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#declarative-application-management
+[kubectl announcement]: https://kubernetes.io/blog/2019/03/25/kubernetes-1-14-release-announcement
+[kubernetes documentation]: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
+[kubernetes style]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kubernetes-style-object
+[kustomization]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization
+[overlay]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#overlay
+[overlays]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#overlay
+[release page]: https://github.com/kubernetes-sigs/kustomize/releases
+[resource]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#resource
+[resources]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#resource
+[sig-cli]: https://github.com/kubernetes/community/blob/master/sig-cli/README.md
+[variants]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#variant
diff --git a/data/readmes/kylin-kylin-502.md b/data/readmes/kylin-kylin-502.md
new file mode 100644
index 0000000..1fbeec5
--- /dev/null
+++ b/data/readmes/kylin-kylin-502.md
@@ -0,0 +1,146 @@
+# Kylin - README (kylin-5.0.2)
+
+**Repository**: https://github.com/apache/kylin
+**Version**: kylin-5.0.2
+
+---
+
+
+
+---
+Apache Kylin is a leading open source OLAP engine for Big Data, capable of sub-second query latency on trillions of records. Created and open sourced by eBay in 2014, it graduated to a Top Level Project of the Apache Software Foundation in 2015.
+Kylin has since been adopted by thousands of organizations worldwide as a critical analytics application for Big Data.
+
+Kylin has the following key strengths:
+
+- High performance, high concurrency, sub-second query latency
+- Unified big data warehouse architecture
+- Seamless integration with BI tools
+- Comprehensive and enterprise-ready capabilities
+
+
+
+
+## What's New in Kylin 5.0
+
+### 📊 1. Internal Table
+Kylin now supports internal tables, which are designed for flexible query and lakehouse scenarios.
+
+### 🦁 2. Model & Index Recommendation
+
+With the recommendation engine, you don't have to be a modeling expert. Kylin can now build models and optimize indexes automatically from your query history.
+You can also create models by importing SQL text.
+
+### 👾 3. Native Compute Engine
+
+Starting from version 5.0, Kylin has integrated the Gluten-ClickHouse backend (incubating in the Apache Software Foundation) as its native compute engine, and uses Gluten MergeTree as the default storage format for internal tables.
+This can bring a 2~4x performance improvement compared with vanilla Spark. Both model and internal table queries benefit from the Gluten integration.
+
+### 🧜🏻♀️ 4. Streaming Data Source
+
+Kylin now supports Apache Kafka as a streaming data source for model building. Users can create a fusion model to implement hybrid streaming-batch analysis.
+
+## Significant Changes
+
+### 🤖 1. Metadata Refactoring
+In Kylin 5.0, we have refactored the metadata storage structure and the transaction process, and removed the project lock and Epoch mechanism. This has significantly improved transaction interface performance and system concurrency capabilities.
+
+To upgrade from a 5.0 alpha or beta release, follow the [Metadata Migration Guide](https://kylin.apache.org/docs/operations/system-operation/cli_tool/metadata_operation#migration).
+
+The metadata migration tool for upgrading from Kylin 4.0 has not been tested; please contact the Kylin user or dev mailing lists for help.
+
+### Other Optimizations and Improvements
+Please refer to [Release Notes](https://kylin.apache.org/docs/release_notes/) for more details.
+
+## Quick Start
+
+### 🐳 Play Kylin in Docker
+
+To explore the new features of Kylin 5 on a laptop, we recommend pulling the Docker image from the [Apache Kylin Standalone Image on Docker Hub](https://hub.docker.com/r/apachekylin/apache-kylin-standalone) (for the amd64 platform).
+
+```shell
+docker run -d \
+ --name Kylin5-Machine \
+ --hostname localhost \
+ -e TZ=UTC \
+ -m 10G \
+ -p 7070:7070 \
+ -p 8088:8088 \
+ -p 9870:9870 \
+ -p 8032:8032 \
+ -p 8042:8042 \
+ -p 2181:2181 \
+ apachekylin/apache-kylin-standalone:5.0.0-GA
+```
+
+
+---
+### Introduction
+
+Kylin utilizes multidimensional modeling theory to build star or snowflake schemas based on tables, making it a powerful tool for large-scale data analysis. The **model** is Kylin's core component, consisting of three key aspects: *model design*, *index design*, and *data loading*. By carefully designing the model, optimizing indexes, and pre-computing data, queries executed on Kylin can avoid scanning the entire dataset, potentially reducing response times to mere seconds, even for petabyte-scale data.
+
++ **Model design** refers to establishing relationships between data tables to enable fast extraction of key information from multidimensional data. The core elements of model design are computed columns, dimensions, measures, and join relations.
+
++ **Index design** refers to creating indexes (CUBEs) within the model to precompute query results, thereby reducing query response time. Well-designed indexes not only improve query performance but also help minimize the storage and data-loading costs associated with precomputation.
+
++ **Data loading** refers to the process of importing data into the model, enabling queries to utilize the pre-built indexes rather than scanning the entire dataset. This allows for faster query responses by leveraging the model's optimized structure.
+
+
+
+### Core Concepts
+
+- **Dimension**: A perspective of viewing data, which can be used to describe object attributes or characteristics, for example, product category.
+
+- **Measure**: An aggregated sum, which is usually a continuous value, for example, product sales.
+
+- **Pre-computation**: The process of aggregating data based on model dimension combinations and of storing the results as indexes to accelerate data query.
+
+- **Index**: Also called a CUBE, used to accelerate data queries. Indexes are divided into:
+  - **Aggregate Index**: An aggregated combination of multiple dimensions and measures, used to answer aggregate queries such as total sales for a given year.
+  - **Table Index**: A multilevel index on a wide table, used to answer detailed queries such as the last 100 transactions of a certain user.
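The pre-computation idea behind aggregate indexes can be sketched in a few lines of Python (illustrative only; Kylin builds its indexes with Spark at far larger scale):

```python
from collections import defaultdict

# Raw fact rows: one record per sale.
rows = [
    {"category": "book", "year": 2024, "sales": 10},
    {"category": "book", "year": 2024, "sales": 5},
    {"category": "toy",  "year": 2024, "sales": 7},
]

# Pre-computation: aggregate once per (category, year) dimension combination
# and store the result as an "aggregate index".
index = defaultdict(int)
for row in rows:
    index[(row["category"], row["year"])] += row["sales"]

# At query time, "total book sales in 2024" becomes a lookup, not a table scan.
print(index[("book", 2024)])  # 15
```

The lookup cost no longer depends on the number of raw rows, which is why pre-computation keeps latency flat as data volume grows.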
+
+
+---
+
+### Why Use Kylin
+
++ **Low Query Latency vs. Large Volume**
+
+  When analyzing massive data, there are techniques to speed up computing and storage, but they cannot change the time complexity of queries; that is, query latency grows linearly with data volume.
+
+ If it takes 1 minute to query 100 million entries of data records, querying 10 billion data entries will take about 1 hour and 40 minutes. When companies want to analyze all business data piled up over the years or to add complexity to query, say, with more dimensions, queries will be running extremely slow or even time out.
+
+ 
+
++ **Pre-computation vs. Runtime Computation**
+
+ Pre-computation and runtime computation are two approaches to calculating results in data processing and analytics. **Pre-computation** involves calculating and storing results in advance, so they can be quickly retrieved when a query is run. In contrast, **runtime computation** dynamically computes results during query execution, processing raw data and applying aggregations, filters, or transformations as needed for each query.
+
+ Kylin primarily focuses on **pre-computation** to enhance query performance. However, we also offer advanced features that partially support runtime computation. For more details, please refer to [Table Snapshot](https://kylin.apache.org/docs/model/snapshot/), [Runtime Join](https://kylin.apache.org/docs/model/features/runtime_join), and [Internal Table](https://kylin.apache.org/docs/internaltable/intro).
+
+
++ **Manual Modeling vs. Recommendation**
+
+ Before Kylin 5.0, model design had to be done manually, which was a tedious process requiring extensive knowledge of multidimensional modeling. However, this changed with the introduction of Kylin 5.0. We now offer a new approach to model design, called **recommendation**, which allows models to be created by importing SQL, along with an automatic way to remove unnecessary indexes. Additionally, the system can leverage query history to generate index recommendations, further optimizing query performance. For more details, please refer to [Recommendation](https://kylin.apache.org/docs/model/rec/intro).
+
+
++ **Batch Data vs. Streaming Data**
+
+ In the OLAP field, data has traditionally been processed in batches. However, this is changing as more companies are now required to handle both batch and streaming data to meet their business objectives. The ability to process data in real-time has become increasingly critical for applications such as real-time analytics, monitoring, and event-driven decision-making.
+
+ To address these evolving needs, we have introduced support for streaming data in the new version. This allows users to efficiently process and analyze data as it is generated, complementing the traditional batch processing capabilities. For more details, please refer to [Streaming](https://kylin.apache.org/docs/model/streaming/intro).
diff --git a/data/readmes/kyuubi-v1102.md b/data/readmes/kyuubi-v1102.md
new file mode 100644
index 0000000..c4792f1
--- /dev/null
+++ b/data/readmes/kyuubi-v1102.md
@@ -0,0 +1,153 @@
+# Kyuubi - README (v1.10.2)
+
+**Repository**: https://github.com/apache/kyuubi
+**Version**: v1.10.2
+
+---
+
+
+
+
+
+# Apache Kyuubi
+
+Apache Kyuubi™ is a distributed and multi-tenant gateway to provide serverless
+SQL on data warehouses and lakehouses.
+
+## What is Kyuubi?
+
+Kyuubi provides a pure SQL gateway through Thrift JDBC/ODBC interface for end-users to manipulate large-scale data with pre-programmed and extensible Spark SQL engines. This "out-of-the-box" model minimizes the barriers and costs for end-users to use Spark at the client side. At the server-side, Kyuubi server and engines' multi-tenant architecture provides the administrators a way to achieve computing resource isolation, data security, high availability, high client concurrency, etc.
+
+
+
+- [x] A HiveServer2-like API
+- [x] Multi-tenant Spark Support
+- [x] Running Spark in a serverless way
+
+### Target Users
+
+Kyuubi's goal is to make it easy and efficient for `anyone` to use Spark (and perhaps other engines soon) and to let users handle big data like ordinary data. Here, `anyone` means that users do not need a Spark technical background; SQL alone is enough. Sometimes, even SQL skills are unnecessary when integrating Kyuubi with Apache Superset, which supports rich visualizations and dashboards.
+
+In typical big data production environments with Kyuubi, there should be system administrators and end-users.
+
+- System administrators: A small group of Spark experts responsible for Kyuubi deployment, configuration, and tuning.
+- End-users: Focus on their own business data, not where it is stored or how it is computed.
+
+Additionally, the Kyuubi community will continuously optimize the whole system with various features, such as History-Based Optimizer, Auto-tuning, Materialized View, SQL Dialects, Functions, etc.
+
+### Usage scenarios
+
+#### Port workloads from HiveServer2 to Spark SQL
+
+In typical big data production environments, especially secured ones, all bundled services manage access control lists to restrict access to authorized users. For example, Hadoop YARN divides compute resources into queues. With Queue ACLs, it can identify and control which users/groups can take actions on particular queues. Similarly, HDFS ACLs control access to HDFS files by setting different permissions for specific users/groups.
+
+Apache Spark is a unified analytics engine for large-scale data processing. It provides a Distributed SQL Engine, a.k.a. the Spark Thrift Server (STS), designed to be seamlessly compatible with HiveServer2 while delivering even better performance.
+
+HiveServer2 can identify and authenticate a caller; if the caller also has permissions for the YARN queue and HDFS files, the request succeeds, otherwise it fails. However, on the one hand, STS is a single Spark application: the user and queue to which STS belongs are uniquely determined at startup. Consequently, STS cannot leverage cluster managers such as YARN and Kubernetes for resource isolation and sharing, nor control access for individual callers, since the whole system runs as a single user. On the other hand, the Thrift Server is coupled with the Spark driver's JVM process. This coupled architecture puts server stability at high risk and makes it unable to handle high client concurrency or to apply high-availability techniques such as load balancing, as it is stateful.
+
+Kyuubi extends the use of STS in a multi-tenant model based on a unified interface and relies on the concept of multi-tenancy to interact with cluster managers to finally gain the ability of resources sharing/isolation and data security. The loosely coupled architecture of the Kyuubi server and engine dramatically improves the client concurrency and service stability of the service itself.
+
+#### DataLake/Lakehouse Support
+
+The vision of Kyuubi is to unify the portal and become an easy-to-use data lake management platform. Different kinds of workloads, such as ETL processing and BI analytics, can be supported by one platform, using one copy of data, with one SQL interface.
+
+- Logical View support via Kyuubi DataLake Metadata APIs
+- Multiple Catalogs support
+- SQL Standard Authorization support for DataLake(coming)
+
+#### Cloud Native Support
+
+Kyuubi can deploy its engines on different kinds of cluster managers, such as Hadoop YARN and Kubernetes.
+
+
+
+### The Kyuubi Ecosystem (present and future)
+
+The figure below shows our vision for the Kyuubi Ecosystem. Some of them have been realized, some in development,
+and others would not be possible without your help.
+
+
+
+## Online Documentation
+
+## Quick Start
+
+Ready? [Getting Started](https://kyuubi.readthedocs.io/en/master/quick_start/) with Kyuubi.
+
+## [Contributing](./CONTRIBUTING.md)
+
+## Project & Community Status
+
+
+
+## Aside
+
+The project took its name from a character of a popular Japanese manga - `Naruto`.
+The character is named `Kyuubi Kitsune/Kurama`, which is a nine-tailed fox in mythology.
+`Kyuubi` spread the power and spirit of fire, which is used here to represent the powerful [Apache Spark](http://spark.apache.org).
+Its nine tails stand for end-to-end multi-tenancy support of this project.
diff --git a/data/readmes/kyverno-v1161.md b/data/readmes/kyverno-v1161.md
new file mode 100644
index 0000000..281bd39
--- /dev/null
+++ b/data/readmes/kyverno-v1161.md
@@ -0,0 +1,155 @@
+# Kyverno - README (v1.16.1)
+
+**Repository**: https://github.com/kyverno/kyverno
+**Version**: v1.16.1
+
+---
+
+
+
+# Kyverno [](https://twitter.com/intent/tweet?text=Cloud%20Native%20Policy%20Management.%20No%20new%20language%20required%1&url=https://github.com/kyverno/kyverno/&hashtags=kubernetes,devops)
+
+**Cloud Native Policy Management 🎉**
+
+[](https://github.com/kyverno/kyverno/actions)
+[](https://goreportcard.com/report/github.com/kyverno/kyverno)
+
+[](https://github.com/kyverno/kyverno/stargazers)
+[](https://bestpractices.coreinfrastructure.org/projects/5327)
+[](https://securityscorecards.dev/viewer/?uri=github.com/kyverno/kyverno)
+[](https://slsa.dev)
+[](https://artifacthub.io/packages/search?repo=kyverno)
+[](https://app.codecov.io/gh/kyverno/kyverno/branch/main)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fkyverno%2Fkyverno?ref=badge_shield)
+
+
+
+## 📑 Table of Contents
+
+- [About Kyverno](#about-kyverno)
+- [Documentation](#-documentation)
+- [Demos & Tutorials](#-demos--tutorials)
+- [Popular Use Cases](#-popular-use-cases)
+- [Explore the Policy Library](#-explore-the-policy-library)
+- [Getting Help](#-getting-help)
+- [Contributing](#-contributing)
+- [Software Bill of Materials](#software-bill-of-materials)
+- [Community Highlights](#-community-highlights)
+- [Contributors](#contributors)
+- [License](#license)
+
+## About Kyverno
+
+Kyverno is a Kubernetes-native policy engine designed for platform engineering teams. It enables security, compliance, automation, and governance through policy-as-code. Kyverno can:
+
+- Validate, mutate, generate, and clean up resources using Kubernetes admission controls and background scans.
+- Verify container image signatures for supply chain security.
+- Operate with tools you already use — like `kubectl`, `kustomize`, and Git.
+
+
+
+
+
+## 📙 Documentation
+
+Kyverno installation and reference documentation is available at [kyverno.io](https://kyverno.io).
+
+- 👉 **[Quick Start](https://kyverno.io/docs/introduction/#quick-start)**
+- 👉 **[Installation Guide](https://kyverno.io/docs/installation/)**
+- 👉 **[Policy Library](https://kyverno.io/policies/)**
+
+## 🎥 Demos & Tutorials
+
+- ▶️ [Getting Started with Kyverno – YouTube](https://www.youtube.com/results?search_query=kyverno+tutorial)
+- 🧪 [Kyverno Playground](https://playground.kyverno.io/)
+
+## 🎯 Popular Use Cases
+
+Kyverno helps platform teams enforce best practices and security standards. Some common use cases include:
+
+### 1. **Security & Compliance**
+- Enforce Pod Security Standards (PSS)
+- Require specific security contexts
+- Validate container image sources and signatures
+- Enforce CIS Benchmark policies
+
+### 2. **Operational Excellence**
+- Auto-label workloads
+- Enforce naming conventions
+- Generate default configurations (e.g., NetworkPolicies)
+- Validate YAML and Helm manifests
+
+### 3. **Cost Optimization**
+- Enforce resource quotas and limits
+- Require cost allocation labels
+- Validate instance types
+- Clean up unused resources
+
+### 4. **Developer Guardrails**
+- Require readiness/liveness probes
+- Enforce ingress/egress policies
+- Validate container image versions
+- Auto-inject config maps or secrets
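+
+As a concrete sketch of one such guardrail, a Kyverno `ClusterPolicy` that requires resource limits on every container might look like the following (a minimal illustration; the policy name and message are our own, and the [Policy Library](https://kyverno.io/policies/) has production-ready versions):
+
+```yaml
+apiVersion: kyverno.io/v1
+kind: ClusterPolicy
+metadata:
+  name: require-resource-limits   # hypothetical policy name
+spec:
+  validationFailureAction: Audit  # switch to Enforce to block non-compliant Pods
+  rules:
+    - name: check-container-limits
+      match:
+        any:
+          - resources:
+              kinds:
+                - Pod
+      validate:
+        message: "CPU and memory limits are required for all containers."
+        pattern:
+          spec:
+            containers:
+              - resources:
+                  limits:
+                    memory: "?*"
+                    cpu: "?*"
+```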
+
+## 📚 Explore the Policy Library
+
+Discover hundreds of production-ready Kyverno policies for security, operations, cost control, and developer enablement.
+
+👉 [Browse the Policy Library](https://kyverno.io/policies/)
+
+## 🙋 Getting Help
+
+We’re here to help:
+
+- 🐞 File a [GitHub Issue](https://github.com/kyverno/kyverno/issues)
+- 💬 Join the [Kyverno Slack Channel](https://slack.k8s.io/#kyverno)
+- 📅 Attend [Community Meetings](https://kyverno.io/community/#community-meetings)
+- ⭐️ [Star this repository](https://github.com/kyverno/kyverno/stargazers) to stay updated
+
+## ➕ Contributing
+
+Thank you for your interest in contributing to Kyverno!
+
+- ✅ Read the [Contribution Guidelines](/CONTRIBUTING.md)
+- 🧵 Join [GitHub Discussions](https://github.com/kyverno/kyverno/discussions)
+- 📖 Read the [Development Guide](/DEVELOPMENT.md)
+- 🏁 Check [Good First Issues](https://github.com/kyverno/kyverno/labels/good%20first%20issue) and request with `/assign`
+- 🌱 Explore the [Community page](https://kyverno.io/community/)
+
+## 🧾 Software Bill of Materials
+
+All Kyverno images include a Software Bill of Materials (SBOM) in [CycloneDX](https://cyclonedx.org/) format. SBOMs are available at:
+
+- 👉 [`ghcr.io/kyverno/sbom`](https://github.com/orgs/kyverno/packages?tab=packages&q=sbom)
+- 👉 [Fetching the SBOM](https://kyverno.io/docs/security/#fetching-the-sbom-for-kyverno)
+
+## 👥 Contributors
+
+Kyverno is built and maintained by our growing community of contributors!
+
+
+
+
+
+_Made with [contributors-img](https://contrib.rocks)_
+
+## 📄 License
+
+Copyright 2025, the Kyverno project. All rights reserved.
+Kyverno is licensed under the [Apache License 2.0](LICENSE).
+
+Kyverno is a [Cloud Native Computing Foundation (CNCF) Incubating project](https://www.cncf.io/projects/) and was contributed by [Nirmata](https://nirmata.com/?utm_source=github&utm_medium=repository).
diff --git a/data/readmes/lazydocker-v0242.md b/data/readmes/lazydocker-v0242.md
new file mode 100644
index 0000000..d51de9d
--- /dev/null
+++ b/data/readmes/lazydocker-v0242.md
@@ -0,0 +1,333 @@
+# lazydocker - README (v0.24.2)
+
+**Repository**: https://github.com/jesseduffield/lazydocker
+**Version**: v0.24.2
+
+---
+
+
+
+A simple terminal UI for both docker and docker-compose, written in Go with the [gocui](https://github.com/jroimartin/gocui 'gocui') library.
+
+
+[](https://goreportcard.com/report/github.com/jesseduffield/lazydocker)
+[](https://golangci.com)
+[](http://godoc.org/github.com/jesseduffield/lazydocker)
+
+[](https://github.com/jesseduffield/lazydocker/releases)
+[](https://github.com/jesseduffield/lazydocker/releases/latest)
+[](https://github.com/Homebrew/homebrew-core/blob/master/Formula/lazydocker.rb)
+
+
+
+[Demo](https://youtu.be/NICqQPxwJWw)
+
+## Sponsors
+
+
+ Maintenance of this project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below click here. 💙
+
+
+
+
+
+
+## Elevator Pitch
+
+Minor rant incoming: Something's not working? Maybe a service is down. `docker-compose ps`. Yep, it's that microservice that's still buggy. No issue, I'll just restart it: `docker-compose restart`. Okay now let's try again. Oh wait the issue is still there. Hmm. `docker-compose ps`. Right, so the service must have just stopped immediately after starting. I probably would have known that if I was reading the log stream, but there is a lot of clutter in there from other services. I could get the logs for just that one service with `docker compose logs --follow myservice` but that dies every time the service dies so I'd need to run that command every time I restart the service. I could alternatively run `docker-compose up myservice` and in that terminal window if the service is down I could just `up` it again, but now I've got one service hogging a terminal window even after I no longer care about its logs. I guess when I want to reclaim the terminal real estate I can do `ctrl+P,Q`, but... wait, that's not working for some reason. Should I use ctrl+C instead? I can't remember if that closes the foreground process or kills the actual service.
+
+What a headache!
+
+Memorising docker commands is hard. Memorising aliases is slightly less hard. Keeping track of your containers across multiple terminal windows is near impossible. What if you had all the information you needed in one terminal window, with every common command living one keypress away (and the ability to add custom commands as well)? Lazydocker's goal is to make that dream a reality.
+
+- [Requirements](https://github.com/jesseduffield/lazydocker#requirements)
+- [Installation](https://github.com/jesseduffield/lazydocker#installation)
+- [Usage](https://github.com/jesseduffield/lazydocker#usage)
+- [Keybindings](/docs/keybindings)
+- [Cool Features](https://github.com/jesseduffield/lazydocker#cool-features)
+- [Contributing](https://github.com/jesseduffield/lazydocker#contributing)
+- [Video Tutorial](https://youtu.be/NICqQPxwJWw)
+- [Config Docs](/docs/Config.md)
+- [Twitch Stream](https://www.twitch.tv/jesseduffield)
+- [FAQ](https://github.com/jesseduffield/lazydocker#faq)
+
+## Requirements
+
+- Docker >= **1.13** (API >= **1.24**)
+- Docker-Compose >= **1.23.2** (optional)
+
+## Installation
+
+### Homebrew
+
+The `lazydocker` formula can normally be found in Homebrew core, but we suggest you tap our formula to get a more frequently updated one. It works on Linux, too.
+
+**Tap**:
+```sh
+brew install jesseduffield/lazydocker/lazydocker
+```
+
+**Core**:
+```sh
+brew install lazydocker
+```
+
+### Scoop (Windows)
+
+You can install `lazydocker` using [scoop](https://scoop.sh/):
+
+```sh
+scoop install lazydocker
+```
+### Chocolatey (Windows)
+
+You can install `lazydocker` using [Chocolatey](https://chocolatey.org/):
+
+```sh
+choco install lazydocker
+```
+### asdf-vm
+
+You can install [asdf-lazydocker plugin](https://github.com/comdotlinux/asdf-lazydocker) using [asdf-vm](https://asdf-vm.com/):
+#### Setup (Once)
+```sh
+asdf plugin add lazydocker https://github.com/comdotlinux/asdf-lazydocker.git
+```
+
+#### For Install / Upgrade
+```sh
+asdf list all lazydocker
+asdf install lazydocker latest
+asdf global lazydocker latest
+```
+
+### Binary Release (Linux/OSX/Windows)
+
+You can manually download a binary release from [the release page](https://github.com/jesseduffield/lazydocker/releases).
+
+For an automated install/update (don't forget to always verify what you're piping into bash):
+
+```sh
+curl https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash
+```
+The script installs the downloaded binary to the `$HOME/.local/bin` directory by default; this can be changed by setting the `DIR` environment variable.
+
+### Go
+
+Required Go Version >= **1.19**
+
+```sh
+go install github.com/jesseduffield/lazydocker@latest
+```
+
+Required Go version >= **1.8**, <= **1.17**
+
+```sh
+go get github.com/jesseduffield/lazydocker
+```
+
+### Arch Linux AUR
+
+You can install lazydocker using the [AUR](https://aur.archlinux.org/packages/lazydocker) by running:
+
+```sh
+yay -S lazydocker
+```
+
+### Docker
+
+[](https://hub.docker.com/r/lazyteam/lazydocker)
+[](https://hub.docker.com/r/lazyteam/lazydocker)
+[](https://hub.docker.com/r/lazyteam/lazydocker)
+
+1. If you have an ARM device, build the image for your architecture:
+
+   - If you have an ARM 32-bit v6 architecture
+
+ ```sh
+ docker build -t lazyteam/lazydocker \
+ --build-arg BASE_IMAGE_BUILDER=arm32v6/golang \
+ --build-arg GOARCH=arm \
+ --build-arg GOARM=6 \
+ https://github.com/jesseduffield/lazydocker.git
+ ```
+
+   - If you have an ARM 32-bit v7 architecture
+
+ ```sh
+ docker build -t lazyteam/lazydocker \
+ --build-arg BASE_IMAGE_BUILDER=arm32v7/golang \
+ --build-arg GOARCH=arm \
+ --build-arg GOARM=7 \
+ https://github.com/jesseduffield/lazydocker.git
+ ```
+
+   - If you have an ARM 64-bit v8 architecture
+
+ ```sh
+ docker build -t lazyteam/lazydocker \
+ --build-arg BASE_IMAGE_BUILDER=arm64v8/golang \
+ --build-arg GOARCH=arm64 \
+ https://github.com/jesseduffield/lazydocker.git
+ ```
+
+
+
+1. Run the container
+
+ ```sh
+ docker run --rm -it -v \
+ /var/run/docker.sock:/var/run/docker.sock \
+ -v /yourpath:/.config/jesseduffield/lazydocker \
+ lazyteam/lazydocker
+ ```
+
+ - Don't forget to change `/yourpath` to an actual path you created to store lazydocker's config
+ - You can also use this [docker-compose.yml](https://github.com/jesseduffield/lazydocker/blob/master/docker-compose.yml)
+ - You might want to create an alias, for example:
+
+ ```sh
+ echo "alias lzd='docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock -v /yourpath/config:/.config/jesseduffield/lazydocker lazyteam/lazydocker'" >> ~/.zshrc
+ ```
+
+
+
+For development, you can build the image using:
+
+```sh
+git clone https://github.com/jesseduffield/lazydocker.git
+cd lazydocker
+docker build -t lazyteam/lazydocker \
+ --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
+ --build-arg VCS_REF=`git rev-parse --short HEAD` \
+ --build-arg VERSION=`git describe --abbrev=0 --tag` \
+ .
+```
+
+If you encounter a compatibility issue with Docker bundled binary, try rebuilding
+the image with the build argument `--build-arg DOCKER_VERSION="v$(docker -v | cut -d" " -f3 | rev | cut -c 2- | rev)"`
+so that the bundled docker binary matches your host docker binary version.
+
+### Manual
+
+You'll need to [install Go](https://golang.org/doc/install)
+
+```
+git clone https://github.com/jesseduffield/lazydocker.git
+cd lazydocker
+go install
+```
+
+You can also use `go run main.go` to compile and run in one go (pun definitely intended)
+
+## Usage
+
+Call `lazydocker` in your terminal. I personally use this a lot so I've made an alias for it like so:
+
+```
+echo "alias lzd='lazydocker'" >> ~/.zshrc
+```
+
+(you can substitute .zshrc for whatever rc file you're using)
+
+- Basic video tutorial [here](https://youtu.be/NICqQPxwJWw).
+- List of keybindings
+ [here](/docs/keybindings).
+
+## Cool features
+
+everything is one keypress away (or one click away! Mouse support FTW):
+
+- viewing the state of your docker or docker-compose container environment at a glance
+- viewing logs for a container/service
+- viewing ascii graphs of your containers' metrics so that you can not only feel but also look like a developer
+- customising those graphs to measure nearly any metric you want
+- attaching to a container/service
+- restarting/removing/rebuilding containers/services
+- viewing the ancestor layers of a given image
+- pruning containers, images, or volumes that are hogging up disk space
+
+## Contributing
+
+There is still a lot of work to go! Please check out the [contributing guide](CONTRIBUTING.md).
+For contributor discussion about things not better discussed here in the repo, join the Discord channel.
+
+
+
+## Donate
+
+If you would like to support the development of lazydocker, consider [sponsoring me](https://github.com/sponsors/jesseduffield) (github is matching all donations dollar-for-dollar for 12 months)
+
+## Social
+
+If you want to see what I (Jesse) am up to in terms of development, follow me on
+[twitter](https://twitter.com/DuffieldJesse) or watch me program on
+[twitch](https://www.twitch.tv/jesseduffield)
+
+## FAQ
+
+### How do I edit my config?
+
+By opening lazydocker, clicking on the 'project' panel in the top left, and pressing 'o' (or 'e' if your editor is vim). See [Config Docs](/docs/Config.md)
+
+### How do I get text to wrap in my main panel?
+
+In the future I want to make this the default, but for now there are some CPU issues that arise with wrapping. If you want to enable wrapping, use `gui.wrapMainPanel: true`
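+
+For reference, that setting lives under the `gui` section of lazydocker's `config.yml` (a minimal sketch; see the [Config Docs](/docs/Config.md) for the full schema and the config file location on your OS):
+
+```yaml
+gui:
+  wrapMainPanel: true
+```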
+
+### How do you select text?
+
+Because we support mouse events, you will need to hold option while dragging the mouse to indicate you're trying to select text rather than click on something. Alternatively you can disable mouse events via the `gui.ignoreMouseEvents` config value.
+
+Mac Users: See [Issue #190](https://github.com/jesseduffield/lazydocker/issues/190) for other options.
+
+### Why can't I see my container's logs?
+
+By default we only show logs from the last hour, so that we're not putting too much strain on the machine. This may be why you can't see logs when you first start lazydocker. This can be overridden in the config's `commandTemplates`.
+
+If you are running lazydocker in a Docker container, there is a known bug that prevents you from seeing logs or CPU usage.
+
+## Alternatives
+
+- [docui](https://github.com/skanehira/docui) - Skanehira beat me to the punch on making a docker terminal UI, so definitely check out that repo as well! I think the two repos can live in harmony though: lazydocker is more about managing existing containers/services, and docui is more about creating and configuring them.
+- [Portainer](https://github.com/portainer/portainer) - Portainer tries to solve the same problem but it's accessed via your browser rather than your terminal. It also supports docker swarm.
+- See [Awesome Docker list](https://github.com/veggiemonk/awesome-docker/blob/master/README.md#terminal) for similar tools to work with Docker.
diff --git a/data/readmes/lighthouse-v801.md b/data/readmes/lighthouse-v801.md
new file mode 100644
index 0000000..1d6d9af
--- /dev/null
+++ b/data/readmes/lighthouse-v801.md
@@ -0,0 +1,88 @@
+# Lighthouse - README (v8.0.1)
+
+**Repository**: https://github.com/sigp/lighthouse
+**Version**: v8.0.1
+
+---
+
+# Lighthouse: Ethereum consensus client
+
+An open-source Ethereum consensus client, written in Rust and maintained by Sigma Prime.
+
+[![Book Status]][Book Link] [![Chat Badge]][Chat Link]
+
+[Chat Badge]: https://img.shields.io/badge/chat-discord-%237289da
+[Chat Link]: https://discord.gg/cyAszAh
+[Book Status]:https://img.shields.io/badge/user--docs-unstable-informational
+[Book Link]: https://lighthouse-book.sigmaprime.io
+[stable]: https://github.com/sigp/lighthouse/tree/stable
+[unstable]: https://github.com/sigp/lighthouse/tree/unstable
+[blog]: https://lighthouse-blog.sigmaprime.io
+
+[Documentation](https://lighthouse-book.sigmaprime.io)
+
+
+
+## Overview
+
+Lighthouse is:
+
+- Ready for use on Ethereum consensus mainnet.
+- Fully open-source, licensed under Apache 2.0.
+- Security-focused. Fuzzing techniques have been continuously applied and several external security reviews have been performed.
+- Built in [Rust](https://www.rust-lang.org), a modern language providing unique safety guarantees and
+ excellent performance (comparable to C++).
+- Funded by various organisations, including Sigma Prime, the
+ Ethereum Foundation, Consensys, the Decentralization Foundation and private individuals.
+- Actively involved in the specification and security analysis of the
+ Ethereum proof-of-stake consensus specification.
+
+## Staking Deposit Contract
+
+The Lighthouse team acknowledges
+[`0x00000000219ab540356cBB839Cbe05303d7705Fa`](https://etherscan.io/address/0x00000000219ab540356cbb839cbe05303d7705fa)
+as the canonical staking deposit contract address.
+
+## Documentation
+
+The [Lighthouse Book](https://lighthouse-book.sigmaprime.io) contains information for users and
+developers.
+
+The Lighthouse team maintains a blog at [https://blog.sigmaprime.io/tag/lighthouse][blog] which contains periodic
+progress updates, roadmap insights and interesting findings.
+
+## Branches
+
+Lighthouse maintains two permanent branches:
+
+- [`stable`][stable]: Always points to the latest stable release.
+ - This is ideal for most users.
+- [`unstable`][unstable]: Used for development, contains the latest PRs.
+ - Developers should base their PRs on this branch.
+
+## Contributing
+
+Lighthouse welcomes contributors.
+
+If you are looking to contribute, please head to the
+[Contributing](https://lighthouse-book.sigmaprime.io/contributing.html) section
+of the Lighthouse book.
+
+## Contact
+
+The best place for discussion is the [Lighthouse Discord
+server](https://discord.gg/cyAszAh).
+
+Sign up to the [Lighthouse Development Updates](https://eepurl.com/dh9Lvb) mailing list for email
+notifications about releases, network status and other important information.
+
+Encrypt sensitive messages using our [PGP
+key](https://keybase.io/sigp/pgp_keys.asc?fingerprint=15e66d941f697e28f49381f426416dc3f30674b0).
+
+## Donations
+
+Lighthouse is an open-source project and a public good. Funding public goods is
+hard and we're grateful for the donations we receive from the community via:
+
+- [Gitcoin Grants](https://gitcoin.co/grants/25/lighthouse-ethereum-20-client).
+- Ethereum address: `0x25c4a76E7d118705e7Ea2e9b7d8C59930d8aCD3b` (donation.sigmaprime.eth).
diff --git a/data/readmes/lima-v202.md b/data/readmes/lima-v202.md
new file mode 100644
index 0000000..d469f6c
--- /dev/null
+++ b/data/readmes/lima-v202.md
@@ -0,0 +1,105 @@
+# Lima - README (v2.0.2)
+
+**Repository**: https://github.com/lima-vm/lima
+**Version**: v2.0.2
+
+---
+
+[[🌎**Web site**]](https://lima-vm.io/)
+[[📖**Documentation**]](https://lima-vm.io/docs/)
+[[👤**Slack (`#lima`)**]](https://slack.cncf.io)
+
+
+
+
+
+
+# Lima: Linux Machines
+
+[](https://deepwiki.com/lima-vm/lima)
+[](https://www.bestpractices.dev/projects/6505)
+[](https://scorecard.dev/viewer/?uri=github.com/lima-vm/lima)
+
+[Lima](https://lima-vm.io/) launches Linux virtual machines with automatic file sharing and port forwarding (similar to WSL2).
+
+The original goal of Lima was to promote [containerd](https://containerd.io) including [nerdctl (contaiNERD ctl)](https://github.com/containerd/nerdctl)
+to Mac users, but Lima can be used for non-container applications as well.
+
+Lima also supports other container engines (Docker, Podman, Kubernetes, etc.) and non-macOS hosts (Linux, NetBSD, etc.).
+
+## Getting started
+Set up (Homebrew):
+```bash
+brew install lima
+limactl start
+```
+
+To run Linux commands:
+```bash
+lima uname -a
+```
+
+To run containers with containerd:
+```bash
+lima nerdctl run --rm hello-world
+```
+
+To run containers with Docker:
+```bash
+limactl start template://docker
+export DOCKER_HOST=$(limactl list docker --format 'unix://{{.Dir}}/sock/docker.sock')
+docker run --rm hello-world
+```
+
+To run containers with Kubernetes:
+```bash
+limactl start template://k8s
+export KUBECONFIG=$(limactl list k8s --format 'unix://{{.Dir}}/copied-from-guest/kubeconfig.yaml')
+kubectl apply -f ...
+```
+
+See the [documentation](https://lima-vm.io/docs/) for further information.
+
+## Contributing
+
+We welcome contributions! Please see our [Contributing Guide](https://lima-vm.io/docs/community/contributing/) for details on:
+
+- **Developer Certificate of Origin (DCO)**: All commits must be signed off with `git commit -s`
+- Code licensing and pull request guidelines
+- Testing requirements
+
+## Community
+### Adopters
+
+Container environments:
+- [Rancher Desktop](https://rancherdesktop.io/): Kubernetes and container management to the desktop
+- [Colima](https://github.com/abiosoft/colima): Docker (and Kubernetes) on macOS with minimal setup
+- [Finch](https://github.com/runfinch/finch): Finch is a command line client for local container development
+- [Podman Desktop](https://podman-desktop.io/): Podman Desktop GUI has a plug-in for Lima virtual machines
+
+GUI:
+- [Lima xbar plugin](https://github.com/unixorn/lima-xbar-plugin): [xbar](https://xbarapp.com/) plugin to start/stop VMs from the menu bar and see their running status.
+- [lima-gui](https://github.com/afbjorklund/lima-gui): Qt GUI for Lima
+
+### Communication channels
+
+- [GitHub Discussions](https://github.com/lima-vm/lima/discussions)
+- `#lima` channel in the CNCF Slack
+  - New account: https://slack.cncf.io/
+  - Login: https://cloud-native.slack.com/
+- Zoom meetings (tentatively monthly)
+ - Meeting notes & agenda proposals: https://github.com/lima-vm/lima/discussions/categories/meetings
+ - Calendar: https://zoom-lfx.platform.linuxfoundation.org/meetings/lima
+
+### Code of Conduct
+Lima follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
+
+- - -
+**We are a [Cloud Native Computing Foundation](https://cncf.io/) incubating project.**
+
+
+
+
+
+
+The Linux Foundation® (TLF) has registered trademarks and uses trademarks. For a list of TLF trademarks, see [Trademark Usage](https://www.linuxfoundation.org/legal/trademark-usage).
diff --git a/data/readmes/linkerd-edge-25121.md b/data/readmes/linkerd-edge-25121.md
new file mode 100644
index 0000000..af410d5
--- /dev/null
+++ b/data/readmes/linkerd-edge-25121.md
@@ -0,0 +1,124 @@
+# Linkerd - README (edge-25.12.1)
+
+**Repository**: https://github.com/linkerd/linkerd2
+**Version**: edge-25.12.1
+
+---
+
+# Linkerd
+
+![Linkerd][logo]
+
+[](https://bestpractices.coreinfrastructure.org/projects/4629)
+[![GitHub Actions Status][github-actions-badge]][github-actions]
+[](LICENSE)
+[![Go Report Card][go-report-card-badge]][go-report-card]
+[![Go Reference][go-doc-badge]][go-doc]
+[![Slack Status][slack-badge]][slack]
+
+:balloon: Welcome to Linkerd! :wave:
+
+Linkerd is an ultralight, security-first service mesh for Kubernetes. Linkerd
+adds critical security, observability, and reliability features to your
+Kubernetes stack with no code change required.
+
+Linkerd is a Cloud Native Computing Foundation ([CNCF][cncf]) project.
+
+## Repo layout
+
+This is the primary repo for the Linkerd 2.x line of development.
+
+The complete list of Linkerd repos is:
+
+* [linkerd2][linkerd2]: Main Linkerd 2.x repo, including control plane and CLI
+* [linkerd2-proxy][proxy]: Linkerd 2.x data plane proxy
+* [linkerd2-proxy-api][proxy-api]: Linkerd 2.x gRPC API bindings
+* [linkerd][linkerd1]: Linkerd 1.x
+* [website][linkerd-website]: linkerd.io website (including docs for 1.x and
+ 2.x)
+
+## Quickstart and documentation
+
+You can run Linkerd on any modern Kubernetes cluster in a matter of seconds.
+See the [Linkerd Getting Started Guide][getting-started] for how.
+
+For more comprehensive documentation, start with the [Linkerd
+docs][linkerd-docs]. (The doc source code is available in the
+[website][linkerd-website] repo.)
+
+## Working in this repo
+
+[`BUILD.md`](BUILD.md) includes general information on how to work in this repo.
+
+We :heart: pull requests! See [`CONTRIBUTING.md`](CONTRIBUTING.md) for info on
+contributing changes.
+
+## Get involved
+
+* Join Linkerd's [user mailing list][linkerd-users], [developer mailing
+ list][linkerd-dev], and [announcements mailing list][linkerd-announce].
+* Follow [@Linkerd][twitter] on Twitter.
+* Join the [Linkerd Slack][slack].
+
+## Steering Committee meetings
+
+We host regular online meetings for the Linkerd Steering Committee. All are
+welcome to attend, but audio and video participation is limited to Steering
+Committee members and maintainers. These meetings are currently scheduled on an
+ad-hoc basis and announced on the [linkerd-users][linkerd-users] mailing list.
+
+* [Zoom link](https://zoom.us/my/cncflinkerd)
+* [Minutes from previous meetings](https://docs.google.com/document/d/1GDNM5eosiyjVDo6YHXBMsvlpyzUldgg-XLMNzf7I404/edit)
+* [Recordings from previous meetings](https://www.youtube.com/playlist?list=PLI9FkLPXDscBHP91Ud3lyJScI4ZCjRG6F)
+
+## Code of Conduct
+
+This project is for everyone. We ask that our users and contributors take a few
+minutes to review our [Code of Conduct][CoC].
+
+## Security
+
+See [SECURITY.md](SECURITY.md) for our security policy, including how to report
+vulnerabilities.
+
+Linkerd undergoes periodic third-party security audits and we
+[publish the results here](https://github.com/linkerd/linkerd2/tree/main/audits).
+
+## License
+
+Copyright 2025 the Linkerd Authors. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use
+these files except in compliance with the License. You may obtain a copy of the
+License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed
+under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
+CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License.
+
+
+[github-actions]: https://github.com/linkerd/linkerd2/actions
+[github-actions-badge]: https://github.com/linkerd/linkerd2/actions/workflows/actions.yml/badge.svg
+[cncf]: https://www.cncf.io/
+[CoC]: https://github.com/linkerd/linkerd/wiki/Linkerd-code-of-conduct
+[getting-started]: https://linkerd.io/2/getting-started/
+[go-report-card]: https://goreportcard.com/report/github.com/linkerd/linkerd2
+[go-report-card-badge]: https://goreportcard.com/badge/github.com/linkerd/linkerd2
+[go-doc-badge]: https://pkg.go.dev/badge/github.com/linkerd/linkerd2.svg
+[go-doc]: https://pkg.go.dev/github.com/linkerd/linkerd2
+[linkerd1]: https://github.com/linkerd/linkerd
+[linkerd2]: https://github.com/linkerd/linkerd2
+[linkerd-announce]: https://lists.cncf.io/g/cncf-linkerd-announce
+[linkerd-dev]: https://lists.cncf.io/g/cncf-linkerd-dev
+[linkerd-docs]: https://linkerd.io/2/overview/
+[linkerd-users]: https://lists.cncf.io/g/cncf-linkerd-users
+[linkerd-website]: https://github.com/linkerd/website
+[logo]: https://user-images.githubusercontent.com/9226/33582867-3e646e02-d90c-11e7-85a2-2e238737e859.png
+[proxy]: https://github.com/linkerd/linkerd2-proxy
+[proxy-api]: https://github.com/linkerd/linkerd2-proxy-api
+[slack-badge]: http://slack.linkerd.io/badge.svg
+[slack]: http://slack.linkerd.io
+[twitter]: https://twitter.com/linkerd
diff --git a/data/readmes/litmus-3230.md b/data/readmes/litmus-3230.md
new file mode 100644
index 0000000..3ef00d3
--- /dev/null
+++ b/data/readmes/litmus-3230.md
@@ -0,0 +1,181 @@
+# Litmus - README (3.23.0)
+
+**Repository**: https://github.com/litmuschaos/litmus
+**Version**: 3.23.0
+
+---
+
+# [LitmusChaos](https://litmuschaos.io/)
+
+
+### Open Source Chaos Engineering Platform
+
+[](https://slack.litmuschaos.io)
+
+[](https://hub.docker.com/r/litmuschaos/chaos-operator)
+[](https://github.com/litmuschaos/litmus/stargazers)
+[](https://github.com/litmuschaos/litmus/issues)
+[](https://twitter.com/LitmusChaos)
+[](https://www.bestpractices.dev/projects/3202)
+[](https://app.fossa.io/projects/git%2Bgithub.com%2Flitmuschaos%2Flitmus?ref=badge_shield)
+[](https://www.youtube.com/channel/UCa57PMqmz_j0wnteRa9nCaw)
+[](https://gurubase.io/g/litmuschaos)
+
+
+#### *Read this in [other languages](translations/TRANSLATIONS.md).*
+
+[🇰🇷](translations/README-ko.md) [🇨🇳](translations/README-chn.md) [🇧🇷](translations/README-pt-br.md) [🇮🇳](translations/README-hi.md)
+
+
+## Overview
+
+LitmusChaos is an open source Chaos Engineering platform that enables teams to identify weaknesses & potential outages in infrastructure by
+inducing chaos tests in a controlled way. Developers & SREs can practice Chaos Engineering with LitmusChaos as it is easy to use, built on modern
+Chaos Engineering principles, and community-driven. It is 100% open source & a CNCF project.
+
+LitmusChaos takes a cloud-native approach to create, manage and monitor chaos. The platform itself runs as a set of microservices and uses Kubernetes
+custom resources (CRs) to define the chaos intent, as well as the steady state hypothesis.
+
+At a high level, Litmus comprises:
+
+- **Chaos Control Plane**: A centralized chaos management tool called chaos-center, which helps construct, schedule and visualize Litmus chaos workflows
+- **Chaos Execution Plane Services**: Made up of a chaos agent and multiple operators that execute & monitor the experiment within a defined
+ target Kubernetes environment.
+
+
+
+At the heart of the platform are the following chaos custom resources:
+
+- **ChaosExperiment**: A resource to group the configuration parameters of a particular fault. ChaosExperiment CRs are essentially installable templates
+ that describe the library carrying out the fault, indicate permissions needed to run it & the defaults it will operate with. Through the ChaosExperiment, Litmus supports BYOC (bring-your-own-chaos) that helps integrate (optional) any third-party tooling to perform the fault injection.
+
+- **ChaosEngine**: A resource to link a Kubernetes application workload/service, node or an infra component to a fault described by the ChaosExperiment.
+ It also provides options to tune the run properties and specify the steady state validation constraints using 'probes'. ChaosEngine is watched by the
+ Chaos-Operator, which reconciles it (triggers experiment execution) via runners.
+
+The ChaosExperiment & ChaosEngine CRs are embedded within a Workflow object that can string together one or more experiments in a desired order.
+
+- **ChaosResult**: A resource to hold the results of the experiment run. It provides details of the success of each validation constraint,
+ the revert/rollback status of the fault as well as a verdict. The Chaos-exporter reads the results and exposes information as prometheus metrics.
+ ChaosResults are especially useful during automated runs.
+
+ChaosExperiment CRs are hosted on hub.litmuschaos.io. It is a central hub where the
+application developers or vendors share their chaos experiments so that their users can use them to increase the resilience of the applications
+in production.
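+
+To make the relationship between these CRs concrete, a minimal ChaosEngine linking a target workload to a `pod-delete` ChaosExperiment might look like the following (a sketch only; the namespace, labels, and service account name are hypothetical):
+
+```yaml
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+  name: nginx-chaos              # hypothetical engine name
+  namespace: default
+spec:
+  appinfo:
+    appns: default
+    applabel: "app=nginx"        # hypothetical target workload
+    appkind: deployment
+  engineState: active            # set to 'stop' to halt the experiment
+  chaosServiceAccount: pod-delete-sa
+  experiments:
+    - name: pod-delete           # references an installed ChaosExperiment CR
+      spec:
+        components:
+          env:
+            - name: TOTAL_CHAOS_DURATION
+              value: "30"        # seconds
+```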
+
+## Use cases
+
+- **For Developers**: To run chaos experiments during application development as an extension of unit testing or integration testing.
+- **For CI/CD pipeline builders**: To run chaos as a pipeline stage to find bugs when the application is subjected to fail paths in a pipeline.
+- **For SREs**: To plan and schedule chaos experiments into the application and/or surrounding infrastructure. This practice identifies the weaknesses
+ in the deployment system and increases resilience.
+
+## Getting Started with Litmus
+
+To get started, check out the Litmus Docs and specifically the Installation section of the Getting Started with Litmus page.
+
+## Contributing to Chaos Hub
+
+Check out the Contributing Guidelines for the Chaos Hub.
+
+
+## Community
+
+### Community Resources:
+
+Feel free to reach out if you have any queries, concerns, or feature requests.
+
+- Give us a star ⭐️ - If you are using LitmusChaos or think it is an interesting project, we would love a star ❤️
+
+- Follow LitmusChaos on Twitter [@LitmusChaos](https://twitter.com/LitmusChaos).
+
+- Subscribe to the [LitmusChaos YouTube channel](https://www.youtube.com/channel/UCa57PMqmz_j0wnteRa9nCaw) for regular updates & meeting recordings.
+
+- Join our [Slack Community](https://slack.litmuschaos.io/) to meet community members and put forward your questions & opinions: join the #litmus channel on the [Kubernetes Slack](https://slack.k8s.io/).
+
+### Community Meetings
+
+1. Community Meetings
+- These are hosted on the third Wednesday of every month at 5:30 PM GMT / 6:30 PM CEST / 10 PM IST
+- These meetings cover community updates, new feature or release announcements, and user/adopter stories. Everyone in the community is welcome to join and participate in discussions.
+
+
+2. Contributor Meetings
+- These are hosted on the second & last Thursday of every month at 2:30 PM GMT / 3:30 PM CEST / 7 PM IST
+- These meetings focus on both technical and non-technical contributions to LitmusChaos. Maintainers, current contributors, and aspiring contributors are encouraged to join to discuss issues, fixes, enhancements, and future contributions.
+
+Fill out the [LitmusChaos Meetings invite form](https://forms.gle/qawjtFUeL431jmpv7) to get your Calendar invite!
+
+- [Sync Up Agenda & Meeting Notes](https://hackmd.io/a4Zu_sH4TZGeih-xCimi3Q)
+- [Release Tracker](https://github.com/litmuschaos/litmus/milestones)
+
+### Videos
+
+- [What if Your System Experiences an Outage? Let's Build a Resilient Systems with Chaos Engineering](https://www.youtube.com/watch?v=3mjGEh905u4&t=1s) @ [CNCF](https://www.youtube.com/@cncf)
+- [Enhancing Cyber Resilience Through Zero Trust Chaos Experiments in Cloud Native Environments](https://youtu.be/BelNIk4Bkng) @ [CNCF](https://www.youtube.com/@cncf)
+- [LitmusChaos, with Karthik Satchitanand](https://www.youtube.com/watch?v=ks2R57hhFZk&t=503s) @ [The Kubernetes Podcast from Google](https://www.youtube.com/@TheKubernetesPodcast)
+- [Cultural Shifts: Fostering a Chaos First Mindset in Platform Engineering](https://www.youtube.com/watch?v=WUXFKxgZRsk) @ [CNCF](https://www.youtube.com/@cncf)
+- [Fire in the Cloud: Bringing Managed Services Under the Ambit of Cloud-Native Chaos Engineering](https://www.youtube.com/watch?v=xCDQp5E3VUs) @ [CNCF](https://www.youtube.com/@cncf)
+- [Security Controls for Safe Chaos Experimentation](https://www.youtube.com/watch?v=whCkvLKAw74) @ [CNCF](https://www.youtube.com/@cncf)
+- [Chaos Engineering For Hybrid Targets With LitmusChaos](https://www.youtube.com/watch?v=BZL-ngvbpbU&t=751s) @ [CNCF](https://www.youtube.com/@cncf)
+- [Cloud Native Live: Litmus Chaos Engine and a microservices demo app](https://youtu.be/hOghvd9qCzI)
+- [Chaos Engineering hands-on - An SRE ideating Chaos Experiments and using LitmusChaos | July 2022](https://youtu.be/_x_7SiesjF0)
+- [Achieve Digital Product Resiliency with Chaos Engineering](https://youtu.be/PQrmBHgk0ps)
+- [Case Study: Bringing Chaos Engineering to the Cloud Native Developers](https://youtu.be/KSl-oKk6TPA) @ [CNCF](https://www.youtube.com/@cncf)
+- [Cloud Native Chaos Engineering with LitmusChaos](https://www.youtube.com/watch?v=ItUUqejdXr0) @ [CNCF](https://www.youtube.com/@cncf)
+- [How to create Chaos Experiments with Litmus | Litmus Chaos tutorial](https://youtu.be/mwu5eLgUKq4) @ [Is it Observable](https://www.youtube.com/c/IsitObservable)
+- [Cloud Native Chaos Engineering Preview With LitmusChaos](https://youtu.be/pMWqhS-F3tQ)
+- [Get started with Chaos Engineering with Litmus](https://youtu.be/5CI8d-SKBfc) @ [Containers from the Couch](https://www.youtube.com/c/ContainersfromtheCouch)
+- [Litmus 2 - Chaos Engineering Meets Argo Workflows](https://youtu.be/B8DfYnDh2F4) @ [DevOps Toolkit](https://youtube.com/c/devopstoolkit)
+- [Hands-on with Litmus 2.0 | Rawkode Live](https://youtu.be/D0t3emVLLko) @ [Rawkode Academy](https://www.youtube.com/channel/UCrber_mFvp_FEF7D9u8PDEA)
+- [Introducing LitmusChaos 2.0 / Dok Talks #74](https://youtu.be/97BiCNtJbDw) @ [DoK.community](https://www.youtube.com/channel/UCUnXJbHQ89R2uSfKsqQwGvQ)
+- [Introduction to Cloud Native Chaos Engineering](https://youtu.be/LK0oDLQE4S8) @ [Kunal Kushwaha](https://www.youtube.com/channel/UCBGOUQHNNtNGcGzVq5rIXjw)
+- [#EveryoneCanContribute cafe: Litmus - Chaos Engineering for your Kubernetes](https://youtu.be/IiyrEiK4stQ) @ [GitLab Unfiltered](https://www.youtube.com/channel/UCMtZ0sc1HHNtGGWZFDRTh5A)
+- [Litmus - Chaos Engineering for Kubernetes (CNCFMinutes 9)](https://youtu.be/rDQ9XKbSJIc) @ [Saiyam Pathak](https://www.youtube.com/channel/UCi-1nnN0eC9nRleXdZA6ncg)
+- [Chaos Engineering with Litmus Chaos by Prithvi Raj || HACKODISHA Workshop](https://youtu.be/eyAG0svCsQA) @ [Webwiz](https://www.youtube.com/channel/UC9yM_PkV0QIIsPA3qPrp)
+
+[And More....](https://www.youtube.com/channel/UCa57PMqmz_j0wnteRa9nCaw)
+
+### Blogs
+
+- CNCF: [Introduction to LitmusChaos](https://www.cncf.io/blog/2020/08/28/introduction-to-litmuschaos/)
+- Hackernoon: [Manage and Monitor Chaos via Litmus Custom Resources](https://hackernoon.com/solid-tips-on-how-to-manage-and-monitor-chaos-via-litmus-custom-resources-5g1s33m9)
+- [Observability Considerations in Chaos: The Metrics Story](https://dev.to/ksatchit/observability-considerations-in-chaos-the-metrics-story-6cb)
+
+Community Blogs:
+
+- LiveWyer: [LitmusChaos Showcase: Chaos Experiments in a Helm Chart Test Suite](https://livewyer.io/blog/2021/03/22/litmuschaos-showcase-chaos-experiments-in-a-helm-chart-test-suite/)
+- Jessica Cherry: [Test Kubernetes cluster failures and experiments in your terminal](https://opensource.com/article/21/6/kubernetes-litmus-chaos)
+- Yang Chuansheng (KubeSphere): [Deploying Litmus to Kubernetes with KubeSphere to run chaos experiments (Chinese)](https://kubesphere.io/zh/blogs/litmus-kubesphere/)
+- Saiyam Pathak (Civo): [Chaos Experiments on Kubernetes using Litmus to ensure your cluster is production ready](https://www.civo.com/learn/chaos-engineering-kubernetes-litmus)
+- Andreas Krivas (Container Solutions): [Comparing Chaos Engineering Tools for Kubernetes Workloads](https://blog.container-solutions.com/comparing-chaos-engineering-tools)
+- Akram Riahi (WeScale): [Chaos Engineering: Litmus from every angle (French)](https://blog.wescale.fr/2021/03/11/chaos-engineering-litmus-sous-tous-les-angles/)
+- Prashanto Priyanshu (LensKart): [Lenskart’s approach to Chaos Engineering-Part 2](https://blog.lenskart.com/lenskarts-approach-to-chaos-engineering-part-2-6290e4f3a74e)
+- DevsDay.ru (Russian): [LitmusChaos at Kubecon EU '21](https://devsday.ru/blog/details/40746)
+
+
+## Adopters
+
+Check out the Adopters of LitmusChaos
+
+(_Send a PR to the above page if you are using Litmus in your chaos engineering practice_)
+
+## License
+
+Litmus is licensed under the Apache License, Version 2.0. See [LICENSE](./LICENSE) for the full license text. Some of the projects used by Litmus may be governed by different licenses; please refer to their specific license terms.
+
+[](https://app.fossa.io/projects/git%2Bgithub.com%2Flitmuschaos%2Flitmus?ref=badge_large)
+
+LitmusChaos is a CNCF project.
+
+[](https://landscape.cncf.io/?selected=litmus)
+
+## Important Links
+
+
+- Litmus Docs
+- CNCF Landscape
+
diff --git a/data/readmes/logging-operator-kube-logging-621.md b/data/readmes/logging-operator-kube-logging-621.md
new file mode 100644
index 0000000..3b70bf8
--- /dev/null
+++ b/data/readmes/logging-operator-kube-logging-621.md
@@ -0,0 +1,117 @@
+# Logging Operator (Kube Logging) - README (6.2.1)
+
+**Repository**: https://github.com/kube-logging/logging-operator
+**Version**: 6.2.1
+
+---
+
+
+
+
+# Logging operator
+
+The Logging operator is now a [CNCF Sandbox](https://www.cncf.io/sandbox-projects/) project.
+
+The Logging operator solves your logging-related problems in Kubernetes environments by automating the deployment and configuration of a Kubernetes logging pipeline.
+
+1. The operator deploys and configures a log collector (currently a Fluent Bit DaemonSet) on every node to collect container and application logs from the node file system.
+1. Fluent Bit queries the Kubernetes API and enriches the logs with metadata about the pods, and transfers both the logs and the metadata to a log forwarder instance.
+1. The log forwarder instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. The Logging operator supports Fluentd and syslog-ng as log forwarders.
+
+Your logs are always transferred on authenticated and encrypted channels.
+
+## What is this operator for?
+
+This operator helps you bundle logging information with your applications: you can describe the behavior of your application in its charts, the Logging operator does the rest.
+
+
+
+## Feature highlights
+
+- [x] Namespace isolation
+- [x] Native Kubernetes label selectors
+- [x] Secure communication (TLS)
+- [x] Configuration validation
+- [x] Multiple flow support (multiply logs for different transformations)
+- [x] Multiple output support (store the same logs in multiple storage backends: S3, GCS, ES, Loki, and more)
+- [x] Multiple logging system support (multiple fluentd, fluent-bit deployment on the same cluster)
+
+## Architecture
+
+The Logging operator manages the log collectors and log forwarders of your logging infrastructure, and the routing rules that specify where you want to send your different log messages.
+
+The **log collectors** are endpoint agents that collect the logs of your Kubernetes nodes and send them to the log forwarders. Logging operator currently uses Fluent Bit as log collector agents.
+
+The **log forwarder** instance receives, filters, and transforms the incoming logs, and transfers them to one or more destination outputs. The Logging operator supports Fluentd and syslog-ng as log forwarders. Which log forwarder is best for you depends on your logging requirements.
+
+You can filter and process the incoming log messages using the **flow** custom resource of the log forwarder to route them to the appropriate **output**. The outputs are the destinations where you want to send your log messages, for example, Elasticsearch, or an Amazon S3 bucket. You can also define cluster-wide outputs and flows, for example, to use a centralized output that namespaced users can reference but cannot modify. Note that flows and outputs are specific to the type of log forwarder you use (Fluentd or syslog-ng).
+
+You can configure the Logging operator using the following Custom Resource Definitions.
+
+- [Logging](https://kube-logging.github.io/docs/logging-infrastructure/logging/) - The `Logging` resource defines the logging infrastructure (the log collectors and forwarders) for your cluster that collects and transports your log messages. It also contains configurations for Fluent Bit, Fluentd, and syslog-ng.
+- CRDs for Fluentd:
+ - [Output](https://kube-logging.github.io/docs/configuration/output/) - Defines a Fluentd Output for a logging flow, where the log messages are sent using Fluentd. This is a namespaced resource. See also `ClusterOutput`. To configure syslog-ng outputs, see `SyslogNGOutput`.
+ - [Flow](https://kube-logging.github.io/docs/configuration/flow/) - Defines a Fluentd logging flow using `filters` and `outputs`. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also `ClusterFlow`. To configure syslog-ng flows, see `SyslogNGFlow`.
+ - [ClusterOutput](https://kube-logging.github.io/docs/configuration/output/) - Defines a Fluentd output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the `controlNamespace` only unless `allowClusterResourcesFromAllNamespaces` is set to true.
+ - [ClusterFlow](https://kube-logging.github.io/docs/configuration/flow/) - Defines a Fluentd logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the `controlNamespace` only unless `allowClusterResourcesFromAllNamespaces` is set to true. To configure syslog-ng clusterflows, see `SyslogNGClusterFlow`.
+- CRDs for syslog-ng (these resources work like their Fluentd counterparts, but are tailored to the features available via syslog-ng):
+  - [SyslogNGOutput](https://kube-logging.github.io/docs/configuration/output/#syslogngoutput) - Defines a syslog-ng output for a logging flow, where the log messages are sent using syslog-ng. This is a namespaced resource. See also `SyslogNGClusterOutput`. To configure Fluentd outputs, see `Output`.
+ - [SyslogNGFlow](https://kube-logging.github.io/docs/configuration/flow/#syslogngflow) - Defines a syslog-ng logging flow using `filters` and `outputs`. Basically, the flow routes the selected log messages to the specified outputs. This is a namespaced resource. See also `SyslogNGClusterFlow`. To configure Fluentd flows, see `flow`.
+ - [SyslogNGClusterOutput](https://kube-logging.github.io/docs/configuration/output/#syslogngoutput) - Defines a syslog-ng output that is available from all flows and clusterflows. The operator evaluates clusteroutputs in the `controlNamespace` only unless `allowClusterResourcesFromAllNamespaces` is set to true.
+ - [SyslogNGClusterFlow](https://kube-logging.github.io/docs/configuration/flow/#syslogngflow) - Defines a syslog-ng logging flow that collects logs from all namespaces by default. The operator evaluates clusterflows in the `controlNamespace` only unless `allowClusterResourcesFromAllNamespaces` is set to true. To configure Fluentd clusterflows, see `clusterflow`.
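+
+To make the Flow/Output pairing concrete, here is a hedged sketch of a namespaced Fluentd `Flow` that selects logs by pod label and routes them to a local output. The resource names are placeholders, and the `logging.banzaicloud.io/v1beta1` API group should be verified against the CRDs installed in your cluster.
+
+```sh
+# Sketch: route logs from pods labelled app=my-app to a namespaced Output.
+# All names below are illustrative placeholders.
+kubectl apply -f - <<'EOF'
+apiVersion: logging.banzaicloud.io/v1beta1
+kind: Flow
+metadata:
+  name: my-app-flow
+  namespace: my-app
+spec:
+  match:
+    - select:
+        labels:
+          app: my-app
+  localOutputRefs:
+    - my-app-output   # an Output in the same namespace, defined separately
+EOF
+```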
+
+See the [detailed CRDs documentation](https://kube-logging.github.io/docs/configuration/crds/).
+
+
+
+## Quickstart
+
+[](https://asciinema.org/a/315998)
+
+Follow these [quickstart guides](https://kube-logging.github.io/docs/quickstarts/) to try out the Logging operator!
+
+### Install
+
+Deploy Logging operator with our [Helm chart](https://kube-logging.github.io/docs/install/#deploy-logging-operator-with-helm).
+
+> Caution: The **master branch** is under heavy development. Use [releases](https://github.com/kube-logging/logging-operator/releases) instead of the master branch to get stable software.
+
+## Support
+
+If you encounter problems while using the Logging operator the documentation does not address, [open an issue](https://github.com/kube-logging/logging-operator/issues) or talk to us on the [#logging-operator Discord channel](https://discord.gg/eAcqmAVU2u).
+
+## Documentation
+
+ You can find the complete documentation on the [Logging operator documentation page](https://kube-logging.github.io/docs/) :blue_book:
+
+## Contributing
+
+If you find this project useful, help us:
+
+- Support the development of this project and star this repo! :star:
+- If you use the Logging operator in a production environment, add yourself to the list of production [adopters](https://github.com/kube-logging/logging-operator/blob/master/ADOPTERS.md). :metal:
+- Help new users with issues they may encounter :muscle:
+- Send a pull request with your new features and bug fixes :rocket:
+
+Please read the [Organisation's Code of Conduct](https://github.com/kube-logging/.github/blob/main/CODE_OF_CONDUCT.md)!
+
+*For more information, read the [developer documentation](https://kube-logging.github.io/docs/developers)*.
+
+## License
+
+Copyright (c) 2021-2023 [Cisco Systems, Inc. and its affiliates](https://cisco.com)
+Copyright (c) 2017-2020 [Banzai Cloud, Inc.](https://banzaicloud.com)
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff --git a/data/readmes/logstash-v922.md b/data/readmes/logstash-v922.md
new file mode 100644
index 0000000..e1e47cd
--- /dev/null
+++ b/data/readmes/logstash-v922.md
@@ -0,0 +1,296 @@
+# Logstash - README (v9.2.2)
+
+**Repository**: https://github.com/elastic/logstash
+**Version**: v9.2.2
+
+---
+
+# Logstash
+
+Logstash is part of the [Elastic Stack](https://www.elastic.co/products) along with Beats, Elasticsearch and Kibana. Logstash is a server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite "stash" (ours is Elasticsearch, naturally). Logstash has over 200 plugins, and you can write your own very easily as well.
+
+
+## Documentation and Getting Started
+
+You can find the documentation and getting started guides for Logstash
+on the [elastic.co site](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html)
+
+For information about building the documentation, see the README in https://github.com/elastic/docs
+
+## Downloads
+
+You can download officially released Logstash binaries, as well as debian/rpm packages for the
+supported platforms, from [downloads page](https://www.elastic.co/downloads/logstash).
+
+## Need Help?
+
+- [Logstash Forum](https://discuss.elastic.co/c/logstash)
+- [Logstash Documentation](https://www.elastic.co/guide/en/logstash/current/index.html)
+- [Logstash Product Information](https://www.elastic.co/products/logstash)
+- [Elastic Support](https://www.elastic.co/subscriptions)
+
+## Logstash Plugins
+
+Logstash plugins are hosted in separate repositories under the [logstash-plugins](https://github.com/logstash-plugins) github organization. Each plugin is a self-contained Ruby gem which gets published to RubyGems.org.
+
+### Writing your own Plugin
+
+Logstash is known for its extensibility. There are hundreds of plugins for Logstash and you can write your own very easily! For more info on developing and testing these plugins, please see the [working with plugins section](https://www.elastic.co/guide/en/logstash/current/contributing-to-logstash.html)
+
+### Plugin Issues and Pull Requests
+
+**Please open new issues and pull requests for plugins under its own repository**
+
+For example, if you have to report an issue/enhancement for the Elasticsearch output, please do so [here](https://github.com/logstash-plugins/logstash-output-elasticsearch/issues).
+
+Logstash core will continue to exist under this repository and all related issues and pull requests can be submitted here.
+
+## Developing Logstash Core
+
+### Prerequisites
+
+* Install JDK version 11 or 17. Make sure to set the `JAVA_HOME` environment variable to the path to your JDK installation directory. For example `set JAVA_HOME=`
+* Install JRuby 9.2.x. It is recommended to use a Ruby version manager such as [RVM](https://rvm.io/) or [rbenv](https://github.com/sstephenson/rbenv).
+* Install the `rake` and `bundler` tools using `gem install rake` and `gem install bundler` respectively.
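+
+On Unix-like shells, the prerequisite setup above might look like the following sketch (the JDK path is a placeholder for your own installation):
+
+```sh
+# Point JAVA_HOME at your JDK (the path below is only an example)
+export JAVA_HOME=/usr/lib/jvm/jdk-17
+export PATH="$JAVA_HOME/bin:$PATH"
+
+# Install the build tooling into the active (JRuby) Ruby
+gem install rake bundler
+```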
+
+### RVM install (optional)
+
+If you prefer to use rvm (ruby version manager) to manage Ruby versions on your machine, follow these directions. In the Logstash folder:
+
+```sh
+gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
+\curl -sSL https://get.rvm.io | bash -s stable --ruby=$(cat .ruby-version)
+```
+
+### Check Ruby version
+
+Before you proceed, please check your Ruby version by running:
+
+```sh
+$ ruby -v
+```
+
+The printed version should be the same as in the `.ruby-version` file.
+
+### Building Logstash
+
+The Logstash project includes the source code for all of Logstash, including the Elastic-Licensed X-Pack features and functions; to run Logstash from source using only the OSS-licensed code, export the `OSS` environment variable with a value of `true`:
+
+``` sh
+export OSS=true
+```
+
+* Set up the location of the source code to build
+
+``` sh
+export LOGSTASH_SOURCE=1
+export LOGSTASH_PATH=/YOUR/LOGSTASH/DIRECTORY
+```
+
+#### Install dependencies with `gradle` **(recommended)**[^1]
+
+* Install development dependencies
+```sh
+./gradlew installDevelopmentGems
+```
+
+* Install default plugins and other dependencies
+
+```sh
+./gradlew installDefaultGems
+```
+
+### Verify the installation
+
+To verify your environment, run the following to start Logstash and send your first event:
+
+```sh
+bin/logstash -e 'input { stdin { } } output { stdout {} }'
+```
+
+This should start Logstash with a stdin input, waiting for you to enter an event:
+
+```sh
+hello world
+2016-11-11T01:22:14.405+0000 0.0.0.0 hello world
+```
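+
+A filter stage can be added to the same one-liner; for example, this sketch (the field name is an arbitrary choice) tags each event before printing it with the `rubydebug` codec:
+
+```sh
+# Tag each stdin event with an extra field before printing it.
+# "environment" is an illustrative field name, not a Logstash convention.
+bin/logstash -e '
+  input  { stdin { } }
+  filter { mutate { add_field => { "environment" => "dev" } } }
+  output { stdout { codec => rubydebug } }
+'
+```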
+
+**Advanced: Drip Launcher**
+
+[Drip](https://github.com/ninjudd/drip) is a tool that solves the slow JVM startup problem while developing Logstash. The drip script is intended to be a drop-in replacement for the java command. We recommend using drip during development, in particular for running tests. Using drip, the first invocation of a command will not be faster but the subsequent commands will be swift.
+
+To tell logstash to use drip, set the environment variable `` JAVACMD=`which drip` ``.
+
+Example (but see the *Testing* section below before running rspec for the first time):
+
+ JAVACMD=`which drip` bin/rspec
+
+**Caveats**
+
+Drip does not work with STDIN. You cannot use drip for running configs which use the stdin plugin.
+
+## Building Logstash Documentation
+
+To build the Logstash Reference (open source content only) on your local
+machine, clone the following repos:
+
+[logstash](https://github.com/elastic/logstash) - contains main docs about core features
+
+[logstash-docs](https://github.com/elastic/logstash-docs) - contains generated plugin docs
+
+[docs](https://github.com/elastic/docs) - contains doc build files
+
+Make sure you have the same branch checked out in `logstash` and `logstash-docs`.
+Check out `master` in the `docs` repo.
+
+Run the doc build script from within the `docs` repo. For example:
+
+```
+./build_docs.pl --doc ../logstash/docs/index.asciidoc --chunk=1 -open
+```
+
+## Testing
+
+Most of the unit tests in Logstash are written using [rspec](http://rspec.info/) for the Ruby parts. For the Java parts, we use [junit](https://junit.org). For testing you can use the *test* `rake` tasks and the `bin/rspec` command; see the instructions below:
+
+### Core tests
+
+1- To run the core tests you can use the Gradle task:
+
+ ./gradlew test
+
+ or use the `rspec` tool to run all tests or run a specific test:
+
+ bin/rspec
+ bin/rspec spec/foo/bar_spec.rb
+
+ Note that before running the `rspec` command for the first time you need to set up the RSpec test dependencies by running:
+
+ ./gradlew bootstrap
+
+2- To run the subset of tests covering the Java codebase only run:
+
+ ./gradlew javaTests
+
+3- To execute the complete test-suite including the integration tests run:
+
+ ./gradlew check
+
+4- To execute a single Ruby test run:
+
+ SPEC_OPTS="-fd -P logstash-core/spec/logstash/api/commands/default_metadata_spec.rb" ./gradlew :logstash-core:rubyTests --tests org.logstash.RSpecTests
+
+5- To execute single spec for integration test, run:
+
+ ./gradlew integrationTests -PrubyIntegrationSpecs=specs/slowlog_spec.rb
+
+Sometimes you might find a change to a piece of Logstash code causes a test to hang. These can be hard to debug.
+
+If you set `LS_JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"` you can connect to a running Logstash with your IDEs debugger which can be a great way of finding the issue.
+
+### Plugins tests
+
+To run the tests of all currently installed plugins:
+
+ rake test:plugins
+
+You can install the default set of plugins included in the logstash package:
+
+ rake test:install-default
+
+---
+Note that if a plugin is installed using the plugin manager (`bin/logstash-plugin install ...`), do not forget to also install the plugin's development dependencies using the following command after the plugin installation:
+
+ bin/logstash-plugin install --development
+
+## Building Artifacts
+
+Built artifacts will be placed in the `LS_HOME/build` directory; the build will create this directory if it is not already present.
+
+You can build a Logstash snapshot package as a tarball or a zip file:
+
+```sh
+./gradlew assembleTarDistribution
+./gradlew assembleZipDistribution
+```
+
+OSS-only artifacts can similarly be built with their own gradle tasks:
+```sh
+./gradlew assembleOssTarDistribution
+./gradlew assembleOssZipDistribution
+
+```
+
+You can also build .rpm and .deb, but the [fpm](https://github.com/jordansissel/fpm) tool is required.
+
+```sh
+rake artifact:rpm
+rake artifact:deb
+```
+
+and:
+
+```sh
+rake artifact:rpm_oss
+rake artifact:deb_oss
+```
+
+## Using a Custom JRuby Distribution
+
+If you want the build to use a custom JRuby you can do so by setting a path to a custom
+JRuby distribution's source root via the `custom.jruby.path` Gradle property.
+
+E.g.
+
+```sh
+./gradlew clean test -Pcustom.jruby.path="/path/to/jruby"
+```
+
+## Project Principles
+
+* Community: If a newbie has a bad time, it's a bug.
+* Software: Make it work, then make it right, then make it fast.
+* Technology: If it doesn't do a thing today, we can make it do it tomorrow.
+
+## Contributing
+
+All contributions are welcome: ideas, patches, documentation, bug reports,
+complaints, and even something you drew up on a napkin.
+
+Programming is not a required skill. Whatever you've seen about open source and
+maintainers or community members saying "send patches or die" - you will not
+see that here.
+
+It is more important that you are able to contribute.
+
+For more information about contributing, see the
+[CONTRIBUTING](./CONTRIBUTING.md) file.
+
+## Footnotes
+
+[^1]: Use bundle instead of gradle to install dependencies
+
+ #### Alternatively, instead of using `gradle` you can also use `bundle`:
+
+ * Install development dependencies
+
+ ```sh
+ bundle config set --local path vendor/bundle
+ bundle install
+ ```
+
+ * Bootstrap the environment:
+
+ ```sh
+ rake bootstrap
+ ```
+
+ * You can then use `bin/logstash` to start Logstash, but there are no plugins installed. To install default plugins, you can run:
+
+ ```sh
+ rake plugin:install-default
+ ```
+
+ This will install the 80+ default plugins which makes Logstash ready to connect to multiple data sources, perform transformations and send the results to Elasticsearch and other destinations.
+
diff --git a/data/readmes/loki-operatorv090.md b/data/readmes/loki-operatorv090.md
new file mode 100644
index 0000000..45feb21
--- /dev/null
+++ b/data/readmes/loki-operatorv090.md
@@ -0,0 +1,168 @@
+# Loki - README (operator/v0.9.0)
+
+**Repository**: https://github.com/grafana/loki
+**Version**: operator/v0.9.0
+
+---
+
+
+
+
+
+
+[](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:loki)
+
+# Loki: like Prometheus, but for logs.
+
+Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by [Prometheus](https://prometheus.io/).
+It is designed to be very cost effective and easy to operate.
+It does not index the contents of the logs, but rather a set of labels for each log stream.
+
+Compared to other log aggregation systems, Loki:
+
+- does not do full text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
+- indexes and groups log streams using the same labels you’re already using with Prometheus, enabling you to seamlessly switch between metrics and logs.
+- is an especially good fit for storing [Kubernetes](https://kubernetes.io/) Pod logs. Metadata such as Pod labels is automatically scraped and indexed.
+- has native support in Grafana (needs Grafana v6.0).
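+
+The label-based model carries over to querying: LogQL selects streams by their labels and then filters the log lines. A minimal sketch using LogCLI (the address and labels are placeholders):
+
+```sh
+# Point LogCLI at a Loki instance and run a LogQL query:
+# select streams by label, then keep only lines containing "error".
+export LOKI_ADDR=http://localhost:3100
+logcli query '{app="nginx"} |= "error"' --limit 20
+```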
+
+A Loki-based logging stack consists of 3 components:
+
+- [Alloy](https://github.com/grafana/alloy) is the agent, responsible for gathering logs and sending them to Loki.
+- [Loki](https://github.com/grafana/loki) is the main service, responsible for storing logs and processing queries.
+- [Grafana](https://github.com/grafana/grafana) for querying and displaying the logs.
+
+**Note that Alloy has replaced Promtail in the stack: Promtail is considered feature complete, and future development for log collection will happen in [Grafana Alloy](https://github.com/grafana/alloy).**
+
+Loki is like Prometheus, but for logs: we prefer a multidimensional label-based approach to indexing, and want a single-binary, easy to operate system with no dependencies.
+Loki differs from Prometheus by focusing on logs instead of metrics, and delivering logs via push, instead of pull.
+
+## Getting started
+
+* [Installing Loki](https://grafana.com/docs/loki/latest/installation/)
+* [Installing Alloy](https://grafana.com/docs/loki/latest/send-data/alloy/)
+* [Getting Started](https://grafana.com/docs/loki/latest/get-started/)
+
+## Upgrading
+
+* [Upgrading Loki](https://grafana.com/docs/loki/latest/upgrading/)
+
+## Documentation
+
+* [Latest release](https://grafana.com/docs/loki/latest/)
+* [Upcoming release](https://grafana.com/docs/loki/next/), at the tip of the main branch
+
+Commonly used sections:
+
+- [API documentation](https://grafana.com/docs/loki/latest/api/) for getting logs into Loki.
+- [Labels](https://grafana.com/docs/loki/latest/getting-started/labels/)
+- [Operations](https://grafana.com/docs/loki/latest/operations/)
+- [Promtail](https://grafana.com/docs/loki/latest/clients/promtail/) is an agent which tails log files and pushes them to Loki.
+- [Pipelines](https://grafana.com/docs/loki/latest/clients/promtail/pipelines/) details the log processing pipeline.
+- [Docker Driver Client](https://grafana.com/docs/loki/latest/clients/docker-driver/) is a Docker plugin to send logs directly to Loki from Docker containers.
+- [LogCLI](https://grafana.com/docs/loki/latest/query/logcli/) provides a command-line interface for querying logs.
+- [Loki Canary](https://grafana.com/docs/loki/latest/operations/loki-canary/) monitors your Loki installation for missing logs.
+- [Troubleshooting](https://grafana.com/docs/loki/latest/operations/troubleshooting/) presents help dealing with error messages.
+- [Loki in Grafana](https://grafana.com/docs/loki/latest/operations/grafana/) describes how to set up a Loki datasource in Grafana.
+
+## Getting Help
+
+If you have any questions or feedback regarding Loki:
+
+- Search existing threads in the Grafana Labs community forum for Loki: [https://community.grafana.com](https://community.grafana.com/c/grafana-loki/)
+- Ask a question on the Loki Slack channel. To invite yourself to the Grafana Slack, visit [https://slack.grafana.com/](https://slack.grafana.com/) and join the #loki channel.
+- [File an issue](https://github.com/grafana/loki/issues/new) for bugs, issues and feature suggestions.
+- Send an email to [lokiproject@googlegroups.com](mailto:lokiproject@googlegroups.com), or use the [web interface](https://groups.google.com/forum/#!forum/lokiproject).
+- UI issues should be filed directly in [Grafana](https://github.com/grafana/grafana/issues/new).
+
+Your feedback is always welcome.
+
+## Further Reading
+
+- The original [design doc](https://docs.google.com/document/d/11tjK_lvp1-SVsFZjgOTr1vV3-q6vBAsZYIQ5ZeYBkyM/view) for Loki is a good source for discussion of the motivation and design decisions.
+- Callum Styan's March 2019 DevOpsDays Vancouver talk "[Grafana Loki: Log Aggregation for Incident Investigations][devopsdays19-talk]".
+- Grafana Labs blog post "[How We Designed Loki to Work Easily Both as Microservices and as Monoliths][architecture-blog]".
+- Tom Wilkie's early-2019 CNCF Paris/FOSDEM talk "[Grafana Loki: like Prometheus, but for logs][fosdem19-talk]" ([slides][fosdem19-slides], [video][fosdem19-video]).
+- David Kaltschmidt's KubeCon 2018 talk "[On the OSS Path to Full Observability with Grafana][kccna18-event]" ([slides][kccna18-slides], [video][kccna18-video]) on how Loki fits into a cloud-native environment.
+- Goutham Veeramachaneni's blog post "[Loki: Prometheus-inspired, open source logging for cloud natives](https://grafana.com/blog/2018/12/12/loki-prometheus-inspired-open-source-logging-for-cloud-natives/)" on details of the Loki architecture.
+- David Kaltschmidt's blog post "[Closer look at Grafana's user interface for Loki](https://grafana.com/blog/2019/01/02/closer-look-at-grafanas-user-interface-for-loki/)" on the ideas that went into the logging user interface.
+
+[devopsdays19-talk]: https://grafana.com/blog/2019/05/06/how-loki-correlates-metrics-and-logs--and-saves-you-money/
+[architecture-blog]: https://grafana.com/blog/2019/04/15/how-we-designed-loki-to-work-easily-both-as-microservices-and-as-monoliths/
+[fosdem19-talk]: https://fosdem.org/2019/schedule/event/loki_prometheus_for_logs/
+[fosdem19-slides]: https://speakerdeck.com/grafana/grafana-loki-like-prometheus-but-for-logs
+[fosdem19-video]: https://mirror.as35701.net/video.fosdem.org/2019/UB2.252A/loki_prometheus_for_logs.mp4
+[kccna18-event]: https://kccna18.sched.com/event/GrXC/on-the-oss-path-to-full-observability-with-grafana-david-kaltschmidt-grafana-labs
+[kccna18-slides]: https://speakerdeck.com/davkal/on-the-path-to-full-observability-with-oss-and-launch-of-loki
+[kccna18-video]: https://www.youtube.com/watch?v=U7C5SpRtK74&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU&index=346
+
+## Contributing
+
+Refer to [CONTRIBUTING.md](CONTRIBUTING.md).
+
+### Building from source
+
+Loki can be run in a single-host, no-dependencies mode using the following commands.
+
+You need an up-to-date version of [Go](https://go.dev/); we recommend using the version found in our [Makefile](https://github.com/grafana/loki/blob/main/Makefile).
+
+```bash
+# Checkout source code
+$ git clone https://github.com/grafana/loki
+$ cd loki
+
+# Build binary
+$ go build ./cmd/loki
+
+# Run executable
+$ ./loki -config.file=./cmd/loki/loki-local-config.yaml
+```
+
+Alternatively, on Unix systems you can use `make` to build the binary, which adds additional arguments to the `go build` command.
+
+```bash
+# Build binary
+$ make loki
+
+# Run executable
+$ ./cmd/loki/loki -config.file=./cmd/loki/loki-local-config.yaml
+```
+
+To build Promtail on non-Linux platforms, use the following command:
+
+```bash
+$ go build ./clients/cmd/promtail
+```
+
+On Linux, Promtail requires the systemd headers to be installed if
+Journal support is enabled. To enable Journal support, pass the
+`promtail_journal_enabled` Go build tag.
+
+On Ubuntu, build with Journal support using the following commands:
+
+```bash
+$ sudo apt install -y libsystemd-dev
+$ go build --tags=promtail_journal_enabled ./clients/cmd/promtail
+```
+
+On CentOS, build with Journal support using the following commands:
+
+```bash
+$ sudo yum install -y systemd-devel
+$ go build --tags=promtail_journal_enabled ./clients/cmd/promtail
+```
+
+Otherwise, to build Promtail without Journal support, run `go build`
+with CGO disabled:
+
+```bash
+$ CGO_ENABLED=0 go build ./clients/cmd/promtail
+```
+
+## Adopters
+
+Please see [ADOPTERS.md](ADOPTERS.md) for some of the organizations using Loki today.
+If you would like to add your organization to the list, please open a PR.
+
+## License
+
+Grafana Loki is distributed under [AGPL-3.0-only](LICENSE). For Apache-2.0 exceptions, see [LICENSING.md](LICENSING.md).
diff --git a/data/readmes/longhorn-v1101.md b/data/readmes/longhorn-v1101.md
new file mode 100644
index 0000000..495be0a
--- /dev/null
+++ b/data/readmes/longhorn-v1101.md
@@ -0,0 +1,171 @@
+# Longhorn - README (v1.10.1)
+
+**Repository**: https://github.com/longhorn/longhorn
+**Version**: v1.10.1
+
+---
+
+
+
+
+
+
+A CNCF Incubating Project. Visit longhorn.io for the full documentation.
+
+Longhorn is a distributed block storage system for Kubernetes. Longhorn is cloud-native storage built using Kubernetes and container primitives.
+
+Longhorn is lightweight, reliable, and powerful. You can install Longhorn on an existing Kubernetes cluster with one `kubectl apply` command or by using Helm charts. Once Longhorn is installed, it adds persistent volume support to the Kubernetes cluster.
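+
+Once installed, Longhorn registers a StorageClass (named `longhorn` by default), so workloads can request Longhorn-backed volumes with an ordinary PersistentVolumeClaim. A minimal sketch (the claim name and size are illustrative):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: example-pvc
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: longhorn  # StorageClass created by the Longhorn installation
+  resources:
+    requests:
+      storage: 2Gi
+```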
+
+Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Here are some notable features of Longhorn:
+
+1. Enterprise-grade distributed storage with no single point of failure
+2. Incremental snapshot of block storage
+3. Backup to secondary storage (NFSv4 or S3-compatible object storage) built on efficient change block detection
+4. Recurring snapshot and backup
+5. Automated non-disruptive upgrade. You can upgrade the entire Longhorn software stack without disrupting running volumes!
+6. Intuitive GUI dashboard
+
+You can read more technical details of Longhorn [here](https://longhorn.io/).
+
+# Releases
+
+> **NOTE**:
+> - __\*__ means the release branch is under active support and will have periodic follow-up patch releases.
+> - __Latest__ release means the version is the latest release of the newest release branch.
+> - __Stable__ release means the version is stable and has been widely adopted by users.
+> - Release EOL: One year after the first stable version. For the details, please refer to [Release Support](https://github.com/longhorn/longhorn/wiki/Release-Schedule-&-Support#release-support).
+
+https://github.com/longhorn/longhorn/releases
+
+| Release | Latest Version | Stable Versions | Release Note | Important Note | Active |
+|-----------|-----------------|-----------------------------------|----------------------------------------------------------------|--------------------------------------------------------------|--------|
+| **1.9*** | 1.9.1 | 1.9.1 | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.9.1) | [🔗](https://longhorn.io/docs/1.9.1/important-notes) | ✅ |
+| **1.8*** | 1.8.2 | 1.8.2 | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.8.2) | [🔗](https://longhorn.io/docs/1.8.2/important-notes) | ✅ |
+| **1.7*** | 1.7.3 | 1.7.3, 1.7.2, 1.7.1 | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.7.3) | [🔗](https://longhorn.io/docs/1.7.3/important-notes) | ✅ |
+| 1.6 | 1.6.4 | 1.6.4, 1.6.3, 1.6.2, 1.6.1 | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.6.4) | [🔗](https://longhorn.io/docs/1.6.4/deploy/important-notes) | |
+| 1.5 | 1.5.5 | 1.5.5, 1.5.4, 1.5.3 | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.5.5) | [🔗](https://longhorn.io/docs/1.5.5/deploy/important-notes) | |
+| 1.4 | 1.4.4 | 1.4.4, 1.4.3, 1.4.2, 1.4.1 | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.4.4) | [🔗](https://longhorn.io/docs/1.4.4/deploy/important-notes) | |
+| 1.3 | 1.3.3 | 1.3.3, 1.3.2 | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.3.3) | [🔗](https://longhorn.io/docs/1.3.3/deploy/important-notes) | |
+| 1.2 | 1.2.6 | 1.2.6, 1.2.5, 1.2.4, 1.2.3, 1.2.2 | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.2.6) | [🔗](https://longhorn.io/docs/1.2.6/deploy/important-notes) | |
+| 1.1 | 1.1.3 | 1.1.3, 1.1.2 | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.1.3) | | |
+
+# Roadmap
+
+https://github.com/longhorn/longhorn/wiki/Roadmap
+
+# Components
+
+Longhorn is 100% open-source software. Project source code is spread across several repositories:
+
+* Manager: [](https://github.com/longhorn/longhorn-manager/actions/workflows/build.yml)[](https://goreportcard.com/report/github.com/longhorn/longhorn-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-manager?ref=badge_shield)
+* Instance Manager: [](https://github.com/longhorn/longhorn-instance-manager/actions/workflows/build.yml)[](https://goreportcard.com/report/github.com/longhorn/longhorn-instance-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-instance-manager?ref=badge_shield)
+* Engine: [](https://github.com/longhorn/longhorn-engine/actions/workflows/build.yml)[](https://goreportcard.com/report/github.com/longhorn/longhorn-engine)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-engine?ref=badge_shield)
+* Share Manager: [](https://github.com/longhorn/longhorn-share-manager/actions/workflows/build.yml)[](https://goreportcard.com/report/github.com/longhorn/longhorn-share-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-share-manager?ref=badge_shield)
+* Backing Image Manager: [](https://github.com/longhorn/backing-image-manager/actions/workflows/build.yml)[](https://goreportcard.com/report/github.com/longhorn/backing-image-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Fbacking-image-manager?ref=badge_shield)
+* UI: [](https://github.com/longhorn/longhorn-ui/actions/workflows/build.yml)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-ui?ref=badge_shield)
+
+| Component | What it does | GitHub repo |
+| :----------------------------- | :--------------------------------------------------------------------- | :------------------------------------------------------------------------------------------ |
+| Longhorn Backing Image Manager | Backing image download, sync, and deletion in a disk | [longhorn/backing-image-manager](https://github.com/longhorn/backing-image-manager) |
+| Longhorn Instance Manager | Controller/replica instance lifecycle management | [longhorn/longhorn-instance-manager](https://github.com/longhorn/longhorn-instance-manager) |
+| Longhorn Manager | Longhorn orchestration, includes CSI driver for Kubernetes | [longhorn/longhorn-manager](https://github.com/longhorn/longhorn-manager) |
+| Longhorn Share Manager | NFS provisioner that exposes Longhorn volumes as ReadWriteMany volumes | [longhorn/longhorn-share-manager](https://github.com/longhorn/longhorn-share-manager) |
+| Longhorn UI | The Longhorn dashboard | [longhorn/longhorn-ui](https://github.com/longhorn/longhorn-ui) |
+
+| Library | What it does | GitHub repo |
+| :----------------------------- | :--------------------------------------------------------------------- | :------------------------------------------------------------------------------------------ |
+| Longhorn Engine | V1 Core controller/replica logic | [longhorn/longhorn-engine](https://github.com/longhorn/longhorn-engine) |
+| Longhorn SPDK Engine | V2 Core controller/replica logic | [longhorn/longhorn-spdk-engine](https://github.com/longhorn/longhorn-spdk-engine) |
+| iSCSI Helper | V1 iSCSI client and server libraries | [longhorn/go-iscsi-helper](https://github.com/longhorn/go-iscsi-helper) |
+| SPDK Helper | V2 SPDK client and server libraries | [longhorn/go-spdk-helper](https://github.com/longhorn/go-spdk-helper) |
+| Backup Store                   | Backup libraries                                                       | [longhorn/backupstore](https://github.com/longhorn/backupstore)                             |
+| Common Libraries               | Shared Go libraries used across Longhorn components                    | [longhorn/go-common-libs](https://github.com/longhorn/go-common-libs)                       |
+
+
+
+# Get Started
+
+## Requirements
+
+For the installation requirements, refer to the [Longhorn documentation.](https://longhorn.io/docs/latest/deploy/install/#installation-requirements)
+
+## Installation
+
+> **NOTE**:
+> The master branch is used for development of the upcoming feature release.
+> For an official release installation or upgrade, use one of the methods below.
+
+Longhorn can be installed on a Kubernetes cluster in several ways:
+
+- [Rancher App Marketplace](https://longhorn.io/docs/latest/deploy/install/install-with-rancher/)
+- [kubectl](https://longhorn.io/docs/latest/deploy/install/install-with-kubectl/)
+- [Helm](https://longhorn.io/docs/latest/deploy/install/install-with-helm/)
+
+## Documentation
+
+The official Longhorn documentation is [here.](https://longhorn.io/docs)
+
+# Get Involved
+
+## Discussion, Feedback
+
+For any discussions or feedback, feel free to [file a discussion](https://github.com/longhorn/longhorn/discussions).
+
+## Feature Requests, Bug Reporting
+
+If you run into any issues, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new/choose).
+We hold a weekly community meeting to review all reported issues and enhancement requests.
+
+When creating a bug issue, please upload the support bundle to the issue or send it to
+[longhorn-support-bundle](mailto:longhorn-support-bundle@suse.com).
+
+## Report Vulnerabilities
+
+If any vulnerabilities are found, please report them to [longhorn-security](mailto:longhorn-security@suse.com).
+
+# Community
+
+Longhorn is open-source software, so contributions are very welcome.
+Please read [Code of Conduct](./CODE_OF_CONDUCT.md) and [Contributing Guideline](./CONTRIBUTING.md) before contributing.
+
+Contributing code is not the only way of contributing. We value feedback very much and many of the Longhorn features originated from users' feedback.
+If you have any feedback, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new/choose).
+
+## Slack
+
+You can also provide feedback or join the conversation with other developers, users, and contributors on the [CNCF](https://slack.cncf.io/) [#longhorn](https://cloud-native.slack.com/messages/longhorn) Slack channel.
+This is a good place to learn about Longhorn, ask questions, and share your experiences.
+
+## Community Meeting and Office Hours
+
+We host a monthly community meeting on the **3rd Thursday**, alternating between *AMER/EU-friendly* and *APAC-friendly* times: **4 PM UTC** and **8 AM UTC** respectively.
+
+Everyone is welcome to join us. You can find the calendar invite [here](https://zoom-lfx.platform.linuxfoundation.org/meetings/longhorn?view=list).
+
+## Longhorn Mailing List
+
+Subscribe to our [developer](https://lists.cncf.io/g/cncf-longhorn-dev) and [users](https://lists.cncf.io/g/cncf-longhorn-users) mailing lists to stay updated on the latest news and events.
+
+You can read more about our community and its events here: https://github.com/longhorn/community
+
+# License
+
+Copyright (c) 2014-2025 The Longhorn Authors
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
+
+[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
+
+## Longhorn is a [CNCF Incubating Project](https://www.cncf.io/projects/)
+
+
diff --git a/data/readmes/loxilb-v0984.md b/data/readmes/loxilb-v0984.md
new file mode 100644
index 0000000..9a2c4fa
--- /dev/null
+++ b/data/readmes/loxilb-v0984.md
@@ -0,0 +1,202 @@
+# LoxiLB - README (v0.9.8.4)
+
+**Repository**: https://github.com/loxilb-io/loxilb
+**Version**: v0.9.8.4
+
+---
+
+
+
+
+[](https://www.loxilb.io) [](https://ebpf.io/projects#loxilb) [](https://goreportcard.com/report/github.com/loxilb-io/loxilb) [](https://www.bestpractices.dev/projects/8472)  
+ [![Info][docs-shield]][docs-url] [](https://join.slack.com/t/loxilb/shared_invite/zt-2b3xx14wg-P7WHj5C~OEON_jviF0ghcQ)
+
+## What is loxilb
+loxilb is an open-source cloud-native load-balancer based on GoLang/eBPF with the goal of achieving cross-compatibility across a wide range of on-prem, public-cloud, and hybrid K8s environments. loxilb is being developed to support the adoption of cloud-native tech in telco, mobility, and edge computing.
+
+## Kubernetes with loxilb
+
+Kubernetes defines many service constructs such as cluster-ip, node-port, load-balancer, and ingress for pod-to-pod, pod-to-service, and outside-world-to-service communication.
+
+
+
+All these services are provided by load-balancers/proxies operating at Layer4/Layer7. Since Kubernetes is highly modular, these services can be provided by different software modules. For example, kube-proxy is used by default to provide cluster-ip and node-port services. For some services like LB and Ingress, no default is usually provided.
+
+Service type load-balancer is usually provided by public cloud-provider(s) as a managed entity. But for on-prem and self-managed clusters, there are only a few good options available. Even for provider-managed K8s like EKS, there are many who would want to bring their own LB to clusters running anywhere. Additionally, Telco 5G and edge services introduce unique challenges due to the variety of exotic protocols involved, including GTP, SCTP, SRv6, SEPP, and DTLS, making seamless integration particularly challenging. loxilb provides service type load-balancer as its main use-case. loxilb can be run in-cluster or ext-to-cluster as per user need.
+
+loxilb works as an L4 load-balancer/service-proxy by default. Although L4 load-balancing provides great performance and functionality, an equally performant L7 load-balancer is also necessary in K8s for various use-cases. loxilb also supports L7 load-balancing in the form of a Kubernetes Ingress implementation enhanced with eBPF sockmap helpers. This also benefits users who need L4 and L7 load-balancing under the same hood.
+
+Additionally, loxilb also supports:
+- [x] kube-proxy replacement with eBPF(full cluster-mesh implementation for Kubernetes)
+- [x] Ingress Support
+- [x] Kubernetes Gateway API
+- [x] HA capable Egress for Kubernetes
+- [ ] Kubernetes Network Policies
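+
+As a sketch of how this looks in practice, a Service of type LoadBalancer can be handed to loxilb via its `loadBalancerClass` (the class value and annotation below follow the kube-loxilb documentation; the selector and ports are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: sctp-lb
+  annotations:
+    loxilb.io/lbmode: "fullnat"  # assumed NAT mode; see kube-loxilb docs for options
+spec:
+  type: LoadBalancer
+  loadBalancerClass: loxilb.io/loxilb
+  selector:
+    app: sctp-server
+  ports:
+    - port: 55002
+      protocol: SCTP
+      targetPort: 9999
+```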
+
+## Telco-Cloud with loxilb
+For deploying telco-cloud with cloud-native functions, loxilb can be used as an enhanced SCP(service communication proxy). SCP is a communication proxy defined by [3GPP](https://www.etsi.org/deliver/etsi_ts/129500_129599/129500/16.04.00_60/ts_129500v160400p.pdf) and aimed at telco micro-services running in cloud-native environment. Read more in this [blog](https://dev.to/nikhilmalik/5g-service-communication-proxy-with-loxilb-4242)
+
+
+Telco-cloud requires load-balancing and communication across various interfaces/standards like N2, N4, E2(ORAN), S6x, 5GLAN, GTP etc. Each of these presents its own unique challenges which loxilb aims to solve, e.g.:
+- N4 requires PFCP level session-intelligence
+- N2 requires NGAP parsing capability(Related Blogs - [Blog-1](https://www.loxilb.io/post/ngap-load-balancing-with-loxilb), [Blog-2](https://futuredon.medium.com/5g-sctp-loadbalancer-using-loxilb-b525198a9103), [Blog-3](https://medium.com/@ben0978327139/5g-sctp-loadbalancer-using-loxilb-applying-on-free5gc-b5c05bb723f0))
+- S6x requires Diameter/SCTP multi-homing LB support(Related [Blog](https://www.loxilb.io/post/k8s-introducing-sctp-multihoming-functionality-with-loxilb))
+- MEC use-cases might require UL-CL understanding(Related [Blog](https://futuredon.medium.com/5g-uplink-classifier-using-loxilb-7593a4d66f4c))
+- Hitless failover support might be essential for mission-critical applications
+- E2 might require SCTP-LB with OpenVPN bundled together
+- SIP support is needed to enable cloud-native VOIP
+- N32 requires support for Security Edge Protection Proxy(SEPP)
+
+## Why choose loxilb?
+
+- ```Performs``` much better than its competitors across various architectures
+ * [Single-Node Performance](https://loxilb-io.github.io/loxilbdocs/perf-single/)
+ * [Multi-Node Performance](https://loxilb-io.github.io/loxilbdocs/perf-multi/)
+ * [Performance on ARM](https://www.loxilb.io/post/running-loxilb-on-aws-graviton2-based-ec2-instance)
+ * [Short Demo on Performance](https://www.youtube.com/watch?v=MJXcM0x6IeQ)
+- Utilizes eBPF, which makes it ```flexible``` as well as ```customizable```
+- Advanced ```quality of service``` for workloads (per LB, per end-point or per client)
+- Works with ```any``` Kubernetes distribution/CNI
+ (k8s / k3s / k0s / kind / OpenShift + Calico, Flannel, Cilium, Weave, Multus, etc)
+- Kube-proxy replacement with loxilb allows ```simple plug-in``` with any existing/deployed pod-networking software
+- Extensive support for ```SCTP workloads``` (with multi-homing) on K8s
+- Dual stack with ```NAT66, NAT64``` support for K8s
+- K8s ```multi-cluster``` support *(planned 🚧)*
+- Runs in ```any``` cloud (public cloud / on-prem) or ```standalone``` environments
+
+
+## Overall features of loxilb
+- L4/NAT stateful loadbalancer
+ * NAT44, NAT66, NAT64 with One-ARM, FullNAT, DSR etc
+ * Support for TCP, UDP, SCTP (w/ multi-homing), QUIC, FTP, TFTP etc
+- High-availability support with BFD detection for hitless/maglev/cgnat clustering
+- Extensive and scalable end-point liveness probes for cloud-native environments
+- Stateful firewalling and IPSEC/Wireguard support
+- Optimized implementation for features like [Conntrack](https://thermalcircle.de/doku.php?id=blog:linux:connection_tracking_1_modules_and_hooks), QoS, etc
+- Full compatibility with ipvs (ipvs policies can be auto-inherited)
+- Policy-oriented L7 proxy support - HTTP/1.0, 1.1, 2.0, 3.0
+
+## Components of loxilb
+- GoLang based control plane components
+- A scalable/efficient [eBPF](https://ebpf.io/) based data-path implementation
+- Integrated goBGP based routing stack
+- A kubernetes operator [kube-loxilb](https://github.com/loxilb-io/kube-loxilb) written in Go
+- A kubernetes ingress [implementation](https://github.com/loxilb-io/loxilb-ingress)
+
+## Architectural Considerations
+- [Understanding loxilb modes and deployment in K8s with kube-loxilb](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/kube-loxilb.md)
+- [Understanding High-availability with loxilb](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/ha-deploy.md)
+
+## Getting Started
+#### loxilb as ext-cluster pod
+- [K8s : loxilb ext-mode](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/k8s-flannel-ext.md)
+- [K3s : loxilb with default flannel](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/k3s_quick_start_flannel.md)
+- [K3s : loxilb with calico](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/k3s_quick_start_calico.md)
+- [K3s : loxilb with cilium](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/quick_start_with_cilium.md)
+- [K0s : loxilb with default kube-router networking](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/k0s_quick_start.md)
+- [EKS : loxilb ext-mode](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/eks-external.md)
+
+#### loxilb as in-cluster pod
+- [K8s : loxilb in-cluster mode](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/k8s-flannel-incluster.md)
+- [K3s : loxilb in-cluster mode](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/k3s_quick_start_incluster.md)
+- [K0s : loxilb in-cluster mode](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/k0s_quick_start_incluster.md)
+- [MicroK8s : loxilb in-cluster mode](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/microk8s_quick_start_incluster.md)
+- [EKS : loxilb in-cluster mode](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/eks-incluster.md)
+- [RedHat OCP : loxilb in-cluster mode](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/rhocp-quickstart-incluster.md)
+
+#### loxilb as service-proxy (kube-proxy replacement)
+- [K3s : loxilb service-proxy with flannel](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/service-proxy-flannel.md)
+- [K3s : loxilb service-proxy with calico](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/service-proxy-calico.md)
+
+#### loxilb as Kubernetes Ingress
+- [K3s: How to run loxilb-ingress](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/loxilb-ingress.md)
+
+#### loxilb in standalone mode
+- [Run loxilb standalone](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/standalone.md)
+
+## Advanced Guides
+- [How-To : Service-group zones with loxilb](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/service-zones.md)
+- [How-To : Access end-points outside K8s](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/ext-ep.md)
+- [How-To : Deploy multi-server K3s HA with loxilb](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/k3s-multi-master.md)
+- [How-To : Deploy loxilb with multi-AZ HA support in AWS](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/aws-multi-az.md)
+- [How-To : Deploy loxilb with multi-cloud HA support](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/multi-cloud-ha.md)
+- [How-To : Deploy loxilb with ingress-nginx](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/loxilb-nginx-ingress.md)
+- [How-To : Run loxilb in-cluster with secondary networks](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/loxilb-incluster-multus.md)
+- [How-To : Kubernetes service sharding with loxilb](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/service-sharding.md)
+- [How-To : loxilb L4/L7 Load-Balancing with Kubernetes Gateway API](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/gw-api.md)
+
+## Knowledge-Base
+- [What is eBPF](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/ebpf.md)
+- [What is k8s service - load-balancer](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/lb.md)
+- [Architecture in brief](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/arch.md)
+- [Code organization](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/code.md)
+- [eBPF internals of loxilb](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/loxilbebpf.md)
+- [What are loxilb NAT Modes](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/nat.md)
+- [loxilb load-balancer algorithms](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/lb-algo.md)
+- [Manual steps to build/run](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/run.md)
+- [Debugging loxilb](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/debugging.md)
+- [loxicmd command-line tool usage](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/cmd.md)
+- [Developer's guide to loxicmd](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/cmd-dev.md)
+- [Developer's guide to loxilb API](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/api-dev.md)
+- [HTTPS guide for loxilb API](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/https.md)
+- [API Reference - loxilb web-Api](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/api.md)
+- [Performance Reports](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/perf.md)
+- [Development Roadmap](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/roadmap.md)
+- [Contribute](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/contribute.md)
+- [System Requirements](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/requirements.md)
+- [Frequently Asked Questions- FAQs](https://github.com/loxilb-io/loxilbdocs/blob/main/docs/faq.md)
+- [Blogs](https://www.loxilb.io/blog)
+- [Demo Videos](https://www.youtube.com/@loxilb697)
+
+## Community
+
+### Slack
+Join the loxilb [Slack](https://www.loxilb.io/members) channel to chat with loxilb developers and other loxilb users. This is a good place to learn about loxilb, ask questions, and work collaboratively.
+
+### General Discussion
+Feel free to post your queries in github [discussion](https://github.com/loxilb-io/loxilb/discussions). If you find any issue/bugs, please raise an [issue](https://github.com/loxilb-io/loxilb/issues) in github and members from loxilb community will be happy to help.
+
+### Community Posts
+- [5G SCTP Load Balancer using LoxiLB](https://futuredon.medium.com/5g-sctp-loadbalancer-using-loxilb-b525198a9103)
+- [5G Uplink Classifier using LoxiLB](https://futuredon.medium.com/5g-uplink-classifier-using-loxilb-7593a4d66f4c)
+- [5G SCTP Load Balancer with free5gc](https://medium.com/@ben0978327139/5g-sctp-loadbalancer-using-loxilb-applying-on-free5gc-b5c05bb723f0)
+- [K8s - Bring load balancing to Multus workloads with LoxiLB](https://cloudybytes.medium.com/k8s-bringing-load-balancing-to-multus-workloads-with-loxilb-a0746f270abe)
+- [K3s - Using LoxiLB as External Service Load Balancer](https://cloudybytes.medium.com/k3s-using-loxilb-as-external-service-lb-2ea4ce61e159)
+- [Kubernetes Services - Achieving Optimal performance is elusive](https://cloudybytes.medium.com/kubernetes-services-achieving-optimal-performance-is-elusive-5def5183c281)
+
+## CICD Workflow Status
+
+| Features(Ubuntu20.04) | Features(Ubuntu22.04)| Features(Ubuntu24.04)| Features(RedHat9)|
+|:----------|:-------------|:-------------|:-------------|
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/docker-image.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/docker-multiarch.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/docker-multiarch.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/docker-multiarch.yml) |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/basic-sanity.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/basic-sanity-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/basic-sanity-ubuntu-24.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/basic-sanity-rh9.yml) |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/tcp-sanity.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/tcp-sanity-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/tcp-sanity-ubuntu-24.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/tcp-sanity-rh9.yml) |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/udp-sanity.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/udp-sanity-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/udp-sanity-ubuntu-24.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/udp-sanity-rh9.yml) |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/sctp-sanity.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/sctp-sanity-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/sctp-sanity-ubuntu-24.yml) |[](https://github.com/loxilb-io/loxilb/actions/workflows/sctp-sanity-rh9.yml) |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/advanced-lb-sanity.yml)| [](https://github.com/loxilb-io/loxilb/actions/workflows/advanced-lb-sanity-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/advanced-lb-sanity-ubuntu-24.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/advanced-lb-sanity-rh9.yml)|
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/nat66-sanity.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/nat66-sanity-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/nat66-sanity-ubuntu-24.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/nat66-sanity-rh9.yml) |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/ipsec-sanity.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/ipsec-sanity-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/ipsec-sanity-ubuntu-24.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/ipsec-sanity-rh9.yml) |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/liveness-sanity.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/liveness-sanity-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/liveness-sanity-ubuntu-24.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/liveness-sanity-rh9.yml) |
+| | [](https://github.com/loxilb-io/loxilb/actions/workflows/scale-sanity-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/scale-sanity-ubuntu-24.yml) | |
+|[](https://github.com/loxilb-io/loxilb/actions/workflows/perf.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/perf.yml) |[](https://github.com/loxilb-io/loxilb/actions/workflows/perf.yml) | |
+
+| K8s Base Tests | K8s Adv Tests | EKS Test |
+|:-------------|:-------------|:-------------|
+|[](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-base-sanity.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/k8s-calico-ipvs.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/eks.yaml)|
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-flannel.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/k8s-calico-ipvs2.yml) | |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-flannel-ubuntu-22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/k8s-calico-ipvs3.yml) | |
+|[](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-flannel-cluster.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/k8s-calico-ipvs3-ha.yml) | |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-calico.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-flannel-incluster.yml) | |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-cilium-cluster.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-flannel-incluster-l2.yml) | |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-sctpmh.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/dual-stack.yml) | |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-sctpmh-ubuntu22.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-loxi-gwapi.yml) | |
+| [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-sctpmh-2.yml) | [](https://github.com/loxilb-io/loxilb/actions/workflows/k3s-loxi-ingress.yml) | |
+
+
+## 📚 Please check loxilb [website](https://www.loxilb.io) for more detailed info.
+
+[docs-shield]: https://img.shields.io/badge/info-docs-blue
+[docs-url]: https://loxilb-io.github.io/loxilbdocs/
+[slack-shield]: https://img.shields.io/badge/Community-Join%20Slack-blue
+[slack-url]: https://www.loxilb.io/members
+
diff --git a/data/readmes/mariadb-mariadb-1221.md b/data/readmes/mariadb-mariadb-1221.md
new file mode 100644
index 0000000..b1c0db2
--- /dev/null
+++ b/data/readmes/mariadb-mariadb-1221.md
@@ -0,0 +1,76 @@
+# MariaDB - README (mariadb-12.2.1)
+
+**Repository**: https://github.com/MariaDB/server
+**Version**: mariadb-12.2.1
+
+---
+
+# Code status:
+
+* [](https://ci.appveyor.com/project/rasmushoj/server) ci.appveyor.com
+
+## MariaDB: The innovative open source database
+
+MariaDB was designed as a drop-in replacement for MySQL(R), with more
+features, new storage engines, fewer bugs, and better performance.
+
+MariaDB is brought to you by the MariaDB Foundation and the MariaDB Corporation.
+Please read the CREDITS file for details about the MariaDB Foundation,
+and who is developing MariaDB.
+
+MariaDB is developed by many of the original developers of MySQL, who
+now work for the MariaDB Corporation and the MariaDB Foundation, and by
+many people in the community.
+
+MySQL, which is the base of MariaDB, is a product and trademark of Oracle
+Corporation, Inc. For a list of developers and other contributors,
+see the Credits appendix. You can also run 'SHOW authors' to get a
+list of active contributors.
+
+A description of the MariaDB project and a manual can be found at:
+
+https://mariadb.org
+
+https://mariadb.com/kb/en/
+
+https://mariadb.com/kb/en/mariadb-vs-mysql-features/
+
+https://mariadb.com/kb/en/mariadb-versus-mysql-compatibility/
+
+https://mariadb.com/kb/en/new-and-old-releases/
+
+# Getting the code, building it and testing it
+
+Refer to the following guide: https://mariadb.org/get-involved/getting-started-for-developers/get-code-build-test/
+which outlines how to build the source code correctly and run the MariaDB testing framework,
+as well as which branch to target for your contributions.
+
+# Help
+
+More help is available from the Maria Discuss mailing list
+https://lists.mariadb.org/postorius/lists/discuss.lists.mariadb.org/ and MariaDB's Zulip
+instance, https://mariadb.zulipchat.com/
+
+# Licensing
+
+***************************************************************************
+
+MariaDB is specifically available only under version 2 of the GNU
+General Public License (GPLv2). (I.e. Without the "any later version"
+clause.) This is inherited from MySQL. Please see the README file in
+the MySQL distribution for more information.
+
+License information can be found in the COPYING file. Third party
+license information can be found in the THIRDPARTY file.
+
+***************************************************************************
+
+# Bug Reports
+
+Bug and/or error reports regarding MariaDB should be submitted at:
+https://jira.mariadb.org
+
+For reporting security vulnerabilities, see our [security-policy](https://mariadb.org/about/security-policy/).
+
+The code for MariaDB, including all revision history, can be found at:
+https://github.com/MariaDB/server
diff --git a/data/readmes/maven-maven-400-rc-5.md b/data/readmes/maven-maven-400-rc-5.md
new file mode 100644
index 0000000..5b70714
--- /dev/null
+++ b/data/readmes/maven-maven-400-rc-5.md
@@ -0,0 +1,98 @@
+# Maven - README (maven-4.0.0-rc-5)
+
+**Repository**: https://github.com/apache/maven
+**Version**: maven-4.0.0-rc-5
+
+---
+
+
+Apache Maven
+============
+
+[][license]
+[](https://github.com/jvm-repo-rebuild/reproducible-central/blob/master/content/org/apache/maven/maven/README.md)
+[][build]
+[][test-results]
+- [master](https://github.com/apache/maven) = 4.1.x: [](https://central.sonatype.com/artifact/org.apache.maven/apache-maven)
+- [4.0.x](https://github.com/apache/maven/tree/maven-4.0.x): [](https://central.sonatype.com/artifact/org.apache.maven/apache-maven)
+- [3.9.x](https://github.com/apache/maven/tree/maven-3.9.x): [](https://central.sonatype.com/artifact/org.apache.maven/apache-maven)
+
+
+Apache Maven is a software project management and comprehension tool. Based on
+the concept of a project object model (POM), Maven can manage a project's
+build, reporting and documentation from a central piece of information.
+
+If you think you have found a bug, please file an issue in the [Maven Issue Tracker](https://github.com/apache/maven/issues).
+
+Documentation
+-------------
+
+More information can be found on [Apache Maven Homepage][maven-home].
+Questions related to the usage of Maven should be posted on
+the [Maven User List][users-list].
+
+
+Where can I get the latest release?
+-----------------------------------
+You can download the release source from our [download page][maven-download].
+
+Contributing
+------------
+
+If you are interested in the development of Maven, please consult the
+documentation first; afterwards, you are welcome to join the developers
+mailing list to ask questions or discuss new ideas, features, bugs, etc.
+
+Take a look into the [contribution guidelines](CONTRIBUTING.md).
+
+License
+-------
+This code is under the [Apache License, Version 2.0, January 2004][license].
+
+See the [`NOTICE`](./NOTICE) file for required notices and attributions.
+
+Donations
+---------
+Do you like Apache Maven? Then [donate back to the ASF](https://www.apache.org/foundation/contributing.html) to support the development.
+
+Quick Build
+-------
+If you want to bootstrap Maven, you'll need:
+
+- Java 17+
+- Maven 3.6.3 or later
+
+Then run Maven, specifying a location into which the completed Maven distro should be installed:
+
+ ```
+ mvn -DdistributionTargetDir="$HOME/app/maven/apache-maven-4.0.x-SNAPSHOT" clean package
+ ```
+
+
+[home]: https://maven.apache.org/
+[license]: https://www.apache.org/licenses/LICENSE-2.0
+[build]: https://ci-maven.apache.org/job/Maven/job/maven-box/job/maven/job/maven-4.0.x/
+[test-results]: https://ci-maven.apache.org/job/Maven/job/maven-box/job/maven/job/maven-4.0.x/lastCompletedBuild/testReport/
+[build-status]: https://img.shields.io/jenkins/s/https/ci-maven.apache.org/job/Maven/job/maven-box/job/maven/job/maven-4.0.x.svg?
+[build-tests]: https://img.shields.io/jenkins/t/https/ci-maven.apache.org/job/Maven/job/maven-box/job/maven/job/maven-4.0.x.svg?
+[maven-home]: https://maven.apache.org/
+[maven-download]: https://maven.apache.org/download.cgi
+[users-list]: https://maven.apache.org/mailing-lists.html
+[dev-ml-list]: https://www.mail-archive.com/dev@maven.apache.org/
+[code-style]: http://maven.apache.org/developers/conventions/code.html
+[core-it]: https://maven.apache.org/core-its/core-it-suite/
+[building-maven]: https://maven.apache.org/guides/development/guide-building-maven.html
+[cla]: https://www.apache.org/licenses/#clas
+
diff --git a/data/readmes/memcached-flash-with-wbuf-stack.md b/data/readmes/memcached-flash-with-wbuf-stack.md
new file mode 100644
index 0000000..91c8bb0
--- /dev/null
+++ b/data/readmes/memcached-flash-with-wbuf-stack.md
@@ -0,0 +1,42 @@
+# Memcached - README (flash-with-wbuf-stack)
+
+**Repository**: https://github.com/memcached/memcached
+**Version**: flash-with-wbuf-stack
+
+---
+
+# Memcached
+
+Memcached is a high performance multithreaded event-based key/value cache
+store intended to be used in a distributed system.
+
+See: https://memcached.org/about
+
+A fun story explaining usage: https://memcached.org/tutorial
+
+If you're having trouble, try the wiki: https://memcached.org/wiki
+
+If you're trying to troubleshoot odd behavior or timeouts, see:
+https://memcached.org/timeouts
+
+https://memcached.org/ is a good resource in general. Please use the mailing
+list to ask questions, github issues aren't seen by everyone!
+
+## Dependencies
+
+* libevent, http://www.monkey.org/~provos/libevent/ (libevent-dev)
+
+## Environment
+
+Be warned that the -k (mlockall) option to memcached can be
+dangerous when using a large cache. Just make sure the memcached machines
+don't swap. memcached does non-blocking network I/O, but not disk I/O;
+it should never go to disk, or you've lost the whole point of it.
+
+## Website
+
+* http://www.memcached.org
+
+## Contributing
+
+See https://github.com/memcached/memcached/wiki/DevelopmentRepos
diff --git a/data/readmes/merbridge-081.md b/data/readmes/merbridge-081.md
new file mode 100644
index 0000000..1b0a0c9
--- /dev/null
+++ b/data/readmes/merbridge-081.md
@@ -0,0 +1,99 @@
+# Merbridge - README (0.8.1)
+
+**Repository**: https://github.com/merbridge/merbridge
+**Version**: 0.8.1
+
+---
+
+# merbridge
+
+[](https://bestpractices.coreinfrastructure.org/projects/6382)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fmerbridge%2Fmerbridge?ref=badge_shield)
+
+Use eBPF to speed up your Service Mesh like crossing an Einstein-Rosen Bridge.
+
+## Usage
+
+### Install
+
+You only need to run the following command on your Istio cluster to have eBPF speed up Istio:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/merbridge/merbridge/main/deploy/all-in-one.yaml
+```
+
+Or on a Linkerd cluster:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/merbridge/merbridge/main/deploy/all-in-one-linkerd.yaml
+```
+
+Or on a Kuma cluster:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/merbridge/merbridge/main/deploy/all-in-one-kuma.yaml
+```
+
+Or on an [OSM](https://github.com/openservicemesh/osm)/[OSM-Edge](https://github.com/flomesh-io/osm-edge) cluster:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/merbridge/merbridge/main/deploy/all-in-one-osm.yaml
+```
+
+> Note: It currently only works on Linux kernel >= 5.7, run `uname -r` to check your kernel version before installing Merbridge.
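
The kernel check in the note above can be scripted; a minimal sketch using only standard tools (the 5.7 threshold comes from the note, this is not an official Merbridge script):

```shell
# Compare the running kernel against Merbridge's minimum supported version (5.7)
required="5.7"
current="$(uname -r | cut -d- -f1)"
# sort -V orders version strings; if the smallest is $required, then current >= required
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "kernel $current is new enough for Merbridge"
else
  echo "kernel $current is older than 5.7; Merbridge will not work"
fi
```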
+
+If you want to install Merbridge by `Helm`, read the guidelines: [Deploy Merbridge with Helm](deploy/).
+
+### Uninstall
+
+- Istio:
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/merbridge/merbridge/main/deploy/all-in-one.yaml
+```
+
+- Linkerd:
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/merbridge/merbridge/main/deploy/all-in-one-linkerd.yaml
+```
+
+- Kuma:
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/merbridge/merbridge/main/deploy/all-in-one-kuma.yaml
+```
+
+- [OSM](https://github.com/openservicemesh/osm)/[OSM-Edge](https://github.com/flomesh-io/osm-edge):
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/merbridge/merbridge/main/deploy/all-in-one-osm.yaml
+```
+
+## Get involved
+
+Join the [Merbridge slack](https://join.slack.com/t/merbridge/shared_invite/zt-11uc3z0w7-DMyv42eQ6s5YUxO5mZ5hwQ).
+
+## Contributors
+
+
+
+
+
+Made with [contrib.rocks](https://contrib.rocks).
+
+## License
+
+Copyright 2023 the Merbridge Authors. All rights reserved.
+
+Licensed under the Apache License, Version 2.0.
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fmerbridge%2Fmerbridge?ref=badge_large)
+
+## Landscapes
+
+
diff --git a/data/readmes/meshery-wasm-v08178.md b/data/readmes/meshery-wasm-v08178.md
new file mode 100644
index 0000000..544a07b
--- /dev/null
+++ b/data/readmes/meshery-wasm-v08178.md
@@ -0,0 +1,337 @@
+# Meshery (Wasm) - README (v0.8.178)
+
+**Repository**: https://github.com/meshery/meshery
+**Version**: v0.8.178
+
+---
+
+
+
If you like Meshery, please ★ this repository to show your support! 🤩
+
+MESHERY IS A CLOUD NATIVE COMPUTING FOUNDATION PROJECT
+
+
+
+
+
+
+
+
+Meshery is a self-service engineering platform: the open source, cloud native manager that enables the design and management of all Kubernetes-based infrastructure and applications (multi-cloud). As an extensible platform, Meshery offers visual and collaborative GitOps, freeing you from the chains of YAML while managing Kubernetes multi-cluster deployments.
+
Open Meshery extension, Kanvas, in your browser: https://kanvas.new
+
+
+
+
+
+
+
+# Functionality
+
+## Infrastructure Lifecycle Management
+
+Meshery manages the configuration, deployment, and operation of your Cloud services and Kubernetes clusters while supporting hundreds of different types of cloud native infrastructure integrations. Meshery supports [300+ integrations](https://meshery.io/integrations).
+
+
+
+
+
+
+Find infrastructure configuration patterns in Meshery's catalog of curated design templates filled with configuration best practices.
+
+### Multiple Kubernetes Clusters and Multiple Clouds
+
+
+
+Meshery provides a single pane of glass to manage multiple Kubernetes clusters across any infrastructure, including various cloud providers. Meshery enables consistent configuration, operation, and observability across your entire Kubernetes landscape.
+
+
Dry-run your deployments
+Meshery leverages Kubernetes' built-in dry-run capabilities to allow you to simulate deployments without actually applying the changes to your cluster. This enables you to:
+
+- Validate configurations: Ensure your deployment specifications (e.g., YAML manifests, Helm charts, Meshery Designs) are syntactically correct and will be accepted by the Kubernetes API server.
+- Identify potential issues: Detect errors in your configurations, such as invalid resource definitions, missing fields, or API version mismatches, before they impact your live environment.
+- Preview changes: Understand the objects that Kubernetes would create or modify during a real deployment.
+- Integrate with CI/CD: Incorporate dry-run as a step in your continuous integration and continuous delivery pipelines to automate pre-deployment checks and prevent faulty deployments.
+
+By providing this dry-run functionality, Meshery helps you increase the reliability and stability of your Kubernetes deployments by catching potential problems early in the development and deployment process.
+
+
+
+
+### Visually and collaboratively manage your infrastructure
+
+Using a GitOps-centric approach, visually and collaboratively design and manage your infrastructure and microservices. Meshery intelligently infers the manner in which each resource [interrelates](https://docs.meshery.io/concepts/logical/relationships) with each other. Meshery supports a broad variety of built-in relationships between components, which you can use to create your own custom relationships.
+
+
+
+
Context-Aware Policies For Applications
+
+
Leverage built-in relationships to enforce configuration best practices consistently from code to Kubernetes. Configure your infrastructure with confidence without needing to know or write Open Policy Agent's Rego query language.
+
+
+## Workspaces: Your team's Google Drive for cloud native projects
+
+
+
+Workspaces let you organize your work and serve as the central point of collaboration for you and your teams and point of access control to Environments and their resources.
+
+
Manage your connections with Environments
+
+
+
Environments make it easier for you to manage, share, and work with a collection of resources as a group, instead of dealing with all your Connections and Credentials on an individual basis.
+
+
+
See changes to your infra before you merge
+
+
+
+Get snapshots of your infrastructure directly in your PRs. Preview your deployment, view changes pull request-to-pull request and get infrastructure snapshots within your PRs by connecting Kanvas to your GitHub repositories.
+
+
+
+
+## Platform Engineering with Meshery's Extension Points
+
+Extend Meshery as your self-service engineering platform by taking advantage of its [vast set of extensibility features](https://docs.meshery.io/extensibility), including gRPC adapters, hot-loadable ReactJS packages and Golang plugins, subscriptions on NATS topics, and consumable _and_ extendable API interfaces via REST and GraphQL. The great number of extension points in Meshery makes it ideal as the foundation of your internal developer platform.
+
+
Access the Cloud Native Patterns for Kubernetes
+
+
Design and manage all of your cloud native infrastructure using the design configurator in Meshery or start from a template using the patterns from the catalog.
+
+
+Meshery offers robust capabilities for managing multiple tenants within a shared Kubernetes infrastructure. Meshery provides the tools and integrations necessary to create secure, isolated, and manageable multi-tenant environments, giving multiple teams or organizations granular control over their role-based access.
+
+Meshery's "multi-player" functionality refers to its collaborative features that enable multiple users to interact with and manage cloud native infrastructure simultaneously. This is primarily facilitated through Kanvas, a Meshery extension that serves as a visual designer and management interface.
+
+## Performance Management
+
+Meshery offers load generation and performance characterization to help you assess and optimize the performance of your applications and infrastructure.
+
+
+
+
Create and reuse performance profiles for consistent characterization of the configuration of your infrastructure in context of how it performs.
+
+
+
Manage the performance of your infrastructure and its workloads
+
+
+
+Baseline and track your cloud native performance from release to release.
+
+- Use performance profiles to track the historical performance of your workloads.
+- Track your application performance from version to version.
+- Understand behavioral differences between cloud native network functions.
+- Compare performance across infrastructure deployments.
+
+
+
+
+
Load Generation and Microservice Performance Characterization
+
+
+
+
+
+
+- **Multiple Load Generators:** Meshery supports various load generators, including Fortio, Wrk2, and Nighthawk, allowing users to choose the tool that best suits their needs.
+- **Configurable Performance Profiles:** Meshery provides a highly configurable set of load profiles with tunable facets, enabling users to generate TCP, gRPC, and HTTP load. You can customize parameters such as duration, concurrent threads, concurrent generators, and load generator type.
+- **Statistical Analysis:** Meshery performs statistical analysis on the results of performance tests, presenting data in the form of histograms with latency buckets. Understand the distribution of response times and identify potential bottlenecks.
+- **Comparison of Test Results:** Meshery enables you to compare the difference in request performance (latency and throughput) between independent performance tests. Save your load test configurations as Performance Profiles, making it easy to rerun tests with the same settings and track performance variations over time.
+- **Kubernetes Cluster and Workload Metrics:** Meshery connects to one or more Prometheus servers to gather both cluster and application metrics. Meshery also integrates with Grafana, allowing you to import your existing dashboards and visualize performance data.
+
+
In an effort to produce infrastructure agnostic tooling, Meshery uses the Cloud Native Performance specification as a common format to capture and measure your infrastructure's performance against a universal cloud native performance index. Meshery participates in advancing cloud native infrastructure adoption through the standardization of APIs. Meshery enables you to measure the value provided by Docker, Kubernetes, or other cloud native infrastructure in the context of the overhead incurred.
+
+
+
+
+
+
Get Started with Meshery
+
+
+
Using `mesheryctl`
+
Meshery runs as a set of containers inside or outside of your Kubernetes clusters.
+
+## Join the Meshery community
+
+
+Our projects are community-built and welcome collaboration. 👍 Be sure to see the Contributor Journey Map and Community Handbook for a tour of resources available to you, and the Repository Overview for a cursory description of each repository by technology and programming language. Jump into the community Slack or discussion forum to participate.
+
+
+
Find your MeshMate
+
+
MeshMates are experienced Meshery community members, who will help you learn your way around, discover live projects, and expand your community network. Connect with a MeshMate today!
+ Not sure where to start? Grab an open issue with the help-wanted label.
+
+
+
+
+
+## Contributing
+
+Please do! We're a warm and welcoming community of open source contributors, and all types of contributions are welcome. Be sure to read the [Contributor Guides](https://docs.meshery.io/project/contributing) for a tour of resources available to you and how to get started.
+
+
+
+
+
+### Stargazers
+
+
+If you like Meshery, please ★ star this repository to show your support! 🤩
+
+
+
+
+
+[](https://discord.com/invite/consensys)
+[](https://pypi.python.org/pypi/mythril)
+[](https://mythril-classic.readthedocs.io/en/develop/)
+[](https://dl.circleci.com/status-badge/redirect/gh/Consensys/mythril/tree/develop)
+[](https://sonarcloud.io/dashboard?id=mythril)
+[](https://pepy.tech/project/mythril)
+[](https://cloud.docker.com/u/mythril/repository/docker/mythril/myth)
+
+Mythril is a symbolic-execution-based security analysis tool for EVM bytecode. It detects security vulnerabilities in smart contracts built for Ethereum and other EVM-compatible blockchains.
+
+Whether you want to contribute, need support, or want to learn what we have cooking for the future, you can check out the diligence-mythx channel in the [ConsenSys Discord server](https://discord.gg/consensys).
+
+## Installation and setup
+
+Get it with [Docker](https://www.docker.com):
+
+```bash
+$ docker pull mythril/myth
+```
+
+Install from PyPI (Python 3.7-3.10):
+
+```bash
+$ pip3 install mythril
+```
+
+Use it via a pre-commit hook (replace `$GIT_TAG` with a real tag):
+
+```yaml
+- repo: https://github.com/Consensys/mythril
+  rev: $GIT_TAG
+  hooks:
+    - id: mythril
+```
+
+Additionally, set `args: [disassemble]` or `args: [read-storage]` to use a different command than `analyze`.
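For instance, a sketch of a hook entry that runs `disassemble` instead of the default `analyze` (the `rev` value is a placeholder for a real release tag):

```yaml
- repo: https://github.com/Consensys/mythril
  rev: $GIT_TAG  # replace with a real release tag
  hooks:
    - id: mythril
      args: [disassemble]
```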
+
+See the [docs](https://mythril-classic.readthedocs.io/en/master/installation.html) for more detailed instructions.
+
+## Usage
+
+Run:
+
+```
+$ myth analyze <solidity-file>
+```
+
+Or:
+
+```
+$ myth analyze -a <contract-address>
+```
+
+Specify the maximum number of transactions to explore with `-t <number>`. You can also set a timeout with `--execution-timeout <seconds>`.
+
+Here is an example of running Mythril on the file `killbilly.sol`, which is in the `solidity_examples` directory, for `3` transactions:
+
+```
+> myth a killbilly.sol -t 3
+==== Unprotected Selfdestruct ====
+SWC ID: 106
+Severity: High
+Contract: KillBilly
+Function name: commencekilling()
+PC address: 354
+Estimated Gas Usage: 974 - 1399
+Any sender can cause the contract to self-destruct.
+Any sender can trigger execution of the SELFDESTRUCT instruction to destroy this contract account and withdraw its balance to an arbitrary address. Review the transaction trace generated for this issue and make sure that appropriate security controls are in place to prevent unrestricted access.
+--------------------
+In file: killbilly.sol:22
+
+selfdestruct(msg.sender)
+
+--------------------
+Initial State:
+
+Account: [CREATOR], balance: 0x2, nonce:0, storage:{}
+Account: [ATTACKER], balance: 0x1001, nonce:0, storage:{}
+
+Transaction Sequence:
+
+Caller: [CREATOR], calldata: , decoded_data: , value: 0x0
+Caller: [ATTACKER], function: killerize(address), txdata: 0x9fa299cc000000000000000000000000deadbeefdeadbeefdeadbeefdeadbeefdeadbeef, decoded_data: ('0xdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef',), value: 0x0
+Caller: [ATTACKER], function: activatekillability(), txdata: 0x84057065, value: 0x0
+Caller: [ATTACKER], function: commencekilling(), txdata: 0x7c11da20, value: 0x0
+
+```
+
+
+Instructions for using Mythril are found on the [docs](https://mythril-classic.readthedocs.io/en/develop/).
+
+For support or general discussions, please check out the [diligence-mythx channel](https://discord.com/channels/697535391594446898/712829485350649886) in the [ConsenSys Discord server](https://discord.gg/consensys).
+
+## Building the Documentation
+Mythril's documentation is contained in the `docs` folder and is published to [Read the Docs](https://mythril-classic.readthedocs.io/en/develop/). It is based on Sphinx and can be built using the Makefile contained in the subdirectory:
+
+```
+cd docs
+make html
+```
+
+This will create a `build` output directory containing the HTML output. Alternatively, PDF documentation can be built with `make latexpdf`. The available output format options can be seen with `make help`.
+
+## Vulnerability Remediation
+
+Visit the [Smart Contract Vulnerability Classification Registry](https://swcregistry.io/) to find detailed information and remediation guidance for the vulnerabilities reported.
diff --git a/data/readmes/nats-v2123-rc2.md b/data/readmes/nats-v2123-rc2.md
new file mode 100644
index 0000000..0388a5b
--- /dev/null
+++ b/data/readmes/nats-v2123-rc2.md
@@ -0,0 +1,83 @@
+# NATS - README (v2.12.3-RC.2)
+
+**Repository**: https://github.com/nats-io/nats-server
+**Version**: v2.12.3-RC.2
+
+---
+
+
+
+
+
+[NATS](https://nats.io) is a simple, secure and performant communications system for digital systems, services and devices. NATS is part of the Cloud Native Computing Foundation ([CNCF](https://cncf.io)). NATS has over [40 client language implementations](https://nats.io/download/), and its server can run on-premise, in the cloud, at the edge, and even on a Raspberry Pi. NATS can secure and simplify design and operation of modern distributed systems.
+
+[![License][License-Image]][License-Url] [![Build][Build-Status-Image]][Build-Status-Url] [![Release][Release-Image]][Release-Url] [![Slack][Slack-Image]][Slack-Url] [![Coverage][Coverage-Image]][Coverage-Url] [![Docker Downloads][Docker-Image]][Docker-Url] [![GitHub Downloads][GitHub-Image]][Somsubhra-URL] [![CII Best Practices][CIIBestPractices-Image]][CIIBestPractices-Url] [![Artifact Hub][ArtifactHub-Image]][ArtifactHub-Url]
+
+## Documentation
+
+- [Official Website](https://nats.io)
+- [Official Documentation](https://docs.nats.io)
+- [FAQ](https://docs.nats.io/reference/faq)
+- Watch [a video overview](https://rethink.synadia.com/episodes/1/) of NATS.
+- Watch [this video from SCALE 13x](https://www.youtube.com/watch?v=sm63oAVPqAM) to learn more about its origin story and design philosophy.
+
+## Contact
+
+- [Twitter](https://twitter.com/nats_io): Follow us on Twitter!
+- [Google Groups](https://groups.google.com/forum/#!forum/natsio): Where you can ask questions
+- [Slack](https://natsio.slack.com): Click [here](https://slack.nats.io) to join. You can ask questions to our maintainers and to the rich and active community.
+
+## Contributing
+
+If you are interested in contributing to NATS, read about our...
+
+- [Contributing guide](./CONTRIBUTING.md)
+- [Report issues or propose Pull Requests](https://github.com/nats-io)
+
+[License-Url]: https://www.apache.org/licenses/LICENSE-2.0
+[License-Image]: https://img.shields.io/badge/License-Apache2-blue.svg
+[Docker-Image]: https://img.shields.io/docker/pulls/_/nats.svg
+[Docker-Url]: https://hub.docker.com/_/nats
+[Slack-Image]: https://img.shields.io/badge/chat-on%20slack-green
+[Slack-Url]: https://slack.nats.io
+[Fossa-Url]: https://app.fossa.io/projects/git%2Bgithub.com%2Fnats-io%2Fnats-server?ref=badge_shield
+[Fossa-Image]: https://app.fossa.io/api/projects/git%2Bgithub.com%2Fnats-io%2Fnats-server.svg?type=shield
+[Build-Status-Url]: https://github.com/nats-io/nats-server/actions/workflows/tests.yaml
+[Build-Status-Image]: https://github.com/nats-io/nats-server/actions/workflows/tests.yaml/badge.svg?branch=main
+[Release-Url]: https://github.com/nats-io/nats-server/releases/latest
+[Release-Image]: https://img.shields.io/github/v/release/nats-io/nats-server
+[Coverage-Url]: https://coveralls.io/r/nats-io/nats-server?branch=main
+[Coverage-image]: https://coveralls.io/repos/github/nats-io/nats-server/badge.svg?branch=main
+[ReportCard-Url]: https://goreportcard.com/report/nats-io/nats-server
+[ReportCard-Image]: https://goreportcard.com/badge/github.com/nats-io/nats-server
+[CIIBestPractices-Url]: https://bestpractices.coreinfrastructure.org/projects/1895
+[CIIBestPractices-Image]: https://bestpractices.coreinfrastructure.org/projects/1895/badge
+[ArtifactHub-Url]: https://artifacthub.io/packages/helm/nats/nats
+[ArtifactHub-Image]: https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/nats
+[GitHub-Release]: https://github.com/nats-io/nats-server/releases/
+[GitHub-Image]: https://img.shields.io/github/downloads/nats-io/nats-server/total.svg?logo=github
+[Somsubhra-url]: https://somsubhra.github.io/github-release-stats/?username=nats-io&repository=nats-server
+
+## Roadmap
+
+The NATS product roadmap can be found [here](https://nats.io/about/#roadmap).
+
+## Adopters
+
+Who uses NATS? See our [list of users](https://nats.io/#who-uses-nats) on [https://nats.io](https://nats.io).
+
+## Security
+
+### Security Audit
+
+A third party security audit was performed by Trail of Bits following engagement by the Open Source Technology Improvement Fund (OSTIF). You can see the [full report from April 2025 here](https://github.com/trailofbits/publications/blob/master/reviews/2025-04-ostif-nats-securityreview.pdf).
+
+### Reporting Security Vulnerabilities
+
+If you've found a vulnerability or a potential vulnerability in the NATS server, please let us know at
+[nats-security](mailto:security@nats.io).
+
+## License
+
+Unless otherwise noted, the NATS source files are distributed
+under the Apache Version 2.0 license found in the LICENSE file.
diff --git a/data/readmes/near-protocol-2101.md b/data/readmes/near-protocol-2101.md
new file mode 100644
index 0000000..c56a6d3
--- /dev/null
+++ b/data/readmes/near-protocol-2101.md
@@ -0,0 +1,99 @@
+# Near Protocol - README (2.10.1)
+
+**Repository**: https://github.com/near/nearcore
+**Version**: 2.10.1
+
+---
+
+
+
+
+
+
+
+
+
+
+
+
+## Reference implementation of NEAR Protocol
+
+[![Build][build-badge]][build-url]
+![Stable Status][stable-release]
+![Prerelease Status][prerelease]
+[![codecov][codecov-badge]][codecov-url]
+[![Discord chat][discord-badge]][discord-url]
+[![Twitter][twitter-badge]][twitter-url]
+[![Telegram Group][telegram-badge]][telegram-url]
+
+[build-badge]: https://github.com/near/nearcore/actions/workflows/ci.yml/badge.svg?branch=master
+[build-url]: https://github.com/near/nearcore/actions
+[stable-release]: https://img.shields.io/github/v/release/nearprotocol/nearcore?label=stable
+[prerelease]: https://img.shields.io/github/v/release/nearprotocol/nearcore?include_prereleases&label=prerelease
+[ci-badge-master]: https://badge.buildkite.com/a81147cb62c585cc434459eedd1d25e521453120ead9ee6c64.svg?branch=master
+[ci-url]: https://buildkite.com/nearprotocol/nearcore
+[codecov-badge]: https://codecov.io/gh/near/nearcore/branch/master/graph/badge.svg
+[codecov-url]: https://codecov.io/gh/near/nearcore
+[discord-badge]: https://img.shields.io/discord/490367152054992913.svg
+[discord-url]: https://discord.com/invite/nearprotocol
+[twitter-badge]: https://img.shields.io/twitter/follow/NEARProtocol
+[twitter-url]: https://x.com/NEARProtocol
+[telegram-badge]: https://cdn.jsdelivr.net/gh/Patrolavia/telegram-badge@8fe3382b3fd3a1c533ba270e608035a27e430c2e/chat.svg
+[telegram-url]: https://t.me/cryptonear
+
+## About NEAR
+
+NEAR's purpose is to enable community-driven innovation to benefit people around the world.
+
+To achieve this purpose, *NEAR* provides a developer platform where developers and entrepreneurs can create apps that put users back in control of their data and assets, which is the foundation of the ["Open Web" movement][open-web-url].
+
+One of the components of *NEAR* is the NEAR Protocol, an infrastructure for serverless applications and smart contracts powered by a blockchain.
+NEAR Protocol is built to deliver the usability and scalability of a modern PaaS like Firebase at a fraction of the prices that blockchains like Ethereum charge.
+
+Overall, *NEAR* provides a wide range of tools for developers to easily build applications:
+ - [JS Client library][js-api] to connect to NEAR Protocol from your applications.
+ - [Rust][rust-sdk] and [JavaScript/TypeScript][js-sdk] SDKs to write smart contracts and stateful server-less functions.
+ - [Several essential repositories](https://github.com/near/dx) to guide you in building across Near's Open Web Platform.
+ - [Numerous examples][examples-url] with links to hack on them right inside your browser.
+ - [Lots of documentation][docs-url], with [Tutorials][tutorials-url] and [API docs][api-docs-url].
+
+[open-web-url]: https://techcrunch.com/2016/04/10/1301496/
+[js-api]: https://github.com/near/near-api-js
+[rust-sdk]: https://github.com/near/near-sdk-rs
+[js-sdk]: https://github.com/near/near-sdk-js
+[examples-url]: https://github.com/near-examples
+[docs-url]: https://docs.near.org
+[tutorials-url]: https://docs.near.org/tutorials/welcome
+[api-docs-url]: https://docs.near.org/api/rpc/introduction
+
+## Join the Network
+
+The easiest way to join the network is by using the `nearup` command, which you can install as follows:
+
+```bash
+pip3 install --user nearup
+```
+
+You can join all the active networks:
+* mainnet: `nearup run mainnet`
+* testnet: `nearup run testnet`
+* betanet: `nearup run betanet`
+
+Check the `nearup` repository for [more details](https://github.com/near/nearup) on how to run with or without docker.
+
+To learn how to become a validator, check out the [documentation](https://docs.near.org/docs/develop/node/validator/staking-and-delegation).
+
+## Contributing
+
+The workflow and details of setup to contribute are described in [CONTRIBUTING.md](CONTRIBUTING.md), and security policy is described in [SECURITY.md](SECURITY.md).
+To propose new protocol changes or standards use [Specification & Standards repository](https://github.com/nearprotocol/NEPs).
+
+## Getting in Touch
+
+We use Zulip for semi-synchronous technical discussion, feel free to chime in:
+
+https://near.zulipchat.com/
+
+For non-technical discussion and overall direction of the project, see our Discourse forum:
+
+https://gov.near.org
diff --git a/data/readmes/nethermind-1353.md b/data/readmes/nethermind-1353.md
new file mode 100644
index 0000000..d59ac88
--- /dev/null
+++ b/data/readmes/nethermind-1353.md
@@ -0,0 +1,143 @@
+# Nethermind - README (1.35.3)
+
+**Repository**: https://github.com/NethermindEth/nethermind
+**Version**: 1.35.3
+
+---
+
+
+
+
+
+
+
+
+
+# Nethermind Ethereum client
+
+[](https://github.com/nethermindeth/nethermind/actions/workflows/nethermind-tests.yml)
+[](https://x.com/nethermindeth)
+[](https://discord.gg/GXJFaYk)
+[](https://github.com/nethermindeth/nethermind/discussions)
+[](https://www.gitpoap.io/gh/NethermindEth/nethermind)
+
+The Nethermind Ethereum execution client, built on .NET, delivers industry-leading performance in syncing and tip-of-chain processing. With its modular design and plugin system, it offers extensibility and features for new chains. As one of the most adopted execution clients on Ethereum, Nethermind plays a crucial role in enhancing the diversity and resilience of the Ethereum ecosystem.
+
+## Documentation
+
+Nethermind documentation is available at [docs.nethermind.io](https://docs.nethermind.io).
+
+### Supported networks
+
+**Ethereum** · **Gnosis** · **Optimism** · **Base** · **Taiko** · **World Chain** · **Linea** · **Energy Web**
+
+## Installing
+
+The standalone release builds are available on [GitHub Releases](https://github.com/nethermindeth/nethermind/releases).
+
+### Package managers
+
+- **Linux**
+
+ On Debian-based distros, Nethermind can be installed via Launchpad PPA:
+
+ ```bash
+ sudo add-apt-repository ppa:nethermindeth/nethermind
+ # If command not found, run
+ # sudo apt-get install software-properties-common
+
+ sudo apt-get install nethermind
+ ```
+
+- **Windows**
+
+ On Windows, Nethermind can be installed via Windows Package Manager:
+
+ ```powershell
+ winget install --id Nethermind.Nethermind
+ ```
+
+- **macOS**
+
+ On macOS, Nethermind can be installed via Homebrew:
+
+ ```bash
+ brew tap nethermindeth/nethermind
+ brew install nethermind
+ ```
+
+Once installed, Nethermind can be launched as follows:
+
+```bash
+nethermind -c mainnet --data-dir path/to/data/dir
+```
+
+For further instructions, see [Running a node](https://docs.nethermind.io/get-started/running-node).
+
+### Docker containers
+
+The official Docker images of Nethermind are available on [Docker Hub](https://hub.docker.com/r/nethermind/nethermind) and tagged as follows:
+
+- `latest`: the latest version of Nethermind (the default tag)
+- `latest-chiseled`: a rootless and chiseled image of the latest version of Nethermind
+- `x.x.x`: a specific version of Nethermind
+- `x.x.x-chiseled`: a rootless and chiseled image of the specific version of Nethermind
+
+For more info, see [Installing Nethermind](https://docs.nethermind.io/get-started/installing-nethermind).
+
+## Building from source
+
+### Docker image
+
+This is the easiest and fastest way to build Nethermind if you don't want to clone the Nethermind repo, deal with .NET SDK installation, and other configurations. Running the following simple command builds the Docker image, which is ready to run right after:
+
+```bash
+docker build https://github.com/nethermindeth/nethermind.git -t nethermind
+```
+
+For more info, see [Building Docker image](https://docs.nethermind.io/developers/building-from-source#building-docker-image).
+
+### Standalone binaries
+
+**Prerequisites**
+
+Install the [.NET SDK](https://aka.ms/dotnet/download).
+
+**Clone the repository**
+
+```bash
+git clone --recursive https://github.com/nethermindeth/nethermind.git
+```
+
+**Build and run**
+
+```bash
+cd nethermind/src/Nethermind/Nethermind.Runner
+dotnet run -c release -- -c mainnet
+```
+
+**Test**
+
+```bash
+cd nethermind/src/Nethermind
+
+# Run Nethermind tests
+dotnet test Nethermind.slnx -c release
+
+# Run Ethereum Foundation tests
+dotnet test EthereumTests.slnx -c release
+```
+
+For more info, see [Building standalone binaries](https://docs.nethermind.io/developers/building-from-source#building-standalone-binaries).
+
+## Contributing
+
+Before you start working on a feature or fix, please read and follow our [contributing guidelines](./CONTRIBUTING.md) to help avoid any wasted or duplicate effort.
+
+## Security
+
+If you believe you have found a security vulnerability in our code, please report it to us as described in our [security policy](SECURITY.md).
+
+## License
+
+Nethermind is an open-source software licensed under the [LGPL-3.0](./LICENSE-LGPL). By using this project, you agree to abide by the license and [additional terms](https://nethermindeth.github.io/NethermindEthereumClientTermsandConditions/).
diff --git a/data/readmes/network-service-mesh-v1160.md b/data/readmes/network-service-mesh-v1160.md
new file mode 100644
index 0000000..cc23a59
--- /dev/null
+++ b/data/readmes/network-service-mesh-v1160.md
@@ -0,0 +1,10 @@
+# Network Service Mesh - README (v1.16.0)
+
+**Repository**: https://github.com/networkservicemesh/api
+**Version**: v1.16.0
+
+---
+
+# api
+
+This repo provides the basic Network Service Mesh GRPC APIs
diff --git a/data/readmes/nginx-release-1293.md b/data/readmes/nginx-release-1293.md
new file mode 100644
index 0000000..ac2a273
--- /dev/null
+++ b/data/readmes/nginx-release-1293.md
@@ -0,0 +1,239 @@
+# Nginx - README (release-1.29.3)
+
+**Repository**: https://github.com/nginx/nginx
+**Version**: release-1.29.3
+
+---
+
+
+
+
+
+
+
+[](https://www.repostatus.org/#active)
+[](https://community.nginx.org)
+[](/LICENSE)
+[](/CODE_OF_CONDUCT.md)
+
+NGINX (pronounced "engine x" or "en-jin-eks") is the world's most popular Web Server, high performance Load Balancer, Reverse Proxy, API Gateway and Content Cache.
+
+NGINX is free and open source software, distributed under the terms of a simplified [2-clause BSD-like license](LICENSE).
+
+Enterprise distributions, commercial support and training are available from [F5, Inc](https://www.f5.com/products/nginx).
+
+> [!IMPORTANT]
+> The goal of this README is to provide a basic, structured introduction to NGINX for novice users. Please refer to the [full NGINX documentation](https://nginx.org/en/docs/) for detailed information on [installing](https://nginx.org/en/docs/install.html), [building](https://nginx.org/en/docs/configure.html), [configuring](https://nginx.org/en/docs/dirindex.html), [debugging](https://nginx.org/en/docs/debugging_log.html), and more. These documentation pages also contain a more detailed [Beginners Guide](https://nginx.org/en/docs/beginners_guide.html), How-Tos, [Development guide](https://nginx.org/en/docs/dev/development_guide.html), and a complete module and [directive reference](https://nginx.org/en/docs/dirindex.html).
+
+# Table of contents
+- [How it works](#how-it-works)
+ - [Modules](#modules)
+ - [Configurations](#configurations)
+ - [Runtime](#runtime)
+- [Downloading and installing](#downloading-and-installing)
+ - [Stable and Mainline binaries](#stable-and-mainline-binaries)
+ - [Linux binary installation process](#linux-binary-installation-process)
+ - [FreeBSD installation process](#freebsd-installation-process)
+ - [Windows executables](#windows-executables)
+ - [Dynamic modules](#dynamic-modules)
+- [Getting started with NGINX](#getting-started-with-nginx)
+ - [Installing SSL certificates and enabling TLS encryption](#installing-ssl-certificates-and-enabling-tls-encryption)
+ - [Load Balancing](#load-balancing)
+ - [Rate limiting](#rate-limiting)
+ - [Content caching](#content-caching)
+- [Building from source](#building-from-source)
+ - [Installing dependencies](#installing-dependencies)
+ - [Cloning the NGINX GitHub repository](#cloning-the-nginx-github-repository)
+ - [Configuring the build](#configuring-the-build)
+ - [Compiling](#compiling)
+ - [Location of binary and installation](#location-of-binary-and-installation)
+ - [Running and testing the installed binary](#running-and-testing-the-installed-binary)
+- [Asking questions and reporting issues](#asking-questions-and-reporting-issues)
+- [Contributing code](#contributing-code)
+- [Additional help and resources](#additional-help-and-resources)
+- [Changelog](#changelog)
+- [License](#license)
+
+# How it works
+NGINX is installed software with binary packages available for all major operating systems and Linux distributions. See [Tested OS and Platforms](https://nginx.org/en/#tested_os_and_platforms) for a full list of compatible systems.
+
+> [!IMPORTANT]
+> While nearly all popular Linux-based operating systems are distributed with a community version of nginx, we highly advise installation and usage of official [packages](https://nginx.org/en/linux_packages.html) or sources from this repository. Doing so ensures that you're using the most recent release or source code, including the latest feature-set, fixes and security patches.
+
+## Modules
+NGINX is comprised of individual modules, each extending core functionality by providing additional, configurable features. See "Modules reference" at the bottom of [nginx documentation](https://nginx.org/en/docs/) for a complete list of official modules.
+
+NGINX modules can be built and distributed as static or dynamic modules. Static modules are defined at build-time, compiled, and distributed in the resulting binaries. See [Dynamic Modules](#dynamic-modules) for more information on how they work, as well as, how to obtain, install, and configure them.
+
+> [!TIP]
+> You can issue the following command to see which static modules your NGINX binaries were built with:
+```bash
+nginx -V
+```
+> See [Configuring the build](#configuring-the-build) for information on how to include specific Static modules into your nginx build.
+
+## Configurations
+NGINX is highly flexible and configurable. Provisioning the software is achieved via text-based config file(s) accepting parameters called "[Directives](https://nginx.org/en/docs/dirindex.html)". See [Configuration File's Structure](https://nginx.org/en/docs/beginners_guide.html#conf_structure) for a comprehensive description of how NGINX configuration files work.
+
+> [!NOTE]
+> The set of directives available to your distribution of NGINX is dependent on which [modules](#modules) have been made available to it.
+
+## Runtime
+Rather than running in a single, monolithic process, NGINX is architected to scale beyond Operating System process limitations by operating as a collection of processes. They include:
+- A "master" process that maintains worker processes, as well as, reads and evaluates configuration files.
+- One or more "worker" processes that process data (eg. HTTP requests).
+
+The number of [worker processes](https://nginx.org/en/docs/ngx_core_module.html#worker_processes) is defined in the configuration file and may be fixed for a given configuration or automatically adjusted to the number of available CPU cores. In most cases, the latter option optimally balances load across available system resources, as NGINX is designed to efficiently distribute work across all worker processes.
+
+> [!TIP]
+> Processes synchronize data through shared memory. For this reason, many NGINX directives require the allocation of shared memory zones. As an example, when configuring [rate limiting](https://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req), connecting clients may need to be tracked in a [common memory zone](https://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone) so all worker processes can know how many times a particular client has accessed the server in a span of time.
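The process model above can be made concrete with a minimal configuration sketch (illustrative values; `worker_processes`, `limit_req_zone`, and `limit_req` are the directives linked above):

```nginx
# Size the worker pool to the number of available CPU cores
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    # 10 MB shared memory zone keyed by client IP, visible to every worker
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    server {
        listen 80;

        location / {
            # All workers consult the same zone when enforcing the limit
            limit_req zone=per_ip burst=20;
        }
    }
}
```

Because the `per_ip` zone lives in shared memory, any worker that accepts a connection sees the same request counts.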
+
+# Downloading and installing
+Follow these steps to download and install precompiled NGINX binaries. You may also choose to [build NGINX locally from source code](#building-from-source).
+
+## Stable and Mainline binaries
+NGINX binaries are built and distributed in two versions: stable and mainline. Stable binaries are built from stable branches and only contain critical fixes backported from the mainline version. Mainline binaries are built from the [master branch](https://github.com/nginx/nginx/tree/master) and contain the latest features and bugfixes. You'll need to [decide which is appropriate for your purposes](https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/#choosing-between-a-stable-or-a-mainline-version).
+
+## Linux binary installation process
+The NGINX binary installation process takes advantage of package managers native to specific Linux distributions. For this reason, first-time installations involve adding the official NGINX package repository to your system's package manager. Follow [these steps](https://nginx.org/en/linux_packages.html) to download, verify, and install NGINX binaries using the package manager appropriate for your Linux distribution.
+
+### Upgrades
+Future upgrades to the latest version can be managed using the same package manager without the need to manually download and verify binaries.
+
+## FreeBSD installation process
+For more information on installing NGINX on a FreeBSD system, visit https://nginx.org/en/docs/install.html.
+
+## Windows executables
+Windows executables for mainline and stable releases can be found on the main [NGINX download page](https://nginx.org/en/download.html). Note that the current implementation of NGINX for Windows is at the Proof-of-Concept stage and should only be used for development and testing purposes. For additional information, please see [nginx for Windows](https://nginx.org/en/docs/windows.html).
+
+## Dynamic modules
+NGINX version 1.9.11 added support for [Dynamic Modules](https://nginx.org/en/docs/ngx_core_module.html#load_module). Unlike Static modules, dynamically built modules can be downloaded, installed, and configured after the core NGINX binaries have been built. [Official dynamic module binaries](https://nginx.org/en/linux_packages.html#dynmodules) are available from the same package repository as the core NGINX binaries described in previous steps.
+
+> [!TIP]
+> [NGINX JavaScript (njs)](https://github.com/nginx/njs), is a popular NGINX dynamic module that enables the extension of core NGINX functionality using familiar JavaScript syntax.
+
+> [!IMPORTANT]
+> If desired, dynamic modules can also be built statically into NGINX at compile time.
+
+# Getting started with NGINX
+For a gentle introduction to NGINX basics, please see our [Beginner’s Guide](https://nginx.org/en/docs/beginners_guide.html).
+
+## Installing SSL certificates and enabling TLS encryption
+See [Configuring HTTPS servers](https://nginx.org/en/docs/http/configuring_https_servers.html) for a quick guide on how to enable secure traffic to your NGINX installation.
+
+## Load Balancing
+For a quick start guide on configuring NGINX as a Load Balancer, please see [Using nginx as HTTP load balancer](https://nginx.org/en/docs/http/load_balancing.html).
+
+## Rate limiting
+See our [Rate Limiting with NGINX](https://blog.nginx.org/blog/rate-limiting-nginx) blog post for an overview of core concepts for provisioning NGINX as an API Gateway.
+
+## Content caching
+See [A Guide to Caching with NGINX and NGINX Plus](https://blog.nginx.org/blog/nginx-caching-guide) blog post for an overview of how to use NGINX as a content cache (e.g. edge server of a content delivery network).
+
+# Building from source
+The following steps can be used to build NGINX from source code available in this repository.
+
+## Installing dependencies
+Most Linux distributions will require several dependencies to be installed in order to build NGINX. The following instructions are specific to the `apt` package manager, widely available on most Ubuntu/Debian distributions and their derivatives.
+
+> [!TIP]
+> It is always a good idea to update your package repository lists prior to installing new packages.
+> ```bash
+> sudo apt update
+> ```
+
+### Installing compiler and make utility
+Use the following command to install the GNU C compiler and Make utility.
+
+```bash
+sudo apt install gcc make
+```
+
+### Installing dependency libraries
+
+```bash
+sudo apt install libpcre3-dev zlib1g-dev
+```
+
+> [!WARNING]
+> This is the minimal set of dependency libraries needed to build NGINX with rewriting and gzip capabilities. Other dependencies may be required if you choose to build NGINX with additional modules. Monitor the output of the `configure` command discussed in the following sections for information on which modules may be missing. For example, if you plan to use SSL certificates to encrypt traffic with TLS, you'll need to install the OpenSSL library. To do so, issue the following command:
+>
+> ```bash
+> sudo apt install libssl-dev
+> ```
+
+## Cloning the NGINX GitHub repository
+Using your preferred method, clone the NGINX repository into your development directory. See [Cloning a GitHub Repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) for additional help.
+
+```bash
+git clone https://github.com/nginx/nginx.git
+```
+
+## Configuring the build
+Prior to building NGINX, you must run the `configure` script with [appropriate flags](https://nginx.org/en/docs/configure.html). This will generate a Makefile in your NGINX source root directory that can then be used to compile NGINX with [options specified during configuration](https://nginx.org/en/docs/configure.html).
+
+From the NGINX source code repository's root directory:
+
+```bash
+auto/configure
+```
+
+> [!IMPORTANT]
+> Configuring the build without any flags will compile NGINX with the default set of options. Please refer to https://nginx.org/en/docs/configure.html for a full list of available build configuration options.
+
+## Compiling
+The `configure` script will generate a `Makefile` in the NGINX source root directory upon successful execution. To compile NGINX into a binary, issue the following command from that same directory:
+
+```bash
+make
+```
+
+## Location of binary and installation
+After successful compilation, a binary will be generated at `/objs/nginx`. To install this binary, issue the following command from the source root directory:
+
+```bash
+sudo make install
+```
+
+> [!IMPORTANT]
+> The binary will be installed into the `/usr/local/nginx/` directory.
+
+## Running and testing the installed binary
+To run the installed binary, issue the following command:
+
+```bash
+sudo /usr/local/nginx/sbin/nginx
+```
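+
+The same binary also validates the configuration file and controls a running master process via the `-s` signal option; for example:
+
+```bash
+sudo /usr/local/nginx/sbin/nginx -t         # test the configuration file
+sudo /usr/local/nginx/sbin/nginx -s reload  # reload the configuration
+sudo /usr/local/nginx/sbin/nginx -s quit    # shut down gracefully
+```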
+
+You may test NGINX operation using `curl`.
+
+```bash
+curl localhost
+```
+
+The output should start with:
+
+```html
+<!DOCTYPE html>
+<html>
+<head>
+<title>Welcome to nginx!</title>
+```
+
+# Asking questions and reporting issues
+See our [Support](SUPPORT.md) guidelines for information on how to discuss the codebase, ask troubleshooting questions, and report issues.
+
+# Contributing code
+Please see the [Contributing](CONTRIBUTING.md) guide for information on how to contribute code.
+
+# Additional help and resources
+- See the [NGINX Community Blog](https://blog.nginx.org/) for more tips, tricks and HOW-TOs related to NGINX and related projects.
+- Access [nginx.org](https://nginx.org/), your go-to source for all documentation, information and software related to the NGINX suite of projects.
+
+# Changelog
+See our [changelog](https://nginx.org/en/CHANGES) to keep track of updates.
+
+# License
+[2-clause BSD-like license](LICENSE)
+
+---
+Additional documentation available at: https://nginx.org/en/docs
diff --git a/data/readmes/nifi-supportnifi-1111.md b/data/readmes/nifi-supportnifi-1111.md
new file mode 100644
index 0000000..4610b31
--- /dev/null
+++ b/data/readmes/nifi-supportnifi-1111.md
@@ -0,0 +1,195 @@
+# NiFi - README (support/nifi-1.11.1)
+
+**Repository**: https://github.com/apache/nifi
+**Version**: support/nifi-1.11.1
+
+---
+
+
+[][nifi]
+
+[](https://travis-ci.org/apache/nifi)
+[](https://hub.docker.com/r/apache/nifi/)
+[](https://nifi.apache.org/download.html)
+[](https://s.apache.org/nifi-community-slack)
+
+[Apache NiFi](https://nifi.apache.org/) is an easy to use, powerful, and
+reliable system to process and distribute data.
+
+## Table of Contents
+
+- [Features](#features)
+- [Requirements](#requirements)
+- [Getting Started](#getting-started)
+- [Getting Help](#getting-help)
+- [Documentation](#documentation)
+- [License](#license)
+- [Export Control](#export-control)
+
+## Features
+
+Apache NiFi was made for dataflow. It supports highly configurable directed graphs of data routing, transformation, and system mediation logic. Some of its key features include:
+
+- Web-based user interface
+ - Seamless experience for design, control, and monitoring
+ - Multi-tenant user experience
+- Highly configurable
+ - Loss tolerant vs guaranteed delivery
+ - Low latency vs high throughput
+ - Dynamic prioritization
+ - Flows can be modified at runtime
+ - Back pressure
+ - Scales up to leverage full machine capability
+ - Scales out with zero-master clustering model
+- Data Provenance
+ - Track dataflow from beginning to end
+- Designed for extension
+ - Build your own processors and more
+ - Enables rapid development and effective testing
+- Secure
+ - SSL, SSH, HTTPS, encrypted content, etc...
+ - Pluggable fine-grained role-based authentication/authorization
+ - Multiple teams can manage and share specific portions of the flow
+
+## Requirements
+* JDK 1.8 (*ongoing work to enable NiFi to run on Java 9/10/11; see [NIFI-5174](https://issues.apache.org/jira/browse/NIFI-5174)*)
+* Apache Maven 3.1.1 or newer
+* Git Client (used during build process by 'bower' plugin)
+
+## Getting Started
+
+- Read through the [quickstart guide for development](http://nifi.apache.org/quickstart.html).
+ It will include information on getting a local copy of the source, give pointers on issue
+ tracking, and provide some warnings about common problems with development environments.
+- For a more comprehensive guide to development and information about contributing to the project
+ read through the [NiFi Developer's Guide](http://nifi.apache.org/developer-guide.html).
+
+To build:
+- Execute `mvn clean install` or for parallel build execute `mvn -T 2.0C clean install`. On a
+ modest development laptop that is a couple of years old, the latter build takes a bit under ten
+ minutes. After a large amount of output you should eventually see a success message.
+
+ laptop:nifi myuser$ mvn -T 2.0C clean install
+ [INFO] Scanning for projects...
+ [INFO] Inspecting build with total of 115 modules...
+ ...tens of thousands of lines elided...
+ [INFO] ------------------------------------------------------------------------
+ [INFO] BUILD SUCCESS
+ [INFO] ------------------------------------------------------------------------
+ [INFO] Total time: 09:24 min (Wall Clock)
+ [INFO] Finished at: 2015-04-30T00:30:36-05:00
+ [INFO] Final Memory: 173M/1359M
+ [INFO] ------------------------------------------------------------------------
+- Execute `mvn clean install -DskipTests` to compile tests, but skip running them.
+
+To deploy:
+- Change directory to 'nifi-assembly'. In the target directory, there should be a build of nifi.
+
+ laptop:nifi myuser$ cd nifi-assembly
+ laptop:nifi-assembly myuser$ ls -lhd target/nifi*
+ drwxr-xr-x 3 myuser mygroup 102B Apr 30 00:29 target/nifi-1.0.0-SNAPSHOT-bin
+ -rw-r--r-- 1 myuser mygroup 144M Apr 30 00:30 target/nifi-1.0.0-SNAPSHOT-bin.tar.gz
+ -rw-r--r-- 1 myuser mygroup 144M Apr 30 00:30 target/nifi-1.0.0-SNAPSHOT-bin.zip
+
+- For testing ongoing development you could use the already unpacked build present in the directory
+ named "nifi-*version*-bin", where *version* is the current project version. To deploy in another
+ location make use of either the tarball or zipfile and unpack them wherever you like. The
+ distribution will be within a common parent directory named for the version.
+
+ laptop:nifi-assembly myuser$ mkdir ~/example-nifi-deploy
+ laptop:nifi-assembly myuser$ tar xzf target/nifi-*-bin.tar.gz -C ~/example-nifi-deploy
+ laptop:nifi-assembly myuser$ ls -lh ~/example-nifi-deploy/
+ total 0
+ drwxr-xr-x 10 myuser mygroup 340B Apr 30 01:06 nifi-1.0.0-SNAPSHOT
+
+To run NiFi:
+- Change directory to the location where you installed NiFi and run it.
+
+ laptop:~ myuser$ cd ~/example-nifi-deploy/nifi-*
+ laptop:nifi-1.0.0-SNAPSHOT myuser$ ./bin/nifi.sh start
+
+- Direct your browser to http://localhost:8080/nifi/ and you should see a screen like this screenshot:
+ 
+
+- For help building your first data flow see the [NiFi User Guide](http://nifi.apache.org/docs/nifi-docs/html/user-guide.html)
+
+- If you are testing ongoing development, you will likely want to stop your instance.
+
+ laptop:~ myuser$ cd ~/example-nifi-deploy/nifi-*
+ laptop:nifi-1.0.0-SNAPSHOT myuser$ ./bin/nifi.sh stop
+
+## Getting Help
+If you have questions, you can reach out to our mailing list: dev@nifi.apache.org
+([archive](http://mail-archives.apache.org/mod_mbox/nifi-dev)). For more interactive discussions, community members can often be found in the following locations:
+
+- Apache NiFi Slack Workspace: https://apachenifi.slack.com/
+
+ New users can join the workspace using the following [invite link](https://s.apache.org/nifi-community-slack).
+
+- IRC: #nifi on [irc.freenode.net](http://webchat.freenode.net/?channels=#nifi)
+
+## Documentation
+
+See http://nifi.apache.org/ for the latest documentation.
+
+## License
+
+Except as otherwise noted this software is licensed under the
+[Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.html)
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+## Export Control
+
+This distribution includes cryptographic software. The country in which you
+currently reside may have restrictions on the import, possession, use, and/or
+re-export to another country, of encryption software. BEFORE using any
+encryption software, please check your country's laws, regulations and
+policies concerning the import, possession, or use, and re-export of encryption
+software, to see if this is permitted. See for more
+information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and Security
+(BIS), has classified this software as Export Commodity Control Number (ECCN)
+5D002.C.1, which includes information security software using or performing
+cryptographic functions with asymmetric algorithms. The form and manner of this
+Apache Software Foundation distribution makes it eligible for export under the
+License Exception ENC Technology Software Unrestricted (TSU) exception (see the
+BIS Export Administration Regulations, Section 740.13) for both object code and
+source code.
+
+The following provides more details on the included cryptographic software:
+
+Apache NiFi uses BouncyCastle, JCraft Inc., and the built-in
+Java cryptography libraries for SSL, SSH, and the protection
+of sensitive configuration parameters. See
+http://bouncycastle.org/about.html
+http://www.jcraft.com/c-info.html
+http://www.oracle.com/us/products/export/export-regulations-345813.html
+for more details on each of these libraries' cryptography features.
+
+[nifi]: https://nifi.apache.org/
+[logo]: https://nifi.apache.org/assets/images/apache-nifi-logo.svg
diff --git a/data/readmes/nimbus-nightly.md b/data/readmes/nimbus-nightly.md
new file mode 100644
index 0000000..7be2834
--- /dev/null
+++ b/data/readmes/nimbus-nightly.md
@@ -0,0 +1,174 @@
+# Nimbus - README (nightly)
+
+**Repository**: https://github.com/status-im/nimbus-eth2
+**Version**: nightly
+
+---
+
+# Nimbus Eth2 (Beacon Chain)
+
+[](https://github.com/status-im/nimbus-eth2/actions/workflows/ci.yml?query=branch%3Astable)
+[](https://opensource.org/licenses/Apache-2.0)
+[](https://opensource.org/licenses/MIT)
+
+[](https://github.com/status-im/nimbus-eth2/releases)
+[](https://discord.gg/XRxWahP)
+[](https://join.status.im/nimbus-general)
+[](https://www.gitpoap.io/gh/status-im/nimbus-eth2)
+
+Nimbus-eth2 is an extremely efficient consensus layer (eth2) client implementation. While it is optimised for embedded systems and resource-restricted devices, including Raspberry Pis, its low resource usage also makes it an excellent choice for any server or desktop, where it simply takes up fewer resources.
+
+
+
+
+- [Documentation](#documentation)
+- [Related projects](#related-projects)
+- [Donations](#donations)
+- [Branch guide](#branch-guide)
+- [Developer resources](#developer-resources)
+- [Tooling and utilities](#tooling-and-utilities)
+- [For researchers](#for-researchers)
+ - [Block simulation](#block-simulation)
+ - [Local network simulation](#local-network-simulation)
+ - [Visualising simulation metrics](#visualising-simulation-metrics)
+ - [CI setup](#ci-setup)
+- [License](#license)
+
+
+
+## Documentation
+
+You can find the information you need to run a beacon node and operate as a validator in [The Book](https://nimbus.guide/).
+
+The [Quickstart](https://nimbus.guide/quick-start.html) in particular will help you quickly connect to either mainnet or the Hoodi testnet.
+
+## Quickly test your tooling against Nimbus
+
+The [Nimbus REST API](https://nimbus.guide/rest-api.html) is now available from:
+
+* http://testing.mainnet.beacon-api.nimbus.team/
+* http://unstable.mainnet.beacon-api.nimbus.team/
+* http://testing.hoodi.beacon-api.nimbus.team/
+* http://unstable.hoodi.beacon-api.nimbus.team/
+* http://testing.holesky.beacon-api.nimbus.team/
+* http://unstable.holesky.beacon-api.nimbus.team/
+* http://unstable.sepolia.beacon-api.nimbus.team/
+
+Note that right now these are very much unstable testing instances. They may be unresponsive at times - so **please do not rely on them for validating**. We may also disable them at any time.
+
+## Migrate from another client
+
+This [guide](https://nimbus.guide/migration.html) will take you through the basics of how to migrate to Nimbus from another client. See [here](https://nimbus.guide/migration-options.html) for advanced options.
+
+
+## Related projects
+
+* [status-im/nimbus-eth1](https://github.com/status-im/nimbus-eth1/): Nimbus for Ethereum 1
+* [ethereum/consensus-specs](https://github.com/ethereum/consensus-specs/tree/v1.3.0/#stable-specifications): Consensus specification that this project implements
+
+You can check where the beacon chain fits in the Ethereum ecosystem in our Two-Point-Oh series: https://our.status.im/tag/two-point-oh/
+
+## Donations
+
+If you'd like to contribute to Nimbus development, our donation address is [`0x70E47C843E0F6ab0991A3189c28F2957eb6d3842`](https://etherscan.io/address/0x70E47C843E0F6ab0991A3189c28F2957eb6d3842)
+## Branch guide
+
+* `stable` - latest stable release - **this branch is recommended for most users**
+* `testing` - pre-release branch with features and bugfixes slated for the next stable release - this branch is suitable for use on testnets and for adventurous users that want to live on the edge.
+* `unstable` - main development branch against which PR's are merged - if you want to contribute to Nimbus, start here.
+
+## Developer resources
+
+To get started with developing Nimbus itself, see the [developer handbook](https://nimbus.guide/developers.html).
+
+## Tooling and utilities
+
+We provide several tools to interact with ETH2 and the data in the beacon chain:
+
+* [ncli](ncli/ncli.nim) - command line tool with pretty printers, SSZ decoders, state transition helpers to interact with Eth2 data structures and functions
+* [ncli_db](ncli/ncli_db.nim) - command line tool to perform surgery on the Nimbus sqlite database
+* [multinet](https://github.com/status-im/nimbus-eth2/tree/master/multinet) - a set of scripts to build and run several Eth2 clients locally
+
+## For researchers
+
+### Block simulation
+
+The block simulator can quickly run the Beacon chain state transition function in isolation. The simulation runs without networking and without slot time delays.
+
+```bash
+# build and run the block simulator, then display its help ("-d:release" speeds it
+# up substantially, allowing the simulation of longer runs in reasonable time)
+make NIMFLAGS="-d:release" block_sim
+build/block_sim --help
+```
+
+### Local network simulation
+
+The local network simulation will create a full peer-to-peer network of beacon nodes and validators on a single machine, and run the beacon chain in real time.
+Parameters such as shard count, validator count, and data folders can be set as environment variables before launching the simulation.
+
+```bash
+# Clear data files from your last run and start the simulation with a new genesis block:
+make VALIDATORS=192 NUM_NODES=6 USER_NODES=1 local-testnet-minimal
+
+# In another terminal, get a shell with the right environment variables set:
+./env.sh bash
+
+# In the above example, the network is prepared for 7 beacon nodes but one of
+# them is not started by default (`USER_NODES`) - this is useful to test
+# catching up to the consensus. The following command will start the missing node.
+./tests/simulation/run_node.sh 0 # (or the index (0-based) of the missing node)
+
+# Running a separate node allows you to test sync as well as see what the action
+# looks like from a single node's perspective.
+```
+
+By default, validators will be split in half between beacon node and validator
+client processes (50/50), communicating through the
+[common validator API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi)
+(for example with `192` validators and `6` nodes you will roughly end up with 6
+beacon node and 6 validator client processes, where each of them will handle 16
+validators), but if you don't want to use external validator clients and instead
+want to have all the validators handled by the beacon nodes you may use
+`USE_VC=0` as an additional argument to `make local-testnet-minimal`.
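+
+For example, to run the same simulation with all validators handled directly by the beacon nodes:
+
+```bash
+make USE_VC=0 VALIDATORS=192 NUM_NODES=6 USER_NODES=1 local-testnet-minimal
+```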
+
+_Alternatively, fire up our [experimental Vagrant instance with Nim pre-installed](https://our.status.im/setting-up-a-local-vagrant-environment-for-nim-development/) and give us your feedback about the process!_
+
+### Visualising simulation metrics
+
+
+
+The [generic instructions from the Nimbus repo](https://github.com/status-im/nimbus-eth1/#metric-visualisation) apply here as well.
+
+Specific steps:
+
+```bash
+# This will generate the Prometheus config on the fly, based on the number of nodes:
+make REMOTE_VALIDATORS_COUNT=192 NUM_NODES=6 USER_NODES=0 local-testnet-minimal
+
+# In another terminal tab, after the sim started:
+cd tests/simulation/prometheus
+prometheus
+```
+
+The dashboard you need to import in Grafana is `grafana/beacon_nodes_Grafana_dashboard.json`.
+
+
+
+### CI setup
+
+Local testnets run for 4 epochs each, to test finalization. That happens only on Jenkins Linux hosts, and their logs are available for download as artifacts, from the job's page. Don't expect these artifacts to be kept more than a day after the corresponding branch is deleted.
+
+
+
+## License
+
+Licensed and distributed under either of
+
+* MIT license: [LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT
+
+or
+
+* Apache License, Version 2.0: [LICENSE-APACHEv2](LICENSE-APACHEv2) or https://www.apache.org/licenses/LICENSE-2.0
+
+at your option. These files may not be copied, modified, or distributed except according to those terms.
diff --git a/data/readmes/nocalhost-v0634.md b/data/readmes/nocalhost-v0634.md
new file mode 100644
index 0000000..605cc10
--- /dev/null
+++ b/data/readmes/nocalhost-v0634.md
@@ -0,0 +1,132 @@
+# Nocalhost - README (v0.6.34)
+
+**Repository**: https://github.com/nocalhost/nocalhost
+**Version**: v0.6.34
+
+---
+
+[](https://bestpractices.coreinfrastructure.org/projects/5381)
+
+[](#contributors-)
+
+
+
+[](https://goreportcard.com/report/github.com/nocalhost/nocalhost)
+[](https://github.com/nocalhost/nocalhost/blob/main/LICENSE)
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fnocalhost%2Fnocalhost?ref=badge_shield)
+
+[](https://cloudstudio.net#https://github.com/nocalhost/nocalhost)
+
+
+
+
+
+
+ Most productive way to build cloud-native applications.
+
+
+## Nocalhost
+
+The term Nocalhost originates from "No Local". Nocalhost is an IDE-based cloud-native development tool that provides a real-time development experience for cloud-native applications.
+
+When you develop a cloud-based application with Nocalhost, any code change takes effect on the remote side immediately, with no need to rebuild an image. This shortens the development feedback loop and massively improves R&D efficiency.
+
+In order to give you a better understanding of Nocalhost, it is recommended that you read our blog post [Nocalhost - Refine Cloud Native Dev Environment](https://nocalhost.dev/blog/2021/01/01/)
+
+[](https://www.youtube.com/watch?v=z7I-vopn-gQ)
+
+## Key Features
+
+### IDE Supports
+
+Nocalhost provides easy-to-use IDE extensions for JetBrains and VS Code. These extensions enable developers to develop and debug cloud-based applications from their local machines.
+
+### Start cloud-native application development in one click
+
+No need to configure complex dev environments on your local machine anymore. Nocalhost helps you connect to any Kubernetes environment in one click.
+
+### See code change under a second
+
+Nocalhost automatically synchronizes your code to the container every time you make a change, eliminating the commit, build, and push cycle. This significantly speeds up the development feedback loop, so you see changes in under a second.
+
+### Isolated development space
+
+Every team member can enjoy their own independent development space to ensure that they are not disturbed by others.
+
+## Getting Started
+
+* [Installation](https://nocalhost.dev/docs/installation)
+* [Quick Start](https://nocalhost.dev/docs/quick-start)
+* [How it works](https://nocalhost.dev/docs/introduction/#how-does-it-work)
+
+## Documentation
+
+Full documentation is available on the [Nocalhost website](https://nocalhost.dev/).
+
+## Community
+
+* Meeting: [Google Doc](https://docs.google.com/document/d/19xrULkXK51tO0yupZnHXccC2EpJUlPI4y1eCI2HnjBM)
+* Slack(English): [CNCF Slack](https://slack.cncf.io/) #nocalhost channel
+* WeChat (Chinese): Scan the QR code to add the CODING assistant with the note "Nocalhost"; the assistant will add you to our WeChat group
+
+
+
+## Talks and Conferences
+
+| Engagement | Link |
+| ---------- | ---- |
+| :video_camera: Nocalhost Youtube | [https://www.youtube.com/channel/UC2QC6HvFG8zOtFRvvMzcAUw](https://www.youtube.com/channel/UC2QC6HvFG8zOtFRvvMzcAUw) |
+
+## Changelogs
+
+Check our [Changelogs](https://github.com/nocalhost/nocalhost/releases)
+
+## Contributing
+
+Check out [CONTRIBUTING](./CONTRIBUTING.md) to see how to develop with Nocalhost.
+
+## Code of Conduct
+
+Nocalhost adopts [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)
+
+## License
+
+Nocalhost is [Apache 2.0 licensed](./LICENSE)
+
+
+[](https://app.fossa.com/projects/git%2Bgithub.com%2Fnocalhost%2Fnocalhost?ref=badge_large)
+
+## Roadmap
+
+See [ROADMAP](./ROADMAP.md)
+
+## Contributors ✨
+
+Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
+
+
+
+
+
diff --git a/data/readmes/nodejs-v20196.md b/data/readmes/nodejs-v20196.md
new file mode 100644
index 0000000..1c911d2
--- /dev/null
+++ b/data/readmes/nodejs-v20196.md
@@ -0,0 +1,928 @@
+# Node.js - README (v20.19.6)
+
+**Repository**: https://github.com/nodejs/node
+**Version**: v20.19.6
+
+---
+
+# Node.js
+
+Node.js is an open-source, cross-platform JavaScript runtime environment.
+
+For information on using Node.js, see the [Node.js website][].
+
+The Node.js project uses an [open governance model](./GOVERNANCE.md). The
+[OpenJS Foundation][] provides support for the project.
+
+Contributors are expected to act in a collaborative manner to move
+the project forward. We encourage the constructive exchange of contrary
+opinions and compromise. The [TSC](./GOVERNANCE.md#technical-steering-committee)
+reserves the right to limit or block contributors who repeatedly act in ways
+that discourage, exhaust, or otherwise negatively affect other participants.
+
+**This project has a [Code of Conduct][].**
+
+## Table of contents
+
+* [Support](#support)
+* [Release types](#release-types)
+ * [Download](#download)
+ * [Current and LTS releases](#current-and-lts-releases)
+ * [Nightly releases](#nightly-releases)
+ * [API documentation](#api-documentation)
+ * [Verifying binaries](#verifying-binaries)
+* [Building Node.js](#building-nodejs)
+* [Security](#security)
+* [Contributing to Node.js](#contributing-to-nodejs)
+* [Current project team members](#current-project-team-members)
+ * [TSC (Technical Steering Committee)](#tsc-technical-steering-committee)
+ * [Collaborators](#collaborators)
+ * [Triagers](#triagers)
+ * [Release keys](#release-keys)
+* [License](#license)
+
+## Support
+
+Looking for help? Check out the
+[instructions for getting support](.github/SUPPORT.md).
+
+## Release types
+
+* **Current**: Under active development. Code for the Current release is in the
+ branch for its major version number (for example,
+ [v22.x](https://github.com/nodejs/node/tree/v22.x)). Node.js releases a new
+ major version every 6 months, allowing for breaking changes. This happens in
+ April and October every year. Releases appearing each October have a support
+ life of 8 months. Releases appearing each April convert to LTS (see below)
+ each October.
+* **LTS**: Releases that receive Long Term Support, with a focus on stability
+ and security. Every even-numbered major version will become an LTS release.
+ LTS releases receive 12 months of _Active LTS_ support and a further 18 months
+ of _Maintenance_. LTS release lines have alphabetically-ordered code names,
+ beginning with v4 Argon. There are no breaking changes or feature additions,
+ except in some special circumstances.
+* **Nightly**: Code from the Current branch built every 24 hours when there are
+ changes. Use with caution.
+
+Current and LTS releases follow [semantic versioning](https://semver.org). A
+member of the Release Team [signs](#release-keys) each Current and LTS release.
+For more information, see the
+[Release README](https://github.com/nodejs/Release#readme).
+
+### Download
+
+Binaries, installers, and source tarballs are available at
+.
+
+#### Current and LTS releases
+
+
+
+The [latest](https://nodejs.org/download/release/latest/) directory is an
+alias for the latest Current release. The latest-_codename_ directory is an
+alias for the latest release from an LTS line. For example, the
+[latest-hydrogen](https://nodejs.org/download/release/latest-hydrogen/)
+directory contains the latest Hydrogen (Node.js 18) release.
+
+#### Nightly releases
+
+
+
+Each directory and filename includes the version (e.g., `v22.0.0`),
+followed by the UTC date (e.g., `20240424` for April 24, 2024),
+and the short commit SHA of the HEAD of the release (e.g., `ddd0a9e494`).
+For instance, a full directory name might look like `v22.0.0-nightly20240424ddd0a9e494`.
+
+#### API documentation
+
+Documentation for the latest Current release is at .
+Version-specific documentation is available in each release directory in the
+_docs_ subdirectory. Version-specific documentation is also at
+.
+
+### Verifying binaries
+
+Download directories contain a `SHASUMS256.txt.asc` file with SHA checksums for the
+files and the releaser PGP signature.
+
+You can get a trusted keyring from nodejs/release-keys, e.g. using `curl`:
+
+```bash
+curl -fsLo "/path/to/nodejs-keyring.kbx" "https://github.com/nodejs/release-keys/raw/HEAD/gpg/pubring.kbx"
+```
+
+Alternatively, you can import the releaser keys into your default keyring; see
+[Release keys](#release-keys) for the commands to do that.
+
+Then, you can verify the files you've downloaded locally
+(if you're using your default keyring, pass `--keyring="${GNUPGHOME:-~/.gnupg}/pubring.kbx"`):
+
+```bash
+curl -fsO "https://nodejs.org/dist/${VERSION}/SHASUMS256.txt.asc" \
+&& gpgv --keyring="/path/to/nodejs-keyring.kbx" --output SHASUMS256.txt < SHASUMS256.txt.asc \
+&& shasum --check SHASUMS256.txt --ignore-missing
+```
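+
+As a concrete sketch for the release this README accompanies, the following downloads and verifies the source tarball (the keyring path is the one downloaded above; the tarball name follows the standard `node-<version>.tar.gz` convention):
+
+```bash
+VERSION=v20.19.6
+curl -fsO "https://nodejs.org/dist/${VERSION}/node-${VERSION}.tar.gz"
+curl -fsO "https://nodejs.org/dist/${VERSION}/SHASUMS256.txt.asc"
+gpgv --keyring="/path/to/nodejs-keyring.kbx" --output SHASUMS256.txt < SHASUMS256.txt.asc
+shasum --check SHASUMS256.txt --ignore-missing
+```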
+
+## Building Node.js
+
+See [BUILDING.md](BUILDING.md) for instructions on how to build Node.js from
+source and a list of supported platforms.
+
+## Security
+
+For information on reporting security vulnerabilities in Node.js, see
+[SECURITY.md](./SECURITY.md).
+
+## Contributing to Node.js
+
+* [Contributing to the project][]
+* [Working Groups][]
+* [Strategic initiatives][]
+* [Technical values and prioritization][]
+
+## Current project team members
+
+For information about the governance of the Node.js project, see
+[GOVERNANCE.md](./GOVERNANCE.md).
+
+
+
+### TSC (Technical Steering Committee)
+
+#### TSC voting members
+
+
+
+* [aduh95](https://github.com/aduh95) -
+ **Antoine du Hamel** <> (he/him)
+* [anonrig](https://github.com/anonrig) -
+ **Yagiz Nizipli** <> (he/him)
+* [benjamingr](https://github.com/benjamingr) -
+ **Benjamin Gruenbaum** <>
+* [BridgeAR](https://github.com/BridgeAR) -
+ **Ruben Bridgewater** <> (he/him)
+* [gireeshpunathil](https://github.com/gireeshpunathil) -
+ **Gireesh Punathil** <> (he/him)
+* [jasnell](https://github.com/jasnell) -
+ **James M Snell** <> (he/him)
+* [joyeecheung](https://github.com/joyeecheung) -
+ **Joyee Cheung** <> (she/her)
+* [legendecas](https://github.com/legendecas) -
+ **Chengzhong Wu** <> (he/him)
+* [marco-ippolito](https://github.com/marco-ippolito) -
+ **Marco Ippolito** <> (he/him)
+* [mcollina](https://github.com/mcollina) -
+ **Matteo Collina** <> (he/him)
+* [panva](https://github.com/panva) -
+ **Filip Skokan** <> (he/him)
+* [RafaelGSS](https://github.com/RafaelGSS) -
+ **Rafael Gonzaga** <> (he/him)
+* [RaisinTen](https://github.com/RaisinTen) -
+ **Darshan Sen** <> (he/him)
+* [richardlau](https://github.com/richardlau) -
+ **Richard Lau** <>
+* [ronag](https://github.com/ronag) -
+ **Robert Nagy** <>
+* [ruyadorno](https://github.com/ruyadorno) -
+ **Ruy Adorno** <> (he/him)
+* [ShogunPanda](https://github.com/ShogunPanda) -
+ **Paolo Insogna** <> (he/him)
+* [targos](https://github.com/targos) -
+ **Michaël Zasso** <> (he/him)
+* [tniessen](https://github.com/tniessen) -
+ **Tobias Nießen** <> (he/him)
+
+#### TSC regular members
+
+* [BethGriggs](https://github.com/BethGriggs) -
+ **Beth Griggs** <> (she/her)
+* [bnoordhuis](https://github.com/bnoordhuis) -
+ **Ben Noordhuis** <>
+* [cjihrig](https://github.com/cjihrig) -
+ **Colin Ihrig** <> (he/him)
+* [codebytere](https://github.com/codebytere) -
+ **Shelley Vohr** <> (she/her)
+* [GeoffreyBooth](https://github.com/GeoffreyBooth) -
+ **Geoffrey Booth** <> (he/him)
+* [MoLow](https://github.com/MoLow) -
+ **Moshe Atlow** <> (he/him)
+* [Trott](https://github.com/Trott) -
+ **Rich Trott** <> (he/him)
+
+
+
+TSC emeriti members
+
+#### TSC emeriti members
+
+* [addaleax](https://github.com/addaleax) -
+ **Anna Henningsen** <> (she/her)
+* [apapirovski](https://github.com/apapirovski) -
+ **Anatoli Papirovski** <> (he/him)
+* [ChALkeR](https://github.com/ChALkeR) -
+ **Сковорода Никита Андреевич** <> (he/him)
+* [chrisdickinson](https://github.com/chrisdickinson) -
+ **Chris Dickinson** <>
+* [danbev](https://github.com/danbev) -
+ **Daniel Bevenius** <> (he/him)
+* [danielleadams](https://github.com/danielleadams) -
+ **Danielle Adams** <> (she/her)
+* [evanlucas](https://github.com/evanlucas) -
+ **Evan Lucas** <> (he/him)
+* [fhinkel](https://github.com/fhinkel) -
+ **Franziska Hinkelmann** <> (she/her)
+* [Fishrock123](https://github.com/Fishrock123) -
+ **Jeremiah Senkpiel** <> (he/they)
+* [gabrielschulhof](https://github.com/gabrielschulhof) -
+ **Gabriel Schulhof** <>
+* [gibfahn](https://github.com/gibfahn) -
+ **Gibson Fahnestock** <> (he/him)
+* [indutny](https://github.com/indutny) -
+ **Fedor Indutny** <>
+* [isaacs](https://github.com/isaacs) -
+ **Isaac Z. Schlueter** <