
Explainable AI (XAI) • Model interpretability • Feature attribution • Model diagnostics • SHAP / LIME • Trust in AI • Black-box analysis • Responsible AI • Visual analytics for ML

Dedication

I would like to express my deepest gratitude to the Hood Research Department, whose work inspired this project. Their guidance taught me to question foundational knowledge and to strive for a comprehensive understanding of complex processes. I dedicate this research on XAI and blockchain to them. The outcomes and discoveries presented herein will be implemented on the xaiQo platform, making them accessible to others and contributing to the ongoing advancement of the field.

Popular repositories

  1. llvm-ir-graph-embedding

    An HPC-oriented LLVM pass that extracts semantic Control-Data Flow Graphs (CDFGs) from LLVM Intermediate Representation for use with Graph Neural Networks. Enables cross-language code retrieval and clone detection beyond token-level approaches.

    Python

  2. .github

  3. AIFlow

    A framework for end-to-end AI inference optimization: from model parsing and graph IR, through graph and kernel optimizations, to hardware profiling, auto-tuning, and visualization.

    Python

  4. torchxai

    Explainability toolkit for PyTorch: unified API, integrated XAI methods, metrics, and visualization (see the sketch after this list).

    Python
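
torchxai describes a unified API over integrated XAI methods, but its actual interface is not shown on this page. The sketch below therefore does not use torchxai itself; it is a minimal plain-PyTorch example of gradient-based saliency attribution, the kind of feature-attribution primitive such a toolkit typically wraps. The `saliency` helper, the toy model, and the tensor shapes are illustrative assumptions, not torchxai code.

```python
# Minimal input-gradient (saliency) attribution in plain PyTorch.
# Illustrative sketch only: the model, shapes, and helper name are
# placeholders, not the torchxai API.
import torch
import torch.nn as nn

def saliency(model: nn.Module, inputs: torch.Tensor, target: int) -> torch.Tensor:
    """Return |d logit_target / d input| as a per-feature attribution map."""
    model.eval()
    inputs = inputs.clone().detach().requires_grad_(True)
    logits = model(inputs)                  # shape: (batch, num_classes)
    score = logits[:, target].sum()         # scalar objective for backprop
    score.backward()
    return inputs.grad.detach().abs()       # attribution = |gradient| per feature

if __name__ == "__main__":
    # Toy classifier over 10 input features and 3 classes.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
    x = torch.randn(4, 10)
    attr = saliency(model, x, target=1)
    print(attr.shape)  # torch.Size([4, 10]): one attribution score per feature
```

A unified toolkit would expose the same call pattern (model in, attributions out) for other methods such as Integrated Gradients or SHAP-style estimators, which is what makes a single API and shared metrics practical.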
