- Germany
- LinkedIn: in/ajay-mh
Pinned repositories
- activation-patching-framework (Python): Causal intervention framework for mechanistic interpretability research. Implements activation patching to identify causally important components in transformer language models (see the patching sketch below).
- logit-lens-explorer (Python): Mechanistic interpretability tool that visualizes GPT-2's layer-by-layer predictions using the logit lens technique (see the logit-lens sketch below).
- induction-head-detector (Python): Mechanistic interpretability tool that detects induction heads in GPT-2 using TransformerLens (see the detection sketch below).
- minigpt-shakespeare (Python): From-scratch GPT implementation trained on Shakespeare (10.75M parameters, 25 tests; see the decoder-block sketch below).
- bpe-tokenizer-scratch (Python): Byte-Pair Encoding tokenizer built from scratch in Python, the same algorithm used by GPT-2 (see the BPE training sketch below).
- toy-model-superposition (Python): A minimal Python implementation replicating Anthropic's "Toy Models of Superposition", illustrating feature superposition under sparse activations (see the superposition sketch below).
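Minimal sketches of the techniques behind these repos follow. First, the activation-patching idea from activation-patching-framework, shown on a toy PyTorch model rather than a real transformer (the model, the chosen layer, and the hook names are illustrative assumptions, not the repo's API): cache an activation from a clean run, splice it into a corrupted run, and check how much of the clean behavior returns.

```python
# Activation patching on a toy model: if splicing the clean activation into the
# corrupted run restores the clean output, that layer is causally important.
# The model, layer choice, and names here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(),
                      nn.Linear(8, 8), nn.ReLU(),
                      nn.Linear(8, 2))          # stand-in for a block stack
layer = model[2]                                # hypothetical layer of interest

clean = torch.randn(1, 8)
corrupted = clean + torch.randn(1, 8)           # perturbed "prompt"
cache = {}

def save_hook(module, inputs, output):          # cache the clean activation
    cache["act"] = output.detach()

def patch_hook(module, inputs, output):         # overwrite with the clean value
    return cache["act"]

h = layer.register_forward_hook(save_hook)
clean_out = model(clean)
h.remove()

h = layer.register_forward_hook(patch_hook)
patched_out = model(corrupted)
h.remove()

print("clean:    ", clean_out)
print("corrupted:", model(corrupted))
print("patched:  ", patched_out)
```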
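The logit lens behind logit-lens-explorer, sketched with Hugging Face transformers (the prompt and the choice to decode only the last position are my assumptions; the repo's own interface may differ): each layer's residual stream is passed through the final LayerNorm and the unembedding to see what the model would predict at that depth.

```python
# Logit lens over GPT-2: decode every layer's residual stream with the final
# LayerNorm and unembedding. Prompt and last-position decoding are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The Eiffel Tower is in", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# hidden_states: embedding output plus one tensor per block, each (1, seq, d_model).
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))  # decode last position
    print(f"layer {layer:2d}: {tok.decode(logits.argmax(-1))!r}")
```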
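Induction-head detection in the spirit of induction-head-detector, using TransformerLens (a real library; the scoring recipe is the standard one from its demos, and the period length is an arbitrary choice): on a repeated random sequence, an induction head attends from each token back to the token that followed the previous occurrence, i.e. to offset -(L-1) for period L.

```python
# Score every GPT-2 head by its mean attention on the "induction stripe" of a
# repeated random sequence. Period length and vocab range are arbitrary choices.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
L = 50
half = torch.randint(1000, 10000, (1, L))
tokens = torch.cat([half, half], dim=1).to(model.cfg.device)  # period-L repeat

_, cache = model.run_with_cache(tokens, return_type=None)

scores = torch.zeros(model.cfg.n_layers, model.cfg.n_heads)
for layer in range(model.cfg.n_layers):
    pattern = cache["pattern", layer]               # (batch, head, query, key)
    stripe = pattern.diagonal(offset=1 - L, dim1=-2, dim2=-1)
    scores[layer] = stripe.mean(dim=(0, 2))         # mean over batch, position

top = torch.topk(scores.flatten(), k=5)
for score, idx in zip(top.values, top.indices):
    layer, head = divmod(int(idx), model.cfg.n_heads)
    print(f"L{layer}H{head}: induction score {score.item():.2f}")
```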
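The core of a from-scratch GPT like minigpt-shakespeare is a stack of decoder blocks. A minimal pre-LayerNorm block in PyTorch (the dimensions and the use of nn.MultiheadAttention are my simplifications; the repo presumably implements attention itself):

```python
# One GPT-2-style decoder block: causal self-attention and an MLP, each with a
# pre-LayerNorm and a residual connection. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, x):
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)  # causal
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + a                          # residual around attention
        return x + self.mlp(self.ln2(x))   # residual around MLP

x = torch.randn(2, 16, 64)     # (batch, seq, d_model)
print(Block()(x).shape)        # torch.Size([2, 16, 64])
```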
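The training loop at the heart of bpe-tokenizer-scratch, as a minimal sketch (the toy corpus and merge count are assumptions; the merge rule itself is the byte-level BPE that GPT-2 uses): repeatedly find the most frequent adjacent pair of token ids and replace it with a new id.

```python
# Minimal byte-level BPE training: start from raw bytes, greedily merge the
# most frequent adjacent pair. Corpus and merge count are toy assumptions.
from collections import Counter

def train_bpe(text: str, num_merges: int):
    tokens = list(text.encode("utf-8"))           # byte ids 0..255, as GPT-2 does
    merges = {}
    next_id = 256
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))  # count adjacent pairs
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]         # most frequent pair
        merges[best] = next_id
        out, i = [], 0                            # replace every occurrence
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
                out.append(next_id)
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
        next_id += 1
    return merges, tokens

merges, tokens = train_bpe("low lower lowest low low", num_merges=10)
print(len(merges), "merges learned; sequence length:", len(tokens))
```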
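Finally, the setup replicated by toy-model-superposition, sketched with assumed hyperparameters: n sparse features are squeezed through a bottleneck of m < n dimensions and reconstructed as ReLU(W^T W x + b); under high sparsity the trained W stores more than m features in superposition.

```python
# Anthropic-style toy model of superposition: reconstruct sparse features
# through a bottleneck. n, m, sparsity, lr, and step count are assumptions.
import torch

n, m, sparsity = 20, 5, 0.95
W = torch.nn.Parameter(0.1 * torch.randn(m, n))
b = torch.nn.Parameter(torch.zeros(n))
opt = torch.optim.Adam([W, b], lr=1e-2)

for step in range(3000):
    # Each feature is active with probability 1 - sparsity, uniform in [0, 1).
    x = torch.rand(1024, n) * (torch.rand(1024, n) > sparsity)
    y = torch.relu(x @ W.T @ W + b)        # row-vector form of ReLU(W^T W x + b)
    loss = ((y - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Features whose column of W has near-unit norm are represented; with m = 5 and
# high sparsity, far more than 5 norms end up near 1, i.e. superposition.
print("column norms of W:", W.detach().norm(dim=0))
```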