Building data platforms and AI systems.
Google Cloud • Southeast Asia
- AI Interpretability: Activation probing, steering vectors, sandbagging detection
- Distributed Systems: Formal verification, UPIR framework, large-scale data platforms
- LLM Infrastructure: Evaluation at scale, context management (MCP), agentic architectures
- I Trained Probes to Catch AI Models Sandbagging - First empirical demonstration of activation-level sandbagging detection
- Why Steering Vectors Beat Prompting (And When They Don't) - Testing activation steering on agent behaviors
- Why I Built a Spark-Native LLM Evaluation Framework - Distributed LLM evaluation that actually scales
- The MCP Maturity Model - A framework for evaluating multi-agent context strategy
- From 11% to 88% Peak Bandwidth: Custom Triton Kernels for LLM Inference
→ More at subhadipmitra.com/blog
Data Platforms         │ AI/ML Infrastructure     │ Research
───────────────────────┼──────────────────────────┼─────────────────────
Context-Aware Pipelines│ LLM Evaluation           │ Interpretability
ETLC / Context Stores  │ Model Serving            │ Activation Steering
Agent-Ready Platforms  │ MLOps / LLMOps           │ Formal Verification
Dynamic Context Engines│ Agentic Systems          │ AI Safety
Singapore • Google Cloud • Schedule a call