🧠 NeuroLink

The Enterprise AI SDK for Production Applications

12 Providers | 58+ MCP Tools | HITL Security | Redis Persistence


Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.

🧠 What is NeuroLink?

NeuroLink is the universal AI integration platform that unifies 12 major AI providers and 100+ models under one consistent API.

Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 12 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

Why NeuroLink? Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK—whichever fits your workflow.
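
For example, switching from OpenAI to Gemini is a one-parameter change. A minimal sketch using the generate API shown throughout this README (the prompt and provider IDs are illustrative):

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Identical call shape - only the provider parameter changes
const fromOpenAI = await neurolink.generate({
  input: { text: "Summarize this quarter's churn drivers" },
  provider: "openai",
});

const fromGemini = await neurolink.generate({
  input: { text: "Summarize this quarter's churn drivers" },
  provider: "google-ai",
});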

Where we're headed: We're building for the future of AI—edge-first execution and continuous streaming architectures that make AI practically free and universally available. Read our vision →

Get Started in <5 Minutes →


What's New (Q1 2026)

| Feature | Version | Description | Guide |
| --- | --- | --- | --- |
| Image Generation with Gemini | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (imagen-3.0-generate-002). High-quality image synthesis directly from Google AI. | Image Generation Guide |
| HTTP/Streamable HTTP Transport | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | HTTP Transport Guide |

// Image Generation with Gemini (v8.31.0)
const image = await neurolink.generateImage({
  prompt: "A futuristic cityscape",
  provider: "google-ai",
  model: "imagen-3.0-generate-002",
});

// HTTP Transport for Remote MCP (v8.29.0)
await neurolink.addExternalMCPServer("remote-tools", {
  transport: "http",
  url: "https://mcp.example.com/v1",
  headers: { Authorization: "Bearer token" },
  retries: 3,
  timeout: 15000,
});

Previous Updates (Q4 2025)
  • Image Generation – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. → Guide
  • Gemini 3 Preview Support – Full support for gemini-3-flash-preview and gemini-3-pro-preview with extended thinking
  • Structured Output with Zod Schemas – Type-safe JSON generation with automatic validation; see the sketch after this list. → Guide
  • CSV & PDF File Support – Attach CSV/PDF files to prompts with auto-detection. → CSV | PDF
  • LiteLLM & SageMaker – Access 100+ models via LiteLLM, deploy custom models on SageMaker. → LiteLLM | SageMaker
  • OpenRouter Integration – Access 300+ models through a single unified API. → Guide
  • HITL & Guardrails – Human-in-the-loop approval workflows and content filtering middleware. → HITL | Guardrails
  • Redis & Context Management – Session export, conversation history, and automatic summarization. → History
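
A minimal sketch of the structured-output pattern mentioned above. The schema option name and result shape are assumptions here; the linked guide is authoritative:

import { NeuroLink } from "@juspay/neurolink";
import { z } from "zod";

// Zod schema describing the JSON we want back
const ReleaseNote = z.object({
  title: z.string(),
  highlights: z.array(z.string()),
  breaking: z.boolean(),
});

const neurolink = new NeuroLink();

const result = await neurolink.generate({
  input: { text: "Draft release notes for v8.31.0 as structured data" },
  schema: ReleaseNote, // hypothetical option name - see the Structured Output guide
});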

Enterprise Security: Human-in-the-Loop (HITL)

NeuroLink includes a production-ready HITL system for regulated industries and high-stakes AI operations:

| Capability | Description | Use Case |
| --- | --- | --- |
| Tool Approval Workflows | Require human approval before AI executes sensitive tools | Financial transactions, data modifications |
| Output Validation | Route AI outputs through human review pipelines | Medical diagnosis, legal documents |
| Confidence Thresholds | Automatically trigger human review below a confidence level | Critical business decisions |
| Complete Audit Trail | Full audit logging for compliance (HIPAA, SOC2, GDPR) | Regulated industries |

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  hitl: {
    enabled: true,
    requireApproval: ["writeFile", "executeCode", "sendEmail"],
    confidenceThreshold: 0.85,
    reviewCallback: async (action, context) => {
      // Custom review logic - integrate with your approval system
      return await yourApprovalSystem.requestReview(action);
    },
  },
});

// AI pauses for human approval before executing sensitive tools
const result = await neurolink.generate({
  input: { text: "Send quarterly report to stakeholders" },
});

Enterprise HITL Guide | Quick Start

Get Started in Two Steps

# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup

# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"

Need a persistent workspace? Launch loop mode with npx @juspay/neurolink loop - Learn more →

🌟 Complete Feature Set

NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

🤖 AI Provider Integration

12 providers unified under one API - Switch providers with a single parameter change.

| Provider | Models | Free Tier | Tool Support | Status | Documentation |
| --- | --- | --- | --- | --- | --- |
| OpenAI | GPT-4o, GPT-4o-mini, o1 | — | ✅ Full | ✅ Production | Setup Guide |
| Anthropic | Claude 3.5/3.7 Sonnet, Opus | — | ✅ Full | ✅ Production | Setup Guide |
| Google AI Studio | Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| AWS Bedrock | Claude, Titan, Llama, Nova | — | ✅ Full | ✅ Production | Setup Guide |
| Google Vertex | Gemini 3/2.5 (gemini-3-*-preview) | — | ✅ Full | ✅ Production | Setup Guide |
| Azure OpenAI | GPT-4, GPT-4o, o1 | — | ✅ Full | ✅ Production | Setup Guide |
| LiteLLM | 100+ models unified | Varies | ✅ Full | ✅ Production | Setup Guide |
| AWS SageMaker | Custom deployed models | — | ✅ Full | ✅ Production | Setup Guide |
| Mistral AI | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| Hugging Face | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | Setup Guide |
| Ollama | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | Setup Guide |
| OpenAI Compatible | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | Setup Guide |

📖 Provider Comparison Guide - Detailed feature matrix and selection criteria 🔬 Provider Feature Compatibility - Test-based compatibility reference for all 19 features across 12 providers


🔧 Built-in Tools & MCP Integration

6 Core Tools (work across all providers, zero configuration):

| Tool | Purpose | Auto-Available | Documentation |
| --- | --- | --- | --- |
| getCurrentTime | Real-time clock access | ✅ | Tool Reference |
| readFile | File system reading | ✅ | Tool Reference |
| writeFile | File system writing | ✅ | Tool Reference |
| listDirectory | Directory listing | ✅ | Tool Reference |
| calculateMath | Mathematical operations | ✅ | Tool Reference |
| websearchGrounding | Google Vertex web search | ⚠️ Requires credentials | Tool Reference |
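
Because core tools are auto-available, a plain generate call is enough for the model to invoke them. A minimal sketch (the prompt and file path are illustrative):

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// The model can call getCurrentTime and readFile without any registration
const result = await neurolink.generate({
  input: { text: "What time is it? Then summarize ./notes/agenda.md" },
});

console.log(result.content);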

58+ External MCP Servers supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

// stdio transport - local MCP servers via command execution
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// HTTP transport - remote MCP servers via URL
await neurolink.addExternalMCPServer("github-copilot", {
  transport: "http",
  url: "https://api.githubcopilot.com/mcp",
  headers: { Authorization: "Bearer YOUR_COPILOT_TOKEN" },
  timeout: 15000,
  retries: 5,
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});

MCP Transport Options:

| Transport | Use Case | Key Features |
| --- | --- | --- |
| stdio | Local servers | Command execution, environment variables |
| http | Remote servers | URL-based, auth headers, retries, rate limiting |
| sse | Event streams | Server-Sent Events, real-time updates |
| websocket | Bi-directional | Full-duplex communication |
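
The stdio and http examples above generalize to the streaming transports. A sketch for an SSE server, assuming the sse transport takes the same url/headers options as http (the endpoint is hypothetical; see the MCP Integration Guide for the exact option set):

// SSE transport - assumed to mirror the http transport options
await neurolink.addExternalMCPServer("event-feed", {
  transport: "sse",
  url: "https://mcp.example.com/sse", // hypothetical endpoint
  headers: { Authorization: "Bearer YOUR_TOKEN" },
});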

📖 MCP Integration Guide - Setup external servers 📖 HTTP Transport Guide - Remote MCP server configuration


💻 Developer Experience Features

SDK-First Design with TypeScript, IntelliSense, and type safety:

| Feature | Description | Documentation |
| --- | --- | --- |
| Auto Provider Selection | Intelligent provider fallback | SDK Guide |
| Streaming Responses | Real-time token streaming | Streaming Guide |
| Conversation Memory | Automatic context management | Memory Guide |
| Full Type Safety | Complete TypeScript types | Type Reference |
| Error Handling | Graceful provider fallback | Error Guide |
| Analytics & Evaluation | Usage tracking, quality scores | Analytics Guide |
| Middleware System | Request/response hooks | Middleware Guide |
| Framework Integration | Next.js, SvelteKit, Express | Framework Guides |
| Extended Thinking | Native thinking/reasoning mode for Gemini 3 and Claude models | Thinking Guide |
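
Streaming on the CLI is the stream command shown later; on the SDK side, the sketch below assumes a stream method that yields token chunks. Both the method name and the chunk shape are assumptions - treat the Streaming Guide as authoritative:

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Hypothetical SDK streaming API - consult the Streaming Guide for the real one
const stream = await neurolink.stream({
  input: { text: "Tell a short story about a lighthouse keeper" },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.content ?? ""); // assumed chunk shape
}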

🏢 Enterprise & Production Features

Production-ready capabilities for regulated industries:

| Feature | Description | Use Case | Documentation |
| --- | --- | --- | --- |
| Enterprise Proxy | Corporate proxy support | Behind firewalls | Proxy Setup |
| Redis Memory | Distributed conversation state | Multi-instance deployment | Redis Guide |
| Cost Optimization | Automatic cheapest model selection | Budget control | Cost Guide |
| Multi-Provider Failover | Automatic provider switching | High availability | Failover Guide |
| Telemetry & Monitoring | OpenTelemetry integration | Observability | Telemetry Guide |
| Security Hardening | Credential management, auditing | Compliance | Security Guide |
| Custom Model Hosting | SageMaker integration | Private models | SageMaker Guide |
| Load Balancing | LiteLLM proxy integration | Scale & routing | Load Balancing |

Security & Compliance:

  • ✅ SOC2 Type II compliant deployments
  • ✅ ISO 27001 certified infrastructure compatible
  • ✅ GDPR-compliant data handling (EU providers available)
  • ✅ HIPAA compatible (with proper configuration)
  • ✅ Hardened OS verified (SELinux, AppArmor)
  • ✅ Zero credential logging
  • ✅ Encrypted configuration storage

📖 Enterprise Deployment Guide - Complete production checklist


Enterprise Persistence: Redis Memory

Production-ready distributed conversation state for multi-instance deployments:

Capabilities

| Feature | Description | Benefit |
| --- | --- | --- |
| Distributed Memory | Share conversation context across instances | Horizontal scaling |
| Session Export | Export full history as JSON | Analytics, debugging, audit |
| Auto-Detection | Automatic Redis discovery from environment | Zero-config in containers |
| Graceful Failover | Falls back to in-memory if Redis unavailable | High availability |
| TTL Management | Configurable session expiration | Memory management |

Quick Setup

import { NeuroLink } from "@juspay/neurolink";

// Auto-detect Redis from REDIS_URL environment variable
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis", // Automatically uses REDIS_URL
    ttl: 86400, // 24-hour session expiration
  },
});

// Or explicit configuration
const neurolinkExplicit = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
    redis: {
      host: "redis.example.com",
      port: 6379,
      password: process.env.REDIS_PASSWORD,
      tls: true, // Enable for production
    },
  },
});

// Export conversation for analytics
const history = await neurolink.exportConversation({ format: "json" });
await saveToDataWarehouse(history);

Docker Quick Start

# Start Redis
docker run -d --name neurolink-redis -p 6379:6379 redis:7-alpine

# Configure NeuroLink
export REDIS_URL=redis://localhost:6379

# Start your application
node your-app.js

Redis Setup Guide | Production Configuration | Migration Patterns


🎨 Professional CLI

15+ commands for every workflow:

| Command | Purpose | Example | Documentation |
| --- | --- | --- | --- |
| setup | Interactive provider configuration | neurolink setup | Setup Guide |
| generate | Text generation | neurolink gen "Hello" | Generate |
| stream | Streaming generation | neurolink stream "Story" | Stream |
| status | Provider health check | neurolink status | Status |
| loop | Interactive session | neurolink loop | Loop |
| mcp | MCP server management | neurolink mcp discover | MCP CLI |
| models | Model listing | neurolink models | Models |
| eval | Model evaluation | neurolink eval | Eval |

📖 Complete CLI Reference - All commands and options

💰 Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

Cost Optimization Features

  • 💰 Automatic Cost Optimization: Selects cheapest models for simple tasks
  • 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
  • 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
  • ⚡ Intelligent Fallback: Seamless switching when providers fail
# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider

Revolutionary Interactive CLI

NeuroLink's CLI goes beyond simple commands - it's a full AI development environment:

Why Interactive Mode Changes Everything

| Feature | Traditional CLI | NeuroLink Interactive |
| --- | --- | --- |
| Session State | None | Full persistence |
| Memory | Per-command | Conversation-aware |
| Configuration | Flags per command | /set persists across session |
| Tool Testing | Manual per tool | Live discovery & testing |
| Streaming | Optional | Real-time default |

Live Demo: Development Session

$ npx @juspay/neurolink loop --enable-conversation-memory

neurolink > /set provider vertex
✓ provider set to vertex (Gemini 3 support enabled)

neurolink > /set model gemini-3-flash-preview
✓ model set to gemini-3-flash-preview

neurolink > Analyze my project architecture and suggest improvements

✓ Analyzing your project structure...
[AI provides detailed analysis, remembering context]

neurolink > Now implement the first suggestion
[AI remembers previous context and implements suggestion]

neurolink > /mcp discover
✓ Discovered 58 MCP tools:
   GitHub: create_issue, list_repos, create_pr...
   PostgreSQL: query, insert, update...
   [full list]

neurolink > Use the GitHub tool to create an issue for this improvement
✓ Creating issue... (requires HITL approval if configured)

neurolink > /export json > session-2026-01-01.json
✓ Exported 15 messages to session-2026-01-01.json

neurolink > exit
Session saved. Resume with: neurolink loop --session session-2026-01-01.json

Session Commands Reference

| Command | Purpose |
| --- | --- |
| /set <key> <value> | Persist configuration (provider, model, temperature) |
| /mcp discover | List all available MCP tools |
| /export json | Export conversation to JSON |
| /history | View conversation history |
| /clear | Clear context while keeping settings |

Interactive CLI Guide | CLI Reference

Prefer to skip the wizard and configure manually? See docs/getting-started/provider-setup.md.

CLI & SDK Essentials

The neurolink CLI mirrors the SDK, so teams can script experiments in the shell and codify them in TypeScript later.

# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
  --provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
  --enable-analytics --enable-evaluation --format json
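
The same platform from the SDK - here combining conversation memory, orchestration, and mixed file attachments: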
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "examples/data/invoice.pdf", // Auto-detected as PDF
      "./diagrams/architecture.png", // Auto-detected as image
    ],
  },
  provider: "vertex", // PDF-capable provider (see docs/features/pdf-support.md)
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);

Gemini 3 with Extended Thinking

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Use Gemini 3 with extended thinking for complex reasoning
const result = await neurolink.generate({
  input: {
    text: "Solve this step by step: What is the optimal strategy for...",
  },
  provider: "vertex",
  model: "gemini-3-flash-preview",
  thinkingLevel: "medium", // Options: "minimal", "low", "medium", "high"
});

console.log(result.content);

Full command and API breakdown lives in docs/cli/commands.md and docs/sdk/api-reference.md.

Platform Capabilities at a Glance

| Capability | Highlights |
| --- | --- |
| Provider unification | 12+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
| Multimodal pipeline | Stream images + CSV data + PDF documents across providers with local/remote assets. Auto-detection for mixed file types. |
| Quality & governance | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
| Memory & context | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
| CLI tooling | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
| Enterprise ops | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
| Tool ecosystem | MCP auto discovery, HTTP/stdio/SSE/WebSocket transports, LiteLLM hub access, SageMaker custom deployment, web search. |

Documentation Map

| Area | When to Use | Link |
| --- | --- | --- |
| Getting started | Install, configure, run first prompt | docs/getting-started/index.md |
| Feature guides | Understand new functionality front-to-back | docs/features/index.md |
| CLI reference | Command syntax, flags, loop sessions | docs/cli/index.md |
| SDK reference | Classes, methods, options | docs/sdk/index.md |
| Integrations | LiteLLM, SageMaker, MCP, Mem0 | docs/litellm-integration.md |
| Advanced | Middleware, architecture, streaming patterns | docs/advanced/index.md |
| Cookbook | Practical recipes for common patterns | docs/cookbook/index.md |
| Guides | Migration, Redis, troubleshooting, provider selection | docs/guides/index.md |
| Operations | Configuration, troubleshooting, provider matrix | docs/reference/index.md |

New in 2026: Enhanced Documentation

Expanded guides now cover enterprise features, provider intelligence, the middleware system, Redis & persistence, migration paths, developer experience, and integrations; find them through the Documentation Map above.

Contributing & Support


NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.
