A modular, extensible system for constructing optimized AI prompts through layered composition, and for generating code from natural-language descriptions.
The Cyber Prompt Builder application follows a well-defined path from user input to code generation:
- User Input: User enters a prompt describing the code they want to generate
- Prompt Processing: The prompt is enhanced and optimized
- Provider Selection: The appropriate AI provider is selected (OpenAI, Claude, or Gemini)
- Code Generation: The AI generates code based on the prompt
- Response Processing: The response is parsed and formatted
- Code Display: The generated code is displayed in the editor
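The flow above can be sketched end-to-end. This is an illustrative pipeline only; the function names and the provider-selection heuristic are assumptions, not the application's actual API:

```typescript
// Minimal sketch of the input-to-code pipeline described above.
// All names here are illustrative, not the real services.
type Provider = "openai" | "claude" | "gemini";

// Prompt processing: enhance the raw user prompt.
function enhance(userPrompt: string): string {
  return `Generate only code.\n\n${userPrompt.trim()}`;
}

// Provider selection: toy heuristic (longer prompts to a larger-context model).
function selectProvider(prompt: string): Provider {
  return prompt.length > 2000 ? "claude" : "gemini";
}

const FENCE = "`".repeat(3);

// Response processing: pull the first fenced code block, if any.
function extractCode(response: string): string {
  const re = new RegExp(FENCE + "[a-z]*\\n([\\s\\S]*?)" + FENCE);
  const match = response.match(re);
  return match ? match[1] : response;
}

function runPipeline(
  userPrompt: string,
  callModel: (p: Provider, prompt: string) => string
): string {
  const enhanced = enhance(userPrompt);
  const provider = selectProvider(enhanced);
  const response = callModel(provider, enhanced);
  return extractCode(response); // Code display would render this in the editor.
}
```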
CyberPromptBuilder is designed as a layered prompt construction system that allows precise control over AI prompts through composable components. The system follows functional composition patterns with priority-based ordering.
- PromptBuilder: Central orchestration service
- PromptLayers: Modular components (system, task, memory, preferences)
- CompositionStrategies: Pluggable composition algorithms
- Provider Integration: Adapters for various AI providers
- Layered Composition: Build prompts from distinct semantic layers
- Priority-Based Ordering: Control the importance of different prompt components
- Provider-Specific Optimization: Format prompts optimally for each AI provider
- Context Integration: Seamlessly incorporate contextual information
- Memory Support: Include relevant previous interactions
- Integrated Routing: Smart provider selection based on prompt characteristics
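Layered composition with priority-based ordering can be pictured as follows. This is a simplified sketch; the real PromptBuilder interfaces differ, and the layer shape here is an assumption:

```typescript
// Illustrative sketch of priority-ordered layer composition.
interface PromptLayer {
  type: "system" | "task" | "memory" | "preferences";
  priority: number; // higher-priority layers are emitted first
  content: string;
}

function composeLayers(layers: PromptLayer[]): string {
  return [...layers]
    .sort((a, b) => b.priority - a.priority)
    .map(l => l.content)
    .join("\n\n");
}

const composed = composeLayers([
  { type: "task", priority: 50, content: "Write a date-formatting helper." },
  { type: "system", priority: 100, content: "You are a TypeScript expert." },
  { type: "memory", priority: 10, content: "Earlier: user prefers pure functions." },
]);
// The system layer leads the prompt because it carries the highest priority.
```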
CyberPromptBuilder seamlessly integrates with multiple AI service providers:
- Claude: Optimized for Claude's system message format and capabilities
- OpenAI: Structured for chat completion endpoints
- Gemini: Formatted for Google's Gemini API
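The per-provider differences amount to shaping one composed prompt into each provider's request format. The payload shapes below are simplified sketches of the public APIs, not the project's actual adapter code:

```typescript
// Hedged sketch: the same system + user prompt shaped per provider.
type TargetProvider = "claude" | "openai" | "gemini";

function toProviderPayload(
  provider: TargetProvider,
  system: string,
  user: string
): object {
  switch (provider) {
    case "claude":
      // Claude accepts the system prompt as a top-level field.
      return { system, messages: [{ role: "user", content: user }] };
    case "openai":
      // OpenAI chat completions carry the system prompt inside the messages array.
      return {
        messages: [
          { role: "system", content: system },
          { role: "user", content: user },
        ],
      };
    case "gemini":
      // Gemini uses "contents" made of parts, with a separate system instruction.
      return {
        systemInstruction: { parts: [{ text: system }] },
        contents: [{ role: "user", parts: [{ text: user }] }],
      };
  }
}
```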
```shell
npm install
npm run build
firebase login
firebase deploy
```

Detailed guide: FIREBASE_QUICKSTART.md
```shell
npm install
# Create .env.local with your API keys (see .env.example)
npm run dev
```

Push to the main branch and it will auto-deploy via GitHub Actions!

Full deployment options: DEPLOYMENT_GUIDE.md
The application is production-ready with the following features:
- ✅ Comprehensive error handling and logging
- ✅ Environment validation and security checks
- ✅ Performance optimizations and code splitting
- ✅ Security headers and input sanitization
- ✅ Browser compatibility checks
- ✅ Graceful fallbacks for missing dependencies
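The environment-validation step can be sketched as a startup check. This is an illustrative helper, not the project's actual validator; only the variable names come from this README:

```typescript
// Hypothetical startup validation: require at least one provider key
// and a sane NODE_ENV before the app boots.
function validateEnv(env: Record<string, string | undefined>): string[] {
  const errors: string[] = [];
  const providerKeys = [
    "REACT_APP_PROVIDERS_GEMINI_API_KEY",
    "REACT_APP_PROVIDERS_OPENAI_API_KEY",
    "REACT_APP_PROVIDERS_CLAUDE_API_KEY",
  ];
  if (!providerKeys.some(k => env[k])) {
    errors.push("At least one AI provider API key must be set.");
  }
  if (!["production", "development", "test"].includes(env.NODE_ENV ?? "")) {
    errors.push("NODE_ENV must be production, development, or test.");
  }
  return errors;
}
```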
- Get a Free Gemini API Key (Recommended for beginners)
  - Visit Google AI Studio
  - Sign in with your Google account
  - Create a new API key (free tier includes generous limits)
  - Copy the API key
- Configure the Application
  - Set the environment variable: `REACT_APP_PROVIDERS_GEMINI_API_KEY=your_key_here`
  - Or configure through the Settings UI
  - Start using the AI features!
- Alternative Free Options
  - OpenAI: $5 free credit for new accounts at OpenAI Platform
  - Anthropic Claude: Free tier available at Anthropic Console

Note: The `.env.example` file now uses `REACT_APP_PROVIDERS_*` keys instead of the old `VITE_*` names. Render expects these variable names when configuring your service.
```shell
# Required
NODE_ENV=production
REACT_APP_APP_ENVIRONMENT=production

# At least one AI provider API key
REACT_APP_PROVIDERS_GEMINI_API_KEY=your_gemini_key
# OR
REACT_APP_PROVIDERS_OPENAI_API_KEY=your_openai_key
# OR
REACT_APP_PROVIDERS_CLAUDE_API_KEY=your_claude_key

# Optional configuration
REACT_APP_PROVIDERS_DEFAULT_PROVIDER=gemini
REACT_APP_AGENT_MAX_ITERATIONS=3
REACT_APP_PROMPT_BUILDER_MAX_TOKENS=4096
REACT_APP_PUBLIC_URL=/
```

```shell
npm install
npm run build
npm run build:check  # Validates build output
npm start            # Starts production server
```

- Health endpoint: /health
- Config endpoint: /api/config
- Environment validation runs on startup
```typescript
// The chat interface automatically handles:
// - Provider selection (Gemini, OpenAI, Claude)
// - API key management
// - Code extraction from responses
// - Error handling

// Example prompts to try:
"Create a React component for a todo list"
"Explain how async/await works in JavaScript"
"Write a Python function to sort a list"
"Help me debug this CSS flexbox layout"
```

```typescript
import { promptBuilderService, MemoryEntryType } from './services/prompt-builder';

// Create a system prompt
const systemId = promptBuilderService.createSystemPrompt(
  'You are an expert TypeScript developer.'
);

// Create a task instruction
const taskId = promptBuilderService.createTaskInstruction(
  'Create a utility function that formats dates.'
);

// Add a specific example
promptBuilderService.addTaskExample(
  taskId,
  'formatDate(new Date(), "YYYY-MM-DD") → "2025-05-11"'
);

// Create a memory layer with context
const memoryId = promptBuilderService.createMemoryLayer();
promptBuilderService.addMemoryEntry(
  memoryId,
  MemoryEntryType.CODE,
  'function getISODate(date) { return date.toISOString().split("T")[0]; }',
  'project_code'
);

// Compose the prompt
const prompt = promptBuilderService.compose();

// Generate AI content
const result = await aiService.generateCode({
  prompt: prompt.text
});
```

The system integrates with the model router to optimize prompts for specific providers:

```typescript
import { enhancePrompt } from './services/prompt-builder/model-router-extensions';

// Enhance a raw prompt with the builder
const enhancedPrompt = await enhancePrompt(
  { content: 'Create a React component for a to-do list' },
  {
    isCodeTask: true,
    language: 'typescript',
    provider: 'claude'
  }
);

// Send to AI provider
const result = await provider.generateCode(enhancedPrompt);
```

Filter specific layers for different contexts:

```typescript
// Create a filter for only system and task layers
const filter = new SimpleLayerFilter(layer =>
  layer.type === 'system' || layer.type === 'task'
);

// Compose with the filter
const filteredPrompt = promptBuilderService.compose(filter);
```

For detailed documentation, see:
- Prompt Builder Architecture
- Service Integration Guide
- Provider Integration
- Prompt to Code Flow
- Prompt Examples
- Deployment Guide
- Layer Prioritization: Assign priorities that reflect logical importance
- Context Management: Include only relevant contextual information
- Provider Specificity: Use provider-specific formatters for optimal results
- Memory Usage: Strategically include previous interactions for continuity
- Preset Usage: Leverage built-in presets for common scenarios
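As a concrete instance of the context-management and memory-usage guidance above, a trimming step might keep only the most relevant memory entries within a budget. This helper is hypothetical (not part of the library); the entry shape and scoring are assumptions:

```typescript
// Illustrative memory-trimming helper: keep the most relevant entries
// within a rough character budget, so stale context does not crowd out
// the task itself.
interface MemoryEntry {
  content: string;
  relevance: number;  // 0..1, however the application scores it
  timestamp: number;
}

function trimMemory(entries: MemoryEntry[], budgetChars: number): MemoryEntry[] {
  const kept: MemoryEntry[] = [];
  let used = 0;
  // Greedily admit entries by descending relevance while they fit the budget.
  for (const e of [...entries].sort((a, b) => b.relevance - a.relevance)) {
    if (used + e.content.length > budgetChars) continue;
    kept.push(e);
    used += e.content.length;
  }
  // Restore chronological order for the final prompt.
  return kept.sort((a, b) => a.timestamp - b.timestamp);
}
```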
MIT