An intelligent pharmaceutical analysis platform with AI-powered insights and real-time data processing.
```
PharmaLens/
├── ai_engine/   # Python AI/ML backend
├── server/      # Node.js API server
├── client/      # React frontend
└── docs/        # Documentation
```
- Python 3.8+ (for AI engine)
- Node.js 16+ (for server and client)
- Git
```
git clone https://github.com/ritik0506/PharmaLens.git
cd PharmaLens
```

AI engine:

```
cd ai_engine

# Install dependencies (globally or in a venv)
pip install -r requirements.txt

# Copy environment file
copy .env.example .env

# Edit .env and add your OpenAI API key for LLM features
# Note: the app works in deterministic mode without an API key
```

Server:

```
cd ../server

# Install dependencies
npm install

# Copy environment file
copy .env.example .env

# Edit .env if needed
```

Client:

```
cd ../client

# Install dependencies
npm install

# Copy environment file
copy .env.example .env

# Edit .env if needed
```

To start all services automatically, run the startup script:

```
.\start.ps1
```

Or start each service manually:

Terminal 1: Start AI Engine

```
cd ai_engine
python -m uvicorn app.main:app --reload --port 8000
```

Terminal 2: Start Server

```
cd server
npm start
```

Terminal 3: Start Client

```
cd client
npm run dev
```

Note: No virtual environment is required if Python dependencies are installed globally.
- Frontend: http://localhost:5173 (or 5174 if 5173 is in use)
- API Server: http://localhost:3001
- AI Engine: http://localhost:8000
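Once the services are running, a quick TCP check confirms each one is listening. This is a small convenience sketch, not part of the project; the ports are the defaults listed above (adjust if yours differ):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Default ports from the URLs above
services = {"client": 5173, "server": 3001, "ai_engine": 8000}

if __name__ == "__main__":
    for name, port in services.items():
        status = "up" if is_listening("localhost", port) else "down"
        print(f"{name:10s} (port {port}): {status}")
```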
- `OPENAI_API_KEY`: Your OpenAI API key (optional, enables GPT-4 powered agents)
- `CLOUD_ENABLED`: Enable/disable the cloud LLM
- `LOCAL_ENABLED`: Enable/disable the local model
- `LOCAL_MODEL_PATH`: Path to the local Llama model file (`.gguf`)
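For example, a minimal `ai_engine/.env` enabling cloud mode might look like this (the key value is a placeholder; the variable names are the ones listed above):

```
OPENAI_API_KEY=sk-your-key-here
CLOUD_ENABLED=true
LOCAL_ENABLED=false
```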
Note: The application works without any LLM configuration, using deterministic responses.
To enable AI-powered insights:
- Quick Setup: Run `.\setup-llm.ps1` (interactive configuration)
- Detailed Guide: See `LLM_SETUP_GUIDE.md`
Server (`server/.env`):

- `PORT`: Server port (default: 3001)
- `CLIENT_URL`: Frontend URL for CORS
- `AI_ENGINE_URL`: AI engine URL

Client (`client/.env`):

- `VITE_API_URL`: Backend API URL
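With the default ports above, a matching pair of `.env` files might look like this (values are illustrative):

```
# server/.env
PORT=3001
CLIENT_URL=http://localhost:5173
AI_ENGINE_URL=http://localhost:8000

# client/.env
VITE_API_URL=http://localhost:3001
```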
- 🤖 AI-powered pharmaceutical analysis (Cloud GPT-4 or Local Llama)
- ⚡ Real-time data processing with 12 specialized agents
- 📊 Interactive dashboard with ROI calculations
- 🎯 Multi-agent system (IQVIA, Clinical, Patent, Market, etc.)
- 🔒 Dual LLM support: Cloud (OpenAI) or Local (HIPAA-compliant)
- 🧪 Deterministic mode for testing without LLM
The application supports three modes:
- Cloud Mode - OpenAI GPT-4 (best quality, requires API key)
- Local Mode - Llama models (HIPAA-compliant, requires model download)
- Deterministic Mode - Pre-programmed responses (no setup required)
See LLM_SETUP_GUIDE.md for configuration instructions.
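The mode selection described above can be pictured with a small sketch. This is an illustrative approximation, not the project's actual code; it only assumes the preference order implied by the list (cloud, then local, then deterministic) and the setting names documented earlier:

```python
def choose_mode(env: dict) -> str:
    """Pick an analysis mode from environment-style settings.

    Preference order mirrors the modes listed above:
    cloud (needs an API key), then local (needs a model file),
    then the zero-setup deterministic fallback.
    """
    if env.get("CLOUD_ENABLED") == "true" and env.get("OPENAI_API_KEY"):
        return "cloud"
    if env.get("LOCAL_ENABLED") == "true" and env.get("LOCAL_MODEL_PATH"):
        return "local"
    return "deterministic"

print(choose_mode({"CLOUD_ENABLED": "true", "OPENAI_API_KEY": "sk-..."}))
print(choose_mode({}))
```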
Detailed documentation is available in the `docs/` directory.
When working on this project:
- Create a new branch for your feature
- Make your changes
- Test thoroughly
- Submit a pull request
If you get a port conflict error, change the port in the respective .env file.
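Before editing the `.env` file, you can ask the OS for an unused port. This is a small helper sketch, not part of the project:

```python
import socket

def free_port() -> int:
    """Ask the OS for an unused TCP port by binding to port 0."""
    with socket.socket() as s:
        s.bind(("", 0))
        return s.getsockname()[1]

print(free_port())
```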
Make sure you're using compatible versions of Python and Node.js.
If you're using a virtual environment, make sure it's activated before installing Python packages or running the AI engine.
Each team member should:
- Clone the repository
- Set up all three components (ai_engine, server, client)
- Copy `.env.example` to `.env` in each directory
- Install dependencies
- Create a new branch for their work
MIT License - See LICENSE file for details