Dell Validated Designs for AI PCs
are open-source reference guides that streamline development of AI applications meant to run on Dell AI PCs with Dell Pro AI Studio.
This project showcases a Progressive Web App (PWA) chat interface with advanced RAG (Retrieval Augmented Generation) capabilities. This application supports both local document management and remote vector database integration for enterprise-level document retrieval.
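At its core, the RAG flow embeds document chunks, stores the vectors, and retrieves the chunks closest to each user query before handing them to the LLM. The sketch below illustrates just the retrieval step with cosine similarity; the toy vectors stand in for real embeddings-model output and are not the app's actual implementation.

```typescript
// Minimal sketch of the RAG retrieval step: rank stored chunks by
// cosine similarity to the query vector and keep the top k.
// The vectors here are toy stand-ins for real embedding output.
interface Chunk {
  text: string;
  vector: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}
```

The retrieved chunks are then injected into the prompt as context for the generation model.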
Features
- 🤖 Chat interface for conversational AI interactions
- 📄 Local document management for personal RAG
- 🌐 Company Documents integration with remote vector databases
- 💾 Persistent chat history with IndexedDB
- 📝 Support for multiple document formats (PDF, text, etc.)
- 📱 Progressive Web App (PWA) for offline and mobile use
- 🎨 Customizable themes and appearance
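For the persistent chat history (💾 above), each message is stored as a structured record in IndexedDB. A sketch of what such a record can look like — the field names here are illustrative, not necessarily the app's actual RxDB schema:

```typescript
// Illustrative shape of a persisted chat message; the real schema
// used by the app may differ.
interface ChatMessage {
  id: string;
  sessionId: string;
  role: "user" | "assistant";
  content: string;
  createdAt: number; // epoch milliseconds
}

function newMessage(
  sessionId: string,
  role: ChatMessage["role"],
  content: string
): ChatMessage {
  return {
    // Combine session, timestamp, and a random suffix for a unique id
    id: `${sessionId}-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`,
    sessionId,
    role,
    content,
    createdAt: Date.now(),
  };
}
```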
📝 Dell Pro AI Studio Core, a Text Generation Large Language Model, and a Text Embeddings model must be installed before accessing the live demo. See Installing Dell Pro AI Studio for more information.
Before installing the project, you'll need to set up your development environment:
For Dell Pro AI Studio, you may either install the dependencies, the Dell AI Framework, and the required models manually, or use the Dell Pro AI Studio Command Line Interface (dpais CLI) for easier setup.
Refer to the installation guide for full details on installation and usage.
# Install dpais CLI
winget install Dell.DPAIS
# Install Dell Pro AI Studio dependencies, Dell AI Framework, and select and install initial models
dpais init
# Dell Pro AI Studio Chat requires at least a text embeddings model and a text generation model to be installed.
Before using Dell Pro AI Studio models, ensure you have the following prerequisites installed:
1. Required Runtimes
2. Dell Pro AI Studio Core

   Choose the appropriate version for your system:

   | Architecture | Download Link |
   | --- | --- |
   | ARM64 | Download Core ARM64 |
   | x64 | Download Core x64 |

3. Recommended Models

   Download these starter models to begin using Dell Pro AI Studio from Dell Enterprise Hub: dell.huggingface.co

   | Model Type | Dell Enterprise Hub Model |
   | --- | --- |
   | Text Generation | Dell Enterprise Hub: Microsoft Phi-3.5 Mini Instruct |
   | Text Generation | Dell Enterprise Hub: IBM Granite 4.0 H Small |
   | Text Generation | Dell Enterprise Hub: IBM Granite 4.0 H Tiny |
   | Text Embeddings* | Dell Enterprise Hub: Nomic Embed Text v1.5 |

   *Required for Document Chat
📝 For detailed installation instructions, please refer to the Dell Pro AI Studio Core Installation Guide
# Install NVM
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
# Add these lines to your ~/.bashrc, ~/.zshrc, ~/.profile, or ~/.bash_profile
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
# Reload your shell configuration
source ~/.bashrc # or source ~/.zshrc, etc.
# Verify NVM installation
nvm --version
- Download the NVM for Windows installer from: https://github.com/coreybutler/nvm-windows/releases
- Run the installer (nvm-setup.exe)
- Open a new Command Prompt or PowerShell window
- Verify installation:
nvm version
# Install Node.js 22 (LTS)
nvm install 22
# Use Node.js 22
nvm use 22
# Verify installation
node --version
npm --version
# Install Yarn globally
npm install -g yarn
# Verify installation
yarn --version
The backend uses Podman Compose to manage containerized services like PostgreSQL with pgvector.
# Install Podman
# For Ubuntu/Debian
sudo apt-get install -y podman podman-compose
# For Fedora
sudo dnf install -y podman
# Verify installation
podman --version
podman-compose --version
Follow the instructions at https://podman.io/getting-started/installation#windows to install Podman Desktop, which includes Podman Compose.
# Clone the repository
git clone https://github.com/yourusername/dpais-chat-template-pwa.git
cd dpais-chat-template-pwa
# Install dependencies
yarn install
# Create a .env file from the template
cp .env.example .env
# Configure your environment variables in .env
You can run both the frontend and backend services using a single command:
# Start both backend (with podman-compose) and frontend
yarn start
# Start only the frontend
yarn dev
# Start only the backend
yarn backend:start
# Stop the backend services
yarn backend:stop
# View backend logs
yarn backend:logs
# Restart the backend services
yarn backend:restart
The application currently supports connecting to PostgreSQL with the pgvector extension for vector search operations:
- PGVector: Full integration with PostgreSQL's vector extension for document similarity search
- Additional databases: Support for other vector databases (Milvus, Qdrant, Weaviate, Chroma, Pinecone) is planned for future releases
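For orientation, a pgvector similarity search boils down to ordering rows by a distance operator over the embedding column. A minimal sketch of building such a query — the `documents` table and `embedding` column are assumed names for illustration, not necessarily the backend's actual schema:

```typescript
// Format a number array as a pgvector literal, e.g. "[0.1,0.2]".
function toVectorLiteral(v: number[]): string {
  return `[${v.join(",")}]`;
}

// Build a cosine-distance query; <=> is pgvector's cosine-distance
// operator. "documents" and "embedding" are illustrative names only.
function similarityQuery(limit: number): string {
  // $1 is bound to the string produced by toVectorLiteral
  return `SELECT id, content, embedding <=> $1::vector AS distance
FROM documents
ORDER BY distance
LIMIT ${limit}`;
}
```

The lowest distance means the closest match, so ordering ascending by the `<=>` result returns the most similar documents first.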
For detailed information about the backend API, see the Backend API README.
The backend provides a REST API for vector database operations. Follow these steps to set up and use the backend:
# Start both frontend and backend services
yarn start
# Alternatively, start only the backend
yarn backend:start
The backend includes scripts to load sample data into the PGVector database using a local Python venv:
# start the backend if it is not already started
yarn backend:start
# setup local venv (only need to run this once)
yarn backend:setup-venv
# Clear existing data and load NASA Apollo mission transcripts
yarn backend:load-data
# Or load only the mission data
yarn backend:load-missions
# To clear the database without loading new data
yarn backend:clear-data
For more details about loading data, see the Data Loading Scripts README.
Once the backend is running, you can access:
- API documentation: http://localhost:8000/docs
- Health check: http://localhost:8000/health
# Run all backend tests
yarn backend:test
# Run only backend unit tests
yarn backend:test:unit
# Run only backend integration tests
yarn backend:test:integration
For development without containers, you can use the mock server:
yarn backend:mock
The mock server provides the same API endpoints but uses in-memory data instead of actual vector databases.
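The idea behind the mock server can be sketched as a store that exposes the same search interface but keeps documents in memory. Here the scoring is naive word overlap rather than real embeddings, and the shapes are illustrative, not the mock server's actual code:

```typescript
// Sketch of an in-memory document store with the same add/search
// contract a vector database would offer. Scoring is simple word
// overlap — a stand-in for embedding similarity, for illustration only.
interface Doc {
  id: string;
  text: string;
}

class InMemoryStore {
  private docs: Doc[] = [];

  add(doc: Doc): void {
    this.docs.push(doc);
  }

  search(query: string, limit = 5): Doc[] {
    const words = new Set(query.toLowerCase().split(/\s+/));
    return this.docs
      .map((d) => ({
        doc: d,
        // Count how many query words appear in the document text
        score: d.text.toLowerCase().split(/\s+/)
          .filter((w) => words.has(w)).length,
      }))
      .filter((r) => r.score > 0)
      .sort((a, b) => b.score - a.score)
      .slice(0, limit)
      .map((r) => r.doc);
  }
}
```

Because the interface matches, client code exercised against the mock behaves the same way when pointed at a real backend.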
You can test the vector database integration with:
# Test the UI and client-side integration without requiring databases
yarn test:db-data
# Test data loader functionality (file structure and config)
yarn test:data-loader
# Run all UI and structure tests
yarn test:all
These tests verify:
- UI components for database configuration
- RAG query functionality with sample questions
- Data loader structure and configuration
- Vector database configuration detection
The tests are resilient to varying environments and will generate helpful diagnostics even when databases aren't available.
Note: "docker" and "podman" are used interchangeably here, but the implementation is tested with Podman.
The application includes comprehensive Podman integration tests that verify the entire stack functions correctly with real vector databases:
# Run the Podman integration test script (interactive)
yarn test:docker
# Run Podman integration tests directly
yarn test:docker-integration
# Environment variables:
# AUTO_START=1 # Automatically start containers without prompting
# SKIP_CONTAINER_CHECK=1 # Skip container availability check
# DEBUG=1 # Enable verbose debugging output
# Examples:
AUTO_START=1 yarn test:docker # Start containers automatically and run tests
The Podman integration tests verify:
- Podman container status for all databases
- Data loader execution and success
- Database connections and query capabilities
- End-to-end RAG functionality with the sample data
These tests generate detailed logs and screenshots in the test-results directory that you can use for diagnostics.
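The environment flags listed above can be interpreted along these lines — the flag names come from this README, while the parsing logic is an illustrative sketch rather than the test script's actual code:

```typescript
// Sketch of interpreting the integration-test environment flags.
// Flag names (AUTO_START, SKIP_CONTAINER_CHECK, DEBUG) are from the
// README; the parsing itself is illustrative.
interface TestFlags {
  autoStart: boolean;          // start containers without prompting
  skipContainerCheck: boolean; // skip container availability check
  debug: boolean;              // verbose debugging output
}

function parseFlags(env: Record<string, string | undefined>): TestFlags {
  const on = (v: string | undefined) => v === "1";
  return {
    autoStart: on(env.AUTO_START),
    skipContainerCheck: on(env.SKIP_CONTAINER_CHECK),
    debug: on(env.DEBUG),
  };
}
```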
The application includes comprehensive test suites for both frontend and backend:
# Frontend tests (Playwright)
yarn test
# Backend tests (Python pytest)
yarn backend:test
# Backend unit tests only
yarn backend:test:unit
# Backend integration tests
yarn backend:test:integration
# Frontend with Docker integration tests
yarn test:docker-integration
After installation, you'll need to configure the default settings in the application:
1. Start the development server:
   yarn dev
2. Open the application in your browser (http://localhost:5173)
3. Click on the Settings icon in the sidebar
4. Configure the following settings:
   - Select your preferred embeddings model (e.g., OpenAI's text-embedding-3-small)
     - This model will be used for all document embeddings
   - Set your primary LLM model (e.g., Phi3.5-mini-instruct, Qwen2.5:1.5b)
   - Configure fallback models if needed
   - Set model parameters (temperature, max tokens, etc.)
   - Enter your DPAIS URL (e.g., https://api.dpais.com/v1)
   - Test the connection to verify it works
5. Save all settings
Note: These settings can be changed later through the Settings dialog. The configuration is persisted in your browser's local storage.
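Settings persisted this way are typically restored by merging whatever was stored with sensible defaults. A sketch of that pattern — the keys and default values below are illustrative, drawn from the examples above, not the app's exact storage format:

```typescript
// Sketch of restoring persisted settings with fallbacks to defaults.
// Keys and default values are illustrative only.
interface AppSettings {
  embeddingsModel: string;
  llmModel: string;
  temperature: number;
  dpaisUrl: string;
}

const DEFAULTS: AppSettings = {
  embeddingsModel: "nomic-embed-text-v1.5",
  llmModel: "phi-3.5-mini-instruct",
  temperature: 0.7,
  dpaisUrl: "https://api.dpais.com/v1",
};

// Restore settings from a serialized blob (e.g. read from local
// storage), falling back to defaults for missing or unparsable data.
function loadSettings(raw: string | null): AppSettings {
  if (!raw) return { ...DEFAULTS };
  try {
    return { ...DEFAULTS, ...JSON.parse(raw) };
  } catch {
    return { ...DEFAULTS };
  }
}
```

Merging over defaults means a settings blob written by an older version of the app still loads cleanly when new keys are added later.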
# Start the development server
yarn dev
# The application will be available at http://localhost:5173
# Build the application
yarn build
# Preview the production build locally
yarn preview
To deploy the application:
1. Build the application:
   yarn build
2. The production build will be available in the `dist` directory
3. Deploy the contents of the `dist` directory to your hosting service
- Start a new chat session
- Enter your query in the input box
- Wait for the AI assistant's response
- Navigate to the Documents tab in the sidebar
- Upload documents you want to use for RAG
- Select a document or tag to include in the current chat session
- Ask questions related to your documents
- Navigate to the Company tab in the sidebar
- Set up vector database connections in Settings
- Search across configured vector databases
- Add vector database sources to your chat for context-aware answers
Configure your LLM API settings in the Settings dialog:
- API Base URL for Dell Pro AI Studio
- API Key: Authentication key for the service
- Embeddings Model: Model to use for document embeddings
- Go to Settings > Vector Databases
- Click "Add Vector Database"
- Select the database type and fill in connection details
- Test the connection to verify it works
- Save the configuration
You can add multiple instances of each vector database type and combine them in your RAG workflow.
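Combining several configured databases in one RAG query amounts to fanning the search out to each source and re-ranking the merged hits. A sketch of that merge step — the `Hit` shape and field names are assumptions for illustration, not the app's actual types:

```typescript
// Sketch of merging search results from multiple vector databases:
// flatten all per-source hits and re-rank by score. The Hit shape
// is illustrative, not the app's actual type.
interface Hit {
  source: string; // which configured database produced the hit
  text: string;
  score: number;  // higher = more similar
}

function mergeResults(perSource: Hit[][], limit: number): Hit[] {
  return perSource
    .flat()
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```

Note this assumes the sources report comparable scores; mixing distance metrics across databases would require normalizing scores before ranking.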
1. Node.js Version Issues
   - Ensure you're using Node.js 22: nvm use 22
   - Clear the npm cache: npm cache clean --force
2. Dependency Issues
   - Delete node_modules and yarn.lock: rm -rf node_modules yarn.lock
   - Reinstall dependencies: yarn install
3. Environment Variables
   - Ensure all required environment variables are set in .env
   - Restart the development server after updating .env
- React with TypeScript
- Material UI for interface components
- LangChain for RAG workflow integration
- RxDB for local database management
- Various vector database client libraries