diff --git a/samples/calibration-dispatcher-agent/.env.example b/samples/calibration-dispatcher-agent/.env.example
new file mode 100644
index 00000000..f14faef6
--- /dev/null
+++ b/samples/calibration-dispatcher-agent/.env.example
@@ -0,0 +1,11 @@
+UIPATH_ACCESS_TOKEN=UIPATH_ACCESS_TOKEN
+UIPATH_URL=https://cloud.uipath.com/
+UIPATH_TENANT_ID=UIPATH_TENANT_ID
+UIPATH_ORGANIZATION_ID=UIPATH_ORGANIZATION_ID
+EQUIPMENT_ENTITY_ID=EQUIPMENT_ENTITY_ID
+CLINICS_ENTITY_ID=CLINICS_ENTITY_ID
+TECHNICIANS_ENTITY_ID=TECHNICIANS_ENTITY_ID
+UIPATH_FOLDER_PATH=Calibration Services
+USE_MOCK_DATA=false
+AUTO_APPROVE_IN_LOCAL=true
+USE_MCP=false
diff --git a/samples/calibration-dispatcher-agent/README.md b/samples/calibration-dispatcher-agent/README.md
new file mode 100644
index 00000000..778faa02
--- /dev/null
+++ b/samples/calibration-dispatcher-agent/README.md
@@ -0,0 +1,384 @@
+# Calibration Dispatcher Agent
+
+A production-grade autonomous agent for medical device calibration scheduling using UiPath SDK, LangGraph, and Context Grounding.
+
+## Overview
+
+This agent automates the complex process of scheduling medical equipment calibration visits across multiple healthcare facilities.
It demonstrates advanced UiPath integration patterns including: + +- **LangGraph StateGraph** workflow with Human-in-the-Loop (HITL) via Action Center +- **Context Grounding** for policy retrieval using Orchestrator Storage Buckets +- **Data Fabric** for equipment, clinic, and technician management +- **Google Maps API** integration for route optimization +- **MCP Server** integration for RPA workflow execution +- **Dynamic constraint management** with manager override capabilities + +### Business Value + +- **99% faster planning**: 2-4 hours manual scheduling → 2-3 minutes automated +- **27% route reduction**: Optimized waypoint sequencing via Google Maps +- **100% error elimination**: Automated constraint enforcement and SLA management + +## Features + +### Core Capabilities + +1. **Intelligent Route Planning** + - Priority-based device grouping (OVERDUE, URGENT, SCHEDULED) + - SLA-aware scheduling (24h/48h/72h response times) + - City-based clustering for efficient routing + - Technician specialization matching + +2. **Human-in-the-Loop Approval** + - Action Center integration with revision support + - Manager override for constraints + - Approval, Rejection, and Change Request workflows + - Automatic revision tracking (max 3 iterations) + +3. **Context Grounding RAG** + - Policy retrieval from Orchestrator Storage Buckets + - Calibration rules, routing guidelines, service procedures + - Constraint extraction and enforcement + - Fallback to default values if retrieval fails + +4. **Route Optimization** + - Google Maps API waypoint optimization + - Distance and duration calculations + - Traffic-aware routing + - Multi-city support + +5. 
**Notification & Tracking**
+   - Email notifications via RPA workflows
+   - Slack integration (optional)
+   - Service order creation in Data Fabric
+   - Audit trail for all decisions
+
+## Architecture
+
+```
+┌─────────────────────────────────────────────────────────────┐
+│                Calibration Dispatcher Agent                 │
+├─────────────────────────────────────────────────────────────┤
+│                                                             │
+│  ┌──────────────┐         ┌──────────────┐                  │
+│  │  Equipment   │────────►│   Analysis   │                  │
+│  │    Status    │         │  & Grouping  │                  │
+│  └──────────────┘         └──────┬───────┘                  │
+│         │                        │                          │
+│         │                        ▼                          │
+│         │                 ┌──────────────┐                  │
+│         │                 │   Context    │                  │
+│         │                 │  Grounding   │◄─────Storage     │
+│         │                 │    (RAG)     │      Bucket      │
+│         │                 └──────┬───────┘                  │
+│         │                        │                          │
+│         ▼                        ▼                          │
+│  ┌──────────────┐         ┌──────────────┐                  │
+│  │    Route     │────────►│    Human     │                  │
+│  │ Optimization │         │   Approval   │                  │
+│  └──────────────┘         │    (HITL)    │                  │
+│                           └──────┬───────┘                  │
+│                                  │                          │
+│                                  ▼                          │
+│                           ┌──────────────┐                  │
+│                           │     RPA      │                  │
+│                           │  Workflows   │◄─────MCP Server  │
+│                           └──────────────┘                  │
+└─────────────────────────────────────────────────────────────┘
+```
+
+## Technology Stack
+
+- **Python 3.11+** with LangChain/LangGraph
+- **UiPath SDK** for platform integration
+- **UiPath Context Grounding** for RAG pattern
+- **UiPath Data Fabric** for entity management
+- **UiPath Action Center** for HITL workflows
+- **UiPath MCP Server** for RPA integration
+- **Google Maps API** for route optimization
+- **OpenAI GPT-4** via UiPath LLM Gateway
+
+## Quick Start
+
+### Prerequisites
+
+- UiPath Automation Cloud account with:
+  - Data Service enabled
+  - AI Trust Layer access (Context Grounding)
+  - Action Center application deployed
+  - (Optional) MCP Server with RPA workflows
+- Python 3.11 or newer
+- Google Maps API key (optional, for route optimization)
+- UiPath CLI installed and authenticated
+
+### Installation
+
+1. **Clone and Navigate**
+
+```bash
+git clone https://github.com/UiPath/uipath-langchain-python.git
+cd uipath-langchain-python/samples/calibration-dispatcher-agent
+```
+
+2. 
**Create Virtual Environment** + +```bash +python3 -m venv venv +source venv/bin/activate # On Windows: venv\Scripts\activate +``` + +3. **Install Dependencies** + +```bash +pip install -r requirements.txt +``` + +4. **Authenticate with UiPath** + +```bash +uipath auth +``` + +Select your tenant and organization when prompted. + +## Configuration + +All environment-specific settings are centralized in `config.py`. Update these values according to your UiPath environment: + +### Critical Configuration + +1. **Data Fabric Entity IDs** (Required) + +```python +# In config.py or .env +EQUIPMENT_ENTITY_ID="your-equipment-entity-id" +CLINICS_ENTITY_ID="your-clinics-entity-id" +TECHNICIANS_ENTITY_ID="your-technicians-entity-id" +``` + +Find these IDs in: **Orchestrator > Data Service > Entities > [Entity Name] > Details** + +2. **Context Grounding Index** (Required) + +```python +# In config.py or .env +CONTEXT_GROUNDING_INDEX_NAME="Calibration Procedures" +``` + +Create this index in: **Orchestrator > Tenant > Indexes** + +3. **Folder Path** (Required) + +```python +# In config.py or .env +UIPATH_FOLDER_PATH="Calibration Services" +``` + +## Setup Guide + +- Data Fabric entities and sample data +- Orchestrator Storage Buckets for Context Grounding +- Index creation and management +- Google Maps API configuration +- Action Center application deployment +- MCP Server integration (optional) + +## Running the Agent + +### Production Mode + +```bash +# With full UiPath infrastructure +python3 main.py +``` + +Expected workflow: +1. Analyzes equipment status from Data Fabric +2. Groups devices by city and priority +3. Retrieves routing constraints from Context Grounding +4. Generates optimized routes with Google Maps +5. Presents routes for approval in Action Center +6. 
Executes RPA workflows (email, Slack, Data Fabric updates) + +### Mock Mode (Local Testing) + +For quick testing without full UiPath setup: + +```python +# In config.py or .env +USE_MOCK_DATA=true +AUTO_APPROVE_IN_LOCAL=true +USE_MCP=false +``` + +**Note**: Mock mode relaxes configuration validation and skips Action Center/MCP integration, but still requires Data Fabric with imported CSV data (see Setup section). + +Then run: + +```bash +python3 main.py +``` + +## Project Structure + +``` +calibration-dispatcher-agent/ +│ +├── 📄 Core Application Files +│ ├── main.py # Main agent logic (LangGraph workflow) +│ ├── config.py # Centralized configuration +│ ├── mcp_bridge.py # Async-to-sync MCP tool bridge +│ ├── requirements.txt # Python dependencies +│ ├── .env.example # Environment variables template +│ ├── .gitignore # Git exclusions +│ └── README.md # This file +│ +├── 📁 data/ # Sample data and schema +│ ├── README.md # Data directory documentation +│ ├── Schema.json # Data Fabric entity definitions +│ ├── devices_for_data_fabric.csv # Sample equipment (20 devices) +│ ├── locations.csv # Sample clinics (20 locations) +│ └── technicians.csv # Sample technicians (5 techs) +│ +│ +└── 📁 policies/ # Policy documents for Context Grounding + ├── README.md # Policies documentation + ├── Calibration_Rules_Document.pdf # Rules, intervals, SLAs + ├── Routing_Guidelines_Document.pdf # Route optimization + └── Service_Procedures_Document.pdf # Service procedures +``` + +## Business Logic + +### Priority Classification + +Devices are classified based on days until next calibration due: + +| Status | Audiometer | Tympanometer | Priority | Action | +|--------|-----------|--------------|----------|---------| +| **OVERDUE** | Past due | Past due | Critical | Immediate scheduling | +| **URGENT** | ≤ 14 days | ≤ 7 days | High | Schedule within 48h | +| **SCHEDULED** | > 14 days | > 7 days | Normal | Regular scheduling | + +### SLA Requirements + +Response times based on clinic 
classification: + +| Clinic Type | SLA | Example | +|------------|-----|---------| +| **Hospital** | 24 hours | Regional hospitals | +| **Specialist Clinic** | 48 hours | Audiology centers | +| **General Practice** | 72 hours | Family clinics | + +### Routing Constraints + +**Standard Mode:** +- Max 4 visits per route +- Max 200 km total distance +- Max 8 hours total work time + +**OVERDUE Override (Emergency Mode):** +- Max 5 visits per route +- Max 300 km total distance +- Max 12 hours total work time (includes overtime) + +Constraints are retrieved from Context Grounding policies and can be overridden by manager notes. + +### Technician Specialization + +Devices are matched to technicians based on specializations: + +| Device Type | Required Specialization | +|------------|------------------------| +| Audiometer | Audiometry or All | +| Tympanometer | Tympanometry or All | + +## Extending the Sample + +### Adding New Device Types + +1. Update `devices_for_data_fabric.csv` with new device records +2. Add specialization mapping in `config.py`: + ```python + DEVICE_TO_SPECIALIZATION = { + "Audiometer": {"Audiometry", "All"}, + "Tympanometer": {"Tympanometry", "All"}, + "Spirometer": {"Respiratory", "All"}, # New device type + } + ``` +3. Add service time estimation in `config.py`: + ```python + SERVICE_TIME_SPIROMETER = float(os.getenv("SERVICE_TIME_SPIROMETER", "1.0")) + ``` + +### Adding New Cities + +1. Update `locations.csv` with clinic records in the new city +2. 
Add city coordinates in `config.py`: + ```python + CITY_COORDS = { + "Warsaw": (52.2297, 21.0122), + "Poznan": (52.4064, 16.9252), + "Lodz": (51.7592, 19.4560), # New city + } + ``` + +### Creating Custom Tools + +Add new LangChain tools to extend agent capabilities: + +```python +@tool +def check_parts_availability(device_type: str) -> dict: + """Check if spare parts are available for device calibration.""" + # Your implementation + return {"available": True, "lead_time_days": 2} +``` + +Then include in the agent's tool list. + +## Troubleshooting + +### Common Issues + +**Configuration Validation Errors** + +If you see "Configuration Errors" when running the agent: +- Verify entity IDs are correct (not placeholder `00000000-...`) +- Check that Context Grounding index exists +- Confirm UiPath authentication is valid + +**Context Grounding Not Found** + +If policy retrieval fails: +- Verify index name matches configuration +- Check that storage bucket contains policy PDFs +- Ensure index has been created and synchronized +- Confirm folder permissions allow access + +**Google Maps API Errors** + +If route optimization fails: +- Verify API key is valid and active +- Check that Distance Matrix API is enabled +- Ensure billing is configured in Google Cloud Console +- Routes will fall back to straight-line distance if API unavailable + +**Action Center Task Not Created** + +If HITL approval doesn't work: +- Verify Action Center application is deployed +- Check field names match configuration (`SelectedOutcome`, `ManagerComments`) +- Ensure user has permissions to create tasks +- Try `AUTO_APPROVE_IN_LOCAL=true` for local testing + + +## Support + +For issues or questions: +- Review UiPath SDK documentation +- Contact your UiPath representative + +## Acknowledgments + +This sample demonstrates patterns from the UiPath Specialist Coded Agent Challenge 2025 (4th place solution). 
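For readers mapping the Business Logic section of the README above to code, the priority classification can be sketched as below. This is an illustrative reconstruction of the thresholds in the priority table (OVERDUE / URGENT ≤ 14 days for Audiometers, ≤ 7 days for Tympanometers / SCHEDULED otherwise); the function name and constants are ours, not taken from the sample's source.

```python
from datetime import date

# Urgency windows per device type, per the README's priority table
# (illustrative names; not identifiers from the sample).
URGENT_WINDOW_DAYS = {"Audiometer": 14, "Tympanometer": 7}

def classify_device(device_type: str, next_calibration_due: date, today: date) -> str:
    """Return OVERDUE, URGENT, or SCHEDULED based on days until the due date."""
    days_until_due = (next_calibration_due - today).days
    if days_until_due < 0:
        return "OVERDUE"
    if days_until_due <= URGENT_WINDOW_DAYS.get(device_type, 14):
        return "URGENT"
    return "SCHEDULED"

today = date(2025, 6, 1)
print(classify_device("Audiometer", date(2025, 5, 20), today))   # past due   -> OVERDUE
print(classify_device("Tympanometer", date(2025, 6, 5), today))  # 4 days out -> URGENT
print(classify_device("Audiometer", date(2025, 7, 15), today))   # 44 days    -> SCHEDULED
```

The agent applies these buckets before routing: OVERDUE devices trigger the relaxed emergency constraints, while URGENT and SCHEDULED devices flow through the standard limits.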
diff --git a/samples/calibration-dispatcher-agent/config.py b/samples/calibration-dispatcher-agent/config.py new file mode 100644 index 00000000..6f0d53c3 --- /dev/null +++ b/samples/calibration-dispatcher-agent/config.py @@ -0,0 +1,216 @@ +""" +Calibration Dispatcher Agent - Configuration + +This file contains all environment-specific configurations including: +- UiPath Data Fabric entity IDs +- Folder paths and index names +- LLM model selection +- API keys and service endpoints +- MCP server configuration +- Business logic parameters + +Adjust these values according to your UiPath environment. +""" + +import os +from typing import Dict, Tuple + +# ============================================================================= +# UIPATH PLATFORM CONFIGURATION +# ============================================================================= + +# Folder path in UiPath Orchestrator (where processes and entities are deployed) +UIPATH_FOLDER_PATH = os.getenv("UIPATH_FOLDER_PATH", "Calibration Services") + +# Context Grounding index name (created from Storage Bucket containing calibration policies) +CONTEXT_GROUNDING_INDEX_NAME = os.getenv( + "CONTEXT_GROUNDING_INDEX_NAME", + "Calibration Procedures" +) + +# Number of policy documents to retrieve for RAG +CONTEXT_GROUNDING_NUM_RESULTS = int(os.getenv("CONTEXT_GROUNDING_NUM_RESULTS", "3")) + +# ============================================================================= +# DATA FABRIC ENTITY IDS +# ============================================================================= +# These IDs must match your Data Fabric entities in UiPath Orchestrator. 
+# You can find them in Data Service > Entities > Entity Details + +EQUIPMENT_ENTITY_ID = os.getenv( + "EQUIPMENT_ENTITY_ID", + "00000000-0000-0000-0000-000000000001" # Replace with your Equipment entity ID +) + +CLINICS_ENTITY_ID = os.getenv( + "CLINICS_ENTITY_ID", + "00000000-0000-0000-0000-000000000002" # Replace with your Clinics entity ID +) + +TECHNICIANS_ENTITY_ID = os.getenv( + "TECHNICIANS_ENTITY_ID", + "00000000-0000-0000-0000-000000000003" # Replace with your Technicians entity ID +) + +# ============================================================================= +# LLM CONFIGURATION +# ============================================================================= + +# Model selection for UiPath LLM Gateway +# Options: "gpt-4o-2024-11-20", "gpt-4o-mini", "claude-sonnet-4-5-20250929", etc. +LLM_MODEL = os.getenv("LLM_MODEL", "gpt-4o-2024-11-20") + +# Temperature setting for LLM responses (0.0 = deterministic, 1.0 = creative) +LLM_TEMPERATURE = float(os.getenv("LLM_TEMPERATURE", "0.0")) + +# ============================================================================= +# GOOGLE MAPS API CONFIGURATION +# ============================================================================= + +# Google Maps API key for route optimization +# Can be set via environment variable or UiPath Asset +GOOGLE_MAPS_API_KEY = os.getenv("GOOGLE_MAPS_API_KEY", "") + +# UiPath Asset name for Google Maps API key (fallback if env var not set) +GOOGLE_MAPS_ASSET_NAME = os.getenv("GOOGLE_MAPS_ASSET_NAME", "GoogleMapsApiKey") + +# ============================================================================= +# MCP SERVER CONFIGURATION +# ============================================================================= + +# Enable/disable MCP integration (set to "false" to use classic RPA invocation) +USE_MCP = os.getenv("USE_MCP", "true").lower() == "true" + +# MCP server URL from UiPath Orchestrator (MCP Servers page) +MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "") + +# MCP tool input 
argument names (must match your RPA workflow parameter names) +MCP_ARG_EMAIL = os.getenv("RPA_ARG_NAME_EMAIL", "in_RouteData") +MCP_ARG_SLACK = os.getenv("RPA_ARG_NAME_SLACK", "in_MessageData") +MCP_ARG_ENTITY = os.getenv("RPA_ARG_NAME_ENTITY", "in_ServiceOrderData") + +# ============================================================================= +# ACTION CENTER CONFIGURATION +# ============================================================================= + +# Action Center form field names (must match your UiPath Apps form design) +APP_FIELD_SELECTED_OUTCOME = os.getenv("APP_FIELD_SELECTED_OUTCOME", "SelectedOutcome") +APP_FIELD_MANAGER_COMMENTS = os.getenv("APP_FIELD_MANAGER_COMMENTS", "ManagerComments") + +# Email address for approval notifications +APPROVER_EMAIL = os.getenv("APPROVER_EMAIL", "manager@example.com") + +# Maximum revision iterations per route before automatic rejection +MAX_REVISION_ITERATIONS = int(os.getenv("MAX_REVISION_ITERATIONS", "3")) + +# ============================================================================= +# BUSINESS LOGIC PARAMETERS +# ============================================================================= + +# City coordinates for distance calculations (lat, lng) +CITY_COORDS: Dict[str, Tuple[float, float]] = { + "Warsaw": (52.2297, 21.0122), + "Poznan": (52.4064, 16.9252), + "Wroclaw": (51.1079, 17.0385), + "Szczecin": (53.4285, 14.5528), + "Krakow": (50.0647, 19.9450), + "Gdansk": (54.3520, 18.6466), +} + +# Device type to technician specialization mapping +DEVICE_TO_SPECIALIZATION: Dict[str, set] = { + "Audiometer": {"Audiometry", "All"}, + "Tympanometer": {"Tympanometry", "All"}, +} + +# Standard service time per device type (hours) +SERVICE_TIME_AUDIOMETER = float(os.getenv("SERVICE_TIME_AUDIOMETER", "2.0")) +SERVICE_TIME_TYMPANOMETER = float(os.getenv("SERVICE_TIME_TYMPANOMETER", "1.5")) + +# Default routing constraints (can be overridden by manager notes) +DEFAULT_MAX_VISITS_PER_ROUTE = 
int(os.getenv("DEFAULT_MAX_VISITS_PER_ROUTE", "4")) +DEFAULT_MAX_DISTANCE_KM = float(os.getenv("DEFAULT_MAX_DISTANCE_KM", "200.0")) +DEFAULT_MAX_WORK_HOURS = float(os.getenv("DEFAULT_MAX_WORK_HOURS", "8.0")) + +# Override constraints for OVERDUE devices (emergency mode) +OVERDUE_MAX_VISITS_PER_ROUTE = int(os.getenv("OVERDUE_MAX_VISITS_PER_ROUTE", "5")) +OVERDUE_MAX_DISTANCE_KM = float(os.getenv("OVERDUE_MAX_DISTANCE_KM", "300.0")) +OVERDUE_MAX_WORK_HOURS = float(os.getenv("OVERDUE_MAX_WORK_HOURS", "12.0")) + +# Cost parameters for route optimization +COST_PER_KM = float(os.getenv("COST_PER_KM", "0.50")) # EUR per kilometer +TECHNICIAN_HOURLY_RATE = float(os.getenv("TECHNICIAN_HOURLY_RATE", "45.0")) # EUR per hour + +# ============================================================================= +# LOGGING CONFIGURATION +# ============================================================================= + +# Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL +LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper() + +# Log format +LOG_FORMAT = os.getenv( + "LOG_FORMAT", + "%(asctime)s - %(levelname)s - %(name)s - %(message)s" +) + +# ============================================================================= +# MOCK DATA CONFIGURATION (for testing without full UiPath setup) +# ============================================================================= + +# Enable mock mode for local testing (relaxes config validation, requires Data Fabric with imported data) +USE_MOCK_DATA = os.getenv("USE_MOCK_DATA", "true").lower() == "true" + +# Auto-approve all routes in local testing (skips Action Center, auto-approves routes) +AUTO_APPROVE_IN_LOCAL = os.getenv("AUTO_APPROVE_IN_LOCAL", "true").lower() == "true" + +# ============================================================================= +# HELPER FUNCTIONS +# ============================================================================= + +def validate_config() -> bool: + """ + Validate critical configuration values. 
+ Returns True if configuration is valid, False otherwise. + """ + errors = [] + + if not USE_MOCK_DATA: + if EQUIPMENT_ENTITY_ID.startswith("00000000"): + errors.append("EQUIPMENT_ENTITY_ID must be set to your actual Data Fabric entity ID") + + if CLINICS_ENTITY_ID.startswith("00000000"): + errors.append("CLINICS_ENTITY_ID must be set to your actual Data Fabric entity ID") + + if TECHNICIANS_ENTITY_ID.startswith("00000000"): + errors.append("TECHNICIANS_ENTITY_ID must be set to your actual Data Fabric entity ID") + + if USE_MCP and not MCP_SERVER_URL: + errors.append("MCP_SERVER_URL must be set when USE_MCP=true") + + if errors: + print("\n⚠️ Configuration Errors:") + for error in errors: + print(f" - {error}") + print("\n💡 Please update config.py or set environment variables.\n") + return False + + return True + + +def print_config_summary(): + """Print a summary of current configuration (useful for debugging).""" + print("=" * 70) + print("CALIBRATION DISPATCHER AGENT - CONFIGURATION SUMMARY") + print("=" * 70) + print(f"Folder Path: {UIPATH_FOLDER_PATH}") + print(f"Context Grounding: {CONTEXT_GROUNDING_INDEX_NAME}") + print(f"LLM Model: {LLM_MODEL}") + print(f"Google Maps: {'Enabled' if GOOGLE_MAPS_API_KEY else 'Disabled'}") + print(f"MCP Integration: {'Enabled' if USE_MCP else 'Disabled'}") + print(f"Mock Data Mode: {'Enabled' if USE_MOCK_DATA else 'Disabled'}") + print(f"Max Visits/Route: {DEFAULT_MAX_VISITS_PER_ROUTE}") + print(f"Max Distance (km): {DEFAULT_MAX_DISTANCE_KM}") + print(f"Max Work Hours: {DEFAULT_MAX_WORK_HOURS}") + print("=" * 70) + print() diff --git a/samples/calibration-dispatcher-agent/data/README.md b/samples/calibration-dispatcher-agent/data/README.md new file mode 100644 index 00000000..8f37ab3f --- /dev/null +++ b/samples/calibration-dispatcher-agent/data/README.md @@ -0,0 +1,59 @@ +# Data Files + +This directory contains sample data files for the Calibration Dispatcher Agent. 
+
+## Files
+
+### Schema.json
+
+Data Fabric entity definitions for the four entities used by the agent:
+
+- **Equipment**: Medical devices requiring calibration
+- **Clinics**: Healthcare facilities where devices are located
+- **Technicians**: Field service technicians who perform calibrations
+- **ServiceOrders**: Scheduled calibration visits (created by the agent)
+
+**Usage**: Import this schema into UiPath Orchestrator Data Service to create the required entities.
+
+### CSV Files
+
+Sample data for testing and demonstration:
+
+- **devices_for_data_fabric.csv** (20 records)
+  - Medical devices (Audiometers and Tympanometers)
+  - Includes calibration due dates, priorities, and clinic assignments
+
+- **locations.csv** (20 records)
+  - Healthcare facilities across 4 Polish cities
+  - Includes addresses, coordinates, and SLA tiers (24h/48h/72h)
+  - Fictitious contact information (names and emails)
+
+- **technicians.csv** (5 records)
+  - Field service technicians with specializations
+  - Home base cities for route optimization
+  - Fictitious names and contact information
+
+## Data Privacy
+
+All data has been anonymized:
+
+- ✅ Clinic names are generic (e.g., "Regional Hospital No 1")
+- ✅ Contact names are common English names
+- ✅ Email addresses use the `.example` domain
+- ✅ Geographic data (cities, coordinates) is real public information
+
+## Usage
+
+### For Mock Mode Testing
+
+Setting `USE_MOCK_DATA=true` in config.py relaxes configuration validation and skips the Action Center/MCP integration, but the agent still reads these records from Data Fabric, so import the CSV data first (see the Setup section of the main README).
+
+### For Production Deployment
+
+Import the data into Data Fabric manually via the Orchestrator UI: **Data Service > Entities > Import**
+
+## Customization
+
+Feel free to modify the CSV files to:
+
+- Add your own cities and clinic locations
+- Adjust device counts and types
+- Change technician assignments
+- Modify SLA tiers based on your business needs
+
+Just maintain the CSV column structure to ensure compatibility.
diff --git a/samples/calibration-dispatcher-agent/data/Schema.json b/samples/calibration-dispatcher-agent/data/Schema.json new file mode 100644 index 00000000..3e304d4d --- /dev/null +++ b/samples/calibration-dispatcher-agent/data/Schema.json @@ -0,0 +1,1002 @@ +{ + "entities": [ + { + "name": "Clinics", + "displayName": "Clinics", + "entityTypeId": 0, + "entityType": "Entity", + "description": "Entity to store information about clinics", + "folderId": "00000000-0000-0000-0000-000000000000", + "fields": [ + { + "name": "clinicId", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": true, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Clinic ID", + "description": "A unique identifier for the clinic", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "clinicName", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Clinic Name", + "description": "Name of the clinic", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "address", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Address", + "description": "Address of the clinic", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + 
"isModelReserved": false + }, + { + "name": "city", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "City", + "description": "City of the clinic", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "postalCode", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Postal Code", + "description": "Postal code of the clinic", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "latitude", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DECIMAL", + "lengthLimit": 1000, + "maxValue": 1000000000000, + "minValue": -1000000000000, + "decimalPrecision": 2 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Latitude", + "description": "Latitude of the clinic location", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "longitude", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DECIMAL", + "lengthLimit": 1000, + "maxValue": 1000000000000, + "minValue": -1000000000000, + "decimalPrecision": 2 + }, + "isRequired": true, + 
"isEncrypted": false, + "displayName": "Longitude", + "description": "Longitude of the clinic location", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "contactPerson", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Contact Person", + "description": "Contact person for the clinic", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "contactEmail", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Contact Email", + "description": "Contact email address for the clinic", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "contactPhone", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": false, + "isEncrypted": false, + "displayName": "Contact Phone", + "description": "Contact phone number for the clinic", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "slaHours", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + 
"name": "DECIMAL", + "lengthLimit": 1000, + "maxValue": 1000000000000, + "minValue": -1000000000000, + "decimalPrecision": 2 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "SLA Hours", + "description": "Service level agreement hours", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + } + ], + "isRbacEnabled": false, + "invalidIdentifiers": [], + "isModelReserved": false + }, + { + "name": "Equipment", + "displayName": "Equipment", + "entityTypeId": 0, + "entityType": "Entity", + "description": "Entity to manage equipment details including calibration and manufacturer information.", + "folderId": "00000000-0000-0000-0000-000000000000", + "fields": [ + { + "name": "equipmentId", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": true, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Equipment ID", + "description": "Unique identifier for the equipment", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "deviceName", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Device Name", + "description": "Name of the device", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "deviceType", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + 
"lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Device Type", + "description": "Type of device", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "serialNumber", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": true, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Serial Number", + "description": "Unique serial number of the equipment", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "clinicId", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Clinic ID", + "description": "Identifier for the clinic associated with the equipment", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "lastCalibrationDate", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DATETIMEOFFSET", + "lengthLimit": 1000 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Last Calibration Date", + "description": "Date when the equipment was last calibrated", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "calibrationIntervalDays", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + 
"fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Calibration Interval (Days)", + "description": "Interval in days for the calibration", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "nextCalibrationDue", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DATETIMEOFFSET", + "lengthLimit": 1000 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Next Calibration Due", + "description": "Date when the next calibration is due", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "priority", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Priority", + "description": "Priority level of the equipment", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "manufacturer", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": false, + "isEncrypted": false, + "displayName": "Manufacturer", + "description": "Manufacturer of the equipment", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + } + ], + "isRbacEnabled": false, + 
"invalidIdentifiers": [], + "isModelReserved": false + }, + { + "name": "ServiceOrders", + "displayName": "Service Orders", + "entityTypeId": 0, + "entityType": "Entity", + "description": "Entity to manage service orders with all necessary local fields.", + "folderId": "00000000-0000-0000-0000-000000000000", + "fields": [ + { + "name": "orderId", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": true, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Order ID", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "equipmentId", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Equipment ID", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "clinicId", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Clinic ID", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "technicianId", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": 
"NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Technician ID", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "scheduledDate", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DATETIMEOFFSET", + "lengthLimit": 1000 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Scheduled Date", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "estimatedDurationHours", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DECIMAL", + "lengthLimit": 1000, + "maxValue": 1000000000000, + "minValue": -1000000000000, + "decimalPrecision": 2 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Estimated Duration Hours", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "routeSequence", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DECIMAL", + "lengthLimit": 1000, + "maxValue": 1000000000000, + "minValue": -1000000000000, + "decimalPrecision": 2 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Route Sequence", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "status", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": 
false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Status", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "priority", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DECIMAL", + "lengthLimit": 1000, + "maxValue": 1000000000000, + "minValue": -1000000000000, + "decimalPrecision": 2 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Priority", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "createdDate", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DATETIMEOFFSET", + "lengthLimit": 1000 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Created Date", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "approvedBy", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": false, + "isEncrypted": false, + "displayName": "Approved By", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "completionDate", + "isPrimaryKey": false, + "isForeignKey": false, + 
"isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DATETIMEOFFSET", + "lengthLimit": 1000 + }, + "isRequired": false, + "isEncrypted": false, + "displayName": "Completion Date", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "notes", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": false, + "isEncrypted": false, + "displayName": "Notes", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "routeMapUrl", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": false, + "isEncrypted": false, + "displayName": "Route Map URL", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "totalDistanceKm", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "DECIMAL", + "lengthLimit": 1000, + "maxValue": 1000000000000, + "minValue": -1000000000000, + "decimalPrecision": 2 + }, + "isRequired": false, + "isEncrypted": false, + "displayName": "Total Distance (Km)", + "description": "", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + } + ], + "isRbacEnabled": false, + "invalidIdentifiers": 
[], + "isModelReserved": false + }, + { + "name": "Technicians", + "displayName": "Technicians", + "entityTypeId": 0, + "entityType": "Entity", + "description": "Entity to manage details of technicians.", + "folderId": "00000000-0000-0000-0000-000000000000", + "fields": [ + { + "name": "technicianId", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": true, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Technician ID", + "description": "A unique identifier for the technician", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "technicianName", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Technician Name", + "description": "Name of the technician", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "email", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": true, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Email", + "description": "Email address of the technician", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "phone", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": 
"ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": false, + "isEncrypted": false, + "displayName": "Phone", + "description": "Phone number of the technician", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "specialization", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Specialization", + "description": "Specialization area of the technician", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + }, + { + "name": "homeBaseCity", + "isPrimaryKey": false, + "isForeignKey": false, + "isExternalField": false, + "isHiddenField": false, + "fieldCategoryId": 0, + "isUnique": false, + "referenceType": "ManyToOne", + "sqlType": { + "name": "NVARCHAR", + "lengthLimit": 200 + }, + "isRequired": true, + "isEncrypted": false, + "displayName": "Home Base City", + "description": "Home base city of the technician", + "isSystemField": false, + "isAttachment": false, + "isRbacEnabled": false, + "isModelReserved": false + } + ], + "isRbacEnabled": false, + "invalidIdentifiers": [], + "isModelReserved": false + } + ], + "choicesets": [] +} \ No newline at end of file diff --git a/samples/calibration-dispatcher-agent/data/devices_for_data_fabric.csv b/samples/calibration-dispatcher-agent/data/devices_for_data_fabric.csv new file mode 100644 index 00000000..612f077d --- /dev/null +++ b/samples/calibration-dispatcher-agent/data/devices_for_data_fabric.csv @@ -0,0 +1,21 @@ +Equipment ID,Device Name,Device Type,Serial Number,Clinic ID,Last Calibration Date,Calibration Interval (Days),Next Calibration Due,Priority,Manufacturer +EQP001,AudioStar Pro 
3000,Audiometer,AS3K-2024-001,CLI001,2024-11-01,365,2025-11-01,2,MedTech Solutions +EQP002,HearTest Elite,Audiometer,HTE-2024-042,CLI002,2024-10-20,365,2025-10-20,1,AudioCorp +EQP003,TympScan 500,Tympanometer,TS500-2024-118,CLI003,2024-11-05,365,2025-11-05,2,MedTech Solutions +EQP004,AudioMaster Plus,Audiometer,AMP-2024-089,CLI004,2024-10-18,365,2025-10-18,1,HearingTech +EQP005,PureTone Pro,Audiometer,PTP-2024-156,CLI005,2024-12-15,365,2025-12-15,4,AudioCorp +EQP006,TympCheck Advanced,Tympanometer,TCA-2024-201,CLI006,2024-10-25,365,2025-10-25,1,DiagnosticSys +EQP007,HearingTest 2000,Audiometer,HT2K-2024-067,CLI007,2024-11-10,365,2025-11-10,2,MedTech Solutions +EQP008,AudioPro X1,Audiometer,APX1-2024-134,CLI008,2025-01-20,365,2026-01-20,5,HearingTech +EQP009,TympMaster Elite,Tympanometer,TME-2024-245,CLI009,2024-10-22,365,2025-10-22,1,AudioCorp +EQP010,PureTone Advanced,Audiometer,PTA-2024-178,CLI010,2024-11-08,365,2025-11-08,2,MedTech Solutions +EQP011,AudioCheck Pro,Audiometer,ACP-2024-092,CLI011,2025-02-05,365,2026-02-05,5,DiagnosticSys +EQP012,HearTest Pro 500,Audiometer,HTP5-2024-115,CLI012,2024-11-14,365,2025-11-14,3,AudioCorp +EQP013,TympScan Ultra,Tympanometer,TSU-2024-267,CLI013,2024-10-28,365,2025-10-28,1,HearingTech +EQP014,AudioMaster 3000,Audiometer,AM3K-2024-203,CLI014,2024-11-12,365,2025-11-12,2,MedTech Solutions +EQP015,HearingPro Elite,Audiometer,HPE-2024-145,CLI015,2025-01-25,365,2026-01-25,5,AudioCorp +EQP016,TympTest Advanced,Tympanometer,TTA-2024-289,CLI016,2024-10-30,365,2025-10-30,1,DiagnosticSys +EQP017,AudioStar Elite,Audiometer,ASE-2024-167,CLI017,2024-11-18,365,2025-11-18,2,MedTech Solutions +EQP018,PureTone Ultra,Audiometer,PTU-2024-189,CLI018,2025-03-10,365,2026-03-10,5,HearingTech +EQP019,TympCheck Pro,Tympanometer,TCP-2024-312,CLI019,2024-11-02,365,2025-11-02,2,AudioCorp +EQP020,AudioTest Advanced,Audiometer,ATA-2024-334,CLI020,2024-11-20,365,2025-11-20,3,MedTech Solutions diff --git 
a/samples/calibration-dispatcher-agent/data/locations.csv b/samples/calibration-dispatcher-agent/data/locations.csv new file mode 100644 index 00000000..df7a6336 --- /dev/null +++ b/samples/calibration-dispatcher-agent/data/locations.csv @@ -0,0 +1,21 @@ +Clinic ID,Clinic Name,Address,City,Postal Code,Latitude,Longitude,Contact Person,Contact Email,Contact Phone,SLA Hours +CLI001,Regional Hospital No 1,ul. Arkońska 4,Szczecin,71-455,53.4285,14.5528,Anna Kowalski,contact@clinic001.example,+48914343434,24 +CLI002,Health Center North,ul. Grunwaldzka 182,Poznan,60-166,52.4064,16.9252,Tom Nowak,contact@clinic002.example,+48618888888,48 +CLI003,Central Medical Center,al. Jerozolimskie 123,Warsaw,02-017,52.2297,21.0122,Maria Smith,contact@clinic003.example,+48225551234,24 +CLI004,Audio Clinic West,ul. Piłsudskiego 12,Wroclaw,50-044,51.1079,17.0385,Jan Johnson,contact@clinic004.example,+48713334455,48 +CLI005,Children's Hospital,ul. Szpitalna 27,Poznan,60-572,52.4012,16.8856,Kate Brown,contact@clinic005.example,+48617776666,24 +CLI006,Medica Clinic,ul. Monte Cassino 40,Szczecin,70-466,53.4395,14.5481,Peter Davis,contact@clinic006.example,+48914567890,72 +CLI007,Hearing Center Pro,ul. Nowy Świat 33,Warsaw,00-029,52.2350,21.0177,Amy Wilson,contact@clinic007.example,+48226667788,48 +CLI008,ENT Clinic,ul. Świdnicka 50,Wroclaw,50-030,51.1054,17.0262,Mark Miller,contact@clinic008.example,+48719998877,72 +CLI009,University Hospital,ul. Przybyszewskiego 49,Poznan,60-355,52.4199,16.9016,Eve Moore,contact@clinic009.example,+48611112233,24 +CLI010,ProMed Clinic,ul. Ku Słońcu 67,Szczecin,71-080,53.4525,14.5003,Bob Taylor,contact@clinic010.example,+48914445566,72 +CLI011,Diagnostic Center,ul. Pulawska 455,Warsaw,02-844,52.1621,21.0305,Lisa Anderson,contact@clinic011.example,+48223334455,48 +CLI012,ENT Clinic South,ul. Borowska 213,Wroclaw,50-556,51.0826,17.0048,Andy Thomas,contact@clinic012.example,+48717778899,72 +CLI013,Family Clinic,ul. 
Słowackiego 12,Poznan,60-823,52.4321,16.9104,Jane Jackson,contact@clinic013.example,+48619990011,72 +CLI014,City Hospital,ul. Unii Lubelskiej 1,Szczecin,71-252,53.4150,14.5306,Mark White,contact@clinic014.example,+48911112233,24 +CLI015,Health Center East,ul. Marszałkowska 140,Warsaw,00-061,52.2293,21.0149,Donna Harris,contact@clinic015.example,+48224445566,48 +CLI016,Audio Clinic East,ul. Legnicka 40,Wroclaw,53-671,51.1389,16.9737,Chris Martin,contact@clinic016.example,+48715556677,72 +CLI017,MediCare Clinic,ul. Hetmańska 90,Poznan,60-251,52.4510,16.9342,Maggie Garcia,contact@clinic017.example,+48616667788,48 +CLI018,Clinical Hospital Central,ul. Banacha 1a,Warsaw,02-097,52.2107,20.9826,Greg Martinez,contact@clinic018.example,+48227778899,24 +CLI019,Medical Center North,ul. Niepodległości 30,Szczecin,70-404,53.4308,14.5420,Isabel Rodriguez,contact@clinic019.example,+48918889900,48 +CLI020,ENT Clinic West,ul. Traugutta 57,Wroclaw,50-417,51.1144,17.0211,Adam Lee,contact@clinic020.example,+48719991122,48 diff --git a/samples/calibration-dispatcher-agent/data/technicians.csv b/samples/calibration-dispatcher-agent/data/technicians.csv new file mode 100644 index 00000000..ae6265e6 --- /dev/null +++ b/samples/calibration-dispatcher-agent/data/technicians.csv @@ -0,0 +1,6 @@ +Technician ID,Technician Name,Email,Phone,Specialization,Home Base City +TECH001,John Smith,john.smith@calibration-services.example,+48601111111,All,Warsaw +TECH002,Anna Johnson,anna.johnson@calibration-services.example,+48602222222,Audiometry,Poznan +TECH003,Michael Brown,michael.brown@calibration-services.example,+48603333333,Tympanometry,Wroclaw +TECH004,Sarah Davis,sarah.davis@calibration-services.example,+48604444444,All,Szczecin +TECH005,David Wilson,david.wilson@calibration-services.example,+48605555555,Audiometry,Warsaw diff --git a/samples/calibration-dispatcher-agent/main.py b/samples/calibration-dispatcher-agent/main.py new file mode 100644 index 00000000..039562af --- /dev/null +++ 
b/samples/calibration-dispatcher-agent/main.py @@ -0,0 +1,1272 @@ +# -*- coding: utf-8 -*- +""" +Calibration Dispatcher Agent - StateGraph + HITL + Constraints Enforcement + +A production-grade autonomous agent for medical device calibration scheduling. + +Features: +- LangGraph StateGraph workflow with Human-in-the-Loop (HITL) via UiPath Action Center +- Dynamic constraint management with manager override capabilities +- Google Maps API integration for route optimization +- Context Grounding for policy retrieval (RAG pattern) +- MCP Server integration for RPA workflow execution +- Technician specialization matching and SLA-aware scheduling + +For configuration, see config.py +""" + +import json +import math +import uuid +import logging +import re +import os +from datetime import datetime, timedelta, date +from typing import Any, Dict, List, Optional, Tuple, Union + +from dotenv import load_dotenv +from pydantic import BaseModel + +from langchain_core.messages import HumanMessage +from langchain_core.tools import tool +from langchain.agents import create_agent as create_react_agent +from langgraph.graph import StateGraph, END +from langgraph.types import interrupt, Command + +from uipath.platform import UiPath + +from uipath.platform.common import CreateTask +from uipath_langchain.chat import UiPathChat +from uipath_langchain.retrievers import ContextGroundingRetriever + +import googlemaps + +# Import centralized configuration +import config + +# ---------- Bootstrap ---------- + +load_dotenv() + +logging.basicConfig( + level=getattr(logging, config.LOG_LEVEL, logging.INFO), + format=config.LOG_FORMAT +) +logger = logging.getLogger("calibration-dispatcher") + +# Validate configuration before proceeding +if not config.validate_config(): + logger.error("Configuration validation failed. Please check config.py") + if not config.USE_MOCK_DATA: + raise RuntimeError("Invalid configuration. 
Cannot proceed.") + +config.print_config_summary() + +# Initialize UiPath client +uipath_client = UiPath() + +# Initialize LLM +llm = UiPathChat( + model=config.LLM_MODEL, + temperature=config.LLM_TEMPERATURE +) + +# Initialize Context Grounding for policy retrieval +context_grounding = ContextGroundingRetriever( + index_name=config.CONTEXT_GROUNDING_INDEX_NAME, + folder_path=config.UIPATH_FOLDER_PATH, + number_of_results=config.CONTEXT_GROUNDING_NUM_RESULTS, +) + +# Initialize Google Maps client +GOOGLE_MAPS_API_KEY = config.GOOGLE_MAPS_API_KEY +if not GOOGLE_MAPS_API_KEY: + try: + logger.info("Google Maps API key not in config, trying UiPath Assets...") + asset = uipath_client.assets.retrieve( + name=config.GOOGLE_MAPS_ASSET_NAME, + folder_path=config.UIPATH_FOLDER_PATH + ) + GOOGLE_MAPS_API_KEY = getattr(asset, "value", None) or getattr(asset, "stringValue", None) + if GOOGLE_MAPS_API_KEY: + logger.info("Google Maps API key loaded from Assets.") + else: + logger.warning("Asset '%s' found but value is empty.", config.GOOGLE_MAPS_ASSET_NAME) + except Exception as e: + logger.warning("Failed to retrieve Asset: %s", e) + +if GOOGLE_MAPS_API_KEY: + try: + gmaps = googlemaps.Client(key=GOOGLE_MAPS_API_KEY) + logger.info("Google Maps client initialized successfully.") + except Exception as e: + gmaps = None + logger.error("Failed to initialize Google Maps client: %s", e) +else: + gmaps = None + logger.error("Google Maps client NOT initialized - missing API key!") + +# ---------- Global buffer to avoid double planning ---------- + +LAST_ROUTING_PLAN: Optional[Dict[str, Any]] = None + +# ---------- Helpers: specialization, geometry ---------- + +def _estimate_service_hours_for_visit(visit: Dict[str, Any]) -> float: + """Calculate total service hours for a clinic visit based on device types.""" + s = 0.0 + for d in visit.get("devices", []): + if d.get("device_type") == "Audiometer": + s += config.SERVICE_TIME_AUDIOMETER + elif d.get("device_type") == "Tympanometer": + s 
+= config.SERVICE_TIME_TYMPANOMETER + return s + +def _tech_ok_for_devices(tech: Dict[str, Any], visits: List[Dict[str, Any]]) -> bool: + """Check if technician specialization matches required device types.""" + spec = {tech.get("specialization") or "All"} + required = set() + for v in visits: + for d in v.get("devices", []): + required |= config.DEVICE_TO_SPECIALIZATION.get(d.get("device_type"), {"All"}) + return bool(spec & required) or "All" in spec + +def _city_distance_km(city_a: str, city_b: str) -> float: + """Calculate approximate distance between two cities in kilometers.""" + ax, ay = config.CITY_COORDS.get(city_a, (0.0, 0.0)) + bx, by = config.CITY_COORDS.get(city_b, (0.0, 0.0)) + return math.hypot(ax - bx, ay - by) * 111.0 # ~111 km per degree + +def _pick_technician_for_city(technicians: List[Dict[str, Any]], city: str, visits: List[Dict[str, Any]]) -> Optional[Dict[str, Any]]: + """Pick a technician for a city: prefer a qualified local technician, else the closest qualified candidate.""" + candidates = [t for t in technicians if _tech_ok_for_devices(t, visits)] + local = [t for t in candidates if t.get("home_base_city") == city] + if local: + return local[0] + ranked = sorted(candidates or technicians, key=lambda t: _city_distance_km(t.get("home_base_city") or "", city)) + return ranked[0] if ranked else None + +# ---------- Helpers: policy limits parsing & fallbacks ---------- + +def _extract_json_like(text: str) -> Optional[Dict[str, Any]]: + """Extract and parse the first balanced JSON object found in text; return None on failure.""" + if not text: + return None + start = text.find("{") + if start == -1: + return None + depth = 0 + for i in range(start, len(text)): + if text[i] == "{": + depth += 1 + elif text[i] == "}": + depth -= 1 + if depth == 0: + snippet = text[start:i+1] + try: + return json.loads(snippet) + except Exception: + return None + return None + +def _parse_manager_note(manager_note: str) -> Dict[str, Any]: + """Parse a free-text manager note into constraint overrides (hours, visits, distance, overtime, special requirements).""" + note = (manager_note or "").lower() + out: Dict[str, Any] = {} + m = re.search(r"(?:max(?:imum)?|up\s+to|allow(?:ed)?(?:\s+up\s+to)?|no\s+more\s+than|at\s+most)" + r"\s*([0-9]+(?:\.[0-9]+)?)\s*(?:h|hour(?:s)?)", note) + if m: +
out["max_work_hours"] = float(m.group(1)) + m = re.search(r"(?:max\s*)?([0-9]+)\s*(?:visits?|locations?|stops?|sites?)", note) + if m: + out["max_visits_per_route"] = int(m.group(1)) + m = re.search(r"([0-9]+(?:\.[0-9]+)?)\s*km", note) + if m: + out["max_distance_km_per_route"] = float(m.group(1)) + if any(k in note for k in ["overtime", "extra hours", "sla", "weekend", "extend hours", "longer day"]): + out["allow_overtime"] = True + + special_requirements = [] + if any(k in note for k in ["support", "help", "assist", "backup"]): + special_requirements.append("cross_city_support_requested") + if any(k in note for k in ["travel to", "go to", "visit", "send to"]): + for city in ["warszawa", "poznan", "wroclaw", "szczecin", "krakow", "gdansk"]: + if city in note: + special_requirements.append(f"travel_to_{city}") + if any(k in note for k in ["urgent", "asap", "immediately", "priority", "critical"]): + special_requirements.append("urgent_priority") + if any(k in note for k in ["short day", "shorter hours", "reduce hours", "early finish"]): + special_requirements.append("shorter_workday") + if special_requirements: + out["special_requirements"] = special_requirements + out["full_note"] = manager_note + return out + +def _fallback_policy_limits(manager_note: str = "") -> Dict[str, Any]: + """ + Get default routing constraints from config, optionally overridden by manager note. 
+ """ + limits = { + "max_work_hours": config.DEFAULT_MAX_WORK_HOURS, + "max_visits_per_route": config.DEFAULT_MAX_VISITS_PER_ROUTE, + "max_distance_km_per_route": config.DEFAULT_MAX_DISTANCE_KM, + "allow_overtime": False, + } + overrides = _parse_manager_note(manager_note) + limits.update({k: v for k, v in overrides.items() if v is not None}) + logger.warning("Using fallback policy limits: %s", {k: v for k, v in limits.items() if k != "full_note"}) + return limits + +def _derive_policy_limits_via_llm(manager_note: str = "") -> Dict[str, Any]: + """Internal helper: call get_calibration_rules via a tiny ReAct agent and return numeric limits.""" + tools = [get_calibration_rules] + agent_tmp = create_react_agent(llm, tools) + context = "" + if manager_note: + context = ( + "\n\nMANAGER REQUIREMENTS:\n" + f"{manager_note}\n\nIMPORTANT: If manager specifies limits, override default policy values." + ) + msg = ( + "Read corporate policy using get_calibration_rules. Extract numeric limits ONLY as JSON:\n" + '{"max_work_hours": , "max_visits_per_route": , "max_distance_km_per_route": , "allow_overtime": }\n' + "Respond with JSON only, no prose." 
+ context + ) + try: + res = agent_tmp.invoke({"messages": [HumanMessage(content=msg)]}) + raw = res["messages"][-1].content if isinstance(res, dict) else "" + try: + parsed = json.loads(raw) + except Exception: + parsed = _extract_json_like(raw) or {} + if not parsed: + # Naive fallback: scan the raw LLM output line by line for key/value pairs + cand: Dict[str, Any] = {} + for line in str(raw).splitlines(): + if ":" not in line: + continue + key, val = [x.strip().strip('",') for x in line.split(":", 1)] + k = key.strip('"').lower().replace(" ", "_") + if k in {"max_work_hours", "max_visits_per_route", "max_distance_km_per_route", "allow_overtime"}: + if k == "allow_overtime": + cand[k] = ("true" in val.lower()) or ("yes" in val.lower()) + elif "visits" in k: + cand[k] = int(re.findall(r"[0-9]+", val)[0]) + else: + cand[k] = float(re.findall(r"[0-9]+(?:\.[0-9]+)?", val)[0]) + parsed = cand + if manager_note: + note_data = _parse_manager_note(manager_note) + for k in ["max_work_hours", "max_visits_per_route", "max_distance_km_per_route", "allow_overtime"]: + if k in note_data and note_data[k] is not None: + parsed[k] = note_data[k] + if "special_requirements" in note_data: + parsed["special_requirements"] = note_data["special_requirements"] + parsed["full_note"] = note_data.get("full_note", manager_note) + return parsed or _fallback_policy_limits(manager_note) + except Exception as e: + logger.warning("Policy limits derivation failed or returned non-JSON: %s", e) + return _fallback_policy_limits(manager_note) + +# ---------- Helpers: weekend SLA date selection ---------- + +def _has_overdue(visits: List[Dict[str, Any]]) -> bool: + """Return True if any device across the visits is past its calibration due date.""" + for v in visits: + for d in v.get("devices", []): + if d.get("days_until_due", 0) < 0: + return True + return False + +def _choose_route_date(allow_overtime: bool, visits: List[Dict[str, Any]]) -> Tuple[str, bool, str]: + """Pick the route date: tomorrow, or Saturday only under an SLA overtime exception, else next Monday.""" + today = date.today() + nxt = today + timedelta(days=1) + is_weekend = nxt.weekday() >= 5 # 5=Sat, 6=Sun + note = "" + if is_weekend: + if nxt.weekday() == 5 and allow_overtime and
_has_overdue(visits): + note = "SLA weekend exception applied (Saturday) for OVERDUE devices." + return (nxt.strftime("%Y-%m-%d"), True, note) + delta = 7 - nxt.weekday() + monday = nxt + timedelta(days=delta) + return (monday.strftime("%Y-%m-%d"), False, "Shifted to Monday (no weekend work).") + return (nxt.strftime("%Y-%m-%d"), False, "") + +# ---------- Tools ---------- + +@tool +def analyze_equipment_status() -> Dict[str, Any]: + """LangChain tool: Pull equipment from Data Fabric and split into OVERDUE/URGENT/SCHEDULED/ACTIVE buckets.""" + try: + logger.info("Analyzing equipment status...") + records = uipath_client.entities.list_records(entity_key=config.EQUIPMENT_ENTITY_ID, start=0, limit=100) + today = datetime.now().date() + overdue, urgent, scheduled, active = [], [], [], [] + for record in records: + equipment_id = getattr(record, "equipmentId", None) + device_type = getattr(record, "deviceType", None) + clinic_id = getattr(record, "clinicId", None) + next_due_str = str(getattr(record, "nextCalibrationDue", "")).strip() or None + if not next_due_str or not equipment_id: + continue + try: + next_due = datetime.fromisoformat(next_due_str.split("T")[0].split(" ")[0]).date() + except Exception: + continue + days_until_due = (next_due - today).days + device_info = { + "equipment_id": equipment_id, + "device_type": device_type, + "clinic_id": clinic_id, + "next_due": str(next_due), + "days_until_due": days_until_due, + } + if days_until_due < 0: + overdue.append(device_info) + elif device_type == "Audiometer" and days_until_due <= 14: + urgent.append(device_info) + elif device_type == "Tympanometer" and days_until_due <= 7: + urgent.append(device_info) + elif device_type == "Audiometer" and 15 <= days_until_due <= 30: + scheduled.append(device_info) + elif device_type == "Tympanometer" and 8 <= days_until_due <= 21: + scheduled.append(device_info) + else: + active.append(device_info) + logger.info("Analysis: %d OVERDUE, %d URGENT, %d SCHEDULED", len(overdue), 
len(urgent), len(scheduled)) + return { + "total_equipment": len(records), + "overdue_count": len(overdue), + "urgent_count": len(urgent), + "scheduled_count": len(scheduled), + "active_count": len(active), + "overdue_devices": overdue[:20], + "urgent_devices": urgent[:20], + "scheduled_devices": scheduled[:20], + "analysis_date": str(today), + } + except Exception as e: + logger.error("Equipment analysis failed: %s", e) + return {"error": str(e)} + +@tool +def get_calibration_rules(query: str) -> str: + """LangChain tool: Retrieve policy fragments from Context Grounding retriever (returns plain text).""" + try: + docs = context_grounding.invoke(query) + if not docs: + return "No specific rules found. Use default thresholds." + rules_text = "\n\n".join([f"Document {i+1}:\n{doc.page_content}" for i, doc in enumerate(docs)]) + logger.info("Retrieved %d rule documents", len(docs)) + return rules_text + except Exception as e: + logger.error("Error retrieving rules: %s", e) + return "Error retrieving rules." 
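
The OVERDUE/URGENT/SCHEDULED thresholds enforced inside `analyze_equipment_status` can be exercised in isolation. A minimal sketch that restates the same per-device-type bucketing rules (an illustrative standalone function, not part of the tool itself):

```python
def classify(device_type: str, days_until_due: int) -> str:
    """Restate analyze_equipment_status's bucketing: thresholds vary by device type."""
    if days_until_due < 0:
        return "OVERDUE"                      # past the calibration deadline
    if device_type == "Audiometer" and days_until_due <= 14:
        return "URGENT"
    if device_type == "Tympanometer" and days_until_due <= 7:
        return "URGENT"
    if device_type == "Audiometer" and 15 <= days_until_due <= 30:
        return "SCHEDULED"
    if device_type == "Tympanometer" and 8 <= days_until_due <= 21:
        return "SCHEDULED"
    return "ACTIVE"

print(classify("Audiometer", -3))    # OVERDUE
print(classify("Tympanometer", 7))   # URGENT
print(classify("Audiometer", 20))    # SCHEDULED
```

Note the asymmetry: Tympanometers use tighter windows (7/21 days) than Audiometers (14/30 days), so the same `days_until_due` can land in different buckets depending on device type.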
+ +@tool +def query_clinics() -> List[Dict[str, Any]]: + """LangChain tool: Return list of clinics (id, name, address, city, geo, contacts, SLA).""" + try: + records = uipath_client.entities.list_records(entity_key=config.CLINICS_ENTITY_ID, start=0, limit=100) + clinics_list = [] + for record in records: + clinics_list.append({ + "clinic_id": getattr(record, "clinicId", None), + "clinic_name": getattr(record, "clinicName", None), + "address": getattr(record, "address", None), + "city": getattr(record, "city", None), + "postal_code": getattr(record, "postalCode", None), + "latitude": float(getattr(record, "latitude", "0") or 0), + "longitude": float(getattr(record, "longitude", "0") or 0), + "contact_person": getattr(record, "contactPerson", None), + "contact_email": getattr(record, "contactEmail", None), + "sla_hours": int(getattr(record, "slaHours", 72) or 72), + }) + logger.info("Retrieved %d clinics", len(clinics_list)) + return clinics_list + except Exception as e: + logger.error("Error querying clinics: %s", e) + return [] + +@tool +def query_technicians() -> List[Dict[str, Any]]: + """LangChain tool: Return list of technicians with specialization and home base.""" + try: + records = uipath_client.entities.list_records(entity_key=config.TECHNICIANS_ENTITY_ID, start=0, limit=100) + technicians_list = [] + for record in records: + technicians_list.append({ + "technician_id": getattr(record, "technicianId", None), + "technician_name": getattr(record, "technicianName", None), + "email": getattr(record, "email", None), + "phone": getattr(record, "phone", None), + "specialization": getattr(record, "specialization", None), + "home_base_city": getattr(record, "homeBaseCity", None), + }) + logger.info("Retrieved %d technicians", len(technicians_list)) + return technicians_list + except Exception as e: + logger.error("Error querying technicians: %s", e) + return [] + +@tool +def optimize_route(clinic_ids: List[str], technician_id: Optional[str] = None, city: str = "") 
-> Dict[str, Any]: + """LangChain tool: Build/optimize a driving route for given clinic IDs; returns km, hours, and a Google Maps URL.""" + if not gmaps: + return {"error": "Google Maps API not configured"} + try: + logger.info("Optimizing route for %d clinics in %s...", len(clinic_ids), city or "?") + all_clinics_records = uipath_client.entities.list_records(entity_key=config.CLINICS_ENTITY_ID, start=0, limit=100) + all_clinics = [{ + "clinic_id": getattr(r, "clinicId", None), + "latitude": float(getattr(r, "latitude", "0") or 0), + "longitude": float(getattr(r, "longitude", "0") or 0), + } for r in all_clinics_records] + clinics = [c for c in all_clinics if c["clinic_id"] in clinic_ids] + if len(clinics) < 1: + return {"error": "Need at least 1 clinic"} + + start_point = None + if technician_id: + tech_records = uipath_client.entities.list_records( + entity_key=config.TECHNICIANS_ENTITY_ID, start=0, limit=100 + ) + for tech in tech_records: + if getattr(tech, "technicianId", None) == technician_id: + home_city = getattr(tech, "homeBaseCity", None) + if home_city and home_city in config.CITY_COORDS: + start_point = config.CITY_COORDS[home_city] + logger.info("Using technician home base: %s", home_city) + break + + if start_point and city in config.CITY_COORDS: + cx, cy = config.CITY_COORDS[city] + dist_km = math.hypot(start_point[0] - cx, start_point[1] - cy) * 111.0 + if dist_km > 80.0: + start_point = (cx, cy) + logger.info("Home base far from cluster; using city centroid for %s as origin", city) + + if start_point: + origin = f"{start_point[0]},{start_point[1]}" + destination = origin + waypoints_coords = [f"{c['latitude']},{c['longitude']}" for c in clinics] + else: + origin = f"{clinics[0]['latitude']},{clinics[0]['longitude']}" + destination = f"{clinics[-1]['latitude']},{clinics[-1]['longitude']}" + waypoints_coords = [f"{c['latitude']},{c['longitude']}" for c in clinics[1:-1]] + + if waypoints_coords: + directions = gmaps.directions( + origin, destination, 
+ waypoints=waypoints_coords, + optimize_waypoints=True, mode="driving" + ) + else: + directions = gmaps.directions(origin, destination, mode="driving") + if not directions: + return {"error": "No route found"} + + route = directions[0] + total_distance_km = sum(leg["distance"]["value"] for leg in route["legs"]) / 1000 + total_duration_hours = sum(leg["duration"]["value"] for leg in route["legs"]) / 3600 + + if waypoints_coords and "waypoint_order" in route: + optimized_order = route["waypoint_order"] + optimized_waypoints = [waypoints_coords[i] for i in optimized_order] + all_waypoints = [origin] + optimized_waypoints + [destination] + logger.info("Using optimized waypoint order: %s", optimized_order) + else: + all_waypoints = [origin] + waypoints_coords + [destination] if waypoints_coords else [origin, destination] + + map_url = "https://www.google.com/maps/dir/" + "/".join(all_waypoints) + logger.info("Route optimized: %.1f km, %.1f h", total_distance_km, total_duration_hours) + return { + "total_distance_km": round(total_distance_km, 1), + "total_duration_hours": round(total_duration_hours, 2), + "starts_from_home": start_point is not None, + "route_map_url": map_url, + } + except Exception as e: + logger.error("Route optimization failed: %s", e) + return {"error": str(e)} + +@tool +def build_routing_plan( + devices_needing_service: Optional[Union[str, List[Dict[str, Any]]]] = None, + max_work_hours: Optional[float] = None, + max_visits_per_route: Optional[int] = None, + max_distance_km_per_route: Optional[float] = None, + allow_overtime: Optional[bool] = None, + manager_note: Optional[str] = None, +) -> Dict[str, Any]: + """LangChain tool: Build an optimized routing plan; enforces hours/visits/distance limits and optional overtime.""" + global LAST_ROUTING_PLAN + try: + if isinstance(devices_needing_service, str): + try: + devices_needing_service = json.loads(devices_needing_service) + logger.info("Parsed devices_needing_service from JSON string") + except 
Exception as e: + logger.warning("Failed to parse devices_needing_service JSON: %s", e) + devices_needing_service = None + + if not devices_needing_service: + logger.info("No devices provided, fetching from analyze_equipment_status...") + analysis = analyze_equipment_status.invoke({}) + devices_needing_service = analysis.get("overdue_devices", []) + analysis.get("urgent_devices", []) + if not devices_needing_service: + logger.warning("No overdue or urgent devices found") + empty_result = { + "routing_plan": [], + "total_routes": 0, + "total_devices": 0, + "total_distance_km": 0, + } + LAST_ROUTING_PLAN = empty_result + return empty_result + + logger.info("Building routing plan for %d devices...", len(devices_needing_service)) + all_clinics = query_clinics.invoke({}) + all_technicians = query_technicians.invoke({}) + if not all_clinics or not all_technicians: + return {"error": "Missing clinic or technician data"} + + clinic_device_map: Dict[str, Dict[str, Any]] = {} + for device in devices_needing_service: + cid = device["clinic_id"] + if cid not in clinic_device_map: + clinic_info = next((c for c in all_clinics if c["clinic_id"] == cid), None) + if clinic_info: + clinic_device_map[cid] = {"clinic": clinic_info, "devices": []} + if cid in clinic_device_map: + clinic_device_map[cid]["devices"].append(device) + + city_clusters: Dict[str, List[Dict[str, Any]]] = {} + for cid, data in clinic_device_map.items(): + city = data["clinic"]["city"] + city_clusters.setdefault(city, []).append({ + "clinic_id": cid, + "clinic": data["clinic"], + "devices": data["devices"], + }) + logger.info("Created %d city clusters: %s", len(city_clusters), list(city_clusters.keys())) + + routing_plan: List[Dict[str, Any]] = [] + + for city, visits in city_clusters.items(): + logger.info("Processing city %s with %d visits (%d devices total)", + city, len(visits), sum(len(v.get("devices", [])) for v in visits)) + + assigned_tech = _pick_technician_for_city(all_technicians, city, visits) + if 
not assigned_tech: + continue + + clinic_ids = [v["clinic_id"] for v in visits] + route_result = optimize_route.invoke({ + "clinic_ids": clinic_ids, + "technician_id": assigned_tech["technician_id"], + "city": city, + }) + if "error" in route_result: + logger.warning("Route optimization failed for %s: %s; using fallback", city, route_result["error"]) + n = max(1, len(clinic_ids)) + est_km = n * 8.0 + route_result = { + "total_distance_km": est_km, + "total_duration_hours": round(est_km / 40.0, 2), + "starts_from_home": True, + "route_map_url": "https://www.google.com/maps/search/" + (city or "").replace(" ", "+"), + } + + current_visits = list(visits) + + # Calculate initial work load before expansion + initial_travel = float(route_result.get("total_duration_hours", 0)) + initial_service = sum(_estimate_service_hours_for_visit(x) for x in current_visits) + initial_total = initial_travel + initial_service + + expansion_applied = False + + # EXPANSION: If overtime allowed and there is headroom, try to include all city visits + if allow_overtime and max_work_hours and max_work_hours > 8.0: + hours_available = max_work_hours - initial_total + if hours_available > 1.0: + logger.info("Overtime allowed (%sh). Current: %.2fh, Available: %.2fh. 
Attempting expansion for %s.", + max_work_hours, initial_total, hours_available, city) + expanded_visits = list(visits) + expanded_ids = [v["clinic_id"] for v in expanded_visits] + expanded_route = optimize_route.invoke({ + "clinic_ids": expanded_ids, + "technician_id": assigned_tech["technician_id"], + "city": city, + }) + expanded_travel = float(expanded_route.get("total_duration_hours", 0)) + expanded_service = sum(_estimate_service_hours_for_visit(x) for x in expanded_visits) + expanded_total = expanded_travel + expanded_service + if expanded_total <= max_work_hours and expanded_total > initial_total + 0.5: + current_visits = expanded_visits + route_result = expanded_route + expansion_applied = True + logger.info("EXPANSION SUCCESS: %d -> %d visits, %.2fh -> %.2fh (limit %sh)", + len(visits), len(current_visits), initial_total, expanded_total, max_work_hours) + elif expanded_total > max_work_hours: + logger.info("EXPANSION FAILED: %d visits would be %.2fh, exceeds %sh limit.", + len(expanded_visits), expanded_total, max_work_hours) + else: + logger.info("EXPANSION SKIPPED: No meaningful improvement (%.2fh -> %.2fh)", + initial_total, expanded_total) + else: + logger.info("EXPANSION SKIPPED: Insufficient headroom (%.2fh used of %sh)", + initial_total, max_work_hours) + + # Apply visit count limit + if max_visits_per_route is not None and len(current_visits) > max_visits_per_route: + current_visits = current_visits[:max_visits_per_route] + + # Apply distance limit + if max_distance_km_per_route is not None and route_result.get("total_distance_km", 0) > max_distance_km_per_route: + while current_visits and route_result.get("total_distance_km", 0) > max_distance_km_per_route: + current_visits = current_visits[:-1] + ids = [v["clinic_id"] for v in current_visits] + if ids: + route_result = optimize_route.invoke({ + "clinic_ids": ids, + "technician_id": assigned_tech["technician_id"], + "city": city, + }) + else: + break + + # Apply hours limit if expansion didn't 
already validate + if max_work_hours is not None and not expansion_applied: + tmp, chosen = [], [] + for v in current_visits: + tmp.append(v) + ids = [x["clinic_id"] for x in tmp] + rtmp = optimize_route.invoke({ + "clinic_ids": ids, + "technician_id": assigned_tech["technician_id"], + "city": city, + }) + travel_h = float(rtmp.get("total_duration_hours", 0)) + service_h = sum(_estimate_service_hours_for_visit(x) for x in tmp) + if travel_h + service_h <= max_work_hours: + chosen = list(tmp) + else: + break + if chosen: + current_visits = chosen + ids = [v["clinic_id"] for v in current_visits] + route_result = optimize_route.invoke({ + "clinic_ids": ids, + "technician_id": assigned_tech["technician_id"], + "city": city, + }) + + travel_hours = float(route_result.get("total_duration_hours", 0)) + service_hours = sum(_estimate_service_hours_for_visit(x) for x in current_visits) + total_work_hours = travel_hours + service_hours + + planned_date, is_weekend, weekend_note = _choose_route_date(bool(allow_overtime), current_visits) + + routing_plan.append({ + "city": city, + "visits": current_visits, + "technician": assigned_tech, + "route": route_result, + "travel_hours": round(travel_hours, 2), + "service_hours": round(service_hours, 2), + "total_work_hours": round(total_work_hours, 2), + "total_devices": sum(len(v["devices"]) for v in current_visits), + "manager_note": manager_note or "", + "allow_overtime": bool(allow_overtime), + "route_date": planned_date, + "route_date_is_weekend": is_weekend, + "route_date_note": weekend_note, + }) + + logger.info("Created %d routes", len(routing_plan)) + result = { + "routing_plan": routing_plan, + "total_routes": len(routing_plan), + "total_devices": sum(r["total_devices"] for r in routing_plan), + "total_distance_km": sum(r["route"]["total_distance_km"] for r in routing_plan), + } + LAST_ROUTING_PLAN = result + return result + except Exception as e: + logger.error("Routing plan failed: %s", e) + return {"error": str(e)} + 
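
`build_routing_plan` enforces the hours cap with a greedy prefix search: it grows the visit list one stop at a time, re-costing the route after each addition, and keeps the longest prefix whose travel-plus-service estimate still fits. A standalone sketch of that loop, where the toy `cost` function stands in for the real `optimize_route` call plus `_estimate_service_hours_for_visit`:

```python
from typing import Callable, List

def longest_prefix_within(visits: List[str],
                          cost: Callable[[List[str]], float],
                          max_hours: float) -> List[str]:
    """Greedy prefix search mirroring build_routing_plan's hours-limit check."""
    chosen: List[str] = []
    tmp: List[str] = []
    for v in visits:
        tmp.append(v)
        if cost(tmp) <= max_hours:   # re-cost the grown route each step
            chosen = list(tmp)
        else:
            break                    # first over-budget prefix ends the search
    return chosen

# Toy cost model: 1.5h service per visit plus 0.5h travel per leg.
cost = lambda vs: 1.5 * len(vs) + 0.5 * max(0, len(vs) - 1)
print(longest_prefix_within(["A", "B", "C", "D"], cost, 6.0))  # ['A', 'B', 'C']
```

One difference from the sketch: when even a single visit exceeds the cap, the production tool keeps the original visit list unchanged (the `if chosen:` guard), whereas this sketch returns an empty list.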
+@tool +def request_manager_approval(routing_plan: Dict[str, Any]) -> Dict[str, Any]: + """LangChain tool: Prepare summary for HITL approval (actual Action Center call happens in the HITL node).""" + try: + routes = routing_plan.get("routing_plan", []) + if not routes: + return {"error": "No routes in plan"} + logger.info("Preparing %d approval tasks (will be created in HITL node)...", len(routes)) + task_ids = [str(uuid.uuid4()) for _ in routes] + return { + "task_ids": task_ids, + "total_tasks": len(task_ids), + "action_center_url": "https://cloud.uipath.com/[your-org]/[your-tenant]/actioncenter_/tasks", + "message": f"Prepared {len(task_ids)} approval tasks. Check Action Center.", + } + except Exception as e: + logger.error("Approval request failed: %s", e) + return {"error": str(e)} + +@tool +def create_service_orders(approved_routes: List[Dict[str, Any]]) -> Dict[str, Any]: + """LangChain tool: Create (mock) service orders per approved route; returns counts for reporting.""" + try: + logger.info("Creating service orders for %d routes...", len(approved_routes)) + total_orders = sum(r["total_devices"] for r in approved_routes) + logger.info("Created %d service orders", total_orders) + return { + "total_orders": total_orders, + "orders_per_route": [r["total_devices"] for r in approved_routes], + "message": f"Successfully created {total_orders} service orders in Data Fabric", + } + except Exception as e: + logger.error("Service order creation failed: %s", e) + return {"error": str(e)} + +@tool +def trigger_notification_workflow(service_orders: Dict[str, Any]) -> Dict[str, Any]: + """LangChain tool: Trigger RPA workflow that sends email notifications (delegated to UiPath Orchestrator).""" + try: + total_orders = service_orders.get("total_orders", 0) + logger.info("Triggering notification workflow for %d orders...", total_orders) + return { + "workflow_triggered": True, + "total_notifications": total_orders, + "message": f"RPA workflow will send {total_orders} email 
notifications", + } + except Exception as e: + logger.error("Notification trigger failed: %s", e) + return {"error": str(e)} + +# ---------- System Prompt ---------- + +SYSTEM_PROMPT = """You are a Calibration Dispatcher Agent for medical equipment routing. + +TASK: Create optimized calibration routes by analyzing equipment status and applying policy constraints. + +EXECUTION STEPS: + +1. ANALYZE EQUIPMENT + Call analyze_equipment_status() to identify OVERDUE and URGENT devices. + OVERDUE: days_until_due < 0 (past calibration deadline) + URGENT: days_until_due <= 14 (Audiometers) or <= 7 (Tympanometers) + +2. RETRIEVE POLICIES + Call get_calibration_rules(query="routing constraints work hours visits distance overtime") + This retrieves company policy documents via Context Grounding. + Extract from policy documents: + - Maximum work hours per technician per day + - Maximum visits per route + - Maximum travel distance per route (km) + - Overtime authorization rules for SLA compliance + +3. BUILD ROUTING PLAN + Call build_routing_plan() with ONLY constraint parameters: + + IMPORTANT: Do NOT pass devices_needing_service parameter - it will be fetched automatically. + + CRITICAL: If manager_note contains explicit constraints (e.g., "max 6 hours"), + use those exact values - manager instructions override all other rules including OVERDUE emergency protocols. 
+ + Example call for OVERDUE devices with no manager constraints: + build_routing_plan( + max_work_hours=12.0, + max_visits_per_route=5, + max_distance_km_per_route=200.0, + allow_overtime=True, + manager_note="" + ) + + Example call when manager specifies "max 6 hours": + build_routing_plan( + max_work_hours=6.0, + max_visits_per_route=4, + max_distance_km_per_route=200.0, + allow_overtime=False, + manager_note="max 6 hours" + ) + + For routes with OVERDUE devices (and no manager constraints): + - Set allow_overtime=True + - Extend max_work_hours to 10-12 hours + - Prioritize immediate service to avoid regulatory violations + + For URGENT-only routes: + - Apply standard policy limits (typically 8 hours, 4 visits, 200km) + - Follow normal scheduling procedures + + Use your reasoning to balance: SLA compliance, technician workload, travel efficiency. + +KEY CONSTRAINTS: +- Manager instructions are ABSOLUTE PRIORITY (override everything) +- Respect technician specialization (Audiometry, Tympanometry, All) +- Minimize total travel distance +- Never exceed daily capacity limits from policy or manager +- OVERDUE devices require immediate action (24-48 hour response) + +OUTPUT: +Provide brief summary: X devices found (Y overdue, Z urgent), constraints applied, N routes created. 
+ +Current date: {current_date} +""" + +# ---------- State / Nodes ---------- + +class WorkflowState(BaseModel): + agent_messages: list = [] + agent_completed: bool = False + routing_plan: Dict[str, Any] = {} + current_route_index: int = 0 + approved_routes: List[Dict[str, Any]] = [] + rejected_routes: List[Dict[str, Any]] = [] + workflow_complete: bool = False + + # Revision tracking for ChangesRequested loop + revision_in_progress: bool = False + current_revision_iteration: int = 0 + pending_manager_note: str = "" + max_revision_iterations: int = config.MAX_REVISION_ITERATIONS + +def _build_agent_comments(route: Dict[str, Any], manager_note: str = "") -> str: + city = route.get("city", "?") + dist = route.get("route", {}).get("total_distance_km", 0.0) + trav = float(route.get("travel_hours", 0.0)) + serv = float(route.get("service_hours", 0.0)) + work = float(route.get("total_work_hours", 0.0)) + tech = route.get("technician", {}) or {} + tech_name = tech.get("technician_name", "N/A") + devices = sum(len(v.get("devices", [])) for v in route.get("visits", [])) + + lines = [ + "AI Agent Analysis:", + f"- Route optimized for minimal travel time ({trav:.2f}h) in {city}", + f"- Technician {tech_name} assigned (specialization matched)", + f"- Total work time: {work:.2f}h (service {serv:.2f}h + travel {trav:.2f}h), distance {dist} km", + f"- {devices} devices scheduled", + ] + if route.get("route_date_is_weekend"): + lines.append(f"- Planned for Saturday due to SLA exception: {route.get('route_date')}") + if route.get("allow_overtime"): + lines.append("- Overtime applied due to SLA-critical OVERDUE devices.") + if manager_note: + lines.append(f"- Manager note applied: {manager_note}") + parsed = _parse_manager_note(manager_note) + if "special_requirements" in parsed: + req_descriptions = { + "cross_city_support_requested": "Cross-city support/assistance identified", + "urgent_priority": "Urgent priority flagged", + } + for req in parsed["special_requirements"]: + if 
req.startswith("travel_to_"): + city_name = req.replace("travel_to_", "").capitalize() + lines.append(f" * Travel requirement: {city_name}") + elif req in req_descriptions: + lines.append(f" * {req_descriptions[req]}") + lines += [ + "", + "Grounding references:", + "• Service Procedures v4.1 – standard durations (Audiometer 2.0h, Tympanometer 1.5h)", + "• Routing Guidelines v3.2 – daily capacity limits and waypoint optimization", + "• Calibration Rules v2.1 – OVERDUE/URGENT thresholds and SLA windows", + "Decision rationale: minimized distance while staying within documented limits.", + ] + return "\n".join(lines) + +# ---------- helpers ---------- + +def _collect_devices_overdue_and_urgent() -> Tuple[List[Dict[str, Any]], bool]: + analysis = analyze_equipment_status.invoke({}) + overdue = analysis.get("overdue_devices", []) + urgent = analysis.get("urgent_devices", []) + devices = overdue + urgent + has_overdue = len(overdue) > 0 + return devices, has_overdue + +def _compute_limits_for_devices(has_overdue: bool, manager_note: str = "") -> Dict[str, Any]: + """ + Compute routing constraints, with automatic override for OVERDUE devices. + Respects manager explicit instructions when provided. 
+ """ + limits = _derive_policy_limits_via_llm(manager_note) + parsed_note = _parse_manager_note(manager_note) + + manager_set_hours = "max_work_hours" in parsed_note + manager_set_visits = "max_visits_per_route" in parsed_note + manager_wants_shorter = "shorter_workday" in parsed_note.get("special_requirements", []) + + if manager_set_hours or manager_set_visits or manager_wants_shorter: + logger.info("Manager explicit instruction detected, respecting manager limits") + return limits + + if has_overdue: + if not limits.get("allow_overtime"): + logger.info("Auto-enabling overtime for OVERDUE devices (no manager constraint)") + limits["allow_overtime"] = True + if limits.get("max_work_hours", config.DEFAULT_MAX_WORK_HOURS) <= config.DEFAULT_MAX_WORK_HOURS: + limits["max_work_hours"] = config.OVERDUE_MAX_WORK_HOURS + logger.info("Auto-extended to %sh for OVERDUE devices (no manager constraint)", + config.OVERDUE_MAX_WORK_HOURS) + + return limits + +def _plan_with_limits(devices: List[Dict[str, Any]], limits: Dict[str, Any], manager_note: str = "") -> Dict[str, Any]: + return build_routing_plan.invoke({ + "devices_needing_service": devices, + "max_work_hours": limits.get("max_work_hours"), + "max_visits_per_route": limits.get("max_visits_per_route"), + "max_distance_km_per_route": limits.get("max_distance_km_per_route"), + "allow_overtime": limits.get("allow_overtime", False), + "manager_note": manager_note, + }) + +# ---------- Nodes ---------- + +def run_agent_node(state: WorkflowState) -> WorkflowState: + logger.info("=" * 60) + logger.info("PHASE 1: AGENT ANALYSIS & ROUTING") + logger.info("=" * 60) + + current_date = datetime.now().strftime("%Y-%m-%d") + system_prompt = SYSTEM_PROMPT.format(current_date=current_date) + + tools = [analyze_equipment_status, get_calibration_rules, build_routing_plan] + agent_local = create_react_agent(llm, tools) + + user_request = f"""{system_prompt} +Please execute the process above exactly with tools. 
Then summarize briefly.""" + + def _fallback_plan(path_label: str) -> Dict[str, Any]: + logger.warning("Agent did not produce a routing plan (%s). Falling back to deterministic path.", path_label) + devices, has_overdue = _collect_devices_overdue_and_urgent() + limits = _compute_limits_for_devices(has_overdue) + return _plan_with_limits(devices, limits, manager_note="") + + try: + result = agent_local.invoke({"messages": [HumanMessage(content=user_request)]}) + final_message = result["messages"][-1].content + logger.info("\nAgent response:\n%s\n", final_message) + + routing_plan = LAST_ROUTING_PLAN if LAST_ROUTING_PLAN else _fallback_plan("empty result") + return state.model_copy(update={ + "agent_messages": result["messages"], + "agent_completed": True, + "routing_plan": routing_plan, + }) + except Exception as e: + logger.error("Agent failed: %s", e) + import traceback + logger.error(traceback.format_exc()) + routing_plan = _fallback_plan("exception") + return state.model_copy(update={"agent_completed": True, "routing_plan": routing_plan}) + +def approval_hitl_node(state: WorkflowState) -> Command: + routes = state.routing_plan.get("routing_plan", []) + if state.current_route_index >= len(routes): + return Command(update={}) + + route = routes[state.current_route_index] + + # Local dev mode – skip Action Center and auto-approve the route + if config.AUTO_APPROVE_IN_LOCAL: + logger.info( + "AUTO_APPROVE_IN_LOCAL is enabled - auto-approving route %s (%d/%d)", + route.get("city"), + state.current_route_index + 1, + len(routes), + ) + trigger_rpa_for_route(route) + return Command(update={ + "approved_routes": state.approved_routes + [route], + "current_route_index": state.current_route_index + 1, + "revision_in_progress": False, + "current_revision_iteration": 0, + "pending_manager_note": "", + }) + + if state.revision_in_progress and state.current_revision_iteration > 0: + logger.info("PHASE 2: HITL Approval (%d/%d) - Revision %d for %s", + state.current_route_index 
+ 1, len(routes), + state.current_revision_iteration, route["city"]) + iteration_context = f" (Revision {state.current_revision_iteration}/{state.max_revision_iterations})" + else: + logger.info("PHASE 2: HITL Approval (%d/%d) for %s", + state.current_route_index + 1, len(routes), route["city"]) + iteration_context = "" + + visit_details = "\n".join([ + f"Visit {i+1}: {v['clinic']['clinic_name']} ({len(v['devices'])} devices)" + for i, v in enumerate(route["visits"]) + ]) + + agent_comments = _build_agent_comments( + route, + manager_note=state.pending_manager_note if state.revision_in_progress else "" + ) + + action_data = interrupt(CreateTask( + app_name="Routeapprovalform", + title=f"Route{iteration_context} - {route['city']} - {len(route['visits'])} visits", + data={ + "City": route["city"], + "RouteDate": route.get("route_date"), + "TechnicianName": route["technician"]["technician_name"], + "TotalVisits": len(route["visits"]), + "TotalDistanceKm": route["route"]["total_distance_km"], + "RouteMapUrl": route["route"]["route_map_url"], + "VisitDetails": visit_details, + "TotalServiceHours": route.get("service_hours"), + "TotalTravelHours": route.get("travel_hours"), + "TotalWorkHours": route.get("total_work_hours"), + "AgentComments": agent_comments, + }, + app_version=1, + app_folder_path=config.UIPATH_FOLDER_PATH, + )) + + decision = (action_data.get(config.APP_FIELD_SELECTED_OUTCOME) or "").strip() + manager_note = (action_data.get(config.APP_FIELD_MANAGER_COMMENTS) or "").strip() + if not decision: + logger.warning("SelectedOutcome empty; defaulting to Approved") + decision = "Approved" + + update_dict = {} + + if decision == "Approved": + logger.info("Approved: %s (after %d revision(s))", route["city"], state.current_revision_iteration) + trigger_rpa_for_route(route) + update_dict = { + "approved_routes": state.approved_routes + [route], + "current_route_index": state.current_route_index + 1, + "revision_in_progress": False, + "current_revision_iteration": 
0, + "pending_manager_note": "", + } + + elif decision == "Rejected": + logger.info("Rejected: %s", route["city"]) + update_dict = { + "rejected_routes": state.rejected_routes + [route], + "current_route_index": state.current_route_index + 1, + "revision_in_progress": False, + "current_revision_iteration": 0, + "pending_manager_note": "", + } + + elif decision == "ChangesRequested": + next_iteration = state.current_revision_iteration + 1 + + if next_iteration > state.max_revision_iterations: + logger.warning("Max revision iterations (%d) reached for %s. Marking as rejected.", + state.max_revision_iterations, route["city"]) + update_dict = { + "rejected_routes": state.rejected_routes + [route], + "current_route_index": state.current_route_index + 1, + "revision_in_progress": False, + "current_revision_iteration": 0, + "pending_manager_note": "", + } + else: + logger.info("Changes requested (iteration %d/%d). Manager note: %s", + next_iteration, state.max_revision_iterations, manager_note) + + # Get ALL devices in this city + city_name = route["city"] + logger.info("ChangesRequested for %s: fetching all devices in city", city_name) + analysis = analyze_equipment_status.invoke({}) + all_devices = analysis.get("overdue_devices", []) + analysis.get("urgent_devices", []) + all_clinics = query_clinics.invoke({}) + clinics_in_city = [c for c in all_clinics if c.get("city") == city_name] + clinic_ids_in_city = {c["clinic_id"] for c in clinics_in_city} + devices_this_city = [d for d in all_devices if d.get("clinic_id") in clinic_ids_in_city] + + logger.info("Found %d clinics in %s with %d total devices requiring service", + len(clinics_in_city), city_name, len(devices_this_city)) + + limits = _compute_limits_for_devices(has_overdue=len([d for d in devices_this_city if d.get("days_until_due", 1) < 0]) > 0, + manager_note=manager_note) + + revised_plan = _plan_with_limits(devices_this_city, limits, manager_note) + + if revised_plan.get("routing_plan"): + revised_route = 
revised_plan["routing_plan"][0] + logger.info("Route regenerated successfully for %s", revised_route["city"]) + updated_routes = list(routes) + updated_routes[state.current_route_index] = revised_route + updated_routing_plan = {**state.routing_plan, "routing_plan": updated_routes} + update_dict = { + "routing_plan": updated_routing_plan, + "revision_in_progress": True, + "current_revision_iteration": next_iteration, + "pending_manager_note": manager_note, + } + else: + logger.error("Failed to regenerate route for %s. Marking as rejected.", route["city"]) + update_dict = { + "rejected_routes": state.rejected_routes + [route], + "current_route_index": state.current_route_index + 1, + "revision_in_progress": False, + "current_revision_iteration": 0, + "pending_manager_note": "", + } + + else: + logger.warning("Unknown decision '%s' for %s. Treating as rejected.", decision, route["city"]) + update_dict = { + "rejected_routes": state.rejected_routes + [route], + "current_route_index": state.current_route_index + 1, + "revision_in_progress": False, + "current_revision_iteration": 0, + "pending_manager_note": "", + } + + return Command(update=update_dict) + +def trigger_rpa_for_route(route: Dict[str, Any]) -> bool: + """ + Post-approval side effects for an approved route: + - Email notifications via MCP tool 'Send_Calibration_Notifications' (fallback to classic invoke) + - Slack notification via MCP tool 'Send_Slack_Notification' + - Data Fabric record insert via MCP tool 'AddServiceOrder' + The bridge handles async/sync differences safely. 
+ """ + try: + logger.info("Preparing to trigger post-approval actions for %s", route.get("city")) + # ---------------- Build EMAIL payload (same schema as before) ---------------- + route_data: Dict[str, Any] = { + "City": route.get("city"), + "TechnicianName": route.get("technician", {}).get("technician_name"), + "TechnicianEmail": route.get("technician", {}).get("email"), + "RouteDate": route.get("route_date"), + "TotalVisits": len(route.get("visits", [])), + "TotalDistanceKm": route.get("route", {}).get("total_distance_km"), + "RouteMapUrl": route.get("route", {}).get("route_map_url"), + "Visits": [], + } + for i, visit in enumerate(route.get("visits", []), 1): + visit_data = { + "VisitNumber": i, + "ClinicName": visit.get("clinic", {}).get("clinic_name"), + "ClinicEmail": visit.get("clinic", {}).get("contact_email"), + "ClinicAddress": visit.get("clinic", {}).get("address"), + "Devices": [ + { + "EquipmentId": d.get("equipment_id"), + "DeviceType": d.get("device_type"), + "Status": "OVERDUE" if (d.get("days_until_due", 0) < 0) else "URGENT", + } + for d in visit.get("devices", []) + ], + } + route_data["Visits"].append(visit_data) + + from mcp_bridge import send_calibration_notifications, send_slack_notification, add_service_order + + # ---------------- EMAIL via MCP (or classic fallback) ---------------- + ok_mail = send_calibration_notifications(route_data) + if ok_mail: + logger.info("Email notifications dispatched via MCP/classic.") + else: + logger.error("Email workflow failed via MCP/classic") + + # ---------------- SLACK via MCP ---------------- + clinics_human: List[str] = [] + for v in route.get("visits", []): + c = v.get("clinic", {}) or {} + clinics_human.append(f"{c.get('clinic_name')} - {c.get('address')}") + slack_payload: Dict[str, Any] = { + "technician_name": route.get("technician", {}).get("technician_name"), + "technician_email": route.get("technician", {}).get("email"), + "visit_count": len(route.get("visits", [])), + "city": 
route.get("city"), + "route_date": route.get("route_date"), + "route_map_url": route.get("route", {}).get("route_map_url"), + "total_distance_km": route.get("route", {}).get("total_distance_km"), + "clinics": clinics_human, + } + ok_slack = send_slack_notification(slack_payload) + if ok_slack: + logger.info("Slack notification sent.") + else: + logger.warning("Slack notification tool returned False (check MCP tool + InArgument).") + + # ---------------- DATA FABRIC via MCP ---------------- + from datetime import datetime as _dt + first_visit = (route.get("visits") or [{}])[0] if route.get("visits") else {} + first_clinic = first_visit.get("clinic", {}) if first_visit else {} + first_device = (first_visit.get("devices") or [{}])[0] if first_visit else {} + technician_id = route.get("technician", {}).get("technician_id") or route.get("technician", {}).get("id") or "TECH-UNKNOWN" + est_hours = route.get("total_work_hours") or (route.get("service_hours", 0.0) or 0.0) + (route.get("travel_hours", 0.0) or 0.0) + order_id = f"ORD-{_dt.now().strftime('%Y%m%d')}-{str(uuid.uuid4())[:6].upper()}" + entity_record: Dict[str, Any] = { + "orderId": order_id, + "clinicId": first_clinic.get("clinic_id") or first_clinic.get("id") or "CLI-UNKNOWN", + "equipmentId": first_device.get("equipment_id") or "EQ-UNKNOWN", + "technicianId": technician_id, + "scheduledDate": route.get("route_date"), + "routeSequence": 1, + "estimatedDurationHours": float(est_hours or 0.0), + "status": "Approved", + "priority": 1, + "notes": "Agent auto-created after manager approval.", + "routeMapUrl": route.get("route", {}).get("route_map_url"), + "totalDistanceKm": route.get("route", {}).get("total_distance_km"), + "approvedBy": config.APPROVER_EMAIL, + "createdDate": _dt.now().astimezone().isoformat(timespec="seconds"), + } + ok_entity = add_service_order(entity_record) + if ok_entity: + logger.info("Service order entity created: %s", entity_record.get("orderId")) + else: + logger.warning("AddServiceOrder 
tool returned False (check MCP tool + InArgument).") + + # Consider success if at least email went out (Slack & DF are auxiliary) + return bool(ok_mail) + except Exception as e: + logger.error("Post-approval triggers failed for %s: %s", route.get("city"), e, exc_info=True) + return False + +def summary_node(state: WorkflowState) -> WorkflowState: + logger.info("=" * 60) + logger.info("WORKFLOW COMPLETE") + logger.info("Approved: %d", len(state.approved_routes)) + logger.info("Rejected: %d", len(state.rejected_routes)) + logger.info("=" * 60) + return state.model_copy(update={"workflow_complete": True}) + +def should_start_approvals(state: WorkflowState) -> str: + routes = state.routing_plan.get("routing_plan", []) + return "approve" if routes else "end" + +def should_continue_approvals(state: WorkflowState) -> str: + if state.revision_in_progress: + logger.info("Revision in progress, looping back to approval for route %d", state.current_route_index + 1) + return "next" + total_routes = len(state.routing_plan.get("routing_plan", [])) + return "next" if state.current_route_index < total_routes else "finish" + +# ---------- Graph ---------- + +graph = StateGraph(WorkflowState) +graph.add_node("agent", run_agent_node) +graph.add_node("approval", approval_hitl_node) +graph.add_node("summary", summary_node) +graph.set_entry_point("agent") + +graph.add_conditional_edges("agent", should_start_approvals, {"approve": "approval", "end": "summary"}) +graph.add_conditional_edges("approval", should_continue_approvals, {"next": "approval", "finish": "summary"}) +graph.add_edge("summary", END) + +agent = graph.compile() + +# ---------- Main ---------- + +if __name__ == "__main__": + logger.info("Starting calibration dispatcher agent...") + initial_state = WorkflowState() + _ = agent.invoke(initial_state) + logger.info("Done!") \ No newline at end of file diff --git a/samples/calibration-dispatcher-agent/mcp_bridge.py b/samples/calibration-dispatcher-agent/mcp_bridge.py new file mode
100644 index 00000000..1c7b6ae6 --- /dev/null +++ b/samples/calibration-dispatcher-agent/mcp_bridge.py @@ -0,0 +1,316 @@ +# mcp_bridge.py +""" +MCP Bridge for UiPath RPA Workflow Integration + +This module provides a safe async-to-sync bridge for calling MCP tools from +LangChain/LangGraph agents. It handles: +- MCP client session management with auto-reconnect +- Async/sync compatibility for LangGraph nodes +- Fallback to classic UiPath process invocation +- Tool discovery and invocation with proper error handling + +For configuration, see config.py +""" +from __future__ import annotations + +import json +import os +import asyncio +import logging +from typing import Dict, Any, Optional, List +from threading import Thread +from concurrent.futures import ThreadPoolExecutor + +# Environment variables +try: + from dotenv import load_dotenv # type: ignore + load_dotenv() +except Exception: + pass + +# Import centralized configuration +import config + +# UiPath SDK +from uipath.platform import UiPath + +# MCP client + LangChain +from mcp import ClientSession +from mcp.client.streamable_http import streamablehttp_client +from langchain_mcp_adapters.tools import load_mcp_tools + +# ---------- Logging ---------- +logging.basicConfig( + level=getattr(logging, config.LOG_LEVEL, logging.INFO), + format=config.LOG_FORMAT +) +log = logging.getLogger("mcp-bridge") + +# ---------- Config ---------- +USE_MCP = config.USE_MCP +MCP_SERVER_URL = config.MCP_SERVER_URL +FOLDER_PATH = config.UIPATH_FOLDER_PATH + +# InArgument names from config +ARG_EMAIL = config.MCP_ARG_EMAIL +ARG_SLACK = config.MCP_ARG_SLACK +ARG_ENTITY = config.MCP_ARG_ENTITY + + +# ---------- UiPath SDK client ---------- +_uipath_client: Optional[UiPath] = None + +def get_uipath_client() -> UiPath: + global _uipath_client + if _uipath_client is None: + _uipath_client = UiPath() + return _uipath_client + +def get_access_token_from_sdk() -> Optional[str]: + try: + client = get_uipath_client() + api_client = 
getattr(client, "api_client", None) + headers = getattr(api_client, "default_headers", {}) if api_client else {} + auth = headers.get("Authorization", "") + if isinstance(auth, str) and auth.startswith("Bearer "): + token = auth.replace("Bearer ", "", 1) + if token: + return token + except Exception as e: + log.debug("Could not read token from SDK: %s", e) + return None + +def get_access_token() -> Optional[str]: + return os.getenv("UIPATH_ACCESS_TOKEN") or get_access_token_from_sdk() + + +_bg_loop: Optional[asyncio.AbstractEventLoop] = None +_bg_thread: Optional[Thread] = None + +def _ensure_bg_loop(): + global _bg_loop, _bg_thread + if _bg_loop and _bg_loop.is_running(): + return + _bg_loop = asyncio.new_event_loop() + def _runner(): + asyncio.set_event_loop(_bg_loop) # Required for anyio compatibility + _bg_loop.run_forever() + _bg_thread = Thread(target=_runner, name="mcp-bg-loop", daemon=True) + _bg_thread.start() + +def _run_coro_sync(coro): + _ensure_bg_loop() + fut = asyncio.run_coroutine_threadsafe(coro, _bg_loop) # type: ignore[arg-type] + return fut.result() + +# ---------- MCP session ---------- +_http_cm = None # async context manager for the MCP transport +_session_read = None +_session_write = None +_transport = None +_session: Optional[ClientSession] = None +_tools_cache: List = [] +_session_lock = asyncio.Lock() + +async def _open_session() -> ClientSession: + """Open a new MCP session.""" + global _http_cm, _session_read, _session_write, _transport, _session + token = get_access_token() + headers = {"Authorization": f"Bearer {token}"} if token else {} + if not MCP_SERVER_URL: + raise RuntimeError("MCP_SERVER_URL is not set.
Provide it from Orchestrator (MCP Servers).") + + + _http_cm = streamablehttp_client(url=MCP_SERVER_URL, headers=headers, timeout=90) + + _session_read, _session_write, _transport = await _http_cm.__aenter__() + try: + sid = _transport.get_session_id() if hasattr(_transport, "get_session_id") else "unknown" + log.info("Received session ID: %s", sid) + except Exception: + log.info("Received session ID: unknown") + + _session = ClientSession(_session_read, _session_write) + await _session.__aenter__() + await _session.initialize() + log.info("Negotiated protocol version: %s", getattr(_session, "protocol_version", "unknown")) + return _session + +async def _ensure_session() -> ClientSession: + global _session + if _session is not None: + return _session + async with _session_lock: + if _session is None: + _session = await _open_session() + return _session + +async def _close_session(): + """Close the MCP session and transport.""" + global _http_cm, _session_read, _session_write, _transport, _session + try: + if _session is not None: + try: + await _session.__aexit__(None, None, None) + finally: + _session = None + if _http_cm is not None: + try: + await _http_cm.__aexit__(None, None, None) + finally: + _http_cm = None + finally: + _session_read = None + _session_write = None + _transport = None + +# clean shutdown at exit +import atexit +def _shutdown_sync(): + try: + if _bg_loop and _bg_loop.is_running(): + # close the MCP session + asyncio.run_coroutine_threadsafe(_close_session(), _bg_loop).result(timeout=3) + _bg_loop.call_soon_threadsafe(_bg_loop.stop) + if _bg_thread: + _bg_thread.join(timeout=2) + except Exception: + pass +atexit.register(_shutdown_sync) + +async def get_mcp_tools(refresh: bool = False): + """Load and cache the MCP tools.""" + global _tools_cache + if refresh: + _tools_cache = [] + if _tools_cache: + return _tools_cache + async with _session_lock: + if _tools_cache: + return _tools_cache + session = await _ensure_session() + _tools_cache = await load_mcp_tools(session) + names
= [t.name for t in _tools_cache] + log.info("MCP tools discovered: %s", names) + return _tools_cache + +def _normalize(s: str) -> str: + return (s or "").lower().replace(" ", "_") + +async def _find_tool(name: str): + tools = await get_mcp_tools() + target = _normalize(name) + for t in tools: + if _normalize(t.name) == target: + return t + for t in tools: + if target in _normalize(t.name): + return t + return None + +async def _invoke_tool_once(tool_name: str, payload: Any, arg_name: str) -> Any: + """ + Make a single attempt to invoke an MCP tool. + """ + await _ensure_session() + tool = await _find_tool(tool_name) + if not tool: + await get_mcp_tools(refresh=True) + tool = await _find_tool(tool_name) + if not tool: + raise RuntimeError( + f"MCP tool '{tool_name}' not found. " + f"Exposed tools: {[t.name for t in await get_mcp_tools()]}" + ) + + payload_str = payload if isinstance(payload, str) else json.dumps(payload, ensure_ascii=False) + args = {arg_name: payload_str} + + log.info("Calling MCP tool '%s' with arg_name='%s', payload_len=%d chars", + tool.name, arg_name, len(payload_str)) + if log.isEnabledFor(logging.DEBUG): + log.debug("Payload preview: %s", payload_str[:500]) + + return await tool.ainvoke(args) + +async def _invoke_tool(tool_name: str, payload: Any, arg_name: str) -> Any: + """ + Invoke a tool with auto-reconnect and one retry if the stream is closed + or a "cancel scope/asyncgen" error occurs. + """ + from anyio import ClosedResourceError + try: + return await _invoke_tool_once(tool_name, payload, arg_name) + except (ClosedResourceError, RuntimeError, GeneratorExit) as e: + msg = str(e) + if isinstance(e, ClosedResourceError) or "cancel scope" in msg or "asynchronous generator" in msg: + log.warning("MCP stream/context issue. Reconnecting...
(%s)", msg or type(e).__name__) + await _close_session() + await _open_session() + await get_mcp_tools(refresh=True) + return await _invoke_tool_once(tool_name, payload, arg_name) + raise + +# ====================================================================== +# PUBLIC API +# ====================================================================== + +def send_calibration_notifications_mcp(route_data: Dict[str, Any]) -> bool: + async def _run(): + return await _invoke_tool("send_Calibration_Notifications", route_data, ARG_EMAIL) + try: + res = _run_coro_sync(_run()) + log.info("MCP email workflow completed: %s", str(res)[:200]) + return True + except Exception as e: + log.error("MCP email workflow failed: %s", e, exc_info=True) + return False + +def send_slack_notification_mcp(slack_payload: Dict[str, Any]) -> bool: + async def _run(): + return await _invoke_tool("send_Slack_Notification", slack_payload, ARG_SLACK) + try: + res = _run_coro_sync(_run()) + log.info("MCP Slack workflow completed: %s", str(res)[:200]) + return True + except Exception as e: + log.error("MCP Slack workflow failed: %s", e, exc_info=True) + return False + +def add_service_order_mcp(record: Dict[str, Any]) -> bool: + async def _run(): + return await _invoke_tool("addServiceOrder", record, ARG_ENTITY) + try: + res = _run_coro_sync(_run()) + log.info("MCP AddServiceOrder completed: %s", str(res)[:200]) + return True + except Exception as e: + log.error("MCP AddServiceOrder failed: %s", e, exc_info=True) + return False + +# ---------- Classic fallback (e-mail) ---------- +def send_calibration_notifications_classic(route_data: Dict[str, Any]) -> bool: + try: + client = get_uipath_client() + payload = json.dumps(route_data, ensure_ascii=False) + res = client.processes.invoke( + name="Send_Calibration_Notifications", + folder_path=FOLDER_PATH or None, + input_arguments={ARG_EMAIL: payload}, + ) + job_id = getattr(res, "id", None) or str(res) + log.info("Classic invoke OK. 
Job: %s", job_id) + return True + except Exception as e: + log.error("Classic invoke failed: %s", e, exc_info=True) + return False + +# ---------- Facade used by main ---------- +def send_calibration_notifications(route_data: Dict[str, Any]) -> bool: + return send_calibration_notifications_mcp(route_data) if USE_MCP else send_calibration_notifications_classic(route_data) + +def send_slack_notification(payload: Dict[str, Any]) -> bool: + return send_slack_notification_mcp(payload) if USE_MCP else False + +def add_service_order(record: Dict[str, Any]) -> bool: + return add_service_order_mcp(record) if USE_MCP else False diff --git a/samples/calibration-dispatcher-agent/policies/Calibration_Rules_Document.pdf b/samples/calibration-dispatcher-agent/policies/Calibration_Rules_Document.pdf new file mode 100644 index 00000000..ad1b3a86 Binary files /dev/null and b/samples/calibration-dispatcher-agent/policies/Calibration_Rules_Document.pdf differ diff --git a/samples/calibration-dispatcher-agent/policies/README.md b/samples/calibration-dispatcher-agent/policies/README.md new file mode 100644 index 00000000..4b70d508 --- /dev/null +++ b/samples/calibration-dispatcher-agent/policies/README.md @@ -0,0 +1,114 @@ +# Policy Documents + +This directory contains calibration policy documents used by the agent's Context Grounding retrieval system. 
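The agent turns excerpts retrieved from these documents into numeric scheduling limits and falls back to defaults when retrieval fails. A minimal sketch of that extraction step (the regex patterns and `DEFAULT_CONSTRAINTS` values are hypothetical stand-ins, not the agent's actual parsing code or `config.py` defaults):

```python
import re
from typing import Dict

# Hypothetical fallback values; the real agent reads its defaults from config.py.
DEFAULT_CONSTRAINTS: Dict[str, float] = {
    "max_visits": 4,
    "max_distance_km": 200,
    "max_hours": 8,
}

def extract_constraints(policy_text: str) -> Dict[str, float]:
    """Parse numeric limits out of a retrieved policy excerpt.

    Any limit the text does not state keeps its default, mirroring the
    agent's fallback behavior when Context Grounding retrieval fails.
    """
    patterns = {
        "max_visits": r"max(?:imum)?\s+(\d+)\s+visits",
        "max_distance_km": r"(\d+)\s*km",
        "max_hours": r"(\d+)\s+hours",
    }
    constraints = dict(DEFAULT_CONSTRAINTS)
    for key, pattern in patterns.items():
        match = re.search(pattern, policy_text, re.IGNORECASE)
        if match:
            constraints[key] = float(match.group(1))
    return constraints

excerpt = "Routes are limited to max 4 visits, 200km, and 8 hours per day."
print(extract_constraints(excerpt))
# → {'max_visits': 4.0, 'max_distance_km': 200.0, 'max_hours': 8.0}
```

Keeping this step deterministic means a failed or partial retrieval degrades gracefully to known-safe limits instead of blocking the workflow.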
+ +## Files + +### Calibration_Rules_Document.pdf (302 KB) +Defines calibration intervals, SLA requirements, and priority classification rules: +- Device calibration intervals (365 days for both Audiometers and Tympanometers) +- Status classification (OVERDUE, URGENT, SCHEDULED, ACTIVE) +- SLA thresholds by clinic type (24h/48h/72h) +- Priority matrix and escalation procedures +- Technician specialization requirements +- Cost and time estimates per device type + +### Routing_Guidelines_Document.pdf (444 KB) +Field service routing and scheduling optimization rules: +- Daily capacity constraints (max 4 visits, 200km, 8 hours) +- Route optimization principles (nearest neighbor with constraints) +- Geographic clustering rules (city-based prioritization) +- Technician assignment logic (specialization + proximity) +- Multi-device clinic optimization +- Traffic and seasonal considerations +- Google Maps API integration guidelines + +### Service_Procedures_Document.pdf (286 KB) +Detailed calibration execution procedures: +- Pre-service preparation checklists +- Step-by-step calibration procedures for each device type +- Quality assurance standards and acceptance criteria +- Troubleshooting common issues +- Safety protocols (electrical, acoustic, hygiene) +- Post-service requirements and documentation +- Technician training requirements + +## Usage in Agent + +**December 2025 Update**: These documents are now uploaded to **Orchestrator Storage Buckets** and indexed via **Context Grounding Indexes** for RAG (Retrieval-Augmented Generation) pattern. + +### Setup Process + +1. **Upload to Storage Bucket**: + - Navigate to **Orchestrator > Tenant > Storage Buckets** + - Create or select bucket: "calibration-policies" + - Upload all 3 PDF files to the bucket + - Verify files appear in bucket file list + +2. 
**Create Context Grounding Index**: + - Navigate to **Orchestrator > Tenant > Indexes** (AI Trust Layer) + - Click **Create Index** + - Name: "Calibration Procedures" + - Source: **Orchestrator Storage Bucket** + - Select bucket: "calibration-policies" + - File types: **PDF** + - Click **Create** and wait for indexing (5-10 minutes) + +3. **Agent Queries Index**: + - Agent uses ContextGroundingRetriever to query the index + - Retrieves relevant policy sections based on current task + - Extracts constraints (max visits, distance, hours) + - Applies rules to route optimization + +### Example Queries + +The agent makes queries like: +- "What is the calibration interval for audiometers?" +- "What are the SLA requirements for hospitals?" +- "What is the maximum number of visits per route?" +- "What are the routing constraints for OVERDUE devices?" + +Context Grounding returns relevant excerpts which the agent parses to enforce business rules. + +## Content Summary + +### Key Rules Extracted + +| Policy Area | Key Constraints | +|------------|-----------------| +| Calibration Intervals | 365 days (both device types) | +| Status Thresholds | ≤14 days (Audiometer), ≤7 days (Tympanometer) for URGENT | +| Daily Limits | 4 visits, 200km, 8 hours (standard) | +| OVERDUE Override | 5 visits, 300km, 12 hours (emergency) | +| Service Duration | 2.0h (Audiometer), 1.5h (Tympanometer) | +| Specialization | Audiometry/All for Audiometers, Tympanometry/All for Tympanometers | + +### Deterministic vs LLM Processing + +The agent uses a hybrid approach: +- **LLM Processing**: Initial policy retrieval and constraint extraction +- **Deterministic Logic**: Date calculations, priority sorting, route optimization +- **Fallback Values**: If Context Grounding fails, uses hardcoded defaults from config.py + +This ensures reliable operation even if policy retrieval has issues. + +## Customization + +To adapt policies for your use case: + +1. 
**Modify PDFs**: Edit policy documents with your business rules +2. **Re-upload**: Replace files in Context Grounding index +3. **Update Fallbacks**: Adjust default values in `config.py` +4. **Test**: Verify agent extracts correct constraints + +The agent's prompts are designed to be flexible - minor policy changes should work without code modifications. + +## Content Format + +Documents are structured with: +- Clear section headers +- Numbered lists for rules +- Tables for reference values +- Examples for clarity + +This structure optimizes Context Grounding retrieval accuracy. diff --git a/samples/calibration-dispatcher-agent/policies/Routing_Guidelines_Document.pdf b/samples/calibration-dispatcher-agent/policies/Routing_Guidelines_Document.pdf new file mode 100644 index 00000000..1c71f226 Binary files /dev/null and b/samples/calibration-dispatcher-agent/policies/Routing_Guidelines_Document.pdf differ diff --git a/samples/calibration-dispatcher-agent/policies/Service_Procedures_Document.pdf b/samples/calibration-dispatcher-agent/policies/Service_Procedures_Document.pdf new file mode 100644 index 00000000..e59d40a8 Binary files /dev/null and b/samples/calibration-dispatcher-agent/policies/Service_Procedures_Document.pdf differ diff --git a/samples/calibration-dispatcher-agent/requirements.txt b/samples/calibration-dispatcher-agent/requirements.txt new file mode 100644 index 00000000..050b3c5b --- /dev/null +++ b/samples/calibration-dispatcher-agent/requirements.txt @@ -0,0 +1,24 @@ +# UiPath SDK and LangChain Integration +uipath>=2.0.0 +uipath-langchain>=0.1.0 +langchain-core>=0.3.0 +langgraph>=0.2.0 + +# LLM Providers (choose based on your LLM Gateway configuration) +langchain-openai>=0.2.0 +langchain-anthropic>=0.2.0 + +# MCP Integration +mcp>=0.1.0 +langchain-mcp-adapters>=0.1.0 + +# Google Maps API +googlemaps>=4.10.0 + +# Utilities +python-dotenv>=1.0.0 +pydantic>=2.0.0 + +# Optional: For local development and testing +pytest>=7.4.0 +pytest-asyncio>=0.21.0