WareSim — Warehouse Simulation & AI Optimisation Platform

A modular Python engine for simulating large‑scale human‑robot warehouses, analysing operational Key Performance Indicators (KPIs), and searching for optimal configurations.


Installation

git clone https://gitlab.doc.ic.ac.uk/g24mai03/waresim
cd waresim
python3.12 -m venv venv
source venv/bin/activate      # Windows: venv\Scripts\activate
pip install -r requirements.txt

Requires Python 3.12.


Overview

WareSim replicates inbound (stock replenishment), storage, and outbound activities inside a warehouse. It models humans, robots, inventory, and orders; records key operational metrics; and offers visualisations. Built with Mesa, the simulation captures realistic warehouse behaviours such as human-robot collaboration, narrow aisle accessibility, stochastic elements like battery variation, and collision tracking.
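
For readers new to Mesa, the pattern WareSim builds on is sketched below. This is illustrative only — the project's actual classes live in backend/ — and follows the classic Mesa 2.x API (a Model stepping scheduled Agents on a grid), which may differ in newer Mesa releases.

from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation

class Robot(Agent):
    def step(self):
        # e.g. claim a task, path-find, move one cell per timestep
        pass

class Warehouse(Model):
    def __init__(self, width=32, depth=27, n_robots=4):
        super().__init__()
        self.grid = MultiGrid(width, depth, torus=False)
        self.schedule = RandomActivation(self)
        for i in range(n_robots):
            robot = Robot(i, self)
            self.schedule.add(robot)
            self.grid.place_agent(robot, (0, i))

    def step(self):
        self.schedule.step()   # advance every agent by one timestep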

In addition to simulation, WareSim includes an AI-based optimisation module that uses Bayesian Optimisation, powered by Optuna, to identify high-performing warehouse configurations. It efficiently explores operational parameters and supports both single- and multi-objective optimisation, helping users align simulation outcomes with specific business priorities.


Key Features

Warehouse Simulator Features

  • Agent‑based discrete‑event simulation of humans and robots.
  • JSON‑driven customisable warehouse layouts (the example layout is a 32 × 27 grid with 240 Stock Keeping Units (SKUs), i.e. item types).
  • Four picking strategies: FIFO, Furthest‑First, Nearest‑Item, Optimal‑Path.
  • Visualisation: Live Solara dashboard and Matplotlib plot generation.
  • Detailed outputs: KPI log, machine‑readable metrics JSON, per‑timestep CSV, event tracker.

AI Module Features

  • Intelligent discovery of near-optimal warehouse parameters (robot counts, worker numbers, picking strategies)
  • Customisable efficiency score system that combines multiple metrics based on business priorities
  • Support for both single-objective (maximising one metric) and multi-objective (two metrics) optimisation (balancing competing goals)
  • Comprehensive visualisations to understand parameter relationships and performance impacts
  • Interactive dashboard for exploring optimisation results with multiple analysis views

Architecture at a Glance

Layer      Path        Description
backend    backend/    Domain objects, task logic, KPI collection
ai         ai/         Add-on module for optimising discrete, non-spatial warehouse parameters
frontend   frontend/   Static plots and Solara dashboard

Directory Structure

WARESIM/
├── ai/                                         # AI optimisation add-on (detailed in the AI Module section)
├── backend/                                    # Core simulation engine and logic
│   ├── human_robot_logic/                      # Robot and human behaviour modelling
│   │   ├── human_robot_agents.py               # Definition of agents and agent-specific logic
│   │   ├── human_robot_task_management.py      # Human-robot interaction logic
│   │   ├── path_finding.py                     # Robot movement logic
│   │   └── picking_strategies.py       
│   ├── metrics/                                # Performance data collection and analysis
│   │   ├── metrics_analysis.py        
│   │   └── metrics.py                  
│   ├── order_management/                       # Order processing workflows
│   │   ├── inbound_order_management.py         # Stock replenishment logic
│   │   └── outbound_order_management.py        # New order generation logic
│   ├── tests/                                  # Unit and integration tests
│   ├── warehouse_model/                        # Core simulation framework
│   │   ├── base_simulation.py                  # Mesa model implementation
│   │   └── utils.py                    
│   └── warehouse_set_up/                       # Physical warehouse components used in Solara Visualisation
│       ├── inbound_zone_agents.py       
│       ├── inventory_agents.py          
│       ├── maintenance_zone_agents.py          # For robot battery charging
│       └── outbound_zone_agents.py     
├── frontend/                                   # Visualisation and UI components
│   ├── live_sim_custom_visual.py               # Agent visualisation styling
│   ├── plots_from_main.py                      # Static plot generation for CLI mode
│   ├── plots.py                                # Interactive plot components for dashboard
│   └── simulation_dashboard.py                 # Solara dashboard components
├── input/                                      # JSON configs & parameter ranges
├── output/                                     # Artefacts (one sub‑folder per run)
├── main.py                                     # CLI entry point for headless execution
├── visualisation.py                            # Solara dashboard entry point
└── requirements.txt                            # Project dependencies

Note: The ai subdirectory should be considered a "standalone" module that uses the simulator. By this, we mean:

  • The ai subdirectory has its inputs in input/.
  • But its outputs (ai/results/ and ai/visualisations/), its entry point (ai/main.py), and its "front-end" (ai/optimisation_dashboard.py) live inside the ai/ subdirectory.
  • main.py, output/, and visualisation.py belong to the simulator itself.
  • The reasoning behind this approach is that the ai module should be viewed as an "add-on" to the simulator; the directory is structured to serve this vision.

Main Simulation Configuration (config.json)

This file defines all parameters for a specific warehouse simulation run. The main sections include:

  • Run Settings: Top-level settings like timesteps (simulation duration) or num_multiple_runs (only used by the dashboard when you run multiple simulations).
  • warehouse: Defines the physical grid dimensions (depth, width) and the coordinates of key operational zones (inbound_zone, drop_off_zone, maintenance_zones).
  • robots: Configures the robotic fleet:
    • Number of inbound and outbound robots.
    • Carrying capacity for each type.
    • battery_capacity and charging threshold (go_to_charge).
    • The picking_strategy used by outbound robots (see comments in the file for strategy codes, e.g., 1=FIFO, 3=Nearest Item).
  • humans: Configures the human workforce:
    • Number of workers per shift (num_shift_A, num_shift_B).
    • shift_duration and rest_zone location (where workers rest between shifts).
    • Parameters for the stochastic competency model (allow_competency - if turned off, humans take 1 timestep per task).
  • orders: Controls workload generation:
    • Toggles for dynamic generate_new_orders and generate_inbound_items.
    • Parameters for stochastic outbound order generation (distribution type, mean/std dev/max for size and arrival rate, generation frequency generate_every).
    • stock_replenishment_threshold to trigger inbound tasks.
  • item_types: A list defining each SKU (product) by name and its relative demand (order_rate).
  • aisles: Defines the detailed static layout:
    • Geometry (start, end coordinates) and height_levels (determines number of shelves per bay) for each aisle.
    • Access rules: robot_access (boolean; false indicates a narrow aisle) and human_access (the side, Left or Right, from which humans must approach the aisle).
    • Initial stock allocation (bay_allocations) specifies for each bay the item type it contains and the quantity on each level at the start of the simulation (each SKU gets permanently assigned to a unique shelf).

Note: Aisles, bays and shelves define item storage. An aisle is a collection of bays placed in a line. A bay occupies a single cell in the simulation. Each bay has a certain number of shelves on which items are stored.

Note: The human competency model simulates variable worker performance. When enabled (allow_competency: true), each worker is assigned an individual competency level sampled from a normal distribution (competency_distribution_mean, competency_distribution_std). During task execution, the time taken is then sampled from another normal distribution using that worker's competency level and competency_std, clamped between 1 and max_time timesteps. This creates realistic variation in worker productivity.
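
A minimal sketch of that two-stage sampling as we read it (the competency level is assumed to act as the mean of the task-time distribution; parameter names mirror the config keys above, values are illustrative):

import random

# Stage 1: each worker's competency level, sampled once at creation
competency = random.gauss(5.0, 1.5)   # competency_distribution_mean, competency_distribution_std

# Stage 2: per-task time drawn around that competency level, clamped to [1, max_time]
def sample_task_time(competency, competency_std=1.0, max_time=10):
    t = random.gauss(competency, competency_std)
    return max(1, min(max_time, round(t)))

print(sample_task_time(competency))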

Note: The example input/config.json contains detailed comments ("_comment...") explaining each parameter. Refer to this file for specifics when creating custom configurations.
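
For orientation, here is a heavily trimmed, illustrative excerpt of such a configuration. Exact nesting and values are assumptions based on the sections described above — input/config.json is the authoritative reference.

{
    "timesteps": 500,
    "warehouse": { "depth": 32, "width": 27 },
    "robots": {
        "num_inbound_robots": 4,
        "num_outbound_robots": 6,
        "battery_capacity": 100,
        "go_to_charge": 20,
        "picking_strategy": 3
    },
    "humans": {
        "num_shift_A": 5,
        "num_shift_B": 5,
        "shift_duration": 250,
        "allow_competency": true
    },
    "orders": {
        "generate_new_orders": true,
        "stock_replenishment_threshold": 5
    }
}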

Configuration Tips

Key parameters to experiment with:

  • robots.picking_strategy: Test different strategies (1=FIFO, 2=Furthest-First, 3=Nearest-Item, 4=Optimal-Path)
  • robots.num_outbound_robots: Adjust the number of robots used for outbound operations (taking items from the shelves to the drop-off zone for order fulfillment)
  • robots.num_inbound_robots: Adjust the number of robots used for inbound operations (taking items from the inbound zone to the shelves for stock replenishment)
  • humans.num_shift_A and humans.num_shift_B: Modify staffing levels
  • orders.order_mean_per_step: Change order volume to test system under load

Warehouse Simulator Quickstart

This simulation can be run in two ways:

  1. Developer Mode (main.py) – for CLI-based execution, debugging, and backend testing.
  2. Dashboard Mode (visualisation.py) – for live simulation, interactive controls, and performance evaluation plots using a Solara-powered web interface.

All configuration is managed via the .env file.

Warehouse Configuration via .env file

The .env file is used to centralise all important settings for both main.py and visualisation.py.

This includes:

  • Simulation parameters (e.g. config file path, number of steps)
  • Output filenames and directories
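
An illustrative .env is shown below. The variable names here are assumptions for the sake of example — consult the repository's .env for the actual keys.

# Hypothetical keys, for illustration only
CONFIG_FILE=input/config.json
NUM_STEPS=500
OUTPUT_DIR=output/run_from_main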

Running a Basic Simulation (Developer Mode)

Run a single simulation using the CLI tool (primarily for development and debugging):

python main.py

The simulation will:

  • Initialise the warehouse based on the configuration from the .env file
  • Run the specified number of steps
  • Generate metrics and performance logs
  • Create visualisation plots automatically

All output files are saved to the output/run_from_main/ directory, including:

  • key_metrics_log.txt: Summary statistics
  • warehouse_metrics.json: Structured metrics data
  • warehouse_data_output.csv: Timestep-by-timestep data
  • warehouse_simulation_plots_2x2.png: Automatically generated visualisations

Interactive Dashboard Visualisation (Dashboard Mode)

For a more interactive experience with support for multiple runs, launch the Solara-powered dashboard:

# Start the web interface for real-time visualisation
python -m solara run visualisation.py

The dashboard allows you to:

  • Run single or multiple simulations with configurable parameters
  • View the warehouse in real-time with colour-coded agents
  • Toggle different metrics visualisations
  • Compare results across multiple simulation runs

Note: The Solara dashboard is the recommended way to run multiple simulations and compare results. The CLI tool (main.py) only supports single runs and is primarily used for development purposes.

What To Expect

After running a simulation, the tool creates a structured output directory that varies based on your run type:

output/
├── run_from_main/                              # Results from CLI-based execution (Developer Mode)
│   ├── key_metrics_log.txt                     # Summary performance statistics
│   ├── warehouse_data_output.csv               # Timestep-by-timestep data
│   ├── warehouse_metrics.json                  # Structured performance metrics for programmatic use
│   ├── warehouse_simulation_plots_2x2.png      # Auto-generated performance visualisations
│   └── warehouse_simulation.log                # Warehouse simulation logs
│
├── run_from_visualisation/                     # Results from front-end interface (Dashboard Mode)
│   ├── single_run/                             # Results from dashboard single simulation
│   │   ├── key_metrics_log.txt                 
│   │   ├── warehouse_data_output.csv
│   │   ├── warehouse_metrics.json
│   │   └── warehouse_simulation.log
│   │
│   └── multiple_runs/                          # Results from dashboard multiple runs (e.g. 3 runs)
│       ├── key_metrics_log_run1.txt
│       ├── key_metrics_log_run2.txt
│       ├── key_metrics_log_run3.txt
│       ├── warehouse_data_output_run1.csv
│       ├── warehouse_data_output_run2.csv
│       ├── warehouse_data_output_run3.csv
│       ├── warehouse_metrics_run1.json
│       ├── warehouse_metrics_run2.json
│       └── warehouse_metrics_run3.json

Each output directory contains multiple file types that provide different views of the simulation results:

Output File                  Description
key_metrics_log.txt          Human-readable summary with KPIs like order count, resource utilisation, and robot behaviour
warehouse_metrics.json       Machine-readable metrics in JSON format for programmatic analysis or visualisation
warehouse_data_output.csv    Detailed per-timestep data for time-series analysis
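
For programmatic analysis, warehouse_metrics.json can be loaded directly. A minimal sketch — the metric key names below are assumptions based on the log samples in this README; inspect the file for the exact keys:

import json

with open("output/run_from_main/warehouse_metrics.json") as f:
    metrics = json.load(f)

# Key names assumed to mirror key_metrics_log.txt; check the JSON for exact keys
print(metrics.get("Total Orders Processed"))
print(metrics.get("Total Collisions Detected"))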

Sample output from key_metrics_log.txt:

WAREHOUSE METRICS:
Total Timesteps in Simulation:  500
Total Orders Processed:         16

Resource Utilisation:
Average robot utilisation:      1.0
Average human utilisation:      0.27

Robot Behaviour Analysis:
Total collisions detected:      2
Total number of stuck instances: 25

Interactive Visualisation Features

The Solara dashboard (visualisation.py) provides several powerful visualisation options:

Real-time Warehouse View

  • Colour-coded agents with dynamic state indicators
  • Outbound robots (blue/red): Blue when available, red when fulfilling tasks
  • Inbound robots (purple/yellow): Purple when available, yellow when fulfilling tasks
  • Human workers (green/orange/grey): Green when working, orange when idle, grey during rest shifts
  • Shelving areas (black) and warehouse zones (white/grey)

Metric Visualisations

Toggle buttons let you view:

  • Robot Movement Heatmap: Heat map of the most-visited warehouse cells
  • Orders Log: Tracking of inbound and outbound orders over time
  • Processed Orders: Number of orders and items processed per timestep
  • Worker Utilization: Robot and human capacity utilisation rates
  • Stock Levels: Total warehouse inventory trends
  • Stuck Robot Analysis: Tracking of robot navigation issues
  • Out-of-Stock Metrics: Product availability monitoring

For multiple simulation runs, the dashboard automatically aggregates results with statistical bands showing mean ±1 standard deviation.
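
The same aggregation can be reproduced offline from the per-run CSVs. A sketch with pandas — the column name orders_processed is an assumption; inspect the CSV header for the actual metric columns:

import glob
import pandas as pd

paths = sorted(glob.glob("output/run_from_visualisation/multiple_runs/warehouse_data_output_run*.csv"))
runs = pd.concat((pd.read_csv(p) for p in paths), keys=range(len(paths)))

# Mean ±1 standard deviation across runs, per timestep (row index within each run)
mean = runs.groupby(level=1)["orders_processed"].mean()
std = runs.groupby(level=1)["orders_processed"].std()
upper, lower = mean + std, mean - std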


Warehouse Simulator Outputs

File                         Content                         Format
key_metrics_log.txt          Human‑readable headline KPIs    Text
warehouse_metrics.json       Same KPIs in structured form    JSON
warehouse_data_output.csv    Per‑timestep state snapshot     CSV
warehouse_simulation.log     Warehouse simulation logs       LOG

Sample section from key_metrics_log.txt:

Total Orders Processed: 8
Average robot utilisation: 1.0
Total collisions detected: 1

Warehouse Simulator Testing

Layer                     Command (run from project root)                                Scope
Quick functional check    python -m backend.tests.run_tests                              Basic grid maths, picking, and pathfinding sanity cases
Full unit + integration   python -m unittest discover -s backend/tests                   pathfinding_tests.py, test_picking_strategies.py, basic_test_cases.py, human_robot_interaction_cases.py
Stress load               RUN_STRESS=1 python -m backend.tests.stress_testing_robots    200‑step high‑traffic scenario; writes full metrics to output/run_from_main/

AI Module

The AI module identifies near-optimal warehouse configurations through Bayesian Optimisation with Optuna, exploring non-spatial discrete parameters (such as worker numbers) through repeated simulation runs (to account for stochasticity). The system supports both single-objective and multi-objective (two-objective) optimisation, and can target either raw simulation metrics (like robot utilisation) or derived efficiency-score metrics (Average Efficiency and Efficiency Standard Deviation). The efficiency score is a customisable metric that combines multiple simulation metrics into a single value using user-defined weights, allowing optimisation to align precisely with business-specific priorities.
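
To make the idea concrete, here is a minimal sketch of how such a weighted, normalised score can be computed. This is not the pipeline's actual implementation (which lives in ai/optimization_pipeline/efficiency_score_metrics.py); the input formats follow Input 2 and the Historical Min/Max file shown later in this README.

def efficiency_score(metrics, weights_and_directions, min_max):
    # metrics:                 {"Total Orders Processed": 316, ...}
    # weights_and_directions:  {"Total Orders Processed": [0.4, "max"], ...}
    # min_max:                 {"Total Orders Processed": {"min": 106, "max": 486}, ...}
    score = 0.0
    for name, (weight, direction) in weights_and_directions.items():
        if name not in metrics:
            continue  # metrics absent from the simulator's output are ignored
        lo, hi = min_max[name]["min"], min_max[name]["max"]
        norm = (metrics[name] - lo) / (hi - lo) if hi > lo else 0.0
        norm = min(max(norm, 0.0), 1.0)   # clamp: sampled ranges may be exceeded
        if direction == "min":
            norm = 1.0 - norm             # lower raw value should score higher
        score += weight * norm
    return score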

What it does

The ai/ module allows us to define:

  • The parameters we would like to optimise in our configuration, along with the ranges we would like to explore.
  • The metrics output by the warehouse simulator that interest us, how important they are relative to one another (via the efficiency score weights), and whether we want to maximise or minimise each.

Then, our system uses Bayesian Optimisation via Optuna to give us insights on which parameter sets best serve our operational goals.

Directory structure

ai/
├── dirs_and_filenames_constants.py              # Centralised constants for file paths and naming conventions
├── experiment_logger.py                         # Logging utilities for timing and recording experiments
├── main.py                                      # Main orchestration script for running optimisation experiments of interest
├── normalization_range_sampler.py               # Samples parameter space to establish min/max metric values for efficiency score normalisation
├── optimisation_dashboard.py                    # Interactive web dashboard for exploring optimisation results
│
├── optimization_pipeline/                       # Core modules for executing the Bayesian optimisation process
│   ├── best_solution_selector.py                # Selects optimal solutions from both single and multi-objective optimisations
│   ├── efficiency_score_metrics.py              # Calculates normalised efficiency scores by combining weighted normalised metrics
│   ├── run_single_optimization_experiment.py    # Orchestrates a single optimisation experiment with specified parameters to optimise and objectives
│   └── simulation_runner.py                     # Executes warehouse simulations with specified parameters and collects metrics
│
├── plot_builders/                               
│   ├── plot_parameter_heatmaps.py               # Creates 2D heatmaps to visualise how different parameter combinations affect warehouse efficiency metrics
│   ├── plot_parameter_sensitivity_graphs.py     # Shows how the best solution for efficiency average and standard deviation changes with each possible value of individual parameters
│   ├── plot_pareto_front_efficiency.py          # Generates a Pareto front plot visualising the trade-off between Average Efficiency and Efficiency Standard Deviation
│   ├── plot_utils.py                            # Utility functions for loading and processing data for plots
│   └── run_all_visualizations.py                # Executes all visualisation scripts sequentially
│
├── results/                                     # JSON outputs from optimisation experiments
│   ├── best_params/                             # Best parameter configurations identified
│   └── trials/                                  # Data for all trials observed during the experiment
│
├── tests/                                       # Test suites for module validation
│
├── utils/                                       # Common utility functions and helpers
│
└── visualisations/                              # Generated visualisation outputs

Note: Each file begins with a detailed introductory docstring explaining what it does and documenting its inputs, outputs, CLI arguments, and dependencies.

AI module's inputs

The AI module requires three main input JSON files:

Input 1: Parameter Ranges JSON

This file defines the parameters to be optimised and their exploration ranges (inclusive):

{
    "robots.num_inbound_robots": [1, 10],
    "robots.num_outbound_robots": [1, 10],
    "humans.num_shift_A": [1, 10],
    "humans.num_shift_B": [1, 10],
    "robots.picking_strategy": [2, 4]
}

Requirements and Usage:

  • Parameters must be discrete and non-spatial (aisles, bays, width, depth are not recommended)
  • Ranges must be specified as integers [min, max]
  • To optimise for an additional parameter: simply add it with its range to this dictionary
  • To remove a parameter from the optimisation pipeline: remove it from this dictionary

Input 2: Metrics Weights and Directions JSON

This file defines which metrics to optimise, their relative importance, and optimisation direction:

{
    "Total Orders Processed": [0.4, "max"],
    "Average Robot Utilization": [0.2, "max"],
    "Average Human Utilization": [0.2, "max"],
    "Total Collisions Detected": [0.2, "min"]
}

Requirements and Usage:

  • The sum of all weights must be approximately 1.0
  • Each metric needs a weight (float between 0-1) and direction ("max" or "min")
  • To consider an additional metric: add it to this dictionary with weight and direction
  • To remove a metric from consideration: remove it from this dictionary
  • Any metrics listed in this file that don't appear in the warehouse simulator's output will be ignored (a quick weight-sum sanity check is sketched below)
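
A minimal sketch of that sanity check (the file path follows the example usage elsewhere in this README):

import json

# Validate a metrics weights/directions file before launching a long experiment
with open("input/metrics_weights_and_directions.json") as f:
    spec = json.load(f)

total = sum(weight for weight, _direction in spec.values())
assert abs(total - 1.0) < 1e-6, f"weights sum to {total}, expected ~1.0"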

Input 3: Warehouse Configuration JSON

This file provides the baseline warehouse configuration that serves as input to the warehouse simulator. The AI module pipeline overrides specific parameters (from the parameter ranges JSON) within this baseline configuration to evaluate different parameter combinations. The new values will be within the ranges specified in the parameter ranges JSON.
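
A minimal sketch of what such an override can look like, using the dotted parameter names from Input 1. This is a hypothetical helper for illustration; the pipeline's actual override logic lives in ai/optimization_pipeline/.

import json

def apply_overrides(config, overrides):
    # Write dotted keys such as "robots.num_inbound_robots" into the nested config dict
    for dotted, value in overrides.items():
        node = config
        *path, leaf = dotted.split(".")
        for part in path:
            node = node[part]
        node[leaf] = value
    return config

with open("input/config.json") as f:
    baseline = json.load(f)

candidate = apply_overrides(baseline, {"robots.num_outbound_robots": 7})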


AI Module Execution Guide

Our optimisation pipeline has several runnable scripts that each provide an essential element of warehouse optimisation. They can all be run through main.py or independently:

Runnable Files Overview

  1. Normalization Range Sampler (normalization_range_sampler.py):

    • Pre-optimisation step that samples parameter space to establish metric ranges for our metrics of interest
    • Determines min/max values essential for efficiency score normalisation
    • Important: Required prerequisite before running any optimisation with Average Efficiency and/or Efficiency Standard Deviation as metric objective(s)
  2. Single Optimization Experiment Runner (run_single_optimization_experiment.py):

    • Orchestrates single or multi-objective optimization experiments to find optimal warehouse parameters
    • Supports exactly one (single-objective) or two metrics (multi-objective)
    • Objectives can be any metric in the Metrics Weights and Directions JSON (input 2), as well as Average Efficiency or Efficiency Standard Deviation
    • Prerequisite: Requires Normalization Range Sampler's output (HISTORICAL_MIN_MAX_JSON) when using Average Efficiency and/or Efficiency Standard Deviation as (an) objective metric(s)
  3. Visualisation Tools (all analyse results from previously run multi-objective optimisation trials with Average Efficiency and Efficiency Standard Deviation as objectives):

    • plot_parameter_sensitivity_graphs.py: Shows the "best" solution (that balances Average Efficiency and Efficiency Standard Deviation) for each possible value of each parameter
    • plot_pareto_front_efficiency.py: Visualises the trade-off between Average Efficiency and Efficiency Standard Deviation by showing all trials (both Pareto and non-Pareto), the Pareto Front, and the best solution
    • plot_parameter_heatmaps.py: Creates 2D heatmaps to visualise how different parameter combinations affect Average Efficiency and Efficiency Standard Deviation
    • run_all_visualizations.py: Executes all three visualisation scripts
  4. Main Orchestration Script (main.py):

    • Integrates all components into a complete workflow with three key steps:
      • Step 1: Metric normalisation sampling (pre-optimisation process) (using: normalization_range_sampler.py)
      • Step 2: Multi-objective optimisation with Average Efficiency and Efficiency Standard Deviation as objectives (using run_single_optimization_experiment.py)
      • Step 3: Single-objective optimisation with Average Efficiency as objective (using run_single_optimization_experiment.py)
    • Can run any combination of steps above using the CLI
    • Automatically generates visualisations when running Step 2 or all steps
    • Note: If you need to optimise using metrics other than efficiency-related ones as objectives, use run_single_optimization_experiment.py directly

Detailed Runnable Files Guide

Here are more details on the different executable files. Note that all CLI arguments (denoted [CLI]) are optional; if omitted, they fall back to default values, which are easily modifiable (consult each file's intro docstring for more information).

1. Normalization Range Sampler (normalization_range_sampler.py)

This prerequisite step samples the parameter space to determine min/max metric ranges needed for proper normalisation during efficiency score calculations.

python -m ai.normalization_range_sampler [CLI]

CLI:

  • --parameter_ranges_json: JSON defining parameters and their value ranges to explore (input 1)
  • --baseline_config: Baseline warehouse configuration to be optimised (input 3)
  • --metrics_weights_and_directions_json: Metrics to sample with their weights and directions (input 2)
  • --steps: Number of timesteps per simulation (higher = slower)
  • --number_of_samples: Number of parameter combinations to sample
  • --seed: Random seed for reproducible results

Output:

  • HISTORICAL_MIN_MAX_JSON: Contains min/max values for each metric used in normalisation

Note: Must be run before any optimisation experiments that use Average Efficiency or Efficiency Standard Deviation as objectives.

2. Single Optimization Experiment Runner (run_single_optimization_experiment.py)

This script orchestrates complete optimisation experiments, finding optimal parameter combinations based on specified objectives.

python -m ai.optimization_pipeline.run_single_optimization_experiment [CLI]

CLI:

  • --parameter_ranges_json: JSON defining parameters and their value ranges to explore (input 1)
  • --metrics_weights_and_directions_json: Metrics with their weights and optimisation directions (input 2)
  • --baseline_config: Baseline warehouse configuration to be optimised (input 3)
  • --n_trials: Number of optimisation trials to run (higher = more thorough exploration but longer runtime)
  • --steps: Number of timesteps per simulation (higher = slower)
  • --number_of_runs: Number of simulation reruns for each parameter set (to account for stochasticity)
  • --objectives: Metrics to optimise (one or two). Quote each objective (e.g. "Average Efficiency"); when passing two objectives, separate them with a space
  • --cores: Number of CPU cores for parallel processing. Note: defaults to 1; the simulator is not currently multithread-friendly, so adding more cores will not make the experiment run faster
  • --seed: Random seed for reproducible results

Optimisation Types:

  • Single-objective optimisation:

    • Maximises or minimises a single objective metric
  • Multi-objective optimisation:

    • Limited to EXACTLY TWO objectives
    • Outputs Pareto-optimal solutions

The default setup optimises "Average Efficiency" (maximise) and "Efficiency Standard Deviation" (minimise).
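
For illustration, here is a minimal sketch of how such a multi-objective study might be wired up with Optuna. This is hypothetical — run_single_optimization_experiment.py does the real orchestration, and run_simulation_and_score below is a stand-in for the simulator plus efficiency scoring.

import random
import optuna

def run_simulation_and_score(params, n_runs=3):
    # Stand-in for re-running the warehouse simulator and scoring each run
    scores = [random.random() for _ in range(n_runs)]
    avg = sum(scores) / n_runs
    std = (sum((s - avg) ** 2 for s in scores) / n_runs) ** 0.5
    return avg, std

def objective(trial):
    # Sample discrete parameters within the ranges from input 1
    params = {
        "robots.num_inbound_robots": trial.suggest_int("robots.num_inbound_robots", 1, 10),
        "robots.num_outbound_robots": trial.suggest_int("robots.num_outbound_robots", 1, 10),
    }
    return run_simulation_and_score(params)  # (Average Efficiency, Efficiency Std Dev)

study = optuna.create_study(directions=["maximize", "minimize"])
study.optimize(objective, n_trials=20)
print(study.best_trials)  # the Pareto-optimal trials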

Output Files:

  • All trials data (TRIALS_DIR/[optimization_type]/all_trials__[metric_name(s)]__[type].json):

    • Contains data for all parameter combinations tried
    • Includes trial ID, parameters, metric values
    • For multi-objective, flags Pareto-optimal and "best" solutions
  • Best parameters (BEST_PARAMS_DIR/[optimization_type]/best_params__[metric_name(s)]__[type].json):

    • Contains optimal parameter set found
    • Includes trial ID, parameters, and metric values

Example Usage:

# For multi-objective optimisation
python -m ai.optimization_pipeline.run_single_optimization_experiment \
  --parameter_ranges_json input/param_ranges.json \
  --metrics_weights_and_directions_json input/metrics_weights_and_directions.json \
  --baseline_config input/config.json \
  --n_trials 20 \
  --steps 200 \
  --number_of_runs 3 \
  --objectives "Average Efficiency" "Efficiency Standard Deviation" \
  --cores 1 \
  --seed 13

# For single-objective optimisation
python -m ai.optimization_pipeline.run_single_optimization_experiment \
  --parameter_ranges_json input/param_ranges.json \
  --metrics_weights_and_directions_json input/metrics_weights_and_directions.json \
  --baseline_config input/config.json \
  --n_trials 20 \
  --steps 200 \
  --number_of_runs 3 \
  --objectives "Average Efficiency" \
  --cores 1 \
  --seed 13

3. Visualisation Tools

IMPORTANT NOTE: All visualisation tools require that you have previously run a multi-objective optimisation with "Average Efficiency" and "Efficiency Standard Deviation" as objectives. The all_trials file must be present in TRIALS_DIR/multi (which is where our optimisation pipeline places it).

If this hasn't been done yet, you can run:

  • python -m ai.main --run_step_2, or
  • python -m ai.main --run_step_1 --run_step_2 if the pre-optimisation normalisation step hasn't been done yet, or
  • python -m ai.optimization_pipeline.run_single_optimization_experiment with --objectives "Average Efficiency" "Efficiency Standard Deviation" (make sure the pre-optimisation step was done first)

This will generate the required all_trials file at the correct location.

3.1 Parameter Sensitivity Graphs (plot_parameter_sensitivity_graphs.py)

python -m ai.plot_builders.plot_parameter_sensitivity_graphs [CLI]

CLI:

  • --parameter_ranges_json: Path to JSON file with parameter ranges to explore (input 1)

What the Sensitivity Graphs Show:

  • Each graph shows one parameter, with parameter values on the x-axis and Average Efficiency on the y-axis
  • For each possible parameter value, it plots the single best solution from all trials with that parameter value
  • "Best" solution means the trial with minimum normalised Euclidean distance from the ideal point (explained in the code base)
  • Error bars represent the Efficiency Standard Deviation value of that best solution
  • Green circles indicate Pareto-optimal solutions, blue squares indicate non-Pareto solutions
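
A sketch of our reading of that distance (confirm against the code base; the bounds would come from the min/max over all trials):

def distance_to_ideal(avg_eff, eff_std, eff_bounds, std_bounds):
    # Normalise both objectives to [0, 1], then measure the Euclidean distance
    # to the ideal point (efficiency = 1, standard deviation = 0)
    e = (avg_eff - eff_bounds[0]) / (eff_bounds[1] - eff_bounds[0])
    s = (eff_std - std_bounds[0]) / (std_bounds[1] - std_bounds[0])
    return ((1.0 - e) ** 2 + s ** 2) ** 0.5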

Output Files:

  • One PNG graph per parameter, saved to the SENSITIVITY_DIR directory
  • Named: sensitivity_[param_name].png

3.2 Pareto Front Efficiency Plot (plot_pareto_front_efficiency.py)

python -m ai.plot_builders.plot_pareto_front_efficiency

What the Pareto Front Plot Shows:

  • Each point represents a simulation trial with specific parameter values
  • X-axis shows Average Efficiency (higher is better)
  • Y-axis shows Efficiency Standard Deviation (lower is better)
  • Red points represent the Pareto-optimal solutions
  • Blue points represent non-Pareto solutions
  • The green-bordered point shows the "best" solution
  • Numbers shown on Pareto points represent the trial ID (matching all_trials and best_params json files)

Output File:

  • A PNG visualisation of the Pareto front, saved to PARETO_FRONTS_DIR

3.3 Parameter Heatmaps (plot_parameter_heatmaps.py)

python -m ai.plot_builders.plot_parameter_heatmaps [CLI]

CLI:

  • --parameter_ranges_json: Path to JSON file with parameter ranges to explore (input 1)

What the Heatmaps Show:

  • Each heatmap displays the relationship between two warehouse parameters
  • Colours represent either Average Efficiency (higher is better) or Efficiency Standard Deviation (lower is better)
  • Black cells indicate parameter combinations that weren't explored during optimisation
  • The percentage of explored combinations is displayed at the bottom of each heatmap

Output Files:

  • Two PNG heatmaps per parameter pair, saved to the HEATMAPS_DIR directory:
    • heatmap_[param1]_vs_[param2]_avg.png: Average efficiency
    • heatmap_[param1]_vs_[param2]_std.png: Efficiency standard deviation

3.4 Run All Visualizations (run_all_visualizations.py)

This script executes all three visualisation scripts for warehouse optimisation analysis in sequence.

python -m ai.plot_builders.run_all_visualizations [CLI]

CLI:

  • --parameter_ranges_json: Path to JSON file with parameter ranges to explore (input 1)

What This Script Does:

  1. Runs Parameter Sensitivity Graphs
  2. Runs Pareto Front Efficiency Plot
  3. Runs Parameter Heatmaps

4. Main Orchestration Script (main.py)

This script serves as the main entry point for the warehouse optimisation AI module, orchestrating and timing the entire optimisation process.

python -m ai.main [CLI]

CLI:

  • --parameter_ranges_json: JSON defining parameters and their value ranges to explore (input 1)
  • --baseline_config: Baseline warehouse configuration to be optimised (input 3)
  • --metrics_weights_and_directions_json: Metrics with their weights and optimisation directions (input 2)
  • --steps: Number of timesteps per simulation (higher = slower)
  • --seed: Random seed for reproducible results
  • --number_of_samples: Number of parameter combinations to sample in Step 1 (higher = more accurate but slower)
  • --n_trials: Number of optimisation trials for Steps 2-3 (higher = more thorough exploration)
  • --number_of_runs: Number of simulation reruns per parameter set (to account for stochasticity)
  • --cores: Number of CPU cores for parallel processing (reminder: not useful in our case)
  • --run_step_1: Run Step 1: Sample metric ranges for normalisation
  • --run_step_2: Run Step 2: Multi-objective optimisation with Average Efficiency and Efficiency Standard Deviation
  • --run_step_3: Run Step 3: Single-objective optimisation with Average Efficiency
  • --run_all: Run all steps (default if no steps are specified)

Workflow Dependencies:

  • Step 1 is a prerequisite for Steps 2 and 3 (creates necessary normalisation ranges)
  • Steps 2 and 3 can run independently after Step 1 is completed
  • Step 1 can be skipped if normalisation ranges (HISTORICAL_MIN_MAX_JSON) already exist from a previous run
  • Visualisations are automatically generated when running Step 2 or all steps

Output Files:

  1. Experiments log (EXPERIMENTS_LOG):

    • Detailed log of all experiments with timing information, parameters used, and metadata
  2. Step 1 outputs:

    • Historical min/max JSON (HISTORICAL_MIN_MAX_JSON): Contains min/max values for each metric in metrics_weights_and_directions, obtained from sampling in step 1
  3. Step 2 outputs:

    • Multi-objective optimisation results:

      • All trials data: Contains parameters tested and resulting metrics

      • Location: TRIALS_DIR/multi/all_trials__average_efficiency__efficiency_standard_deviation__multi.json

      • Best parameters: Contains the "best" parameter set selected from the Pareto front

      • Location: BEST_PARAMS_DIR/multi/best_params__average_efficiency__efficiency_standard_deviation__multi.json
    • Visualisation files:

      • Parameter sensitivity graphs in SENSITIVITY_DIR
      • Pareto front plot in PARETO_FRONTS_DIR
      • Parameter heatmaps in HEATMAPS_DIR
  4. Step 3 outputs:

    • Single-objective optimisation results:

      • All trials data: Contains all trials sorted by objective metric value

      • Location: TRIALS_DIR/single/all_trials__average_efficiency__single.json

      • Best parameters: Contains the best parameter set that maximises Average Efficiency

      • Location: BEST_PARAMS_DIR/single/best_params__average_efficiency__single.json

Example Usage:

# Run all steps with default parameters
python -m ai.main

# OR:
python -m ai.main --run_all

# Run specific steps
python -m ai.main --run_step_1 --run_step_2

# Run with custom parameters
python -m ai.main --run_all --parameter_ranges_json custom_ranges.json --steps 5000 --n_trials 100

Output File Examples

This section provides samples of key output files generated by the optimisation pipeline.

Historical Min/Max File

Generated by: normalization_range_sampler.py directly, or main.py with --run_step_1 or --run_all flags

{
    "Total Orders Processed": {
        "min": 106,
        "max": 486
    },
    "Average Robot Utilization": {
        "min": 0.88,
        "max": 1.0
    },
    "Average Human Utilization": {
        "min": 0.06,
        "max": 0.87
    },
    "Total Collisions Detected": {
        "min": 0,
        "max": 222
    }
}

Best Parameters Files

Generated by: run_single_optimization_experiment.py, or main.py with --run_step_2 or --run_all flags (for main.py: will only do so with objectives: "Average Efficiency" and "Efficiency Standard Deviation")

{
    "best_trial_id": 41,
    "best_trial_params": {
        "robots.num_inbound_robots": 1,
        "robots.num_outbound_robots": 7,
        "humans.num_shift_A": 7,
        "humans.num_shift_B": 5
    },
    "best_trial_metrics_values": {
        "Average Efficiency": 0.7255105631947737,
        "Efficiency Standard Deviation": 0.008934878939733683
    }
}

Note: the single-objective file looks the same, but with just one metric in "best_trial_metrics_values".

All Trials Files

Multi:

Generated by: run_single_optimization_experiment.py, or main.py with --run_step_2 or --run_all flags (for main.py: will only do so with objectives: "Average Efficiency" and "Efficiency Standard Deviation")

[
    {
        "trial_id": 35,
        "params": {
            "robots.num_inbound_robots": 6,
            "robots.num_outbound_robots": 10,
            "humans.num_shift_A": 3,
            "humans.num_shift_B": 6
        },
        "Average Efficiency": 0.7731409479655094,
        "Efficiency Standard Deviation": 0.026364188712305347,
        "pareto": true,
        "best": false
    },
    // ... additional trials ...
]

Single:

Generated by: run_single_optimization_experiment.py with "Average Efficiency" as the only objective, or main.py with --run_step_3 or --run_all flags (for main.py: will only do so with objective: "Average Efficiency")

[
    {
        "trial_id": 36,
        "params": {
            "robots.num_inbound_robots": 3,
            "robots.num_outbound_robots": 8,
            "humans.num_shift_A": 3,
            "humans.num_shift_B": 9
        },
        "Average Efficiency": 0.781779323182832
    },
    // ... additional trials ...
]

Experiment Log

Generated by: main.py

=== Warehouse Optimization Experiment Suite ===
Started at: 2025-04-27 05:25:41


=== System Information ===
Cores to use: 1

Parameter ranges (from input/param_ranges.json):
{
  "robots.num_inbound_robots": [
    1,
    10
  ],
  "robots.num_outbound_robots": [
    1,
    10
  ],
  "humans.num_shift_A": [
    1,
    10
  ],
  "humans.num_shift_B": [
    1,
    10
  ]
}

Metrics weights and directions (from input/metrics_weights_and_directions.json):
{
  "Total Orders Processed": [
    0.4,
    "max"
  ],
  "Average Robot Utilization": [
    0.2,
    "max"
  ],
  "Average Human Utilization": [
    0.2,
    "max"
  ],
  "Total Collisions Detected": [
    0.2,
    "min"
  ]
}

Output paths:
  Experiments log: ai/results/experiment_results.log
  Historical min/max JSON: ai/results/metrics_min_max.json
  Trials directory: ai/results/trials
  Best params directory: ai/results/best_params


=== Step 1: Metric Range Sampling ===
Start time: 2025-04-27 05:25:41
End time: 2025-04-27 05:25:47
Elapsed time: 5 seconds

Parameters used:
  parameter_ranges_json: input/param_ranges.json
  baseline_config: input/config.json
  metrics_weights_and_directions_json: input/metrics_weights_and_directions.json
  steps: 200
  number_of_samples: 5
  metrics_historical_min_max: ai/results/metrics_min_max.json
  seed: 13

=== Step 2: Multi-objective Optimization (Average Efficiency & Efficiency Standard Deviation) ===
Start time: 2025-04-27 05:25:47
End time: 2025-04-27 05:26:09
Elapsed time: 22 seconds

Parameters used:
  parameter_ranges_json: input/param_ranges.json
  baseline_config: input/config.json
  metrics_historical_min_max: ai/results/metrics_min_max.json
  n_trials: 10
  steps: 200
  number_of_runs: 2
  metrics_weights_and_directions_json: input/metrics_weights_and_directions.json
  trials_dir: ai/results/trials
  best_params_dir: ai/results/best_params
  cores: 1
  seed: 13

=== Step 3: Single-objective Optimization (Average Efficiency) ===
Start time: 2025-04-27 05:26:09
End time: 2025-04-27 05:26:31
Elapsed time: 22 seconds

Parameters used:
  parameter_ranges_json: input/param_ranges.json
  baseline_config: input/config.json
  metrics_historical_min_max: ai/results/metrics_min_max.json
  n_trials: 10
  steps: 200
  number_of_runs: 2
  metrics_weights_and_directions_json: input/metrics_weights_and_directions.json
  trials_dir: ai/results/trials
  best_params_dir: ai/results/best_params
  cores: 1
  seed: 13

All experiments completed at: 2025-04-27 05:26:31

Optimisation Dashboard

The optimisation_dashboard.py provides an interactive web-based interface for exploring the results of warehouse optimisation experiments.

Dashboard Features

The dashboard consists of four main views, accessible via navigation buttons:

  1. Multi-Objective View

    • Displays a Pareto front visualisation
    • Provides sortable tables of Pareto-optimal and non-Pareto solutions
    • Allows sorting by Average Efficiency, Efficiency Standard Deviation, or normalised distance from the ideal point
  2. Single-Objective View

    • Shows a table of solutions optimised solely for Average Efficiency
    • Results are presented in descending order of efficiency
  3. Parameter Sensitivity View

    • Shows parameter sensitivity graphs
    • Dropdown selection allows you to navigate between different parameters
  4. Parameter Heatmaps View

    • Displays parameter heatmaps
    • Multiple dropdown selections let you choose different parameter pairs and switch between Average Efficiency and Standard Deviation metrics

Running the Dashboard

To run the dashboard:

  1. Ensure you have run the necessary optimisation experiments:

    python -m ai.main --run_all

    Alternatively, you can run specific steps:

    • For Multi-Objective View, Parameter Sensitivity, and Heatmaps: python -m ai.main --run_step_2
    • For Single-Objective View: python -m ai.main --run_step_3
    • For specific missing visualisations: Run the relevant script from ai/plot_builders
  2. Launch the dashboard:

    python -m solara run ai.optimisation_dashboard

Acknowledgements

Initial development in partnership with Datasparq.
