A modular Python engine for simulating large‑scale human‑robot warehouses, analysing operational Key Performance Indicators (KPIs), and searching for optimal configurations.
```
git clone https://gitlab.doc.ic.ac.uk/g24mai03/waresim
cd waresim
python3.12 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
```

Requires Python 3.12.
WareSim replicates inbound (stock replenishment), storage, and outbound activities inside a warehouse. It models humans, robots, inventory, and orders; records key operational metrics; and offers visualisations. Built with Mesa, the simulation captures realistic warehouse behaviours such as human-robot collaboration, narrow aisle accessibility, stochastic elements like battery variation, and collision tracking.
In addition to simulation, WareSim includes an AI-based optimisation module that uses Bayesian Optimisation, powered by Optuna, to identify high-performing warehouse configurations. It efficiently explores operational parameters and supports both single- and multi-objective optimisation, helping users align simulation outcomes with specific business priorities.
- Agent‑based discrete‑event simulation of humans and robots.
- JSON‑driven customisable warehouse layouts (the example layout has a 32 × 27 grid and 240 Stock Keeping Units (SKUs), i.e. item types).
- Four picking strategies: FIFO, Furthest‑First, Nearest‑Item, Optimal‑Path.
- Visualisation: Live Solara dashboard and Matplotlib plot generation.
- Detailed outputs: KPI log, machine‑readable metrics JSON, per‑timestep CSV, event tracker.
- Intelligent discovery of near-optimal warehouse parameters (robot counts, worker numbers, picking strategies)
- Customisable efficiency score system that combines multiple metrics based on business priorities
- Support for both single-objective (maximising one metric) and multi-objective (two metrics) optimisation (balancing competing goals)
- Comprehensive visualisations to understand parameter relationships and performance impacts
- Interactive dashboard for exploring optimisation results with multiple analysis views
| Layer | Path | Description |
|---|---|---|
| backend | `backend/` | Domain objects, task logic, KPI collection |
| ai | `ai/` | Add-on module for optimising discrete, non-spatial warehouse parameters |
| frontend | `frontend/` | Static plots and Solara dashboard |
```
WARESIM/
├── ai/
├── backend/                               # Core simulation engine and logic
│   ├── human_robot_logic/                 # Robot and human behaviour modelling
│   │   ├── human_robot_agents.py          # Definition of agents and agent-specific logic
│   │   ├── human_robot_task_management.py # Human-robot interaction logic
│   │   ├── path_finding.py                # Robot movement logic
│   │   └── picking_strategies.py
│   ├── metrics/                           # Performance data collection and analysis
│   │   ├── metrics_analysis.py
│   │   └── metrics.py
│   ├── order_management/                  # Order processing workflows
│   │   ├── inbound_order_management.py    # Stock replenishment logic
│   │   └── outbound_order_management.py   # New order generation logic
│   ├── tests/                             # Unit and integration tests
│   ├── warehouse_model/                   # Core simulation framework
│   │   ├── base_simulation.py             # Mesa model implementation
│   │   └── utils.py
│   └── warehouse_set_up/                  # Physical warehouse components used in Solara visualisation
│       ├── inbound_zone_agents.py
│       ├── inventory_agents.py
│       ├── maintenance_zone_agents.py     # For robot battery charging
│       └── outbound_zone_agents.py
├── frontend/                              # Visualisation and UI components
│   ├── live_sim_custom_visual.py          # Agent visualisation styling
│   ├── plots_from_main.py                 # Static plot generation for CLI mode
│   ├── plots.py                           # Interactive plot components for dashboard
│   └── simulation_dashboard.py            # Solara dashboard components
├── input/                                 # JSON configs & parameter ranges
├── output/                                # Artefacts (one sub-folder per run)
├── main.py                                # CLI entry point for headless execution
├── visualisation.py                       # Solara dashboard entry point
└── requirements.txt                       # Project dependencies
```
Note: The `ai` subdirectory should be considered a "standalone" module that uses the simulator. By this, we mean:

- The `ai` subdirectory has its inputs in `input/`.
- But its outputs (`ai/results/` and `ai/visualisations/`), its main (`ai/main.py`), and its "front-end" (`ai/optimisation_dashboard.py`) are inside the `ai/` subdirectory. `main.py`, `output/` and `visualisation.py` are proper to the simulator.
- The reason behind this approach is that the `ai` module should be viewed as an "add-on" to the simulator. The directory is structured in a way that serves this vision.
This file defines all parameters for a specific warehouse simulation run. The main sections include:

- Run Settings: Top-level settings like `timesteps` (simulation duration) or `num_multiple_runs` (only used by the dashboard when you run multiple simulations).
- `warehouse`: Defines the physical grid dimensions (`depth`, `width`) and the coordinates of key operational zones (`inbound_zone`, `drop_off_zone`, `maintenance_zones`).
- `robots`: Configures the robotic fleet:
  - Number of `inbound` and `outbound` robots.
  - Carrying `capacity` for each type.
  - `battery_capacity` and charging threshold (`go_to_charge`).
  - The `picking_strategy` used by outbound robots (see comments in the file for strategy codes, e.g., 1=FIFO, 3=Nearest Item).
- `humans`: Configures the human workforce:
  - Number of workers per shift (`num_shift_A`, `num_shift_B`).
  - `shift_duration` and `rest_zone` location (where workers rest between shifts).
  - Parameters for the stochastic competency model (`allow_competency`; if turned off, humans take 1 timestep per task).
- `orders`: Controls workload generation:
  - Toggles for dynamic `generate_new_orders` and `generate_inbound_items`.
  - Parameters for stochastic outbound order generation (distribution type; mean/std dev/max for size and arrival rate; generation frequency `generate_every`).
  - `stock_replenishment_threshold` to trigger inbound tasks.
- `item_types`: A list defining each SKU (product) by `name` and its relative demand (`order_rate`).
- `aisles`: Defines the detailed static layout:
  - Geometry (`start`, `end` coordinates) and `height_levels` (the number of shelves per bay) for each aisle.
  - Access rules: `robot_access` (boolean; `false` indicates a narrow aisle) and `human_access` (the side of the aisle from which humans must approach: Left or Right).
  - Initial stock allocation (`bay_allocations`): specifies, for each bay, the item type it contains and the quantity on each level at the start of the simulation (each SKU is permanently assigned to a unique shelf).
Note: Aisles, bays and shelves define item storage. An aisle is a collection of bays placed in a line. A bay occupies a single cell in the simulation. Each bay has a certain number of shelves on which items are stored.
Note: The human competency model simulates variable worker performance. When enabled (`allow_competency: true`), each worker is assigned an individual competency level sampled from a normal distribution (`competency_distribution_mean`, `competency_distribution_std`). During task execution, the time taken is then sampled from another normal distribution using that worker's competency level and `competency_std`, clamped between 1 and `max_time` timesteps. This creates realistic variation in worker productivity.
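As an illustration of the two-stage sampling described above, here is a minimal sketch (the function and argument names are ours, not the simulator's API; the real logic lives in `backend/human_robot_logic/`):

```python
import random

def sample_task_time(comp_mean, comp_std_pop, comp_std_task, max_time):
    """Hypothetical sketch of the two-stage competency model described above."""
    # Stage 1 (once per worker): draw a persistent competency level from
    # N(competency_distribution_mean, competency_distribution_std)
    competency = random.gauss(comp_mean, comp_std_pop)
    # Stage 2 (once per task): draw the task time around that worker's
    # competency level using competency_std
    task_time = random.gauss(competency, comp_std_task)
    # Clamp to the valid range of [1, max_time] timesteps
    return max(1, min(max_time, round(task_time)))
```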
Note: The example `input/config.json` contains detailed comments (`"_comment..."`) explaining each parameter. Refer to this file for specifics when creating custom configurations.
Key parameters to experiment with:
- `robots.picking_strategy`: Test different strategies (1=FIFO, 2=Furthest-First, 3=Nearest-Item, 4=Optimal-Path)
- `robots.num_outbound_robots`: Adjust the number of robots used for outbound operations (taking items from the shelves to the drop-off zone for order fulfilment)
- `robots.num_inbound_robots`: Adjust the number of robots used for inbound operations (taking items from the inbound zone to the shelves for stock replenishment)
- `humans.num_shift_A` and `humans.num_shift_B`: Modify staffing levels
- `orders.order_mean_per_step`: Change order volume to test the system under load
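For orientation, the parameters above live in `input/config.json` roughly as follows. This is an illustrative fragment only, not a complete configuration; see the shipped file and its `"_comment..."` entries for the full schema:

```json
{
  "timesteps": 500,
  "robots": {
    "num_inbound_robots": 2,
    "num_outbound_robots": 4,
    "picking_strategy": 3
  },
  "humans": {
    "num_shift_A": 3,
    "num_shift_B": 3
  }
}
```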
This simulation can be run in two ways:
- Developer Mode (`main.py`) – for CLI-based execution, debugging, and backend testing.
- Dashboard Mode (`visualisation.py`) – for live simulation, interactive controls, and performance evaluation plots using a Solara-powered web interface.
All configuration is managed via the `.env` file, which centralises the important settings for both `main.py` and `visualisation.py`. This includes:
- Simulation parameters (e.g. config file path, number of steps)
- Output filenames and directories
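For illustration only, a `.env` might look something like the following; these variable names are hypothetical, so consult the `.env` shipped with the repository for the actual keys:

```
# Hypothetical keys for illustration; see the repository's .env for the real ones
CONFIG_FILE_PATH=input/config.json
NUM_STEPS=500
OUTPUT_DIR=output/run_from_main
```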
Run a single simulation using the CLI tool (primarily for development and debugging):
```
python main.py
```

The simulation will:
- Initialise the warehouse based on the configuration from the `.env` file
- Run the specified number of steps
- Generate metrics and performance logs
- Create visualisation plots automatically
All output files are saved to the `output/run_from_main/` directory, including:
- `key_metrics_log.txt`: Summary statistics
- `metrics_output.json`: Structured metrics data
- `warehouse_data_output.csv`: Timestep-by-timestep data
- `warehouse_simulation_plots_2x2.png`: Automatically generated visualisations
For a more interactive experience with support for multiple runs, launch the Solara-powered dashboard:
```
# Start the web interface for real-time visualisation
python -m solara run visualisation.py
```

The dashboard allows you to:
- Run single or multiple simulations with configurable parameters
- View the warehouse in real-time with colour-coded agents
- Toggle different metrics visualisations
- Compare results across multiple simulation runs
Note: The Solara dashboard is the recommended way to run multiple simulations and compare results. The CLI tool (main.py) only supports single runs and is primarily used for development purposes.
After running a simulation, the tool creates a structured output directory that varies based on your run type:
```
output/
├── run_from_main/                         # Results from CLI-based execution (Developer Mode)
│   ├── key_metrics_log.txt                # Summary performance statistics
│   ├── warehouse_data_output.csv          # Timestep-by-timestep data
│   ├── warehouse_metrics.json             # Structured performance metrics for programmatic use
│   ├── warehouse_simulation_plots_2x2.png # Auto-generated performance visualisations
│   └── warehouse_simulation.log           # Warehouse simulation logs
│
└── run_from_visualisation/                # Results from front-end interface (Dashboard Mode)
    ├── single_run/                        # Results from dashboard single simulation
    │   ├── key_metrics_log.txt
    │   ├── warehouse_data_output.csv
    │   ├── warehouse_metrics.json
    │   └── warehouse_simulation.log
    │
    └── multiple_runs/                     # Results from dashboard multiple runs (e.g. 3 runs)
        ├── key_metrics_log_run1.txt
        ├── key_metrics_log_run2.txt
        ├── key_metrics_log_run3.txt
        ├── warehouse_data_output_run1.csv
        ├── warehouse_data_output_run2.csv
        ├── warehouse_data_output_run3.csv
        ├── warehouse_metrics_run1.json
        ├── warehouse_metrics_run2.json
        └── warehouse_metrics_run3.json
```
Each output directory contains multiple file types that provide different views of the simulation results:
| Output File | Description |
|---|---|
| `key_metrics_log.txt` | Human-readable summary with KPIs like order count, resource utilisation, and robot behaviour |
| `warehouse_metrics.json` | Machine-readable metrics in JSON format for programmatic analysis or visualisation |
| `warehouse_data_output.csv` | Detailed per-timestep data for time-series analysis |
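The machine-readable outputs are easy to pull into an analysis script. A minimal sketch, assuming the CLI-mode paths listed above and that pandas is installed:

```python
import json
import pandas as pd

# Headline KPIs for programmatic use
with open("output/run_from_main/warehouse_metrics.json") as f:
    metrics = json.load(f)

# Per-timestep data for time-series analysis
timeseries = pd.read_csv("output/run_from_main/warehouse_data_output.csv")
print(timeseries.columns.tolist())  # inspect the available columns
```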
Sample output from `key_metrics_log.txt`:

```
WAREHOUSE METRICS:
Total Timesteps in Simulation: 500
Total Orders Processed: 16

Resource Utilisation:
Average robot utilisation: 1.0
Average human utilisation: 0.27

Robot Behaviour Analysis:
Total collisions detected: 2
Total number of stuck instances: 25
```
The Solara dashboard (`visualisation.py`) provides several visualisation options:
- Colour-coded agents with dynamic state indicators
- Outbound robots (blue/red): Blue when available, red when fulfilling tasks
- Inbound robots (purple/yellow): Purple when available, yellow when fulfilling tasks
- Human workers (green/orange/grey): Green when working, orange when idle, grey during rest shifts
- Shelving areas (black) and warehouse zones (white/grey)
Toggle buttons let you view:
- Robot Movement Heatmap: Heat map visualisation of most-visited warehouse cells
- Orders Log: Tracking of inbound and outbound orders over time
- Processed Orders: Number of orders and items processed per timestep
- Worker Utilization: Robot and human capacity utilisation rates
- Stock Levels: Total warehouse inventory trends
- Stuck Robot Analysis: Tracking of robot navigation issues
- Out-of-Stock Metrics: Product availability monitoring
For multiple simulation runs, the dashboard automatically aggregates results with statistical bands showing mean ±1 standard deviation.
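The aggregation is conceptually simple. Here is a sketch of how mean ±1 standard deviation bands can be computed from the per-run CSVs; the file paths follow the `multiple_runs` naming scheme above, but the column name is an assumption (inspect the CSV header for the real ones):

```python
import numpy as np
import pandas as pd

base = "output/run_from_visualisation/multiple_runs"
runs = [pd.read_csv(f"{base}/warehouse_data_output_run{i}.csv") for i in (1, 2, 3)]

# Stack one metric column across runs: shape (n_runs, n_timesteps)
column = "orders_processed"  # assumed column name, for illustration only
stacked = np.stack([r[column].to_numpy() for r in runs])

mean = stacked.mean(axis=0)  # centre line of the band
std = stacked.std(axis=0)    # half-width: the band spans mean - std .. mean + std
```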
| File | Content | Format |
|---|---|---|
| `key_metrics_log.txt` | Human‑readable headline KPIs | Text |
| `metrics_output.json` | Same KPIs in structured form | JSON |
| `warehouse_data_output.csv` | Per‑timestep state snapshot | CSV |
| `warehouse_simulation.log` | Warehouse simulation logs | LOG |
Sample section from `key_metrics_log.txt`:

```
Total Orders Processed: 8
Average robot utilisation: 1.0
Total collisions detected: 1
```
| Layer | Command (run from project root) | Scope |
|---|---|---|
| Quick functional check | `python -m backend.tests.run_tests` | Basic grid maths, picking, and pathfinding sanity cases |
| Full unit + integration | `python -m unittest discover -s backend/tests` | `pathfinding_tests.py`, `test_picking_strategies.py`, `basic_test_cases.py`, `human_robot_interaction_cases.py` |
| Stress load | `RUN_STRESS=1 python -m backend.tests.stress_testing_robots` | 200‑step high‑traffic scenario; writes full metrics to `output/run_from_main/` |
The AI module identifies near-optimal warehouse configurations through Bayesian Optimisation using Optuna, exploring non-spatial discrete parameters (like worker numbers) through repeated simulation runs (to account for stochasticity). The system supports both single-objective and multi-objective (two-objective) optimisation, and can target either raw simulation metrics (like robot utilisation) or derived efficiency-score metrics (average efficiency and efficiency standard deviation). The efficiency score is a customisable metric that combines multiple warehouse simulation metrics into a single value using user-defined weights, allowing optimisation to align precisely with business-specific priorities.
The `ai/` module allows us to define:
- The parameters we would like to optimise in our configuration, along with the ranges we would like to explore.
- The warehouse simulator metrics that interest us, their relative importance (via the efficiency score weights), and whether we would like to maximise or minimise them.
Then, our system uses Bayesian Optimisation via Optuna to give us insights on which parameter sets best serve our operational goals.
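To make the efficiency score concrete, here is a minimal sketch of a weighted, normalised score built from the inputs described below. Function and argument names are ours; the actual implementation is `ai/optimization_pipeline/efficiency_score_metrics.py`:

```python
def efficiency_score(metrics, weights_and_directions, min_max):
    """Sketch: combine raw metrics into one [0, 1] score using weights."""
    score = 0.0
    for name, (weight, direction) in weights_and_directions.items():
        lo, hi = min_max[name]["min"], min_max[name]["max"]
        # Normalise the raw value into [0, 1] using sampled historical ranges
        norm = (metrics[name] - lo) / (hi - lo) if hi > lo else 0.0
        # For "min" metrics (e.g. collisions), lower raw values score higher
        if direction == "min":
            norm = 1.0 - norm
        score += weight * norm
    return score

# e.g. efficiency_score({"Total Orders Processed": 300, ...},
#                       {"Total Orders Processed": [0.4, "max"], ...},
#                       min_max_loaded_from_HISTORICAL_MIN_MAX_JSON)
```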
```
ai/
├── dirs_and_filenames_constants.py          # Centralised constants for file paths and naming conventions
├── experiment_logger.py                     # Logging utilities for timing and recording experiments
├── main.py                                  # Main orchestration script for running optimisation experiments of interest
├── normalization_range_sampler.py           # Samples parameter space to establish min/max metric values for efficiency score normalisation
├── optimisation_dashboard.py                # Interactive web dashboard for exploring optimisation results and visualisations
│
├── optimization_pipeline/                   # Core modules for executing the Bayesian optimisation process
│   ├── best_solution_selector.py            # Selects optimal solutions from both single- and multi-objective optimisations
│   ├── efficiency_score_metrics.py          # Calculates efficiency scores by combining weighted, normalised metrics
│   ├── run_single_optimization_experiment.py # Orchestrates a single optimisation experiment with specified parameters and objectives
│   └── simulation_runner.py                 # Executes warehouse simulations with specified parameters and collects metrics
│
├── plot_builders/
│   ├── plot_parameter_heatmaps.py           # Creates 2D heatmaps showing how parameter combinations affect efficiency metrics
│   ├── plot_parameter_sensitivity_graphs.py # Shows how the best solution for efficiency avg and sd changes with each value of individual parameters
│   ├── plot_pareto_front_efficiency.py      # Plots the Pareto front of the trade-off between Average Efficiency and Efficiency Standard Deviation
│   ├── plot_utils.py                        # Utility functions for loading and processing data for plots
│   └── run_all_visualizations.py            # Executes all visualisation scripts sequentially
│
├── results/                                 # JSON outputs from optimisation experiments
│   ├── best_params/                         # Best parameter configurations identified
│   └── trials/                              # Data for all trials observed during the experiment
│
├── tests/                                   # Test suites for module validation
│
├── utils/                                   # Common utility functions and helpers
│
└── visualisations/                          # Generated visualisation outputs
```
- Each file has a detailed introductory docstring that explains what the file does, its inputs and outputs, CLI arguments, dependencies, etc.
The AI module requires three main input JSON files:
This file defines the parameters to be optimised and their exploration ranges (inclusive):
```json
{
  "robots.num_inbound_robots": [1, 10],
  "robots.num_outbound_robots": [1, 10],
  "humans.num_shift_A": [1, 10],
  "humans.num_shift_B": [1, 10],
  "robots.picking_strategy": [2, 4]
}
```

Requirements and Usage:
- Parameters must be discrete and non-spatial (aisles, bays, width, depth are not recommended)
- Ranges must be specified as integers [min, max]
- To optimise for an additional parameter: simply add it with its range to this dictionary
- To remove a parameter from the optimisation pipeline: remove it from this dictionary
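Under the hood, ranges like these translate naturally into Optuna's integer suggestions (both ends inclusive). A sketch under that assumption, with `evaluate` as a toy stand-in for the real simulation runner:

```python
import json
import optuna

with open("input/param_ranges.json") as f:
    param_ranges = json.load(f)

def evaluate(params):
    # Toy stand-in: the real pipeline would run the warehouse simulator with
    # `params` overriding the baseline config and return a score to maximise.
    return sum(params.values())

def objective(trial):
    # suggest_int samples inclusively from [low, high], matching the JSON ranges
    params = {name: trial.suggest_int(name, low, high)
              for name, (low, high) in param_ranges.items()}
    return evaluate(params)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```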
This file defines which metrics to optimise, their relative importance, and optimisation direction:
```json
{
  "Total Orders Processed": [0.4, "max"],
  "Average Robot Utilization": [0.2, "max"],
  "Average Human Utilization": [0.2, "max"],
  "Total Collisions Detected": [0.2, "min"]
}
```

Requirements and Usage:
- The sum of all weights must be approximately 1.0
- Each metric needs a weight (a float between 0 and 1) and a direction (`"max"` or `"min"`)
- To consider an additional metric: add it to this dictionary with weight and direction
- To remove a metric from consideration: remove it from this dictionary

Note: Any metrics listed in this file that don't appear in the warehouse simulator's output will be ignored.
This file provides the baseline warehouse configuration that serves as input to the warehouse simulator. The AI module pipeline overrides specific parameters (from the parameter ranges JSON) within this baseline configuration to evaluate different parameter combinations. The new values will be within the ranges specified in the parameter ranges JSON.
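Conceptually, the override step writes each dotted parameter name from the ranges file into the nested baseline configuration. A minimal sketch (the helper name is ours, not the module's API):

```python
import copy

def apply_overrides(baseline_config, params):
    """Sketch: apply {'robots.num_outbound_robots': 7, ...} to a nested config."""
    cfg = copy.deepcopy(baseline_config)  # never mutate the baseline itself
    for dotted_name, value in params.items():
        *path, leaf = dotted_name.split(".")
        node = cfg
        for key in path:                  # walk down to the enclosing section
            node = node[key]
        node[leaf] = value
    return cfg

# e.g. apply_overrides(baseline, {"robots.num_outbound_robots": 7})
```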
Our optimisation pipeline has several runnables, each providing an essential element of warehouse optimisation. They can all be run through `main.py` or independently:
- Normalization Range Sampler (`normalization_range_sampler.py`):
  - Pre-optimisation step that samples the parameter space to establish metric ranges for the metrics of interest
  - Determines the min/max values essential for efficiency score normalisation
  - Important: required prerequisite before running any optimisation with Average Efficiency and/or Efficiency Standard Deviation as metric objective(s)

- Single Optimization Experiment Runner (`run_single_optimization_experiment.py`):
  - Orchestrates single- or multi-objective optimisation experiments to find optimal warehouse parameters
  - Supports exactly one metric (single-objective) or two metrics (multi-objective)
  - Objectives can be any metric in the Metrics Weights and Directions JSON (input 2), or Average Efficiency, or Efficiency Standard Deviation
  - Prerequisite: requires the Normalization Range Sampler's output (`HISTORICAL_MIN_MAX_JSON`) when using Average Efficiency and/or Efficiency Standard Deviation as objective(s)

- Visualisation Tools (all analyse results from previously run multi-objective optimisation trials with Average Efficiency and Efficiency Standard Deviation as objectives):
  - `plot_parameter_sensitivity_graphs.py`: Shows the "best" solution (balancing Average Efficiency and Efficiency Standard Deviation) for each possible value of each parameter
  - `plot_pareto_front_efficiency.py`: Visualises the trade-off between Average Efficiency and Efficiency Standard Deviation by showing all trials (both Pareto and non-Pareto), the Pareto front, and the best solution
  - `plot_parameter_heatmaps.py`: Creates 2D heatmaps showing how different parameter combinations affect Average Efficiency and Efficiency Standard Deviation
  - `run_all_visualizations.py`: Executes all three visualisation scripts

- Main Orchestration Script (`main.py`):
  - Integrates all components into a complete workflow with three key steps:
    - Step 1: Metric normalisation sampling (pre-optimisation process), using `normalization_range_sampler.py`
    - Step 2: Multi-objective optimisation with Average Efficiency and Efficiency Standard Deviation as objectives, using `run_single_optimization_experiment.py`
    - Step 3: Single-objective optimisation with Average Efficiency as objective, using `run_single_optimization_experiment.py`
  - Can run any combination of the steps above via the CLI
  - Automatically generates visualisations when running Step 2 or all steps
  - Note: if you need to optimise using metrics other than the efficiency-related ones as objectives, use `run_single_optimization_experiment.py` directly
Here are more details on the different executable files. Note that all CLI arguments (shown as `[CLI]`) are optional; if omitted, they fall back to default values, which are easily modifiable (consult each file's intro docstring for more information).
This prerequisite step samples the parameter space to determine min/max metric ranges needed for proper normalisation during efficiency score calculations.
```
python -m ai.normalization_range_sampler [CLI]
```

CLI:

- `--parameter_ranges_json`: JSON defining parameters and their value ranges to explore (input 1)
- `--baseline_config`: Baseline warehouse configuration to be optimised (input 3)
- `--metrics_weights_and_directions_json`: Metrics to sample with their weights and directions (input 2)
- `--steps`: Number of timesteps per simulation (higher = slower)
- `--number_of_samples`: Number of parameter combinations to sample
- `--seed`: Random seed for reproducible results
Output:
- `HISTORICAL_MIN_MAX_JSON`: Contains min/max values for each metric used in normalisation
Note: Must be run before any optimisation experiments that use Average Efficiency or Efficiency Standard Deviation as objectives.
This script orchestrates complete optimisation experiments, finding optimal parameter combinations based on specified objectives.
```
python -m ai.optimization_pipeline.run_single_optimization_experiment [CLI]
```

CLI:

- `--parameter_ranges_json`: JSON defining parameters and their value ranges to explore (input 1)
- `--metrics_weights_and_directions_json`: Metrics with their weights and optimisation directions (input 2)
- `--baseline_config`: Baseline warehouse configuration to be optimised (input 3)
- `--n_trials`: Number of optimisation trials to run (higher = more thorough exploration but longer runtime)
- `--steps`: Number of timesteps per simulation (higher = slower)
- `--number_of_runs`: Number of simulation reruns for each parameter set (to account for stochasticity)
- `--objectives`: Metrics to optimise (one or two). Note: quote each objective; if passing two objectives, separate them with a space
- `--cores`: Number of CPU cores for parallel processing. Note: defaults to 1; the simulator is currently not multithread-friendly, so adding more cores will not make the experiment run faster
- `--seed`: Random seed for reproducible results
Optimisation Types:
- Single-objective optimisation:
  - Maximises or minimises a single objective metric
- Multi-objective optimisation:
  - Limited to exactly two objectives
  - Outputs Pareto-optimal solutions
  - Default setup optimises "Average Efficiency" (maximise) and "Efficiency Standard Deviation" (minimise)
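For orientation, this single/multi-objective split maps directly onto Optuna's study API. A toy sketch (the surrogate objective below is made up purely for illustration and does not run the simulator):

```python
import optuna

def objective(trial):
    n_out = trial.suggest_int("robots.num_outbound_robots", 1, 10)
    n_a = trial.suggest_int("humans.num_shift_A", 1, 10)
    # Toy surrogates standing in for repeated simulation runs:
    avg_eff = 0.5 + 0.04 * n_out - 0.002 * n_out ** 2 + 0.01 * n_a
    eff_std = 0.05 / n_a
    return avg_eff, eff_std

# Multi-objective: maximise Average Efficiency, minimise Efficiency Standard Deviation
study = optuna.create_study(directions=["maximize", "minimize"])
study.optimize(objective, n_trials=20)
pareto_trials = study.best_trials  # only Pareto-optimal trials are returned
```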
Output Files:
- All trials data (`TRIALS_DIR/[optimization_type]/all_trials__[metric_name(s)]__[type].json`):
  - Contains data for all parameter combinations tried
  - Includes trial ID, parameters, and metric values
  - For multi-objective, flags Pareto-optimal and "best" solutions
- Best parameters (`BEST_PARAMS_DIR/[optimization_type]/best_params__[metric_name(s)]__[type].json`):
  - Contains the optimal parameter set found
  - Includes trial ID, parameters, and metric values
Example Usage:
```
# For multi-objective optimisation
python -m ai.optimization_pipeline.run_single_optimization_experiment \
    --parameter_ranges_json input/param_ranges.json \
    --metrics_weights_and_directions_json input/metrics_weights_and_directions.json \
    --baseline_config input/config.json \
    --n_trials 20 \
    --steps 200 \
    --number_of_runs 3 \
    --objectives "Average Efficiency" "Efficiency Standard Deviation" \
    --cores 1 \
    --seed 13

# For single-objective optimisation
python -m ai.optimization_pipeline.run_single_optimization_experiment \
    --parameter_ranges_json input/param_ranges.json \
    --metrics_weights_and_directions_json input/metrics_weights_and_directions.json \
    --baseline_config input/config.json \
    --n_trials 20 \
    --steps 200 \
    --number_of_runs 3 \
    --objectives "Average Efficiency" \
    --cores 1 \
    --seed 13
```

IMPORTANT NOTE: All visualisation tools require that you have previously run a multi-objective optimisation with "Average Efficiency" and "Efficiency Standard Deviation" as objectives. The all_trials file needs to be in `TRIALS_DIR/multi` (which is where our optimisation pipeline places it).
If this hasn't been done yet, you can run:
- `ai/main.py --run_step_2`
- or `ai/main.py --run_step_1 --run_step_2` if the pre-optimisation normalisation step wasn't done
- OR `ai/optimization_pipeline/run_single_optimization_experiment.py` with `--objectives "Average Efficiency" "Efficiency Standard Deviation"` (but make sure the pre-optimisation step was done first)
This will generate the required all_trials file at the correct location.
```
python -m ai.plot_builders.plot_parameter_sensitivity_graphs [CLI]
```

CLI:

- `--parameter_ranges_json`: Path to JSON file with parameter ranges to explore (input 1)
What the Sensitivity Graphs Show:
- Each graph shows one parameter, with parameter values on the x-axis and Average Efficiency on the y-axis
- For each possible parameter value, it plots the single best solution from all trials with that parameter value
- "Best" solution means the trial with minimum normalised Euclidean distance from the ideal point (explained in the code base)
- Error bars represent the Efficiency Standard Deviation value of that best solution
- Green circles indicate Pareto-optimal solutions, blue squares indicate non-Pareto solutions
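A sketch of one way to read "minimum normalised Euclidean distance from the ideal point"; the exact normalisation is defined in `ai/optimization_pipeline/best_solution_selector.py`, and this toy version makes its own assumptions:

```python
import math

def closest_to_ideal(trials):
    """Toy selector: after scaling both objectives to [0, 1], the ideal point
    is (1, 0), i.e. maximal Average Efficiency with zero standard deviation."""
    avgs = [t["Average Efficiency"] for t in trials]
    stds = [t["Efficiency Standard Deviation"] for t in trials]

    def scaled(value, lo, hi):
        return (value - lo) / (hi - lo) if hi > lo else 0.0

    def distance(t):
        a = scaled(t["Average Efficiency"], min(avgs), max(avgs))
        s = scaled(t["Efficiency Standard Deviation"], min(stds), max(stds))
        return math.hypot(1.0 - a, s)  # Euclidean distance to (1, 0)

    return min(trials, key=distance)
```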
Output Files:
- One PNG graph per parameter, saved to the `SENSITIVITY_DIR` directory
- Named: `sensitivity_[param_name].png`
```
python -m ai.plot_builders.plot_pareto_front_efficiency
```

What the Pareto Front Plot Shows:
- Each point represents a simulation trial with specific parameter values
- X-axis shows Average Efficiency (higher is better)
- Y-axis shows Efficiency Standard Deviation (lower is better)
- Red points represent the Pareto-optimal solutions
- Blue points represent non-Pareto solutions
- The green-bordered point shows the "best" solution
- Numbers shown on Pareto points represent the trial ID (matching all_trials and best_params json files)
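For reference, Pareto optimality on this plot can be characterised with a simple dominance test. A sketch (the key names match the all_trials JSON shown later in this README):

```python
def dominates(a, b):
    """True when trial `a` is at least as good as `b` on both objectives and
    strictly better on at least one (higher avg, lower std is better)."""
    no_worse = (a["Average Efficiency"] >= b["Average Efficiency"]
                and a["Efficiency Standard Deviation"] <= b["Efficiency Standard Deviation"])
    strictly_better = (a["Average Efficiency"] > b["Average Efficiency"]
                       or a["Efficiency Standard Deviation"] < b["Efficiency Standard Deviation"])
    return no_worse and strictly_better

def pareto_front(trials):
    # A trial is Pareto-optimal if no other trial dominates it
    return [t for t in trials if not any(dominates(o, t) for o in trials if o is not t)]
```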
Output File:
- A PNG visualisation of the Pareto front, saved to `PARETO_FRONTS_DIR`
```
python -m ai.plot_builders.plot_parameter_heatmaps [CLI]
```

CLI:

- `--parameter_ranges_json`: Path to JSON file with parameter ranges to explore (input 1)
What the Heatmaps Show:
- Each heatmap displays the relationship between two warehouse parameters
- Colours represent either Average Efficiency (higher is better) or Efficiency Standard Deviation (lower is better)
- Black cells indicate parameter combinations that weren't explored during optimisation
- The percentage of explored combinations is displayed at the bottom of each heatmap
Output Files:
- Two PNG heatmaps per parameter pair, saved to the `HEATMAPS_DIR` directory:
  - `heatmap_[param1]_vs_[param2]_avg.png`: Average efficiency
  - `heatmap_[param1]_vs_[param2]_std.png`: Efficiency standard deviation
This script executes all three visualisation scripts for warehouse optimisation analysis in sequence.
```
python -m ai.plot_builders.run_all_visualizations [CLI]
```

CLI:

- `--parameter_ranges_json`: Path to JSON file with parameter ranges to explore (input 1)
What This Script Does:
- Runs Parameter Sensitivity Graphs
- Runs Pareto Front Efficiency Plot
- Runs Parameter Heatmaps
This script serves as the main entry point for the warehouse optimisation AI module, orchestrating and timing the entire optimisation process.
```
python -m ai.main [CLI]
```

CLI:

- `--parameter_ranges_json`: JSON defining parameters and their value ranges to explore (input 1)
- `--baseline_config`: Baseline warehouse configuration to be optimised (input 3)
- `--metrics_weights_and_directions_json`: Metrics with their weights and optimisation directions (input 2)
- `--steps`: Number of timesteps per simulation (higher = slower)
- `--seed`: Random seed for reproducible results
- `--number_of_samples`: Number of parameter combinations to sample in Step 1 (higher = more accurate but slower)
- `--n_trials`: Number of optimisation trials for Steps 2-3 (higher = more thorough exploration)
- `--number_of_runs`: Number of simulation reruns per parameter set (to account for stochasticity)
- `--cores`: Number of CPU cores for parallel processing (reminder: not useful in our case)
- `--run_step_1`: Run Step 1: sample metric ranges for normalisation
- `--run_step_2`: Run Step 2: multi-objective optimisation with Average Efficiency and Efficiency Standard Deviation
- `--run_step_3`: Run Step 3: single-objective optimisation with Average Efficiency
- `--run_all`: Run all steps (default if no steps are specified)
Workflow Dependencies:
- Step 1 is a prerequisite for Steps 2 and 3 (creates necessary normalisation ranges)
- Steps 2 and 3 can run independently after Step 1 is completed
- Step 1 can be skipped if normalisation ranges (`HISTORICAL_MIN_MAX_JSON`) already exist from a previous run
- Visualisations are automatically generated when running Step 2 or all steps
Output Files:
- Experiments log (`EXPERIMENTS_LOG`):
  - Detailed log of all experiments with timing information, parameters used, and metadata

- Step 1 outputs:
  - Historical min/max JSON (`HISTORICAL_MIN_MAX_JSON`): Contains min/max values for each metric in metrics_weights_and_directions, obtained from sampling in Step 1

- Step 2 outputs:
  - Multi-objective optimisation results:
    - All trials data: Contains parameters tested and resulting metrics
      - Location: `TRIALS_DIR/multi/all_trials__average_efficiency__efficiency_standard_deviation__multi.json`
    - Best parameters: Contains the "best" parameter set selected from the Pareto front
      - Location: `BEST_PARAMS_DIR/multi/best_params__average_efficiency__efficiency_standard_deviation__multi.json`
  - Visualisation files:
    - Parameter sensitivity graphs in `SENSITIVITY_DIR`
    - Pareto front plot in `PARETO_FRONTS_DIR`
    - Parameter heatmaps in `HEATMAPS_DIR`

- Step 3 outputs:
  - Single-objective optimisation results:
    - All trials data: Contains all trials sorted by objective metric value
      - Location: `TRIALS_DIR/single/all_trials__average_efficiency__single.json`
    - Best parameters: Contains the best parameter set that maximises Average Efficiency
      - Location: `BEST_PARAMS_DIR/single/best_params__average_efficiency__single.json`
Example Usage:
```
# Run all steps with default parameters
python -m ai.main
# OR:
python -m ai.main --run_all

# Run specific steps
python -m ai.main --run_step_1 --run_step_2

# Run with custom parameters
python -m ai.main --run_all --parameter_ranges_json custom_ranges.json --steps 5000 --n_trials 100
```

This section provides samples of key output files generated by the optimisation pipeline.
Generated by: `normalization_range_sampler.py` directly, or `main.py` with `--run_step_1` or `--run_all` flags

```json
{
  "Total Orders Processed": {
    "min": 106,
    "max": 486
  },
  "Average Robot Utilization": {
    "min": 0.88,
    "max": 1.0
  },
  "Average Human Utilization": {
    "min": 0.06,
    "max": 0.87
  },
  "Total Collisions Detected": {
    "min": 0,
    "max": 222
  }
}
```

Generated by: `run_single_optimization_experiment.py`, or `main.py` with `--run_step_2` or `--run_all` flags (for `main.py`: will only do so with objectives "Average Efficiency" and "Efficiency Standard Deviation")
```json
{
  "best_trial_id": 41,
  "best_trial_params": {
    "robots.num_inbound_robots": 1,
    "robots.num_outbound_robots": 7,
    "humans.num_shift_A": 7,
    "humans.num_shift_B": 5
  },
  "best_trial_metrics_values": {
    "Average Efficiency": 0.7255105631947737,
    "Efficiency Standard Deviation": 0.008934878939733683
  }
}
```

Note: the single-objective equivalent would look the same, but with just one metric in `best_trial_metrics_values`.
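Once written, these files are plain JSON and easy to consume downstream. A small sketch using the Step 2 output path listed earlier:

```python
import json

path = ("ai/results/best_params/multi/"
        "best_params__average_efficiency__efficiency_standard_deviation__multi.json")
with open(path) as f:
    best = json.load(f)

# The dotted parameter names can be fed straight back into the baseline config
print(best["best_trial_id"], best["best_trial_params"])
```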
Generated by: `run_single_optimization_experiment.py`, or `main.py` with `--run_step_2` or `--run_all` flags (for `main.py`: will only do so with objectives "Average Efficiency" and "Efficiency Standard Deviation")
```json
[
  {
    "trial_id": 35,
    "params": {
      "robots.num_inbound_robots": 6,
      "robots.num_outbound_robots": 10,
      "humans.num_shift_A": 3,
      "humans.num_shift_B": 6
    },
    "Average Efficiency": 0.7731409479655094,
    "Efficiency Standard Deviation": 0.026364188712305347,
    "pareto": true,
    "best": false
  },
  // ... additional trials ...
]
```

Generated by: `run_single_optimization_experiment.py` with "Average Efficiency" as the only objective, or `main.py` with `--run_step_3` or `--run_all` flags (for `main.py`: will only do so with objective "Average Efficiency")
```json
[
  {
    "trial_id": 36,
    "params": {
      "robots.num_inbound_robots": 3,
      "robots.num_outbound_robots": 8,
      "humans.num_shift_A": 3,
      "humans.num_shift_B": 9
    },
    "Average Efficiency": 0.781779323182832
  },
  // ... additional trials ...
]
```

Generated by: `main.py`
```
=== Warehouse Optimization Experiment Suite ===
Started at: 2025-04-27 05:25:41
=== System Information ===
Cores to use: 1
Parameter ranges (from input/param_ranges.json):
{
"robots.num_inbound_robots": [
1,
10
],
"robots.num_outbound_robots": [
1,
10
],
"humans.num_shift_A": [
1,
10
],
"humans.num_shift_B": [
1,
10
]
}
Metrics weights and directions (from input/metrics_weights_and_directions.json):
{
"Total Orders Processed": [
0.4,
"max"
],
"Average Robot Utilization": [
0.2,
"max"
],
"Average Human Utilization": [
0.2,
"max"
],
"Total Collisions Detected": [
0.2,
"min"
]
}
Output paths:
Experiments log: ai/results/experiment_results.log
Historical min/max JSON: ai/results/metrics_min_max.json
Trials directory: ai/results/trials
Best params directory: ai/results/best_params
=== Step 1: Metric Range Sampling ===
Start time: 2025-04-27 05:25:41
End time: 2025-04-27 05:25:47
Elapsed time: 5 seconds
Parameters used:
parameter_ranges_json: input/param_ranges.json
baseline_config: input/config.json
metrics_weights_and_directions_json: input/metrics_weights_and_directions.json
steps: 200
number_of_samples: 5
metrics_historical_min_max: ai/results/metrics_min_max.json
seed: 13
=== Step 2: Multi-objective Optimization (Average Efficiency & Efficiency Standard Deviation) ===
Start time: 2025-04-27 05:25:47
End time: 2025-04-27 05:26:09
Elapsed time: 22 seconds
Parameters used:
parameter_ranges_json: input/param_ranges.json
baseline_config: input/config.json
metrics_historical_min_max: ai/results/metrics_min_max.json
n_trials: 10
steps: 200
number_of_runs: 2
metrics_weights_and_directions_json: input/metrics_weights_and_directions.json
trials_dir: ai/results/trials
best_params_dir: ai/results/best_params
cores: 1
seed: 13
=== Step 3: Single-objective Optimization (Average Efficiency) ===
Start time: 2025-04-27 05:26:09
End time: 2025-04-27 05:26:31
Elapsed time: 22 seconds
Parameters used:
parameter_ranges_json: input/param_ranges.json
baseline_config: input/config.json
metrics_historical_min_max: ai/results/metrics_min_max.json
n_trials: 10
steps: 200
number_of_runs: 2
metrics_weights_and_directions_json: input/metrics_weights_and_directions.json
trials_dir: ai/results/trials
best_params_dir: ai/results/best_params
cores: 1
seed: 13
All experiments completed at: 2025-04-27 05:26:31
```
`optimisation_dashboard.py` provides an interactive web-based interface for exploring the results of warehouse optimisation experiments.
The dashboard consists of four main views, accessible via navigation buttons:
- Multi-Objective View
  - Displays a Pareto front visualisation
  - Provides sortable tables of Pareto-optimal and non-Pareto solutions
  - Allows sorting by Average Efficiency, Efficiency Standard Deviation, or normalised distance from the ideal point
- Single-Objective View
  - Shows a table of solutions optimised solely for Average Efficiency
  - Results are presented in descending order of efficiency
- Parameter Sensitivity View
  - Shows parameter sensitivity graphs
  - A dropdown lets you navigate between different parameters
- Parameter Heatmaps View
  - Displays parameter heatmaps
  - Dropdowns let you choose different parameter pairs and switch between Average Efficiency and Efficiency Standard Deviation metrics
To run the dashboard:

1. Ensure you have run the necessary optimisation experiments:

   ```
   python -m ai.main --run_all
   ```

   Alternatively, you can run specific steps:
   - For the Multi-Objective, Parameter Sensitivity, and Heatmaps views: `python -m ai.main --run_step_2`
   - For the Single-Objective View: `python -m ai.main --run_step_3`
   - For specific missing visualisations: run the relevant script from `ai/plot_builders`

2. Launch the dashboard:

   ```
   python -m solara run ai.optimisation_dashboard
   ```
Initial development in partnership with Datasparq.