An Arduino-based autonomous maze-solving robot that uses a neural network for intelligent navigation, combining multi-sensor data with machine learning to traverse complex environments.
This project implements an autonomous robot capable of navigating mazes using:
- Neural Network Control: ML-based decision making for motor control
- Multi-Sensor Fusion: Front, left, and right proximity sensors
- Encoder-based Navigation: Precise positioning and turning
- Real-time Learning: Data collection for continuous improvement
- OLED Display: Live status and debugging information
- Microcontroller: Teensy (or compatible Arduino board)
- Motor Driver: TB6612FNG or similar
- Motors: DC motors with encoders
- Sensors: 3x analog proximity sensors (front, left, right)
- ADC: 12-bit resolution (0-4095)
- Display: Adafruit SH110X OLED (128x64)
- Storage: SD card for data logging
- Power: Appropriate battery pack for motors and logic
- `bot_ultimate.ino` - Main robot firmware with neural network integration
- `bot_maze_nn_hybrid.ino` - Hybrid control (NN + rule-based)
- `bot_maze_solver.ino` - Pure maze-solving algorithm
- `bot_v*.ino` - Various development versions
- `data_collector_v*.ino` - Data collection firmware
- `calibrate_turn.ino` - Encoder calibration utility
- `weights.h` - Neural network weights (auto-generated)
- `train_comprehensive.py` - Main training script using all available data
- `train_final.py` - Optimized training pipeline
- `train_ultimate.py` - Advanced training with safety rules
- `train_nn.py` - Basic neural network trainer
- `train_bot.py` - Original training script
- `data/` - Training data collected from robot runs
- `run_*.csv` - Individual run data
- `train*.csv` - Labeled training datasets
- `processed_train_data.csv` - Preprocessed data
- `run_*/` - Organized run sessions
- Wire all components according to pin definitions in firmware
- Calibrate sensors and encoder ticks (see Configuration section)
- Upload data collection firmware to gather training data
# Upload data_collector_v3.ino to your robot
# Run the robot in your maze
# Copy CSV files from SD card to data/ folder

# Install dependencies
pip install -r requirements.txt
# Train the model
python train_comprehensive.py
# This will generate the weights.h file

# Upload bot_ultimate.ino with the new weights.h
# The robot will now use the trained neural network

const int16_t FRONT_EMERGENCY = 2700; // Hard stop
const int16_t FRONT_CLOSE = 2200; // Start slowing
const int16_t SIDE_CRASH = 3200; // Emergency avoidance
const int16_t SIDE_DANGER = 2900; // Strong correction
const int16_t SIDE_WALL = 2000; // Wall detected

const int16_t MAX_SPEED = 160;
const int16_t CRUISE_SPEED = 130;
const int16_t TURN_SPEED = 120;
const long TICKS_90 = 455; // Adjust for your robot

HIDDEN_LAYERS = (32, 16)  # Two hidden layers
ACTIVATION = 'tanh'
MAX_ITERATIONS = 5000
LEARNING_RATE = 0.001
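The hyperparameters above map directly onto scikit-learn's `MLPRegressor`. A minimal sketch of how they might be wired up (the variable names and synthetic data are illustrative, not taken from the training scripts):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

np.random.seed(0)

# Hyperparameters from the configuration above
model = MLPRegressor(
    hidden_layer_sizes=(32, 16),   # two hidden layers
    activation='tanh',
    max_iter=5000,
    learning_rate_init=0.001,
)

# Shapes match the robot's I/O: 3 normalized sensors in, 2 PWM values out
X = np.random.rand(100, 3)         # placeholder sensor readings in 0..1
y = np.random.rand(100, 2) * 255   # placeholder left/right PWM targets
model.fit(X, y)

pwm = model.predict(X[:1])         # one inference: [[pwmL, pwmR]]
```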
- Data Collection: Robot runs in manual/semi-auto mode, logging:
  - Sensor readings (front, left, right)
  - Motor PWM values
  - Encoder positions
  - Timestamps
- Data Processing:
  - Normalize sensor values (0-4095 → 0-1)
  - Filter bad samples (crashes, stuck states)
  - Inject safety rules for wall avoidance
  - Data augmentation for better generalization
- Training:
  - Input: 3 normalized sensor values
  - Output: 2 PWM values (left motor, right motor)
  - Architecture: Multi-layer perceptron with tanh activation
  - Export: Quantized weights in C header format
- Inference:
  - Real-time prediction on Teensy (< 1ms per inference)
  - Safety overrides prevent crashes
  - Fallback to rule-based control if needed
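The "quantized weights in C header format" export step could look like the sketch below. The function name, fixed-point scale, and array naming are assumptions, not the actual output format of `train_comprehensive.py`:

```python
import numpy as np

def export_weights_header(coefs, intercepts, scale=1024, path="weights.h"):
    """Quantize float weights to int16 fixed-point and emit a C header.

    coefs/intercepts follow scikit-learn's MLP attributes (coefs_, intercepts_).
    Values are multiplied by `scale` and clipped to the int16 range, so the
    firmware can do integer-only inference and divide by `scale` afterwards.
    """
    lines = ["// Auto-generated by the training script", "#include <stdint.h>", ""]
    for i, (W, b) in enumerate(zip(coefs, intercepts)):
        qW = np.clip(np.round(W * scale), -32768, 32767).astype(np.int16)
        qb = np.clip(np.round(b * scale), -32768, 32767).astype(np.int16)
        lines.append(f"const int16_t W{i}[{qW.size}] = {{{', '.join(map(str, qW.ravel()))}}};")
        lines.append(f"const int16_t B{i}[{qb.size}] = {{{', '.join(map(str, qb.ravel()))}}};")
    with open(path, "w") as f:
        f.write("\n".join(lines))
```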
Sensors → Neural Network → Motor PWM
                ↓
Safety Layer → Hard limits → Final Output
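The safety layer can be thought of as a hard clamp applied after inference, using the firmware thresholds listed earlier. The decision logic below is an illustrative sketch, not the firmware's actual control flow:

```python
# Thresholds copied from the firmware constants above
FRONT_EMERGENCY = 2700  # hard stop
SIDE_CRASH = 3200       # emergency avoidance
MAX_SPEED = 160

def safety_override(front, left, right, pwm_l, pwm_r):
    """Apply hard limits after NN inference; return the final (pwmL, pwmR)."""
    if front >= FRONT_EMERGENCY:
        return 0, 0                  # hard stop before hitting the wall
    if left >= SIDE_CRASH:
        return MAX_SPEED, 0          # pivot away from the left wall
    if right >= SIDE_CRASH:
        return 0, MAX_SPEED          # pivot away from the right wall
    # Otherwise pass the NN output through, clamped to the speed limit
    clamp = lambda v: max(-MAX_SPEED, min(MAX_SPEED, v))
    return clamp(pwm_l), clamp(pwm_r)
```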
CSV files contain:
timestamp,sensorF,sensorL,sensorR,pwmL,pwmR,encL,encR
1234,2000,1500,1600,150,150,100,98
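A run file in this format can be loaded and normalized as sketched below (a pandas example; the helper name is illustrative, and the column handling assumes the header row above):

```python
import pandas as pd

def load_run(path):
    """Load one run CSV and normalize the 12-bit sensor columns to 0..1."""
    # columns: timestamp,sensorF,sensorL,sensorR,pwmL,pwmR,encL,encR
    df = pd.read_csv(path)
    for col in ("sensorF", "sensorL", "sensorR"):
        df[col] = df[col] / 4095.0   # 12-bit ADC range -> 0..1
    return df
```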
- Modify firmware in *.ino files
- Collect new training data
- Retrain neural network
- Test incrementally
- OLED displays real-time sensor and motor values
- SD card logging for post-analysis
- Serial monitor for detailed debugging
# Upload calibrate_turn.ino
# Measure encoder ticks for 90° turn
# Update TICKS_90 in firmware

- Response Time: < 1ms neural network inference
- Turn Accuracy: ±2° with encoder feedback
- Wall Detection: Reliable at 5-30cm range
- Success Rate: Depends on training data quality
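Before fine-tuning `TICKS_90` with `calibrate_turn.ino`, a starting estimate can come from drivetrain geometry. All parameters below (track width, wheel diameter, ticks per revolution) are placeholders for your robot, not measured values:

```python
import math

def ticks_for_turn(angle_deg, track_width_mm, wheel_diameter_mm, ticks_per_rev):
    """Encoder ticks each wheel travels for an in-place turn of angle_deg.

    For a pivot about the robot's center, each wheel traces an arc of
    radius track_width/2, so arc length = pi * track_width * angle/360.
    """
    arc = math.pi * track_width_mm * (angle_deg / 360.0)
    wheel_circumference = math.pi * wheel_diameter_mm
    return round(arc / wheel_circumference * ticks_per_rev)

# Example with placeholder geometry:
ticks = ticks_for_turn(90, track_width_mm=120, wheel_diameter_mm=42,
                       ticks_per_rev=360)
```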
Contributions are welcome! Areas for improvement:
- Advanced path planning algorithms
- Multi-maze generalization
- Real-time learning/adaptation
- Vision-based navigation
- ROS integration
MIT License - See LICENSE file for details
- Arduino/Teensy community
- scikit-learn for ML tools
- Adafruit libraries
For questions or collaboration, open an issue on GitHub.
Status: Active Development 🚧
Last updated: February 2026